Throughout recorded history, attempts at producing a state of general anesthesia can be traced back to the writings of the ancient Sumerians, Babylonians, Assyrians, Akkadians, Egyptians, Persians, Indians, and Chinese.[1][2][3][4][5][6][7][8][9][10][11][12]
Despite significant advances in anatomy and surgical techniques during the Renaissance, surgery remained a last-resort treatment, largely because of the pain associated with it.[13][14] This limited surgical procedures to addressing only life-threatening conditions, with techniques focused on speed to limit blood loss. All of these interventions carried a high risk of complications, especially death. Around 80% of surgeries led to severe infections, and 50% of patients died either during surgery or from complications thereafter.[15] Many of the patients who were fortunate enough to survive remained psychologically traumatized for the rest of their lives.[16] However, scientific discoveries in the late 18th and early 19th centuries paved the way for the development of modern anesthetic techniques.
The 19th century was filled with scientific advancements in pharmacology and physiology. During the 1840s, the introduction of diethyl ether (1842),[17][18][19] nitrous oxide (1844),[20][21] and chloroform (1847)[22][23] as general anesthetics revolutionized modern medicine.[1][2][3][4][13][20][24][25][26] The late 19th century also saw major advancements in modern surgery with the development and application of antiseptic techniques, a result of the germ theory of disease, which significantly reduced morbidity and mortality rates.[15]
In the 20th century, the safety and efficacy of general anesthetics were further improved with the routine use of tracheal intubation and advanced airway management techniques, monitoring, and new anesthetic agents with improved characteristics. Standardized training programs for anesthesiologists and nurse anesthetists also emerged during this period.
Moreover, the application of economic and business administration principles to healthcare in the late 20th and early 21st centuries led to the introduction of management practices, such as transfer pricing, intended to improve the efficiency of anesthetists.[27]
In ancient Greek texts, such as the Hippocratic Corpus and the dialogue Timaeus, the term ἀναισθησία (anaisthēsíā) is used, meaning "without sensation". The term is derived from the prefix ἀν- (an-), meaning "without", and αἴσθησις (aisthēsis), meaning "sensation". This ancient usage underlies the later adoption of the word in medical practice.
In 1679, Steven Blankaart published Lexicon medicum graeco-latinum with the Latin term anaisthesia. In 1684, an English translation appeared, titled A Physical Dictionary, with anesthesia defined as a "defect of sensation, as in paralytic and blasted persons". Subsequently, the term and variant spellings like anæsthesia were used in medical literature to signify "insensibility".[28]
In 1846, in a letter, Oliver Wendell Holmes proposed that the term anesthesia be used for the state induced by an agent, and anesthetic for the agent itself. Holmes justified this with earlier uses of anesthesia in medical literature to mean "insensibility", particularly to "objects of touch".[29][30][28]
The first attempts at general anesthesia were probably herbal remedies administered in prehistory. Alcohol is the oldest known sedative; it was used in ancient Mesopotamia thousands of years ago.[31]
The first historical achievement in anesthesia occurred around 4000 BC in ancient Mesopotamia.[5][10][32][33][34] This was the advent of ethanol (commonly known as drinking alcohol), the first general anesthetic agent.[1][2][3][7] These accounts of the anesthetic uses of ethanol come from Sumer, the oldest known historical civilization, and are documented in cuneiform, the oldest known writing system.
The Sumerians are said to have cultivated and harvested the opium poppy (Papaver somniferum) in lower Mesopotamia as early as 3400 BC,[35][36] though this has been disputed.[37] The most ancient testimony concerning the opium poppy found to date was inscribed in cuneiform script on a small white clay tablet at the end of the third millennium BC. This tablet was discovered in 1954 during excavations at Nippur, and is currently kept at the University of Pennsylvania Museum of Archaeology and Anthropology. Deciphered by Samuel Noah Kramer and Martin Leve, it is considered to be the most ancient pharmacopoeia in existence.[38][39] Some Sumerian tablets of this era bear an ideogram, "hul gil", which translates to "plant of joy" and is believed by some authors to refer to opium.[40][41] The term gil is still used for opium in certain parts of the world.[42] The Sumerian goddess Nidaba is often depicted with poppies growing out of her shoulders. About 2225 BC, the Sumerian territory became part of the Babylonian empire. Knowledge and use of the opium poppy and its euphoric effects thus passed to the Babylonians, who expanded their empire eastwards to Persia and westwards to Egypt, thereby extending its range to these civilizations.[42] British archaeologist and cuneiformist Reginald Campbell Thompson writes that opium was known to the Assyrians in the 7th century BC.[43] The term "Arat Pa Pa" occurs in the Assyrian Herbal, a collection of inscribed Assyrian tablets dated to c. 650 BC. According to Thompson, this term is the Assyrian name for the juice of the poppy and may be the etymological origin of the Latin "papaver".[40]
The ancient Egyptians had some surgical instruments,[44][45] as well as crude analgesics and sedatives, including possibly an extract prepared from the mandrake fruit.[46] The use of preparations similar to opium in surgery is recorded in the Ebers Papyrus, an Egyptian medical papyrus written in the Eighteenth Dynasty.[42][44][47] However, it is questionable whether opium itself was known in ancient Egypt.[48] The Greek gods Hypnos (Sleep), Nyx (Night), and Thanatos (Death) were often depicted holding poppies.[49]
Before the introduction of opium to ancient India and China, those civilizations pioneered the use of cannabis incense and aconitum. Around 400 BC, the Sushruta Samhita (a text from the Indian subcontinent on Ayurvedic medicine and surgery) advocated the use of wine with incense of cannabis for anesthesia.[50] By the 8th century AD, Arab traders had brought opium to India[51] and China.[52]
In Classical antiquity, anesthetics were described by several writers, among them the Chinese physicians Bian Que and Hua Tuo.
Bian Que (Chinese: 扁鵲, Wade–Giles: Pien Ch'iao, c. 300 BC) was a legendary Chinese internist and surgeon who reportedly used general anesthesia for surgical procedures. It is recorded in the Book of Master Han Fei (c. 250 BC), the Records of the Grand Historian (c. 100 BC), and the Book of Master Lie (c. 300 AD) that Bian Que gave two men, named "Lu" and "Chao", a toxic drink which rendered them unconscious for three days, during which time he performed a gastrostomy upon them.[54][55][56]
Hua Tuo (Chinese: 華佗, c. 140–208 AD) was a Chinese surgeon of the 2nd century AD. According to the Records of Three Kingdoms (c. 270 AD) and the Book of the Later Han (c. 430 AD), Hua Tuo performed surgery under general anesthesia using a formula he had developed by mixing wine with a mixture of herbal extracts he called mafeisan (麻沸散).[57] Hua Tuo reportedly used mafeisan to perform even major operations such as resection of gangrenous intestines.[57][58][59] Before the surgery, he administered an oral anesthetic potion, probably dissolved in wine, in order to induce a state of unconsciousness and partial neuromuscular blockade.[57]
The exact composition of mafeisan, like the rest of Hua Tuo's clinical knowledge, was lost when he burned his manuscripts just before his death.[60] The composition of the anesthetic powder is not mentioned in either the Records of Three Kingdoms or the Book of the Later Han. Because Confucian teachings regarded the body as sacred and considered surgery a form of bodily mutilation, surgery was strongly discouraged in ancient China. Because of this, despite Hua Tuo's reported success with general anesthesia, the practice of surgery in ancient China ended with his death.[57]
The name mafeisan combines ma (麻, meaning "cannabis, hemp, numbed or tingling"), fei (沸, meaning "boiling or bubbling"), and san (散, meaning "to break up or scatter", or "medicine in powder form"). The word mafeisan therefore probably means something like "cannabis boil powder". Many sinologists and scholars of traditional Chinese medicine have guessed at the composition of Hua Tuo's mafeisan powder, but the exact components remain unclear. His formula is believed to have contained some combination of several herbal ingredients.[57][60][61][62]
Others have suggested the potion may have also contained hashish,[58] bhang,[59] shang-luh,[54] or opium.[63] Victor H. Mair wrote that mafei "appears to be a transcription of some Indo-European word related to 'morphine'".[64] Some authors believe that Hua Tuo may have discovered surgical analgesia by acupuncture, and that mafeisan either had nothing to do with or was simply an adjunct to his strategy for anesthesia.[65] Many physicians have attempted to re-create the same formulation based on historical records, but none have achieved the clinical efficacy ascribed to Hua Tuo. In any event, Hua Tuo's formula did not appear to be effective for major operations.[64][66]
Other substances used from antiquity for anesthetic purposes include extracts of juniper and coca.[67][68][69]
Ferdowsi (940–1020) was a Persian poet who lived in the Abbasid Caliphate. In the Shahnameh, his national epic poem, Ferdowsi described a caesarean section performed on Rudaba.[citation needed] A special wine prepared by a Zoroastrian priest was used as an anesthetic for this operation.[54]
Circa 1020, Ibn Sīnā (980–1037) described the "soporific sponge" in The Canon of Medicine, a sponge imbued with aromatics and narcotics which was to be placed under a patient's nose during surgical operations.[70][71][72] Opium made its way from Asia Minor to all parts of Europe between the 10th and 13th centuries.[73]
From about 1200 to 1500 AD, a potion called dwale was used in England as an anesthetic.[74] This alcohol-based mixture contained bile, opium, lettuce, bryony, henbane, hemlock, and vinegar.[74] Surgeons roused their patients by rubbing vinegar and salt on their cheekbones.[74] Records of dwale appear in numerous literary sources, including Shakespeare's Hamlet and the John Keats poem "Ode to a Nightingale".[74] The 13th century saw the first recorded prescription of the "spongia soporifica", a sponge soaked in the juices of unripe mulberry, flax, mandragora leaves, ivy, lettuce seeds, lapathum, and hemlock with hyoscyamus. After treatment and/or storage, the sponge could be heated and the vapors inhaled with anesthetic effect.[citation needed]
The alchemist Ramon Llull has been credited with discovering diethyl ether in 1275.[citation needed][74] Aureolus Theophrastus Bombastus von Hohenheim (1493–1541), better known as Paracelsus, discovered the analgesic properties of diethyl ether around 1525.[75] It was first synthesized in 1540 by Valerius Cordus, who noted some of its medicinal properties.[citation needed] He called it oleum dulce vitrioli, a name reflecting the fact that it is synthesized by distilling a mixture of ethanol and sulfuric acid (known at that time as oil of vitriol). August Sigmund Frobenius gave the name Spiritus Vini Æthereus to the substance in 1730.[76][77]
Joseph Priestley (1733–1804) was an English polymath who discovered nitrous oxide, nitric oxide, ammonia, hydrogen chloride, and (along with Carl Wilhelm Scheele and Antoine Lavoisier) oxygen. Beginning in 1775, Priestley published his research in Experiments and Observations on Different Kinds of Air, a six-volume work.[78] These recent discoveries about gases stimulated a great deal of interest in the European scientific community. Thomas Beddoes (1760–1808) was an English philosopher, physician and teacher of medicine, and like his older colleague Priestley, a member of the Lunar Society of Birmingham. With an eye toward making further advances in this new science, as well as offering treatment for diseases previously thought to be untreatable (such as asthma and tuberculosis), Beddoes founded the Pneumatic Institution for inhalation gas therapy in 1798 at Dowry Square in Clifton, Bristol.[79] Beddoes employed the chemist and physicist Humphry Davy (1778–1829) as superintendent of the institute, and the engineer James Watt (1736–1819) to help manufacture the gases. Other members of the Lunar Society, such as Erasmus Darwin and Josiah Wedgwood, were also actively involved with the institute.
During the course of his research at the Pneumatic Institution, Davy discovered the anesthetic properties of nitrous oxide.[80] Davy, who coined the term "laughing gas" for nitrous oxide, published his findings the following year in the now-classic treatise Researches, chemical and philosophical; chiefly concerning nitrous oxide or dephlogisticated nitrous air, and its respiration. Davy was not a physician, and he never administered nitrous oxide during a surgical procedure. He was, however, the first to document the analgesic effects of nitrous oxide, as well as its potential benefits in relieving pain during surgery:[81]
As nitrous oxide in its extensive operation appears capable of destroying physical pain, it may probably be used with advantage during surgical operations in which no great effusion of blood takes place.
Takamine Tokumei of Shuri, Ryūkyū Kingdom, is reported to have administered general anesthesia in 1689 in the Ryukyus, now known as Okinawa. He passed on his knowledge to Satsuma doctors in 1690 and to Ryūkyūan doctors in 1714.[82]
Hanaoka Seishū (華岡 青洲, 1760–1835) of Osaka was a Japanese surgeon of the Edo period with a knowledge of Chinese herbal medicine, as well as of the Western surgical techniques he had learned through Rangaku (literally "Dutch learning", and by extension "Western learning"). Beginning in about 1785, Hanaoka embarked on a quest to re-create a compound that would have pharmacologic properties similar to Hua Tuo's mafeisan.[83] After years of research and experimentation, he finally developed a formula which he named tsūsensan (also known as mafutsu-san). Like that of Hua Tuo, this compound was composed of extracts of several different plants.[84][85][86]
The active ingredients in tsūsensan are scopolamine, hyoscyamine, atropine, aconitine and angelicotoxin. When consumed in sufficient quantity, tsūsensan produces a state of general anesthesia and skeletal muscle paralysis.[86] Shutei Nakagawa (1773–1850), a close friend of Hanaoka, wrote a small pamphlet titled "Mayaku-ko" ("narcotic powder") in 1796. Although the original manuscript was lost in a fire in 1867, this brochure described the state of Hanaoka's research on general anesthesia at the time.[87]
On 13 October 1804, Hanaoka performed a partial mastectomy for breast cancer on a 60-year-old woman named Kan Aiya, using tsūsensan as a general anesthetic. This is generally regarded today as the first reliable documentation of an operation performed under general anesthesia.[83][85][88][89] Hanaoka went on to perform many operations using tsūsensan, including resection of malignant tumors, extraction of bladder stones, and extremity amputations. Before his death in 1835, Hanaoka performed more than 150 operations for breast cancer.[83][89][90][91]
Friedrich Sertürner (1783–1841) first isolated morphine from opium in 1804;[92] he named it morphine after Morpheus, the Greek god of dreams.[93][full citation needed][94][full citation needed]
Henry Hill Hickman (1800–1830) experimented with the use of carbon dioxide as an anesthetic in the 1820s. He would render an animal insensible, effectively by nearly suffocating it with carbon dioxide, then determine the effects of the gas by amputating one of its limbs. In 1824, Hickman submitted the results of his research to the Royal Society in a short treatise titled Letter on suspended animation: with the view of ascertaining its probable utility in surgical operations on human subjects. The response was an 1826 article in The Lancet titled "Surgical Humbug" that ruthlessly criticised his work. Hickman died four years later at age 30. Though unappreciated at the time of his death, his work has since been positively reappraised and he is now recognised as one of the fathers of anesthesia.
By the late 1830s, Humphry Davy's experiments had become widely publicized within academic circles in the northeastern United States. Wandering lecturers would hold public gatherings, referred to as "ether frolics", where members of the audience were encouraged to inhale diethyl ether or nitrous oxide to demonstrate the mind-altering properties of these agents while providing much entertainment to onlookers.[95] Four notable men participated in these events and witnessed the use of ether in this manner: William Edward Clarke (1819–1898), Crawford W. Long (1815–1878), Horace Wells (1815–1848), and William T. G. Morton (1819–1868).
While attending undergraduate school in Rochester, New York, in 1839, classmates Clarke and Morton apparently participated in ether frolics with some regularity.[96][97][98][99] In January 1842, by now a medical student at Berkshire Medical College, Clarke administered ether to a Miss Hobbie while Elijah Pope performed a dental extraction.[97] In so doing, he became the first to administer an inhaled anesthetic to facilitate the performance of a surgical procedure. Clarke apparently thought little of his accomplishment, and chose neither to publish nor to pursue the technique any further. Indeed, the event is not even mentioned in Clarke's biography.[100]
Crawford W. Long was a physician and pharmacist practicing in Jefferson, Georgia, in the mid-19th century. During his time as a student at the University of Pennsylvania School of Medicine in the late 1830s, he had observed and probably participated in the ether frolics that had become popular at that time. At these gatherings, Long observed that some participants experienced bumps and bruises but afterward had no recall of what had happened. He postulated that diethyl ether produced pharmacologic effects similar to those of nitrous oxide. On 30 March 1842, he administered diethyl ether by inhalation to a man named James Venable in order to remove a tumor from the man's neck.[101] Long later removed a second tumor from Venable, again under ether anesthesia. He went on to employ ether as a general anesthetic for limb amputations and childbirth. Long, however, did not publish his experience until 1849, thereby denying himself much of the credit he deserved.[101]
With the beginnings of modern medicine, the stage was set for physicians and surgeons to build a paradigm in which anesthesia became useful.[102]
On 10 December 1844, Gardner Quincy Colton held a public demonstration of nitrous oxide in Hartford, Connecticut. One of the participants, Samuel A. Cooley, sustained a significant injury to his leg while under the influence of nitrous oxide without noticing it. Horace Wells, a Connecticut dentist present in the audience that day, immediately seized upon the significance of this apparent analgesic effect of nitrous oxide. The following day, Wells underwent a painless dental extraction while under the influence of nitrous oxide administered by Colton. Wells then began to administer nitrous oxide to his own patients, successfully performing several dental extractions over the next couple of weeks.
William T. G. Morton, another New England dentist, was a former student and then-current business partner of Wells. He was also a former acquaintance and classmate of William Edward Clarke (the two had attended undergraduate school together in Rochester, New York). Morton arranged for Wells to demonstrate his technique for dental extraction under nitrous oxide general anesthesia at Massachusetts General Hospital, in conjunction with the prominent surgeon John Collins Warren. This demonstration, which took place on 20 January 1845, ended in failure when the patient cried out in pain in the middle of the operation.[103]
On 30 September 1846, Morton administered diethyl ether to Eben Frost, a music teacher from Boston, for a dental extraction. Two weeks later, Morton became the first to publicly demonstrate the use of diethyl ether as a general anesthetic at Massachusetts General Hospital, in what is known today as the Ether Dome.[104] On 16 October 1846, John Collins Warren removed a tumor from the neck of a local printer, Edward Gilbert Abbott. Upon completion of the procedure, Warren reportedly quipped, "Gentlemen, this is no humbug." News of this event rapidly traveled around the world.[105] Robert Liston performed the first amputation under ether anesthesia in December of that year. Morton published his experience soon after.[104] Harvard University professor Charles Thomas Jackson (1805–1880) later claimed that Morton stole his idea;[106] Morton disagreed, and a lifelong dispute began.[105] For many years, Morton was credited as the pioneer of general anesthesia in the Western hemisphere, despite the fact that his demonstration occurred four years after Long's initial experience. Long later petitioned William Crosby Dawson (1798–1856), a United States Senator from Georgia at that time, to support on the floor of the United States Senate his claim to have been the first to use ether anesthesia.[107]
In 1847, the Scottish obstetrician James Young Simpson (1811–1870) of Edinburgh became the first to use chloroform as a general anesthetic on a human (Robert Mortimer Glover had written on this possibility in 1842 but had only used it on dogs). The use of chloroform anesthesia expanded rapidly thereafter in Europe. In the United States, chloroform began to replace ether as an anesthetic at the beginning of the 20th century, but was soon abandoned in favor of ether when its hepatic and cardiac toxicity, especially its tendency to cause potentially fatal cardiac dysrhythmias, became apparent. Indeed, the use of chloroform versus ether as the primary anesthetic gas varied by country and region: Britain and the American South stuck with chloroform while the American North returned to ether.[102]
John Snow quickly became the most experienced British physician working with the new anesthetic gases of ether and chloroform, becoming, in effect, the first British anesthetist. Through his careful clinical records he was eventually able to convince the elite of London medicine that anesthesia (chloroform) had a rightful place in childbirth. Thus, in 1853, Queen Victoria's accoucheurs invited John Snow to anesthetize the Queen for the birth of her eighth child.[108]
From the beginnings of ether and chloroform anesthesia until well into the 20th century, the standard method of administration was the drop mask: a fabric-covered mask was placed over the patient's mouth and the volatile liquid was dripped onto it while the patient breathed spontaneously. The later development of safe endotracheal tubes changed this.[109] Because of the unique social setting of London medicine, anesthesia had become its own speciality there by the end of the nineteenth century, while in the rest of the United Kingdom and most of the world anesthesia remained under the purview of the surgeon, who would assign the task to a junior doctor or nurse.[102]
After the Austrian diplomat Karl von Scherzer brought back sufficient quantities of coca leaves from Peru, Albert Niemann isolated cocaine in 1860; it thus became the first local anesthetic.[110][111]
In 1871, the German surgeon Friedrich Trendelenburg (1844–1924) published a paper describing the first successful elective human tracheotomy to be performed for the purpose of administration of general anesthesia.[112][113][114][115]
In 1880, the Scottish surgeon William Macewen (1848–1924) reported on his use of orotracheal intubation as an alternative to tracheotomy to allow a patient with glottic edema to breathe, as well as in the setting of general anesthesia with chloroform.[116][117][118] All previous observations of the glottis and larynx (including those of Manuel García,[119] Wilhelm Hack[120][121] and Macewen) had been performed under indirect vision (using mirrors) until 23 April 1895, when Alfred Kirstein (1863–1922) of Germany first described direct visualization of the vocal cords. Kirstein performed the first direct laryngoscopy in Berlin, using an esophagoscope he had modified for this purpose; he called this device an autoscope.[122] The death of Emperor Frederick III (1831–1888)[123] may have motivated Kirstein to develop the autoscope.[124]
The 20th century saw the transformation of tracheotomy, endoscopy and non-surgical tracheal intubation from rarely employed procedures into essential components of the practices of anesthesia, critical care medicine, emergency medicine, gastroenterology, pulmonology, and surgery.
In 1902, Hermann Emil Fischer (1852–1919) and Joseph von Mering (1849–1908) discovered that diethylbarbituric acid was an effective hypnotic agent.[125] Also called barbital or Veronal (the trade name assigned to it by Bayer Pharmaceuticals), this new drug became the first commercially marketed barbiturate; it was used as a treatment for insomnia from 1903 until the mid-1950s.
Until 1913, oral and maxillofacial surgery was performed by mask inhalation anesthesia, topical application of local anesthetics to the mucosa, rectal anesthesia, or intravenous anesthesia. While otherwise effective, these techniques did not protect the airway from obstruction and also exposed patients to the risk of pulmonary aspiration of blood and mucus into the tracheobronchial tree. In 1913, Chevalier Jackson (1865–1958) was the first to report a high rate of success for the use of direct laryngoscopy as a means to intubate the trachea.[126] Jackson introduced a new laryngoscope blade that had a light source at the distal tip, rather than the proximal light source used by Kirstein.[127] This new blade incorporated a component that the operator could slide out to allow room for passage of an endotracheal tube or bronchoscope.[128]
Also in 1913, Henry H. Janeway (1873–1921) published results he had achieved using a laryngoscope he had recently developed.[129] An American anesthesiologist practicing at Bellevue Hospital in New York City, Janeway was of the opinion that direct intratracheal insufflation of volatile anesthetics would provide improved conditions for otolaryngologic surgery. With this in mind, he developed a laryngoscope designed for the sole purpose of tracheal intubation. Like Jackson's device, Janeway's instrument incorporated a distal light source. Unique, however, were the inclusion of batteries within the handle, a central notch in the blade for maintaining the tracheal tube in the midline of the oropharynx during intubation, and a slight curve to the distal tip of the blade to help guide the tube through the glottis. The success of this design led to its subsequent use in other types of surgery. Janeway was thus instrumental in popularizing the widespread use of direct laryngoscopy and tracheal intubation in the practice of anesthesiology.[124]
In 1928, Arthur Ernest Guedel introduced the cuffed endotracheal tube, which allowed anesthesia deep enough to completely suppress spontaneous respiration while gas and oxygen were delivered via positive pressure ventilation controlled by the anesthesiologist.[130] Also important for the development of modern anesthesia were anesthesia machines; one early device, the copper kettle, was developed by Lucien E. Morris at the University of Wisconsin.[133][134] Only three years after Guedel's innovation, Joseph W. Gale developed a technique that allowed the anesthesiologist to ventilate one lung at a time.[131] This enabled the development of thoracic surgery, which had previously been vexed by the pendelluft[132] problem, in which the diseased lung being operated on inflated with the patient's exhalation because the open thorax had lost its vacuum to the atmosphere. Eventually, by the early 1980s, double-lumen endotracheal tubes made of clear plastic enabled anesthesiologists to selectively ventilate one lung while using flexible fiberoptic bronchoscopy to block off the diseased lung and prevent cross-contamination.[109]
Sodium thiopental, the first intravenous anesthetic, was synthesized in 1934 by Ernest H. Volwiler (1893–1992) and Donalee L. Tabern (1900–1974), working for Abbott Laboratories.[135] It was first used in humans on 8 March 1934 by Ralph M. Waters in an investigation of its properties, which included short-term anesthesia and surprisingly little analgesia. Three months later, John Silas Lundy started a clinical trial of thiopental at the Mayo Clinic at the request of Abbott Laboratories. Volwiler and Tabern were awarded U.S. Patent No. 2,153,729 in 1939 for the discovery of thiopental, and they were inducted into the National Inventors Hall of Fame in 1986.
In 1939, the search for a synthetic substitute for atropine culminated serendipitously in the discovery of meperidine, the first opiate with a structure altogether different from that of morphine.[136] This was followed in 1947 by the widespread introduction of methadone, another structurally unrelated compound with pharmacological properties similar to those of morphine.[137]
After World War I, further advances were made in the field of intratracheal anesthesia. Among these were those made by Sir Ivan Whiteside Magill (1888–1986). Working at the Queen's Hospital for Facial and Jaw Injuries in Sidcup with the plastic surgeon Sir Harold Gillies (1882–1960) and the anesthetist E. Stanley Rowbotham (1890–1979), Magill developed the technique of awake blind nasotracheal intubation.[138][139][140][141][142][143] Magill devised a new type of angulated forceps (the Magill forceps) that are still used today to facilitate nasotracheal intubation in a manner little changed from Magill's original technique.[144] Other devices invented by Magill include the Magill laryngoscope blade,[145] as well as several apparatuses for the administration of volatile anesthetic agents.[146][147][148] The Magill curve of an endotracheal tube is also named for Magill.
The first hospital anesthesia department was established at the Massachusetts General Hospital in 1936, under the leadership of Henry K. Beecher (1904–1976). Beecher, who received his training in surgery, had no previous experience in anesthesia.[149]
Although initially used to reduce the sequelae of spasticity associated with electroconvulsive therapy for psychiatric disease, curare found use in the operating rooms at Bellevue in the 1940s, administered by E. M. Papper and Stuart Cullen using preparations made by Squibb.[150] This neuromuscular blockade permitted complete paralysis of the diaphragm and enabled control of ventilation via positive pressure ventilation.[109] Mechanical ventilation first became commonplace with the polio epidemics of the 1950s, most notably in Denmark, where an outbreak in 1952 led to the creation of critical care medicine out of anesthesia. At first anesthesiologists hesitated to bring the ventilator into the operating theater unless necessary, but by the 1960s it had become standard operating room equipment.[109]
Sir Robert Macintosh (1897–1989) achieved significant advances in techniques for tracheal intubation when he introduced his new curved laryngoscope blade in 1943.[151] The Macintosh blade remains to this day the most widely used laryngoscope blade for orotracheal intubation.[152] In 1949, Macintosh published a case report describing the novel use of a gum elastic urinary catheter as an endotracheal tube introducer to facilitate difficult tracheal intubation.[153] Inspired by Macintosh's report, P. Hex Venn (at that time the anesthetic advisor to the British firm Eschmann Bros. & Walsh, Ltd.) set about developing an endotracheal tube introducer based on this concept. Venn's design was accepted in March 1973, and what became known as the Eschmann endotracheal tube introducer went into production later that year.[154] The material of Venn's design differed from that of a gum elastic bougie in that it had two layers: a core of tube woven from polyester threads and an outer resin layer. This provided more stiffness while maintaining flexibility and a slippery surface. Other differences were the length (the new introducer was 60 cm (24 in), much longer than the gum elastic bougie) and the presence of a 35° curved tip, permitting it to be steered around obstacles.[155][156]
For over a hundred years, the mainstay of inhalational anesthesia remained ether, joined by cyclopropane, which had been introduced in the 1930s. In 1956, halothane[157] was introduced; its significant advantage of not being flammable reduced the risk of operating room fires. In the 1960s, the halogenated ethers superseded halothane owing to its rare but significant side effects of cardiac arrhythmias and liver toxicity. The first two halogenated ethers were methoxyflurane and enflurane. These in turn were replaced by the current standards of isoflurane, sevoflurane, and desflurane in the 1980s and 1990s, although methoxyflurane remains in use for prehospital anesthesia in Australia as Penthrox. Halothane remains in common use throughout much of the developing world.
Many new intravenous and inhalational anesthetics were developed and brought into clinical use during the second half of the 20th century. Paul Janssen (1926–2003), the founder of Janssen Pharmaceutica, is credited with the development of over 80 pharmaceutical compounds.[158] Janssen synthesized nearly all of the butyrophenone class of antipsychotic agents, beginning with haloperidol (1958) and droperidol (1961).[159] These agents were rapidly integrated into the practice of anesthesia.[160][161][162][163][164] In 1960, Janssen's team synthesized fentanyl, the first of the piperidinone-derived opioids.[165][166] Fentanyl was followed by sufentanil (1974),[167] alfentanil (1976),[168][169] carfentanil (1976),[170] and lofentanil (1980).[171] Janssen and his team also developed etomidate (1964),[172][173] a potent intravenous anesthetic induction agent.
The concept of using a fiberoptic endoscope for tracheal intubation was introduced by Peter Murphy, an English anesthetist, in 1967.[174] By the mid-1980s, the flexible fiberoptic bronchoscope had become an indispensable instrument within the pulmonology and anesthesia communities.[175]
The "digital revolution" of the 21st century has brought newer technology to the art and science of tracheal intubation. Several manufacturers have developed video laryngoscopes which employ digital technology, such as the CMOS active pixel sensor (APS), to generate a view of the glottis so that the trachea can be intubated. The Glidescope video laryngoscope is one example of such a device.[176][177]
Xenon, which does not act as a greenhouse gas, has recently been approved in some jurisdictions as an anesthetic agent.[178]
Genetic engineering is the science of manipulating the genetic material of an organism. The concept of genetic engineering was first proposed by Nikolay Timofeev-Ressovsky in 1934.[1] The first artificial genetic modification accomplished using biotechnology was transgenesis, the process of transferring genes from one organism to another, first achieved by Herbert Boyer and Stanley Cohen in 1973. It was the result of a series of advancements in techniques that allowed the direct modification of the genome. Important advances included the discovery of restriction enzymes and DNA ligases, the ability to design plasmids, and technologies like the polymerase chain reaction and sequencing. Transformation of DNA into a host organism was accomplished with the invention of biolistics, Agrobacterium-mediated recombination, and microinjection.
The first genetically modified animal was a mouse created in 1974 by Rudolf Jaenisch. In 1976 the technology was commercialised, with the advent of genetically modified bacteria that produced somatostatin, followed by insulin in 1978. In 1983 an antibiotic resistance gene was inserted into tobacco, leading to the first genetically engineered plant. Advances followed that allowed scientists to manipulate and add genes to a variety of different organisms and induce a range of different effects. Plants were first commercialized with virus-resistant tobacco released in China in 1992. The first genetically modified food was the Flavr Savr tomato, marketed in 1994. By 2010, 29 countries had planted commercialized biotech crops. In 2000, a paper published in Science introduced golden rice, the first food developed with increased nutrient value.
Genetic engineering is the direct manipulation of an organism's genome using certain biotechnology techniques that have only existed since the 1970s.[3] Human-directed genetic manipulation, however, began much earlier, with the domestication of plants and animals through artificial selection. The dog is believed to be the first animal domesticated, possibly arising from a common ancestor of the grey wolf,[2] with archeological evidence dating to about 12,000 BC.[4] Other carnivores domesticated in prehistoric times include the cat, which cohabited with humans 9,500 years ago.[5] Archeological evidence suggests sheep, cattle, pigs and goats were domesticated between 9,000 BC and 8,000 BC in the Fertile Crescent.[6]
The first evidence of plant domestication comes from emmer and einkorn wheat found in pre-Pottery Neolithic A villages in Southwest Asia dated to about 10,500 to 10,100 BC.[7] The Fertile Crescent of Western Asia, Egypt, and India were sites of the earliest planned sowing and harvesting of plants that had previously been gathered in the wild. Independent development of agriculture occurred in northern and southern China, Africa's Sahel, New Guinea and several regions of the Americas.[8] The eight Neolithic founder crops (emmer wheat, einkorn wheat, barley, peas, lentils, bitter vetch, chick peas and flax) had all appeared by about 7,000 BC.[9] Horticulture first appears in the Levant during the Chalcolithic period, about 6,800 to 6,300 BC.[10] Because vegetables consist largely of soft tissue, archeological evidence for their early cultivation is scarce; the earliest vegetable remains have been found in Egyptian caves dating back to the 2nd millennium BC.[11]
Selective breeding of domesticated plants was once the main way early farmers shaped organisms to suit their needs. Charles Darwin described three types of selection: methodical selection, wherein humans deliberately select for particular characteristics; unconscious selection, wherein a characteristic is selected simply because it is desirable; and natural selection, wherein a trait that helps an organism survive better is passed on.[12]: 25 Early breeding relied on unconscious and natural selection; when methodical selection was introduced is unknown.[12]: 25 Common characteristics bred into domesticated plants include grains that did not shatter (allowing easier harvesting), uniform ripening, shorter lifespans that translate to faster growing, loss of toxic compounds, and productivity.[12]: 27–30 Some plants, like the banana, could be propagated by vegetative cloning. The offspring often did not contain seeds and were therefore sterile, but they were usually juicier and larger. Propagation through cloning allowed these mutant varieties to be cultivated despite their lack of seeds.[12]: 31
Hybridization was another way that rapid changes in a plant's makeup were introduced. It often increased vigor in plants and combined desirable traits. Hybridization most likely first occurred when humans grew similar, yet slightly different, plants in close proximity.[12]: 32 Triticum aestivum, the wheat used in baking bread, is an allopolyploid; its creation is the result of two separate hybridization events.[13]
Grafting can transfer chloroplasts, mitochondrial DNA, and even the entire cell nucleus containing the genome, potentially creating a new species; this makes grafting a form of natural genetic engineering.[14]
X-rays were first used to deliberately mutate plants in 1927. Between 1927 and 2007, more than 2,540 genetically mutated plant varieties were produced using X-rays.[15]
Various genetic discoveries were essential to the development of genetic engineering. Genetic inheritance was first described by Gregor Mendel in 1865, following experiments crossing peas. Although largely ignored for 34 years, his work provided the first evidence of hereditary segregation and independent assortment.[16] In 1889, Hugo de Vries came up with the name "(pan)gene" after postulating that particles are responsible for the inheritance of characteristics,[17] and the term "genetics" was coined by William Bateson in 1905.[18] In 1928, Frederick Griffith proved the existence of a "transforming principle" involved in inheritance, which Avery, MacLeod and McCarty later (1944) identified as DNA. In 1941, Edward Lawrie Tatum and George Wells Beadle showed that genes code for proteins. The double helix structure of DNA was identified by James Watson and Francis Crick in 1953.
As well as discovering how DNA works, tools had to be developed that allowed it to be manipulated. In 1970, Hamilton Smith's lab discovered restriction enzymes, which allowed DNA to be cut at specific places and separated out on an electrophoresis gel. This enabled scientists to isolate genes from an organism's genome.[19] DNA ligases, which join broken DNA together, had been discovered earlier, in 1967,[20] and by combining the two enzymes it was possible to "cut and paste" DNA sequences to create recombinant DNA. Plasmids, discovered in 1952,[21] became important tools for transferring information between cells and replicating DNA sequences. Frederick Sanger developed a method for sequencing DNA in 1977, greatly increasing the genetic information available to researchers. The polymerase chain reaction (PCR), developed by Kary Mullis in 1983, allowed small sections of DNA to be amplified and aided the identification and isolation of genetic material.
As well as manipulating DNA, techniques had to be developed for its insertion (known as transformation) into an organism's genome. Griffith's experiment had already shown that some bacteria had the ability to naturally take up and express foreign DNA. Artificial competence was induced in Escherichia coli in 1970, when Morton Mandel and Akiko Higa showed that it could take up bacteriophage λ after treatment with calcium chloride solution (CaCl2).[22] Two years later, Stanley Cohen showed that CaCl2 treatment was also effective for the uptake of plasmid DNA.[23] Transformation using electroporation was developed in the late 1980s, increasing the efficiency and the range of bacteria that could be transformed.[24] In 1907, Agrobacterium tumefaciens, a bacterium that causes plant tumors, was discovered, and in the early 1970s the tumor-inducing agent was found to be a DNA plasmid called the Ti plasmid.[25] By removing the genes in the plasmid that caused the tumor and adding novel genes, researchers were able to infect plants with A. tumefaciens and let the bacteria insert their chosen DNA into the genomes of the plants.[26]
In 1972, Paul Berg used restriction enzymes and DNA ligases to create the first recombinant DNA molecules, combining DNA from the monkey virus SV40 with that of the lambda virus.[27] Herbert Boyer and Stanley Norman Cohen took Berg's work a step further and introduced recombinant DNA into a bacterial cell. Cohen was researching plasmids, while Boyer's work involved restriction enzymes. Recognising the complementary nature of their work, they teamed up in 1972. Together they found a restriction enzyme that cut the pSC101 plasmid at a single point, and were able to insert and ligate into the gap a gene conferring resistance to the antibiotic kanamycin. Cohen had previously devised a method by which bacteria could be induced to take up a plasmid, and using this they were able to create a bacterium that survived in the presence of kanamycin. This represented the first genetically modified organism. In further experiments they showed that other genes could be expressed in bacteria, including one from the toad Xenopus laevis, the first cross-kingdom transformation.[28][29][30]
In 1974, Rudolf Jaenisch created a transgenic mouse by introducing foreign DNA into its embryo, making it the world's first transgenic animal.[31][32] Jaenisch was studying mammalian cells infected with simian virus 40 (SV40) when he happened to read a paper from Beatrice Mintz describing the generation of chimera mice. He took his SV40 samples to Mintz's lab and injected them into early mouse embryos, expecting tumours to develop. The mice appeared normal, but after using radioactive probes he discovered that the virus had integrated itself into the mouse genome.[33] However, the mice did not pass the transgene to their offspring. In 1981, the laboratories of Frank Ruddle, Frank Constantini and Elizabeth Lacy injected purified DNA into a single-cell mouse embryo and showed transmission of the genetic material to subsequent generations.[34][35]
The first genetically engineered plant was tobacco, reported in 1983.[36] It was developed by Michael W. Bevan, Richard B. Flavell and Mary-Dell Chilton, who created a chimeric gene joining an antibiotic resistance gene to the Ti plasmid from Agrobacterium. The tobacco was infected with Agrobacterium transformed with this plasmid, resulting in the chimeric gene being inserted into the plant. Through tissue culture techniques, a single tobacco cell containing the gene was selected and a new plant grown from it.[37]
The development of genetic engineering technology led to concerns in the scientific community about potential risks. The development of a regulatory framework for genetic engineering began in 1975, at Asilomar, California. The Asilomar meeting recommended a set of guidelines regarding the cautious use of recombinant technology and any products resulting from that technology.[38] The Asilomar recommendations were voluntary, but in 1976 the US National Institutes of Health (NIH) formed a recombinant DNA advisory committee.[39] This was followed by other regulatory offices (the United States Department of Agriculture (USDA), Environmental Protection Agency (EPA) and Food and Drug Administration (FDA)), effectively making all recombinant DNA research tightly regulated in the US.[40]
In 1982, the Organisation for Economic Co-operation and Development (OECD) released a report into the potential hazards of releasing genetically modified organisms into the environment, just as the first transgenic plants were being developed.[41] As the technology improved and genetically modified organisms moved from model organisms to potential commercial products, the US established a committee at the Office of Science and Technology Policy (OSTP) to develop mechanisms to regulate the developing technology.[40] In 1986, the OSTP assigned regulatory approval of genetically modified plants in the US to the USDA, FDA and EPA.[42] In the late 1980s and early 1990s, guidance on assessing the safety of genetically engineered plants and food emerged from organizations including the FAO and WHO.[43][44][45][46]
The European Union first introduced laws requiring GMOs to be labelled in 1997.[47] In 2013, Connecticut became the first US state to enact a labeling law, although it would not take effect until other states followed suit.[48]
The ability to insert, alter or remove genes in model organisms allowed scientists to study the genetic elements of human diseases.[49] Genetically modified mice were created in 1984 that carried cloned oncogenes predisposing them to developing cancer.[50] The technology has also been used to generate mice with genes knocked out. The first recorded knockout mouse was created by Mario R. Capecchi, Martin Evans and Oliver Smithies in 1989. In 1992, oncomice with tumor suppressor genes knocked out were generated.[50] Creating knockout rats is much harder and only became possible in 2003.[51][52]
After the discovery of microRNA in 1993,[53] RNA interference (RNAi) has been used to silence an organism's genes.[54] By modifying an organism to express microRNA targeted to its endogenous genes, researchers have been able to knock out or partially reduce gene function in a range of species. The ability to partially reduce gene function has allowed the study of genes that are lethal when completely knocked out. Other advantages of using RNAi include the availability of inducible and tissue-specific knockouts.[55] In 2007, microRNA targeted to insect and nematode genes was expressed in plants, leading to suppression of those genes when the pests fed on the transgenic plant, potentially creating a new way to control them.[56] Targeting endogenous microRNA expression has allowed further fine-tuning of gene expression, supplementing the more traditional gene knockout approach.[57]
Genetic engineering has been used to produce proteins derived from humans and other sources in organisms that normally cannot synthesize these proteins. Human insulin-synthesising bacteria were developed in 1979 and were first used as a treatment in 1982.[58] In 1988, the first human antibodies were produced in plants.[59] In 2000, vitamin A-enriched golden rice became the first food with increased nutrient value.[60]
As not all plant cells were susceptible to infection by A. tumefaciens, other methods were developed, including electroporation, micro-injection[61] and particle bombardment with a gene gun (invented in 1987).[62][63] In the 1980s, techniques were developed to introduce isolated chloroplasts back into a plant cell that had had its cell wall removed. With the introduction of the gene gun in 1987, it became possible to integrate foreign genes into a chloroplast.[64]
Genetic transformation has become very efficient in some model organisms. In 1998, genetically modified seeds were produced in Arabidopsis thaliana simply by dipping the flowers in an Agrobacterium solution.[65] The range of plants that can be transformed has increased as tissue culture techniques have been developed for different species.
The first transgenic livestock were produced in 1985,[66] by micro-injecting foreign DNA into rabbit, sheep and pig eggs.[67] The first animals to synthesise transgenic proteins in their milk were mice,[68] engineered to produce human tissue plasminogen activator.[69] This technology was subsequently applied to sheep, pigs, cows and other livestock.[68]
In 2010, scientists at the J. Craig Venter Institute announced that they had created the first synthetic bacterial genome. The researchers added the new genome to bacterial cells and selected for cells that contained it. To do this, the cells undergo a process called resolution, in which, during bacterial cell division, one new cell receives the original DNA genome of the bacterium while the other receives the new synthetic genome. When the latter cell replicates, it uses the synthetic genome as its template. The resulting bacterium, named Synthia, was the world's first synthetic life form.[70][71]
In 2014, a bacterium was developed that replicated a plasmid containing an unnatural base pair. This required altering the bacterium so that it could import the unnatural nucleotides and then efficiently replicate them. The plasmid retained the unnatural base pairs through an estimated 99.4% of replications.[72] This was the first organism engineered to use an expanded genetic alphabet.[73]
In 2015, CRISPR and TALENs were used to modify plant genomes. Chinese labs used them to create a fungus-resistant wheat and to boost rice yields, while a UK group used them to tweak a barley gene that could help produce drought-resistant varieties. When used to precisely remove material from DNA without adding genes from other species, the result is not subject to the lengthy and expensive regulatory process associated with GMOs. While CRISPR may use foreign DNA to aid the editing process, the second generation of edited plants contains none of that DNA. Researchers welcomed the acceleration because it may allow them to "keep up" with rapidly evolving pathogens. The US Department of Agriculture stated that some examples of gene-edited corn, potatoes and soybeans are not subject to existing regulations. As of 2016, other review bodies had yet to make statements.[74]
In 1976, Genentech, the first genetic engineering company, was founded by Herbert Boyer and Robert Swanson; a year later the company produced a human protein (somatostatin) in E. coli. Genentech announced the production of genetically engineered human insulin in 1978.[75] In 1980, the U.S. Supreme Court ruled in the Diamond v. Chakrabarty case that genetically altered life could be patented.[76] The insulin produced by bacteria, branded humulin, was approved for release by the Food and Drug Administration in 1982.[77] In 1983, a biotech company, Advanced Genetic Sciences (AGS), applied for U.S. government authorization to perform field tests with the ice-minus strain of P. syringae to protect crops from frost, but environmental groups and protestors delayed the field tests for four years with legal challenges.[78] In 1987, the ice-minus strain of P. syringae became the first genetically modified organism (GMO) to be released into the environment[79] when a strawberry field and a potato field in California were sprayed with it.[80] Both test fields were attacked by activist groups the night before the tests occurred: "The world's first trial site attracted the world's first field trasher".[79]
The first genetically modified crop plant, an antibiotic-resistant tobacco plant, was produced in 1982.[81] The first field trials of genetically engineered plants occurred in France and the US in 1986, when tobacco plants engineered to be resistant to herbicides were tested.[82] In 1987, Plant Genetic Systems, founded by Marc Van Montagu and Jeff Schell, became the first company to genetically engineer insect-resistant plants, by incorporating genes that produced insecticidal proteins from Bacillus thuringiensis (Bt) into tobacco.[83]
Genetically modified microbial enzymes were the first application of genetically modified organisms in food production and were approved in 1988 by the US Food and Drug Administration.[84] In the early 1990s, recombinant chymosin was approved for use in several countries.[84][85] Cheese had typically been made using the enzyme complex rennet, extracted from cows' stomach lining; scientists modified bacteria to produce chymosin, which was also able to clot milk, resulting in cheese curds.[86] The People's Republic of China was the first country to commercialize transgenic plants, introducing a virus-resistant tobacco in 1992.[87] In 1994, Calgene attained approval to commercially release the Flavr Savr tomato, a tomato engineered to have a longer shelf life.[88] Also in 1994, the European Union approved tobacco engineered to be resistant to the herbicide bromoxynil, making it the first genetically engineered crop commercialized in Europe.[89] In 1995, Bt potato was approved safe by the Environmental Protection Agency, after having been approved by the FDA, making it the first pesticide-producing crop to be approved in the US.[90] By 1996, a total of 35 approvals had been granted to commercially grow 8 transgenic crops and one flower crop (carnation), with 8 different traits, in 6 countries plus the EU.[82]
By 2010, 29 countries had planted commercialized biotech crops and a further 31 countries had granted regulatory approval for transgenic crops to be imported. [ 91 ] In 2013 Robert Fraley ( Monsanto 's executive vice president and chief technology officer), Marc Van Montagu and Mary-Dell Chilton were awarded the World Food Prize for improving the "quality, quantity or availability" of food in the world. [ 92 ]
The first genetically modified animal to be commercialised was the GloFish , a zebrafish with a fluorescent gene added that allows it to glow in the dark under ultraviolet light . [ 93 ] The first genetically modified animal to be approved for food use was AquAdvantage salmon in 2015. [ 94 ] The salmon were transformed with a growth hormone -regulating gene from a Pacific Chinook salmon and a promoter from an ocean pout , enabling them to grow year-round instead of only during spring and summer. [ 95 ]
Opposition to and support for the use of genetic engineering have existed since the technology was developed. [ 79 ] After Arpad Pusztai went public in 1998 with research he was conducting, public opposition to genetically modified food increased. [ 96 ] Opposition continued following controversial and publicly debated papers published in 1999 and 2013 that claimed negative environmental and health impacts from genetically modified crops . [ 97 ] [ 98 ]
The history of genetics dates from the classical era with contributions by Pythagoras , Hippocrates , Aristotle , Epicurus , and others. Modern genetics began with the work of the Augustinian friar Gregor Johann Mendel . His work on pea plants, published in 1866, provided the initial evidence that, upon its rediscovery in 1900, helped to establish the theory of Mendelian inheritance .
In ancient Greece , Hippocrates suggested that all organs of the body of a parent gave off invisible “seeds”, miniaturised components, that were transmitted during sexual intercourse and combined in the mother's womb to form a baby. In the early modern period, William Harvey 's book On Animal Generation contradicted Aristotle's theories of genetics and embryology .
The 1900 rediscovery of Mendel's work by Hugo de Vries , Carl Correns and Erich von Tschermak led to rapid advances in genetics. By 1915 the basic principles of Mendelian genetics had been studied in a wide variety of organisms — most notably the fruit fly Drosophila melanogaster . Led by Thomas Hunt Morgan and his fellow "drosophilists", geneticists developed the Mendelian model, which was widely accepted by 1925. Alongside experimental work, mathematicians developed the statistical framework of population genetics , bringing genetic explanations into the study of evolution .
With the basic patterns of genetic inheritance established, many biologists turned to investigations of the physical nature of the gene . In the 1940s and early 1950s, experiments pointed to DNA as the portion of chromosomes (and perhaps other nucleoproteins) that held genes. A focus on new model organisms such as viruses and bacteria, along with the discovery of the double helical structure of DNA in 1953, marked the transition to the era of molecular genetics .
In the following years, chemists developed techniques for sequencing both nucleic acids and proteins, while many others worked out the relationship between these two forms of biological molecules and discovered the genetic code . The regulation of gene expression became a central issue in the 1960s; by the 1970s gene expression could be controlled and manipulated through genetic engineering . In the last decades of the 20th century, many biologists focused on large-scale genetics projects, such as sequencing entire genomes.
The most influential early theories of heredity were those of Hippocrates and Aristotle . Hippocrates' theory (possibly based on the teachings of Anaxagoras ) was similar to Darwin's later ideas on pangenesis , involving heredity material that collects from throughout the body. Aristotle suggested instead that the (nonphysical) form-giving principle of an organism was transmitted through semen (which he considered to be a purified form of blood) and the mother's menstrual blood, which interacted in the womb to direct an organism's early development. [ 1 ] For both Hippocrates and Aristotle—and nearly all Western scholars through to the late 19th century—the inheritance of acquired characters was a supposedly well-established fact that any adequate theory of heredity had to explain. At the same time, individual species were taken to have a fixed essence ; such inherited changes were merely superficial. [ 2 ] The Athenian philosopher Epicurus observed families and proposed the contribution of both males and females of hereditary characters ("sperm atoms"), noticed dominant and recessive types of inheritance and described segregation and independent assortment of "sperm atoms". [ 3 ]
The Roman poet and philosopher Lucretius describes heredity in his work "De rerum natura". [ 4 ]
From this semen, Venus produces a variety of characteristics and reproduces ancestral traits of expression, voice or hair; these features, as well as our faces, bodies, and limbs, are also determined by the specific semen of our relatives. [ 5 ]
Similarly, Marcus Terentius Varro in "Rerum rusticarum libri tres" and Publius Vergilius Maro propose that wasps originate from horses and bees from calves or donkeys. [ 6 ]
In 1000 CE, the Arab physician Abu al-Qasim al-Zahrawi (known as Albucasis in the West) was the first physician to describe clearly the hereditary nature of haemophilia in his Al-Tasrif . [ 7 ] In 1140 CE, Judah HaLevi described dominant and recessive genetic traits in The Kuzari . [ 8 ]
The preformation theory is a developmental biological theory, which was represented in antiquity by the Greek philosopher Anaxagoras . It reappeared in modern times in the 17th century and then prevailed until the 19th century. Another common term at that time was the theory of evolution, although "evolution" (in the sense of development as a pure growth process) had a completely different meaning than today. The preformists assumed that the entire organism was preformed in the sperm (animalculism) or in the egg (ovism or ovulism) and only had to unfold and grow. This was contrasted by the theory of epigenesis , according to which the structures and organs of an organism develop only in the course of individual development ( ontogeny ). Epigenesis had been the dominant opinion since antiquity and into the 17th century, but it was then replaced by preformist ideas. In the 19th century, epigenesis re-established itself as the prevailing view, which it remains to this day. [ 9 ] [ 10 ]
In the 18th century, with increased knowledge of plant and animal diversity and the accompanying increased focus on taxonomy , new ideas about heredity began to appear. Linnaeus and others (among them Joseph Gottlieb Kölreuter , Carl Friedrich von Gärtner , and Charles Naudin ) conducted extensive experiments with hybridisation, especially hybrids between species. Species hybridisers described a wide variety of inheritance phenomena, including hybrid sterility and the high variability of back-crosses . [ 11 ]
Plant breeders were also developing an array of stable varieties in many important plant species. In the early 19th century, Augustin Sageret established the concept of dominance , recognising that when some plant varieties are crossed, certain characteristics (present in one parent) usually appear in the offspring; he also observed that some ancestral characteristics present in neither parent may appear in the offspring. However, plant breeders made little attempt to establish a theoretical foundation for their work or to connect it with contemporary work in physiology, [ 12 ] although Gartons Agricultural Plant Breeders in England explained their system. [ 13 ]
Between 1856 and 1865, Gregor Mendel conducted breeding experiments using the pea plant Pisum sativum and traced the inheritance patterns of certain traits. Through these experiments, Mendel saw that the genotypes and phenotypes of the progeny were predictable and that some traits were dominant over others. [ 14 ] These patterns of Mendelian inheritance demonstrated the usefulness of applying statistics to inheritance. They also contradicted 19th-century theories of blending inheritance , showing, rather, that genes remain discrete through multiple generations of hybridisation. [ 15 ]
From his statistical analysis, Mendel defined a concept that he described as a character (which in his mind holds also for "determinant of that character"). In only one sentence of his historical paper, he used the term "factors" to designate the "material creating" the character: "So far as experience goes, we find it in every case confirmed that constant progeny can only be formed when the egg cells and the fertilising pollen are of like character, so that both are provided with the material for creating quite similar individuals, as is the case with the normal fertilisation of pure species. We must, therefore, regard it as certain that exactly similar factors must be at work also in the production of the constant forms in the hybrid plants." (Mendel, 1866).
Mendel's work was published in 1866 as "Versuche über Pflanzen-Hybriden" ( Experiments on Plant Hybridisation ) in the Verhandlungen des Naturforschenden Vereins zu Brünn (Proceedings of the Natural History Society of Brünn) , following two lectures he gave on the work in early 1865. [ 16 ]
Mendel's work was published in a relatively obscure scientific journal , and it was not given any attention in the scientific community. Instead, discussions about modes of heredity were galvanised by Darwin 's theory of evolution by natural selection, in which mechanisms of non- Lamarckian heredity seemed to be required. Darwin's own theory of heredity, pangenesis , did not meet with any large degree of acceptance. [ 17 ] [ 18 ] A more mathematical version of pangenesis, one which dropped many of Darwin's Lamarckian holdovers, was developed as the "biometrical" school of heredity by Darwin's cousin, Francis Galton . [ 19 ]
In 1883 August Weismann conducted experiments involving breeding mice whose tails had been surgically removed. His results — that surgically removing a mouse's tail had no effect on the tail of its offspring — challenged the theories of pangenesis and Lamarckism , which held that changes to an organism during its lifetime could be inherited by its descendants. Weismann proposed the germ plasm theory of inheritance, which held that hereditary information was carried only in sperm and egg cells. [ 20 ]
Hugo de Vries wondered what the nature of germ plasm might be, and in particular he wondered whether or not germ plasm was mixed like paint or whether the information was carried in discrete packets that remained unbroken. In the 1890s he was conducting breeding experiments with a variety of plant species and in 1897 he published a paper on his results that stated that each inherited trait was governed by two discrete particles of information, one from each parent, and that these particles were passed along intact to the next generation. In 1900 he was preparing another paper on his further results when he was shown a copy of Mendel's 1866 paper by a friend who thought it might be relevant to de Vries's work. He went ahead and published his 1900 paper without mentioning Mendel's priority. Later that same year another botanist, Carl Correns , who had been conducting hybridisation experiments with maize and peas, was searching the literature for related experiments prior to publishing his own results when he came across Mendel's paper, which had results similar to his own. Correns accused de Vries of appropriating terminology from Mendel's paper without crediting him or recognising his priority. At the same time another botanist, Erich von Tschermak was experimenting with pea breeding and producing results like Mendel's. He too discovered Mendel's paper while searching the literature for relevant work. In a subsequent paper de Vries praised Mendel and acknowledged that he had only extended his earlier work. [ 20 ]
After the rediscovery of Mendel's work there was a feud between William Bateson and Karl Pearson over the hereditary mechanism, eventually resolved by Ronald Fisher in his 1918 paper " The Correlation Between Relatives on the Supposition of Mendelian Inheritance ".
In 1910, Thomas Hunt Morgan showed that genes reside on specific chromosomes . He later showed that genes occupy specific locations on the chromosome. With this knowledge, Alfred Sturtevant , a member of Morgan's famous fly room , using Drosophila melanogaster , provided the first chromosomal map of any biological organism. In 1928, Frederick Griffith showed that genes could be transferred. In what is now known as Griffith's experiment , injections into a mouse of a deadly strain of bacteria that had been heat-killed transferred genetic information to a safe strain of the same bacteria, killing the mouse.
A series of subsequent discoveries (e.g. [ 21 ] ) led to the realization decades later that the genetic material is made of DNA (deoxyribonucleic acid) and not, as was widely believed until then, of proteins. In 1941, George Wells Beadle and Edward Lawrie Tatum showed that mutations in genes caused errors in specific steps of metabolic pathways . [ 22 ] This showed that specific genes code for specific proteins, leading to the " one gene, one enzyme " hypothesis. [ 23 ] Oswald Avery , Colin Munro MacLeod , and Maclyn McCarty showed in 1944 that DNA holds the gene's information. [ 24 ] In 1952, Rosalind Franklin and Raymond Gosling produced a strikingly clear x-ray diffraction pattern indicating a helical form. Using these x-rays and information already known about the chemistry of DNA, James D. Watson and Francis Crick demonstrated the molecular structure of DNA in 1953. [ 25 ] [ 26 ] Together, these discoveries established the central dogma of molecular biology , which states that proteins are translated from RNA , which is transcribed from DNA. This dogma has since been shown to have exceptions, such as reverse transcription in retroviruses .
In 1947, Salvador Luria discovered the reactivation of irradiated phage , [ 27 ] leading to many further studies on the fundamental processes of repair of DNA damage (for a review of early studies, see [ 28 ] ). In 1958, Meselson and Stahl demonstrated that DNA replicates semiconservatively, leading to the understanding that each of the individual strands in double-stranded DNA serves as a template for new strand synthesis. [ 29 ] In 1960, Jacob and collaborators discovered the operon , which consists of a sequence of genes whose expression is coordinated by operator DNA. [ 30 ] In the period 1961–1967, through work in several different labs, the nature of the genetic code was determined (e.g. [ 31 ] ).
In 1972, Walter Fiers and his team at the University of Ghent were the first to determine the sequence of a gene: the gene for bacteriophage MS2 coat protein. [ 32 ] Richard J. Roberts and Phillip Sharp discovered in 1977 that genes can be split into segments. This led to the idea that one gene can make several proteins. The successful sequencing of many organisms' genomes has complicated the molecular definition of the gene. In particular, genes do not always sit side by side on DNA like discrete beads. Instead, regions of the DNA producing distinct proteins may overlap, so that the idea emerges that "genes are one long continuum ". [ 33 ] [ 34 ] In 1986, Walter Gilbert first hypothesised that neither DNA nor protein would have been required in a system as primitive as that of the very early Earth if RNA could serve both as a catalyst and as a store of genetic information.
The modern study of genetics at the level of DNA is known as molecular genetics , and the synthesis of molecular genetics with traditional Darwinian evolution is known as the modern evolutionary synthesis .
Geometry (from the Ancient Greek : γεωμετρία ; geo- "earth", -metron "measurement") arose as the field of knowledge dealing with spatial relationships. Geometry was one of the two fields of pre-modern mathematics , the other being the study of numbers ( arithmetic ).
Classic geometry was focused on compass and straightedge constructions . Geometry was revolutionized by Euclid , who introduced mathematical rigor and the axiomatic method still in use today. His book, The Elements , is widely considered the most influential textbook of all time, and was known to all educated people in the West until the middle of the 20th century. [ 1 ]
In modern times, geometric concepts have been generalized to a high level of abstraction and complexity, and have been subjected to the methods of calculus and abstract algebra, so that many modern branches of the field are barely recognizable as the descendants of early geometry. (See Areas of mathematics and Algebraic geometry .)
The earliest recorded beginnings of geometry can be traced to early peoples, such as the ancient Indus Valley (see Harappan mathematics ) and ancient Babylonia (see Babylonian mathematics ) from around 3000 BC. Early geometry was a collection of empirically discovered principles concerning lengths, angles, areas, and volumes, which were developed to meet some practical need in surveying , construction , astronomy , and various crafts. Among these were some surprisingly sophisticated principles, and a modern mathematician might be hard put to derive some of them without the use of calculus and algebra. For example, both the Egyptians and the Babylonians were aware of versions of the Pythagorean theorem about 1500 years before Pythagoras and the Indian Sulba Sutras around 800 BC contained the first statements of the theorem; the Egyptians had a correct formula for the volume of a frustum of a square pyramid.
The ancient Egyptians knew that they could approximate the area of a circle as follows: [ 2 ]
Problem 50 of the Ahmes papyrus uses these methods to calculate the area of a circle, according to a rule that the area is equal to the square of 8/9 of the circle's diameter. This assumes that π is 4×(8/9)² = 256/81 (or 3.160493...), with an error of slightly over 0.63 percent. This value was slightly less accurate than the calculations of the Babylonians (25/8 = 3.125, within 0.53 percent), but was not otherwise surpassed until Archimedes ' approximation of 211875/67441 = 3.14163, which had an error of just over 1 in 10,000.
Ahmes knew of the modern 22/7 as an approximation for π , and used it to split a hekat: hekat × (22/7) × (7/22) = hekat; [ citation needed ] however, Ahmes continued to use the traditional 256/81 value for π when computing the volume of a hekat found in a cylinder.
Problem 48 involved using a square with side 9 units. This square was cut into a 3×3 grid. The diagonals of the corner squares were used to make an irregular octagon with an area of 63 square units. This gave a second value for π of 3.111...
The two problems together indicate a range of values for π between 3.11 and 3.16.
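For readers who want to retrace the arithmetic, the following short script (a modern reconstruction; the variable names are ours, not the papyrus's) recovers both implied values of π by comparing each area rule against the exact area π(d/2)² of a circle of diameter 9:

```python
# Pi estimates implied by Problems 50 and 48 of the Ahmes papyrus.
d = 9                              # diameter used in both problems

area_p50 = (8 * d / 9) ** 2        # Problem 50: square of 8/9 of the diameter
pi_p50 = area_p50 / (d / 2) ** 2   # solve area = pi * (d/2)^2 for pi
print(pi_p50)                      # 3.16049... = 4 * (8/9)**2 = 256/81

area_p48 = 63                      # Problem 48: octagon area inside the 9x9 square
pi_p48 = area_p48 / (d / 2) ** 2
print(pi_p48)                      # 3.111...
```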
Problem 14 in the Moscow Mathematical Papyrus gives the only ancient example finding the volume of a frustum of a pyramid, describing the correct formula:

V = \frac{h}{3}\left(a^{2} + ab + b^{2}\right),

where a and b are the base and top side lengths of the truncated pyramid and h is the height.
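As a quick sanity check, the rule can be evaluated on the numbers worked in Problem 14 itself: a base side of 4, a top side of 2, and a height of 6, for which the papyrus obtains a volume of 56. A minimal sketch:

```python
def frustum_volume(a: float, b: float, h: float) -> float:
    """Volume of a square frustum with base side a, top side b, height h."""
    return h * (a * a + a * b + b * b) / 3

print(frustum_volume(4, 2, 6))  # 56.0, matching the papyrus's answer
```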
The Babylonians may have known the general rules for measuring areas and volumes. They measured the circumference of a circle as three times the diameter and the area as one-twelfth the square of the circumference, which would be correct if π is estimated as 3. The volume of a cylinder was taken as the product of the base and the height; however, the volume of the frustum of a cone or of a square pyramid was incorrectly taken as the product of the height and half the sum of the bases. The Pythagorean theorem was also known to the Babylonians. A more recently discovered tablet uses 3 + 1/8 (that is, 25/8) as a value for π. The Babylonians are also known for the Babylonian mile, a measure of distance equal to about seven of today's miles. This measure of distance was eventually converted into a time-mile used for measuring the travel of the Sun, therefore representing time. [ 3 ] There have been recent discoveries showing that ancient Babylonians may have discovered astronomical geometry nearly 1400 years before Europeans did. [ 4 ]
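The two Babylonian circle rules are mutually consistent: substituting the circumference rule C = 3d into the area rule A = C²/12 reproduces the exact formula A = πr² precisely when π is taken to be 3, as the following short derivation shows.

```latex
A \;=\; \frac{C^{2}}{12} \;=\; \frac{(2\pi r)^{2}}{12} \;=\; \frac{\pi^{2}}{3}\,r^{2},
\qquad\text{and}\qquad
\frac{\pi^{2}}{3}\,r^{2} \;=\; \pi r^{2} \iff \pi = 3 .
```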
The Indian Vedic period had a tradition of geometry, mostly expressed in the construction of elaborate altars.
Early Indian texts (1st millennium BC) on this topic include the Satapatha Brahmana and the Śulba Sūtras . [ 5 ] [ 6 ] [ 7 ]
The Śulba Sūtras has been described as "the earliest extant verbal expression of the Pythagorean Theorem in the world, although it had already been known to the Old Babylonians." [ 8 ] They make use of Pythagorean triples , [ 9 ] [ 10 ] which are particular cases of Diophantine equations . [ 11 ]
According to mathematician S. G. Dani, the Babylonian cuneiform tablet Plimpton 322 written c. 1850 BC [ 12 ] "contains fifteen Pythagorean triples with quite large entries, including (13500, 12709, 18541) which is a primitive triple, [ 13 ] indicating, in particular, that there was sophisticated understanding on the topic" in Mesopotamia in 1850 BC. [ 14 ] "Since these tablets predate the Sulbasutras period by several centuries, taking into account the contextual appearance of some of the triples, it is reasonable to expect that similar understanding would have been there in India." [ 14 ] Dani goes on to say: [ 15 ]
As the main objective of the Sulvasutras was to describe the constructions of altars and the geometric principles involved in them, the subject of Pythagorean triples, even if it had been well understood may still not have featured in the Sulvasutras . The occurrence of the triples in the Sulvasutras is comparable to mathematics that one may encounter in an introductory book on architecture or another similar applied area, and would not correspond directly to the overall knowledge on the topic at that time. Since, unfortunately, no other contemporaneous sources have been found it may never be possible to settle this issue satisfactorily.
Thales (635–543 BC) of Miletus (now in southwestern Turkey), was the first to whom deduction in mathematics is attributed. There are five geometric propositions for which he wrote deductive proofs, though his proofs have not survived. Pythagoras (582–496 BC) of Ionia, and later, Italy, then colonized by Greeks, may have been a student of Thales, and traveled to Babylon and Egypt . The theorem that bears his name may not have been his discovery, but he was probably one of the first to give a deductive proof of it. He gathered a group of students around him to study mathematics, music, and philosophy, and together they discovered most of what high school students learn today in their geometry courses. In addition, they made the profound discovery of incommensurable lengths and irrational numbers .
Plato (427–347 BC) was a philosopher, highly esteemed by the Greeks. There is a story that he had inscribed above the entrance to his famous school, "Let none ignorant of geometry enter here." However, the story is considered to be untrue. [ 16 ] Though he was not a mathematician himself, his views on mathematics had great influence. Mathematicians thus accepted his belief that geometry should use no tools but compass and straightedge – never measuring instruments such as a marked ruler or a protractor , because these were a workman's tools, not worthy of a scholar. This dictum led to a deep study of possible compass and straightedge constructions, and three classic construction problems: how to use these tools to trisect an angle , to construct a cube twice the volume of a given cube, and to construct a square equal in area to a given circle. The proofs of the impossibility of these constructions, finally achieved in the 19th century, led to important principles regarding the deep structure of the real number system. Aristotle (384–322 BC), Plato's greatest pupil, wrote a treatise on methods of reasoning used in deductive proofs (see Logic ) which was not substantially improved upon until the 19th century.
Euclid (c. 325–265 BC), of Alexandria , probably a student at the Academy founded by Plato, wrote a treatise in 13 books (chapters), titled The Elements of Geometry , in which he presented geometry in an ideal axiomatic form, which came to be known as Euclidean geometry . The treatise is not a compendium of all that the Hellenistic mathematicians knew at the time about geometry; Euclid himself wrote eight more advanced books on geometry. We know from other references that Euclid's was not the first elementary geometry textbook, but it was so much superior that the others fell into disuse and were lost. He was brought to the university at Alexandria by Ptolemy I , King of Egypt.
The Elements began with definitions of terms, fundamental geometric principles (called axioms or postulates ), and general quantitative principles (called common notions ) from which all the rest of geometry could be logically deduced. Following are his five axioms, somewhat paraphrased to make the English easier to read: (1) a straight line may be drawn between any two points; (2) any finite straight line may be extended indefinitely in a straight line; (3) a circle may be drawn with any centre and any radius; (4) all right angles are equal to one another; and (5) if a straight line crossing two straight lines makes the interior angles on one side less than two right angles, then the two lines, if extended indefinitely, meet on that side.
Concepts, that are now understood as algebra , were expressed geometrically by Euclid, a method referred to as Greek geometric algebra .
Archimedes (287–212 BC), of Syracuse , Sicily , when it was a Greek city-state , was one of the most famous mathematicians of the Hellenistic period . He is known for his formulation of a hydrostatic principle (known as Archimedes' principle ) and for his works on geometry, including Measurement of the Circle and On Conoids and Spheroids . His work On Floating Bodies is the first known work on hydrostatics, of which Archimedes is recognized as the founder. Renaissance translations of his works, including the ancient commentaries, were enormously influential in the work of some of the best mathematicians of the 17th century, notably René Descartes and Pierre de Fermat . [ 17 ]
After Archimedes, Hellenistic mathematics began to decline. There were a few minor stars yet to come, but the golden age of geometry was over. Proclus (410–485), author of Commentary on the First Book of Euclid , was one of the last important players in Hellenistic geometry. He was a competent geometer, but more importantly, he was a superb commentator on the works that preceded him. Much of that work did not survive to modern times, and is known to us only through his commentary. The Roman Republic and Empire that succeeded and absorbed the Greek city-states produced excellent engineers, but no mathematicians of note.
The great Library of Alexandria was later burned. There is a growing consensus among historians that the Library of Alexandria likely suffered from several destructive events, but that the destruction of Alexandria's pagan temples in the late 4th century was probably the most severe and final one. The evidence for that destruction is the most definitive and secure. Caesar's invasion may well have led to the loss of some 40,000–70,000 scrolls in a warehouse adjacent to the port (as Luciano Canfora argues, they were likely copies produced by the Library intended for export), but it is unlikely to have affected the Library or Museum, given that there is ample evidence that both existed later. [ 18 ]
Civil wars, decreasing investments in maintenance and acquisition of new scrolls and generally declining interest in non-religious pursuits likely contributed to a reduction in the body of material available in the Library, especially in the 4th century. The Serapeum was certainly destroyed by Theophilus in 391, and the Museum and Library may have fallen victim to the same campaign.
In the Bakhshali manuscript , there is a handful of geometric problems (including problems about volumes of irregular solids). The Bakhshali manuscript also "employs a decimal place value system with a dot for zero." [ 19 ] Aryabhata 's Aryabhatiya (499) includes the computation of areas and volumes.
Brahmagupta wrote his astronomical work Brāhma Sphuṭa Siddhānta in 628. Chapter 12, containing 66 Sanskrit verses, was divided into two sections: "basic operations" (including cube roots, fractions, ratio and proportion, and barter) and "practical mathematics" (including mixture, mathematical series, plane figures, stacking bricks, sawing of timber, and piling of grain). [ 20 ] In the latter section, he stated his famous theorem on the diagonals of a cyclic quadrilateral : [ 20 ]
Brahmagupta's theorem: If a cyclic quadrilateral has diagonals that are perpendicular to each other, then the perpendicular line drawn from the point of intersection of the diagonals to any side of the quadrilateral always bisects the opposite side.
Chapter 12 also included a formula for the area of a cyclic quadrilateral (a generalization of Heron's formula ), as well as a complete description of rational triangles ( i.e. triangles with rational sides and rational areas).
Brahmagupta's formula: The area, A , of a cyclic quadrilateral with sides of lengths a , b , c , d , respectively, is given by

A = \sqrt{(s-a)(s-b)(s-c)(s-d)},

where s , the semiperimeter , is given by s = \frac{a+b+c+d}{2} .
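A direct implementation makes the formula's behaviour easy to inspect; in particular, letting one side shrink to zero degenerates the quadrilateral into a triangle and recovers Heron's formula. A minimal sketch:

```python
from math import sqrt

def brahmagupta_area(a, b, c, d):
    """Area of a cyclic quadrilateral with side lengths a, b, c, d."""
    s = (a + b + c + d) / 2                         # semiperimeter
    return sqrt((s - a) * (s - b) * (s - c) * (s - d))

print(brahmagupta_area(1, 1, 1, 1))   # unit square: 1.0
print(brahmagupta_area(3, 4, 5, 0))   # d = 0 gives Heron's formula: 6.0
```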
Brahmagupta's theorem on rational triangles: A triangle with rational sides a , b , c and rational area is of the form:

a = \frac{u^{2}}{v} + v, \qquad b = \frac{u^{2}}{w} + w, \qquad c = \frac{u^{2}}{v} + \frac{u^{2}}{w} - (v + w)

for some rational numbers u , v , and w . [ 21 ]
Parameshvara Nambudiri was the first mathematician to give a formula for the radius of the circle circumscribing a cyclic quadrilateral. [ 22 ] The expression is sometimes attributed to Lhuilier [1782], 350 years later. With the sides of the cyclic quadrilateral being a , b , c , and d , the radius R of the circumscribed circle is:

R = \frac{1}{4}\sqrt{\frac{(ab+cd)(ac+bd)(ad+bc)}{(s-a)(s-b)(s-c)(s-d)}} .
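Both formulas are easy to verify together on a unit square, which is cyclic, has area 1, and whose circumscribed circle has radius √2/2 ≈ 0.7071. A minimal check:

```python
from math import sqrt

def circumradius(a, b, c, d):
    """Parameshvara's circumradius of a cyclic quadrilateral."""
    s = (a + b + c + d) / 2
    area = sqrt((s - a) * (s - b) * (s - c) * (s - d))   # Brahmagupta's formula
    return sqrt((a*b + c*d) * (a*c + b*d) * (a*d + b*c)) / (4 * area)

print(circumradius(1, 1, 1, 1))   # 0.7071... = sqrt(2)/2, as expected
```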
The first definitive work (or at least the oldest extant one) on geometry in China was the Mo Jing , the Mohist canon of the early philosopher Mozi (470–390 BC). It was compiled years after his death by his followers around the year 330 BC. [ 23 ] Although the Mo Jing is the oldest extant book on geometry in China, there is the possibility that even older written material existed. However, due to the infamous Burning of the Books in a political maneuver by the Qin dynasty ruler Qin Shihuang (r. 221–210 BC), multitudes of written literature created before his time were purged. In addition, the Mo Jing presents geometrical concepts in mathematics that are perhaps too advanced not to have had a previous geometrical base or mathematical background to work upon.
The Mo Jing described various aspects of many fields associated with physical science, and provided a small wealth of information on mathematics as well. It provided an 'atomic' definition of the geometric point, stating that a line is separated into parts, and the part which has no remaining parts (i.e. cannot be divided into smaller parts) and thus forms the extreme end of a line is a point. [ 23 ] Much like Euclid 's first and third definitions and Plato 's 'beginning of a line', the Mo Jing stated that "a point may stand at the end (of a line) or at its beginning like a head-presentation in childbirth. (As to its invisibility) there is nothing similar to it." [ 24 ] Similar to the atomists of Democritus , the Mo Jing stated that a point is the smallest unit, and cannot be cut in half, since 'nothing' cannot be halved. [ 24 ] It stated that two lines of equal length will always finish at the same place, [ 24 ] while providing definitions for the comparison of lengths and for parallels , [ 25 ] along with principles of space and bounded space. [ 26 ] It also described the fact that planes without the quality of thickness cannot be piled up since they cannot mutually touch. [ 27 ] The book provided definitions for circumference, diameter, and radius, along with the definition of volume. [ 28 ]
The Han dynasty (202 BC – 220 AD) period of China witnessed a new flourishing of mathematics. One of the oldest Chinese mathematical texts to present geometric progressions was the Suàn shù shū of 186 BC, during the Western Han era. The mathematician, inventor, and astronomer Zhang Heng (78–139 AD) used geometrical formulas to solve mathematical problems. Although rough estimates for pi ( π ) were given in the Zhou Li (compiled in the 2nd century BC), [ 29 ] it was Zhang Heng who was the first to make a concerted effort at creating a more accurate formula for pi. Zhang Heng approximated pi as 730/232 (or approx 3.1466), although he used another value of pi in finding a spherical volume, using the square root of 10 (or approx 3.162) instead. Zu Chongzhi (429–500 AD) improved the accuracy of the approximation of pi to between 3.1415926 and 3.1415927, with 355 ⁄ 113 (密率, Milü, detailed approximation) and 22 ⁄ 7 (约率, Yuelü, rough approximation) being the other notable approximations. [ 30 ] In comparison with later works, the formula for pi given by the French mathematician Franciscus Vieta (1540–1603) fell halfway between Zu's approximations.
The Nine Chapters on the Mathematical Art , the title of which first appeared by 179 AD on a bronze inscription, was edited and commented on by the 3rd century mathematician Liu Hui from the Kingdom of Cao Wei . This book included many problems where geometry was applied, such as finding surface areas for squares and circles, the volumes of solids in various three-dimensional shapes, and included the use of the Pythagorean theorem . The book provided an illustrated proof for the Pythagorean theorem, [ 31 ] contained a written dialogue between the earlier Duke of Zhou and Shang Gao on the properties of the right angle triangle and the Pythagorean theorem, and also referred to the astronomical gnomon , the circle and square, as well as measurements of heights and distances. [ 32 ] The editor Liu Hui listed pi as 3.141014 by using a 192-sided polygon , and then calculated pi as 3.14159 using a 3072-sided polygon. This was more accurate than the value of Liu Hui's contemporary Wang Fan , a mathematician and astronomer from Eastern Wu , who rendered pi as 3.1555 by using 142 ⁄ 45 . [ 33 ] Liu Hui also wrote of mathematical surveying to calculate distance measurements of depth, height, width, and surface area. In terms of solid geometry, he figured out that a wedge with rectangular base and both sides sloping could be broken down into a pyramid and a tetrahedral wedge. [ 34 ] He also figured out that a wedge with trapezoid base and both sides sloping could be made to give two tetrahedral wedges separated by a pyramid. [ 34 ] Furthermore, Liu Hui described Cavalieri's principle on volume, as well as Gaussian elimination . The Nine Chapters also recorded geometrical knowledge dating to the Former Han dynasty (202 BCE – 9 CE):
formulas for the areas of plane figures [ 35 ] and for the volumes of solids. [ 34 ]
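Liu Hui's pi values came from repeatedly doubling the number of sides of a polygon inscribed in a circle. The sketch below reconstructs that doubling computation using inscribed-polygon perimeters in a unit circle (a simplification; Liu Hui's original argument bounded areas rather than perimeters); the side-length recurrence is the standard chord half-angle identity:

```python
from math import sqrt

side, n = 1.0, 6                # inscribed regular hexagon: side length 1
while n < 3072:
    side = sqrt(2 - sqrt(4 - side * side))   # side of the 2n-gon from the n-gon
    n *= 2
print(n, n * side / 2)          # 3072 3.14159..., matching Liu Hui's figure
```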
Continuing the geometrical legacy of ancient China, there were many later figures to come, including the famed astronomer and mathematician Shen Kuo (1031–1095 CE), Yang Hui (1238–1298) who discovered Pascal's Triangle , Xu Guangqi (1562–1633), and many others.
Thābit ibn Qurra , using what he called the method of reduction and composition, provided two different general proofs of the Pythagorean theorem for all triangles ; before this, proofs existed only for the special case of right triangles . [ 36 ]
A 2007 paper in the journal Science suggested that girih tiles possessed properties consistent with self-similar fractal quasicrystalline tilings such as the Penrose tilings . [ 37 ] [ 38 ]
The transmission of the Greek Classics to medieval Europe via the Arabic literature of the 9th to 10th century " Islamic Golden Age " began in the 10th century and culminated in the Latin translations of the 12th century .
A copy of Ptolemy 's Almagest was brought back to Sicily by Henry Aristippus (d. 1162), as a gift from the Emperor to King William I (r. 1154–1166). An anonymous student at Salerno travelled to Sicily and translated the Almagest as well as several works by Euclid from Greek to Latin. [ 39 ] Although the Sicilians generally translated directly from the Greek, when Greek texts were not available, they would translate from Arabic. Eugenius of Palermo (d. 1202) translated Ptolemy's Optics into Latin, drawing on his knowledge of all three languages in the task. [ 40 ] The rigorous deductive methods of geometry found in Euclid's Elements of Geometry were relearned, and further development of geometry in the styles of both Euclid ( Euclidean geometry ) and Khayyam ( algebraic geometry ) continued, resulting in an abundance of new theorems and concepts, many of them very profound and elegant.
Advances in the treatment of perspective were made in Renaissance art of the 14th to 15th century which went beyond what had been achieved in antiquity.
In Renaissance architecture of the Quattrocento , concepts of architectural order were explored and rules were formulated. A prime example is the Basilica di San Lorenzo in Florence by Filippo Brunelleschi (1377–1446). [ 41 ]
In c. 1413 Filippo Brunelleschi demonstrated the geometrical method of perspective, used today by artists, by painting the outlines of various Florentine buildings onto a mirror.
Soon after, nearly every artist in Florence and in Italy used geometrical perspective in their paintings, [ 42 ] notably Masolino da Panicale and Donatello . Melozzo da Forlì first used the technique of upward foreshortening (in Rome, Loreto , Forlì and others), and was celebrated for that. Not only was perspective a way of showing depth, it was also a new method of composing a painting. Paintings began to show a single, unified scene, rather than a combination of several.
As shown by the quick proliferation of accurate perspective paintings in Florence, Brunelleschi likely understood (with help from his friend the mathematician Toscanelli ), [ 43 ] but did not publish, the mathematics behind perspective. Decades later, his friend Leon Battista Alberti wrote De pictura (1435/1436), a treatise on proper methods of showing distance in painting based on Euclidean geometry. Alberti was also trained in the science of optics through the school of Padua and under the influence of Biagio Pelacani da Parma who studied Alhazen's Optics .
Piero della Francesca elaborated on Della Pittura in his De Prospectiva Pingendi in the 1470s. Alberti had limited himself to figures on the ground plane and giving an overall basis for perspective. Della Francesca fleshed it out, explicitly covering solids in any area of the picture plane. Della Francesca also started the now common practice of using illustrated figures to explain the mathematical concepts, making his treatise easier to understand than Alberti's. Della Francesca was also the first to accurately draw the Platonic solids as they would appear in perspective.
Perspective remained, for a while, the domain of Florence. Jan van Eyck , among others, was unable to create a consistent structure for the converging lines in paintings, as in London's The Arnolfini Portrait , because he was unaware of the theoretical breakthrough just then occurring in Italy. However he achieved very subtle effects by manipulations of scale in his interiors. Gradually, and partly through the movement of academies of the arts, the Italian techniques became part of the training of artists across Europe, and later other parts of the world.
The culmination of these Renaissance traditions finds its ultimate synthesis in the research of the architect, geometer, and optician Girard Desargues on perspective, optics and projective geometry.
The Vitruvian Man by Leonardo da Vinci (c. 1490) [ 44 ] depicts a man in two superimposed positions with his arms and legs apart and inscribed in a circle and square. The drawing is based on the correlations of ideal human proportions with geometry described by the ancient Roman architect Vitruvius in Book III of his treatise De Architectura .
In the early 17th century, there were two important developments in geometry. The first and most important was the creation of analytic geometry , or geometry with coordinates and equations , by René Descartes (1596–1650) and Pierre de Fermat (1601–1665). This was a necessary precursor to the development of calculus and a precise quantitative science of physics . The second geometric development of this period was the systematic study of projective geometry by Girard Desargues (1591–1661). Projective geometry is the study of geometry without measurement, just the study of how points align with each other. There had been some early work in this area by Hellenistic geometers, notably Pappus (c. 340). The greatest flowering of the field occurred with Jean-Victor Poncelet (1788–1867).
In the late 17th century, calculus was developed independently and almost simultaneously by Isaac Newton (1642–1727) and Gottfried Wilhelm Leibniz (1646–1716). This was the beginning of a new field of mathematics now called analysis . Though not itself a branch of geometry, it is applicable to geometry, and it solved two families of problems that had long been almost intractable: finding tangent lines to curves, and finding areas enclosed by those curves. The methods of calculus reduced these problems mostly to straightforward matters of computation.
The very old problem of proving Euclid's Fifth Postulate, the " Parallel Postulate ", from his first four postulates had never been forgotten. Beginning not long after Euclid, many attempted demonstrations were given, but all were later found to be faulty, through allowing into the reasoning some principle which itself had not been proved from the first four postulates. Though Omar Khayyám was also unsuccessful in proving the parallel postulate, his criticisms of Euclid's theories of parallels and his proof of properties of figures in non-Euclidean geometries contributed to the eventual development of non-Euclidean geometry . By 1700 a great deal had been discovered about what can be proved from the first four, and what the pitfalls were in attempting to prove the fifth. Saccheri , Lambert , and Legendre each did excellent work on the problem in the 18th century, but still fell short of success. In the early 19th century, Gauss , Johann Bolyai , and Lobachevsky , each independently, took a different approach. Beginning to suspect that it was impossible to prove the Parallel Postulate, they set out to develop a self-consistent geometry in which that postulate was false. In this they were successful, thus creating the first non-Euclidean geometry. By 1854, Bernhard Riemann , a student of Gauss, had applied methods of calculus in a ground-breaking study of the intrinsic (self-contained) geometry of all smooth surfaces, and thereby found a different non-Euclidean geometry. This work of Riemann later became fundamental for Einstein 's theory of relativity .
It remained to be proved mathematically that the non-Euclidean geometry was just as self-consistent as Euclidean geometry, and this was first accomplished by Beltrami in 1868. With this, non-Euclidean geometry was established on an equal mathematical footing with Euclidean geometry.
While it was now known that different geometric theories were mathematically possible, the question remained, "Which one of these theories is correct for our physical space?" The mathematical work revealed that this question must be answered by physical experimentation, not mathematical reasoning, and uncovered the reason why the experimentation must involve immense (interstellar, not earth-bound) distances. With the development of relativity theory in physics, this question became vastly more complicated.
All the work related to the Parallel Postulate revealed that it was quite difficult for a geometer to separate his logical reasoning from his intuitive understanding of physical space, and, moreover, revealed the critical importance of doing so. Careful examination had uncovered some logical inadequacies in Euclid's reasoning, and some unstated geometric principles to which Euclid sometimes appealed. This critique paralleled the crisis occurring in calculus and analysis regarding the meaning of infinite processes such as convergence and continuity. In geometry, there was a clear need for a new set of axioms, which would be complete, and which in no way relied on pictures we draw or on our intuition of space. Such axioms, now known as Hilbert's axioms , were given by David Hilbert in 1899 in his book Grundlagen der Geometrie ( Foundations of Geometry ).
In the late 19th and early 20th centuries, it became apparent that certain progressions of mathematical reasoning recurred when similar ideas were studied on the number line, in two dimensions, and in three dimensions. Thus the general concept of a metric space was created (by Maurice Fréchet in 1906) so that the reasoning could be done in more generality, and then applied to special cases. This method of studying calculus- and analysis-related concepts came to be known as analysis situs, and later as topology . The important topics in this field were properties of more general figures, such as connectedness and boundaries, rather than properties like straightness, and precise equality of length and angle measurements, which had been the focus of Euclidean and non-Euclidean geometry. Topology soon became a separate field of major importance, rather than a sub-field of geometry or analysis.
The 19th century saw the development of the general concept of Euclidean space by Ludwig Schläfli , who extended Euclidean geometry beyond three dimensions. He discovered all the higher-dimensional analogues of the Platonic solids , finding that there are exactly six such regular convex polytopes in dimension four , and three in all higher dimensions.
In 1878 William Kingdon Clifford introduced what is now termed geometric algebra , unifying William Rowan Hamilton 's quaternions with Hermann Grassmann 's algebra and revealing the geometric nature of these systems, especially in four dimensions. The operations of geometric algebra have the effect of mirroring, rotating, translating, and mapping the geometric objects that are being modeled to new positions.
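As a concrete taste of the rotational behaviour these systems encode, the following sketch (a modern illustration, not Clifford's own notation) rotates the vector (1, 0, 0) by 90° about the z-axis using Hamilton's quaternion sandwich product q v q⁻¹:

```python
from math import cos, sin, pi

def qmul(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

theta = pi / 2
q = (cos(theta / 2), 0.0, 0.0, sin(theta / 2))   # rotation by theta about z
q_inv = (q[0], -q[1], -q[2], -q[3])              # conjugate of a unit quaternion
v = (0.0, 1.0, 0.0, 0.0)                         # the vector (1, 0, 0) as a pure quaternion
print(qmul(qmul(q, v), q_inv))                   # ~(0, 0, 1, 0): the rotated vector (0, 1, 0)
```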
Developments in algebraic geometry included the study of curves and surfaces over finite fields as demonstrated by the works of among others André Weil , Alexander Grothendieck , and Jean-Pierre Serre as well as over the real or complex numbers. Finite geometry itself, the study of spaces with only finitely many points, found applications in coding theory and cryptography . With the advent of the computer, new disciplines such as computational geometry or digital geometry deal with geometric algorithms, discrete representations of geometric data, and so forth.
Hypertext is text displayed on a computer or other electronic device with references ( hyperlinks ) to other text that the reader can immediately access, usually by a mouse click or keypress sequence. Early conceptions of hypertext defined it as text that could be connected by a linking system to a range of other documents that were stored outside that text. In 1934 the Belgian bibliographer Paul Otlet developed a blueprint for links that telescoped out from hypertext electrically to allow readers to access documents, books, photographs, and so on, stored anywhere in the world. [ 1 ]
Recorders of information have long looked for ways to categorize and compile it. There are various methods of arranging layers of references/annotations within a document. Other reference works (for example dictionaries, encyclopaedias) also developed a precursor to hypertext: the setting of certain words in small capital letters, indicating that an entry existed for that term within the same reference work. Sometimes the term would be preceded by an index , ☞like this , or an arrow , ➧like this . Janet Murray has referenced Jorge Luis Borges ' " The Garden of Forking Paths " as a precursor to the hypertext novel and aesthetic: [ 2 ]
"The concept Borges described in 'The Garden of Forking Paths'—in several layers of the story, but most directly in the combination book and maze of Ts'ui Pen—is that of a novel that can be read in multiple ways, a hypertext novel. Borges described this in 1941, prior to the invention (or at least the public disclosure) of the electromagnetic digital computer. Borges also mentions how hypertext has three similarities of frued to a labyrinth in which each link brings the navigator to a set of new links, in an ever expanding maze. Not only did he invent the hypertext novel—Borges went on to describe a theory of the universe based upon the structure of such a novel." — Wardrip-Fruin and Montfort [ 3 ]
Umberto Eco has also referenced Finnegans Wake in the same way. [ citation needed ]
Later, several scholars entered the scene who believed that humanity was drowning in information, causing foolish decisions and duplicating efforts among scientists. These scholars proposed or developed proto-hypertext systems predating electronic computer technology. For example, in the early 20th century, two visionaries attacked the cross-referencing problem through proposals based on labor-intensive, brute force methods. Paul Otlet proposed a proto-hypertext concept based on his monographic principle, in which all documents would be decomposed down to unique phrases stored on index cards . In the 1930s, H.G. Wells proposed the creation of a World Brain .
Michael Buckland summarized the very advanced pre-World War II development of microfilm based on rapid retrieval devices, specifically the microfilm based workstation proposed by Leonard Townsend in 1938 and the microfilm and photoelectronic based selector, patented by Emanuel Goldberg in 1931. [ 4 ] Buckland concluded: "The pre-war information retrieval specialists of continental Europe, the 'documentalists,' largely disregarded by post-war information retrieval specialists, had ideas that were considerably more advanced than is now generally realized." But, like the manual index card model, these microfilm devices provided rapid retrieval based on pre-coded indices and classification schemes published as part of the microfilm record without including the link model which distinguishes the modern concept of hypertext from content or category based information retrieval .
All major histories of what we now call hypertext start in 1945, when Vannevar Bush wrote an article in The Atlantic Monthly called As We May Think , about a futuristic device he called a Memex . He described the device as an electromechanical desk linked to an extensive archive of microfilms , able to display books, writings, or any document from a library. The Memex would also be able to create 'trails' of linked and branching sets of pages, combining pages from the published microfilm library with personal annotations or additions captured on a microfilm recorder. Bush's vision was based on extensions of 1945 technology—microfilm recording and retrieval in this case. However, the modern story of hypertext starts with the Memex because As We May Think directly influenced and inspired the two American men generally credited with the invention of hypertext, Ted Nelson and Douglas Engelbart .
Starting in 1963, Ted Nelson developed a model for creating and using linked content he called "hypertext" and "hypermedia" (first published reference 1965). [ 6 ] Nelson said that in the 1960s he began implementing the hypertext system he had theorized, named Project Xanadu , but his first public release, still incomplete, was finished much later, in 1998. [ 5 ] He later worked with Andries van Dam to develop the Hypertext Editing System (HES) in 1967 at Brown University . HES was the first hypertext system available on commercial equipment that novices could use, and it had no arbitrary limits on text length. [ 7 ]
Douglas Engelbart independently began working on his NLS system in 1962 at Stanford Research Institute, although delays in obtaining funding, personnel, and equipment meant that its key features were not completed until 1968. In December of that year, Engelbart demonstrated a hypertext interface to the public for the first time, in what has come to be known as " The Mother of All Demos ". Funding for NLS slowed after 1974.
Later in 1968, van Dam's team incorporated ideas from NLS into a successor to HES: the File Retrieval and Editing System (FRESS), which was the first hypertext system to run on readily available commercial hardware and operating systems. [ 8 ] Its user interface was simpler than that of NLS.
By 1976, FRESS had received NEH funding and was used in a poetry class in which students could browse and annotate a hyperlinked set of poems and discussion by experts, faculty and other students, in what was arguably the first online scholarly community, [ 7 ] which van Dam says "foreshadowed wikis, blogs and communal documents of all kinds". [ 9 ]
Influential work in the following decade included NoteCards at Xerox PARC and ZOG at Carnegie Mellon . ZOG started in 1972 as an artificial intelligence research project under the supervision of Allen Newell , and pioneered the "frame" or "card" model of hypertext. ZOG was deployed in 1982 on the U.S.S. Carl Vinson and later commercialized as Knowledge Management System . Two other influential hypertext projects from the early 1980s were Ben Shneiderman 's The Interactive Encyclopedia System (TIES) at the University of Maryland (1983) and Intermedia at Brown University (1984).
The first hypermedia application was the Aspen Movie Map in 1978. In 1980, Tim Berners-Lee created ENQUIRE , an early hypertext database system somewhat like a wiki . The early 1980s also saw a number of experimental hypertext and hypermedia programs, many of whose features and terminology were later integrated into the Web. Guide was the first significant hypertext system for personal computers . In 1983, a hypermedia authoring tool, Tutor-Tech, designed for the Apple II , was produced for educators.
In August 1987, Apple Computer released HyperCard for the Macintosh line at the MacWorld convention . Its impact, combined with interest in Peter J. Brown's GUIDE (marketed by OWL and released earlier that year) and Brown University's Intermedia , led to broad interest in and enthusiasm for hypertext and new media. [ 10 ] The first ACM Hypertext academic conference took place in November 1987, in Chapel Hill NC, where many other applications, including the hypertext literature writing software Storyspace , were also demonstrated. [ 11 ]
Meanwhile, Nelson had been working on and advocating his Xanadu system for over two decades; his advocacy, together with the commercial success of HyperCard, stirred Autodesk to invest in his revolutionary ideas. The project continued at Autodesk for four years, but no product was released.
Van Dam's research groups at Brown University continued working as well. For example, in the late '70s Steve Feiner and others developed an ebook system for Navy repair manuals, and in the early '80s Norm Meyrowitz and a large team at Brown's Institute for Research in Information and Scholarship built Intermedia (mentioned above), which was heavily used in humanities and literary computing. In 1989, van Dam, Lou Reynolds, and van Dam's former students Steven DeRose and Jeff Vogel spun off Electronic Book Technologies, whose SGML-based hypertext system DynaText was widely used for large online publishing and e-book projects, such as online documentation for Sun, SGI, HP, Novell, and DEC, as well as aerospace, transport, publishing, and other applications. Brown's Center For Digital Scholarship (née Scholarly Technology Group) was heavily involved in related standards efforts such as the Text Encoding Initiative , Open eBook and XML , as well as enabling a wide variety of humanities hypertext projects.
In the late 1980s, Berners-Lee, then a scientist at CERN , invented the World Wide Web to meet the demand for simple and immediate information-sharing among physicists working at CERN and different universities or institutes all over the world.
"HyperText is a way to link and access information of various kinds as a web of nodes in which the user can browse at will. It provides a single user-interface to large classes of information (reports, notes, data-bases, computer documentation and on-line help). We propose a simple scheme incorporating servers already available at CERN... A program which provides access to the hypertext world we call a browser... "
Tim Berners-Lee, R. Cailliau. 12 November 1990, CERN [ 12 ]
In 1992, Lynx was born as an early text-based web browser. Its ability to provide hypertext links within documents that could reach into documents anywhere on the Internet began the creation of the Web on the Internet.
Early in 1993, the National Center for Supercomputing Applications (NCSA) at the University of Illinois released the first version of their Mosaic web browser to supplement the two existing web browsers : one that ran only on NeXTSTEP and one that was only minimally user-friendly . Because it could display and link graphics as well as text, Mosaic quickly became the replacement for Lynx. Mosaic ran in the X Window System environment, which was then popular in the research community, and offered usable window-based interactions. It allowed images [ 13 ] as well as text to anchor hypertext links. It also incorporated other protocols intended to coordinate information across the Internet, such as Gopher . [ 14 ]
After the release of web browsers for both the PC and Macintosh environments, traffic on the World Wide Web quickly exploded from only 500 known web servers in 1993 to over 10,000 in 1994. Thus, all earlier hypertext systems were overshadowed by the success of the Web, even though it originally lacked many features of those earlier systems, such as an easy way to edit what you were reading, typed links , backlinks, transclusion , and source tracking .
In 1995, Ward Cunningham made the first wiki available, making the Web more hypertextual by adding easy editing, and (within a single wiki) backlinks and limited source tracking. It also added the innovation of making it possible to link to pages that did not yet exist. Wiki developers continue to implement novel features as well as those developed or imagined in the early explorations of hypertext but not included in the original web. | https://en.wikipedia.org/wiki/History_of_hypertext |
Information technology auditing (IT auditing) began as electronic data processing (EDP) auditing and developed largely as a result of the rise in technology in accounting systems , the need for IT control, and the impact of computers on the ability to perform attestation services. The last few years have been an exciting time in the world of IT auditing as a result of the accounting scandals and increased regulation. IT auditing has had a relatively short yet rich history when compared to auditing as a whole and remains an ever-changing field.
The introduction of computer technology into accounting systems changed the way data was stored, retrieved and controlled. It is believed that the first use of a computerized accounting system was at General Electric in 1954. From 1954 to the mid-1960s, the auditing profession was still auditing around the computer. At this time only mainframe computers were used and few people had the skills and abilities to program computers . This began to change in the mid-1960s with the introduction of new, smaller and less expensive machines. This increased the use of computers in businesses and with it came the need for auditors to become familiar with EDP concepts in business . Along with the increase in computer use came the rise of different types of accounting systems. The industry soon realized that it needed to develop its own software, and the first generalized audit software (GAS) was developed. In 1968, the American Institute of Certified Public Accountants (AICPA) had the Big Eight (now the Big Four ) accounting firms participate in the development of EDP auditing. The result was the release of Auditing & EDP . The book covered how to document EDP audits and gave examples of how to conduct internal control reviews.
Around this time EDP auditors formed the Electronic Data Processing Auditors Association (EDPAA). The goal of the association was to produce guidelines, procedures and standards for EDP audits. In 1977, the first edition of Control Objectives was published. This publication is now known as Control Objectives for Information and related Technology (COBIT). COBIT is the set of generally accepted IT control objectives for IT auditors. In 1994, EDPAA changed its name to Information Systems Audit and Control Association ( ISACA ). The period from the late 1960s through today has seen rapid changes in technology, from the microcomputer and networking to the internet, and with these changes came major events that changed IT auditing forever.
The formation and rise in popularity of the Internet and e-commerce have had significant influences on the growth of IT audit. The Internet influences the lives of most of the world and is a place of increased business, entertainment and crime. IT auditing helps organizations and individuals on the Internet find security while helping commerce and communications to flourish.
There are five major events in U.S. history which have had significant impact on the growth of IT auditing. These are the Equity Funding scandal, the development of the Internet and e-commerce, the 1998 IT failure at AT&T Corporation , the Enron and Arthur Andersen LLP scandal, and the September 11, 2001, attacks .
These events have not only heightened the need for more reliable, accurate, and secure systems but have brought a much needed focus to the importance of the accounting profession. Accountants certify the accuracy of public company financial statements and add confidence to financial markets . The heightened focus on the industry has brought improved control and higher standards for all working in accounting, especially those involved in IT auditing.
The first known case of misuse of information technology occurred at Equity Funding Corporation of America . Beginning in 1964 and continuing until 1973, managers for the company booked false insurance policies to show greater profits , thus boosting the price of the capital stock of the company. Had it not been for a whistleblower , the fraud might never have been caught. After the fraud was discovered, it took the auditing firm Touche Ross two years to confirm that the insurance policies were not real. This was one of the first cases where auditors had to audit through the computer rather than around the computer.
In 1998 AT&T suffered an IT failure that impacted worldwide commerce and communication . A major switch failed due to software and procedural errors, leaving many credit card users unable to access funds for an extended period. The incident brought to the forefront our reliance on IT services and underscored the need for assurance in computer systems.
The Enron and Arthur Andersen LLP scandal led to the demise of a foremost accounting firm, an investor loss of more than $60 billion, and the largest bankruptcy in U.S. history. Although Arthur Andersen was found guilty of obstruction of justice for its role in the collapse of the energy giant in the US District Court for the Southern District of Texas (affirmed by the Fifth Circuit in 2004), the conviction was overturned by the U.S. Supreme Court in Arthur Andersen LLP v. United States . The scandal strongly influenced the passage of the Sarbanes-Oxley Act and represented a major failure of self-regulation. | https://en.wikipedia.org/wiki/History_of_information_technology_auditing
The decisive event which established the discipline of information theory , and brought it to immediate worldwide attention, was the publication of Claude E. Shannon 's classic paper " A Mathematical Theory of Communication " in the Bell System Technical Journal in July and October 1948.
In this revolutionary and groundbreaking paper, the work for which had been substantially completed at Bell Labs by the end of 1944, Shannon for the first time introduced the qualitative and quantitative model of communication as a statistical process underlying information theory, opening with the assertion that "The fundamental problem of communication is that of reproducing at one point, either exactly or approximately, a message selected at another point."
With it came the ideas of the information entropy and redundancy of a source; the mutual information and the channel capacity of a noisy channel, including the promise of perfect loss-free communication given by the noisy-channel coding theorem ; the practical result of the Shannon–Hartley law for the channel capacity of a Gaussian channel ; and the bit —a new way of seeing the most fundamental unit of information.
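As a present-day illustration (not code from the period), the source entropy at the heart of Shannon's model can be computed in a few lines of Python:

```python
import math

def shannon_entropy(probs):
    """Entropy in bits of a discrete source: H = -sum(p * log2(p))."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A fair coin yields 1 bit per toss; a biased coin yields less.
print(shannon_entropy([0.5, 0.5]))            # 1.0
print(round(shannon_entropy([0.9, 0.1]), 3))  # 0.469
```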
Some of the oldest methods of telecommunications implicitly use many of the ideas that would later be quantified in information theory. Modern telegraphy , starting in the 1830s, used Morse code , in which more common letters (like "E", which is expressed as one "dot") are transmitted more quickly than less common letters (like "J", which is expressed by one "dot" followed by three "dashes"). The idea of encoding information in this manner is the cornerstone of lossless data compression . A hundred years later, frequency modulation illustrated that bandwidth can be considered merely another degree of freedom. The vocoder , now largely looked at as an audio engineering curiosity, was originally designed in 1939 to use less bandwidth than that of an original message, in much the same way that mobile phones now trade off voice quality with bandwidth.
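A minimal sketch of the Morse-code idea, using assumed letter frequencies and codeword lengths rather than real Morse timing: giving frequent symbols shorter codewords lowers the average transmission cost, which is the essence of lossless compression.

```python
# Assumed illustrative frequencies and codeword lengths (not real Morse timing).
freq = {"E": 0.70, "T": 0.20, "J": 0.06, "Q": 0.04}
variable_len = {"E": 1, "T": 2, "J": 8, "Q": 8}  # short codes for common letters
fixed_len = 4                                    # a uniform-length alternative

avg_variable = sum(freq[s] * variable_len[s] for s in freq)
print(avg_variable, "vs", fixed_len)  # 1.9 vs 4: the variable-length code wins
```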
The most direct antecedents of Shannon's work were two papers published in the 1920s by Harry Nyquist and Ralph Hartley , who were both still research leaders at Bell Labs when Shannon arrived in the early 1940s.
Nyquist's 1924 paper, "Certain Factors Affecting Telegraph Speed", is mostly concerned with some detailed engineering aspects of telegraph signals. But a more theoretical section discusses quantifying "intelligence" and the "line speed" at which it can be transmitted by a communication system, giving the relation

$$W = K \log m$$
where W is the speed of transmission of intelligence, m is the number of different voltage levels to choose from at each time step, and K is a constant. [ 1 ]
Hartley's 1928 paper, called simply "Transmission of Information", went further by using the word information (in a technical sense), and making explicitly clear that information in this context was a measurable quantity, reflecting only the receiver's ability to distinguish that one sequence of symbols had been intended by the sender rather than any other—quite regardless of any associated meaning or other psychological or semantic aspect the symbols might represent. This amount of information he quantified as

$$H = \log S^n = n \log S$$
where S was the number of possible symbols, and n the number of symbols in a transmission. The natural unit of information was therefore the decimal digit, much later renamed the hartley in his honour as a unit of information. The Hartley information , $H_0$ , is still used as a quantity for the logarithm of the total number of possibilities. [ 2 ]
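A minimal sketch of Hartley's measure, with illustrative inputs:

```python
import math

def hartley_information(num_symbols, message_length):
    """H = n * log10(S): information, in decimal digits (hartleys), of a
    message of n symbols drawn from an alphabet of S possibilities."""
    return message_length * math.log10(num_symbols)

print(hartley_information(10, 3))           # 3.0: three decimal digits
print(round(hartley_information(2, 8), 2))  # 2.41 hartleys in eight bits
```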
A similar unit of log 10 probability, the ban , and its derived unit the deciban (one tenth of a ban), were introduced by Alan Turing in 1940 as part of the statistical analysis of the breaking of the German second world war Enigma cyphers. The decibannage represented the reduction in (the logarithm of) the total number of possibilities (similar to the change in the Hartley information); and also the log-likelihood ratio (or change in the weight of evidence) that could be inferred for one hypothesis over another from a set of observations. The expected change in the weight of evidence is equivalent to what was later called the Kullback discrimination information .
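A sketch of the deciban as a weight of evidence, assuming simple illustrative likelihoods; because the measure is logarithmic, evidence from independent observations simply adds:

```python
import math

def decibans(likelihood_h1, likelihood_h2):
    """Weight of evidence in decibans for hypothesis H1 over H2:
    ten times the base-10 logarithm of the likelihood ratio."""
    return 10.0 * math.log10(likelihood_h1 / likelihood_h2)

# An observation three times likelier under H1 contributes about 4.8 db.
print(round(decibans(0.3, 0.1), 1))  # 4.8
```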
But underlying this notion was still the idea of equal a-priori probabilities, rather than the information content of events of unequal probability; nor yet any underlying picture of questions regarding the communication of such varied outcomes.
In a 1939 letter to Vannevar Bush , Shannon had already outlined some of his initial ideas of information theory. [ 3 ]
One area where unequal probabilities were indeed well known was statistical mechanics, where Ludwig Boltzmann had, in the context of his H-theorem of 1872, first introduced the quantity

$$H = -\sum_i f_i \log f_i$$
as a measure of the breadth of the spread of states available to a single particle in a gas of like particles, where f represented the relative frequency distribution of each possible state. Boltzmann argued mathematically that the effect of collisions between the particles would cause the H -function to inevitably increase from any initial configuration until equilibrium was reached; and further identified it as an underlying microscopic rationale for the macroscopic thermodynamic entropy of Clausius .
Boltzmann's definition was soon reworked by the American mathematical physicist J. Willard Gibbs into a general formula for statistical-mechanical entropy, no longer requiring identical and non-interacting particles, but instead based on the probability distribution p i for the complete microstate i of the total system:

$$S = -k_B \sum_i p_i \ln p_i$$
This (Gibbs) entropy, from statistical mechanics, can be found to directly correspond to Clausius's classical thermodynamic definition .
Shannon himself was apparently not particularly aware of the close similarity between his new measure and earlier work in thermodynamics, but John von Neumann was. It is said that, when Shannon was deciding what to call his new measure and fearing the term 'information' was already over-used, von Neumann told him firmly: "You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, no one really knows what entropy really is, so in a debate you will always have the advantage."
(Connections between information-theoretic entropy and thermodynamic entropy, including the important contributions by Rolf Landauer in the 1960s, are explored further in the article Entropy in thermodynamics and information theory ).
The publication of Shannon's 1948 paper, " A Mathematical Theory of Communication ", in the Bell System Technical Journal was the founding of information theory as we know it today. Many developments and applications of the theory have taken place since then, which have made many modern devices for data communication and storage such as CD-ROMs and mobile phones possible.
Notable later developments are listed in a timeline of information theory . | https://en.wikipedia.org/wiki/History_of_information_theory
The history of longitude describes the centuries-long effort by astronomers, cartographers and navigators to discover a means of determining the longitude (the east-west position) of any given place on Earth. The measurement of longitude is important to both cartography and navigation . In particular, for safe ocean navigation, knowledge of both latitude and longitude is required; however, latitude can be determined with good accuracy from local astronomical observations.
Finding an accurate and practical method of determining longitude took centuries of study and invention by some of the greatest scientists and engineers. Determining longitude relative to the meridian through some fixed location requires that observations be tied to a time scale that is the same at both locations, so the longitude problem reduces to finding a way to coordinate clocks at distant places. Early approaches used astronomical events that could be predicted with great accuracy, such as eclipses; later ones relied on clocks, known as chronometers , that could keep time with sufficient accuracy while being transported great distances by ship.
John Harrison 's invention of a chronometer that could keep time at sea with sufficient accuracy to be practical for determining longitude was recognized in 1773 as first enabling determination of longitude at sea. Later methods used the telegraph and then radio to synchronize clocks. Today the problem of longitude has been solved to centimeter accuracy through satellite navigation .
Eratosthenes in the 3rd century BC first proposed a system of latitude and longitude for a map of the world. His prime meridian (line of longitude) passed through Alexandria and Rhodes , while his parallels (lines of latitude) were not regularly spaced, but passed through known locations, often at the expense of being straight lines. [ 1 ] By the 2nd century BC Hipparchus was using a systematic coordinate system, based on dividing the circle into 360°, to uniquely specify places on Earth. [ 2 ] : 31 So longitudes could be expressed as degrees east or west of the primary meridian, as is done today (though the primary meridian is different). He also proposed a method of determining longitude by comparing the local time of a lunar eclipse at two different places, to obtain the difference in longitude between them. [ 2 ] : 11 This method was not very accurate, given the limitations of the available clocks, and it was seldom done – possibly only once, using the Arbela eclipse of 330 BC. [ 3 ] But the method is sound, and this is the first recognition that longitude can be determined by accurate knowledge of time.
Ptolemy , in the 2nd century AD, based his mapping system on estimated distances and directions reported by travellers. Until then, all maps had used a rectangular grid with latitude and longitude as straight lines intersecting at right angles. [ 4 ] : 543 [ 5 ] : 90 For large areas this leads to unacceptable distortion, and for his map of the inhabited world, Ptolemy used projections (to use the modern term) with curved parallels that reduced the distortion. No maps (or manuscripts of his work) exist that are older than the 13th century, but in his Geography he gave detailed instructions and latitude and longitude coordinates for hundreds of locations that are sufficient to re-create the maps. While Ptolemy's system is well-founded, the actual data used are of very variable quality, leading to many inaccuracies and distortions. [ 6 ] [ 4 ] : 551–553 [ 7 ] Apart from the difficulties in estimating rectilinear distances and directions, the most important of these is a systematic over-estimation of differences in longitude. Thus from Ptolemy's tables, the difference in longitude between Gibraltar and Sidon is 59° 40′, compared to the modern value of 40° 23′, about 48% too high. Russo (2013) has analysed these discrepancies, and concludes that much of the error arises from Ptolemy's underestimate of the size of the Earth, compared with the more accurate estimate of Eratosthenes – the equivalent of 500 stadia to the degree rather than 700. Given the difficulties of astronomical measures of longitude in classical times, most if not all of Ptolemy's values would have been obtained from distance measures and converted to longitude using the 500 value. [ 8 ]
Ancient Hindu astronomers were aware of the method of determining longitude from lunar eclipses, assuming a spherical Earth. The method is described in the Sûrya Siddhânta , a Sanskrit treatise on Indian astronomy thought to date from the late 4th century or early 5th century AD. [ 9 ] Longitudes were referred to a prime meridian passing through Avantī, the modern Ujjain . Positions relative to this meridian were expressed in terms of length or time differences, but degrees were not used in India at this time. It is not clear whether this method was put into practice.
Islamic scholars knew the work of Ptolemy from at least the 9th century AD, when the first translation of his Geography into Arabic was made. He was held in high regard, although his errors were known. [ 10 ] One of their developments was to add more locations to Ptolemy's geographical tables with latitudes and longitudes, and in some cases improving the accuracy. [ 11 ] The methods used to determine most of the longitudes are not given, but a few accounts do give details. Simultaneous observations of two lunar eclipses at two locations were recorded by al-Battānī in 901, comparing Antakya with Raqqa , determining the difference in longitude between the two cities with an error less than 1°. This is considered the best that can be achieved with the methods then available – observation of the eclipse with the naked eye, and determination of local time using an astrolabe to measure the altitude of a suitable "clock star". [ 12 ] [ 13 ] Al-Bīrūnī , early in the 11th century AD, also used eclipse data, but developed an alternative method involving an early form of triangulation. For two locations differing in both longitude and latitude, if the latitudes and the distance between them are known, as well as the size of the earth, it is possible to calculate the difference in longitude. With this method, al-Bīrūnī estimated the longitude difference between Baghdad and Ghazni using distance estimates from travellers over two different routes (and with a somewhat arbitrary adjustment for the crookedness of the roads). His result for the longitude difference between the two cities differs by about 1° from the modern value. [ 14 ] Mercier (1992) notes that this is a substantial improvement over Ptolemy, and that a comparable further improvement in accuracy would not occur until the 17th century in Europe. [ 14 ] : 188
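The geometry behind al-Bīrūnī's triangulation can be sketched with the spherical law of cosines; the latitude and distance figures below are modern illustrative values, not his data:

```python
import math

def longitude_difference(lat1_deg, lat2_deg, distance_km, radius_km=6371.0):
    """Longitude difference (degrees) between two places, given their
    latitudes and the great-circle distance between them, via the
    spherical law of cosines."""
    p1, p2 = math.radians(lat1_deg), math.radians(lat2_deg)
    central = distance_km / radius_km  # central angle in radians
    cos_dlon = (math.cos(central) - math.sin(p1) * math.sin(p2)) / (
        math.cos(p1) * math.cos(p2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_dlon))))

# Baghdad (~33.3 N) and Ghazni (~33.5 N), assuming ~2,000 km between them.
print(round(longitude_difference(33.3, 33.5, 2000.0), 1))  # ~21.6 degrees
```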
While knowledge of Ptolemy (and more generally of Greek science and philosophy) was growing in the Islamic world, it was declining in Europe. John Kirtland Wright 's (1925) summary is bleak: "We may pass over the mathematical geography of the Christian period [in Europe] before 1100; no discoveries were made, nor were there any attempts to apply the results of older discoveries. ... Ptolemy was forgotten and the labors of the Arabs in this field were as yet unknown". [ 15 ] : 65 Not all was lost or forgotten; Bede in his De natura rerum affirms the sphericity of the earth. But his arguments are those of Aristotle , taken from Pliny . Bede adds nothing original. [ 16 ] [ 17 ]
There is more of note in the later medieval period. Wright (1923) cites a description by Walcher of Malvern of a lunar eclipse in Italy (October 19, 1094), which occurred shortly before dawn. On his return to England, he compared notes with other monks to establish the time of their observation, which was before midnight. The comparison was too casual to allow a measurement of longitude differences, but the account shows that the principle was still understood. [ 18 ] : 81 In the 12th century, astronomical tables were prepared for a number of European cities, based on the work of al-Zarqālī in Toledo . These had to be adapted to the meridian of each city, and it is recorded that the lunar eclipse of September 12, 1178 was used to establish the longitude differences between Toledo, Marseilles , and Hereford . [ 18 ] : 85 The Hereford tables also added a list of over 70 locations, many in the Islamic world, with their longitudes and latitudes. These represent a great improvement on the similar tabulations of Ptolemy. For example, the longitudes of Ceuta and Tyre are given as 8° and 57° (east of the meridian of the Canary Islands), a difference of 49°, compared to the modern value of 40.5°, an overestimate of less than 20%. [ 18 ] : 87–88 In general, the later medieval period showed increasing interest in geography, and a willingness to make observations stimulated by an increase in travel (including pilgrimages and the Crusades ) and by the availability of Islamic sources from Spain and North Africa. [ 19 ] [ 20 ] At the end of the medieval period, Ptolemy's work became directly available with the translations made in Florence at the end of the 14th and beginning of the 15th century. [ 21 ]
The 15th and 16th centuries were the time of Portuguese and Spanish voyages of discovery and conquest . In particular, the arrival of Europeans in the New World led to questions of where they actually were. Christopher Columbus made two attempts to discover his longitude by observing lunar eclipses. The first was on Saona Island , now in the Dominican Republic , during his second voyage. He wrote: "In the year 1494, when I was in Saona Island, which stands at the eastern tip of Española island [i.e. Hispaniola ], there was a lunar eclipse on September the 14th, and we noticed that there was a difference of more than five hours and a half between there [Saona] and Cape S.Vincente, in Portugal". [ 22 ] He was unable to compare his observations with ones in Europe, and it is assumed that he used astronomical tables for reference. The second attempt was on the north coast of Jamaica on February 29, 1504, during his fourth voyage. His results were highly inaccurate, with longitude errors of 13 and 38° W respectively. [ 23 ] Randles (1985) documents longitude measurement by the Portuguese and Spanish between 1514 and 1627 both in the Americas and Asia, with errors ranging from 2° to 25°. [ 24 ]
In 1608 a patent was submitted to the government in the Netherlands for a refracting telescope. The idea was picked up by, among others, Galileo who made his first telescope the following year, and began his series of astronomical discoveries that included the satellites of Jupiter, the phases of Venus, and the resolution of the Milky Way into individual stars. Over the next half century, improvements in optics and the use of calibrated mountings, optical grids, and micrometers to adjust positions transformed the telescope from an observation device to an accurate measurement tool. [ 26 ] [ 27 ] [ 28 ] [ 29 ] It also greatly increased the range of events that could be observed to determine longitude.
The second important technical development for longitude determination was the pendulum clock , patented by Christiaan Huygens in 1657. [ 30 ] This gave an increase in accuracy of about 30-fold over previous mechanical clocks – the best pendulum clocks were accurate to about 10 seconds per day. [ 31 ] From the start, Huygens intended his clocks to be used for determination of longitude at sea. [ 32 ] [ 33 ] However, pendulum clocks did not tolerate the motion of a ship sufficiently well, and after a series of trials it was concluded that other approaches would be needed. The future of pendulum clocks would be on land. Together with telescopic instruments, they would revolutionise observational astronomy and cartography in the coming years. [ 34 ] Huygens was also the first to use a balance spring as oscillator in a working clock, and this allowed accurate portable timepieces to be made. But it was not until the work of John Harrison that such clocks became accurate enough to be used as marine chronometers . [ 35 ]
The development of the telescope and accurate clocks increased the range of methods that could be used to determine longitude. With one exception ( magnetic declination ) they all depend on a common principle, which was to determine an absolute time from an event or measurement and to compare the corresponding local time at two different locations. (Absolute here refers to a time that is the same for an observer anywhere on Earth.) Each hour of difference of local time corresponds to a 15-degree change of longitude (360 degrees divided by 24 hours).
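A minimal sketch of this principle, with illustrative values assumed:

```python
def longitude_from_times(local_hours, reference_hours):
    """Longitude in degrees (east positive) from the difference between
    local time and the simultaneous time at the reference meridian:
    each hour of difference is 15 degrees (360/24)."""
    return (local_hours - reference_hours) * 15.0

# Local noon observed when the reference clock reads 15:00 means the
# observer is 45 degrees west of the reference meridian.
print(longitude_from_times(12.0, 15.0))  # -45.0
```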
Local noon is defined as the time at which the Sun is at the highest point in the sky. This is hard to determine directly, as the apparent motion of the Sun is nearly horizontal at noon. The usual approach was to take the mid-point between two times at which the Sun was at the same altitude. With an unobstructed horizon, the mid-point between sunrise and sunset could be used. [ 36 ] At night, local time could be obtained from the apparent rotation of the stars around the celestial pole, either measuring the altitude of a suitable star with a sextant , or the transit of a star across the meridian using a transit instrument. [ 37 ]
To determine the measure of absolute time, lunar eclipses continued to be used. Other proposed methods, described below, included lunar distances, the satellites of Jupiter, appulses and occultations, chronometers, and magnetic declination.
"Lunars" or lunar distances were an early proposal for the calculation of longitude, having been first made practical by Regiomontanus in his 1474 Ephemerides Astronomicae . This almanac is one of the sources used by Amerigo Vespucci in his landmark longitude calculations he made on August 23, 1499 and September 15, 1499 as he explored South America. [ 38 ] [ 39 ] [ 40 ] The method was published by Johannes Werner in 1514, [ 41 ] and discussed in detail by Petrus Apianus in 1524. [ 42 ]
The lunar distance method depends on the motion of the Moon relative to the "fixed" stars, which completes a 360° circuit in 27.3 days on average (a lunar month), giving an observed movement of just over 0.5°/hour. Thus an accurate measurement of the angle is required, since a 2-minute-of-arc (1/30°) error in the angle between the Moon and the selected star corresponds to roughly a 1° error in the longitude: 60 nautical miles (110 km) at the equator. [ 43 ] The method also required accurate tables printed before an observation, complicated by calculations to account for parallax and the irregularity of the orbit of the Moon. Neither measuring instruments nor astronomical tables were accurate enough in the early 16th century. Vespucci's first attempt to use the method placed him at 82.5° west of Cadiz , [ 44 ] putting his calculation within 5° of its actual location. [ 38 ] The second one was off by a significant amount, blamed on the inaccurate ephemerides of Regiomontanus. [ 38 ]
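The error arithmetic can be sketched as follows, using the approximate lunar rate quoted above:

```python
MOON_RATE = 0.5  # degrees per hour against the stars (approximate rate above)

def lunar_longitude_error(angle_error_arcmin):
    """Longitude error (degrees) from an error in the measured Moon-star
    angle: the angle error becomes a time error at the Moon's apparent
    rate, and each hour of time is 15 degrees of longitude."""
    time_error_hours = (angle_error_arcmin / 60.0) / MOON_RATE
    return time_error_hours * 15.0

print(round(lunar_longitude_error(2.0), 2))  # 1.0 degree: 60 nmi at the equator
```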
Accuracy improved as astronomers and navigators used better methods and instruments. Observatories published ephemerides using better observations and predictions. The Nautical Almanac was published in the UK beginning in 1767 and the American Ephemeris and Nautical Almanac starting in 1852; both included lunar distances and moon culminations.
Moon culminations are performed like a lunar distance, but the calculation is generally simpler. For a culmination, the observer simply records the time of the event and compares it with the reference time in an ephemeris table, correcting for refraction and other errors. This method was established by Nathaniel Pigott around 1786. [ 45 ] A culmination only happens about once a day, so it was combined with other observations to increase accuracy.
Galileo discovered the four brightest moons of Jupiter, Io, Europa, Ganymede and Callisto in 1610. Having determined their orbital periods, he proposed in 1612 that with sufficiently accurate knowledge of their orbits one could use their positions as a universal clock, which would make possible the determination of longitude. Galileo applied for Spain's lucrative prize for solutions to the longitude problem in 1616. He worked on this problem from time to time, but was unable to convince the Spanish court. He later applied to Holland for their prize, but by then he had been tried for heresy by the Roman Inquisition and sentenced to house arrest for the rest of his life. [ 46 ] : 15–16
Galileo's method required a telescope, as the moons are not visible to the naked eye. For use in marine navigation, Galileo proposed the celatone , a device in the form of a helmet with a telescope mounted so as to accommodate the motion of the observer on the ship. [ 47 ] This was later replaced with the idea of a pair of nested hemispheric shells separated by a bath of oil. This would provide a platform that would allow the observer to remain stationary as the ship rolled beneath him, in the manner of a gimballed platform. To provide for the determination of time from the observed moons' positions, a jovilabe was offered; this was an analogue computer that calculated time from the positions and that got its name from its similarities to an astrolabe . [ 48 ] The practical problems were severe and the method was never used at sea.
On land, this method proved useful and accurate. In 1668, Giovanni Domenico Cassini published detailed tables of Jupiter's moons. [ 46 ] : 21 An early use was the measurement of the longitude of the site of Tycho Brahe 's former observatory on the island of Hven . Jean Picard on Hven and Cassini in Paris made observations during 1671 and 1672, and obtained a value of 42 minutes 10 seconds (time) east of Paris, corresponding to 10° 32' 30", about 12 minutes of arc (1/5°) higher than the modern value. [ 49 ]
Jupiter's moons provided time information for the French Académie des Sciences' project to survey France that produced a new map in 1744 which showed the coastline was significantly further east than on previous maps. [ 50 ] ( see § Government initiatives , below. )
Several methods depend on the relative motions of the Moon and a star or planet. An appulse is the least apparent distance between the two objects, an occultation occurs when the star or planet passes behind the Moon – essentially a type of eclipse. The times of either of these events can be used as the measure of absolute time in the same way as with a lunar eclipse. Edmond Halley described the use of this method to determine the longitude of Balasore in India, using observations of the star Aldebaran (the "Bull's Eye", being the brightest star in the constellation Taurus ) in 1680, with an error of just over half a degree. [ 51 ] He published a more detailed account of the method in 1717. [ 52 ] A longitude determination using the occultation of a planet, Jupiter , was described by James Pound in 1714. [ 53 ] The 1769 transit of Venus provided an opportunity for determining accurate longitude of over 100 seaports around the world. [ 46 ] : 73
Longitude calculations can be simplified by setting a clock to the local time of a starting point whose longitude is known, transporting it to a new location, and using it alongside astronomical observations there. The longitude of the new location is then found from the difference between the local mean time and the time shown by the transported clock.
Pocket watches have been known since the early 1500s; an early example is the pomander-shaped watch of 1505 made by Peter Henlein in Nuremberg, Germany, rather far from the sea. The first to suggest travelling with a clock to determine longitude, in 1530, was Gemma Frisius , a physician, mathematician, cartographer, philosopher, and instrument maker from the Netherlands. The clock would be set to the local time of a starting point whose longitude was known, and the longitude of any other place could be determined by comparing its local time with the clock time: [ 54 ] there is a four-minute difference between locally observed noon and clock noon for each degree of longitude east or west of the initial meridian. [ 55 ] : 259 While the method is mathematically sound, and was partly stimulated by recent improvements in the accuracy of mechanical clocks, it still required far more accurate timekeeping than was available in Frisius's day. The term chronometer was not used until the following century; [ 56 ] it would be over two centuries before this became the standard method for determining longitude at sea, with John Harrison receiving an award in 1773 for solving the longitude-at-sea problem with his chronometer inventions. [ 57 ]
This method is based on the observation that a compass needle does not in general point exactly north. The angle between true north and the direction of the compass needle (magnetic north) is called the magnetic declination or variation, and its value varies from place to place. Several writers proposed that the size of magnetic declination could be used to determine longitude. Mercator suggested that the magnetic north pole was an island in the longitude of the Azores, where magnetic declination was, at that time, close to zero. These ideas were supported by Michiel Coignet in his Nautical Instruction . [ 55 ]
Halley made extensive studies of magnetic variation during his voyages on the pink Paramour . He published the first chart showing isogonic lines – lines of equal magnetic declination – in 1701. [ 58 ] One of the purposes of the chart was to aid in determining longitude, but the method was eventually to fail as changes in magnetic declination over time proved too large and too unreliable to provide a basis for navigation.
Measurements of longitude on land and sea complemented one another. As Edmond Halley pointed out in 1717, "But since it would be needless to enquire exactly what longitude a ship is in, when that of the port to which she is bound is still unknown it were to be wisht that the princes of the earth would cause such observations to be made, in the ports and on the principal head-lands of their dominions, each for his own, as might once for all settle truly the limits of the land and sea." [ 52 ] But determinations of longitude on land and sea did not develop in parallel.
On land the period from the development of telescopes and pendulum clocks until the mid-18th century saw a steady increase in the number of places whose longitude had been determined with reasonable accuracy, often with errors of less than a degree, and nearly always within 2–3°. By the 1720s errors were consistently less than 1°. [ 59 ]
At sea during the same period, the situation was very different. Two problems proved intractable. The first was the need for immediate results. On land, an astronomer at, say, Cambridge Massachusetts could wait for the next lunar eclipse that would be visible both at Cambridge and in London; set a pendulum clock to local time in the few days before the eclipse; time the events of the eclipse; send the details across the Atlantic and wait weeks or months to compare the results with a London colleague who had made similar observations; calculate the longitude of Cambridge; then send the results for publication, which might be a year or two after the eclipse. [ 60 ] And if either Cambridge or London had no visibility because of cloud, wait for the next eclipse. The marine navigator needed the results quickly. The second problem was the marine environment. Making accurate observations in an ocean swell is much harder than on land, and pendulum clocks do not work well in these conditions. Thus longitude at sea could only be estimated from dead reckoning (DR) – by using estimations of speed and course from a known starting position – at a time when longitude determination on land was becoming increasingly accurate.
To compensate for longitude uncertainty, navigators have sometimes relied on their accurate knowledge of latitude. They would sail to the latitude of their destination, then sail toward it along a line of constant latitude, known as running down a westing (if westbound, easting otherwise). [ 61 ] However, the latitude line was usually slower than the most direct or most favorable route, extending the voyage by days or weeks and increasing the risk of short rations, scurvy , and starvation. [ 62 ]
A famous longitude-error disaster occurred in April 1741. George Anson , commanding HMS Centurion , was rounding Cape Horn east to west. Believing himself past the Cape, he turned to the north but soon found himself headed straight towards land. A particularly strong easterly current had put him well to the east of his dead-reckoning position, and he had to resume his westerly course for several days. When finally past the Horn, he headed north for the Juan Fernández Islands to take on supplies for his crew, many of whom were sick with scurvy. On reaching the latitude of Juan Fernández, he did not know whether the islands were to the east or west, and spent 10 days sailing first eastwards and then westwards before finally reaching the islands. During this time over half of the ship's company died of scurvy. [ 35 ] [ 63 ]
In response to the problems of navigation, a number of European maritime powers offered prizes for a method to determine longitude at sea. Philip II of Spain was the first, offering a reward for a solution in 1567; his son, Philip III , increased the reward in 1598 to 6000 gold ducats plus a permanent pension of 2,000 gold ducats a year. [ 46 ] : 15 Holland offered 30,000 florins in the early 17th century. Neither of these prizes produced a solution, [ 64 ] : 9 though Galileo applied for both. [ 46 ] : 16
The second half of the 17th century saw the foundation of official observatories in Paris and London. The Paris Observatory was founded in 1667 under the auspices of the French Académie des Sciences . The Observatory building south of Paris was completed in 1672. [ 65 ] Early astronomers included Jean Picard , Christiaan Huygens , and Dominique Cassini . [ 66 ] : 165–177 It was not intended for any specific project, but soon became involved in the survey of France that led (after many delays due to wars and unsympathetic ministries) to the Academy's first map of France in 1744. The survey used a combination of triangulation and astronomical observations, with the satellites of Jupiter used to determine longitude. By 1684, sufficient data had been obtained to show that previous maps of France had a major longitude error, showing the Atlantic coast too far to the west. In fact France was found to be substantially smaller than previously thought. [ 67 ] [ 68 ] ( Louis XIV commented that they had taken more territory from France than he had gained in all his wars.)
The Royal Observatory in Greenwich east of London, founded in 1675, a few years after the Paris Observatory, was established explicitly to address the longitude problem. [ 69 ] John Flamsteed , the first Astronomer Royal, was instructed to "apply himself with the utmost care and diligence to the rectifying the tables of the motions of the heavens and the places of the fixed stars, so as to find out the so-much-desired longitude of places for the perfecting the art of navigation". [ 70 ] : 268 [ 29 ] The initial work was in cataloguing stars and their position, and Flamsteed created a catalogue of 3,310 stars, which formed the basis for future work. [ 70 ] : 277
While Flamsteed's catalogue was important, it did not in itself provide a solution. In 1714, the British Parliament passed " An Act for providing a public Reward for such Person or Persons as shall discover the Longitude at Sea " ( 13 Ann. c. 14), and set up a board to administer the award. The payout depended on the accuracy of the method: from £10,000 (equivalent to £1,826,000 in 2023) [ 71 ] for an accuracy within one degree of longitude (60 nautical miles (110 km) at the equator) to £20,000 (equivalent to £3,652,000 in 2023) [ 71 ] for accuracy within one half degree. [ 64 ] : 9
This prize in due course produced two workable solutions. The first was lunar distances, which required careful observation, accurate tables, and rather lengthy calculations. Tobias Mayer had produced tables based on his own observations of the moon, and submitted these to the Board in 1755. These observations were found to give the required accuracy, although the lengthy calculations required (up to four hours) were a barrier to routine use. Mayer's widow in due course received an award from the Board. [ 72 ] Nevil Maskelyne , the newly appointed Astronomer Royal who was on the Board of Longitude, started with Mayer's tables and after his own experiments at sea trying out the lunar distance method, proposed annual publication of pre-calculated lunar distance predictions in an official nautical almanac for the purpose of finding longitude at sea. Being very enthusiastic for the lunar distance method, Maskelyne and his team of computers worked feverishly through the year 1766, preparing tables for the new Nautical Almanac and Astronomical Ephemeris. Published first with data for the year 1767, it included daily tables of the positions of the Sun, Moon, and planets and other astronomical data, as well as tables of lunar distances giving the distance of the Moon from the Sun and nine stars suitable for lunar observations (ten stars for the first few years). [ 73 ] [ 74 ] [ 75 ] This publication later became the standard almanac for mariners worldwide. Since it was based on the Royal Observatory, it helped lead to the international adoption a century later of the Greenwich Meridian as an international standard.
The second method was the use of a chronometer . Many, including Isaac Newton , were pessimistic that a clock of the required accuracy could ever be developed. The Earth turns by one degree of longitude in four minutes, [ 76 ] so the maximum acceptable timekeeping error is a few seconds per day. At that time, there were no clocks that could come close to such accuracy under the conditions of a moving ship. John Harrison , a Yorkshire carpenter and clock-maker, spent over three decades in proving it could be done. [ 64 ] : 14–27
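To put rough numbers on this (the voyage length is an assumed illustration, not a figure from the source): half a degree of longitude corresponds to two minutes of time, so over a six-week passage the permissible drift is about

$$\frac{2 \times 60\ \text{s}}{42\ \text{days}} \approx 2.9\ \text{s per day}.$$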
Harrison built five chronometers, two of which were tested at sea. His first, H-1, was sent on a preliminary test by the Admiralty , a voyage to Lisbon and back. It lost considerable time on the outward voyage but performed excellently on the return leg, which was not part of the official trial. The perfectionist in Harrison prevented him from sending it on the Board of Longitude's official test voyage to the West Indies (and in any case it was regarded as too large and impractical for service use). He instead embarked on the construction of H-2 , immediately followed by H-3. During construction of H-3 , Harrison realised that the loss of time of the H-1 on the Lisbon outward voyage was due to the mechanism losing time whenever the ship came about to tack down the English Channel. Inspired by this realisation, Harrison produced H-4 with a completely different mechanism. The H-4 sea trial in 1762 satisfied all the requirements for the Longitude Prize. However, the board withheld the prize, and Harrison was forced to fight for his reward, finally receiving payment in 1773 after the intervention of Parliament. [ 64 ] : 26
The French were also very interested in the problem of longitude, and the French Academy examined proposals and also offered prize money, particularly after 1748. [ 77 ] : 160 Initially the assessors were dominated by the astronomer Pierre Bouguer who was opposed to the idea of chronometers, but after his death in 1758 both astronomical and mechanical approaches were considered. Two clock-makers dominated, Ferdinand Berthoud and Pierre Le Roy . Four sea trials took place between 1767 and 1772, evaluating lunar distances as well as a variety of time-keepers. Results for both approaches steadily improved as the trials proceeded, and both methods were deemed suitable for use in navigation. [ 77 ] : 163–174
Although both chronometers and lunar distances had been shown to be practicable methods for determining longitude, it was some while before either became widely used. In the early years, chronometers were very expensive, and the calculations required for lunar distances were still complex and time-consuming, in spite of Maskelyne's work to simplify them. Both methods were initially used mainly in specialist scientific and surveying voyages. On the evidence of ships' logbooks and nautical manuals, lunar distances started to be used by ordinary navigators in the 1780s, and became common after 1790. [ 78 ]
In 1714 Humphry Ditton and William Whiston criticized both astronomical methods and the use of chronometers. They wrote: [ 79 ]
Watches are so influenc'd by heat and cold, moisture and drought; and their small Springs, Wheels, and Pivots are so incapable of that degree of exactness, which is here requir'd, that we believe all wise Men give up their Hopes from them in this Matter. Clocks, govern'd by long Pendulum's, go much truer: But then the difference of Gravity in different Latitudes, the lengthening of the Pendulum-rod by heat, and shortening it by cold; together with the different moisture of the Air, and the tossings of the Ship, all put together, are circumstances so unpromising, that we believe Wise Men are almost out of hope of Success from this Method also.
While chronometers could deal with the conditions of a ship at sea, they could be vulnerable to the harsher outdoor conditions of land-based exploration and surveying, for example in the American North-West, and lunar distances were the main method used by surveyors such as David Thompson . [ 80 ] Between January and May 1793 he took 34 observations at Cumberland House, Saskatchewan , obtaining a mean value of 102° 12' W, about 2' (2.2 km) east of the modern value. [ 81 ] Each of the 34 observations would have required about 3 hours of calculation. These lunar distance calculations became substantially simpler in 1805, with the publication of tables using the Haversine formula by Josef de Mendoza y Ríos . [ 82 ]
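For reference, the haversine function that Mendoza y Ríos's tables exploited is hav(θ) = sin²(θ/2); the following is a present-day sketch of the function itself, not of the original printed tables:

```python
import math

def haversine(theta_rad):
    """hav(theta) = sin^2(theta / 2). Tables of this function simplified
    the trigonometry of the lunar-distance calculation."""
    return math.sin(theta_rad / 2.0) ** 2

print(round(haversine(math.radians(60.0)), 4))  # 0.25
```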
The advantage of using chronometers was that though astronomical observations were still needed to establish local time, the observations were simpler and less demanding of accuracy. Once local time had been established, and any necessary corrections made to the chronometer time, the calculation to obtain longitude was straightforward. A contemporary guide to the method was published by William Wales in 1794. [ 83 ] The disadvantage of cost gradually became less as chronometers began to be made in quantity. The chronometers used were not those of Harrison. Other makers such as Thomas Earnshaw , who developed the spring detent escapement, [ 84 ] simplified chronometer design and production. From 1800 to 1850, as chronometers became more affordable and reliable, they increasingly displaced the lunar distance method.
Chronometers needed to be checked and reset at intervals. On short voyages between places of known longitude this was not a problem. For longer journeys, particularly of survey and exploration, astronomical methods continued to be important. An example of the way chronometers and lunars complemented one another in surveying work is Matthew Flinders ' circumnavigation of Australia in 1801–3. Surveying the south coast, Flinders started at King George Sound , a known location from George Vancouver 's earlier survey. He proceeded along the south coast, using chronometers to determine longitude of the features along the way. Arriving at the bay he named Port Lincoln , he set up a shore observatory, and determined the longitude from thirty sets of lunar distances. He then determined the chronometer error, and recalculated all the longitudes of the intervening locations. [ 85 ]
Ships often carried more than one chronometer. Two would give dual modular redundancy , allowing a backup if one should cease to work, but not allowing any error correction if the two displayed a different time, since it would be impossible to know which one was wrong: the error detection obtained would be the same as having only one chronometer and checking it periodically: every day at noon against dead reckoning . Three chronometers provided triple modular redundancy , allowing error correction if one of the three was wrong, so the pilot would take the average of the two with closer readings (average precision vote). This inspired the adage: "Never go to sea with two chronometers; take one or three." [ 86 ] Some vessels carried more than three chronometers – for example, HMS Beagle carried 22 chronometers . [ 87 ]
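The "average precision vote" described above amounts to averaging the two closest readings and discarding the outlier; a minimal sketch with assumed values:

```python
def chronometer_consensus(readings):
    """Average of the two closest of three chronometer readings
    (times in seconds), discarding the outlier."""
    a, b, c = sorted(readings)
    return (a + b) / 2.0 if (b - a) <= (c - b) else (b + c) / 2.0

# One chronometer has drifted badly; the other two agree closely.
print(chronometer_consensus([43200.0, 43202.0, 43260.0]))  # 43201.0
```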
By 1850, the vast majority of ocean-going navigators worldwide had abandoned the method of lunar distances. Nonetheless, expert navigators continued to learn lunars as late as 1905, though for most this was only a textbook exercise required for certain licenses. Littlehales noted in 1909: "The lunar-distance tables were omitted from the Connaissance des Temps for the year 1905, after having retained their place in the French official ephemeris for 131 years; and from the British Nautical Almanac for 1907, after having been presented annually since the year 1767, when Maskelyne's tables were published." [ 88 ]
Surveying on land continued to use a mixture of triangulation and astronomical methods, to which was added the use of chronometers once they became readily available. An early use of chronometers in land surveying was reported by Simeon Borden in his survey of Massachusetts in 1846. Having checked Nathaniel Bowditch 's value for the longitude of the State House in Boston he determined the longitude of the First Congregational Church at Pittsfield , transporting 38 chronometers on 13 excursions between the two locations. [ 89 ] Chronometers were also transported much longer distances. For example, the United States Coast Survey organised expeditions in 1849 and 1855 in which a total of over 200 chronometers were shipped between Liverpool and Boston , not for navigation, but to obtain a more accurate determination of the longitude of the Observatory at Cambridge, Massachusetts , and thus to anchor the US Survey to the Greenwich meridian. [ 90 ] : 5
The first working telegraphs were established in Britain by Wheatstone and Cooke in 1839, and in the US by Morse in 1844. The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Morse in 1837, [ 91 ] and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore. Two chronometers were synchronized, and taken to the two telegraph offices to check that time was accurately transmitted. [ 92 ]
The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America. Many technical challenges were dealt with. Initially operators sent signals manually and listened for clicks on the line, comparing them with clock ticks and estimating fractions of a second. Circuit-breaking clocks and pen recorders were introduced in 1849 to automate these processes, leading to great improvements in both accuracy and productivity. [ 94 ] : 318–330 [ 95 ] : 98–107 With the establishment of an observatory in Quebec in 1850 under the direction of Edward David Ashe, a network of telegraphic longitude determinations was carried out for eastern Canada, and linked to that of Harvard and Chicago. [ 96 ] [ 97 ]
A big expansion to the "telegraphic net of longitude" was due to the successful completion of the transatlantic telegraph cable between S.W. Ireland and Nova Scotia in 1866. [ 90 ] A cable from Brest in France to Duxbury Massachusetts was completed in 1870, and gave the opportunity to check results by a different route. In the interval, the land-based parts of the network had improved, including the elimination of repeaters. Comparisons between the two routes of the measured difference between Greenwich and Cambridge Massachusetts showed a discrepancy of 0.01 second of time, with a probable error of ±0.04 seconds, equivalent to 45 feet. [ 95 ] : 175 Summing up the net in 1897, Charles Schott presented a table of the major locations throughout the United States whose locations had been determined by telegraphy, with the dates and pairings, and the probable error. [ 93 ] [ 98 ] The net was expanded into the American North-West with telegraphic connection to Alaska and western Canada. Telegraphic links between Dawson City , Yukon, Fort Egbert , Alaska, and Seattle and Vancouver were used to provide a double determination of the position of the 141st meridian where it crossed the Yukon River, and thus provide a starting point for a survey of the border between the US and Canada to north and south during 1906–1908. [ 99 ] [ 100 ] William Bowie has given a detailed description of the telegraphic method as used by the United States Coast and Geodetic Survey . [ 101 ]
The U.S. Navy expanded the web into the West Indies and Central and South America in four expeditions in the years 1874–90. One series of observations linked Key West , Florida with the West Indies and Panama City . [ 103 ] A second covered locations in Brazil and Argentina , and also linked to Greenwich via Lisbon . [ 104 ] The third ran from Galveston, Texas , through Mexico and Central America, including Panama, and on to Peru and Chile, connecting to Argentina via Cordoba . [ 102 ] The fourth added locations in Mexico, Central America and the West Indies, and extended the chain to Curaçao and Venezuela . [ 105 ]
East of Greenwich, telegraphic determinations of longitude were made of locations in Egypt, including Suez, as part of the observations of the 1874 transit of Venus directed by Sir George Airy , the British Astronomer Royal . [ 106 ] [ 107 ] Telegraphic observations made as part of the Great Trigonometrical Survey of India, including Madras , were linked to Aden and Suez in 1877. [ 108 ] [ 107 ] In 1875, the longitude of Vladivostok in eastern Siberia was determined by telegraphic connection with Saint Petersburg . The US Navy used Suez, Madras and Vladivostok as the anchor-points for a chain of determinations made in 1881–1882, which extended through Japan , China , the Philippines , and Singapore . [ 109 ]
The telegraphic web circled the globe in 1902 with the connection of Australia and New Zealand to Canada via the All Red Line . This allowed a double determination of longitudes from east and west, which agreed within one second of arc (1/15 second of time). [ 110 ]
The telegraphic net of longitude was less important in Western Europe, which had already mostly been surveyed in detail using triangulation and astronomical observations. But the "American Method" was used in Europe, for example in a series of measurements to determine the longitude difference between the observatories of Greenwich and Paris with greater accuracy than previously available. [ 111 ]
Marconi was granted his patent for wireless telegraphy in 1897. [ 112 ] The potential for using wireless time signals for determining longitude was soon apparent. [ 113 ]
Wireless telegraphy was used to extend and refine the telegraphic web of longitude, giving potentially greater accuracy, and reaching locations that were not connected to the wired telegraph network. An early determination was that between Potsdam and The Brocken in Germany, a distance of about 100 miles (160 km), in 1906. [ 114 ] In 1911 the French determined the difference of longitude between Paris and Bizerte in Tunisia, a distance of 920 miles (1,480 km), and in 1913–14 a transatlantic determination was made between Paris and Washington . [ 115 ]
The first wireless time signals for the use of ships at sea started in 1907, from Halifax, Nova Scotia . [ 116 ] Time signals were transmitted from the Eiffel Tower in Paris starting in 1910. [ 117 ] These signals allowed navigators to check and adjust their chronometers on a frequent basis. [ 118 ] [ 119 ] An international conference in 1912 allocated times for various wireless stations around the world to transmit their signals, allowing for near-worldwide coverage without interference between stations. [ 117 ] Wireless time-signals were also used by land-based observers in the field, in particular surveyors and explorers. [ 120 ]
Radio navigation systems came into general use after World War II . Several systems were developed, including the Decca Navigator System , the US Coast Guard's LORAN-C , the international Omega system, and the Soviet Alpha and CHAYKA . The systems all depended on transmissions from fixed navigational beacons, from which a ship-board receiver calculated the vessel's position. [ 121 ] These systems were the first to allow accurate navigation when astronomical observations could not be made because of poor visibility, and became the established method for commercial shipping until the introduction of satellite-based navigation systems in the early 1990s.
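These beacon systems were hyperbolic: the receiver measured the difference in arrival times of synchronized transmissions, each time difference placing the vessel on a hyperbola with two stations as its foci, and the fix lay where two such hyperbolas intersected. The Python sketch below illustrates the principle with a brute-force search; the station coordinates, the flat-plane geometry, and the grid resolution are wholly hypothetical simplifications of the real spheroidal problem:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tdoa(p, a, b):
    """Time-difference of arrival at point p between stations a and b."""
    return (math.dist(p, a) - math.dist(p, b)) / C

# Three hypothetical shore stations (coordinates in metres on a flat plane)
master = (0.0, 0.0)
slave1 = (200_000.0, 0.0)
slave2 = (0.0, 150_000.0)

true_pos = (80_000.0, 60_000.0)
observed = (tdoa(true_pos, master, slave1), tdoa(true_pos, master, slave2))

# Grid search for the point whose predicted time differences best match
# the observed ones; each TDOA pair defines a hyperbola, and the fix is
# where the two hyperbolas cross.
best, best_err = None, float("inf")
for x in range(0, 200_001, 1000):
    for y in range(0, 150_001, 1000):
        p = (float(x), float(y))
        err = (abs(tdoa(p, master, slave1) - observed[0])
               + abs(tdoa(p, master, slave2) - observed[1]))
        if err < best_err:
            best, best_err = p, err

print(best)  # -> (80000.0, 60000.0)
```

Production receivers solved the same intersection problem with charts of pre-printed hyperbolic lattices or, later, analogue and digital computers, rather than a grid search.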
In 1908, Nikola Tesla had predicted:
In the densest fog or darkness of night, without a compass or other instruments of orientation, or a timepiece, it will be possible to guide a vessel along the shortest or orthodromic path, to instantly read the latitude and longitude, the hour, the distance from any point, and the true speed and direction of movement. [ 122 ]
His prediction was partially fulfilled by radio navigation systems, and completely by computer geopositioning systems based on GPS satellite beacons. | https://en.wikipedia.org/wiki/History_of_longitude
The history of gaseous fuel , important for lighting, heating, and cooking purposes throughout most of the 19th century and the first half of the 20th century, began with the development of analytical and pneumatic chemistry in the 18th century. These "synthetic fuel gases " (also known as "manufactured fuel gas", "manufactured gas" or simply "gas") were made by gasification of combustible materials, usually coal, but also wood and oil, by heating them in enclosed ovens with an oxygen-poor atmosphere. The fuel gases generated were mixtures of many chemical substances , including hydrogen , methane , carbon monoxide and ethylene . Coal gas also contains significant quantities of unwanted sulfur and ammonia compounds, as well as heavy hydrocarbons , and must be purified before use.
The first attempts to manufacture fuel gas in a commercial way were made in the period 1795–1805 in France by Philippe LeBon , and in England by William Murdoch . Although precursors can be found, it was these two engineers who elaborated the technology with commercial applications in mind. Frederick Winsor was the key player behind the creation of the first gas utility, the London-based Gas Light and Coke Company , incorporated by royal charter in April 1812.
Manufactured gas utilities were founded first in England , and then in the rest of Europe and North America in the 1820s. The technology increased in scale. After a period of competition, the business model of the gas industry matured into monopolies, where a single company provided gas in a given zone. The ownership of the companies varied from outright municipal ownership, such as in Manchester, to completely private corporations, such as in London and most North American cities. Gas companies thrived during most of the nineteenth century, usually returning good profits to their shareholders, but were also the subject of many complaints over price.
The most important use of manufactured gas in the early 19th century was for gas lighting , as a convenient substitute for candles and oil lamps in the home. Gas lighting became the first widespread form of street lighting . This use called for gases that burned with a highly luminous flame, called "illuminating gases". Some gas mixtures of low intrinsic luminosity, such as blue water gas , were enriched with oil for brightness.
In the second half of the 19th century, the manufactured fuel gas industry diversified from lighting to include heat and cooking uses. The threat from electrical light in the later 1870s and 1880s drove this trend strongly. The gas industry did not cede the gas lighting market to electricity immediately, as the invention of the Welsbach mantle , a refractory mesh bag heated to incandescence by a mostly non-luminous flame within, dramatically increased the efficiency of gas lighting. Acetylene was also used from about 1898 for gas cooking and gas lighting (see Carbide lamp ) on a smaller scale, although its use too declined with the advent of electric lighting, and LPG for cooking. [ 1 ] Other technological developments in the late nineteenth century include the use of water gas and machine stoking, although these were not universally adopted.
In the 1890s, [ citation needed ] pipelines from natural gas fields in Texas and Oklahoma were built to Chicago and other cities, and natural gas was used to supplement manufactured fuel gas supplies, eventually completely displacing it. Gas ceased to be manufactured in North America by 1966 (with the exception of Indianapolis and Honolulu ), while it continued in Europe until the 1980s. "Manufactured gas" is again being evaluated as a fuel source, as energy utilities look towards coal gasification once again as a potentially cleaner way of generating power from coal, although nowadays such gases are likely to be called " synthetic natural gas ".
Pneumatic chemistry developed in the eighteenth century with the work of scientists such as Stephen Hales , Joseph Black , Joseph Priestley , and Antoine-Laurent Lavoisier . Until the eighteenth century, gas was not recognized as a separate state of matter. Rather, while some of the mechanical properties of gases were understood, as typified by Robert Boyle 's experiments and the development of the air pump , their chemical properties were not. Gases were regarded, in keeping with the Aristotelian tradition of four elements, as being air, one of the four fundamental elements. The different sorts of airs, such as putrid airs or inflammable air, were looked upon as atmospheric air with some impurities, much like muddied water.
After Joseph Black realized that carbon dioxide was in fact a different sort of gas altogether from atmospheric air, other gases were identified, including hydrogen by Henry Cavendish in 1766. Alessandro Volta expanded the list with his discovery of methane in 1776. It had also been known for a long time that inflammable gases could be produced from most combustible materials, such as coal and wood, through the process of distillation . Stephen Hales, for example, had written about the phenomenon in the Vegetable Staticks in 1722. In the last two decades of the eighteenth century, as more gases were being discovered and the techniques and instruments of pneumatic chemistry became more sophisticated, a number of natural philosophers and engineers thought about using gases in medical and industrial applications. One of the first such uses was ballooning beginning in 1783, but other uses soon followed. [ 2 ]
One of the results of the ballooning craze of 1783–1784 was the first implementation of lighting by manufactured gas. Jan Pieter Minckeleers , a professor of natural philosophy at the University of Louvain, and two of his colleagues were asked by their patron, the Duke of Arenberg , to investigate ballooning. They did so, building apparatus to generate lighter-than-air inflammable gases from coal and other inflammable substances. In 1785 Minckeleers used some of this apparatus to gasify coal to light his lecture hall at the university. He did not extend gas lighting much beyond this, and when he was forced to flee Leuven during the Brabant Revolution , he abandoned the project altogether. [ 3 ]
Philippe LeBon was a French civil engineer working in the public engineering corps who became interested while at university in distillation as an industrial process for the manufacturing of materials such as tar and oil. He graduated from the engineering school in 1789, and was assigned to Angoulême. There, he investigated distillation, and became aware that the gas produced in the distillation of wood and coal could be useful for lighting, heating, and as an energy source in engines. He took out a patent for distillation processes in 1794, and continued his research, eventually designing a distillation oven known as the thermolamp . He applied for and received a patent for this invention in 1799, with an addition in 1801. He launched a marketing campaign in Paris in 1801 by printing a pamphlet and renting a house where he put on public demonstrations with his apparatus. His goal was to raise sufficient funds from investors to launch a company, but he failed to attract this sort of interest, either from the French state or from private sources. He was forced to abandon the project and return to the civil engineering corps. Although he was given a forest concession by the French government to experiment with the manufacture of tar from wood for naval use, he never succeeded with the thermolamp, and died in uncertain circumstances in 1805. [ 4 ]
Although the thermolamp received some interest in France, it was in Germany that interest was the greatest. A number of books and articles were written on the subject in the period 1802–1812. There were also thermolamps designed and built in Germany, the most important of which were by Zachaus Winzler, an Austrian chemist who ran a saltpetre factory in Blansko. Under the patronage of the aristocratic zu Salm family, he built a large one in Brno. He moved to Vienna to further his work. The thermolamp, however, was used primarily for making charcoal and not for the production of gases. [ 5 ] [ 6 ]
William Murdoch (sometimes Murdock) (1754–1839) was an engineer working for the firm of Boulton & Watt , when, while investigating distillation processes sometime in 1792–1794, he began using coal gas for illumination. He was living in Redruth in Cornwall at the time, and made some small scale experiments with lighting his own house with coal gas. He soon dropped the subject until 1798, when he moved to Birmingham to work at Boulton & Watt's home base of Soho . Boulton & Watt then instigated another small-scale series of experiments. With ongoing patent litigation and their main business of steam engines to attend to, the subject was dropped once again. Gregory Watt, James Watt's second son, while traveling in Europe saw Lebon's demonstrations and wrote a letter to his brother, James Watt Jr. , informing him of this potential competitor. This prompted James Watt Jr. to begin a gaslight development program at Boulton & Watt that would scale up the technology and lead to the first commercial applications of gaslight. [ 7 ] [ 8 ]
After an initial installation at the Soho Foundry in 1803–1804, Boulton & Watt prepared an apparatus for the textile firm of Philips & Lee in Salford near Manchester in 1805–1806. This was to be their only major sale until late 1808. George Augustus Lee was a major motivating force behind the development of the apparatus. He had an avid interest in technology, and had introduced a series of technological innovations at the Salford Mill, such as iron frame construction and steam heating. He continued to encourage the development of gaslight technology at Boulton & Watt. [ 7 ] [ 8 ]
The first company to provide manufactured gas to consumers as a utility was the London-based Gas Light and Coke Company . It was founded through the efforts of a German émigré, Frederick Winsor , who had witnessed Lebon's demonstrations in Paris. He had tried unsuccessfully to purchase a thermolamp from Lebon, but remained taken with the technology, and decided to try his luck, first in his hometown of Brunswick , and then in London in 1804. Once in London, Winsor began an intense campaign to find investors for a new company that would manufacture gas apparatus and sell gas to consumers. He was successful in finding investors, but the legal form of the company was a more difficult problem. Under the Bubble Act 1720 , all joint-stock companies above a certain number of shareholders in England needed a royal charter to incorporate, which meant that an act of Parliament was required.
Winsor waged his campaign intermittently to 1807, when the investors constituted a committee charged with obtaining an act of Parliament. They pursued this task over the next three years, encountering adversities en route, the most important of which was the resistance by Boulton & Watt in 1809. In that year, the committee made a serious attempt to get the House of Commons to pass a bill empowering the king to grant the charter, but Boulton & Watt felt their gaslight apparatus manufacturing business was threatened and mounted an opposition through their allies in Parliament. Although a parliamentary committee recommended approval, it was defeated on the third reading.
The following year, the committee tried again, succeeding with the acquiescence of Boulton & Watt, who withdrew their opposition once the proposed company renounced all powers to manufacture apparatus for sale. The resulting act of Parliament [ which? ] required that the company raise £100,000 before it could request a charter, a condition it took the next two years to fulfil. George III granted the charter in 1812.
From 1812 to approximately 1825, manufactured gas was predominantly an English technology. A number of new gas utilities were founded to serve London and other cities in the UK after 1812. Liverpool, Exeter, and Preston were the first, in 1816. Others soon followed; by 1821, no town with a population over 50,000 was without gaslight, and five years later only two towns over 10,000 were without it. [ 9 ] In London, the growth of gaslight was rapid. New companies were founded within a few years of the Gas Light and Coke Company, and a period of intense competition followed as companies competed for consumers on the boundaries of their respective zones of operations. Frederick Accum , in the various editions of his book on gaslight, gives a good sense of how rapidly the technology spread in the capital. In 1815, he wrote that there were 4,000 lamps in the city, served by 26 miles (42 km) of mains; in 1819, he raised his estimate to 51,000 lamps and 288 miles (463 km) of mains. Likewise, there were only two gasworks in London in 1814, seven by 1822, and 200 companies by 1829. [ 7 ] : 72 The government did not regulate the industry as a whole until 1816, when an act of Parliament [ which? ] created the post of inspector for gasworks, the first holder of which was Sir William Congreve . Even then, no laws were passed regulating the entire industry until an act of Parliament [ which? ] was passed in 1847; a bill proposed in 1822 had failed due to opposition from gas companies. [ 7 ] : 83 The charters approved by Parliament did, however, contain various regulations, such as how the companies could break up the pavement.
France's first gas company was also promoted by Frederick Winsor, after he had to flee England in 1814 due to unpaid debts. He tried to found another gas company in Paris, but it failed in 1819. The government was also interested in promoting the industry, and in 1817 commissioned Chabrol de Volvic to study the technology and build a prototype plant, also in Paris. The plant provided gas for lighting the hôpital Saint Louis, and the experiment was judged successful. [ 10 ] King Louis XVIII then decided to give further impetus to the development of the French industry by sending people to England to study the situation there, and by installing gaslight in a number of prestigious buildings, such as the Paris Opera building and the national library. A public company was created for this purpose in 1818. [ 11 ] Private companies soon followed, and by 1822, when the government moved to regulate the industry, four were operating in the capital. The regulations passed then prevented the companies from competing, and Paris was effectively divided between the various companies, each operating as a monopoly in its own zone. [ 12 ]
Gaslight spread to other European countries. In 1817, a company was founded in Brussels by P. J. Meeus-Van der Maelen, and began operating the following year. By 1822, there were companies in Amsterdam and Rotterdam using English technology. [ 13 ] In Germany, gaslight was used on a small scale from 1816 onwards, but the first gaslight utility was founded with English engineers and capital. In 1824, the Imperial Continental Gas Association was founded in London to establish gas utilities in other countries. Sir William Congreve, 2nd Baronet , one of its leaders, signed an agreement with the government in Hanover, and gas lamps were used on the streets there for the first time in 1826. [ 14 ]
Gaslight was first introduced to the US in 1816 in Baltimore by Rembrandt and Rubens Peale, who lit their museum with gaslight, which they had seen on a trip to Europe. The brothers convinced a group of wealthy people to back them in a larger enterprise. The local government passed a law allowing the Peales and their associates to lay mains and light the streets. A company was incorporated for this purpose in 1817. After some difficulties with the apparatus and financial problems, the company hired an English engineer with experience in gaslight. It began to flourish, and by the 1830s, the company was supplying gas to 3000 domestic customers and 100 street lamps. [ 15 ] Companies in other cities followed, the second being Boston Gas Light in 1822 and New York Gas Light Company in 1825. [ 16 ] A gas works was built in Philadelphia in 1835. [ 17 ]
The Australian Gas Light Company , established in 1837, opened the first gasworks in Australia, at Millers Point in Sydney in 1841. [ 18 ]
Gas lighting was one of the most debated technologies of the first industrial revolution. In Paris, as early as 1823, controversy forced the government to devise safety standards. [ 19 ] The residues produced from distilled coal were often either drained into rivers or stored in basins which polluted (and still pollute) the soil. One early exception was the Edinburgh Gas Works where, from 1822, the residues were carted and later piped to the Bonnington Chemical Works and processed into valuable products. [ 20 ]
Although case law in the UK and the US clearly held that the construction and operation of a gas-works was not in itself the creation of a public nuisance or malum in se , the reputation of gas-works as highly undesirable neighbors, and the noxious pollution known to issue from them, especially in the early days of manufactured gas, meant that gas-works were on extremely short notice from the courts that (detectable) contamination outside of their grounds – especially in residential districts – would be severely frowned upon. Indeed, many actions for the abatement of nuisances brought before the courts did result in unfavorable verdicts for gas manufacturers – in one study on early environmental law, actions for nuisance involving gas-works resulted in findings for the plaintiffs 80% of the time, compared with an overall plaintiff victory rate of 28.5% in industrial nuisance cases. [ 21 ]
Injunctions, both preliminary and permanent, could be and often were issued in cases involving gas works. For example, the ill repute of gas-works became so well known that in City of Cleveland vs. Citizens' Gas Light Co. , 20 N. J. Eq. 201 , a court went so far as to enjoin a future gas-works not yet even built – preventing it from causing annoying and offensive vapours and odors in the first place. The injunction not only regulated the gas manufacturing process – forbidding the use of lime purification – but also provided that if nuisances of any sort were to issue from the works, a permanent injunction forbidding the production of gas would issue from the court. [ 22 ] Indeed, as the Master of the Rolls , Lord Langdale , once remarked in his opinion in Haines v. Taylor , 10 Beavan 80 : "I have been rather astonished to hear the effects of gas works treated as nothing...every man, in these days, must have sufficient experience, to enable him to come to the conclusion, that, whether a nuisance or not, a gas manufactory is a very disagreeable thing. Nobody can doubt that the volatile products which arise from the distillation of coal are extremely offensive. It is quite contrary to common experience to say they are not so...every man knows it." [ 23 ] However, as time went by, gas-works began to be seen more as a double-edged sword – and eventually as a positive good, as former nuisances were abated by technological improvements, and the full benefits of gas became clear. Several major impetuses drove this change.
Both the era of consolidation of gas-works through high-pressure distribution systems (1900s–1930s) and the end of the era of manufactured gas (1955–1975) saw gas-works being shut down due to redundancies. What brought about the end of manufactured gas was that pipelines began to be built to bring natural gas directly from the well to gas distribution systems. Natural gas was superior to the manufactured gas of that time, being cheaper – extracted from wells rather than manufactured in a gas-works – more user friendly – coming from the well requiring little, if any, purification – and safer – due to the lack of carbon monoxide in the distributed product. Upon being shut down, few former manufactured gas plant sites were brought to an acceptable level of environmental cleanliness to allow for their re-use, at least by contemporary standards. In fact, many were literally abandoned in place, with process wastes left in situ , and never adequately disposed of. In the United States, an EPA report from 1999 indicates that there are 3,000 to 5,000 former manufactured gas plant sites around the country. [ 24 ]
As the wastes produced by former manufactured gas plants were persistent in nature, they often (as of 2009) still contaminate the site of former manufactured gas plants: the waste causing the most concern today is primarily coal tar (mixed long-chain aromatic and aliphatic hydrocarbons, a byproduct of coal carbonization ), while "blue billy" (a noxious byproduct of lime purification contaminated with cyanides) as well as other lime and coal tar residues are regarded as lesser, though significant environmental hazards. Some former manufactured gas plants are owned by gas utilities today, often in an effort to prevent contaminated land from falling into public use, and inadvertently causing the release of the wastes therein contained. Others have fallen into public use, and without proper reclamation, have caused – often severe – health hazards for their users. When and where necessary, former manufactured gas plants are subject to environmental remediation laws, and can be subject to legally mandated cleanups.
The basic design of gaslight apparatus was established by Boulton & Watt and Samuel Clegg in the period 1805–1812. Further improvements were made at the Gas Light and Coke Company, as well as by the growing number of gas engineers such as John Malam and Thomas Peckston after 1812. Boulton & Watt contributed the basic design of the retort, condenser, and gasometer, while Clegg improved the gasometer and introduced lime purification and the hydraulic main, another purifier.
The retort bench was the construction in which the retorts were located for the carbonization (synonymous with pyrolysis) of the coal feedstock and the evolution of coal gas. Over the years of manufactured gas production, advances were made that turned the retort bench from little more than coal-containing iron vessels over an open fire into a massive, highly efficient, partially automated, industrial-scale, capital-intensive plant for the carbonization of large amounts of coal. Several retort benches were usually located in a single "retort house", of which there was at least one in every gas works.
Initially, retort benches were of many different configurations, owing to the lack of long experience and of scientific and practical understanding of the carbonization of coal. Some early retorts were little more than iron vessels filled with coal and thrust upon a coal fire, with pipes attached to their top ends. Though practical for the earliest gas works, this quickly changed once the early gas-works served more than a few customers. As the size of such vessels grew, the need for efficiency in refilling them became apparent: filling one-ended vertical retorts was easy, but removing the coke and residues from them after the carbonization of the coal was far more difficult. Hence, gas retorts transitioned from vertical vessels to horizontal tubular vessels.
Retorts were usually made of cast iron during the early days. Early gas engineers experimented extensively with the best shape, size, and setting. No one form of retort dominated, and many different cross-sections remained in use. After the 1850s, retorts were generally made of fire clay , owing to its greater heat retention, greater durability, and other positive qualities. Cast-iron retorts remained in use in small gas works, where their lower cost, ability to heat quickly to meet transient demand, and "plug and play" replacement capabilities suited the demands there; these outweighed the disadvantages of shorter life, lower temperature margins, and the inability to be manufactured in non-cylindrical shapes. General gas-works practice following the switch to fire-clay retorts favored retorts shaped like a "D" turned 90 degrees to the left, sometimes with a slightly pitched bottom section.
With the introduction of the fire-clay retort, higher heats could be held in the retort benches, leading to faster and more complete carbonization of the coal. As higher heats became possible, advanced methods of retort bench firing were introduced, catalyzed by the development of the open hearth furnace by Siemens , circa 1855–1870, leading to a revolution in gas-works efficiency.
Specifically, the two major advances were the indirect firing of the retorts with producer gas generated from coke, rather than direct coal firing, and the regenerative use of waste heat from the bench flues to preheat the air supplied for secondary combustion.
These two advances turned the old, "directly fired" retort bench into the advanced, "indirectly fired", "regenerative" or "generative" retort bench, and led coke usage within the retort benches (in the larger works) to drop from upwards of 40% of the coke made by the retorts to as low as 15%, a dramatic improvement in efficiency. These improvements imparted an additional capital cost to the retort bench, so they were incorporated only slowly in the smaller gas-works, if they were incorporated at all.
Further increases in efficiency and safety were seen with the introduction of the "through" retort, which had a door at front and rear. This provided for greater efficiency and safety in loading and unloading the retorts, which was a labor-intensive and often dangerous process: coal could now be pushed out of the retort rather than pulled out of it. One interesting modification of the "through" retort was the "inclined" retort, which came into its heyday in the 1880s – a retort set on a moderate incline, where coal was poured in at one end and the retort sealed; following pyrolysis, the bottom was opened and the coke poured out by gravity. This was adopted in some gas-works, but the savings in labor were often offset by the uneven distribution and pyrolysis of the coal, as well as by clumping problems that prevented the coke from pouring out of the bottom after pyrolysis, both of which were exacerbated in certain coal types. As such, inclined retorts were rendered obsolescent by later advances, including the retort-handling machine and the vertical retort system.
Several advanced retort-house appliances were introduced for improved efficiency and convenience. The compressed-air or steam-driven clinkering pick was found to be especially useful in removing clinker from the primary combustion area of the indirectly fired benches – previously, clinkering was an arduous and time-consuming process that consumed large amounts of retort house labor. Another class of appliances introduced were apparatuses – and ultimately, machines – for retort loading and unloading. Retorts were generally loaded by using an elongated scoop into which the coal was loaded; a gang of men would then lift the scoop and ram it into the retort. The coal would then be raked by the men into a layer of even thickness and the retort sealed. Gas production would then ensue, and 8–12 hours later the retort would be opened and the coal either pulled (in the case of "stop-ended" retorts) or pushed (in the case of "through" retorts) out of the retort. Thus, the retort house had heavy manpower requirements, as many men were often required to bear the coal-containing scoop and load the retort.
From the retort, the gas would first pass through a tar/water "trap" (similar to a trap in plumbing) called a hydraulic main, where a considerable fraction of coal tar was given up and the gas was significantly cooled. Then, it would pass through the main out of the retort house into an atmospheric or water-cooled condenser, where it would be cooled to the temperature of the atmosphere or the water used. At this point, it entered the exhauster house and passed through an "exhauster", an air pump which maintained the hydraulic mains and, consequently, the retorts at a negative pressure (with zero pressure being atmospheric). It would then be washed in a "washer" by bubbling it through water, to extract any remaining tars. After this, it would enter a purifier. The gas would then be ready for distribution, and pass into a gasholder for storage.
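In summary form, the gas train can be laid out as an ordered sequence of stages. The Python sketch below simply tabulates the description above; the one-line roles are paraphrases for orientation, not engineering specifications:

```python
# The main gas train, in the order described above, with what each stage did.
GAS_TRAIN = [
    ("retort bench",   "carbonized coal, evolving raw coal gas"),
    ("hydraulic main", "tar/water seal; gave up much of the heavy tar, cooled the gas"),
    ("condenser",      "cooled the gas to air/water temperature, dropping more tar"),
    ("exhauster",      "air pump; kept mains and retorts at negative pressure"),
    ("washer",         "bubbled the gas through water to extract remaining tars"),
    ("purifier",       "removed hydrogen sulfide and other impurities"),
    ("gasholder",      "stored the finished gas for distribution"),
]

for stage, role in GAS_TRAIN:
    print(f"{stage:>15}: {role}")
```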
Within each retort-house, the retort benches would be lined up next to one another in a long row. Each retort had a loading and unloading door. Affixed to each door was an ascension pipe, to carry off the gas as it was evolved from the coal within. These pipes would rise to the top of the bench where they would terminate in an inverted "U" with the leg of the "U" disappearing into a long, trough-shaped structure (with a covered top) made of cast iron called a hydraulic main that was placed atop the row of benches near their front edge. It ran continuously along the row of benches within the retort house, and each ascension pipe from each retort descended into it.
The hydraulic main had a level of a liquid mixture of (initially) water, but, following use, also coal tar, and ammoniacal liquor. Each retort ascension pipe dropped under the water level by at least a small amount, perhaps by an inch, but often considerably more in the earlier days of gas manufacture. The gas evolved from each retort would thus bubble through the liquid and emerge from it into the void above the liquid, where it would mix with the gas evolved from the other retorts and be drawn off through the foul main to the condenser.
There were two purposes to the liquid seal: first, to draw off some of the tar and liquor, as the gas from the retort was laden with tar, and the hydraulic main could rid the gas of it, to a certain degree; further tar removal would take place in the condenser, washer/scrubber, and the tar extractor. Still, there would be less tar to deal with later. Second, the liquid seal also provided defense against air being drawn into the hydraulic main: if the main had no liquid within, and a retort was left open with the pipe not shut off, and air were to combine with the gas, the main could explode, along with nearby benches.
However, after the early years of gas, research proved that a very deep, excessive seal on the hydraulic main threw a backpressure upon all the retorts as the coal within was gasifying, and this had deleterious consequences: carbon was liable to deposit onto the insides of retorts and ascension pipes, and the bottom layer of tar through which the gas had to travel in a deeply sealed main robbed the gas of some of its illuminating value. As such, after the 1860s, hydraulic mains were run at around 1 inch of seal, and no more.
Later retort systems (many types of vertical retorts, especially ones in continuous operation) which had other anti-oxygen safeguards, such as check valves, etc., as well as larger retorts, often omitted the hydraulic main entirely and went straight to the condensers – as other apparatus and buildings could be used for tar extraction, the main was unnecessary for these systems.
Condensers were either air-cooled or water-cooled.
Air cooled condensers were often made up from odd lengths of pipe and connections. The main varieties in common use were classified as follows:
(a) Horizontal types
(b) Vertical types
(c) Annular types
(d) The battery condenser.
The horizontal condenser was an extended foul main with the pipe in a zigzag pattern from end to end of one of the retort-house walls. Flange connections were essential as blockages from naphthalene or pitchy deposits were likely to occur. The condensed liquids flowed down the sloping pipes in the same direction as the gas. As long as gas flow was slow, this was an effective method for the removal of naphthalene. Vertical air condensers had gas and tar outlets.
The annular atmospheric condenser was easier to control with respect to cooling rates. The gas in the tall vertical cylinders was annular in form and allowed an inside and outside surface to be exposed to cooling air. The diagonal side pipes conveyed the warm gas to the upper ends of each annular cylinder. Butterfly valves or dampers were fitted to the top of each vertical air pipe, so that the amount of cooling could be regulated.
The battery condenser was a long and narrow box divided internally by baffle-plates which caused the gas to take a circuitous course. The width of the box was usually about 2 feet, and small tubes passing from side to side formed the chief cooling surface. The ends of these tubes were left open to allow air to pass through. The obstruction caused by the tubes played a role in breaking up and throwing down the tars suspended in the gas.
Typically, plants using cast-iron mains and apparatus allowed 5 square feet of superficial area per 1,000 cubic feet of gas made per day. This could be slightly reduced when wrought iron or mild steel was used. [ 25 ]
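As a worked example of this rule of thumb, the following short Python sketch sizes a condenser; the daily make of 500,000 cubic feet and the slightly reduced allowance assumed for mild steel are hypothetical figures for illustration:

```python
def condenser_area_sqft(daily_make_cuft: float, sqft_per_1000: float = 5.0) -> float:
    """Superficial cooling area by the period rule of thumb:
    5 sq ft per 1,000 cu ft of gas made per day (cast-iron plant)."""
    return daily_make_cuft / 1000.0 * sqft_per_1000

print(condenser_area_sqft(500_000))       # 2500.0 sq ft for a 500,000 cu ft/day works
print(condenser_area_sqft(500_000, 4.5))  # "slightly reduced" figure for mild steel (assumed)
```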
Water-cooled condensers were mainly constructed from riveted mild steel plates (which form the outer shell) and steel or wrought-iron tubes. There were two distinct types used:
(a) Multitubular condensers.
(b) Water-tube condensers.
Unless the cooling water was exceptionally clean, the water-tube condenser was preferred. The major difference between the multitubular and water-tube condenser was that in the former the water passed outside and around the tubes which carried the hot gas, while in the latter type the opposite was the case. Thus, when only muddy water pumped from rivers or canals was available, the water-tube condenser was used. When the incoming gas was particularly dirty and contained an undesirable quantity of heavy tar, the outer chamber was liable to obstruction from this cause.
The hot gas was saturated with water vapor, which accounted for the largest portion of the total work of condensation: the water vapor had to lose large quantities of heat, as did any liquefiable hydrocarbon. Of the total work of condensation, 87% was accounted for by removing water vapor; the remainder went to cooling the permanent gases and condensing the liquefiable hydrocarbons. [ 26 ]
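The dominance of water vapour follows from its large latent heat of condensation. The back-of-the-envelope Python sketch below illustrates the proportion; every numerical value in it is an assumed, order-of-magnitude figure chosen only to show the shape of the calculation, not data from the source:

```python
# Rough illustration of why water vapour dominated the condenser's duty.
LATENT_HEAT_WATER = 970.0  # BTU per lb of steam condensed (approximate)
CP_GAS = 0.75              # BTU per lb-degF, specific heat of coal gas (assumed)

lb_water_per_lb_gas = 0.5  # vapour carried per lb of hot saturated gas (assumed)
delta_T = 100.0            # cooling span, e.g. ~160 F down to ~60 F (assumed)

heat_water = lb_water_per_lb_gas * LATENT_HEAT_WATER  # condensing the vapour
heat_gas = 1.0 * CP_GAS * delta_T                     # cooling the permanent gas

share = heat_water / (heat_water + heat_gas)
print(f"{share:.0%} of the duty goes to the water vapour")  # ~87%
```

With these illustrative figures the vapour's share comes out near the 87% quoted above; the essential point is that condensing a pound of steam releases roughly ten times the heat of cooling a pound of gas through 100 °F.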
As extremely finely divided particles were also suspended in the gas, it was impossible to separate the particulate matter solely by a reduction of vapor pressure. The gas underwent processes to remove all traces of solid or liquid matter before it reached the wet purification plant. Centrifugal separators, such as the Colman Cyclone apparatus were utilized for this process in some plants.
The hydrocarbon condensates were removed in order: first the heavy tars, then the medium tars, and finally the light tars and oil fog. About 60–65% of the tars, mostly heavy tars, would be deposited in the hydraulic main. The medium tars condensed out during the passage of the products between the hydraulic main and the condenser. The lighter tars and oil fog would travel considerably further.
In general, the temperature of the gas in the hydraulic main varied between 140 and 160 °F. The constituents most liable to be lost were benzene, toluene and, to some extent, xylene, which had an important effect on the ultimate illuminating power of the gas. Tars were detrimental to the illuminating power and were isolated from the gas as rapidly as possible. [ 27 ]
The exhauster maintained the hydraulic main and condenser at negative pressure.
There were several types of exhausters in use.
Washers and scrubbers carried out the final extraction of minor deleterious fractions.
Scrubbers which utilized water were designed in the 25 years after the foundation of the industry. It was discovered that the removal of ammonia from the gas depended upon the way in which the gas to be purified was contacted by water. This was found to be best performed by the Tower Scrubber. This scrubber consisted of a tall cylindrical vessel containing trays or bricks supported on grids. The water, or weak gas liquor, trickled over these trays, thereby keeping the exposed surfaces thoroughly wetted. The gas to be purified was run through the tower to be contacted with the liquid. In 1846 George Lowe patented a device with revolving perforated pipes for supplying water or purifying liquor. At a later date, the Rotary Washer Scrubber was introduced by Paddon, who used it at Brighton about 1870. This prototype machine was followed by others of improved construction, notably by Kirkham, Hulett, and Chandler, who introduced the well-known Standard Washer Scrubber, and by Holmes of Huddersfield, among others. The Tower Scrubber and the Rotary Washer Scrubber made it possible to completely remove ammonia from the gas. [ 7 ]
Coal gas coming directly from the bench was a noxious soup of chemicals, and removal of the most deleterious fractions was important for improving the quality of the gas, for preventing damage to equipment or premises, and for recovering revenues from the sale of the extracted chemicals. Several offensive fractions in a distributed gas could cause problems: tar might gum up the pipes (and could be sold for a good price), ammoniacal vapours might lead to corrosion (while the extracted ammonium sulfate was a decent fertilizer), naphthalene vapours might stop up the gas-mains, and even carbon dioxide in the gas was known to decrease illumination. Various facilities within the gas-works were therefore tasked with the removal of these deleterious effluents. But none of these compared with the most hazardous contaminant in the raw coal gas, the sulfuret of hydrogen ( hydrogen sulfide , H 2 S), which was regarded as unacceptable for several reasons: it was poisonous, it corroded pipes and fittings, and when burned it produced foul-smelling sulfur dioxide.
As such, the removal of the sulfuret of hydrogen was given the highest priority in the gas-works. A special facility, known as the purifier, existed to extract it. The purifier was the most important facility in the gas-works, the retort-bench itself excepted.
Originally, purifiers were simple tanks of lime-water, also known as cream or milk of lime, [ 28 ] through which the raw gas from the retort bench was bubbled to remove the sulfuret of hydrogen. This original process of purification was known as the "wet lime" process. The lime residue left over from it was one of the first true "toxic wastes", a material called " blue billy ". Originally, the waste of the purifier house was flushed into a nearby body of water, such as a river or a canal. However, after fish kills, the nauseating stench it gave the rivers, and the truly horrendous smell caused by exposure of the residuals when a river ran low, the public clamoured for better means of disposal, and so the waste was instead piled into heaps. Some enterprising gas entrepreneurs tried to sell it as a weed-killer, but most people wanted nothing to do with it; generally, it was regarded as waste which was both smelly and poisonous, and which gas-works could do little with except bury. But this was not the end of the "blue billy", for after it was buried, rain would often fall upon its burial site and leach the poison and stench from the waste, which could then drain into fields or streams. Following countless fiascoes with "blue billy" contaminating the environment, a furious public, aided by courts, juries, judges, and masters in chancery, was often very willing to demand that the gas-works seek other methods of purification – and even pay for the damages caused by the old methods.
This led to the development of the "dry lime" purification process, which was less effective than the "wet lime" process but had less toxic consequences. Still, it was quite noxious. Slaked lime (calcium hydroxide) was placed in thick layers on trays which were then inserted into a square or cylinder-shaped purifier tower, through which the gas was passed from bottom to top. After the charge of slaked lime had lost most of its absorption effectiveness, the purifier was shut off from the flow of gas and either opened, or had air piped in. Immediately, the sulfur-impregnated slaked lime would react with the air to liberate large concentrations of sulfuretted hydrogen, which would billow out of the purifier house and make the gas-works, and the district, stink of it. Though toxic in sufficient concentrations or with long exposure, the sulfuret was generally just nauseating in short exposures at moderate concentrations, and was merely a health hazard (as compared with the outright danger of "blue billy") for the gas-works employees and the neighbors of the works. The sulfuretted lime was not toxic, but it was not greatly wanted either, stinking slightly of the sulfuret; being impregnated with ammonia to some degree, it was spread as a low-grade fertilizer. The outrageous stinks from many gas-works led many citizens to regard them as public nuisances, and attracted the eye of regulators, neighbors, and courts.
The "gas nuisance" was finally solved by the "iron ore" process. Enterprising gas-works engineers discovered that bog iron ore could be used to remove the sulfuretted hydrogen from the gas, and not only could it be used for such, but it could be used in the purifier, exposed to the air, whence it would be rejuvenated, without emitting noxious sulfuretted hydrogen gas, the sulfur being retained in the iron ore. Then it could be reinserted into the purifier, and reused and rejuvenated multiple times, until it was thoroughly embedded with sulfur. It could then be sold to the sulfuric acid works for a small profit. Lime was sometimes still used after the iron ore had thoroughly removed the sulfuret of hydrogen, to remove carbonic acid (carbon dioxide, CO 2 ), the bisulfuret of carbon ( carbon disulfide , CS 2 ), and any ammonia still aeroform after its travels through the works. But it was not made noxious as before, and usually could fetch a decent rate as fertilizer when impregnated with ammonia. This finally solved the greatest pollution nuisances of the gas-works, but still lesser problems remained – not any that the purifier house could solve, though.
Purifier designs also went through different stages throughout the years.
Gasholders were constructed of a variety of materials: brick, stone, concrete, steel, or wrought iron. The holder, or floating vessel, was the storage reservoir for the gas; it served to equalize the distribution of the gas under pressure and to ensure continuity of supply while gas remained in the holder. It was cylindrical, like an inverted beaker, and worked up and down in the tank. In order to maintain a true vertical position, the vessel had rollers which worked on guide-rails attached to the tank sides and to the columns surrounding the holder.
Gasholders could be either single-lift or telescopic, in two or more lifts. When made in the telescopic form, a holder's capacity could be increased to as much as four times that of a single-lift holder for equal dimensions of tank. The telescopic versions were found to be useful as they conserved ground space and capital. [ 29 ]
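The geometry behind this capacity gain can be sketched with a short calculation; the tank dimensions below and the 2-foot step between nested lifts are hypothetical values chosen only to illustrate the principle:

```python
import math

def lift_volume(diameter_ft: float, height_ft: float) -> float:
    """Volume of one cylindrical lift, in cubic feet."""
    return math.pi * (diameter_ft / 2) ** 2 * height_ft

# Hypothetical tank 60 ft across and 20 ft deep; four nested telescopic
# lifts, each 2 ft smaller in diameter than the one outside it.
single = lift_volume(60, 20)
telescopic = sum(lift_volume(d, 20) for d in (60, 58, 56, 54))
print(round(telescopic / single, 2))  # ~3.62x the single-lift capacity
```

Because each nested lift is only slightly smaller than the one outside it, four lifts approach, without quite reaching, the fourfold figure quoted above.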
The gasworks had numerous small appurtenances and facilities to aid with diverse gas management tasks or auxiliary services.
As the years went by, boilers (for the raising of steam) became extremely common in most gas-works above those small in size; the smaller works often used gas-powered internal combustion engines to do some of the tasks that steam performed in larger workings.
Steam was in use in many areas of the gasworks, including:
For the operation of the exhauster;
For scouring of pyrolysis char and slag from the retorts and for clinkering the producer of the bench;
For the operation of engines used for conveying, compressing air, charging hydraulics, or the driving of dynamos or generators producing electric current;
To be injected under the grate of the producer in the indirectly fired bench, so as to prevent the formation of clinker, and to aid in the water-gas shift reaction, ensuring high-quality secondary combustion;
As a reactant in the (carburetted) water gas plant, as well as driving the equipment thereof, such as the numerous blowers used in that process, as well as the oil spray for the carburettor;
For the operation of fire, water, liquid, liquor, and tar pumps;
For the operation of engines driving coal and coke conveyor-belts;
For clearing of chemical obstructions in pipes, including naphthalene & tar as well as general cleaning of equipment;
For heating cold buildings in the works, for maintaining the temperature of process piping, and preventing freezing of the water of the gasholder, or congealment of various chemical tanks and wells.
Heat recovery appliances could also be classed with boilers. As the gas industry applied scientific and rational design principles to its equipment, the importance of thermal management and capture from processes became common. Even the small gasworks began to use heat-recovery generators, as a fair amount of steam could be generated for "free" simply by capturing process thermal waste using water-filled metal tubing inserted into a strategic flue.
As the electric age came into being, the gas-works began to use electricity – generated on site – for many of the smaller plant functions previously performed by steam or gas-powered engines, which were impractical and inefficient for small, sub-horsepower uses without complex and failure-prone mechanical linkages. As the benefits of electric illumination became known, sometimes the progressive gasworks diversified into electric generation as well, as coke for steam-raising could be had on-site at low prices, and boilers were already in the works.
According to Meade, the gasworks of the early 20th century generally kept several weeks' supply of coal on hand. Such stores could cause major problems, because coal was liable to spontaneous combustion when in large piles, especially if rained upon: the protective dust coating of the coal would be washed off, exposing the full porous surface area of the slightly to highly activated carbon below, and in a heavy pile with poor heat transfer characteristics the heat generated could lead to ignition. But storage in air-entrained confined spaces was not looked upon favorably either, as residual heat removal would be difficult, and fighting any fire that started could result in the formation of highly toxic carbon monoxide through the water-gas reaction, caused by water passing over extremely hot carbon (H 2 O + C = H 2 + CO), which would be dangerous outside, but deadly in a confined space.
Coal storage was designed to alleviate this problem. Two methods of storage were generally used; underwater, or outdoor covered facilities. To the outdoor covered pile, sometimes cooling appurtenances were applied as well; for example, means to allow the circulation of air through the depths of the pile and the carrying off of heat. Amounts of storage varied, often due to local conditions. Works in areas with industrial strife often stored more coal. Other variables included national security; for instance, the gasworks of Tegel in Berlin had some 1 million tons of coal (6 months of supply) in gigantic underwater bunker facilities half a mile long (Meade 2e, p. 379).
Machine stoking or power stoking was used to replace labor and minimize disruptions due to labor disputes. Each retort typically required two sets of three stokers. Two of the stokers were required to lift the point of the scoop into the retort, while the third would push it in and turn it over. Coal would be introduced from each side of the retort, and the coke produced would be removed from both sides as well. Gangs of stokers worked 12-hour shifts, although the labor was not continuous. The work was also seasonal, with extra help required in the winter. Machine stoking required more uniform placement of the retorts, and rising labor costs increased the incentive to experiment with and institute it. [ 30 ]
The chemical industries demanded coal tar , and the gas-works could provide it for them; and so the coal tar was stored on site in large underground tanks. As a rule, these were single wall metal tanks – that is, if they were not porous masonry. In those days, underground tar leaks were seen as merely a waste of tar; out of sight was truly out of mind; and such leaks were generally addressed only when the loss of revenue from leaking tar "wells", as these were sometimes called, exceeded the cost of repairing the leak.
Ammoniacal liquor was stored on site as well, in similar tanks. Sometimes the gasworks would have an ammonium sulfate plant, to convert the liquor into fertilizer, which was sold to farmers.
This large-scale gas meter precisely measured gas as it issued from the works into the mains. It was of the utmost importance, as the gasworks balanced the account of gas issued against gas paid for, and strove to detect why and how the two varied from one another. It was often coupled with a dynamic regulator to keep pressure constant, or even to modulate the pressure at specified times (a series of rapid pressure spikes was sometimes used, with appropriately equipped street-lamps, to ignite or extinguish them automatically from a distance).
This device injected a fine mist of naphtha into the outgoing gas so as to avoid the crystallization of naphthalene in the mains, and their consequent blockage. Naphtha was found to be a rather effective solvent for these purposes, even in small concentrations. Where troubles with naphthalene developed, as it occasionally did even after the introduction of this minor carburettor, a team of workers was sent out to blow steam into the main and dissolve the blockage; still, prior to its introduction, naphthalene was a very major annoyance for the gasworks.
This steam- or gas-engine-powered device compressed the gas for injection into the high-pressure mains which, in the early 1900s, began to be used to convey gas over greater distances to the individual low-pressure mains serving the end-users. This allowed a works to serve a larger area and achieve economies of scale.
| https://en.wikipedia.org/wiki/History_of_manufactured_fuel_gases
Marine biology is a hybrid subject that combines aspects of organismal function, ecological interaction and the study of marine biodiversity. [ 1 ] The earliest studies of marine biology trace back to the Phoenicians and the Greeks who are known as the initial explorers of the oceans and their composition. [ 2 ] The first recorded observations on the distribution and habits of marine life were made by Aristotle (384–322 BC). [ 3 ]
Observations made in the earliest studies of marine biology provided an impetus for the age of discovery and exploration that followed. During this time, a vast amount of knowledge was gained about life that exists in the oceans. Individuals who contributed significantly to this pool of knowledge include Captain James Cook (1728–1779), Charles Darwin (1809–1882) and Wyville Thomson (1830–1882). [ 4 ]
These individuals took part in some of the best-known expeditions of all time, making foundational contributions to marine biology. [ 5 ] The era was important for the history of marine biology, but naturalists were still constrained by available technologies that limited their ability to effectively locate and accurately examine species inhabiting the deep parts of the ocean.
The subsequent creation of marine laboratories was another important development because marine scientists now had places to conduct research and process their specimens from expeditions. Technological advances, such as sound ranging, scuba diving gear, submersibles and remotely operated vehicles, progressively made it easier to study the deep ocean. This allowed marine biologists to explore depths people once thought never existed. [ 6 ]
The history of marine biology can be traced as far back as 1200 BC, when the Phoenicians and the Greeks began ocean voyages using celestial navigation. [ 2 ] Phoenicians and Greeks were some of the first known explorers to leave their local communities bordering the Mediterranean Sea . They ventured outside the Mediterranean into the Atlantic Ocean with their knowledge of tides, currents and seasonal changes. It was not until much later, around 450 BC, that observations of natural phenomena related to the oceans started to be recorded. Herodotus (484–425 BC) wrote of the regular tides in the Persian Gulf and the deposition of silt in the Nile Delta , and used the term "Atlantic" to describe the western seas for the first time. It was during this time that many of the first observations about the composition of the oceans were recorded. [ 7 ]
During the sixth century BC, the Greek philosopher Xenophanes (570-475 BC) recognised that some fossil shells were remains of shellfish. He used this to argue that what was at the time dry land was once under the sea. [ 8 ] This was an important step in advancing from simply stating an idea to backing it with evidence and observation. [ 9 ]
Later, during the fourth century BC, another Greek philosopher Aristotle (384–322 BC) initiated the tradition of natural philosophy and influenced the beginnings of marine biology with the early observations he made about marine life. [ 3 ] Aristotle attempted a comprehensive classification of animals which included systematic descriptions of many marine species, [ 10 ] [ 11 ] and particularly species found in the Mediterranean Sea. [ 12 ] These pioneering works include History of Animals , a general biology of animals, Parts of Animals , a comparative anatomy and physiology of animals, and Generation of Animals , on developmental biology. The most striking passages are about the sea-life visible from observation on Lesbos and available from the catches of fishermen. His observations on catfish , electric fish ( Torpedo ) and angler-fish are detailed, as is his writing on cephalopods , namely, Octopus , Sepia ( cuttlefish ) and the paper nautilus ( Argonauta argo ). His description of the hectocotyl arm , used in sexual reproduction, was widely disbelieved until its rediscovery in the 19th century. He separated aquatic mammals from fish, and knew that sharks and rays were part of a group he called Selachē ( selachians ). [ 13 ] He gave accurate descriptions of the ovoviviparous embryological development of the hound shark Mustelus mustelus . [ 14 ] His classification of living things contains elements which were still in use in the 19th century. What the modern zoologist would call vertebrates and invertebrates, Aristotle called "animals with blood" and "animals without blood" (he did not know that complex invertebrates do make use of hemoglobin , but of a different kind from vertebrates). He divided animals with blood into live-bearing (mammals), and egg-bearing (birds and fish). Invertebrates ("animals without blood") he divided into insects, crustacea (further divided into non-shelled – cephalopods – and shelled) and testacea (molluscs). [ 15 ] [ 16 ]
The Polynesians were also very involved in the exploration of marine life, and their efforts are often overlooked. [ 17 ] Throughout the time period of 300–1275 AD the Polynesians made efforts to explore and populate the great Polynesian triangle , which is bounded in the east by Easter Island , in the north by Hawaii and in the southwest by New Zealand . The Polynesians were among the first to go out and explore the mysteries of the ocean and marine life. In the years that followed, there were few efforts aimed at furthering humanity's understanding of the sea. This ended with the Age of Discovery in the late 15th century. [ 18 ]
Between the late 15th century and early 20th century, humans explored the oceans like never before, creating new maps and charts and collecting specimens to bring back to their home ports. Most of the exploration that took place during this time was fueled by European countries such as Spain, Portugal, France, Italy, Scotland and Germany. Some of the landmark explorers of marine biology carried out their famous work during this time period. Explorers such as Captain James Cook, Charles Darwin and Wyville Thomson made revolutionary contributions to the history of marine biology during this time of exploration. [ 19 ]
James Cook is well known for his voyages of exploration for the British Navy in which he mapped out a significant amount of the world's uncharted waters. Cook's explorations took him around the world twice and led to countless descriptions of previously unknown plants and animals. Cook's explorations influenced many others and led to a number of scientists examining marine life more closely. Among those influenced was Charles Darwin who went on to make many contributions of his own. [ 2 ]
Charles Darwin , best known for his theory of evolution , made many significant contributions to the early study of marine biology. He spent much of his time from 1831 to 1836 on the voyage of HMS Beagle collecting and studying specimens from a variety of marine organisms. It was also on this expedition where Darwin began to study coral reefs and their formation. He theorized that the overall growth of corals is a balance between their upward growth and the sinking of the sea floor. [ 20 ] He further proposed that wherever coral atolls were found, the central island on which the coral had started to grow had been gradually subsiding. [ 21 ]
Another influential expedition was the voyage of HMS Challenger from 1872 to 1876, organized and later led by Charles Wyville Thomson . It was the first expedition purely devoted to marine science. The expedition collected and analyzed thousands of marine specimens, laying the foundation for present knowledge about life near the deep-sea floor. [ 22 ] The findings from the expedition were a summary of the known natural, physical and chemical ocean science to that time. [ 23 ]
This era of marine exploration came to a close with the first and second round-the-world voyages of the Danish Galathea expeditions and Atlantic voyages by the USS Albatross , the first research vessel purpose-built for marine research. These voyages further cleared the way for modern marine biology by building a base of knowledge about ocean life. This was followed by the progressive development of more advanced technologies which began to allow more extensive explorations of ocean depths that were once thought too deep to sustain life. [ 22 ]
In the 1960s and 1970s, ecological research into the life of the ocean was undertaken at institutions set up specifically to study marine biology. Notable was the Woods Hole Oceanographic Institution in America, [ 24 ] [ 25 ] which established a model for other marine laboratories subsequently set up around the world. [ 19 ] [ 26 ] Their findings of unexpectedly high species diversity in places thought to be uninhabitable stimulated much theorizing by population ecologists on how high diversification could be maintained in such a food-poor and seemingly hostile environment. [ 25 ]
In the past, the study of marine biology has been limited by a lack of technology, as researchers could only go so deep to examine life in the ocean. [ 27 ] Before the mid-twentieth century, the deep-sea bottom could not be seen unless one dredged a piece of it and brought it to the surface. This has changed dramatically due to the development of new technologies in both the laboratory and the open sea. These new technological developments have allowed scientists to explore parts of the ocean they did not even know existed. [ 28 ]
The development of scuba gear, a self-contained underwater breathing apparatus, allowed researchers to visually explore the oceans by letting a diver breathe while submerged, typically to depths of 100 to 200 feet. [ 29 ] Submersibles were built like small submarines to take marine scientists to greater ocean depths while protecting them from the increasing water pressure that causes complications deep under water. The first models could hold several individuals and allowed limited visibility, but enabled marine biologists to see and photograph the deeper portions of the oceans. [ 29 ] Remotely operated underwater vehicles (ROVs) are now used, with and without submersibles, to reach the deepest areas of the ocean that would be too dangerous for humans. ROVs are fully equipped with cameras and sampling equipment, allowing researchers to see and control everything the vehicle does. ROVs have become the dominant type of technology used to view the deepest parts of the ocean. [ 29 ]
In the late 20th century and into the 21st, marine biology was "glorified and romanticized through films and television shows," leading to an influx of interested students whose enthusiasm had to be tempered by the day-to-day realities of the field. [ 30 ]
Materials science has shaped the development of civilizations since the dawn of humankind. Better materials for tools and weapons have allowed people to spread and conquer, and advancements in material processing, such as steel and aluminum production, continue to impact society today. Historians have regarded materials as so important an aspect of civilizations that entire periods of time have been defined by the predominant material used ( Stone Age , Bronze Age , Iron Age ). For most of recorded history, control of materials had been through alchemy or empirical means at best. The study and development of chemistry and physics assisted the study of materials, and eventually the interdisciplinary study of materials science emerged from the fusion of these studies. [ 1 ] The history of materials science is the study of how different materials were used and developed through the history of Earth and how those materials affected the culture of the peoples of the Earth. The term " Silicon Age " is sometimes used to refer to the modern period of history during the late 20th to early 21st centuries.
In many cases, different cultures leave their materials as their only records, which anthropologists can use to define the existence of such cultures. The progressive use of more sophisticated materials allows archeologists to characterize and distinguish between peoples. This is partially due to the major material of use in a culture and to its associated benefits and drawbacks. Stone-Age cultures were limited by the rocks they could find locally and by those they could acquire by trading. The use of flint around 300,000 BCE is sometimes [ when? ] considered the beginning of the use of ceramics . The use of polished stone axes marks a significant advance, because a much wider variety of rocks could serve as tools.
The innovation of smelting and casting metals in the Bronze Age started to change the way that cultures developed and interacted with each other. [ citation needed ] Starting around 5,500 BCE, early smiths began to re-shape native copper and gold into tools and weapons without the use of fire. The heating of copper and its shaping with hammers began around 5,000 BCE. [ citation needed ] Melting and casting started around 4,000 BCE. Metallurgy had its dawn with the reduction of copper from its ore around 3,500 BCE. The first alloy , bronze , came into use around 3,000 BCE. [ citation needed ]
The use of materials began in the Stone Age. Typically, materials such as bone, fibers, feathers, shells, animal skin, and clay were used for weapons, tools, jewelry, and shelter. The earliest tools, from the Paleolithic age, are called Oldowan . These were tools created from chipped rocks that would be used for scavenging purposes. [ citation needed ] As history carried on into the Mesolithic age, tools became more complex and symmetrical in design, with sharper edges. Moving into the Neolithic age, agriculture began to develop as new ways to form tools for farming were discovered. Nearing the end of the Stone Age, humans began using copper, gold, and silver as materials. Due to these metals' softness, they were generally used for ceremonial purposes and to create ornaments or decorations, and they did not replace other materials for use in tools. The simplicity of the tools used reflected the limited understanding of the humans of the time. [ 2 ]
The use of copper had become very apparent to civilizations: its elasticity and plasticity allow it to be hammered into useful shapes, and it can be melted and poured into intricate forms. Although the advantages of copper were many, the material was too soft to find large-scale usefulness. Through experimentation or by chance, additions to copper led to the increased hardness of a new metal alloy, called bronze. [ 3 ] Bronze was originally composed of copper and arsenic, forming arsenic bronze. [ 4 ]
Iron working came into prominence from about 1,200 BCE. In the 10th century BCE, glass production began in the ancient Near East . In the 3rd century BCE, people in ancient India developed wootz steel , the first crucible steel . In the 1st century BCE, glassblowing techniques flourished in Phoenicia . In the 2nd century CE, steel -making became widespread in Han dynasty China. The 4th century CE saw the production of the Iron pillar of Delhi , the oldest surviving example of corrosion-resistant steel.
Wood , bone , stone , and earth are some of the materials that formed the structures of the Roman Empire . Certain structures were made possible by the character of the land upon which they were built. Romans mixed powdered limestone, volcanic ash from Mount Vesuvius, and water to make a cement paste. [ 5 ] A volcanic peninsula with stone aggregates and conglomerates containing crystalline material will produce material which weathers differently from soft, sedimentary rock and silt. With the discovery of cement paste, structures could be built with irregularly shaped stones, with the binder filling the voids to create a solid structure. The cement gains strength as it hydrates , thus creating a stronger bond over time. With the fall of the Western Roman Empire and the rise of the Byzantine Empire , this knowledge was mostly lost except to Catholic monks, who were among the few who could read Vitruvius' Latin and make use of the concrete paste. [ 6 ] That is one of the reasons that the concrete Pantheon of Rome could last for 1,850 years, and why the thatched farmhouses of Holland sketched by Rembrandt have long since decayed.
The use of asbestos as a material blossomed in Ancient Greece , especially when the fireproofing qualities of the material came to light. Many scholars believe the word asbestos comes from an Ancient Greek term, ἄσβεστος ( ásbestos ), meaning "inextinguishable" or "unquenchable". [ 7 ] Clothes for nobles, tablecloths and other adornments were all furnished with a weave of the fibrous material, as such items could be cleansed by throwing them directly into fire. [ 8 ] The use of this material, however, was not without its downsides: Pliny the Elder noted a link between work in the asbestos mines and the quick deaths of slaves. He recommended that slaves working in this environment use a bladder skin as a makeshift respirator . [ 9 ]
After the thigh bone daggers of the early hunter-gatherers were superseded by wood and stone axes, and then by copper , bronze and iron implements of the Roman civilization, more precious materials could then be sought, and gathered together. Thus the medieval goldsmith Benvenuto Cellini could seek and defend the gold which he had to turn into objects of desire for dukes and popes . The Autobiography of Benvenuto Cellini contains one of the first descriptions of a metallurgical process.
The use of cork , which has recently been added to the category of materials science, had its first mentions in Horace , Pliny, and Plutarch . [ 10 ] It had many uses in antiquity: fishing and safety devices (because of its buoyancy), an engraving medium, sandal soles to increase stature, container stoppers, and insulation. It was also used to help cure baldness in the second century. [ 11 ]
In the Ancient Roman era, glassblowing became an art involving the addition of decor and tints. Glassmakers were also able to create complex shapes through the use of molds, a technology that allowed them to imitate gemstones. [ 12 ] Window glass was formed by casting into flat clay molds, then removed and cleaned. [ 12 ] The texture in stained glass comes from the texture the sand mold left on the side in contact with the mold. [ 12 ]
Polymeric composites also made an appearance during this time frame in the form of wood . By 80 BC, petrified resin and keratin were used in accessories, in the form of amber and tortoiseshell respectively. [ 10 ]
In Alexandria in the first century BC, glass blowing was developed, in part due to new furnaces that could create higher temperatures by using a clay-coated reed pipe. [ 12 ] Plant-ash and natron glass, the latter being the primary type, were used in blown pieces. Coastal and semi-desert plants worked best due to their low magnesium oxide and potassium oxide content. The Levant , North Africa , and Italy were where blown glass vessels were most common. [ 13 ]
Proto-porcelain material has been discovered dating back to the Neolithic period, with shards of material found in archaeological sites from the Eastern Han period in China. These wares are estimated to have been fired at 1260 °C to 1300 °C. [ 14 ] In the 8th century, porcelain was invented in Tang dynasty China. Porcelain in China prompted the methodical development of widely used kilns, which increased the quality and quantity of porcelain that could be produced. [ 15 ] Tin-glazing of ceramics was invented by Arabic chemists and potters in Basra , Iraq . [ 16 ]
During the Early Middle Ages, the technique of creating windows steered more towards blowing non-tinted balls of glass that were later flattened; in the late Middle Ages, the methodology returned to that of antiquity with a few minor adjustments, which included rolling with metallic rollers. [ 12 ]
In the 9th century, stonepaste ceramics were invented in Iraq , [ 16 ] and lustreware appeared in Mesopotamia . [ 17 ] In the 11th century, Damascus steel was developed in the Middle East . In the 15th century, Johann Gutenberg developed type metal alloy and Angelo Barovier invented cristallo , a clear soda-based glass.
In 1540, Vannoccio Biringuccio published De la pirotechnia , the first systematic book on metallurgy ; in 1556, Georg Agricola wrote De Re Metallica , an influential book on metallurgy and mining ; and around this time glass lenses were developed in the Netherlands and used for the first time in microscopes and telescopes . [ citation needed ]
In the 17th century, Galileo 's Two New Sciences ( strength of materials and kinematics ) includes the first quantitative statements in the science of materials. [ citation needed ]
In the 18th century, William Champion patented a process for the production of metallic zinc by distillation from calamine and charcoal, Bryan Higgins was issued a patent for hydraulic cement ( stucco ) for use as an exterior plaster , and Alessandro Volta made a copper-zinc acid battery . [ citation needed ]
In the 19th century, Thomas Johann Seebeck invented the thermocouple , Joseph Aspdin invented Portland cement , Charles Goodyear invented vulcanized rubber , Louis Daguerre and William Fox Talbot invented silver -based photographic processes, James Clerk Maxwell demonstrated color photography, and Charles Fritts made the first solar cells using selenium wafers. [ citation needed ]
Before the early 1800s, aluminum had not been produced as an isolated metal. It was not until 1825 that Hans Christian Ørsted discovered how to create elemental aluminum via the reduction of aluminum chloride. Since aluminum is a light metal with good mechanical properties, it was widely sought as a replacement for heavier, less functional metals like silver and gold. Napoleon III used aluminum plates and utensils for his honored guests, while the rest were given silver. [ 18 ] However, this process was still expensive and was still not able to produce the metal in large quantities. [ 19 ]
In 1886, the American Charles Martin Hall and the Frenchman Paul Héroult, working completely independently of each other, invented a process to produce aluminum from aluminum oxide via electrolysis. [ 20 ] This process allowed aluminum to be manufactured more cheaply than ever before, and laid the groundwork for turning the element from a precious metal into an easily obtainable commodity. Around the same time, in 1888, Carl Josef Bayer was working in St Petersburg, Russia to develop a method to make pure alumina for the textile industry. This process involved dissolving the aluminum oxide out of the bauxite mineral to produce gibbsite, which can then be purified back into raw alumina. The Bayer process and the Hall–Héroult process are still used today to produce the majority of the world's alumina and aluminum. [ 21 ]
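The overall cell reaction of the Hall–Héroult process, as summarized in standard textbooks (not taken from the sources cited here), shows why the carbon anodes are consumed during electrolysis:

$$2\,\mathrm{Al_2O_3} + 3\,\mathrm{C} \longrightarrow 4\,\mathrm{Al} + 3\,\mathrm{CO_2}$$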
Most fields of study have a founding father, such as Newton in physics and Lavoisier in chemistry. Materials science, on the other hand, has no central figure. [ 22 ] In the 1940s, wartime collaborations among multiple fields of study to produce technological advances provided a structure for the future field of study that would become known as materials science and engineering. [ 23 ] One of the reasons behind the advancements during this period was the cracking problems associated with the Liberty ships being used during WWII. Various reinforcements were applied to the Liberty ships to arrest the cracking problem, and these were some of the first structural tests that gave birth to the study of materials. During the Cold War in the 1950s, the US President's Science Advisory Committee (PSAC) made materials a priority when it realized that materials were the limiting factor for advances in space and military technology. In 1958, President Dwight D. Eisenhower created the Advanced Research Projects Agency (ARPA), [ 24 ] referred to as the Defense Advanced Research Projects Agency (DARPA) since 1996. In 1960, ARPA encouraged the establishment of interdisciplinary laboratories (IDLs) on university campuses, which would be dedicated to the research of materials, as well as to the education of students in how to conduct materials science research. [ 25 ] ARPA offered four-year IDL contracts to universities, originally to Cornell University , University of Pennsylvania , and Northwestern University , eventually granting nine more contracts. [ 26 ] Although ARPA is no longer in control of the IDL program (the National Science Foundation took over the program in 1972 [ 26 ] ), the original establishment of IDLs marked a significant milestone in the United States ' research and development of materials science. Several institutions' departments changed titles from "metallurgy" to "metallurgy and materials science" in the 1960s. [ 22 ] [ 27 ]
In the early part of the 20th century, most engineering schools had a department of metallurgy and perhaps of ceramics as well. Much effort was expended on consideration of the austenite - martensite - cementite phases found in the iron - carbon phase diagram that underlies steel production. [ citation needed ] The fundamental understanding of other materials was not sufficiently advanced for them to be considered as academic subjects. In the post WWII era, the systematic study of polymers advanced particularly rapidly. Rather than create new polymer science departments in engineering schools, administrators and scientists began to conceive of materials science as a new interdisciplinary field in its own right, one that considered all substances of engineering importance from a unified point of view. Northwestern University instituted the first materials science department in 1955. [ 28 ]
Richard E. Tressler was an international leader in the development of high temperature materials. He pioneered high temperature fiber testing and use, advanced instrumentation and test methodologies for thermostructural materials, and design and performance verification of ceramics and composites in high temperature aerospace, industrial and energy applications. He was founding director of the Center for Advanced Materials (CAM), which supported many faculty and students from the College of Earth and Mineral Science , the Eberly College of Science , the College of Engineering, the Materials Research Laboratory and the Applied Research Laboratories at Penn State on high temperature materials. His vision for interdisciplinary research played a key role in the creation of the Materials Research Institute. Tressler's contribution to materials science is celebrated with a Penn State lecture named in his honor. [ 29 ]
The Materials Research Society (MRS) [ 30 ] has been instrumental in creating an identity and cohesion for this young field in the US. MRS was the brainchild of researchers at Penn State University and grew out of discussions initiated by Prof. Rustum Roy in 1970. The first meeting of MRS was held in 1973. As of 2006 [ needs update ] , MRS has grown into an international society that sponsors a large number of annual meetings and has over 13,000 members. MRS sponsors meetings that are subdivided into symposia on a large variety of topics as opposed to the more focused meetings typically sponsored by organizations like the American Physical Society or the IEEE . The fundamentally interdisciplinary nature of MRS meetings has had a strong influence on the direction of science, particularly in the popularity of the study of soft materials , which are in the nexus of biology, chemistry, physics and mechanical and electrical engineering. Because of the existence of integrative textbooks, materials research societies and university chairs in all parts of the world, BA, MA and PhD programs and other indicators of discipline formation, it is fair to call materials science (and engineering) a discipline. [ 31 ]
The field of crystallography , where X-rays are shone through crystals of a solid material, was founded by William Henry Bragg and his son William Lawrence Bragg at the Institute of Physics during and after World War I . [ citation needed ] Materials science became a major established discipline following the onset of the Silicon Age and Information Age . This led to the development of modern computers and then mobile phones , with the need to make them smaller, faster and more powerful leading materials science to develop smaller and lighter materials capable of dealing with more complex calculations. This in turn enabled computers to be used to solve complex crystallographic calculations and automate crystallography experiments, allowing researchers to design more accurate and powerful techniques. Along with computers and crystallography, the development of laser technology from 1960 onwards led to the development of light-emitting diodes (used in DVD players and smartphones ), fibre-optic communication (used in global telecommunications ), and confocal microscopy , a key tool in materials science. [ 32 ]
The history of mathematical notation [ 1 ] covers the introduction, development, and cultural diffusion of mathematical symbols and the conflicts between notational methods that arise during a notation's move to popularity or obsolescence. Mathematical notation [ 2 ] comprises the symbols used to write mathematical equations and formulas . Notation generally implies a set of well-defined representations of quantities and operators. [ 3 ] The history includes Hindu–Arabic numerals , letters from the Roman , Greek , Hebrew , and German alphabets , and a variety of symbols invented by mathematicians over the past several centuries.
The historical development of mathematical notation can be divided into three stages: the rhetorical stage, in which calculations were written out entirely in words; the syncopated stage, in which abbreviations for recurring operations and quantities came into use; and the symbolic stage, in which comprehensive systems of symbols replaced prose. [ 4 ] [ 5 ]
The more general area of study known as the history of mathematics primarily investigates the origins of discoveries in mathematics. The specific focus of this article is the investigation of mathematical methods and notations of the past.
Many areas of mathematics began with the study of real world problems , before the underlying rules and concepts were identified and defined as abstract structures . For example, geometry has its origins in the calculation of distances and areas in the real world; algebra started with methods of solving problems in arithmetic . The earliest mathematical notations emerged from these problems.
There can be no doubt that most early peoples who left records knew something of numeration and mechanics and that a few were also acquainted with the elements of land-surveying . In particular, the ancient Egyptians paid attention to geometry and numbers, and the ancient Phoenicians performed practical arithmetic, book-keeping , navigation , and land-surveying. The results attained by these people seem to have been accessible (under certain conditions) to travelers, facilitating dispersal of the methods . It is probable that the knowledge of the Egyptians and Phoenicians was largely the result of observation and measurement , and represented the accumulated experience of many ages. Subsequent studies of mathematics by the Greeks were largely indebted to these previous investigations.
Written mathematics began with numbers expressed as tally marks , with each tally representing a single unit. Numerical symbols consisted probably of strokes or notches cut in wood or stone, which were intelligible across cultures. For example, one notch in a bone represented one animal, person, or object. Numerical notation's distinctive feature—symbols having both local and intrinsic values—implies a state of civilization at the period of its invention.
The earliest evidence of written mathematics dates back to the ancient Sumerians and the system of metrology from 3000 BC. From around 2500 BC onwards, the Sumerians wrote multiplication tables on clay tablets and dealt with geometrical exercises and division problems. The earliest traces of Babylonian numerals also date back to this period. [ 8 ] Babylonian mathematics has been reconstructed from more than 400 clay tablets unearthed since the 1850s. [ 9 ] Written in cuneiform , these tablets were inscribed whilst the clay was soft and then baked hard in an oven or by the heat of the sun. Some of these appear to be graded homework. [ citation needed ]
The majority of Mesopotamian clay tablets date from 1800 to 1600 BC, and cover topics which include fractions, algebra, quadratic and cubic equations, and the calculation of regular numbers , reciprocals , and pairs . [ 10 ] The tablets also include multiplication tables and methods for solving linear and quadratic equations . The Babylonian tablet YBC 7289 gives an approximation of √2 that is accurate to an equivalent of six decimal places.
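The tablet's sexagesimal digits are conventionally read as 1;24,51,10. A short worked conversion (an illustration, not text from the tablet itself) shows why this value agrees with √2 to about six decimal places:

$$1 + \frac{24}{60} + \frac{51}{60^{2}} + \frac{10}{60^{3}} = 1.41421296\ldots, \qquad \sqrt{2} = 1.41421356\ldots$$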
Babylonian mathematics was written using a sexagesimal (base-60) numeral system . From this derives the modern-day usage of 60 seconds in a minute, 60 minutes in an hour, and 360 (60 × 6) degrees in a circle, as well as the use of minutes and seconds of arc to denote fractions of a degree. Babylonian advances in mathematics were facilitated by the fact that 60 has many divisors: the reciprocal of any integer whose prime factors all divide 60 (that is, 2, 3, and 5) has a finite expansion in base 60. (In decimal arithmetic, only reciprocals of integers whose prime factors are 2 and 5 have finite decimal expansions.) Also, unlike the Egyptians, Greeks, and Romans, the Babylonians had a true place-value system, where digits written in the left column represented larger values, much as in the decimal system. They lacked, however, an equivalent of the decimal point, and so the place value of a symbol often had to be inferred from the context.
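This divisor property is easy to check computationally: 1/n terminates in base 60 exactly when n has no prime factor other than 2, 3, or 5. Below is a minimal Python sketch (the function name is our own, purely illustrative):

```python
def terminates_in_base60(n: int) -> bool:
    """Return True if 1/n has a finite sexagesimal expansion.

    This holds exactly when all of n's prime factors divide 60,
    i.e. when n is a product of 2s, 3s, and 5s.
    """
    for p in (2, 3, 5):
        while n % p == 0:
            n //= p
    return n == 1

# 1/8 terminates (0;7,30 in sexagesimal), but 1/7 does not.
print([n for n in range(2, 20) if terminates_in_base60(n)])
# [2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18]
```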
Initially, the Mesopotamians had symbols for each power of ten. [ 11 ] Later, they wrote numbers in almost exactly the same way as in modern times. Instead of using unique symbols for each power of ten, they wrote only the coefficients of each power of ten, with each digit separated by only a space. By the time of Alexander the Great , they had created a symbol that represented zero and was a placeholder.
Rhetorical algebra was first developed by the ancient Babylonians and remained dominant up to the 16th century. In this system, equations are written in full sentences. For example, the rhetorical form of x + 1 = 2 {\displaystyle x+1=2} is "The thing plus one equals two" or possibly "The thing plus 1 equals 2". [ citation needed ]
The ancient Egyptians numerated by hieroglyphics . [ 12 ] [ 13 ] Egyptian mathematics had symbols for one, ten, one hundred, one thousand, ten thousand, one hundred thousand, and one million. Smaller digits were placed on the left of the number, as they are in Hindu–Arabic numerals. Later, the Egyptians used hieratic instead of hieroglyphic script to show numbers. Hieratic was more like cursive and replaced several groups of symbols with individual ones. For example, the four vertical lines used to represent the number 'four' were replaced by a single horizontal line. This is found in the Rhind Mathematical Papyrus (c. 2000–1800 BC) and the Moscow Mathematical Papyrus (c. 1890 BC). The system the Egyptians used was discovered and modified by many other civilizations in the Mediterranean. The Egyptians also had symbols for basic operations: legs going forward represented addition, and legs walking backward represented subtraction.
The peoples with whom the Greeks of Asia Minor (amongst whom notation in western history begins) were likely to have come into frequent contact were those inhabiting the eastern littoral of the Mediterranean; Greek tradition uniformly assigned the special development of geometry to the Egyptians, and the science of numbers to either the Egyptians or the Phoenicians.
The history of mathematics cannot with certainty be traced back to any school or period before that of the Ionian Greeks. Still, the subsequent history may be divided into periods, the distinctions between which are tolerably well-marked. Greek mathematics , which originated with the study of geometry, tended to be deductive and scientific from its commencement. Since the fourth century AD, Pythagoras has commonly been given credit for discovering the Pythagorean theorem , a theorem in geometry that states that in a right-angled triangle the area of the square on the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares of the other two sides. [ 14 ] However, this geometric relationship appears in a few earlier ancient mathematical texts (albeit not as a formalized theorem), notably Plimpton 322 , a Babylonian tablet of mathematics from around 1900 BC. The study of mathematics as a subject in its own right began in the 6th century BC with the Pythagoreans , who coined the term "mathematics" from the ancient Greek mathema (μάθημα), meaning "subject of instruction". [ 15 ]
Plato 's influence was especially strong in mathematics and the sciences. He helped to distinguish between pure and applied mathematics by widening the gap between "arithmetic" (now called number theory ) and "logistic" (now called arithmetic ). Greek mathematics greatly refined the methods (especially through the introduction of deductive reasoning and mathematical rigor in proofs ) and expanded the subject matter of mathematics. [ 16 ] Aristotle is credited with what later would be called the law of excluded middle .
Abstract or pure mathematics [ 17 ] deals with concepts like magnitude and quantity without regard to any practical application or situation, and includes arithmetic and geometry . In contrast, in mixed or applied mathematics , mathematical properties and relationships are applied to real-world objects to model laws of physics, for example in hydrostatics , optics , and navigation . [ 17 ]
Archimedes is generally considered to be the greatest mathematician of antiquity and one of the greatest of all time. [ 18 ] [ 19 ] He used the method of exhaustion to calculate the area under the arc of a parabola with the summation of an infinite series , and gave a remarkably accurate approximation of pi . [ 20 ] He also defined the spiral bearing his name, formulae for the volumes of surfaces of revolution , and an ingenious system for expressing very large numbers.
The ancient Greeks made steps in the abstraction of geometry. Euclid's Elements (c. 300 BC) is the earliest extant documentation of the axioms of plane geometry—though Proclus tells of an earlier axiomatisation by Hippocrates of Chios [ 21 ] —and is one of the oldest extant Greek mathematical treatises. Consisting of thirteen books, it collects theorems proven by other mathematicians, supplemented by some original work. The document is a successful collection of definitions, postulates (axioms), propositions (theorems and constructions), and mathematical proofs of the propositions, and covers topics such as Euclidean geometry, geometric algebra, elementary number theory, and the ancient Greek version of algebraic systems. The first theorem given in the text, Euclid's lemma , captures a fundamental property of prime numbers . The text was ubiquitous in the quadrivium and was instrumental in the development of logic, mathematics, and science. Autolycus ' On the Moving Sphere is another ancient mathematical manuscript of the time. [ citation needed ]
The next phase of notation for algebra was syncopated algebra, in which some symbolism is used, but which does not contain all of the characteristics of symbolic algebra. For instance, there may be a restriction that subtraction may be used only once within one side of an equation, which is not the case with symbolic algebra. Syncopated algebraic expression first appeared in a series of books called Arithmetica , by Diophantus of Alexandria (3rd century AD; many lost), followed by Brahmagupta 's Brahma Sphuta Siddhanta (7th century).
The ancient Greeks employed Attic numeration , [ 22 ] which was based on the system of the Egyptians and was later adapted and used by the Romans . Greek numerals one through four were written as vertical lines, as in the hieroglyphics. The symbol for five was the Greek letter Π (pi), representing the Greek word for 'five' ( pente ). Numbers six through nine were written as a Π with vertical lines beside it. Ten was represented by the letter Δ (delta), from the word for 'ten' ( deka ), one hundred by the letter Η, from the word for 'hundred' ( hekaton ), and so on. This system was 'acrophonic' since it was based on the first sound of the numeral. [ 22 ]
Milesian (Ionian) numeration was another Greek numeral system. It was constructed by partitioning the twenty-four letters of the Greek alphabet, plus three archaic letters, into three classes of nine letters each, and using them to represent the units, tens, and hundreds. [ 22 ] ( Jean Baptiste Joseph Delambre 's Astronomie Ancienne, t. ii.)
This system appeared in the third century BC, before the letters digamma (Ϝ), koppa (Ϟ), and sampi (Ϡ) became obsolete. When lowercase letters became differentiated from uppercase letters, the lowercase letters were used as the symbols for notation. Multiples of one thousand were written as the nine numbers with a stroke in front of them: thus, one thousand was ",α", two thousand was ",β", etc. The letter M (for μύριοι , as in "myriad") was used to multiply numbers by ten thousand. For example, the number 88,888,888 would be written as M,ηωπη*ηωπη. [ 23 ]
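The mechanics described above are regular enough to automate. The sketch below renders 1–9999 in Milesian numerals under the standard letter assignments (units α–θ with digamma for 6, tens ι–ϟ with koppa for 90, hundreds ρ–ϡ with sampi for 900); the myriad sign M for larger numbers is omitted:

```python
# Milesian numeration, 1-9999: units, tens, and hundreds each use nine letters,
# including the archaic digamma (6), koppa (90), and sampi (900).
UNITS = ["", "α", "β", "γ", "δ", "ε", "Ϝ", "ζ", "η", "θ"]
TENS = ["", "ι", "κ", "λ", "μ", "ν", "ξ", "ο", "π", "Ϟ"]
HUNDREDS = ["", "ρ", "σ", "τ", "υ", "φ", "χ", "ψ", "ω", "Ϡ"]

def milesian(n: int) -> str:
    """Render 1 <= n <= 9999 as a Milesian numeral; thousands take a leading stroke."""
    if not 1 <= n <= 9999:
        raise ValueError("larger numbers need the myriad sign M")
    thousands, rest = divmod(n, 1000)
    hundreds, rest = divmod(rest, 100)
    tens, units = divmod(rest, 10)
    prefix = "," + UNITS[thousands] if thousands else ""
    return prefix + HUNDREDS[hundreds] + TENS[tens] + UNITS[units]

print(milesian(8888))  # ,ηωπη -- the group repeated around M in 88,888,888 above
```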
Milesian numeration, though far less convenient than modern numerals, was formed on a perfectly regular and scientific plan, [ 24 ] and could be used with tolerable effect as an instrument of calculation, to which purpose the Roman system was totally inapplicable.
Greek mathematical reasoning was almost entirely geometric (albeit often used to reason about non-geometric subjects such as number theory ), and hence the Greeks had no interest in algebraic symbols. An exception was the great algebraist Diophantus of Alexandria . [ 25 ] His Arithmetica was one of the texts to use symbols in equations. It was not completely symbolic, but was much more so than previous books. In it, an unknown number was called s ; the square of s was Δ y {\displaystyle \Delta ^{y}} ; the cube was K y {\displaystyle K^{y}} ; the fourth power was Δ y Δ {\displaystyle \Delta ^{y}\Delta } ; and the fifth power was Δ K y {\displaystyle \Delta K^{y}} . [ 26 ]
The ancient Chinese used numerals that look much like the tally system. [ 27 ] Numbers one through four were horizontal lines. Five was an X between two horizontal lines; it looked almost exactly the same as the Roman numeral for ten. Nowadays, this huama numeral system is only used for displaying prices in Chinese markets or on traditional handwritten invoices.
Mathematics in China emerged independently by the 11th century BC, [ 28 ] but has much older roots. The ancient Chinese were acquainted with astronomical cycles, geometrical implements like the rule , compass , and plumb-bob , and machines like the wheel and axle . The Chinese independently developed very large and negative numbers , decimals , a place value decimal system, a binary system , algebra, geometry, and trigonometry. As in other early societies, the purpose of astronomy was to perfect the agricultural calendar and other practical tasks, not to establish a formal system ; thus, the duties of the Chinese Board of Mathematics were confined to the annual preparation of the dates and predictions of the almanac.
Early Chinese mathematical inventions include a place value system known as counting rods [ 29 ] [ 30 ] (which emerged during the Warring States period ), certain geometrical theorems (such as the ratio of sides ), and the suanpan (abacus) for performing arithmetic calculations. Mathematical results were expressed in writing. Ancient Chinese mathematicians did not develop an axiomatic approach, but made advances in algorithm development and algebra. Chinese algebra reached its zenith in the 13th century, when Zhu Shijie invented the method of four unknowns. [ clarification needed ] Early China exemplifies how a civilization may possess considerable skill in the applied arts with only scarce understanding of the formal mathematics on which those arts are founded.
Due to linguistic and geographic barriers, as well as content, the mathematics of ancient China and the mathematics of the ancient Mediterranean world are presumed to have developed more or less independently. The final form of The Nine Chapters on the Mathematical Art and the Book on Numbers and Computation and Huainanzi are roughly contemporary with classical Greek mathematics. Some exchange of ideas across Asia through known cultural exchanges from at least Roman times is likely. Frequently, elements of the mathematics of early societies correspond to rudimentary results found later in branches of modern mathematics such as geometry or number theory . For example, the Pythagorean theorem was attested in the Zhoubi Suanjing , and knowledge of Pascal's triangle has also been shown to have existed in China centuries before Blaise Pascal , [ 31 ] articulated by mathematicians like the polymath Shen Kuo .
The state of trigonometry advanced during the Song dynasty (960–1279), when Chinese mathematicians had greater need of spherical trigonometry in calendrical science and astronomical calculations. [ 32 ] Shen Kuo used trigonometric functions to solve mathematical problems of chords and arcs. [ 32 ] Shen's work on arc lengths provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing . [ 33 ] As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy . [ 32 ] [ 34 ] Chinese mathematics later incorporated the work and teaching of Arab missionaries with knowledge of spherical trigonometry who had come to China during the 13th century.
The Hindu–Arabic numeral system and the rules for the use of its operations, in use throughout the world today, likely evolved over the course of the first millennium AD in India and was transmitted to the west via Islamic mathematics. [ 35 ] [ 36 ] Islamic mathematics developed and expanded the mathematics known to Central Asian civilizations, [ 37 ] including the addition of the decimal point notation to the Arabic numerals . [ contradictory ]
The algebraic notation of the Indian mathematician Brahmagupta was syncopated (that is, some operations and quantities had symbolic representations). Addition was indicated by placing the numbers side by side, subtraction by placing a dot over the subtrahend (the number to be subtracted), and division by placing the divisor below the dividend, similar to our notation but without the bar. Multiplication, evolution, and unknown quantities were represented by abbreviations of appropriate terms. [ 38 ]
Despite their name, Arabic numerals have roots in India. The reason for this misnomer is Europeans saw the numerals used in an Arabic book, Concerning the Hindu Art of Reckoning , by Muhammed ibn-Musa al-Khwarizmi . Al-Khwārizmī wrote several important books on the Hindu–Arabic numerals and on methods for solving equations. His book On the Calculation with Hindu Numerals (c. 825), along with the work of Al-Kindi , were instrumental in spreading Indian mathematics and numerals to the West. Al-Khwarizmi did not claim the numerals as Arabic, but over several Latin translations, the fact that the numerals were Indian in origin was lost. The word algorithm is derived from the Latinization of Al-Khwārizmī's name, Algoritmi, and the word algebra from the title of one of his works, Al-Kitāb al-mukhtaṣar fī hīsāb al-ğabr wa'l-muqābala ( The Compendious Book on Calculation by Completion and Balancing ).
The modern Arabic numeral symbols used around the world first appeared in Islamic North Africa in the 10th century. A distinctive Western Arabic variant of the Eastern Arabic numerals began to emerge around the 10th century in the Maghreb and Al-Andalus (sometimes called ghubar numerals, though the term is not always accepted), which are the direct ancestor of the modern Arabic numerals used throughout the world. [ 39 ]
Many Greek and Arabic texts on mathematics were then translated into Latin , which led to further development of mathematics in medieval Europe. In the 12th century, scholars traveled to Spain and Sicily seeking scientific Arabic texts, including al-Khwārizmī's (translated into Latin by Robert of Chester ) and the complete text of Euclid's Elements (translated in various versions by Adelard of Bath , Herman of Carinthia , and Gerard of Cremona ). [ 40 ] [ 41 ] One of the European books that advocated using the numerals was Liber Abaci , by Leonardo of Pisa, better known as Fibonacci . Liber Abaci is better known for containing a mathematical problem in which the growth of a rabbit population ends up being the Fibonacci sequence .
The transition to symbolic algebra, where only symbols are used, can first be seen in the work of Ibn al-Banna' al-Marrakushi (1256–1321) and Abū al-Ḥasan ibn ʿAlī al-Qalaṣādī (1412–1482). [ 42 ] [ 43 ] Al-Qalasādī was the last major medieval Arab algebraist , who improved on the algebraic notation earlier used in the Maghreb by Ibn al-Banna. [ 44 ] In contrast to the syncopated notations of their predecessors, Diophantus and Brahmagupta , which lacked symbols for mathematical operations , [ 45 ] al-Qalasadi's algebraic notation was the first to have symbols for these functions and was thus "the first steps toward the introduction of algebraic symbolism". He represented mathematical symbols using characters from the Arabic alphabet . [ 44 ]
The 14th century saw the development of new mathematical concepts to investigate a wide range of problems. [ 46 ] The two most widely used arithmetic symbols are addition and subtraction, + and −. The plus sign was used starting around 1351 by Nicole Oresme [ 47 ] and publicized in his work Algorismus proportionum (1360). [ 48 ] It is thought to be an abbreviation for "et", meaning "and" in Latin, in much the same way the ampersand sign also began as "et". Oresme at the University of Paris and the Italian Giovanni di Casali independently provided graphical demonstrations of the distance covered by a body undergoing uniformly accelerated motion, asserting that the area under the line depicting the constant acceleration represented the total distance traveled. [ 49 ] The minus sign was used in 1489 by Johannes Widmann in Mercantile Arithmetic or Behende und hüpsche Rechenung auff allen Kauffmanschafft . [ 50 ] Widmann used the minus symbol with the plus symbol to indicate deficit and surplus, respectively. [ 51 ] In Summa de arithmetica, geometria, proportioni e proportionalità , [ 52 ] Luca Pacioli used plus and minus symbols and algebra, though much of the work originated with Piero della Francesca, whose work he appropriated. [ citation needed ]
The radical symbol (√), for square root, was introduced by Christoph Rudolff in the early 1500s. Michael Stifel 's important work Arithmetica integra [ 53 ] contained important innovations in mathematical notation. In 1556 Niccolò Tartaglia used parentheses for precedence grouping. In 1557 Robert Recorde published The Whetstone of Witte , which introduced the equal sign (=), as well as plus and minus signs, to the English reader. In 1564 Gerolamo Cardano analyzed games of chance, beginning the early stages of probability theory . Rafael Bombelli published his L'Algebra (1572) in which he showed how to deal with the imaginary quantities that could appear in Cardano's formula for solving cubic equations. Simon Stevin 's book De Thiende ("The Art of Tenths"), published in Dutch in 1585, contained a systematic treatment of decimal notation , which influenced all later work on the real number system . The new algebra (1591) of François Viète introduced the modern notational manipulation of algebraic expressions.
John Napier is best known as the inventor of logarithms (published in Description of the Marvelous Canon of Logarithms ) [ 54 ] and made common the use of the decimal point in arithmetic and mathematics. [ 55 ] [ 56 ] After Napier, Edmund Gunter created the logarithmic scales (lines, or rules); William Oughtred used two such scales sliding by one another to perform direct multiplication and division and is credited as the inventor of the slide rule in 1622. In 1631 Oughtred introduced the multiplication sign (×), his proportionality sign (∷), and abbreviations 'sin' and 'cos' for the sine and cosine functions. [ 57 ] Albert Girard also used the abbreviations 'sin', 'cos', and 'tan' for the trigonometric functions in his treatise.
René Descartes is credited as the father of analytical geometry , the bridge between algebra and geometry, crucial to the discovery of infinitesimal calculus and analysis . In the 17th century, Descartes introduced Cartesian co-ordinates which allowed the development of analytic geometry, bringing the notation of equations to geometry. Blaise Pascal influenced mathematics throughout his life; for instance, his Traité du triangle arithmétique ("Treatise on the Arithmetical Triangle") (1653) described a convenient tabular presentation for binomial coefficients , now called Pascal's triangle . John Wallis introduced the infinity symbol (∞) and also used this notation for infinitesimals , for example, 1 / ∞ .
Johann Rahn introduced the division sign (÷, an obelus variant repurposed) and the therefore sign (∴) in 1659. William Jones used π in Synopsis palmariorum mathesios [ 58 ] in 1706 because it is the initial letter of the Greek word perimetron (περιμετρον), meaning perimeter . This usage was popularized in 1737 by Euler. In 1734, Pierre Bouguer used a double horizontal bar below the inequality sign . [ 59 ]
The study of linear algebra emerged from the study of determinants , which were used to solve systems of linear equations . Calculus had two main systems of notation, each created by one of its creators: that developed by Isaac Newton and that developed by Gottfried Leibniz . Leibniz's notation is used most often today.
Newton's notation was simply a dot or dash placed above the function. For example, the derivative of the function x would be written as x ˙ {\displaystyle {\dot {x}}} . The second derivative of x would be written as x ¨ {\displaystyle {\ddot {x}}} . In modern usage, this notation generally denotes derivatives of physical quantities with respect to time, and is used frequently in the science of mechanics . Leibniz, on the other hand, used the letter d as a prefix to indicate differentiation, and introduced the notation representing derivatives as if they were a special type of fraction. For example, the derivative of the function x with respect to the variable t in Leibniz's notation would be written as d x d t {\textstyle {dx \over dt}} . This notation makes explicit the variable with respect to which the derivative of the function is taken. Leibniz also created the integral symbol ( ∫ ). For example: ∫ − N N f ( x ) d x {\textstyle \int _{-N}^{N}f(x)\,dx} . When finding areas under curves, integration is often illustrated by dividing the area into infinitely many tall, thin rectangles, whose areas are added. Thus, the integral symbol is an elongated S , representing the Latin word summa , meaning "sum".
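The rectangle picture behind the integral sign translates directly into a few lines of code. The following is a minimal sketch (a modern Riemann sum with illustrative names, not a historical construction):

```python
def riemann_sum(f, a, b, n=100_000):
    """Approximate the integral of f over [a, b] by summing n thin rectangles."""
    dx = (b - a) / n
    return sum(f(a + i * dx) * dx for i in range(n))

# The area under f(x) = x**2 on [0, 1] approaches the exact value 1/3.
print(riemann_sum(lambda x: x * x, 0.0, 1.0))  # ~0.3333283
```

As n grows, the rectangles narrow and the sum approaches the value denoted by the elongated S.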
At this time, letters of the alphabet were to be used as symbols of quantity ; and although much diversity existed with respect to the choice of letters, there came to be several universally recognized rules . [ 24 ] Here thus in the history of equations the first letters of the alphabet became indicatively known as coefficients , while the last letters as unknown terms (an incerti ordinis ). In algebraic geometry , again, a similar rule was to be observed: the last letters of the alphabet came to denote the variable or current coordinates . Certain letters were by universal consent appropriated as symbols for the frequently occurring numbers (such as π {\displaystyle \pi } for 3.14159... and e for 2.7182818... ), and other uses were to be avoided as much as possible. [ 24 ] Letters, too, were to be employed as symbols of operation, and with them other previously mentioned arbitrary operation characters. The letters d and elongated S were to be appropriated as operative symbols in differential calculus and integral calculus , and Δ {\displaystyle \Delta } and Σ {\displaystyle \Sigma } in the calculus of differences . [ 24 ] In functional notation , a letter, as a symbol of operation, is combined with another which is regarded as a symbol of quantity . [ 24 ]
Thus, f ( x ) {\displaystyle f(x)} denotes the mathematical result of the performance of the operation f {\displaystyle f} upon the subject x {\displaystyle x} . If upon this result the same operation is repeated, the new result would be expressed by f [ f ( x ) ] {\displaystyle f[f(x)]} , or more concisely by f 2 ( x ) {\displaystyle f^{2}(x)} , and so on. The quantity x {\displaystyle x} is itself regarded as the result of the same operation f {\displaystyle f} upon some other function; the proper symbol for which is, by analogy, f − 1 ( x ) {\displaystyle f^{-1}(x)} . Thus f {\displaystyle f} and f − 1 {\displaystyle f^{-1}} are symbols of inverse operations , the former cancelling the effect of the latter on the subject x {\displaystyle x} . f ( x ) {\displaystyle f(x)} and f − 1 ( x ) {\displaystyle f^{-1}(x)} in a similar manner are termed inverse functions .
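In modern programming terms, f²(x) is simply f applied twice, and f⁻¹ undoes f. A minimal sketch (the helper names are our own, purely illustrative):

```python
def iterate(f, n):
    """Return f^n, the n-fold repetition of f, so iterate(f, 2)(x) == f(f(x))."""
    def f_n(x):
        for _ in range(n):
            x = f(x)
        return x
    return f_n

double = lambda x: 2 * x  # the operation f
halve = lambda x: x / 2   # its inverse f^-1, cancelling the effect of f

print(iterate(double, 2)(3))  # f^2(3) = 12
print(halve(double(3)))       # f^-1(f(3)) = 3.0
```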
Beginning in 1718, Thomas Twinin used the division slash ( solidus ), deriving it from the earlier Arabic horizontal fraction bar . Pierre-Simon, Marquis de Laplace developed the widely used Laplacian differential operator (e.g. Δ f ( p ) {\displaystyle \Delta f(p)} ). In 1750, Gabriel Cramer developed Cramer's Rule for solving linear systems.
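For a 2×2 system, Cramer's rule gives each unknown as a ratio of determinants. A minimal sketch in modern notation (not Cramer's own formulation):

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve a*x + b*y = e and c*x + d*y = f by Cramer's rule.

    Each unknown is a ratio of determinants, with the coefficient
    determinant a*d - b*c in the denominator.
    """
    det = a * d - b * c
    if det == 0:
        raise ValueError("coefficient matrix is singular")
    x = (e * d - b * f) / det
    y = (a * f - e * c) / det
    return x, y

print(cramer_2x2(2, 1, 1, 3, 5, 10))  # (1.0, 3.0): solves 2x + y = 5, x + 3y = 10
```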
Leonhard Euler was one of the most prolific mathematicians in history, and also a prolific inventor of canonical notation. His contributions include his use of e to represent the base of natural logarithms . It is not known exactly why e was chosen, but it was probably because the first four letters of the alphabet were already commonly used to represent variables and other constants. Euler consistently used π {\displaystyle \pi } to represent pi . The use of π {\displaystyle \pi } was suggested by William Jones , who used it as shorthand for perimeter . Euler used i {\displaystyle i} to represent the square root of negative one ( − 1 {\textstyle {\sqrt {-1}}} ) although he earlier used it as an infinite number. Today, the symbol created by John Wallis , ∞ {\displaystyle \infty } , is used for infinity, as in e.g. ∑ n = 1 ∞ 1 n 2 {\textstyle \sum _{n=1}^{\infty }{\frac {1}{n^{2}}}} . For summation , Euler used an enlarged form of the upright capital Greek letter sigma (Σ), known as capital-sigma notation . This is defined as:
∑ i = m n a i = a m + a m + 1 + a m + 2 + ⋯ + a n − 1 + a n . {\displaystyle \sum _{i=m}^{n}a_{i}=a_{m}+a_{m+1}+a_{m+2}+\cdots +a_{n-1}+a_{n}.}
where i represents the index of summation; a i is an indexed variable representing each successive term in the series; m is the lower bound of summation, and n is the upper bound of summation. The term " i = m " under the summation symbol means that the index i starts equal to m . The index, i , is incremented by 1 for each successive term, stopping when i = n .
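These conventions map directly onto a loop. A minimal sketch, reusing Euler's series from above (the function name is illustrative):

```python
def sigma(a, m, n):
    """Evaluate sum_{i=m}^{n} a(i): start at i = m, step by 1, stop at i = n."""
    return sum(a(i) for i in range(m, n + 1))

# Partial sums of 1/i**2 approach pi**2 / 6 = 1.6449340...
print(sigma(lambda i: 1 / i**2, 1, 100_000))  # ~1.6449240
```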
For functions , Euler used the notation f ( x ) {\displaystyle f(x)} to represent a function of x {\displaystyle x} .
The mathematician William Emerson [ 60 ] developed the proportionality sign (∝). Proportionality is the ratio of one quantity to another, and the sign is used to indicate that the ratio between two variables is constant. [ 61 ] [ 62 ] Much later, in the abstract expression of the value of various proportional phenomena, the parts-per notation would become useful as a set of pseudo-units to describe small values of miscellaneous dimensionless quantities . Marquis de Condorcet , in 1768, advanced the partial differential sign (∂), known as the curly d or Jacobi's delta . The prime symbol (′) for derivatives was made by Joseph-Louis Lagrange .
Carl Friedrich Gauss, however, cautioned: "But in our opinion truths of this kind should be drawn from notions rather than from notations."
At the turn of the 19th century, Carl Friedrich Gauss developed the identity sign for congruence relation and, in quadratic reciprocity , the integral part . Gauss developed functions of complex variables , functions of geometry, and functions for the convergence of series . He devised satisfactory proofs of the fundamental theorem of algebra and the quadratic reciprocity law . Gauss developed the Gaussian elimination method of solving linear systems, which was initially listed as an advancement in geodesy . [ 64 ] He would also develop the product sign ( ∏ {\textstyle \textstyle \prod } ).
In the 1800s, Christian Kramp promoted factorial notation during his research in generalized factorial function which applied to non-integers. [ 65 ] Joseph Diaz Gergonne introduced the set inclusion signs (⊆, ⊇), later redeveloped by Ernst Schröder . Peter Gustav Lejeune Dirichlet developed Dirichlet L -functions to give the proof of Dirichlet's theorem on arithmetic progressions and began analytic number theory . In 1829, Carl Gustav Jacob Jacobi published Fundamenta nova theoriae functionum ellipticarum with his elliptic theta functions .
Matrix notation would be more fully developed by Arthur Cayley in his three papers, on subjects which had been suggested by reading the Mécanique analytique [ 66 ] of Lagrange and some of the works of Laplace. Cayley defined matrix multiplication and matrix inverses . Cayley used a single letter to denote a matrix, [ 67 ] thus treating a matrix as an aggregate object. He also realized the connection between matrices and determinants, [ 68 ] and wrote "There would be many things to say about this theory of matrices which should, it seems to me, precede the theory of determinants." [ 69 ]
William Rowan Hamilton introduced the nabla symbol (∇, later called del) for vector differentials. [ 70 ] [ 71 ] This was previously used by Hamilton as a general-purpose operator sign. [ 72 ] Ĥ, H, and Ȟ are used for the Hamiltonian operator in quantum mechanics, and ℋ (or H) for the Hamiltonian function in classical Hamiltonian mechanics. In mathematics, Hamilton is perhaps best known as the inventor of quaternion notation and biquaternions.
In 1864 James Clerk Maxwell reduced all of the then-current knowledge of electromagnetism into a linked set of differential equations with 20 equations in 20 variables, contained in A Dynamical Theory of the Electromagnetic Field. [ 74 ] (See Maxwell's equations.) The necessary method of calculation was given by Lagrange and afterwards developed, with some modifications, into Hamilton's equations. It is usually referred to as Hamilton's principle; when the equations in the original form are used, they are known as Lagrange's equations. In 1871 Richard Dedekind defined a field to be a set of real or complex numbers which is closed under the four arithmetic operations. In 1873 Maxwell presented A Treatise on Electricity and Magnetism.
In 1878 William Kingdon Clifford published his Elements of Dynamic. [ 75 ] Clifford developed split-biquaternions (e.g. q = w + xi + yj + zk), which he called algebraic motors. Clifford obviated quaternion study by separating the dot product and cross product of two vectors from the complete quaternion notation.
The common vector notations are used when working with spatial vectors or more abstract members of vector spaces , while angle notation (or phasor notation) is a notation used in electronics .
Lord Kelvin 's aetheric atom theory (1860s) led Peter Guthrie Tait , in 1885, to publish a topological table of knots with up to ten crossings known as the Tait conjectures . Tensor calculus was developed by Gregorio Ricci-Curbastro between 1887 and 1896, presented in 1892 under the title Absolute differential calculus , [ 76 ] and the contemporary usage of "tensor" was stated by Woldemar Voigt in 1898. [ 77 ] In 1895, Henri Poincaré published Analysis Situs . [ 78 ] In 1897, Charles Proteus Steinmetz would publish Theory and Calculation of Alternating Current Phenomena , with the assistance of Ernst J. Berg. [ 79 ]
In 1895 Giuseppe Peano issued his Formulario mathematico, [ 80 ] an effort to digest mathematics into terse text based on special symbols. He would provide a definition of a vector space and linear map. He would also introduce the intersection sign (∩), the union sign (∪), the membership sign (∈), and the existential quantifier (∃). Peano presented his work to Bertrand Russell at a Paris conference in 1900; it so impressed Russell that he too was taken with the drive to render mathematics more concisely. The result was Principia Mathematica, written with Alfred North Whitehead. This treatise marks a watershed in modern literature where symbol became dominant. Peano's Formulario Mathematico, though less popular than Russell's work, continued through five editions. The fifth appeared in 1908 and included 4,200 formulas and theorems.
Ricci-Curbastro and Tullio Levi-Civita popularized the tensor index notation around 1900. [ 81 ]
Georg Cantor introduced Aleph numbers, so named because they use the aleph symbol (א) with natural number subscripts to denote the cardinality of infinite sets. For the ordinals he employed the Greek letter ω (omega). This notation is still used today in ordinal notation: a finite sequence of symbols from a finite alphabet that names an ordinal number according to some scheme which gives meaning to the language.
After the turn of the 20th century, Josiah Willard Gibbs introduced into physical chemistry the middle dot for the dot product and the multiplication sign for cross products. He also supplied notation for the scalar and vector products, which was introduced in Vector Analysis. Bertrand Russell shortly afterward introduced logical disjunction (∨) in 1906. Gerhard Kowalewski and Cuthbert Edmund Cullis [ 82 ] [ 83 ] [ 84 ] introduced and helped standardize matrix notation, using parenthetical matrix and box matrix notation, respectively.
Albert Einstein, in 1916, introduced Einstein notation, which sums over a set of indexed terms in a formula, thus exerting notational brevity. For example, for indices ranging over the set {1, 2, 3},

y = \sum_{i=1}^{3} c_{i} x^{i} = c_{1} x^{1} + c_{2} x^{2} + c_{3} x^{3}

is reduced by convention to:

y = c_{i} x^{i}

Upper indices are not exponents but are indices of coordinates, coefficients, or basis vectors.
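A minimal computational sketch of the convention (an illustrative addition, not part of the historical account; the coefficient and coordinate values below are arbitrary), using NumPy's einsum, where the repeated index i signals an implicit sum:

    import numpy as np

    # Illustrative coefficients c_i and coordinates x^i for i = 1, 2, 3.
    c = np.array([2.0, 3.0, 5.0])
    x = np.array([1.0, 4.0, 6.0])

    # Explicit summation: y = sum over i of c_i * x^i.
    y_explicit = sum(c[i] * x[i] for i in range(3))

    # Einstein-convention form: the repeated index i is summed implicitly.
    y_einstein = np.einsum('i,i->', c, x)

    print(y_explicit, y_einstein)  # both give 44.0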
In 1917 Arnold Sommerfeld created the contour integral sign, and Dimitry Mirimanoff proposed the axiom of regularity. In 1919, Theodor Kaluza solved general relativity equations using five dimensions; electromagnetic equations emerged from the results. [ 85 ] This would be published in 1921 in "Zum Unitätsproblem der Physik". [ 86 ] In 1922, Abraham Fraenkel and Thoralf Skolem independently proposed replacing the axiom schema of specification with the axiom schema of replacement. Also in 1922, Zermelo–Fraenkel set theory was developed. In 1923, Steinmetz would publish Four Lectures on Relativity and Space. Around 1924, Jan Arnoldus Schouten developed the modern notation and formalism for the Ricci calculus framework during the application of absolute differential calculus to general relativity and differential geometry in the early twentieth century. Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields. [ 87 ] [ 88 ] [ 89 ] [ 90 ] In 1925, Enrico Fermi described a system comprising many identical particles that obey the Pauli exclusion principle, afterwards developing a diffusion equation (the Fermi age equation). In 1926, Oskar Klein developed the Kaluza–Klein theory. In 1928, Emil Artin abstracted ring theory with Artinian rings. In 1933, Andrey Kolmogorov introduced the Kolmogorov axioms. In 1937, Bruno de Finetti deduced the "operational subjective" concept.
Mathematical abstraction began as a process of extracting the underlying essence of a mathematical concept, [ 91 ] [ 92 ] removing any dependence on real-world objects with which it might originally have been connected, [ 93 ] and generalizing it so that it has wider applications or matches other abstract descriptions of equivalent phenomena. Two abstract areas of modern mathematics are category theory and model theory. Bertrand Russell [ 94 ] once said, "Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as little as the physicist means to say." One can, however, substitute mathematics for real-world objects, wandering from equation to equation and building a concept structure which has no relation to reality. [ 95 ]
Some of the mathematical logic notation introduced during this time included the set of symbols used in Boolean algebra, created by George Boole in 1854. Boole himself did not see logic as a branch of mathematics, but it has come to be encompassed anyway. Symbols found in Boolean algebra include ∧ (and), ∨ (or), and ¬ (not). With these symbols, and letters to represent different truth values, one can make logical statements such as a ∨ ¬a = 1, that is, "(a is true or a is not true) is true", meaning it is true that a is either true or not true (i.e. false). Boolean algebra has many practical uses as it is, but it also was the start of what would become a large set of symbols used in logic. Most of these symbols can be found in propositional calculus, a formal system described as \mathcal{L} = \mathcal{L}(A, Ω, Z, I). A is the set of elements, such as the a in the example with Boolean algebra above. Ω is the set that contains the subsets that contain operations, such as ∨ or ∧. Z contains the inference rules, which are the rules dictating how inferences may be logically made, and I contains the axioms. Predicate logic, originally called predicate calculus, expands on propositional logic by the introduction of variables, usually denoted by x, y, z, or other lowercase letters, and by sentences containing variables, called predicates. These are usually denoted by an uppercase letter followed by a list of variables, such as P(x) or Q(y, z). Predicate logic uses special symbols for quantifiers: ∃ for "there exists" and ∀ for "for all".
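A short sketch in Python (an illustrative addition, not part of the historical record) checks the tautology a ∨ ¬a = 1 by enumerating every truth value, which is exactly what a truth table does:

    # Verify a ∨ ¬a = 1 for every truth value of a.
    for a in (False, True):
        statement = a or (not a)  # a ∨ ¬a
        print(a, statement)       # the statement is True in both rows

    # The same exhaustive check for De Morgan's law ¬(a ∧ b) = (¬a ∨ ¬b).
    for a in (False, True):
        for b in (False, True):
            assert (not (a and b)) == ((not a) or (not b))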
To every ω-consistent recursive class κ of formulae there correspond recursive class signs r , such that neither v Gen r nor Neg ( v Gen r ) belongs to Flg (κ) (where v is the free variable of r ).
While proving his incompleteness theorems, Kurt Gödel created an alternative to the symbols normally used in logic. He used Gödel numbers—numbers assigned to represent mathematical operations—and represented variables with the prime numbers greater than 10. With Gödel numbers, a logic statement can be broken down into a number sequence. By raising the first n prime numbers to the power of the corresponding Gödel numbers in the sequence and then multiplying the terms together, a unique final product is generated. In this way, every logic statement can be encoded as its own number. [ 97 ]
For example, take the statement "There exists a number x such that it is not y". Using the symbols of propositional calculus, this would become:

(∃x)(x = ¬y)

If the Gödel numbers replace the symbols, it becomes:

{8, 4, 11, 9, 8, 11, 5, 1, 13, 9}

There are ten numbers, so the first ten prime numbers are used:

{2, 3, 5, 7, 11, 13, 17, 19, 23, 29}

Then, each prime is raised to the power of the corresponding Gödel number, and the terms are multiplied:

2⁸ × 3⁴ × 5¹¹ × 7⁹ × 11⁸ × 13¹¹ × 17⁵ × 19¹ × 23¹³ × 29⁹

The resulting number is approximately 3.096262735 × 10⁷⁸.
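The arithmetic of the worked example can be verified directly; the following Python sketch (an illustrative addition) multiplies the primes raised to their Gödel numbers and prints the result:

    # Gödel numbers for the symbol sequence ( ∃ x ) ( x = ¬ y ).
    godel_numbers = [8, 4, 11, 9, 8, 11, 5, 1, 13, 9]
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # the first ten primes

    # Raise each prime to the corresponding Gödel number and multiply.
    encoding = 1
    for p, g in zip(primes, godel_numbers):
        encoding *= p ** g

    print(f"{encoding:.9e}")  # approximately 3.096262735e+78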
The abstraction of notation is an ongoing process. The historical development of many mathematical topics exhibits a progression from the concrete to the abstract. Throughout the 20th century, various set notations were developed for fundamental object sets. Around 1924, David Hilbert and Richard Courant published Methods of Mathematical Physics. Partial differential equations. [ 98 ] In 1926, Oskar Klein and Walter Gordon proposed the Klein–Gordon equation to describe relativistic particles:
\frac{1}{c^{2}} \frac{\partial^{2}}{\partial t^{2}} \psi - \nabla^{2} \psi + \frac{m^{2} c^{2}}{\hbar^{2}} \psi = 0.
The first formulation of a quantum theory describing radiation and matter interaction is due to Paul Adrien Maurice Dirac, who, during the 1920s, was the first to compute the coefficient of spontaneous emission of an atom. [ 99 ] In 1928, the relativistic Dirac equation was formulated by Dirac to explain the behavior of the relativistically moving electron. The Dirac equation in the form originally proposed by Dirac is:
\left(\beta m c^{2} + \sum_{k=1}^{3} \alpha_{k} p_{k} c\right) \psi(\mathbf{x}, t) = i \hbar \frac{\partial \psi(\mathbf{x}, t)}{\partial t}
where ψ = ψ(x, t) is the wave function for the electron, x and t are the space and time coordinates, m is the rest mass of the electron, p is the momentum (understood to be the momentum operator in the Schrödinger theory), c is the speed of light, and ħ = h/2π is the reduced Planck constant. Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, and Werner Heisenberg, and an elegant formulation of quantum electrodynamics due to Enrico Fermi, [ 100 ] physicists came to believe that, in principle, it would be possible to perform any computation for any physical process involving photons and charged particles.
In 1931, Alexandru Proca developed the Proca equation ( Euler–Lagrange equation ) for the vector meson theory of nuclear forces and the relativistic quantum field equations . John Archibald Wheeler in 1937 developed the S-matrix . Studies by Felix Bloch with Arnold Nordsieck , [ 101 ] and Victor Weisskopf , [ 102 ] in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory , a problem already pointed out by Robert Oppenheimer . [ 103 ] Infinities emerged at higher orders in the series, making such computations meaningless and casting serious doubts on the internal consistency of the theory itself. With no solution for this problem known at the time, it appeared that a fundamental incompatibility existed between special relativity and quantum mechanics .
In the 1930s, the double-struck capital Z (ℤ) for integer number sets was created by Edmund Landau. Nicolas Bourbaki created the double-struck capital Q (ℚ) for rational number sets. In 1935 Gerhard Gentzen introduced the universal quantifier (∀). André Weil and Nicolas Bourbaki would develop the empty set sign (∅) in 1939. That same year, Nathan Jacobson would coin the double-struck capital C (ℂ) for complex number sets.
Around the 1930s, Voigt notation (so named to honor Voigt's 1898 work) would be developed for multilinear algebra as a way to represent a symmetric tensor by reducing its order. Schönflies notation became one of two conventions used to describe point groups (the other being Hermann–Mauguin notation). Also during this time, van der Waerden notation [ 104 ] [ 105 ] became popular for the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. Arend Heyting would introduce Heyting algebra and Heyting arithmetic.
The arrow (→) was developed for function notation in 1936 by Øystein Ore to denote images of specific elements and to denote Galois connections . Later, in 1940, it took its present form ( f: X→Y ) through the work of Witold Hurewicz . Werner Heisenberg , in 1941, proposed the S-matrix theory of particle interactions.
Bra–ket notation (Dirac notation) is a standard notation for describing quantum states, composed of angle brackets and vertical bars. It can also be used to denote abstract vectors and linear functionals. It is so called because the inner product (or dot product on a complex vector space) of two states is denoted by a ⟨bra|ket⟩: ⟨φ|ψ⟩. The notation was introduced in 1939 by Paul Dirac, [ 106 ] though the notation has precursors in Grassmann's use of the notation [φ|ψ] for his inner products nearly 100 years previously. [ 107 ]
Bra–ket notation is widespread in quantum mechanics: almost every phenomenon that is explained using quantum mechanics—including a large portion of modern physics—is usually explained with the help of bra–ket notation. The notation establishes an abstract representation independence, producing a versatile specific representation (e.g., x, or p, or eigenfunction basis) without much ado or excessive reliance on the nature of the linear spaces involved. The overlap expression ⟨φ|ψ⟩ is typically interpreted as the probability amplitude for the state ψ to collapse into the state φ. The Feynman slash notation (Dirac slash notation [ 108 ]) was developed by Richard Feynman for the study of Dirac fields in quantum field theory.
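A minimal numerical sketch of the ⟨φ|ψ⟩ overlap on a finite-dimensional complex vector space (an illustrative addition; the state vectors below are arbitrary). The bra is the conjugate transpose of the ket, so the inner product conjugates its first argument:

    import numpy as np

    # Two normalized state vectors in a two-dimensional complex space.
    psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # |ψ⟩
    phi = np.array([1.0, 0.0], dtype=complex)  # |φ⟩

    # ⟨φ|ψ⟩: np.vdot conjugates its first argument, as a bra requires.
    amplitude = np.vdot(phi, psi)

    # |⟨φ|ψ⟩|² is interpreted as the probability of ψ collapsing into φ.
    probability = abs(amplitude) ** 2

    print(amplitude, probability)  # (0.707...+0j) 0.5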
Geoffrey Chew, along with others, would promote matrix notation for the strong interaction in particle physics, and the associated bootstrap principle, in 1960. In the 1960s, set-builder notation was developed for describing a set by stating the properties that its members must satisfy. Also in the 1960s, tensors were abstracted within category theory by means of the concept of a monoidal category. Later, multi-index notation simplified conventional notation used in multivariable calculus, partial differential equations, and the theory of distributions, by abstracting the concept of an integer index to an ordered tuple of indices.
In the modern mathematics of special relativity, electromagnetism, and wave theory, the d'Alembert operator (□) is the Laplace operator of Minkowski space. The Levi-Civita symbol (ε), also known as the permutation symbol, is used in tensor calculus.
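As an illustrative sketch (not part of the original text), the permutation symbol expresses the cross product componentwise, (a × b)_i = ε_{ijk} a_j b_k, with repeated indices summed:

    import numpy as np

    # Levi-Civita symbol ε_ijk as a 3×3×3 array: +1 on even permutations
    # of (0, 1, 2), -1 on odd permutations, 0 elsewhere.
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0   # even permutation
        eps[i, k, j] = -1.0  # odd permutation

    a = np.array([1.0, 2.0, 3.0])
    b = np.array([4.0, 5.0, 6.0])

    # (a × b)_i = ε_ijk a_j b_k, summing over the repeated indices j and k.
    cross = np.einsum('ijk,j,k->i', eps, a, b)
    print(cross)           # [-3.  6. -3.]
    print(np.cross(a, b))  # agrees with NumPy's built-in cross product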
Feynman diagrams are used in particle physics, equivalent to the operator -based approach of Sin-Itiro Tomonaga and Julian Schwinger . The orbifold notation system, invented by William Thurston , has been developed for representing types of symmetry groups in two-dimensional spaces of constant curvature.
The tetrad formalism ( tetrad index notation ) was introduced as an approach to general relativity that replaces the choice of a coordinate basis by the less restrictive choice of a local basis for the tangent bundle (a locally defined set of four linearly independent vector fields called a tetrad ). [ 109 ]
In the 1990s, Roger Penrose proposed Penrose graphical notation (tensor diagram notation) as a (usually handwritten) visual depiction of multilinear functions or tensors. [ 110 ] Penrose also introduced abstract index notation. His usage of the Einstein summation was in order to offset the inconvenience in describing contractions and covariant differentiation in modern abstract tensor notation, while maintaining explicit covariance of the expressions involved. [ citation needed ]
John Conway furthered various notations, including the Conway chained arrow notation, the Conway notation of knot theory, and the Conway polyhedron notation. The Coxeter notation system classifies symmetry groups, describing the angles between the fundamental reflections of a Coxeter group. It uses a bracketed notation, with modifiers to indicate certain subgroups. The notation is named after H. S. M. Coxeter; Norman Johnson more comprehensively defined it.
Combinatorial LCF notation , devised by Joshua Lederberg and extended by Harold Scott MacDonald Coxeter and Robert Frucht , was developed for the representation of cubic graphs that are Hamiltonian . [ 111 ] [ 112 ] The cycle notation is the convention for writing down a permutation in terms of its constituent cycles . [ 113 ] This is also called circular notation and the permutation called a cyclic or circular permutation. [ 114 ]
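A brief sketch of the convention (the helper function below is an illustrative addition, not an established library routine): the permutation sending 1→2, 2→3, 3→1, 4→5, 5→4 is written in cycle notation as (1 2 3)(4 5), and expanding the cycles recovers the explicit mapping.

    # Expand cycle notation into an explicit permutation mapping.
    def cycles_to_mapping(cycles):
        mapping = {}
        for cycle in cycles:
            for i, element in enumerate(cycle):
                # Each element maps to the next, wrapping around the cycle.
                mapping[element] = cycle[(i + 1) % len(cycle)]
        return mapping

    perm = cycles_to_mapping([(1, 2, 3), (4, 5)])
    print(perm)  # {1: 2, 2: 3, 3: 1, 4: 5, 5: 4}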
In 1931, IBM produced the IBM 601 Multiplying Punch, an electromechanical machine that could read two numbers, up to eight digits long, from a card and punch their product onto the same card. [ 115 ] In 1934, Wallace Eckert used a rigged IBM 601 Multiplying Punch to automate the integration of differential equations. [ 116 ]
In 1962, Kenneth E. Iverson developed an integral part notation, which became known as Iverson notation and developed into APL. [ 117 ] In the 1970s, within computer architecture, quote notation was developed as a number system for representing rational numbers. Also in this decade, the Z notation (like the APL language long before it) uses many non-ASCII symbols; the specification includes suggestions for rendering the Z notation symbols in ASCII and in LaTeX. There are presently various C mathematical functions (Math.h) and numerical libraries used to perform numerical calculations in software development. These calculations can be handled by symbolic execution—analyzing a program to determine what inputs cause each part of a program to execute. Mathematica and SymPy are examples of computational software programs based on symbolic mathematics. | https://en.wikipedia.org/wiki/History_of_mathematical_notation
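A small example of the symbolic computation performed by programs such as SymPy, mentioned above (an illustrative sketch using SymPy's public API; the particular expressions are arbitrary):

    import sympy as sp

    x = sp.symbols('x')

    # Differentiation and integration performed on symbolic expressions
    # rather than on floating-point approximations.
    expr = sp.sin(x) * sp.exp(x)
    print(sp.diff(expr, x))       # exp(x)*sin(x) + exp(x)*cos(x)
    print(sp.integrate(expr, x))  # exp(x)*sin(x)/2 - exp(x)*cos(x)/2

    # A definite integral evaluated exactly: the Gaussian integral.
    print(sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo)))  # sqrt(pi)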
Mechanical engineering is a discipline centered around the concept of using force multipliers , moving components, and machines . It utilizes knowledge of mathematics , physics , materials sciences , and engineering technologies. It is one of the oldest and broadest of the engineering disciplines .
Engineering arose in early civilization as a general discipline for the creation of large scale structures such as irrigation, architecture, and military projects. Advances in food production through irrigation allowed a portion of the population to become specialists in Ancient Babylon . [ 1 ]
All six of the classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. [ 2 ] The wheel, along with the wheel and axle mechanism, was invented in Mesopotamia (modern Iraq) during the 5th millennium BC. [ 3 ] The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale, [ 4 ] and to move large objects in ancient Egyptian technology. [ 5 ] The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC, [ 4 ] and then in ancient Egyptian technology circa 2000 BC. [ 6 ] The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC, [ 7 ] and ancient Egypt during the Twelfth Dynasty (1991–1802 BC). [ 8 ] The screw, the last of the simple machines to be invented, [ 9 ] first appeared in Mesopotamia during the Neo-Assyrian period (911–609 BC). [ 7 ] The Egyptian pyramids were built using three of the six simple machines, the inclined plane, the wedge, and the lever, to create structures like the Great Pyramid of Giza. [ 10 ]
The Assyrians were notable in their use of metallurgy and incorporation of iron weapons. Many of their advancements were in military equipment. They were not the first to develop them, but did make advancements on the wheel and the chariot. They made use of pivotable axles on their wagons, allowing easy turning. They were also one of the first armies to use the movable siege tower and battering ram. [ 1 ]
The application of mechanical engineering can be seen in the archives of various ancient societies. The pulley appeared in Mesopotamia in 1500 BC, improving water transportation. The German archaeologist Robert Koldewey found that the Hanging Gardens likely used a mechanical pump powered by these pulleys to transport water to the roof gardens. [ 11 ] The Mesopotamians advanced even further with "the substitution of continuous for intermittent motion, and the rotary for back-and-forth motion" by 1200 BC. [ 1 ]
In Ancient Egypt, the screw pump is another example of the use of engineering to boost the efficiency of water transportation. Although the early Egyptians built colossal structures such as the pyramids, they did not develop pulleys for lifting heavy stone, and used the wheel very little. [ 1 ]
The earliest practical water-powered machines, the water wheel and watermill , first appeared in the Persian Empire , in what are now Iraq and Iran, by the early 4th century BC. [ 12 ]
In Ancient Greece, Archimedes (287–212 BC) developed several key theories in the field of mechanical engineering, including mechanical advantage, the Law of the Lever, and his namesake, Archimedes' law. In Ptolemaic Egypt, the Museum of Alexandria developed crane pulleys with block and tackle to lift stones. These cranes were powered by human treadwheels and were based on earlier Mesopotamian water-pulley systems. [ 1 ] The Greeks would later develop mechanical artillery independently of the Chinese. The first of these would fire darts, but advancements allowed stones to be tossed at enemy fortifications or formations. [ 1 ]
In Roman Egypt , Heron of Alexandria (c. 10–70 AD) created the first steam-powered device, the Aeolipile . [ 13 ] The first of its kind, it did not have the capability to move or power anything but its own rotation.
In China , Zhang Heng (78–139 AD) improved a water clock and invented a seismometer . Ma Jun (200–265 AD) invented a chariot with differential gears.
Leo the Philosopher is noted to have worked on a signal system using clocks in the Byzantine Empire in 850; it connected Constantinople with the Cilician frontier and was a continuation of the complex city clocks of Eastern Rome. These grand machines diffused into the Arabian Empire under Harun al-Rashid. [ 14 ]
Another grand mechanical device was the organ, which was reintroduced in 757 when Constantine V gifted one to Pepin the Short. [ 14 ]
With the exception of a few machines, engineering and science stagnated in the West due to the collapse of the Roman Empire during late antiquity.
During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari , who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs. [ 15 ] Al-Jazari is also the first known person to create devices such as the crankshaft and camshaft , which now form the basics of many mechanisms. [ 16 ]
The earliest practical wind-powered machines, the windmill and wind pump , first appeared in the Muslim world during the Islamic Golden Age , in what are now Iran, Afghanistan, and Pakistan, by the 9th century AD. [ 17 ] [ 18 ] [ 19 ] [ 20 ] The earliest practical steam-powered machine was a steam jack driven by a steam turbine , described in 1551 by Taqi al-Din Muhammad ibn Ma'ruf in Ottoman Egypt . [ 21 ] [ 22 ]
The automatic flute player, which was invented in the 9th century by the Banū Mūsā brothers in Baghdad, is the first known example of a programmable machine. The work of the Banu Musa was influenced by their Hellenistic forebears, but it also made significant improvements over the Greek creations. [ 23 ] The pinned-barrel mechanism, which allowed for programmable variations in the rhythm and melody of the music, was the Banu Musa's key contribution. [ 24 ] In 1206, the Muslim inventor Al-Jazari (in the Artuqid Sultanate) described a drum machine which may have been an example of a programmable automaton. [ 25 ]
The cotton gin was invented in India by the 6th century AD, [ 26 ] and the spinning wheel was invented in the Islamic world by the early 11th century, [ 27 ] both of which were fundamental to the growth of the cotton industry . The spinning wheel was also a precursor to the spinning jenny , which was a key development during the early Industrial Revolution in the 18th century. [ 28 ] The crankshaft and camshaft were invented by Al-Jazari in Northern Mesopotamia circa 1206, [ 29 ] [ 30 ] [ 31 ] and they later became central to modern machinery such as the steam engine , internal combustion engine and automatic controls . [ 32 ]
The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks and also invented the world's first known endless power-transmitting chain drive . [ 33 ]
The Middle Ages saw the widespread adoption of machines to aid in labor. The many rivers of England and northern Europe allowed the power of moving water to be utilized. The watermill became instrumental in the production of many goods such as food, fabric, leather, and paper. These machines were among the first to use cogs and gears, which greatly increased the mills' productivity. The camshaft allowed rotational motion to be converted into linear motion. Less significantly, the tides of bodies of water were also harnessed. [ 34 ]
Wind power later became the new source of energy in Europe, supplementing the water mill. This advancement moved out of Europe into the Middle East during the Crusades. [ 34 ]
Metallurgy advanced by a large degree during the Middle Ages, with higher quality iron allowing for more sturdy constructions and designs. Mills and mechanical power provided a consistent supply of trip-hammer strikes and air from the bellows. [ 34 ]
During the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England . Sir Isaac Newton formulated Newton's Laws of Motion and developed Calculus , the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Sir Edmond Halley , much to the benefit of all mankind. Gottfried Wilhelm Leibniz is also credited with creating Calculus during this time period.
Leonardo da Vinci was a notable engineer, designing and studying many mechanical systems focused on transportation and warfare. [ 35 ] His designs would later be compared to early aircraft designs. [ 36 ]
Although wind power provided a source of energy away from riverside estates and saw massive improvements in its harnessing, it could not replace the consistent and strong power of the watermill. Water would remain the primary power source of pre-industrial urban industry through the Renaissance. [ 37 ]
At the end of the Renaissance, scientists and engineers were beginning to experiment with steam power. Most of the early apparatuses faced problems of low horsepower, inefficiency, or danger. The need arose for an effective and economical power source because of the flooding of deep mines in England, which could not be pumped out using alternative methods. The first working design was Thomas Savery's 1698 patent. He continuously worked on improving and marketing the invention across England. At the same time, others were working on improvements to Savery's design, which did not transfer heat effectively. [ 38 ]
Thomas Newcomen would take all the advancements of these engineers and develop the Newcomen atmospheric engine. This new design would greatly reduce heat loss, move water directly from the engine, and allow a variety of proportions to be built. [ 38 ]
The Industrial Revolution brought steam powered factories utilizing mechanical engineering concepts. These advances allowed an incredible increase in production scale, numbers, and efficiency.
During the 19th century, advances in materials science began to allow the implementation of steam engines in steam locomotives and steam-powered ships, quickly increasing the speed at which people and goods could move across the world. The reason for these advances was the machine tools developed in England, Germany, and Scotland. These allowed mechanical engineering to develop as a separate field within engineering. They brought with them manufacturing machines and the engines to power them. [ 39 ]
Near the end of the Industrial Revolution, internal combustion engine technology brought with it the piston airplane and the automobile. Aerospace engineering would develop in the early 20th century as an offshoot of mechanical engineering, eventually incorporating rocketry.
Coal was replaced by oil-based derivatives for many applications.
With the advent of computers in the 20th century, more precise design and manufacturing methods became available to engineers. Automated and computerized manufacturing allowed many new fields to emerge from mechanical engineering, such as industrial engineering. Although a majority of automobiles remain gas-powered, electric vehicles have risen as a feasible alternative. [ 40 ]
Because of the increased complexity of engineering projects, many disciplines of engineering collaborate and specialize in subfields. [ 41 ] One of these collaborations is the field of robotics, in which electrical engineers, computer engineers, and mechanical engineers specialize and work together.
The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. [ 42 ]
In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). [ 43 ]
The first schools in the United States to offer a mechanical engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science. [ 44 ]
In the 20th century, many governments began regulating both the title of engineer and the practice of engineering, requiring a degree from an accredited university and passage of a qualifying test. | https://en.wikipedia.org/wiki/History_of_mechanical_engineering
Metallurgy in China has a long history, with the earliest metal objects in China dating back to around 3,000 BCE. The majority of early metal items found in China come from the North-Western Region (mainly Gansu and Qinghai , 青海). China was the earliest civilization to use the blast furnace and produce cast iron . [ 1 ]
Archaeological evidence indicates that the earliest metal objects in China were made in the late fourth millennium BCE. Copper was generally the earliest metal to be used by humanity, and was used in China since at least 3000 BCE. [ 2 ] [ 3 ]
Early metal-using communities have been found at the Qijia and Siba sites in Gansu . The metal knives and axes recovered in Qijia apparently point to some interactions with Siberian and Central Asian cultures, in particular with the Seima-Turbino complex , [ 5 ] or the Afanasievo culture . [ 6 ] Archeological evidence points to plausible early contact between the Qijia culture and Central Asia. [ 5 ] Similar sites have been found in Xinjiang in the west and Shandong , Liaoning and Inner Mongolia in the east and north. The Central Plain sites associated with the Erlitou culture also contain early metalworks . [ 7 ]
Copper manufacturing, more complex than jade working, gradually appeared in the Yangshao period (5000–3000 BCE). Jiangzhai is the only place where copper artifacts were found in the Banpo culture. Archaeologists have found remains of copper metallurgy in various cultures from the late fourth to the early third millennia BCE. These include the copper-smelting remains and copper artifacts of the Hongshan culture (4700–2900 BCE) and copper slag at the Yuanwozhen site. This indicates that inhabitants of the Yellow River valley had already learned how to make copper artifacts by the later Yangshao period. [ 8 ]
The Qijia culture (c. 2500–1900 BCE) of Qinghai, Gansu, and western Shaanxi produced copper and bronze utilitarian items and gold, copper, and bronze ornaments. The earliest metalworks in this region are found at a Majiayao site at Linjia, Dongxiang, Gansu. [ 7 ] "Their dates range from 2900 to 1600 BCE. These metal objects represent the Majiayao 馬家窯 type of the Majiayao culture (c. 3100–2700 BCE), Zongri 宗日 Culture (c. 3600–2050 BCE), Machang 馬廠 Type (c. 2300–2000 BCE), Qijia 齊家 Culture (c. 2050–1915 BCE), and Siba 四壩 Culture (c. 2000–1600 BCE)." [ 9 ]
At Dengjiawan, in the Shijiahe site complex in Hubei , some pieces of copper were discovered; they are the earliest copper objects discovered in southern China. [ 10 ] The Linjia site (林家遺址, Línjiā yízhǐ) has the earliest evidence for bronze in China, dating to c. 3000 BCE. [ 11 ]
Bronze technology was imported to China from the steppes. [ 12 ] The oldest bronze object found in China was a knife found at a Majiayao culture site in Dongxiang , Gansu, and dated to 2900–2740 BC. [ 13 ] Further copper and bronze objects have been found at Machang-period sites in Gansu. [ 14 ] Metallurgy spread to the middle and lower Yellow River region in the late 3rd millennium BC. [ 15 ] Contacts between the Afanasievo culture and the Majiayao culture and the Qijia culture have been considered for the transmission of bronze technology. [ 16 ] From around 2000 BCE, cast bronze objects such as the socketed spear with single side hook were imported and adapted from the Seima-Turbino culture . [ 17 ]
The Erlitou culture (c. 1900 – 1500 BCE), Shang dynasty (c. 1600 – 1046 BCE) and Sanxingdui culture (c. 1250 – 1046 BCE) of early China used bronze vessels for rituals (see Chinese ritual bronzes ) as well as farming implements and weapons. [ 18 ] By 1500 BCE, excellent bronzes were being made in China in large quantities, partly as a display of status, and as many as 200 large pieces were buried with their owner for use in the afterlife, as in the Tomb of Fu Hao , a Shang queen.
In the tomb of the first Qin Emperor and multiple Warring States period tombs, extremely sharp swords and other weapons were found, coated with chromium oxide, which made the weapons rust resistant. [ 19 ] [ 20 ] [ 21 ] The layer of chromium oxide used on these swords was 10 to 15 micrometers and left them in pristine condition to this day. Chromium was first scientifically attested in the 18th century. [ 22 ]
New breakthroughs in metallurgy, such as gilt-bronze swords, began to appear south of the Yangzi River in China's southeastern region during the Warring States period. [ 23 ]
There are two types of bronze casting techniques in early China, namely the section mold process and the lost-wax process. The earliest bronze ware found in China is the bronze knife (F20: 18) unearthed at the Majiayao site in Linjia, Dongxiang, Gansu, and dated to about 3000 BC. [ 24 ] This bronze knife was made with the section mold process, using two spliced molds.
The section mold process was the common bronze casting method of the Shang dynasty. Clay is selected and, after filtering, washing, settling, and other procedures, is dried to a moderate hardness and set aside; it is then shaped according to the form of the vessel to be made. There are two types of molds, the inner mold and the outer mold. The inner mold carries only the shape of the bronze ware, without decoration. The outer mold must allow for the division of the bronze ware into sections after casting, that is, the blocking done while the clay model is made, and the decoration and inscriptions of the bronze ware are also carved into it. After the clay molds are finished, they are dried in the shade in a cool place and then fired in a furnace. Once fired, they become the pottery molds unearthed in modern archaeological discoveries.
After firing, the pottery mold is not rushed out of the furnace. Once the copper furnace has melted the required copper, the pottery mold, still retaining residual heat, is taken out and the molten copper is poured in. Because the temperature difference between the molten copper and the pottery mold is small, the mold is not likely to burst, and the quality of the finished product is relatively high. After the copper has been poured, the pottery molds are removed block by block, according to how they were made; any that cannot be removed are broken off with a hammer. The bronze emerges, and after grinding it is the finished product. [ 25 ]
According to some scholars, lost-wax casting was used in China already during the Spring and Autumn period (770 – 476 BCE), although this is often disputed. [ 26 ]
The lost-wax method is used in most parts of the world. As the name suggests, it uses wax as the model: a model is made in wax and its outer layer is coated with clay; the assembly is heated so that the wax melts and flows out (the wax is "lost"); and molten copper is then poured in to fill the cavity left by the wax model.
The development and spread of the lost-wax method in the West never stopped, but the main bronze casting method of the Bronze Age in China was the section mold process. When the lost-wax method was introduced into China is also a topic of academic discussion, but there is no doubt that the lost-wax method already existed in China during the Spring and Autumn period. In 1978, the bronze zun-pan unearthed from the tomb of Marquis Yi of Zeng in Leigudun, Suixian County, Hubei Province, was found to combine the section mold process and the lost-wax method. [ 27 ]
The early Iron Age in China began before 1000 BCE, with the introduction of ironware, such as knives, swords, and arrowheads, from the west into Xinjiang , before it further diffused to Qinghai and Gansu. [ 28 ] In 2008, two iron fragments were excavated at the Mogou site , in Gansu . They have been dated to the 14th century BCE, belonging to the period of Siwa culture . One of the fragments was made of bloomery iron rather than meteoritic iron . [ 29 ]
Cast iron farm tools and weapons were widespread in China by the 5th century BC, with iron smelters employing workforces of over 200 men from the 3rd century onward. The earliest known blast furnaces are attributed to the Han dynasty in the 1st century AD. [ 30 ] [ 31 ] These early furnaces had clay walls and used phosphorus-containing minerals as a flux. [ 32 ] Chinese blast furnaces ranged from around two to ten meters in height, depending on the region. The largest ones were found in modern Sichuan and Guangdong, while the 'dwarf' blast furnaces were found in Dabieshan. In construction, both are around the same level of technological sophistication. [ 33 ]
There is no evidence of the bloomery in China after the appearance of the blast furnace and cast iron. In China, blast furnaces produced cast iron, which was then either converted into finished implements in a cupola furnace, or turned into wrought iron in a fining hearth. [ 34 ] If iron ores are heated with carbon to 1420–1470 K, a molten liquid is formed, an alloy of about 96.5% iron and 3.5% carbon. This product is strong and can be cast into intricate shapes, but is too brittle to be worked unless it is decarburized to remove most of the carbon. The vast majority of Chinese iron manufacture, from the late Zhou dynasty onward, was of cast iron. [ 35 ] However, forged swords began to be made in the Warring States period: "Earliest iron and steel Jian also appear, made by the earliest and most basic forging and folding techniques." [ 36 ] Iron would become, by around 300 BCE, the preferred metal for tools and weapons in China. [ 37 ]
The primary advantage of the early blast furnace was large-scale production, making iron implements more readily available to peasants. [ 38 ] Cast iron is more brittle than wrought iron or steel, which required additional fining and then cementation or co-fusion to produce, but for menial activities such as farming it sufficed. By using the blast furnace, it was possible to produce larger quantities of tools such as ploughshares more efficiently than with the bloomery. In areas where quality was important, such as warfare, wrought iron and steel were preferred. Nearly all Han period weapons are made of wrought iron or steel, with the exception of axe-heads, many of which are made of cast iron. [ 39 ]
The effectiveness of the Chinese human- and horse-powered blast furnaces was enhanced during this period by the engineer Du Shi (c. AD 31), who applied the power of waterwheels to piston-bellows in forging cast iron. [ 40 ] Early water-driven reciprocators for operating blast furnaces were built according to the structure of the horse-powered reciprocators that already existed. That is, the circular motion of the wheel, whether horse-driven or water-driven, was transferred by the combination of a belt drive, a crank-and-connecting-rod, other connecting rods, and various shafts, into the reciprocal motion necessary to operate a push bellows. [ 41 ] [ 42 ]
Donald Wagner suggests that early blast furnace and cast iron production evolved from furnaces used to melt bronze . Certainly, though, iron was essential to military success by the time the State of Qin had unified China (221 BC). Usage of the blast and cupola furnace remained widespread during the Song and Tang dynasties . [ 43 ] By the 11th century, the Song dynasty Chinese iron industry made a switch of resources from charcoal to coke in casting iron and steel, sparing thousands of acres of woodland from felling. This may have happened as early as the 4th century AD. [ 44 ] [ 45 ]
Blast furnaces were also later used to produce gunpowder weapons such as cast iron bomb shells and cast iron cannons during the Song dynasty . [ 46 ]
Shen Kuo 's written work of 1088 contains, among other early descriptions of inventions, a method of repeated forging of cast iron under a cold blast similar to the modern Bessemer process . [ 47 ] [ 48 ] [ 49 ] [ 50 ] [ 51 ] [ 52 ] [ 53 ]
Chinese metallurgy was widely practiced during the Middle Ages; during the 11th century, the growth of the iron industry caused vast deforestation due to the use of charcoal in the smelting process. [ 54 ] [ 55 ] To remedy the problem of deforestation, the Song Chinese discovered how to produce coke from bituminous coal as a substitute for charcoal. [ 54 ] [ 55 ] Although hydraulic-powered bellows for heating the blast furnace had been written about since Du Shi 's (d. 38) invention of them in the 1st century CE, the first known illustration of a bellows in operation is found in a book written in 1313 by Wang Zhen ( fl. 1290–1333). [ 56 ]
Gold-crafting technology developed in Northwest China during the early Iron Age , following the arrival of new technological skills from the Central Asian steppes, even before the establishment of the Xiongnu (209 BCE-150 CE). [ 57 ] These technological and artistic exchanges attest to the magnitude of communication networks between China and the Mediterranean, even before the establishment of the Silk Road . [ 57 ] The sites of Dongtalede (Ch: 东塔勒德, 9th–7th century BCE) in Xinjiang , or Xigoupan (Ch:西沟畔, 4th–3rd century BCE) in the Ordos region of Inner Mongolia , are known for numerous artifacts reminiscent of the Scytho-Siberian art of Central Asia. [ 57 ]
During the Qing dynasty the gold and silver smiths of Ningbo were noted for the delicacy and tastefulness of their work. [ 58 ] [ 59 ] [ 60 ] [ 61 ] [ 62 ] [ 63 ] [ 64 ]
Chinese mythology generally reflects a time when metallurgy had long been practiced. According to the Romanian anthropologist, orientalist, and philosopher Mircea Eliade , the Iron Age produced a large number of rites, myths and symbols; the blacksmith was the main agent of diffusion of mythology, rites and metallurgical mysteries. [ 65 ] The secret knowledge of metallurgists and their powers made them founders of the human world and masters of the spirit world. [ 66 ] This metallurgical model was reinterpreted again by Taoist alchemists.
Some metalworkers illustrate the close relationship between Chinese mystical and sovereign power and the mining and metallurgy industries. Although the name Huangdi is absent from Shang or Zhou inscriptions, it appears in the Spring and Autumn period 's Guoyu and Zuo zhuan . According to Mitarai (1984), Huangdi may have lived in early antiquity and led a regional ethnic group who worshiped him as a deity; [ full citation needed ] "The Yellow Emperor fought Chiyou at Mount Kunwu whose summit was covered with a large quantity of red copper". [ 67 ]
"The seventy-two brothers of Chiyou had copper heads and iron fronts; they ate iron and stones [...] In the province of Ji where Chiyou is believed to have lived (Chiyou shen), when we dig the earth and we find skulls that seem to be made of copper and iron, they are identified as the bones of Chiyou." [ 68 ] Chiyou was the leader of the indigenous Sanmiao (or Jiuli) tribes who defeated Xuanyuan, the future Yellow Emperor. Chiyou, a rival of the Yellow Emperor, belonged to a clan of blacksmiths. The advancement of weaponry is sometimes attributed to the Yellow Emperor and Chiyou, and Chiyou reportedly discovered the process of casting . Kunwu is associated with a people, a royal blacksmith, a mountain which produces metals, and a sword. [ 69 ] Kui , a master of music and dance cited by Shun, was succeeded by Yu the Great . Yu the Great, reported founder of the Xia dynasty (China's first), spent many years working on flood control and is credited with casting the Nine Tripod Cauldrons . Helped by dragons descended from heaven, he died on Mount Xianglu in Zhejiang . [ 70 ] In these myths and legends, mines and forges are associated with leadership. [ 71 ] | https://en.wikipedia.org/wiki/History_of_metallurgy_in_China |
During the thirteenth century, Mosul , Iraq became home to a school of luxury metalwork which rose to international renown. Artifacts classified as Mosul are some of the most intricately designed and revered pieces of the Middle Ages . [ 3 ]
The school of metalwork in Mosul is believed to have been founded in the early 13th century under Zengid patronage. During this time, the Zengid region was operating as a vassal under the Ayyubid Sultanate . Control over Mosul as a city central to trade between China, the Mediterranean, Anatolia, and Mesopotamia was contested between the Zengids and the Ayyubid sultan, Saladin , throughout the early acquisitions of the Ayyubid Sultanate in Syria and Iraq after the decline of Fatimid rule. [ 5 ] However, the Zengids remained in Mosul and were allowed some degree of authority under the Sultanate.
Around 1256, the Mongol occupation of Iraq began, and the region became part of the Ilkhanate. [ 6 ] Of the artifacts agreed to be "nabish al-Mawsili" (of Mosul), approximately 80% were produced after the commencement of Mongol rule in Mosul. [ 7 ] However, it is unclear whether all of these artifacts were produced within Mosul and later exported as esteemed gifts, or created elsewhere by Mosulian artisans who relocated but maintained the "al-Mawsili" signature.
The process of creating these luxuriously inlaid objects is somewhat complicated and has multiple stages. First, designs are formed on the surface of the metal (usually copper or brass) by relief, piercing, engraving, or chasing. Color is then added to the crevices of the surface by encrustation, overlay or, most commonly, inlay of precious metals. These metal inlays could be sheets or wires hammered into place. The area around the inlaid design was often roughened or covered with some sort of black material. Each craftsman in the industry had his own personal specialization, whether in a particular metal, technique, object, or step in the process. [ 8 ] There are two reasons the casting step of the process usually took place in an urban workshop. The first is simply that most patrons were located in these urban areas. The second is that it would have been too difficult to move all of the heavy equipment necessary for casting from one rural location to the next. Inlayers and precious metalworkers were able to travel with ease and were not confined to the workshops as casters were. [ 8 ] Three main inlay innovations are believed to have originated in Mosul in the thirteenth century: gold inlay, black inlay, and background scrolls inlaid with silver. [ 9 ]
The designs themselves are quite varied in subject matter. Some of the popular motifs include astrology, hunting, enthronements, battles, court life, and genre scenes. Genre scenes, images of everyday life, are particularly prominent. [ 8 ] Among the original design traditions there is evidence tracing them to East Asia through the designs within textiles. Mosul had a great textile industry during the same period in which it produced these inlaid objects, and it specialized in reproductions of Chinese silks. It is speculated that many of the traditional metalwork designs were heavily influenced by, or even direct copies of, these silk reproductions. [ 9 ]
Historically, many scholars have argued that the Mongol sack of Mosul led to the demise of the luxury metalworking industry; however, modern scholarship and an abundance of evidence disprove this. For example, it is known that Mosul metalworkers received an imperial commission from Il-Khan Abu Sa'id in the last years of the Ilkhanate. [ 9 ] Not only did Mosul continue to produce elaborate inlaid objects after the Mongol sack, it also altered its traditional stylistic choices to coalesce with Mongol taste. There was a new emphasis on minuscule style, the figures represented reflect the Ilkhanid fashion of the period, and more emphasis was placed on pattern over figuration. [ 9 ]
One of the finest examples of the Mosul school of metalworking is the Blacas Ewer .
Another item tentatively attributed to Mosul is the Courtauld bag , which is thought to be the world's oldest surviving handbag . [ 10 ]
The scholarship surrounding Mosul metalwork has been ongoing for a very long time, since these became some of the first Islamic objets d'art studied in Europe, due to their early arrival on the continent. The diverse opinions on what constitutes Mosul metalwork arise from the style's dispersion across lands and from the signatures which identify creators as "al-Mawsili", meaning "of Mosul". Of the thirty-five signed metalworks, twenty-seven identify their makers as "al-Mawsili". Of those, eight state their provenance through the names of the patrons for whom they were created, along with statements declaring their creation within Mosul. [ 7 ] [ 9 ] Some notable scholars who have helped shape the basis of this study include: Joseph Toussaint Reinaud, Henri Lavoix, Gaston Migeon, Max van Berchem, Mehmed Aga-Oglu, and David Storm Rice.
In the early years of scholarship on Mosul metalwork, around 1828, Joseph Toussaint Reinaud published a collection that included the first item to clearly state its creation in Mosul, the 'Blacas ewer', an artifact consistently scrutinized by scholars exploring the Mosul style. In the 1860s the credibility of Mosul was questioned by scholars; it was during that century that Henri Lavoix declared that Damascus, Aleppo, Mosul, and Egypt all created inlaid metalwork, but specifically singled out Mosul as the source of a unique style unseen elsewhere in the medium. [ 7 ]
A critical point in the scholarship came at the beginning of the 20th century through Gaston Migeon, whose claims for the precedence of Mosul drew objections and created an urgency for reliable evidence. [ 7 ] Migeon also wrote the first comprehensive article introducing inlaid Islamic metalwork. In the following years, Mosul's precedence continued to be asserted and questioned, leading up to David Storm Rice, who released the first series of articles exploring the complexities of multiple objects (including the Blacas Ewer, the Louvre basin, and the Munich tray), a process similar to that of Max van Berchem and Mehmed Aga-Oglu, two scholars who shaped the relevance and viability of Mosul metalwork studies.
Today, Mosul metalwork remains elusive and lacks sustained scholarship, but scholars continue to construct a field built on substantiated evidence from designs, inscriptions, and other items produced specifically in Mosul around the 13th century. [ 7 ] An example of this is an article by Ruba Kana'an, who uses iconography and inscriptions to argue that the Freer Ewer was one of many metalworks made in Mosul. [ 12 ] | https://en.wikipedia.org/wiki/History_of_metallurgy_in_Mosul
The history of metallurgy in the Indian subcontinent began prior to the 3rd millennium BCE. [ 1 ] Metals and related concepts were mentioned in various early Vedic age texts. The Rigveda already uses the Sanskrit term ayas ( Sanskrit : अयस् , romanized : áyas , lit. 'metal; copper; iron'). [ 2 ] The Indian cultural and commercial contacts with the Near East and the Greco-Roman world enabled an exchange of metallurgic sciences. [ 3 ] The advent of the Mughals (1526–1857) further improved the established tradition of metallurgy and metalworking in India. [ 4 ] During the period of British rule in India (first by the East India Company and then by the Crown), the metalworking industry in India stagnated due to various colonial policies, though efforts by industrialists led to the industry's revival during the 19th century.
Recent excavations in the Middle Ganga Valley done by archaeologist Rakesh Tewari show that iron working in India may have begun as early as 1800 BCE. [ 5 ] Archaeological sites in India, such as Malhar, Dadupur, Raja Nala Ka Tila and Lahuradewa in the state of Uttar Pradesh show iron implements in the period between 1800 BCE and 1200 BCE. Sahi (1979: 366) concluded that by the early 13th century BCE, iron smelting was definitely practiced on a larger scale in India, suggesting that the date of the technology's inception may well be placed as early as the 16th century BCE. [ 6 ] However, reviewing the claims of early use of iron during c. 1800-1000 BCE, archaeologist Suraj Bhan noted, "the stratigraphical context and chronology of iron is not beyond doubt" at these sites (namely Malhar, Dadupur, and Lahuradeva), although "there is no doubt" that iron was being used in the Ganges Plains "a few centuries before the rise of urbanization [...] around 600 BCE". [ 7 ]
The Black and Red Ware culture was another early Iron Age archaeological culture of the northern Indian subcontinent . It is dated to roughly the 12th – 9th centuries BCE, and associated with the post- Rigvedic Vedic civilization . It extended from the upper Gangetic plain in Uttar Pradesh to the eastern Vindhya range and West Bengal .
Perhaps as early as 500 BCE, although certainly by 200 CE, high quality steel was being produced in southern India by what Europeans would later call the crucible technique . In this system, high-purity wrought iron, charcoal, and glass were mixed in crucibles and heated until the iron melted and absorbed the carbon. The resulting high-carbon steel, called fūlāḏ by the Arabs ( Arabic : فولاذ , romanized : fūlāḏ , lit. 'steel; wootz') and wootz by later Europeans, was exported throughout much of Asia and Europe.
Will Durant wrote in The Story of Civilization I: Our Oriental Heritage :
"Something has been said about the chemical excellence of cast iron in ancient India, and about the high industrial development of the Gupta times, when India was looked to, even by Imperial Rome , as the most skilled of the nations in such chemical industries as dyeing , tanning , soap -making, glass and cement ... By the sixth century the Hindus were far ahead of Europe in industrial chemistry; they were masters of calcinations , distillation , sublimation , steaming , fixation , the production of light without heat , the mixing of anesthetic and soporific powders, and the preparation of metallic salts , compounds and alloys . The tempering of steel was brought in ancient India to a perfection unknown in Europe till our own times; King Porus is said to have selected, as a specially valuable gift for Alexander , not gold or silver, but thirty pounds of steel. The Moslems took much of this Hindu chemical science and industry to the Near East and Europe ; the secret of manufacturing "Damascus" blades , for example, was taken by the Arabs from the Persians , and by the Persians from India."
The Sanskrit term ayas means metal and can refer to bronze , copper or iron .
The Rigveda refers to ayas , and also states that the Dasyus had ayas (RV 2.20.8). In RV 4.2.17, "the gods [are] smelting like copper /metal ore the human generations".
The references to ayas in the Rig Veda probably refer to bronze or copper rather than to iron. [ 8 ] Scholars like Bhargava [ 9 ] maintain that the Rigveda was composed in the Vedic state of Brahmavarta , where the Khetri copper mines formed an important location, and that Vedic people used copper extensively in agriculture, water purification, tools, utensils, etc. D. K. Chakrabarti (1992) argued: "It should be clear that any controversy regarding the meaning of ayas in the Rgveda or the problem of the Rgvedic familiarity or unfamiliarity with iron is pointless. There is no positive evidence either way. It can mean both copper-bronze and iron and, strictly on the basis of the contexts, there is no reason to choose between the two."
The Arthashastra lays down the roles of the Director of Metals, the Director of Forest Produce and the Director of Mines. [ 10 ] It is the duty of the Director of Metals to establish factories for different metals. The Director of Mines is responsible for the inspection of mines . The Arthashastra also refers to counterfeit coins . [ 10 ]
There are many references to ayas in the early Indian texts. [ 11 ]
The Atharvaveda and the Shatapatha Brahmana refer to kṛṣṇa-ayas ( Sanskrit : कृष्णायस् , romanized : kṛṣṇāyas / kṛṣṇa-ayas , lit. 'black metal'), which could be iron (but possibly also iron ore and iron items not made of smelted iron). There is also some controversy over whether the term śyāma-ayas ( Sanskrit : श्यामायस् , romanized : śyāmāyas / śyāma-ayas , lit. 'black metal') refers to iron. In later texts the term refers to iron . In earlier texts, it could possibly also refer to darker-than-copper bronze , an alloy of copper and tin . [ 12 ] [ 13 ] Copper can also become black when heated. [ 14 ] Oxidation with the use of sulphides can produce the same effect. [ 14 ] [ 15 ]
The Yajurveda appears to show familiarity with iron. [ 10 ] In the Taittiriya Samhita there are references to ayas and at least one reference to smiths . [ 10 ] The Satapatha Brahmana 6.1.3.5 refers to the smelting of metallic ore. [ 16 ] In the Manu Smriti (6.71), the following analogy is found: "For as the impurities of metallic ores, melted in the blast (of a furnace), are consumed, even so the taints of the organs are destroyed through the suppression of the breath." Metal was also used in agriculture , and the Buddhist text Suttanipata has the following analogy: "for as a ploughshare that has got hot during the day when thrown into the water splashes, hisses and smokes in volumes..." [ 10 ]
In the Charaka Samhita an analogy occurs that probably refers to the lost wax technique. [ 16 ] The Silpasastras (the Manasara , the Manasollasa (Abhilashitartha Chintamani) and the Uttarabhaga of Silparatna ) describe the lost wax technique in detail. [ 16 ]
The Silappadikaram says that copper-smiths were in Puhar and in Madura . [ 16 ] According to the History of the Han Dynasty by Ban Gu , Kashmir and "Tien-chu" were rich in metals. [ 16 ]
The post-1400 CE treatise Rasaratnakara deals with preparations of rasa ( mercury ) compounds. [ 17 ] It gives a survey of the status of metallurgy and alchemy in the land. The extraction of metals such as silver, gold, tin and copper from their ores and their purification are also mentioned in the treatise. The Rasa Ratnasamuccaya describes the extraction and use of copper. [ 18 ]
Chakrabarti (1976) has identified six early iron-using centres in India: Baluchistan , the Northwest, the Indo-Gangetic divide and the upper Gangetic valley, eastern India, Malwa and Berar in central India and the megalithic south India. [ 10 ] The central Indian region seems to be the earliest iron-using centre. [ 19 ]
According to Tewari, iron use and iron working "was prevalent in the Central Ganga Plain and the Eastern Vindhyas from the early 2nd millennium BC." [ 20 ]
The earliest evidence for smelted iron in India dates to 1300 to 1000 BCE. [ 21 ] These early findings also occur in places like the Deccan , and the earliest evidence for smelted iron occurs in Central India, not in north-western India. [ 22 ] Moreover, the dates for iron in India are not later than those of Central Asia, and according to some scholars (e.g. Koshelenko 1986) the dates for smelted iron may actually be earlier in India than in Central Asia and Iran. [ 23 ] The Iron Age did not, however, necessarily imply a major social transformation, and Gregory Possehl wrote that "the Iron Age is more of a continuation of the past than a break with it". [ 24 ]
Archaeological data suggests that India was "an independent and early centre of iron technology." [ 25 ] According to Shaffer, the "nature and context of the iron objects involved [of the BRW culture] are very different from early iron objects found in Southwest Asia." [ 26 ] In Central Asia, the development of iron technology was not necessarily connected with Indo-Iranian migrations either. [ 27 ]
J.M. Kenoyer (1995) also remarks that there is a "long break in tin acquisition" necessary for the production of "tin bronzes" in the Indus Valley region, suggesting a lack of contact with Baluchistan and northern Afghanistan, or the lack of migrants from the north-west who could have procured tin.
The copper - bronze metallurgy in the Harappan civilization was widespread and had a high variety and quality. [ 28 ] The early use of iron may have developed from the practice of copper-smelting. [ 29 ] While there is to date no proven evidence for smelted iron in the Indus Valley civilization , iron ore and iron items have been unearthed in eight Indus Valley sites, some of them dating to before 2600 BCE. [ 30 ] There remains the possibility that some of these items were made of smelted iron, and the term " kṛṣṇa-ayas " might possibly also refer to these iron items, even if they are not made of smelted iron.
Lothali copper is unusually pure, lacking the arsenic typically used by coppersmiths across the rest of the Indus valley. Workers mixed tin with copper for the manufacture of celts , arrowheads, fishhooks, chisels, bangles, rings, drills and spearheads, although weapon manufacturing was minor. They also employed advanced metallurgy in following the cire perdue technique of casting, and used more than one-piece moulds for casting birds and animals. [ 31 ] They also invented new tools such as curved saws and twisted drills unknown to other civilizations at the time. [ 32 ]
Copper technology may date back to the 4th millennium BCE in the Himalaya region. [ 18 ] Copper was among the first metals used in metallurgy, and copper and its alloys were also used to create copper-bronze images such as Buddhas or Hindu/ Mahayana Buddhist deities. [ 16 ] Xuanzang also noted that there were copper-bronze Buddha images in Magadha . [ 16 ] In Varanasi , each stage of the image manufacturing process is handled by a specialist. [ 33 ]
Other metal objects made by Indian artisans include lamps . [ 34 ] Copper was also a component in the razors for the tonsure ceremony. [ 16 ]
One of the most important sources of history in the Indian subcontinent are the royal records of grants engraved on copper-plate grants (tamra-shasan or tamra-patra). Because copper does not rust or decay, these plates can survive indefinitely. Collections of archaeological texts from the copper-plates and rock-inscriptions have been compiled and published by the Archaeological Survey of India during the past century. The earliest known copper-plate, the Sohgaura copper-plate , is a Maurya record that mentions famine relief efforts. It is one of the very few pre- Ashoka Brahmi inscriptions in India.
Brass was used in Lothal and Atranjikhera in the 3rd and 2nd millennium BCE. [ 35 ] Brass and probably zinc was also found at Taxila in 4th to 3rd century BCE contexts. [ 36 ]
The deepest gold mines of the ancient world were found in the Maski region in Karnataka. [ 37 ] There were ancient silver mines in northwest India, dated to the middle of the 1st millennium BCE. Gold and silver were also used for making utensils for the royal family and the nobility, and the royal family wore costly fabrics embroidered or woven with thin gold and silver fibres.
Recent excavations in the Middle Ganges Valley show iron working in India may have begun as early as 1800 BCE. [ 38 ] In the 5th century BCE, the Greek historian Herodotus observed that "Indian and the Persian army used arrows tipped with iron." [ 39 ] Ancient Romans used armour and cutlery made of Indian iron, and Pliny the Elder also mentioned Indian iron. [ 39 ] Muhammad al-Idrisi wrote that the Hindus excelled in the manufacture of iron, and that it would be impossible to find anything to surpass the edge of Hindwani steel. [ 40 ] Quintus Curtius wrote about an Indian present of steel to Alexander. [ 41 ] Ferrum indicum appeared in the list of articles subject to duty under Marcus Aurelius and Commodus . [ 10 ] Indian Wootz steel was held in high regard in Europe, and Indian iron was often considered to be the best. [ 42 ]
The first form of crucible steel was wootz , developed in India some time around 300 BCE. In its production the iron was mixed with glass, slowly heated, and then cooled. As the mixture cooled, the glass would bond to impurities in the steel and float to the surface, leaving the steel considerably purer. Carbon could enter the iron by diffusing in through the porous walls of the crucibles. Carbon dioxide would not react with the iron, but the small amounts of carbon monoxide could, adding carbon to the mix with some level of control. Wootz was widely exported throughout the Middle East , where it was combined with a local production technique around 1000 CE to produce Damascus steel , famed throughout the world. [ 43 ] Wootz derives from the Tamil term for steel urukku . [ 44 ] Indian wootz steel was the first high-quality steel ever produced.
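In modern chemical terms, the carburization step described above corresponds to the disproportionation of carbon monoxide at the surface of the hot iron, depositing carbon that then dissolves into the metal (a standard result of metallurgical chemistry, stated here for clarity rather than drawn from the cited sources):

$$2\,\mathrm{CO} \longrightarrow \mathrm{CO_2} + \mathrm{C}_{\text{(dissolved in iron)}}$$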
Henry Yule quoted the 12th-century Arab geographer Edrizi, who wrote: "The South Indians excel in the manufacture of iron, and in the preparations of those ingredients along with which it is fused to obtain that kind of soft iron which is usually styled Indian steel. They also have workshops wherein are forged the most famous sabres in the world. ... It is not possible to find anything to surpass the edge that you get from Indian steel (al-hadid al-Hindi)." [ 39 ]
As early as the 17th century, Europeans knew of India's ability to make crucible steel from reports brought back by travelers who had observed the process at several places in southern India. Several attempts were made to import the process, but they failed because the exact technique remained a mystery. Studies of wootz were made in an attempt to understand its secrets, including a major effort by the famous scientist Michael Faraday , son of a blacksmith . Working with a local cutlery manufacturer, he wrongly concluded that it was the addition of aluminium oxide and silica from the glass that gave wootz its unique properties.
After the Indian Rebellion of 1857 , many Indian wootz steel swords were ordered to be destroyed by the East India Company . The metalworking industry in India went into decline during the period of British Crown control due to various colonial policies, but steel production was revived in India by Jamsetji Tata . [ 39 ]
Zinc was extracted in India as early as the 4th to 3rd century BCE. Zinc production may have begun in India, and ancient northwestern India is the earliest known civilization that produced zinc on an industrial scale. [ 45 ] The distillation technique was developed around 1200 CE at Zawar in Rajasthan . [ 35 ]
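Distillation was needed because of zinc's chemistry: the metal boils at 907 °C, below the temperature required for charcoal to reduce the ore, so the zinc leaves the furnace as a vapor and is lost unless condensed. In modern notation, the reduction can be written as

$$\mathrm{ZnO} + \mathrm{C} \longrightarrow \mathrm{Zn}_{(g)} + \mathrm{CO}$$

The Zawar retorts are understood to have directed this vapor downward into cooler condensers, where it collected as metallic zinc.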
In the 17th century, China exported zinc to Europe under the names totamu or tutenag. The term tutenag may derive from the South Indian term Tutthanagaa (zinc). [ 46 ] In 1597, the metallurgist Libavius received in England a quantity of zinc metal and named it Indian/Malabar lead. [ 47 ] In 1738, William Champion is credited with patenting in Britain a process to extract zinc from calamine in a smelter, a technology that bore a strong resemblance to, and was probably inspired by, the process used in the Zawar zinc mines in Rajasthan . [ 39 ] His first patent application was rejected by the patent court on the grounds that it plagiarised technology common in India, but he was granted the patent on his second submission. Postlewayt 's Universal Dictionary of 1751 still showed no awareness of how zinc was produced. [ 36 ]
The Arthashastra describes the production of zinc. [ 48 ] The Rasaratnakara by Nagarjuna describes the production of brass and zinc. [ 49 ] There are references to medicinal uses of zinc in the Charaka Samhita (300 BCE). The Rasaratna Samuchaya (800 CE) explains the existence of two types of ores for zinc metal, one of which is ideal for metal extraction while the other is used for medicinal purposes. [ 50 ] It also describes two methods of zinc distillation. [ 36 ]
Some of the early iron objects found in India have been dated to 1400 BCE by radiocarbon dating. [ 51 ] Spikes , knives , daggers , arrowheads, bowls , spoons , saucepans , axes , chisels , tongs , door fittings, etc., ranging from 600 BCE to 200 BCE, have been discovered at several archaeological sites. [ 51 ] In Southern India (present day Mysore ) iron appeared as early as the 12th or 11th century BCE. [ 52 ] These developments were too early for any significant close contact with the northwest of the country. [ 52 ]
The earliest available Bronze Age swords of copper, discovered at the Harappan sites in Pakistan, date back to 2300 BCE. [ 53 ] Swords have been recovered in archaeological findings throughout the Ganges - Jamuna Doab region of India, consisting of bronze but more commonly copper . [ 53 ] Diverse specimens have been discovered in Fatehgarh , where there are several varieties of hilt. [ 53 ] These swords have been variously dated to periods between 1700 and 1400 BCE, but were probably used more extensively during the opening centuries of the 1st millennium BCE. [ 53 ]
The beginning of the 1st millennium BCE saw extensive developments in iron metallurgy in India. [ 52 ] Technological advancement and mastery of iron metallurgy were achieved during this period of peaceful settlement. [ 52 ] Several advances in metallurgical technology were made during the politically stable Maurya period (322–185 BCE). [ 54 ] The Greek historian Herodotus, writing in the 5th century BCE, gave the first Western account of the use of iron in India. [ 51 ]
Perhaps as early as 300 BCE, although certainly by 200 CE, high-quality steel was being produced in southern India by what Europeans would later call the crucible technique. [ 55 ] In this system, high-purity wrought iron, charcoal, and glass were mixed in a crucible and heated until the iron melted and absorbed the carbon. [ 55 ] The first crucible steel was the wootz steel that originated in India before the beginning of the common era. [ 56 ] Wootz steel was widely exported and traded throughout ancient Europe, China, and the Arab world, and became particularly famous in the Middle East , where it became known as Damascus steel . Archaeological evidence suggests that this manufacturing process was already in existence in South India well before the common era. [ 57 ] [ 58 ]
The zinc mines of Zawar , near Udaipur , Rajasthan , were active around 400 BCE. [ 59 ] There are references to medicinal uses of zinc in the Charaka Samhita (300 BCE). [ 59 ] The Periplus Maris Erythraei mentions weapons of Indian iron and steel being exported from India to Greece. [ 60 ]
The world's first iron pillar, the Iron pillar of Delhi , was erected in the time of Chandragupta II Vikramaditya (375–413) and is often considered one of the finest pieces of ancient metallurgy. [ 62 ] [ 63 ] The swords manufactured in Indian workshops find written mention in the works of Muhammad al-Idrisi (flourished 1154). [ 64 ] Indian blades made of Damascus steel found their way into Persia . [ 60 ] During the 14th century, European scholars studied Indian casting and metallurgy technology. [ 65 ] The Rasaratna Samuccaya (16th century CE) [ 17 ] explains the existence of two types of ores for zinc metal, one of which is ideal for metal extraction while the other is used for medicinal purposes. [ 59 ] Indian metallurgy under the Mughal emperor Akbar (reign: 1556–1605) produced excellent small firearms. [ 66 ] Gommans (2002) holds that Mughal handguns were probably stronger and more accurate than their European counterparts. [ 67 ]
Srivastava & Alam (2008) comment on Indian coinage of the Mughal Empire (1526–1857) during Akbar's regime: [ 68 ]
Akbar reformed Mughal currency to make it one of the best known of its time. The new regime possessed a fully functioning trimetallic (silver, copper, and gold) currency, with an open minting system in which anyone willing to pay the minting charges could bring metal or old or foreign coin to the mint and have it struck. All monetary exchanges were, however, expressed in copper coins in Akbar's time. In the 17th century, following the silver influx from the New World , silver rupee with new fractional denominations replaced the copper coin as a common medium of circulation. Akbar's aim was to establish a uniform coinage throughout his empire; some coins of the old regime and regional kingdoms also continued.
Statues of Nataraja and Vishnu were cast during the reign of the imperial Chola dynasty (200–1279) in the 9th century. [ 65 ] The casting could involve a mixture of five metals: copper, zinc, tin, gold, and silver. [ 65 ] Considered a great feat in metallurgy, the hollow, seamless celestial globe was invented in Kashmir by Ali Kashmiri ibn Luqman in 998 AH (1589-90 CE), and twenty other such globes were later produced in Lahore and Kashmir during the Mughal Empire . [ 69 ] These Indian metallurgists pioneered the method of lost-wax casting with disguised plugs in order to produce the globes. [ 69 ]
The first iron-cased and metal-cylinder rockets ( Mysorean rockets ) were developed by the Mysorean army of the South Indian Kingdom of Mysore in the 1780s. [ 70 ] The Mysoreans successfully used these iron-cased rockets against the Presidency armies of the East India Company during the Anglo-Mysore Wars . [ 70 ]
Modern steel making in India began with the setting up of India's first blast furnace at Kulti by Bengal Iron Works in 1870; production began in 1874. The Ordnance Factory Board established the Metal & Steel Factory (MSF) at Calcutta in 1872. [ 72 ] [ 73 ] The Tata Iron and Steel Company (TISCO) was established by Dorabji Tata in 1907, as part of his father's conglomerate. By 1939 Tata operated the largest steel plant in the British Empire, and accounted for a significant proportion of the 2 million tons of pig iron and 1.13 million tons of steel produced in British India annually. [ 74 ] [ 75 ] | https://en.wikipedia.org/wiki/History_of_metallurgy_in_the_Indian_subcontinent
The history of metallurgy in the Urals is regarded by historians and economists as a separate stage in the history of Russian industry, covering the period from the 4th millennium BC to the present day. [ 1 ] The emergence of the mining district as a form of organization is connected with the history of Ural metallurgy . The geography of Ural metallurgy covers the territories of modern Perm Krai , Sverdlovsk Oblast , Udmurtia , Bashkortostan , Chelyabinsk Oblast and Orenburg Oblast . [ 2 ]
In the 18th century, historians distinguish periods of formation and development of industrial metallurgical centers in Urals metallurgy: more than two hundred metallurgical factories were rapidly built and grew economically from the 18th century through the first half of the 19th century, [ 3 ] until the abolition of serfdom in the Russian Empire on February 19, 1861, reduced the labor force. [ 4 ] A sharp drop in production rates in the early 1900s was followed by recovery and growth by 1913. In the 20th century, after recovering from the decline caused by the Russian Revolutions (1905, February 1917, and October 1917) and the Russian Civil War (November 1917 - June 1923), [ 5 ] Ural metallurgy had a strategic impact on ensuring the defense of the USSR on the Eastern Front of World War II, known in Russia as the Great Patriotic War. In the 21st century, the development of metallurgical enterprises in the Urals is associated with the formation of vertically integrated full-cycle companies.
The main milestones in the development of metal production technologies in the Urals include the transition from the bloomery, the old method of iron production, to the Kontuazsky (Comtois) forge for remelting heavy scrap [ 6 ] and to the puddling method [ 7 ] in the second half of the 19th century; the introduction of hot blast at the end of the 19th century; the transition to coke fuel and the introduction of steam engines ; and, finally, the development of the open-hearth and Bessemer methods of steel production at the beginning of the 20th century.
The first period of metallurgical production in the Urals dates back to the 4th and 3rd millennia BC. During the Bronze Age , primitive copper-bronze metallurgy developed among the pastoral tribes of the Urals. The development of the Kargalinsky copper ore deposit, located along the Kargalka and Yangiz Rivers, began during this period. [ 8 ] In the first half of the third millennium BC, centers of copper metallurgy formed in the Western Urals and in the Kama region, their ore base provided by numerous deposits of copper sandstone . [ 9 ]
The second millennium BC was characterized by the massive spread of copper-bronze metallurgy practically throughout the Urals, and by the development of new technologies and metal processing. The Seima-Turbino phenomenon of the distribution of high-quality bronze products across the vast expanses of the forest-steppe zone of Eurasia belongs to this period. [ 10 ] [ 11 ] The centers of metallurgy of the Southern Urals of the 2nd millennium BC include settlements of the Sintashta , Abashevo and Arkaim cultures. [ 12 ] The development of bronze metallurgy in the Urals was hindered by the lack of tin deposits; it was the alloying of copper with tin that allowed high-quality bronze to be obtained. [ note 1 ] Therefore, the metal objects found in excavations of Bronze Age settlements are mainly products made of plain copper and arsenic bronze. [ 14 ]
During the period from the end of the 2nd millennium BC to the beginning of the 1st millennium BC, the most ore-rich areas of the southern Ural copper mines were depleted and abandoned. In the middle of the first millennium BC, metallurgical production was mastered by representatives of the Srubnaya culture . In the second half of the 1st millennium BC, there were isolated pockets of the Ananyino culture in the Kama-Volga region and the Itkul culture in the Urals. [ 14 ]
The appearance of iron in the Urals dates back to the 1st millennium BC. In the Kama-Volga region iron products were made from the 8th to the 6th centuries BC, and in the Ural Mountains from the 5th to the 4th centuries BC. In general, the massive penetration of primitive iron metallurgy with the use of forges into the Urals began in the middle of the 1st millennium BC. Forest tribes of the northern Urals and of the north of Western Siberia mastered iron metallurgy by the end of the 1st millennium BC. In the settlements of the Gorokhovo and Kara-Abyz cultures, iron products were found in use alongside bronze ones. [ 15 ]
The 1st millennium AD was characterized by the widespread distribution of iron in the Urals and Western Siberia . The oldest blast furnace in the Urals, belonging to the Pyanoborsk culture , was discovered by Vladimir Gening at the settlement of Cheganda I, on the territory of modern Udmurtia . In the settlements of the Upper Kama region at the beginning of the Iron Age , metallurgical production had separated into a distinct craft, becoming the specialization of entire villages or parts of them. The spread of iron crafts was facilitated by the resettlement of the Ugric tribes of the Petrogrom culture into the Urals. Remains of iron-smelting furnaces of the 6th-9th centuries were found during excavations of hill forts near modern Yekaterinburg . [ 16 ] [ 17 ]
In the 11th to the 13th centuries, metal goods made by Western European artisans began to penetrate the Urals through trade routes, which contributed to the expansion of the range of products smelted. Excavations at the Kama settlements or hill forts of Idnakar , Vasyakar, Dondykar , Kushmansky , and others have shown that in the 11th to the 15th centuries the main unit for smelting iron [ note 2 ] was a blast furnace . The metalworking complexes consisted of forges and tool kits. The development of heat treatment and welding of metals proceeded unevenly throughout the Urals. [ 20 ] In the 1st millennium, the main products of metallurgists were items for military and hunting purposes: arrowheads, spears, axes, knives, and fishhooks. From the beginning of the second millennium, agricultural implements began to predominate. [ 21 ]
By the end of the 1st millennium, ore mining and local copper-bronze and iron production in the Urals had gradually ceased due to the depletion of accessible resources, competition with more developed cultures, and the ethnographic changes then under way. The penetration of Russians into the Urals, driven mainly by the abundance of furs in the region, brought new technologies with it, including metallurgical ones. In the 17th to the 18th centuries, abandoned ancient mines served as a kind of indicator for prospectors searching for ore. With the help of such finds, the Gumeshev and Kargalin deposits of copper ores, the deposits of the Verkh-Isetsky (Upper Iset) and Kyshtym mining districts, as well as the Mednorudyansk deposit were discovered. [ 22 ] [ 23 ] [ 24 ]
During the period of active colonization of the Urals, which began in the 14th to early 15th centuries, there were rumors of subsurface deposits in Perm land and Yugra . But because conditions were dangerous for settlers owing to the indigenous population, industrial development of the land was practically not carried out. In 1491, Ivan III sent an expedition to the Northern Urals, to Pechora , with the task of searching for silver and copper ores. As a result, a small silver ore deposit was discovered on the Tsilma River, which was quickly developed. Ivan IV declared the prospecting and mining of ores a state monopoly, and in 1567-1568 he sent an expedition to search for silver and copper ores on the Yayva River ; the expedition ended without success. In 1568, Ivan IV allocated extensive lands in the Kama region to Y. A. Stroganov with permission to use iron ores; he was, however, banned from working silver, copper and tin ores and had to report their discovery to Moscow immediately. [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ]
The active resettlement of Russians to the Urals was facilitated by the agrarian crisis in the agricultural central part of Russia at the end of the 16th century . From 1579 to 1678 the Russian population of Great Perm increased from 2,197 to 11,811 households, more than fivefold. By 1724, the population of the Urals was already about 1 million people, while the total population of Russia was about 14 million. [ 32 ]
Until the beginning of the 17th century, all Ural and Russian metallurgy consisted of local handicraft production in small peasant blast furnaces and forges , in which all the processes for obtaining finished products were concentrated. [ 33 ]
Beginning in 1618, the government organized expeditions to the Urals and Siberia on an almost continual basis to search for ore deposits. The practice of issuing permits was also used, which made it possible to search for ores throughout the territory of the state. [ 34 ]
In the 16th-17th centuries, primitive blast furnaces were built by peasant families in the forests adjacent to their villages. The resulting metal blooms were worked into iron in forges or sold. [ 35 ] It is known that 40 years before the arrival of Georg Wilhelm de Gennin in the Urals, the peasants of the Aramil settlement smelted iron in small furnaces and sold it, paying tithes to the district office. [ 36 ] [ 37 ] Even at the beginning of the 18th century, the smelting of ore in small blast furnaces [ note 3 ] was widespread in many regions of the Urals. In 1720-1722, the artisanal farms of the Kungur district produced 3 thousand poods of iron, 203 poods of strip iron and 897 poods of other varieties. [ 38 ] [ 39 ] [ 40 ] Subsequently, artisanal metallurgical production was legally prohibited on the initiative of G. W. de Gennin. [ 41 ] [ 42 ]
In the 1630s, with the involvement of foreign engineers, the construction of arms-making metallurgical factories began in the central part of Russia. [ 43 ] Despite the construction of more than 20 state and private factories in the central region in the 17th century, the country experienced a shortage of metal and continued to buy it abroad; in 1629, 25 thousand poods of iron bars were bought in Sweden . [ 44 ] To meet the needs of the Ural and Siberian enterprises (primarily salt-making) and of the Russian settlements, iron was purchased in the central regions, and the cost of the metal increased sharply with distance to the east due to transport costs. [ 45 ] The impetus for the development of the Ural industry at the beginning of the 17th century was the authorities' plans to create metallurgical enterprises in the eastern regions of Russia. After his trip abroad, Peter I , realizing the shortage of charcoal in the central regions and the need to strengthen weapons potential, ordered the construction of mining plants in the Urals, providing them with engineers from Tula, Kashira and other factories. The Ural factories were built on the model of the factories of central Russia, which, in turn, had been created following French, German and Swedish types. [ 46 ] The rapid development of the metallurgical industry in the Urals in the 17th-18th centuries was facilitated by the region's abundance of rich, naturally alloyed ores (containing copper, chromium and vanadium ), as well as by the availability of accessible forest and water resources. [ 47 ] The lack of railways led to the development of a large number of small mines. Iron ore reserves were considered practically inexhaustible, while copper ore reserves, on the contrary, were quickly depleted, which led to the closure of 40 copper smelters in the Western Urals in the late 17th - first half of the 18th century. [ 48 ] [ 49 ] [ 50 ] [ 51 ]
In the absence of domestic specialists in mining and metallurgy, craftsmen were invited from abroad, but they worked mainly in the central regions of the country. In 1618-1622 the Englishman John Water, and in 1626 Fritsch, Gerold and Bulmerr, together with Russian attendants, carried out fruitless expeditions to search for ores in the region of the upper Kama and the Pechora . Others, such as the Bergman brothers, also searched unsuccessfully for ore in 1626 in the Cherdyn region. Only in 1635 did the Saxon Aris Petzold and the Moscow merchant Nadeya Sveteshnikov find two copper deposits, which became the basis of the first copper smelter in the Urals, the Pyskorsky plant. [ 52 ] [ 53 ] The failures of the geological exploration expeditions at the beginning of the 17th century forced the state to weaken its monopoly on exploration for non-ferrous and precious metals . Major rewards were promised for discovered deposits, and this decision was followed by a series of discoveries of new deposits of copper and iron in the Urals. [ 54 ] [ 55 ] In particular, thanks to local residents who brought samples of bog ore to the offices of the Turin and Tobolsk governors for a fee, the ore base of the first iron-making plant in the Urals, the Nitsynsky plant, was discovered. In the 1670s, expeditions that had found no ore in the Penza district began to advance to the Urals and found silver ores along the banks of the Kama, Yayva and Kosva . [ 56 ] [ 57 ] [ 28 ] [ 54 ]
Government incentives for discovered ores led to a sharp increase in exploration activity in the Urals. In the second half of the 17th century, the focus of prospecting shifted from the Kama region to the Verkhotursky district , where a number of large copper and iron deposits were discovered. [ 58 ] In 1669-1674, the state organized an expedition to the Trans-Urals to search for silver and gold ores, but no suitable ore was found. Rich ores were found only at the end of the 17th century, far beyond the Urals in the valley of the Argun River ; on their basis the first Russian silver-smelting plant, the Nerchinsk plant, was launched in 1704. [ 59 ] [ 60 ]
In general, Ural metallurgy in the 17th century did not go beyond artisanal production; it was the central regions that developed further during this period. [ 61 ] [ 62 ]
With the appearance of the first factories in the Urals and the establishment of production and economic relations between the authorities and the owners, pronounced features of a subsistence economy emerged: everything necessary for production was prepared and made at the factories themselves. Mining factories [ note 5 ] had their own landholdings, mines, quarries, forestry, stable yards, hayfields, marinas, courts, mills, and various auxiliary workshops. Such industrial and economic complexes were called mining districts and were legally described in the Mining Regulations of 1806. [ 66 ] [ 67 ] [ 68 ] [ 69 ] The first mining plants of the Urals were fortified settlements with defensive structures to protect them from raids by the Bashkirs . [ 70 ] [ 71 ] [ 72 ]
In total, about 250-260 mining plants of various specializations were built in the Urals and the Kama region: iron foundries, copper smelters, and iron-making and processing plants; in Russia as a whole there were about 500 mining plants. [ note 6 ] [ 64 ] The first Ural ironworks of the 17th century did not have blast furnaces and were small forges of several smelting furnaces. [ 74 ] [ 51 ] Such factories include Nitsynsky (founded in 1630), [ 75 ] Krasnoborsky (1640), Tumashevsky (1669), the Dalmatovsky Monastery's Zhelezenskoye settlement (1683; the Kamensky iron foundry was later founded on the site of the plant) and the plant in Aramashevskaya Sloboda (1654). [ 76 ] [ 77 ] [ 78 ] The first full-fledged mining plants in the Urals were the Nevyansky and Kamensky plants, founded in 1699-1700 and equipped with blast furnaces; the last mining plant, Ivano-Pavlovsky, was launched in 1875, and thereafter it was metallurgical plants and mills that were built. [ 79 ] [ 80 ] [ 81 ] [ 82 ] [ 50 ] The beginning of the history of the mining industry in the Urals is conventionally dated to January 1697, when Governor D. M. Protasyev reported to Moscow the discovery of iron ore on the Tagil and Neyva rivers. [ 83 ] The iron obtained from this ore was studied by Moscow gunsmiths and the Tula blacksmith N. D. Antufiev (Demidov) and was given a high appraisal. [ 84 ] On May 10 and June 15, 1697, decrees were issued for the construction of the first Ural factories. The date of birth of Ural metallurgy is considered to be 1701, when the blast-furnace plants were launched and produced the first cast iron. [ 85 ] [ 86 ] [ 87 ]
A specific feature of the Ural mining plants was the obligatory presence of a dam and a pond , which ensured the operation of factory mechanisms through water wheels . Mining plants were therefore built in close proximity to ore deposits and rivers. [ 88 ] In a drought, when the water level in a navigable river fell, the passage of ships was ensured by the synchronized discharge of water from several factory ponds located on the tributaries. The supply of charcoal was provided by the vast forest dachas assigned to the factories. The dams of large factories reached 200–300 m and more in length (the largest, the dam of the Byngovsky plant, was 695 m long), 30–40 m in width, and 6–10 m in height. [ 89 ] Owing to the climatic conditions of the Urals, a large volume of water had to be maintained in the pond to keep it from freezing through in winter. The complete dependence of the factories on the availability of water in the ponds led to frequent shutdowns of enterprises, or of their individual shops, for up to 200 days a year. [ 90 ] Various methods were used to increase the water pressure: connecting ponds through channels with lakes or other ponds, and replenishing ponds from high-mountain reservoirs through gutters. [ 91 ] Another difference from European dams was the presence of pine or larch log cribs with valves to regulate the water level in the pond. A wide slot (up to 10 m and more), or "veshnyak", [ note 7 ] served to release excess water during spring floods or in summer after heavy rains. A narrower working slot (about 2 m wide) supplied water to a conduit, a wooden trough laid along the entire length of the plant's territory, from which water was delivered by a system of wooden pipes and gutters to the impellers of the numerous plant mechanisms. The dams of large factories had several slots. All production buildings were located along the working slots, with the operations that required the most energy to drive their mechanisms placed closest to the dam. Directly behind the dam there was usually the blast-furnace shop; behind it, the forge (bloomery) shops; and further along the trough the drilling, stacking, steel, armature and auxiliary shops. The blast furnace was connected to the dam by a bridge across which ore, coal and fluxes were delivered. Almost all the Ural mining plants of the 18th century had two blast furnaces; in later years the number of furnaces could increase. Pig iron, as a rule, was sent to the forge shop, where it was converted into wrought iron and worked under hammers. At large factories, the number of hammers reached 8-13. [ 93 ] [ 94 ] [ 95 ] [ 96 ] [ 97 ]
As a rule, the factory office, the manor house, the houses of the plant administration's employees, and the church were located on the square in front of the plant. Subsequently, as the factories expanded, such a layout created environmental stress in the factory settlements, which gradually grew into cities. Factory ponds, into which industrial waste was dumped, were at the same time a source of drinking water, which contributed to the spread of all kinds of illnesses. [ 95 ] [ 98 ] Plants located close to each other were eventually united by a single settlement: the Verkh-Neyvinsky and Nizhny-Verkhneyvinsky plants in Verkh-Neyvinsky , the Yekaterinburg and Verkh-Isetsky plants in Yekaterinburg, and others. [ 99 ]
The management of state-owned factories was organized on the model of military settlements. Mining chiefs, who received the rank of general, were appointed by the authorities. Each plant was provided with a military garrison, which was partly supplied with provisions by the plant. The work was led by mining officers and craftsmen, who were replaced on average every five years. In 1834, state-owned factories were legally equated with military organizations and their workers with soldiers. Private factories were managed by the factory owners under the supervision of the state. The presence of a single owner of factories in different regions contributed to the exchange of experience and technologies between enterprises. [ 100 ] [ 69 ]
In the literature, the term "mining district" came over time to be used more widely, meaning a historically established complex of enterprises with the lands and forests belonging to it, pits, mines, and the mining population living on its territory. [ 101 ] Since the beginning of the 20th century, the term "mining plant" has scarcely been used. [ note 8 ] [ 105 ]
At the turn of the 17th-18th centuries, the country's need for metal was exacerbated by the outbreak of wars for access to the Black and Baltic Seas . The Olonets and Kashiro-Tula plants in the central and northwestern parts of Russia had already depleted their forest and ore bases; they did not meet the growing demand for weapons-grade metal and could not produce high-quality metal because of harmful impurities in the ores, primarily sulfur and phosphorus . [ note 9 ] [ 107 ] [ 108 ] The same prerequisites contributed to the shift of priority from the smelting of non-ferrous and noble metals towards iron. After the defeat of the Russian troops at Narva on November 19, 1700, all the Russian artillery was left to the Swedes, which made the accelerated production of guns all the more urgent. To make up for these losses, Peter I gave the order to melt church bells into cannons and mortars; as a result, 300 cannons were cast in one year. [ 109 ]
In 1696, at the initiative of the head of the Siberian Prikaz, the Duma clerk A.A. Vinius , the ore found in the Verkhotursky district was sent for examination to the Moscow gunsmiths and the Tula blacksmith N. D. Antufiev (Demidov). The samples received a high appraisal, which played a decisive role in the government's decision-making. On May 10 and June 15, 1697, decrees of Peter I were issued on the construction of the first Ural blast furnace plants. The construction was supervised by the Siberian Prikaz headed by A. A. Vinius. [ 85 ] [ 110 ] [ 111 ] [ 112 ] The first craftsmen arrived in the Urals to build the Nevyansk and Kamensk factories in the spring of 1700. By 1717, out of 516 workers at the Nevyansk plant, 118 had come from central Russia, including 52 from Tula, and 66 from Moscow and the Moscow region. [ 113 ] The launch of the first two plants in 1701 showed good prospects for Ural metallurgy. In 1702, the Uktussky, Verkhne- and Nizhne-Alapaevsky plants were launched, supplying metal, including for the construction of buildings in St. Petersburg . [ 114 ] [ 115 ] [ 116 ]
On March 4, 1702, by decree of Peter I, the unfinished Nevyansk plant was transferred to the private ownership of N. D. Demidov. He proved to be a talented organizer and, with the support of the authorities, was able to increase production volumes significantly. Demidov easily obtained the assignment of additional peasants to his factories, as well as relief from taxes and from supervision by local administrations. [ 117 ] From 1716, the Demidovs were the first Russian exporters of iron to Western Europe . In total, the Demidovs built 55 metallurgical plants, including 40 in the Urals. By 1740, the Demidov factories produced about 64% of all Ural and 46% of all Russian iron. [ 118 ] [ 119 ] [ 120 ] [ 121 ] At the same time, the productivity of the Demidovs' factories was on average 70% higher than that of the state-owned ones. [ 122 ]
In April 1703, the first convoy with guns and iron made in the Urals (323 cannons, 12 mortars , 14 howitzers ) was sent from the Utkinskaya pier on the Chusovaya River. From the factories, the guns were hauled 176 versts by horse-drawn transport to the Chusovaya, then delivered by water to Moscow or St. Petersburg, wintering in Tver . The first convoy arrived in Moscow in 11 weeks and 6 days, on July 18, 1703. Tests of the first guns, which had been cast in a hurry, were unsuccessful: of the first two guns tested, one burst into 20 pieces owing to the poor quality of the cast iron. Later, in the course of mass testing, 102 of the 323 guns burst. After that, A. A. Vinius ordered the guns to be tested at the factories before shipment. [ 123 ] Later, owing to the unsatisfactory quality of the metal and high transportation costs, [ 124 ] the manufacture of cannons was moved to factories in the central part of Russia. By a decree of 19 January 1705, the casting of cannons at the Ural factories was terminated. [ 125 ]
In the first years of the 18th century, with the launch of the first state-owned and private factories, the production base of the mining districts and the management system of their enterprises began to take shape. Almost all of the first Ural factories were built by local peasants, who were then assigned to the factories. In 1700, the first assignment of more than 1.6 thousand peasants to the Nevyansk plant was carried out. In 1703, a further assignment was made to the same plant, by then owned by N. D. Demidov. By 1762, about 70% of state peasants were assigned to factories in the Middle Urals and the Kama Urals. The assigned peasants performed mainly auxiliary work at the factories: they prepared firewood for charcoal production and for heating houses, mined and roasted ore and limestone , transported goods, and erected dams. [ 126 ] [ 127 ] [ 128 ] [ 129 ] [ 130 ] On December 10, 1719, the privileges of miners were enshrined in law in the Berg Privilege, which allowed representatives of all classes to search for ores and build metallurgical plants. At the same time, manufacturers and artisans were exempted from state taxes and recruitment levies, and their houses were exempt from the billeting of troops. The law also guaranteed the inheritance of the ownership of factories, proclaimed industrial activity a matter of state importance, and protected manufacturers from interference in their affairs by local authorities. The same law established the Berg Collegium , which managed the entire mining and metallurgical industry through local administrations. The provisions of the Berg Privilege were extended to foreign nationals in 1720 and remained in force until the early 19th century. [ 131 ] [ 132 ] [ 133 ] [ 134 ] [ 135 ]
In the 1720s, V.N. Tatishchev and, later, de Gennin, who founded the Yekaterinburg state-owned plant in 1723, were sent to the Urals as leaders of the local mining administration. At the beginning of his work in the Urals, Tatishchev came into conflict with Demidov while trying to curb his power. Demidov complained to St. Petersburg of infringement, and Tatishchev was recalled. Later, de Gennin, who came to replace Tatishchev and completed the construction of the plant in 1722-1723, confirmed the abuses of the Demidovs in the organization of work at the private plants. [ 136 ] [ 137 ] [ 138 ] [ 139 ] In 1720, Tatishchev established the Office of Mining Affairs in Kungur ; in 1722 he transferred it to the Uktussky plant and renamed it the Siberian Mining Authority, and then the Siberian Higher Mining Authority. De Gennin transferred the Office to Yekaterinburg in 1723 and renamed the institution the Siberian Ober-Bergamt. Tatishchev's achievements include creating competition for the Demidovs by inviting other mining entrepreneurs to the Urals, and developing rules for managing mining plants and staffing standards. [ 140 ] [ 141 ] [ 142 ]
In the 1720s-1740s, the Yekaterinburg plant, which gave rise to Yekaterinburg, was the largest metallurgical plant in Europe. Its blast furnaces were more economical and more productive than the English and Swedish ones, then considered the best in the industry. While the specific consumption of charcoal per 100 kg of iron in Swedish furnaces ranged from 300 to 350 kg, in Yekaterinburg it was 150–170 kg, roughly half as much. [ 145 ] [ 146 ] [ 147 ] [ 148 ] [ 149 ] [ 150 ]
On January 18, 1721, a decree was issued that allowed factory owners, regardless of whether they held noble rank, to buy serfs . At the same time, villages purchased by a factory owner, together with their population, could only be sold together with the factory. These peasants, and the factories that used their labor, later became known as possessional. In 1744, norms for the purchase of peasants with factories were established: 100 peasants per blast furnace at ferrous-metallurgy factories, and 200 men for every thousand poods of copper at copper smelters. [ 115 ] [ 152 ] [ 153 ] [ 134 ] The assignment of peasants to factories led to unrest and riots, which were suppressed during the second half of the 18th century. This unfree labor nevertheless contributed to the intensive development of the metallurgical industry until the middle of the 19th century. [ 154 ]
In the first quarter of the 18th century, 20 [ note 10 ] blast furnaces were built in the Urals, and in 1725 they smelted about 0.6 million poods of cast iron. [ 155 ] During the same period, small businesses built several small metallurgical plants: Mazuevsky, Shuvakishsky, and Davydovsky; all of them existed for no more than 40 years. [ 156 ] After the end of the Northern War , due to a decrease in demand for ferrous metals, the construction of iron smelters was suspended and mainly copper smelters were built. From 1721 to 1725, 11 plants were built in the Urals, of which only Nizhny Tagil was a blast-furnace and iron-making plant; the rest were either copper-smelting ( Polevskoy and Pyskorsky) or combined copper-smelting and iron-making plants (Verkhne-Uktussky and Yekaterinburg). [ 157 ] In total, from 1701 to 1740, 24 state and 31 private metallurgical plants were built in the Urals, which determined the specialization of the region as an industrial metallurgical center. [ 158 ] [ 159 ] Private factories were characterized by higher profitability than state-owned ones. [ 160 ] Over the 25 years from 1725 to 1750, iron smelting in the Urals grew 2.5-fold: from 0.6 million to 1.5 million poods. [ 161 ]
In the 1730s, the construction of fortresses and factories began in the Southern Urals, on the lands of the Bashkirs. [ 162 ] [ 163 ] In 1734, Anna Ioannovna approved the project for the colonization of the Southern Urals submitted by the Chief Secretary of the Senate , I. K. Kirilov, and appointed him the Chief Commander of the Orenburg expedition. The tasks of the expedition included the construction of the fortress city of Orenburg and a line of defensive fortresses to prevent raids by the Bashkirs, the development of the natural resources of the region, and the opening of trade routes to Asia . In the autumn of 1736, 100 versts to the south-east of Ufa and 10 versts from the Tabynsky fortress, construction began on the Resurrection (Tabynsky) copper smelter, the first in the Southern Urals. On May 22, 1744, a decree of the Berg Collegium was issued which allowed the purchase of deposits, forests and land from the Bashkirs and other owners for the construction of mining plants. In the period from 1745 to 1755, 20 factories were built on the territory of Bashkiria ; by 1781, there were 38 factories in total. During the years of the Peasant War , 89 mining plants were damaged to varying degrees. With the beginning of the uprising, in the first half of October 1773, the private copper plants closest to Orenburg were seized: the Verkhotorsky, Voskresensky, Preobrazhensky and Kano-Nikolsky plants. From November to December, all 24 plants in the Southern Urals were seized. By the beginning of 1774, the uprising covered the Middle Urals; the number of captured factories reached 39 in January and 92 in February. Individual factories resumed work for short periods in 1774, despite the occupation. With the suppression of the uprising, the work of the factories began to recover: by the beginning of 1775, about two thirds of all the Ural factories were working, and by the end of 1775 the least damaged factories of the Southern Urals had begun to resume work. [ 164 ] [ 165 ] [ 166 ]
From the middle of the 18th century, state-owned Ural factories began to produce gold , and from 1819, platinum . Later, mining was permitted for all Russian subjects, which led to the rapid proliferation of gold mines in the Urals. [ 167 ] In the 1750s and 1760s, the construction of factories in the Urals continued intensively, thanks to the high profitability of production and the support of the authorities. In addition to the Demidovs and Stroganovs , the entrepreneurs Osokins, Tverdyshevs, I. S. Myasnikov, and M. M. Pokhodyashin, as well as officials and nobles such as P. I. Shuvalov , M. M. Golitsyn, and A. I. Glebov, began to build factories. [ 168 ] Only the Yekaterinburg and Kamensky plants remained under state administration; the rest were transferred to private management. Later, many private factories were returned to the treasury for debts (in 1764 the factories of Count Shuvalov, in 1770 Count Chernyshev's , in 1781 Count Vorontsov's ). [ 169 ] By the end of the 18th century, the largest producers in Russia were the Demidovs, Yakovlevs, Batashovs and Mosolovs, who produced about half of all iron in the country. [ 170 ]
In 1767, about 140 metallurgical plants operating in the Urals made the region the leader in world iron production and secured it a monopoly position within Russia in copper smelting. [ 171 ] By the end of the 18th century, the number of serf workers at the Ural factories reached 74.1 thousand, and the number of assigned peasants 212.7 thousand. In 1800, the Ural factories produced 80.1% of the cast iron, 88.3% of the iron, and 100% of the copper of the all-Russian production volume. Thanks to this, Russia took first place in the world in iron production and smelted from 20 to 27% of the world's copper. [ 172 ] [ 173 ] [ 174 ]
From the end of the 18th century to the beginning of the 19th century, problems with the supply of wood worsened at most of the Ural mining plants. The forests of the factory dachas (forest allotments) were cut down at distances of 5 to 25 versts. At the older factories the distances were even greater: 50-55 versts at the Kamensk plant and 40-70 versts at the Nevyansk plant. Decrees were issued prohibiting unauthorized logging. [ 175 ] [ 176 ]
The industrial revolution at the Ural mining plants unfolded in three major stages.
The replacement of wooden bellows with cylindrical blowers in the early 19th century reduced charcoal consumption by up to 20% and doubled the productivity of blast furnaces. Further development of blast-furnace technology involved increasing the height of the furnaces, optimizing their profile, and increasing the power of the blower motors. Cupola furnaces appeared at the factories, and metal casting became a separate branch of production. In 1808, the serf S. I. Badaev invented a method for producing cast steel, later called Badaevskaya, for which he received his freedom; in 1811 he was sent to the Votkinsk Plant to organize production. From 1828, P. P. Anosov conducted experiments on the production of cast steel at the Zlatoust plant. [ 178 ] [ 179 ]
Foreign engineers played a significant role in the development of existing plants and the construction of new ones in the Urals. In the 18th century, up to 600 German metallurgists worked at various times at the factories of the Yekaterinburg Department. At the beginning of the 19th century, 140 craftsmen from Europe were invited to the Izhevsk Arms Factory , and 115 German gunsmiths and steelworkers to the Zlatoust Arms Factory. After their contracts ended, many of the foreigners stayed on at the factories as hired workers. [ note 11 ] [ 184 ] [ 185 ]
Administrative changes at the beginning of the 19th century were associated with the approval in 1806 of the Mining Charter, compiled by A. F. Deryabin and later included in part in the Code of Laws of 1832 , [ note 12 ] and the formation of the Mining Department, which was transformed in 1811 into the Department of Mining and Salt Affairs. [ 187 ]
In the period from 1801 to 1860, 37 new plants were built in the Urals, including 3 copper smelters. Auxiliary plants, which used the outflow water of the main factories and in effect served as their rolling shops, were built next to the existing ones. During the same period, 14 Ural copper smelters closed after the mints ceased copper coinage and switched to paper money. To stabilize the situation, the government in 1834 abolished all taxes on the factories except the tithe. Even so, the level of copper production of the beginning of the century was reached again only in 1826. From the 1850s, with the appearance on the market of cheap English, and later Chilean , North American , and Australian copper, the metallurgical industry of the Southern Urals entered a long-term crisis. By 1859 the price of Russian copper had fallen 50% from its 1854 level. [ 188 ] [ 133 ] [ 189 ]
Steam engines took root slowly in the Urals. The first ones appeared at the Ural factories in the last years of the 18th century. From 1800 to the 1810s the machines often failed and consumed large amounts of firewood, which slowed their spread. By the 1830s the machines had become more reliable, and machine-building enterprises had emerged that designed, assembled, and repaired steam engines. In 1834, the Cherepanovs built the first Russian steam locomotive and a railway 853.4 m long, designed to deliver ore from the Vysokogorsky mine to the Vysky plant. By 1840, the number of steam engines at the Ural factories reached 73. Also in the 1840s, hydraulic turbines became widespread in the Urals, replacing low-performance water wheels. [ 190 ]
In the 1840s, the introduction of the Kontuaz (Comtois) method of iron production began at the Ural factories. The Yuryuzan-Ivanovsky plant in 1840 and the Simsky plant in 1842 were the first to switch to it. Kontuaz forges were subsequently built at state-owned factories and later at private ones; by 1861, 364 Kontuaz forges operated at 37 factories in the Urals. In the 1860s and 1870s, when this process was already being supplanted by steelmaking, Lancashire forges appeared in the Urals. The more productive puddling process was introduced in the Urals on a trial basis at the Pozhevsky plant in 1817, at the Nizhniy Tagil plant from 1825 to 1830, and in September 1837 the Votkinsky plant switched completely to puddling. [ 192 ] By 1861, 58 factories operated 201 puddling furnaces, 34 gas puddling furnaces, 153 welding furnaces, and 23 gas welding furnaces. In 1857, before steel-making processes came into widespread use, P. M. Obukhov invented a cheap method of steel production at the Zlatoust plant, later named after him. [ 193 ] [ 194 ] [ 195 ]
The height of Ural blast furnaces in the 19th century reached 18 meters, significantly exceeding that of European furnaces. This advantage made it possible to run the blast-furnace process on cold blast at relatively low cost, which delayed the introduction of hot blast in the Urals, even though successful experiments with it had been carried out as early as the 1830s and 1840s at the Kushvinsky, Lysvensky, Verkh-Isetsky, and other plants. [ 196 ] As a result of the Industrial Revolution in England , by the second half of the 19th century the average productivity of blast furnaces at the Ural factories already lagged behind that of English furnaces: in 1800, one blast furnace in the Urals produced an average of 91.6 thousand poods of iron, and in 1860, 137 thousand poods, while British furnaces produced 65.5 thousand and 426 thousand poods, respectively. [ 197 ]
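As a rough aside for readers unfamiliar with the old Russian unit (the equivalence 1 pood ≈ 16.38 kg is a standard conversion, not stated in the source), these productivity figures translate into modern units approximately as:

\[
91.6\times10^{3}\ \text{poods}\times 16.38\ \tfrac{\text{kg}}{\text{pood}}\approx 1.5\ \text{thousand tonnes},\qquad
426\times10^{3}\ \text{poods}\times 16.38\ \tfrac{\text{kg}}{\text{pood}}\approx 7.0\ \text{thousand tonnes},
\]

so an average British furnace of 1860 out-produced its average Ural counterpart of the same year by roughly a factor of three.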
From the middle of the 19th century, rolling production developed, and steel and iron casting continued to expand. Castings from the Kasli plant became world-famous. Rail rolling was mastered at the large factories. [ 196 ] In 1859, 12.2 million poods of iron were smelted at the factories of the Urals, about two thirds of all the iron smelted in Russia. [ 198 ]
During the Patriotic War of 1812 , many Ural factories were converted to weapons production. The Kamensk plant produced 87,274 poods of artillery pieces during 1810-1813. [ 109 ] During the war years, 47 private factories switched to manufacturing shells , some of which had never produced such products before. Production plans were often missed, and the cast guns failed testing owing to haste and untried technologies. The victory in the Patriotic War of 1812 kept the authorities from recognizing these problems. [ 199 ] At the same time, the war substantially reduced domestic demand for metal, which led to inflation and long stoppages at the factories. [ 200 ]
The casting of artillery pieces resumed in 1834. Before the Crimean War, from 1834 to 1852, the Ural factories cast 1,542 guns of the 3,250 ordered; orders for the production of shells were fulfilled, on average, at about 23-25%. During the war itself, the supply of 60-pounder guns was disrupted by failures in testing. During the defense of Sevastopol, 900 Ural guns proved unsuitable. [ 199 ]
The development of the Ural copper-smelting industry in the 19th century was associated with an increase in the height of furnaces and the use of hot blast and coal . Steam engines began to be used to lift ore to the surface and pump water out of the mines. Copper production shifted to the Northern and Southern Urals. In the second half of the 19th century, copper smelting began to decline due to the depletion of deposits and reduced demand from the mints. [ 201 ]
From the 1820s, gold and platinum mining developed rapidly in the Urals. In 1823 there were 309 mines in the region, which yielded 105 poods of gold. In 1842, [ note 13 ] the largest Ural gold nugget, weighing 36.04 kg, was found at the Tsarevo-Alexandrovsky mine. Platinum was mined at the mines of the Demidovs' Nizhniy Tagil district, at the Isov mines of the Verkh-Isetsk district, and at the Krestovozdvizhensk mines. In the 19th century, the Urals produced 93–95% of the world's platinum. [ 203 ]
In the 18th century and the first half of the 19th century, the use of adolescent and child labor was widespread in the mines and factories of the Urals, sanctioned by a number of legislative acts and reinforced in the Mining Statute of 1842. In the 1850s, children and adolescents accounted for 30 to 50% of all workers at the factories, and for 40 to 85% at the mines. At the beginning of the 19th century, women were employed at 17% of the factories. By the 1850s, female labor was more widely used, and women made up about 10% of the workforce. [ 204 ]
By the time of the abolition of serfdom , Ural metallurgy was in a deep crisis, deepened by a sharp rise in grain prices in 1857 caused by crop failures, which were especially severe in the Northern Urals. [ 205 ] Of the 41 mining districts, 13 had a total debt of 8.1 million rubles, which grew to 12.4 million rubles by the end of the 1860s. The transition to hired labor led to a sharp reduction in the number of workers at the factories: at the seven Goroblagodatsky factories the workforce fell from 8,663 in 1860 to 7,030 in 1861, 4,671 in 1862, 3,097 in 1863, and 2,839 in 1864. [ 4 ] During this period, there were 154 metallurgical plants of various specializations and gold-mining operations in the Urals, including 24 state-owned, 78 possessory, and 52 manorial ones. Of these, 115 enterprises were located within the Perm Governorate , 26 in the Orenburg Governorate , and 13 in the Vyatka Governorate . [ 206 ]
In 1824, to support the mining industrialists, the government established a State Loan Bank. According to data from 1849, the Kanonikolsky, Beloretsky, Voskresensky, Troitsky, Blagoveshchensky, and Yuryuzan-Ivanovsky mining districts were mortgaged to the State Loan Bank for a total of 1,106,995 silver rubles. In 1851, the Beloretsk Mining District was re-mortgaged to the bank, and in 1852 the Preobrazhensky Plant was mortgaged to private investors for 300 thousand rubles, with the obligation to repay the debt to the bank. [ 207 ] Overall, the pre-reform level of production at the Ural factories was reached again only in 1870. The government supported the mining companies with soft loans secured by metals and with orders for the construction of railways. The industry came under the strong influence of commercial banks and wealthy entrepreneurs who bought up entire mining districts. In the 1880s, mining plants began to be incorporated . [ 208 ]
In 1870, at the invitation of the Russian government, the Austrian metallurgist P. von Tunner visited an industrial exhibition in St. Petersburg and inspected the Ural metallurgical plants. [ 209 ] As a result of this trip, in 1871, he published a book [ 210 ] with a description of the factories, in which he noted the technical and organizational backwardness of the metallurgy of the Urals, and the high cost of production. Von Tunner's book eventually became the first systematic description of the Ural mining plants. [ 209 ]
The lack of customs regulation of foreign metal imports had a negative impact on the development of Ural metallurgy. In the second half of the 19th century, European metallurgical companies actively united in syndicates to regulate market prices and control production volumes. Their surplus output was, as a rule, exported to the Russian markets and sold cheaply, leading to overstocked markets and falling metal prices. The amount of unsold metal at the Nizhny Novgorod Fair was 0.9 million poods in 1883, 1.16 million poods in 1884, 1.84 million poods in 1885, and 1.94 million poods in 1886. [ 211 ]
In the 1880s and 1890s, 16 metallurgical plants were built in the Urals, including the large Chusovsky (1883) and Nadezhdinsky (1896) plants. The old factories underwent significant modernization, including the introduction of mechanical processing facilities and the construction of open-hearth shops, power plants, and air heaters. The introduction of hot blast was promoted in the 1860s and 1870s at the factories of the Urals, where Rashet blast furnaces equipped with gas-trapping devices for heating the blast air were used. [ 212 ] [ 213 ] Despite these successes, in 1896 the Urals lost primacy in the share of metal produced to the enterprises of Southern Russia. [ 214 ] [ 215 ] In 1900, the Ural factories smelted 50.1 million poods of iron. The first open-hearth furnaces in the Urals were built in 1871 at the Votkinsky plant and in 1875 at the Perm Cannon Factory ; by 1900 there were 42 such furnaces in total. Bessemer conversion in the Urals was first introduced at the Nizhnesaldinsky and Katav-Ivanovsky plants. In 1900, 48.9% of Ural finished ferrous metal was produced by the open-hearth and Bessemer methods. [ 216 ]
By the end of the 19th century, with the expansion of factories in the Urals, the problems with the depletion of forest resources and environmental pollution intensified. [ 217 ]
In 1899, on behalf of S. Yu. Witte, an expedition of scientists headed by D. I. Mendeleev was sent to the Urals, with the main task of determining the causes of stagnation in the metallurgical industry. In his report, Mendeleev named as the main causes of the crisis in Ural metallurgy the lack of roads, the persistence of serf-era relations between factory owners and peasants, the use of outdated equipment and technologies, the monopoly of large entrepreneurs on ore and forests, and the arbitrariness of local authorities. As a result of the expedition, a plan was drawn up for the development of Ural metallurgy, with iron smelting to rise to 300 million poods per year, but it did not win the support of the authorities. [ 218 ] [ 219 ] [ 220 ]
At the beginning of the 20th century, all of Russian industry was in a deep crisis, the consequences of which affected the factories of the Urals until 1909. In that year, the Ural iron and steel plants smelted 34.7 million poods of iron, 30.9% less than in 1900. During the crisis years, the share of finished iron increased, new markets were sought, and syndicates and associations were created to fight the competition of the factories of Southern Russia. The crisis affected the copper-smelting industry to a lesser extent, thanks to continued demand and an increase in customs duties on copper imports. In the first decade of the 20th century, small, technically backward factories with worn-out equipment, which had become unprofitable, were closed: of the 111 metallurgical plants operating in the Urals in 1900, 35 had been shut down by 1913. Under intense competition, the factories were forced to modernize: blast furnaces with lightweight casings were erected, hot blast was introduced everywhere, steam engines and ore preparation for smelting were adopted, puddling furnaces were replaced by open-hearth furnaces, more powerful rolling mills were built, and the factories were electrified. In the mining districts, capacities were optimized and reorganized: final processing was concentrated, as a rule, at the main plant of a district, while the remaining factories supplied it with iron. During the Russo-Japanese War , the Izhevsk, Perm, and Zlatoust arms factories sharply increased the production of guns, rifles, and shells. [ 221 ]
In 1908, construction began on the Porogi electrometallurgical plant for the production of ferroalloys, together with one of the first hydroelectric power plants in Russia, built to supply the plant with electricity. Until 1931, the plant was the only producer of ferroalloys in the country. [ 222 ] [ 223 ]
In 1910, an industrial boom began that continued until the First World War . From 1910 to 1913, iron production increased to 55.3 million poods (by 29.9%) and finished metal products to 40.8 million poods (by 9.6%), but the share of the Ural factories in all-Russian iron smelting fell to 21.6%. Commercial banks invested actively in the development of Ural metallurgy; the most important roles were played by the Azov-Don Commercial Bank , the Saint Petersburg International Commercial Bank , and the Russo-Asiatic Bank . [ 224 ] The volume of investments at the turn of the 20th century was estimated at 10.8 million rubles. Modernization and reconstruction of the mining districts continued. In 1911, a new blast furnace with a volume of 150 m³ and an open-hearth furnace with a capacity of 25 tons were launched at the Nizhniy Tagil plant, and two Bessemer converters and two new blast furnaces were installed at the Nizhnesaldinsky plant. The Votkinsk plant was rebuilt for the production of steam locomotives and river vessels, and the arms factories were reconstructed and switched to civilian products. In the pre-war years, the concentration of production at large factories also increased: in 1914, 16 of the 49 Ural plants produced more than 1 million poods of iron per year each and accounted for 65% of total output, including 5 factories with a capacity of more than 2 million poods per year. The Nadezhdinsky, Nizhnesaldinsky, Zlatoustovsky, Chusovskoy, and Votkinsky plants produced 36.1% of the total volume. [ 225 ]
At the beginning of the 20th century, the copper smelters of the Urals mastered pyrite smelting, which made it possible to process poor sulfur ores. In the pre-war years, the Nizhnekyshtymsky Copper Electrolytic Plant and the Karabashsky and Kalatinsky plants were launched. Through the syndicates they formed, British companies controlled 65.5% of the copper mined in the Urals. The gold-platinum mining industry underwent mechanization. The first Dutch dredges appeared in 1900 at the Neozhidany Mine on the Is River; by 1913 the number of dredges in the Urals had reached 50, and they accounted for 20% of gold and 50% of platinum output. Up to 1913, average annual production in the Urals was 550-650 poods of gold and 300-350 poods of platinum. [ 226 ]
The modernization of private and state-owned factories and the construction of railways, begun in the 1910s, had not been completed by the start of the war. Expecting the war to be brief, the government did not involve the private factories of the Urals in the production of guns and shells until the summer of 1915; as a result, Ural industry was late in joining the effort to supply the army with weapons and equipment. In 1914-1916, the state-owned factories maintained pre-war iron production but completely stopped making roofing iron in favor of military products, while the output of high-grade iron and projectile steel almost doubled. A sharp increase in production was hindered by shortages of fuel, labor, and means of transporting goods. In 1915-1916, for lack of fuel, 22 blast furnaces in the Urals were stopped and 11 worked at reduced capacity. The situation was aggravated by the disorganization of railway transport, which prioritized military traffic, and by the mobilization of qualified personnel. In the summer of 1915, a commission headed by General A. A. Manikovsky was sent to the Urals to negotiate with private factory owners, to explore the possibility of their participating in the production of military products, and to coordinate their actions. On November 7, 1915, the Ural Factory Meeting was established under the leadership of the head of the Ural Mining Department, P. I. Yegorov. It soon became obvious that this administrative apparatus could not fulfill the tasks assigned to it. The difficult situation at the front in 1915 and the acute shortage of weapons forced the government to accept the inflated demands of the entrepreneurs: as a result of negotiations, military orders were placed with private factory owners at increased prices, with the total cost of the orders estimated at 200 million rubles. [ 227 ] [ 228 ]
The position of the workers worsened during the war years. The working day increased to 12 hours; women and children worked on an equal footing with men but were paid half as much. The organization of production was unsatisfactory: factories received orders that they could not fulfill for lack of the necessary equipment. After the defeats of the Russian army in 1915-1916, 87% of the Ural factories switched to military production. With the support of the authorities, commercial companies with foreign capital participation developed. [ 230 ] In 1915-1918, large machine-building plants were evacuated to the Urals from the front-line territories of the Baltics and from Petrograd, and the staff of the arms factories was replenished with evacuated specialists. [ 231 ]
After the February Revolution , power passed into the hands of provincial commissars appointed by the Provisional Government . The Ural mining industrialists supported the Provisional Government and its bodies. On March 4, 1917, the Council of Congresses of Mining Industrialists asked the government to appoint a commissar to oversee the work of the Ural factories. The businessman V. I. Europeus was appointed to the post and headed the newly created Provisional Committee of the Ural Mining District. At some factories ( Nyazepetrovsky , Sosvensky, Bilimbaevsky , Zlatoustovsky, Nizhne-Ufaleysky), power was partially or completely seized by the Soviets of Workers' Deputies even before the October Revolution . The state of production continued to deteriorate: there was a critical shortage of fuel, railroad transport became practically unmanageable, enterprises worked with interruptions, and equipment was not repaired or updated in good time. The smelting of pig iron and steel declined rapidly, and the number of industrial accidents grew. The commission sent by the Provisional Government in 1917 to restore the working capacity of the Ural enterprises failed in its task. [ 232 ]
After the October Revolution, in November 1917, the Ural Factory Meeting was reorganized under the leadership of the Bolsheviks . Its powers were extended by decree of the Supreme Economic Council of the Republic to the Vyatka, Orenburg, Perm, and Ufa provinces and a number of adjacent districts. The Ural Mining Board and the Yekaterinburg Bureau of the Council of the Congress of Mining Industrialists of the Urals were liquidated. In November and December 1917, the boards of the Ural joint-stock companies suspended the transfer of money to factories where Soviet control had been introduced, which led to delays in the payment of wages and to accumulating debts for supplies of raw materials and food. Pockets of famine and epidemics appeared, and the situation of prisoner-of-war workers was especially difficult. In December 1917, the Council of People's Commissars began the nationalization of the mining districts of the Urals, earlier than in other branches of the country's industry. [ 233 ] By July 1918, more than 4,340 enterprises (25 of the 34 mining districts of the Urals) had been nationalized. In 1918, in addition to the factory committees established earlier, business councils were set up to manage the factories, their activities coordinated by the regional board of the nationalized enterprises of the Urals. These arrangements produced a degree of dual power in the management of the industry's enterprises, and from March 1918 the factory committees were merged with the trade unions . In 1918, systematic training of engineers and workers for the metallurgical industry began in the educational institutions of the Urals. [ 234 ]
Due to supply disruptions and wage delays, anti-Soviet demonstrations took place at the Ural factories in the summer and autumn of 1918. By July 1918, 51 of the 89 Ural blast furnaces and 59 of the 88 open-hearth furnaces were in operation. In August, Soviet power was overthrown in Izhevsk and Votkinsk . At the same time, the Regional Government of the Urals was formed in Yekaterinburg; the Ural Industrial Committee was created to manage industry, and the Main Directorate of Mining Affairs of the Urals was established to manage the mining industry, transformed in December into the Ural Mining Administration. On August 19, the Provisional Regional Government of the Urals announced in a declaration its intention to return the factories to their previous owners. By December 10, 1918, however, only 36 mining enterprises and 9 small and medium-sized coal enterprises in the Urals and Siberia had been denationalized. All these changes had practically no effect on the real state of Ural industry, and the plans of the Kolchak government to subsidize the Ural factories likewise came to nothing. The situation was aggravated by the complete dependence of the inhabitants of the factory settlements on the operation of the enterprises and by the political struggles of the provisional governments. In late 1918 and early 1919, the enterprises of the Verkh-Isetsky, Revdinsky, Shaitansky, Zlatoustovsky, and a number of other districts were stopped. [ 235 ] [ 236 ]
After the restoration of Soviet power in the Urals in mid-1919, the management of the factories was centralized under the auspices of the Supreme Council of the National Economy. Later, the Ural Industrial Bureau of the Supreme Economic Council was created. The debts of the enterprises were canceled, a free supply of raw materials and materials was established, finished products were also handed over without payment according to centralized orders. By the end of 1919, 14 blast furnaces, 16 open-hearth furnaces, and 49 rolling mills were operating at the Ural factories. To manage the factories, five regional departments were created: Vysokogorskoe (18 enterprises), Bogoslovskoe (5 enterprises), Yekaterinburg (31 enterprises), Permskoe (17 enterprises), and Yuzhno-Uralskoe (20 enterprises). In 1920, the re-evacuation of workers and specialists from Siberia began, as well as the return of the factory equipment taken out by the White Guards . On the whole, in 1919-1920, only 20% of the metallurgical plants of the Urals operated, and the volume of production was about 10% of the pre-war level. Of the 7 blast furnaces of the Nadezhdinsky Plant, the largest at that time, only one operated; the plants of the Goroblagodatsky Mining District were completely stopped. In total, in the Urals in December 1920, only 9 blast furnaces, 10 open-hearth furnaces, and about a dozen rail, pipe-rolling, and sheet mills were operating, which were completely shut down by August 1921. During the years of the Civil War , the equipment of enterprises was significantly damaged. The smelting of iron in 1921 amounted to 69 thousand tons, which was 7.5% of the pre-war level. [ 237 ] [ 238 ]
With the end of the war and the adoption of the New Economic Policy (NEP) in March 1921, the restoration of Ural industry began. Uralplan was created, under whose auspices a program for the integrated development of the region was worked out. Most enterprises switched to a self-supporting (cost-accounting) scheme, which led to the emergence of industrial trusts uniting factories along industry lines. Five metallurgical trusts were formed on a geographical basis, as well as the separate trusts "Uralzoloto" and "Uralmed" and trusts for the extraction of coal. In 1925, Uralplan developed a "Three-Year Program for the Development of the Metallurgical Industry in the Urals," followed by a development plan for the Urals for 1925-1930, which included, among other things, the construction of the Magnitogorsk Metallurgical Complex . Concession contracts for the smelting of metals and the extraction of minerals operated with varying degrees of success; in 1927, 12 companies held concession contracts in the Urals. Subsequently the trusts were broken up into smaller units, with separate iron-ore trusts split off; in total, on October 1, 1925, there were 31 trusts in the Urals. After the adoption of the first five-year plan , the trust system was abolished in 1929. [ 239 ]
In the 1920s-1930s, the concentration of production and the specialization of factories, begun at the start of the 20th century, continued in Ural metallurgy. The Nadezhdinsky Plant concentrated the rolling of all Ural rails, the Nizhny Salda Plant switched to shaped rolled products, pipe production was concentrated at the Pervouralsk Plant , and the Verkh-Isetsky Plant switched to the production of transformer steel . Machine-building and mechanical enterprises were separated from the metallurgical ones. Small mines were actively closed, and ore mining was concentrated at the large deposits of the Bakalsky, Tagilo-Kushvinsky, Nadezhdinsky, and Alapaevsky districts. Geological exploration began in the Urals in 1920, and by 1933 the explored reserves of iron ore amounted to about 2 billion tons, including 478 million tons at Magnitnaya Mountain . An acute shortage of fuel resources forced the metallurgists to switch to mineral fuel. The first successful blast-furnace smelting with Kuznetsk coke in the Urals took place on June 13, 1924 at the Nizhnesaldinsky Plant; the Kushvinsky, Nizhnetagilsky, and other plants later switched to coke as well. By 1926, 37% of Ural pig iron was smelted using coke; 32 blast furnaces and 47 open-hearth furnaces were in operation (in 1913, 61 and 75, respectively), and their productivity had increased 1.5 and 1.7 times, respectively, relative to the 1913 level. [ 240 ]
The recovery of the copper and gold-platinum industries was much slower because of the greater damage they had suffered during the Civil War. In 1921-1922, the extraction of copper ore in the Urals amounted to only 2.2% of the 1913 level, gold to 1.9%, and platinum to 4.3%. By 1928, copper-ore output had reached 585.4 thousand tons (88.7% of the 1913 level), and 15 copper mines had resumed operation. [ 241 ]
At the end of the 1920s, Soviet design institutes, with the involvement of foreign companies, began designing the giants of Ural metallurgy and mechanical engineering: the Magnitogorsk, Chelyabinsk, and Novotagilsky metallurgical plants, the Ural Heavy Machinery Plant , Uralvagonzavod , and the Pyshminsky Copper-Electrolyte Plant . On May 15, 1930, the Central Committee of the CPSU (b) issued a resolution "On the work of Uralmet," which emphasized the need to create a coal and metallurgical center in the east of the USSR on the basis of the coal and ore deposits of the Urals and Siberia. Investment in the construction of new plants and the reconstruction of old ones increased dramatically: 52.6 million rubles were disbursed in 1925-1926, and 1,447.7 million rubles in 1932. The management of the metallurgical industry was also centralized. In 1931, the Main Directorate of the Metallurgical Industry of the VSNH was liquidated and the main committees Glavchermet, Glavspetsstal, Glavmetiz, and Glavtrubostal were created. Later, in 1939, the People's Commissariats of Ferrous and Non-Ferrous Metallurgy of the USSR were established. [ 242 ]
During the 1st and 2nd five-year plans , mining, ore dressing, and ore preparation for smelting developed intensively. In non-ferrous metallurgy, flotation and the smelting of concentrates in water-jacket and reverberatory furnaces were successfully applied. By 1934, 62% of all ore mined in the Urals was being enriched, and by the beginning of the 2nd five-year plan drilling at the mines was fully mechanized. Iron ore production by 1937 reached 8.7 million tons (31% of production in the USSR), and copper ore production by 1935 had reached 2.96 million tons. The conversion of blast furnaces to mineral fuel continued: in 1940, 86.8% of Ural pig iron was smelted using coke, and only 8 furnaces still worked on charcoal, producing special and high-quality cast iron. During the same period, non-ferrous metallurgy plants were built: the Krasnouralsky and Sredneuralsky copper smelters, the Pyshminsky Copper Electrolytic, Ural Aluminum , Chelyabinsk Zinc , Ufaleysky, Rezhsky, and Yuzhno-Uralsky Nickel , and the Solikamsk and Berezniki Magnesium plants. Most of the equipment for the new plants was purchased abroad: 600 million rubles were spent on imported equipment in 1931, 270 million rubles in 1932, and 60 million rubles in 1933. [ 243 ]
In 1933 and 1937, the People's Commissar of Heavy Industry of the USSR , G. K. Ordzhonikidze, issued orders for the development of the gold-platinum industry. The measures taken made it possible in 1936 to extract a record 12.8 tons of gold in the Urals (156.3% of the 1913 level) and 4.8 tons of platinum (97.8% of the 1913 level). [ 244 ]
By the end of 1941, the Germans had occupied most of the industrial territory of the USSR, an area containing 59 blast furnaces, 126 open-hearth furnaces, 13 electric arc furnaces , 16 converters, and 105 rolling mills, which had produced about 66% of Soviet pig iron, more than 50% of its steel, and 60% of its aluminum. During 1941-1942, the equipment and personnel of 832 large factories caught in the front-line zone were evacuated to the Urals. At the Novotagilsk Plant , an armor-plate mill transferred from the Kirov Plant was launched. At the Sinarsky Plant , a thin-walled pipe shop was set up with the equipment of the Dnepropetrovsk Pipe Plant. A medium-sheet shop was built at the Magnitogorsk plant with equipment from Zaporizhstal , and an armor-plate mill was evacuated there from the Mariupol Plant . The Urals became the country's main supplier of metal. The production of civilian goods was minimized, and all metallurgical plants switched to the production of weapons. To increase the output of the alloy steels required for military equipment, ferroalloys were often produced in units not intended for the purpose, such as blast furnaces and open-hearth furnaces. [ 245 ]
During the war years, the construction of metallurgical enterprises continued. The Magnitogorsk and Nizhnetagilsky combines, the Zlatoust, Pervouralsky, and Beloretsk Iron and Steel Works , the Chusovsky Metallurgical, Magnitogorsk Hardware , and Chelyabinsk Ferroalloy Plants were declared shock construction sites . In total, during the war years, 10 blast furnaces, 32 open-hearth furnaces, 16 electric furnaces, 16 ferroalloy furnaces, 2 Bessemer converters, 12 rolling and 6 pipe-rolling mills, 11 coke batteries , and more than 100 shafts and coal mines were built and launched in the Urals. The Chelyabinsk and Chebarkul Metallurgical, Chelyabinsk Pipe Rolling , Magnitogorsk Calibration, Berezniki Magnesium, Bogoslovsky Aluminum , and Miass Machine-Building Plants were also built. [ 246 ]
Wartime conditions demanded a dramatic increase in the volume of mined ore. Priority was given to the rich and accessible deposits of the Magnitnaya and Vysokaya mountains, which in 1943 accounted for 81.1% of all Ural iron ore; intensive extraction led to their rapid depletion. To provide manganese quickly, the Polunochnoye and Marsyatskoye deposits in the north of the modern Sverdlovsk Oblast were developed. At the Magnitogorsk Combine, the smelting of armor steel in open-hearth furnaces was mastered for the first time and then extended to other plants. During the war years, the Izhevsky Metallurgical Plant mastered the smelting of 19 new grades of steel and, for the first time, applied the stamping of breeches and the upsetting of barrels on horizontal forging machines. At the Pervouralsk Novotrubny plant, reinforced with equipment evacuated from Ukrainian factories, 5 new shops were built during the war and the production of 129 types of pipes was mastered. At Uralvagonzavod , Uralmash , and the Chelyabinsk Tractor Plant , tank production was launched in the shortest possible time. In Sverdlovsk and Ust-Katav , the production of artillery pieces and shells was organized on the basis of evacuated equipment, supplementing the capacity of the Motovilikhinsky, Zlatoust, and Izhevsky arms factories. [ 247 ]
Thanks to the expansion of the Ural Aluminum Plant, aluminum production increased during the war years from 13.3 to 71.5 thousand tons; in 1942, UAZ produced 100% of the aluminum in the USSR. About 80% of all shell and cartridge cases during the war were made of copper smelted by the Pyshminsky plant. The South Ural Nickel Plant significantly increased the production of nickel and cobalt , and the Chelyabinsk Zinc Plant provided 75% of zinc supplies by the end of the war. At the Solikamsk Magnesium Plant, the design capacity was exceeded 4.5-fold thanks to the addition of evacuated equipment. On July 22, 1943, the Bereznikovsky Plant, completed in a short time owing to a simplified design, produced its first magnesium. Evacuated equipment was also distributed to Revda , Kamensk-Uralsky , Verkhnyaya Salda , and Orsk , where plants for the processing of non-ferrous metals and the production of aluminum and magnesium alloys were created. In 1942, the Kirovgrad Hard Alloys Plant was commissioned, producing hard-alloy armor-piercing cores for shells and cartridges. In March 1942, the Kamensk-Uralsky Foundry was launched, which throughout the war was the only enterprise producing aircraft wheels. [ 248 ]
During the Great Patriotic War, the scientific potential of the Urals was strengthened by evacuated institutes, and the Academy of Sciences of the USSR was relocated to Sverdlovsk. Academicians I. P. Bardin and M. A. Pavlov made major contributions to the development of Ural metallurgy during the war years. Geological research in the Urals was led by A. N. Zavaritsky , D. V. Nalivkin , and V. I. Luchitsky . Academician L. D. Shevyakov made a significant contribution to the development of the Ural coal industry. V. V. Wolf developed and introduced a new method of processing Ural bauxite , N. S. Siunov invented a transformer to improve welding performance, A. E. Malakhov discovered new cobalt deposits, and P. S. Mamykin worked on new refractory materials. [ 249 ]
In all, during the war years the Urals produced up to 90% of the country's iron ore, 70% of its manganese, and 100% of its aluminum, nickel, chromium, and platinum. Pig iron production increased by 88.4%, steel by 65.5%, rolled metal by 54.9%, blister copper by 59.9%, electrolytic copper by 94.8%, nickel by 186.5%, aluminum by 554.1%, and cobalt by 1,782.1%. The output of defense equipment grew sixfold. In total, the Urals produced about 40% of all the country's military products: 70% of all tanks (including 60% of medium and 100% of heavy tanks), 50% of artillery pieces, and 50% of ammunition. [ 250 ]
After the war, the engineers evacuated from the western regions returned home, leaving a shortage of engineering and technical personnel at Ural enterprises. The Urals also lacked funding for the reconversion of its factories, since the bulk of state funds was directed to restoring the areas liberated from occupation. At most factories in the region the equipment required repair and renewal, while the equipment arriving in the Urals as reparations from Germany and the other aggressor countries was outdated and worn out. [ 251 ] [ 252 ]
The restructuring of Ural metallurgy to a peacetime product range was completed in 1946. The replacement and reconstruction of technological lines was often accompanied by a decline in product quality owing to untrained personnel and organizational problems. From 1948, production volumes rose steadily. Construction of the Orsko-Khalilovsky Metallurgical Plant continued, and the capacities of the Magnitogorsk, Novotagilsky, Chelyabinsk, Chusovsky, and Lysvensky plants grew. The development of jet aviation , the nuclear industry , rocket engineering , and cosmonautics in the post-war period created demand for high-alloy steels and non-ferrous metal products, and requirements for metal quality also rose sharply. [ 253 ] [ 252 ]
Technical progress in ferrous metallurgy in the post-war period proceeded along several main directions.
In April 1959, the Magnitogorsk Iron and Steel Works began heating open-hearth furnaces with associated gas . By the end of the 1960s, more than 80% of steel was smelted in furnaces using natural gas. From 1956, oxygen enrichment came into use at the Nizhny Tagil Metallurgical Plant and later at all Ural plants, making it possible to raise the productivity of open-hearth furnaces by 15-25% and to cut specific fuel consumption by 15-20%. In the 1960s, more than 60% of open-hearth steel and 72% of electric steel were smelted using oxygen. [ 254 ]
In the copper-smelting industry, the mechanization of cleaning tuyeres and loading furnaces, and automation of units were introduced. These measures made it possible to almost double the production of copper at the Krasnouralsk and Kirovgrad copper-smelting plants. Due to the introduction of the roasting of copper charge and zinc concentrates in a fluidized bed at the Chelyabinsk Zinc Plant, zinc production increased, and the integrated use of raw materials was improved. At the Ufaleisk Nickel Plant, sulfation roasting of nickel matte was introduced; at the Karabash Plant, a system for automation of the thermal regime of reverberatory furnaces was introduced, which made it possible to increase the production of nickel and copper. At the Ural aluminum smelter, a continuous bauxite leaching process was developed, and two-tier thickeners were installed, which increased the production of alumina . [ 254 ]
The restoration and development of metallurgy in the Urals in the post-war period was stimulated by a significant increase in capital investment. In 1961-1970, of the 2,457 million rubles of capital investment in metallurgy, 2,074 million rubles (84.4%) went to five enterprises: the Magnitogorsk Iron and Steel Works, 752 million rubles (30.6%); the Chelyabinsk Metallurgical Plant, 610 million rubles (24.8%); the Nizhniy Tagil Iron and Steel Works, 401 million rubles (16.3%); the Orsko-Khalilovskiy Iron and Steel Works, 230 million rubles (9.4%); and the Verkh-Isetsky Iron and Steel Works, 81 million rubles (3.3%). From 1946 to 1965, 4 blast furnaces, 6 coke oven batteries, 14 open-hearth furnaces, and 6 rolling shops were built and launched at the Magnitogorsk Iron and Steel Works. From 1947 to 1959, 4 blast furnaces, 18 open-hearth furnaces, 6 rolling mills, a unique converter shop for processing vanadium pig iron, [ note 14 ] and the country's first continuous casting machine were built at the Nizhniy Tagil Metallurgical Plant. During the same period, blast furnaces, open-hearth furnaces, and electric furnaces were built and reconstructed at the Chelyabinsk Metallurgical Plant, and a sinter plant , electric steel-making shops No. 1 and No. 2, and sheet-rolling, crimping, and section-rolling shops were put into operation. In 1950, a by-product coke plant was launched at the Orsko-Khalilovsky Metallurgical Plant, followed by 3 blast furnaces in 1955-1963 and 9 open-hearth furnaces, blooming, and sheet rolling mills in 1958-1966. In 1950, the Magnitogorsk, Nizhniy Tagil, and Chelyabinsk combines together smelted 71.6% of all Ural pig iron, 53.4% of the steel, and 57.1% of the rolled products. Although the level of technical equipment of the Ural metallurgical plants was lower than in other regions, the cost of the iron and steel they produced was 10-15% below the average for the USSR Ministry of Ferrous Metallurgy . [ 256 ]
To cover the shortage of iron ore, supplies of ore from the Sokolovsko-Sarbaisky Plant in Kazakhstan began in 1957, and from the 1960s from the mines of the Kursk Magnetic Anomaly and the Kola Peninsula . At Vysokogorsky Mining and Processing Plant , the Magnetitovaya, Operational, and Yuzhnaya mines were commissioned in 1949-1954 to mine deep horizons. In 1963, the Kachkanarsky Mining and Processing Plant was put into operation, extracting iron ore with a relatively low (15-16%) iron content, but containing valuable vanadium, which significantly increases the strength properties of steel. To provide copper ore raw materials in the late 1950s - early 1960s, the Gaysky and Uchalinsky mining and processing plants were built. The main supplier of the Ural aluminum raw materials in the post-war years was the North Ural bauxite mines. The Bogoslovsky and Uralsky aluminum plants in 1949-1953 carried out the reconstruction of production facilities and mastered new technological processes. [ 257 ] [ 258 ]
In the 1950s and 1960s, a massive reconstruction of the Verkhnesaldinsky Metallurgical Plant was carried out, with a transition to the production of semi-finished products from titanium alloys. The area of the plant increased fivefold, and the world's largest press, with a force of 75 thousand tons, was installed for stamping slabs. Rolling, forging, and stamping shops were built, and from 1966 the production of small-diameter pipes was mastered on cold rolling mills. After reconstruction, the plant became the world's largest producer of titanium and aluminum alloys. [ 260 ]
In the 1970s, the Ural ferrous metallurgy enterprises underwent reconstruction and re-equipment. The Kachkanarsky Mining and Processing Plant started the production of iron ore pellets, the first in the Urals; the Nizhny Tagil Metallurgical Plant began producing wide-flange beams, the Chelyabinsk Metallurgical Plant stainless steel sheets, the Magnitogorsk Metallurgical Plant bent profiles, and the Verkh-Isetsky Plant cold-rolled transformer sheets. The Ural Plant of Precision Alloys was built. The region's own deposits provided only 50% of the iron ore raw materials for the metallurgical plants in this period. Steelmaking in the 1970s developed through the introduction of out-of-furnace steel processing , the reconstruction of open-hearth furnaces into twin-hearth furnaces, and increases in converter capacity. Demand from the engineering and oil industries drove the expansion of pipe-rolling enterprises. In 1975, the Sinarsky Pipe Plant launched pipe-rolling shop No. 2, which produced drill pipes, and in December 1976 a pipe-rolling shop was launched at the Seversky Pipe Plant . In 1976, the Pervouralsky Novotrubny Plant mastered, for the first time in the country, the production of stainless steel pipes for the nuclear industry. About 65% of Ural steel pipes were produced by the Chelyabinsk Pipe Rolling Plant, and the Sinarsky Pipe Plant was the country's largest producer of cast iron pipes; in all, the Ural plants produced more than 33% of all pipes made in the USSR. In 1980, the Ural metallurgical plants smelted 28.6 million tons of pig iron, an absolute historical record for the region. [ 261 ]
In 1990, Ural metallurgy accounted for 24.5% of all-Union cast iron production, 26.1% of steel, 27.5% of rolled products, and 30.7% of steel pipes. In the last years of the USSR, the problems of an aging equipment stock and of extensive growth based on outdated technologies became acute in the Ural metallurgical industry. The proportion of obsolete equipment in ferrous metallurgy was estimated at 57%, and in non-ferrous metallurgy at 70%. By the early 1990s, about 90% of the equipment of the blast-furnace shops and 85% of the equipment of the rolling shops in the Urals had been in service for more than 20–25 years. In 1985, the share of open-hearth steel in total Ural production was 78.2%, whereas in Western countries and Japan environmentally dirty open-hearth production had been discontinued in the 1980s. The problem of environmental pollution worsened significantly: the surroundings of the Karabash plant became an ecological disaster zone, and about 200,000 hectares (roughly 494,000 acres) of land were occupied by dumps and slurry pits . [ 262 ]
The post-Soviet history of Ural metallurgy falls into three stages. [ 263 ]
Restructuring and the transition to market conditions led to a twofold reduction in production at the Ural metallurgical enterprises. The Nizhniy Tagil and Orsko-Khalilovsky plants went bankrupt, and some enterprises went through bankruptcy proceedings several times. The privatization and corporatization of the Ural metallurgical enterprises were completed in 1992-1994. [ 264 ] In the late 1990s and early 2000s, vertically integrated structures began to form around the large enterprises, encompassing all stages of a closed technological cycle. In addition to the Magnitogorsk Metallurgical Combine, the MMK Group came to include the Magnitogorsk Hardware, Metallurgical, and Calibration Plants; the Mechel group united the Chelyabinsk Metallurgical Plant, Yuzhuralnickel, the Beloretsk Metallurgical Plant , Izhstal , and the Korshunovsky Mining and Processing Plant ; NTMK and the Kachkanarsky Mining and Processing Plant, together with the West Siberian and Kuznetsk metallurgical plants, entered the Evraz holding; the Chelyabinsk Pipe Rolling and Pervouralsk Novotrubny Plants were merged into the ChTPZ Group; the copper smelters became part of UMMC and the Russian Copper Company ; and the aluminum plants joined Rusal and SUAL , which merged in 2007. [ 265 ]
The main directions of development of ferrous metallurgy in the Urals under market conditions were the reconstruction of blast furnaces with optimization of their profile and process control systems, the replacement of open-hearth furnaces with oxygen converters and electric furnaces, the widespread introduction of out-of-furnace steel processing and vacuum degassing of steel before casting, and an increase in the share of continuously cast steel. From 1985 to 2000, the share of Ural steel smelted by the open-hearth method decreased from 78.2% to 46.9%, while the share of converter steel rose from 15% to 46.9% and the share of continuously cast steel from 1.2% to 33.1%. The share of electric steel over the same period remained at about 6-7%. [ 266 ]
After the reconstruction and launch of new capacities, about 85% of Ural steel was produced at the 4 largest metallurgical plants: MMK (39.1% of total steel output in 2006), NTMK (17.6%), Mechel (15.2%), and Ural Steel (11.4%). [ 267 ]
In the late 20th and early 21st centuries, the Ural metallurgical plants have developed in line with the interests of their holding structures, the main directions being the automation of production and the minimization of costs. [ 268 ] Key investment projects have been the reconstruction of the converter shop and the construction of a pulverized coal injection unit at NTMK in 2010-2012, the launch in 2010 of the " Vysota 239 " large-diameter pipe shop at ChTPZ, and the launch in 2009 of Mill-5000 at MMK, which supplies blanks, including for the new ChTPZ shop. By 2014, the share of Ural steel processed out-of-furnace and poured on continuous casting machines had been brought to 100%. [ 269 ] [ 270 ]
In 2008, the Ural plants produced 43.1% of all-Russian pig iron, 43.4% of steel, 43.4% of rolled products, 46.4% of pipes, 47.9% of hardware, 72.8% of ferroalloys, about 80% of bauxite, 60% of alumina, 36% of refined copper, 100% of titanium and magnesium alloys, 64% of zinc, 15% of lead, and 8% of aluminum. The largest contribution to the metallurgy of the region is made by enterprises of the Chelyabinsk and Sverdlovsk Oblasts. [ 271 ] As of 2013, the contribution of the Ural enterprises was estimated at 38% of steel and rolled products and about 50% of steel pipes. [ 270 ] | https://en.wikipedia.org/wiki/History_of_metallurgy_in_the_Urals |
The history of metamaterials begins with artificial dielectrics in microwave engineering as it developed just after World War II . Yet there were seminal explorations of artificial materials for manipulating electromagnetic waves as early as the end of the 19th century. [ 1 ] Hence, the history of metamaterials is essentially a history of developing certain types of manufactured materials which interact at radio frequency , microwave , and later optical frequencies . [ 2 ] [ 3 ] [ 4 ] [ 5 ]
As the science of materials has advanced, photonic materials have been developed which use the photon of light as the fundamental carrier of information. This has led to photonic crystals , and at the beginning of the new millennium, to the proof of principle for functioning metamaterials with a negative index of refraction in the microwave (at 10.5 gigahertz ) and optical [ 4 ] [ 5 ] ranges. This was followed about six years later by the first proof of principle for metamaterial cloaking (shielding an object from view), also in the microwave range. [ 6 ] However, a cloak that can conceal objects across the entire electromagnetic spectrum is still decades away, and many physics and engineering problems remain to be solved.
Nevertheless, negative refractive materials have led to the development of metamaterial antennas and metamaterial microwave lenses for miniature wireless system antennas which are more efficient than their conventional counterparts. Also, metamaterial antennas are now commercially available. Meanwhile, subwavelength focusing with the superlens is also a part of present-day metamaterials research. [ 6 ]
Classical waves transfer energy without transporting matter through the medium (material). For example, waves in a pond do not carry the water molecules from place to place; rather the wave's energy travels through the water, leaving the water molecules in place. Additionally, charged particles, such as electrons and protons, create electromagnetic fields when they move, and these fields transport the type of energy known as electromagnetic radiation, or light. A changing magnetic field will induce a changing electric field and vice versa—the two are linked. These changing fields form electromagnetic waves. Electromagnetic waves differ from mechanical waves in that they do not require a medium to propagate. This means that electromagnetic waves can travel not only through air and solid materials, but also through the vacuum of space. [ 7 ]
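The mutual induction described above is captured by the two curl equations of classical electromagnetism. As a brief aside (standard textbook physics rather than material from the cited source), in SI units and in a source-free medium they read:

\[
\nabla\times\mathbf{E}=-\frac{\partial\mathbf{B}}{\partial t},\qquad
\nabla\times\mathbf{H}=\frac{\partial\mathbf{D}}{\partial t},
\]

where a time-varying magnetic field \(\mathbf{B}\) generates a circulating electric field \(\mathbf{E}\), and a time-varying displacement field \(\mathbf{D}\) generates a circulating magnetic field \(\mathbf{H}\). Combining the two yields a wave equation whose solutions propagate even in vacuum, which is why no material medium is required.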
The " history of metamaterials " can have a variety starting points depending on the properties of interest. Related early wave studies started in 1904 and progressed through more than half of the first part of the twentieth century. This early research included the relationship of the phase velocity to group velocity and the relationship of the wave vector and Poynting vector . [ 8 ] [ 9 ] [ 10 ]
In 1904 the possibility of a negative phase velocity accompanied by an anti-parallel group velocity was noted by Horace Lamb (in his book Hydrodynamics ) and Arthur Schuster (in his book An Introduction to the Theory of Optics ). [ 11 ] However, both thought that these phenomena could not be achieved in practice. In 1945 Leonid Mandelstam (also "Mandel'shtam") studied anti-parallel phase and group advance in more detail. [ 11 ] He is also noted for examining the electromagnetic characteristics of materials exhibiting negative refraction, as well as for the first concept of a left-handed material. These studies included negative group velocity. He reported that such phenomena occur in a crystal lattice , which may be considered significant because a metamaterial is a man-made crystal lattice (structure). [ 8 ] [ 9 ] [ 12 ] [ 13 ] Earlier, in 1905, H.C. Pocklington had also studied certain effects related to negative group velocity. [ 14 ]
V.E. Pafomov (1959) and, several years later, the research team of V.M. Agranovich and V.L. Ginzburg (1966) reported on the implications of negative permittivity , negative permeability , and negative group velocity in their studies of crystals and excitons . [ 8 ] [ 9 ]
In 1967, V.G. Veselago of the Moscow Institute of Physics and Technology considered a theoretical model of the medium that is now known as a metamaterial. [ 11 ] However, physical experimentation did not occur until 33 years after the paper's publication, owing to the lack of suitable materials and of sufficient computing power; only in the 1990s did materials and computing power become available to artificially produce the necessary structures. Veselago also predicted a number of electromagnetic phenomena that would be reversed, including the refractive index . He is credited with coining the term "left-handed material" for what is today called a metamaterial, because of the anti-parallel behavior of the wave vector and the other electromagnetic fields . He further noted that the medium he was studying was a double negative material, as certain metamaterials are named today, because it simultaneously takes negative values for two key parameters, permittivity and permeability. In 1968, his paper was translated and published in English. [ 10 ] [ 15 ] He was later nominated for a Nobel Prize.
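Veselago's core observation can be summarized compactly (the following is the standard textbook formulation, not a quotation from his paper): for a lossless medium in which the relative permittivity \(\varepsilon_r\) and relative permeability \(\mu_r\) are both negative, consistency with Maxwell's equations requires taking the negative branch of the square root for the refractive index,

\[
n=-\sqrt{\varepsilon_r\mu_r},\qquad \varepsilon_r<0,\ \mu_r<0,
\]

so that the wave vector \(\mathbf{k}\) is anti-parallel to the Poynting vector \(\mathbf{S}=\mathbf{E}\times\mathbf{H}\). Energy flows forward while the phase fronts travel backward, and \(\mathbf{E}\), \(\mathbf{H}\), and \(\mathbf{k}\) form a left-handed triad, hence the name "left-handed material".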
More recently, developments in nanofabrication and subwavelength imaging techniques have taken this work into optical wavelengths . [ 16 ]
In the 19th century Maxwell's equations united all previous observations, experiments, and established propositions pertaining to electricity and magnetism into a consistent theory, which is also fundamental to optics . [ 17 ] Maxwell's work demonstrated that electricity, magnetism and even light are all manifestations of the same phenomenon, namely the electromagnetic field . [ 18 ]
Likewise, the concept of using certain constructed materials as a method for manipulating electromagnetic waves dates back to the 19th century. Microwave theory developed significantly during the latter part of the 19th century with the cylindrical parabolic reflector , the dielectric lens , microwave absorbers, the cavity radiator, the radiating iris, and the pyramidal electromagnetic horn . [ 1 ] The science of microwaves also included round, square, and rectangular waveguides , preceding Lord Rayleigh 's published work on waveguide operation in 1896. Microwave optics, involving the focusing of microwaves, introduced quasi-optical components, and a treatment of microwave optics was published in 1897 (by Righi). [ 3 ] [ 19 ] [ 20 ]
Jagadish Chandra Bose was a scientist involved in original microwave research during the 1890s. As officiating professor of physics at Presidency College, he carried out laboratory experiments and studies involving refraction , diffraction and polarization , as well as transmitters , receivers and various microwave components. [ 21 ] [ 22 ]
He connected receivers to a sensitive galvanometer , and developed crystals to be used as receivers. The crystals operated in the shortwave radio range. Crystals were also developed to detect both white and ultraviolet light . These crystals were patented in 1904 for their capability to detect electromagnetic radiation . Furthermore, it appears that his work also anticipated the existence of p-type and n-type semiconductors by 60 years. [ 21 ]
In a public demonstration in 1895, Bose remotely rang a bell and exploded gunpowder using electromagnetic waves. In 1896, it was reported that Bose had transmitted electromagnetic signals over almost a mile. [ 21 ] In 1897, Bose reported on his microwave research (experiments) at the Royal Institution in London, where he demonstrated his apparatus at wavelengths ranging from 2.5 centimeters to 5 millimeters. [ 21 ]
In 1898, Jagadish Chandra Bose conducted the first microwave experiment on twisted structures. These twisted structures match the geometries known as artificial chiral media in today's terminology. By this time, he had also researched double refraction (birefringence) in crystals. Other research included the polarization of electric field "waves" produced by crystals; he discovered this type of polarization in other materials as well, including a class of dielectrics . [ 3 ] [ 21 ] [ 23 ]
In addition, chirality, as optical activity in a given material, is a phenomenon that has been studied since the 19th century. By 1811, a study of quartz crystals revealed that such crystalline solids rotate the polarization of polarized light, denoting optical activity. By 1815, materials other than crystals, such as oil of turpentine, were known to exhibit chirality, but the basic cause was not known. Louis Pasteur solved the problem, attributing the effect to the chirality of the molecules and originating a new discipline known as stereochemistry . At the macroscopic scale, Lindman applied microwaves to the problem with wire spirals (wire helices) in 1920 and 1922. [ 24 ] [ 25 ]
Karl F. Lindman , from 1914 into the 1920s, studied artificial chiral media formed by collections of randomly oriented small spirals . His work was later written about by the present-day metamaterials scientists Ismo V. Lindell, Ari H. Sihvola, and Juhani Kurkijarvi. [ 26 ]
Much of the historical research related to metamaterials is weighted toward antenna beam shaping within microwave engineering just after World War II. Furthermore, metamaterials appear to be historically linked to the body of research on artificial dielectrics throughout the late 1940s, the 1950s and the 1960s. The most common use for artificial dielectrics in those decades was antenna beam shaping in the microwave regime; artificial dielectrics had been proposed as a low-cost and lightweight "tool". Research on artificial dielectrics, apart from metamaterials, is still ongoing for pertinent parts of the electromagnetic spectrum. [ 2 ] [ 27 ] [ 28 ] [ 29 ]
Pioneering work on artificial dielectrics in microwave engineering was produced by Winston E. Kock , Seymour Cohn, John Brown, and Walter Rotman . Periodic artificial structures were proposed by Kock, Rotman, and Sergei Schelkunoff . An extensive reference list focused on the properties of artificial dielectrics appears in the 1991 book Field Theory of Guided Waves by Robert E. Collin . [ 2 ] [ 29 ] [ 30 ] [ 31 ]
Schelkunoff achieved notice for contributions to antenna theory and electromagnetic wave propagation. [ 2 ] "Magnetic particles made of capacitively loaded loops were also suggested by Sergei Schelkunoff in 1952 (who was a senior colleague of Winston Kock at Bell Labs at the time). However, Schelkunoff suggested these particles as a means of synthesizing high permeability (and not negative) values but he recognized that such high permeability artificial dielectrics would be quite dispersive." [ 29 ]
W.E. Kock proposed metallic and wire lenses for antennas, among them the metallic delay lens, the parallel-wire lens, and the wire mesh lens. In addition, he conducted analytical studies of the response of customized metallic particles to quasistatic electromagnetic radiation. Like today's large community of metamaterials researchers, Kock noted behaviors and structures in artificial materials that are similar to those of metamaterials. [ 29 ] [ 30 ] [ 32 ] [ 33 ]
He employed particles of varying geometric shape (spheres , discs, ellipsoids and prolate or oblate spheroids ), either isolated or set in a repeating pattern as part of an array configuration . Furthermore, he was able to determine that such particles behave as a dielectric medium. He also noticed that the permittivity " ε " and permeability " μ " of these particles can be purposely tuned, but not independently. [ 29 ] [ 33 ]
With metamaterials, however, local values for both ε and μ are designed as part of the fabrication process, or analytically designed in theoretical studies. Because of this process, individual metamaterial inclusions can be independently tuned. [ 29 ] [ 33 ] [ 34 ]
With artificial dielectrics Kock was able to see that any value for permittivity and permeability, arbitrarily large or small, can be achieved, and that this included the possibility of negative values for these parameters. The optical properties of the medium depended solely on the particles’ geometrical shape and spacing, rather than on their own intrinsic behavior. His work also anticipated the split-ring resonator , a fabricated periodic structure that is a common workhorse for metamaterials. [ 34 ]
Kock, however, did not investigate the simultaneous occurrence of negative values of ε and μ, which has become one of the first achievements defining modern metamaterials. This was because research in artificial materials was oriented toward other goals, such as creating plasma media at RF or microwave frequencies related to the overarching needs of NASA and the space program at that time. [ 34 ] [ 35 ]
Walter Rotman and R.F. Turner advanced microwave beam-shaping systems with a lens that has three perfect focal points: two symmetrically located off-axis and one on-axis. They published the design equations for the improved straight-front-face lens, an evaluation of its phase control and scanning capabilities, and demonstrated fabrication techniques applicable to this type of design. [ 31 ] Rotman invented other periodic structures, including many types of surface wave antennas: the trough waveguide, the channel waveguide, and the sandwich wire antenna. [ 36 ]
"At frequencies of a few hundred gigahertz and lower, electrons are the principle particles which serve as the workhorse of devices. On the other hand,
at infrared through optical to ultraviolet wavelengths, the photon is the fundamental particle of choice." [ 37 ] The word 'photonics' appeared in the late 1960s to describe a research field whose goal was to use light to perform functions that traditionally fell within the typical domain of electronics, such as telecommunications, information processing, among other processes. [ 38 ] The term photonics more specifically connotes:
Hence, as photonic materials are used, the photons, rather than electrons, become the fundamental carriers of information. Furthermore, the photon appears to be a more efficient carrier of information, and materials that can process photonic signals are both in use and in further development. Additionally, developing photonic materials will lead to further miniaturization of components. [ 38 ]
In 1987 Eli Yablonovitch proposed controlling spontaneous emission and constructing physical zones in periodic dielectrics that forbid certain wavelengths of electromagnetic radiation. These capabilities would be built into three-dimensional periodic dielectric structures (artificial dielectrics). He noted that controlling spontaneous emission is desirable for semiconductor processes. [ 39 ]
Historically, and conventionally, the function or behavior of materials has been altered through their chemistry . This has long been known. For example, adding lead changes the color or hardness of glass . However, at the end of the 20th century this description was expanded by John Pendry , a physicist from Imperial College in London . [ 40 ] In the 1990s he was consulting for a British company, Marconi Materials Technology , as a condensed matter physics expert. The company manufactured a stealth technology for naval vessels made of radiation-absorbing carbon, but did not understand the physics of the material, and asked Pendry if he could work out how it functioned. [ 40 ]
Pendry discovered that the radiation absorption did not come from the molecular or chemical structure of the material, i.e. the carbon per se, but from the long, thin physical shape of the carbon fibers . He realized that, rather than conventionally altering a material through its chemistry, as lead does with glass, the behavior of a material can be altered by changing its internal structure on a very fine scale: a scale smaller than the wavelength of the applied electromagnetic radiation. The idea applies across the portion of the electromagnetic spectrum in use by today's technologies, from radio waves and microwaves through infrared to visible wavelengths. [ 40 ] [ 41 ] Scientists view such materials as "beyond" conventional materials; hence the Greek prefix "meta" was attached, and they are called metamaterials . [ 40 ]
After successfully deducing and realizing the carbon fiber structure, Pendry further proposed to change the magnetic properties of a non-magnetic material, also by altering its physical structure. The material would be neither intrinsically magnetic nor inherently susceptible to being magnetized; copper wire is such a non-magnetic material. He envisioned fabricating a non-magnetic composite material that could mimic the movements of electrons orbiting atoms, with structures fabricated on a scale orders of magnitude larger than the atom, yet smaller than the radiated wavelength.
He hypothesized that miniature loops of copper wire set in a fiberglass substrate could mimic the action of such electrons, but on a larger scale, and that this composite material could act like a slab of iron . In addition, he deduced that running a current through the loops of wire would produce a magnetic response . [ 40 ]
This metamaterial idea led to variations. Cutting the loops produces a magnetic resonator that acts like a switch, which in turn would allow Pendry to determine or alter the magnetic properties of the material simply by choice. At the time, Pendry did not realize the significance of the two materials he had engineered: by combining the electrical properties of Marconi's radar-absorbing material with his new man-made magnetic material, he had unwittingly arrived at a new way to manipulate electromagnetic radiation. In 1999, Pendry published his conception of artificially produced magnetic materials in a notable physics journal. It was read by scientists all over the world, and it "stoked their imagination". [ 40 ] [ 42 ]
In 1967, Victor Veselago produced an often-cited, seminal work on a theoretical material that could produce extraordinary effects that are difficult or impossible to produce in nature. At that time he proposed that a reversal of Snell's law , an extraordinary lens , and other exceptional phenomena could occur within the laws of physics . The theory lay dormant for a few decades, as no materials existed, in nature or otherwise, that could physically realize Veselago's analysis. [ 6 ] [ 15 ] [ 43 ] Not until thirty-three years later did this material, the metamaterial , become a subdiscipline of physics and engineering .
However, there were certain observations, demonstrations, and implementations that closely preceded this work. The permittivity of metals, with values that could be stretched from the positive to the negative domain, had been studied extensively; in other words, negative permittivity was a known phenomenon by the time the first metamaterial was produced. Contemporaries of Kock were involved in this type of research, in a concentrated effort led by the US government to study interactions between the ionosphere and re-entering NASA space vehicles.
In the 1990s, Pendry et al. developed sequentially repeating thin-wire structures, analogous to crystal structures , which extended the achievable range of material permittivity. A more revolutionary structure developed by Pendry et al. was one that could control the magnetic interaction ( permeability ) of the radiated light, albeit only at microwave frequencies. This sequentially repeating split-ring structure extended material magnetic parameters into negative values. The periodic, lattice-like "magnetic" structure was constructed from non-magnetic components.
Hence, in the electromagnetic domain, simultaneously negative values of permittivity and permeability were required to produce the first metamaterials. These were the beginning steps of a proof of principle for Veselago's original 1967 proposal.
In 2000, a team of UCSD researchers produced and demonstrated metamaterials that exhibited unusual physical properties never before observed in nature . These materials obey the laws of physics but behave differently from normal materials. In essence, these negative index metamaterials were noted for the ability to reverse many of the physical properties that govern the behavior of ordinary optical materials. One of those unusual properties is the capability to reverse, for the first time, Snell's law of refraction. Until this May 2000 demonstration by the UCSD team, no such material had been available. Advances during the 1990s in fabrication and computation capabilities allowed these first metamaterials to be constructed. Thus began the testing of the "new" metamaterial for the effects described by Victor Veselago more than 30 years earlier, at first only in the microwave frequency domain. Reversal of group velocity was explicitly announced in the related published paper. [ note 1 ] [ 44 ] [ 45 ] [ 6 ]
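A minimal sketch of the reversal, using textbook optics rather than anything from the original paper: Snell's law

n_1 \sin\theta_1 = n_2 \sin\theta_2

still holds, but if n_2 < 0 then \theta_2 < 0 , so the refracted ray emerges on the same side of the surface normal as the incident ray, the opposite of ordinary refraction.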
The super lens or superlens is a practical structure based on John Pendry 's work describing a perfect lens that can go beyond the diffraction limit by focusing all Fourier components of the image, including the evanescent components that ordinary lenses lose. Pendry's paper described a theoretically novel lens that could capture images below the diffraction limit by employing negative refractive index behavior. The super lens is a practical realization of this theory: a working lens that can capture images below the diffraction limit, even though limitations occur due to the inefficiencies of conventional materials. This means that although there are losses, enough of an image is returned to show this work was a successful demonstration. [ 46 ]
Ulf Leonhardt was born in East Germany , presently occupies the theoretical physics chair at the University of St. Andrews in Scotland , and is considered one of the leaders in the science of creating an invisibility cloak . Around 1999, Leonhardt began work with a few colleagues on how to build a cloaking device. Leonhardt stated that at the time invisibility was not considered fashionable. He then wrote a theoretical study entitled " Optical Conformal Mapping ", whose first sentence sums up the objective: "An invisibility device should guide light around an object as if nothing were there."
In 2005, he sent the paper to three notable scientific journals , Nature , Nature Physics , and Science ; each, in turn, rejected it. In 2006, Physical Review Letters rejected the paper as well. According to the PRL assessment, however, one of the anonymous reviewers noted that he or she had attended two meetings in the previous months with John Pendry 's group, who were also working on a cloaking device, and had become aware of a patent that Pendry and his colleagues intended to file. Leonhardt was at the time unaware of the Pendry group's work. Because of the Pendry meetings, the reviewer did not consider Leonhardt's work new physics, and it therefore did not merit publication in Physical Review Letters.
Later in 2006, Science (the journal) reversed its decision and contacted Leonhardt to publish his paper, because it had just received a theoretical study from Pendry's team entitled " Controlling Electromagnetic Fields ". Science considered the two papers strikingly similar and published both in the same issue of Science Express on May 25, 2006. The published papers touched off efforts by a dozen groups around the globe to build cloaking devices that would test the mathematics of both papers. [ 47 ]
Only months after the submission of these invisibility cloak theories, a practical device was built and demonstrated by David Schurig and David Smith , engineering researchers at Duke University (October 2006). It was limited to the microwave range, so the object was not invisible to the human eye; however, it demonstrated proof of principle . [ 48 ]
The original theoretical papers on cloaking opened a new science discipline called transformation optics . [ 49 ] [ 50 ]
Source: https://en.wikipedia.org/wiki/History_of_metamaterials
Model organisms are specific organisms studied to gain knowledge of other organisms, to generalize both within and between species . Model organisms offer standards for comparison of other organisms. [ 1 ] Model organism strains are standardized by inbreeding and cloning , to limit genetic variation and create a precise basis for comparison. [ 1 ] Some organisms are experimentally convenient and/or important for their history and research community.
The idea of the model organism first took root in the middle of the 19th century with the work of scientists like Charles Darwin and Gregor Mendel and their respective work on natural selection and the genetics of heredity . Beginning in the early 1900s, laboratory experimentation on Drosophila expanded to organisms such as tobacco mosaic virus , E. coli , and C57BL/6 laboratory mice. These organisms have led to many advances in the past century.
Some of the first work with what would now be considered model organisms started because Gregor Johann Mendel felt that Darwin's views were insufficient to describe the formation of a new species, and he began his work with the pea plants so famously known today. In his experiments to find a method by which Darwin's ideas could be explained, he hybridized and cross-bred the peas and found that in doing so he could isolate phenotypic characteristics of the peas. These discoveries, made in the 1860s, lay dormant for nearly forty years until they were rediscovered in 1900. Mendel's work was then correlated with what were being called chromosomes within the nucleus of each cell. Mendel created a practical guide to breeding, and this method has successfully been applied to select some of the first model organisms of other genera and species, such as guinea pigs , Drosophila (fruit flies), mice, and viruses like the tobacco mosaic virus . [ 2 ]
The fruit fly Drosophila melanogaster made the jump from nature to laboratory animal in 1901. At Harvard University, Charles W. Woodworth suggested to William E. Castle that Drosophila might be used for genetical work. [ 3 ] Castle, along with his students, then first brought the fly into the laboratory for experimental use. By 1903 William J. Moenkhaus had brought Drosophila to his lab at the Indiana University Medical School. Moenkhaus in turn convinced the entomologist Frank E. Lutz that it would be a good organism for the work he was doing on experimental evolution at the Carnegie Institution's Station for Experimental Evolution at Cold Spring Harbor, Long Island. Sometime in 1906 Drosophila was adopted by the man who would become very well known for his work with the flies, Thomas Hunt Morgan . Jacques Loeb also experimented with mutations of Drosophila, independently of Morgan's work, during the first decade of the twentieth century. [ 4 ]
Thomas Hunt Morgan is considered one of the most influential figures in experimental biology of the early twentieth century, and his work with Drosophila was extensive. He was one of the first in the field to realize the potential of mapping the chromosomes of Drosophila melanogaster and all known mutants. He would later expand his findings into a comparative study of other species. With careful and painstaking observation, he and other "Drosophilists" were able to control for mutations and cross-breed for new phenotypes. Through many years of such work, the standard strains of these flies became quite uniform, and they are still used in research today. [ 5 ]
Insects were not the only organisms entering laboratories as test subjects. Bacteria had also been introduced, and with the invention of the electron microscope in 1931 by Ernst Ruska , a whole new field of microbiology was born. [ 6 ] This invention allowed microbiologists to see objects far too small to be seen with any light microscope, and thus viruses, which had perplexed biologists of many fields for years, came under scientific scrutiny. [ 7 ] In 1932 Wendell Stanley began a direct competition with Carl G. Vinson to be the first to completely isolate the tobacco mosaic virus (TMV), a virus that had until then been invisibly killing tobacco plants across England. [ 8 ] It was Stanley who accomplished this task first, by making the pH more acidic, and in doing so he was able to conclude that the virus was either a protein or closely related to one.
There are important reasons why these new, much smaller organisms such as E. coli made their way into molecular biologists' laboratories. Organisms like Drosophila and Tribolium were much too large and too complex for the simple quantitative experiments that researchers like Wendell Stanley wanted to perform. [ 9 ] Before the use of these simple organisms, molecular biologists had only comparatively complex organisms to work with.
Today these viruses, including bacteriophages, are used extensively in genetics. They are critical in helping researchers produce DNA within bacteria. Tobacco mosaic virus has RNA that stacks itself in a distinctive way, which influenced the development of the double-helical model of the structure of DNA. [ 10 ]
Bacteriophages ( viruses that infect bacteria) have been studied since 1939 as experimental model organisms for understanding numerous fundamental biological processes at the molecular level. The phage group , initially an informal network of biologists centered on Max Delbrück , contributed substantially to bacterial genetics and the origins of molecular biology in the mid-20th century. [ 11 ] Studies of bacteriophages led to considerable insight into numerous fundamental biological problems: understanding was gained of the functions and interactions of the proteins employed in the machinery of DNA replication , DNA repair and recombination , and of how viruses are assembled from protein and nucleic acid components (molecular morphogenesis ). Experiments with bacteriophage led to the elucidation of the role of chain-terminating codons , and also contributed to the sequence hypothesis , the concept that the amino acid sequence of a protein is specified by the nucleotide sequence of the gene determining the protein.
Both the insects and the viruses were a good start to the history of model organisms, but there were still more players involved. At the turn of the century, much biomedical research was being done using animals, especially mammals, to further biologists' understanding of life processes. It was around this time that American humane societies became very involved with preserving the rights of animals and for the first time began to gain public support for this endeavor. At the same time, American biology was going through its own internal reforms: from 1900 to 1910 thirty medical schools were forced to close. During this time of unrest, Clarence Cook Little , through a series of fortunately timed events, became a researcher at Harvard Medical School and worked on mouse cancers. He began developing large colonies of mutant strains of mice. Under the charge of Dr. William Castle, Little helped to expand the animal breeding practices in the Bussey laboratory at Harvard. Because of the freedom Castle was allowed in running the laboratory, and his financial backing by the university, they were able to create an extensive program in mammalian genetics. [ 12 ]
The mice turned out to be an almost perfect choice of test subject for mammalian genetic research. The fact that they had been bred by 'rat fanciers' for hundreds of years allowed for diverse populations of the animal, while the public held far less sentiment for these rodents than for dogs and cats. Because of this social allowance, Little was able to take the new ideas of 'pure genetic strains' emerging from plant genetics, as well as from work with Drosophila, and run with them. Inbreeding toward the goal of a 'pure strain' risked impairing the fertility of the mice and thus discontinuing the strain. Nevertheless, Little achieved his goal of a genetically pure strain of mice by 1911 and published his findings shortly thereafter.
He continued his work with these mice and used his research to demonstrate that inbreeding is an effective way of eliminating variation and serves to preserve unique genetic variants. Around this time, much work was also being done with these mice in cancer and tumor research. [ 13 ]
Throughout the 1920s, work continued with these mice as model organisms for research into tumors and genetics. It was during the Great Depression that this field of study took its biggest blow. With the economy at rock bottom, labs were forced to sell many of their mice just to keep from shutting down, and this need for funds all but stopped the continuation of these strains. The transition of these laboratories into exporters of massive quantities of mice was made rather easily where adequate production facilities existed on site. Eventually, in the mid-1930s, the market recovered, and genetics laboratories around the country resumed regular funding and the areas of research they had started before the Depression. As research continued, so did the production of mice in places like the Jackson Laboratory, facilities that were able to supply mice to research facilities around the world. These mice were bred with the Mendelian breeding techniques that Little had implemented as standard practice around 1911. This meant that the mice being experimented on were the same not only within a laboratory but across different laboratories around the world. [ 14 ]
The mouse has remained important as molecular genetics and genomics have progressed; sequencing of a reference mouse genome was completed in 2002. [ 15 ] More broadly, comparative genomics has advanced our understanding and reinforced the importance of model organisms, especially ones with relatively small and nonrepetitive genomes.
Source: https://en.wikipedia.org/wiki/History_of_model_organisms
The history of molecular evolution starts in the early 20th century with "comparative biochemistry", but the field of molecular evolution came into its own in the 1960s and 1970s, following the rise of molecular biology . The advent of protein sequencing allowed molecular biologists to create phylogenies based on sequence comparison, and to use the differences between homologous sequences as a molecular clock to estimate the time since the last common ancestor. In the late 1960s, the neutral theory of molecular evolution provided a theoretical basis for the molecular clock, though both the clock and the neutral theory were controversial, since most evolutionary biologists held strongly to panselectionism , with natural selection as the only important cause of evolutionary change. After the 1970s, nucleic acid sequencing allowed molecular evolution to reach beyond proteins to highly conserved ribosomal RNA sequences, the foundation of a reconceptualization of the early history of life .
Before the rise of molecular biology in the 1950s and 1960s, a small number of biologists had explored the possibilities of using biochemical differences between species to study evolution . Alfred Sturtevant predicted the existence of chromosomal inversions in 1921 and, with Dobzhansky, constructed one of the first molecular phylogenies, on 17 Drosophila pseudoobscura strains, from the accumulation of chromosomal inversions observed in the hybridization of polytene chromosomes. [ 1 ] Ernest Baldwin worked extensively on comparative biochemistry beginning in the 1930s, and Marcel Florkin pioneered techniques for constructing phylogenies based on molecular and biochemical characters in the 1940s. However, it was not until the 1950s that biologists developed techniques for producing biochemical data for the quantitative study of molecular evolution . [ 2 ]
The first molecular systematics research was based on immunological assays and protein "fingerprinting" methods. Alan Boyden —building on immunological methods of George Nuttall —developed new techniques beginning in 1954, and in the early 1960s Curtis Williams and Morris Goodman used immunological comparisons to study primate phylogeny. Others, such as Linus Pauling and his students, applied newly developed combinations of electrophoresis and paper chromatography to proteins subject to partial digestion by digestive enzymes to create unique two-dimensional patterns, allowing fine-grained comparisons of homologous proteins. [ 3 ]
Beginning in the 1950s, a few naturalists also experimented with molecular approaches—notably Ernst Mayr and Charles Sibley . While Mayr quickly soured on paper chromatography, Sibley successfully applied electrophoresis to egg-white proteins to sort out problems in bird taxonomy, and soon supplemented that with DNA hybridization techniques—the beginning of a long career built on molecular systematics . [ 4 ]
While such early biochemical techniques found grudging acceptance in the biology community, for the most part they did not impact the main theoretical problems of evolution and population genetics. This would change as molecular biology shed more light on the physical and chemical nature of genes.
At the time that molecular biology was coming into its own in the 1950s, there was a long-running debate—the classical/balance controversy—over the causes of heterosis , the increase in fitness observed when inbred lines are outcrossed. In 1950, James F. Crow offered two different explanations (later dubbed the classical and balance positions) based on the paradox first articulated by J. B. S. Haldane in 1937: the effect of deleterious mutations on the average fitness of a population depends only on the rate of mutations (not the degree of harm caused by each mutation) because more-harmful mutations are eliminated more quickly by natural selection, while less-harmful mutations remain in the population longer. H. J. Muller dubbed this " genetic load ". [ 5 ]
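Haldane's point can be illustrated with a standard mutation–selection balance calculation (textbook population genetics, not quoted from the original article). For a fully recessive deleterious allele with per-locus mutation rate \mu and selection coefficient s against the homozygote, the equilibrium frequency and load are

\hat{q} = \sqrt{\mu / s}, \qquad L = s \hat{q}^2 = \mu,

so the reduction in mean fitness depends only on the mutation rate \mu , not on how harmful each individual mutation is.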
Muller, motivated by his concern about the effects of radiation on human populations, argued that heterosis is primarily the result of deleterious homozygous recessive alleles, the effects of which are masked when separate lines are crossed—this was the dominance hypothesis , part of what Dobzhansky labeled the classical position . Thus, ionizing radiation and the resulting mutations produce considerable genetic load even if death or disease does not occur in the exposed generation, and in the absence of mutation natural selection will gradually increase the level of homozygosity. Bruce Wallace , working with J. C. King , used the overdominance hypothesis to develop the balance position , which left a larger place for overdominance (where the heterozygous state of a gene is more fit than the homozygous states). In that case, heterosis is simply the result of the increased expression of heterozygote advantage . If overdominant loci are common, then a high level of heterozygosity would result from natural selection, and mutation-inducing radiation may in fact facilitate an increase in fitness due to overdominance. (This was also the view of Dobzhansky.) [ 6 ]
Debate continued through the 1950s, gradually becoming a central focus of population genetics. A 1958 study of Drosophila by Wallace suggested that radiation-induced mutations increased the viability of previously homozygous flies, providing evidence for heterozygote advantage and the balance position; Wallace estimated that 50% of loci in natural Drosophila populations were heterozygous. Motoo Kimura 's subsequent mathematical analyses reinforced what Crow had suggested in 1950: that even if overdominant loci are rare, they could be responsible for a disproportionate amount of genetic variability. Accordingly, Kimura and his mentor Crow came down on the side of the classical position. Further collaboration between Crow and Kimura led to the infinite alleles model , which could be used to calculate the number of different alleles expected in a population, based on population size, mutation rate, and whether the mutant alleles were neutral, overdominant, or deleterious. Thus, the infinite alleles model offered a potential way to decide between the classical and balance positions, if accurate values for the level of heterozygosity could be found. [ 7 ]
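The model's key prediction can be sketched as follows (the standard equilibrium result, not quoted from the original article). At mutation–drift equilibrium in a diploid population of effective size N with neutral mutation rate \mu per generation, the expected homozygosity F and heterozygosity H are

F = \frac{1}{1 + 4N\mu}, \qquad H = 1 - F = \frac{4N\mu}{1 + 4N\mu},

so a measured level of heterozygosity can be checked against plausible values of N and \mu .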
By the mid-1960s, the techniques of biochemistry and molecular biology—in particular protein electrophoresis —provided a way to measure the level of heterozygosity in natural populations: a possible means to resolve the classical/balance controversy. In 1963, Jack L. Hubby published an electrophoresis study of protein variation in Drosophila ; [ 8 ] soon after, Hubby began collaborating with Richard Lewontin to apply Hubby's method to the classical/balance controversy by measuring the proportion of heterozygous loci in natural populations. Their two landmark papers, published in 1966, established a significant level of heterozygosity for Drosophila (12%, on average). [ 9 ] However, these findings proved difficult to interpret. Most population geneticists (including Hubby and Lewontin) rejected the possibility of widespread neutral mutations; explanations that did not involve selection were anathema to mainstream evolutionary biology. Hubby and Lewontin also ruled out heterozygote advantage as the main cause because of the segregation load it would entail, though critics argued that the findings actually fit well with the overdominance hypothesis. [ 10 ]
While evolutionary biologists were tentatively branching out into molecular biology, molecular biologists were rapidly turning their attention toward evolution.
After developing the fundamentals of protein sequencing with insulin between 1951 and 1955, Frederick Sanger and his colleagues had published a limited interspecies comparison of the insulin sequence in 1956. Francis Crick , Charles Sibley and others recognized the potential for using biological sequences to construct phylogenies, though few such sequences were yet available. By the early 1960s, techniques for protein sequencing had advanced to the point that direct comparison of homologous amino acid sequences was feasible. [ 11 ] In 1961, Emanuel Margoliash and his collaborators completed the sequence for horse cytochrome c (a longer and more widely distributed protein than insulin), followed in short order by a number of other species.
In 1962, Linus Pauling and Emile Zuckerkandl proposed using the number of differences between homologous protein sequences to estimate the time since divergence , an idea Zuckerkandl had conceived around 1960 or 1961. This began with Pauling's long-time research focus, hemoglobin , which was being sequenced by Walter Schroeder ; the sequences not only supported the accepted vertebrate phylogeny, but also the hypothesis (first proposed in 1957) that the different globin chains within a single organism could also be traced to a common ancestral protein. [ 12 ] Between 1962 and 1965, Pauling and Zuckerkandl refined and elaborated this idea, which they dubbed the molecular clock , and Emil L. Smith and Emanuel Margoliash expanded the analysis to cytochrome c. Early molecular clock calculations agreed fairly well with established divergence times based on paleontological evidence. However, the essential idea of the molecular clock—that individual proteins evolve at a regular rate independent of a species' morphological evolution—was extremely provocative (as Pauling and Zuckerkandl intended it to be). [ 13 ]
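The arithmetic behind the clock is simple (standard usage, not taken from the original papers). If homologous sequences from two species differ at a fraction d of their sites, and substitutions accumulate at rate r per site per year along each lineage, the time since divergence is estimated as

t \approx \frac{d}{2r},

the factor of 2 reflecting that changes accumulate independently along both diverging lineages.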
From the early 1960s, molecular biology was increasingly seen as a threat to the traditional core of evolutionary biology. Established evolutionary biologists—particularly Ernst Mayr , Theodosius Dobzhansky and G. G. Simpson , three of the founders of the modern evolutionary synthesis of the 1930s and 1940s—were extremely skeptical of molecular approaches, especially when it came to the connection (or lack thereof) to natural selection . Molecular evolution in general—and the molecular clock in particular—offered little basis for exploring evolutionary causation. According to the molecular clock hypothesis, proteins evolved essentially independently of the environmentally determined forces of selection; this was sharply at odds with the panselectionism prevalent at the time. Moreover, Pauling, Zuckerkandl, and other molecular biologists were increasingly bold in asserting the significance of "informational macromolecules" (DNA, RNA and proteins) for all biological processes, including evolution. [ 14 ] The struggle between evolutionary biologists and molecular biologists—with each group holding up their discipline as the center of biology as a whole—was later dubbed the "molecular wars" by Edward O. Wilson , who experienced firsthand the domination of his biology department by young molecular biologists in the late 1950s and the 1960s. [ 15 ]
In 1961, Mayr began arguing for a clear distinction between functional biology (which considered proximate causes and asked "how" questions) and evolutionary biology (which considered ultimate causes and asked "why" questions). [ 16 ] He argued that both disciplines and individual scientists could be classified on either the functional or evolutionary side, and that the two approaches to biology were complementary. Mayr, Dobzhansky, Simpson and others used this distinction to argue for the continued relevance of organismal biology, which was rapidly losing ground to molecular biology and related disciplines in the competition for funding and university support. [ 17 ] It was in that context that Dobzhansky first published his famous statement, " nothing in biology makes sense except in the light of evolution ", in a 1964 paper affirming the importance of organismal biology in the face of the molecular threat; Dobzhansky characterized the molecular disciplines as " Cartesian " (reductionist) and organismal disciplines as " Darwinian ". [ 18 ]
Mayr and Simpson attended many of the early conferences where molecular evolution was discussed, critiquing what they saw as the overly simplistic approaches of the molecular clock. The molecular clock, based on uniform rates of genetic change driven by random mutations and drift, seemed incompatible with the varying rates of evolution and environmentally-driven adaptive processes (such as adaptive radiation ) that were among the key developments of the evolutionary synthesis. At the 1962 Wenner-Gren conference, the 1964 Colloquium on the Evolution of Blood Proteins in Bruges , Belgium , and the 1964 Conference on Evolving Genes and Proteins at Rutgers University , they engaged directly with the molecular biologists and biochemists, hoping to maintain the central place of Darwinian explanations in evolution as its study spread to new fields. [ 19 ]
Though not directly related to molecular evolution, the mid-1960s also saw the rise of the gene-centered view of evolution , spurred by George C. Williams 's Adaptation and Natural Selection (1966). Debate over units of selection , particularly the controversy over group selection , led to increased focus on individual genes (rather than whole organisms or populations) as the theoretical basis for evolution. However, the increased focus on genes did not mean a focus on molecular evolution; in fact, the adaptationism promoted by Williams and other evolutionary theorists further marginalized the apparently non-adaptive changes studied by molecular evolutionists.
The intellectual threat of molecular evolution became more explicit in 1968, when Motoo Kimura introduced the neutral theory of molecular evolution . [ 20 ] Based on the available molecular clock studies (of hemoglobin from a wide variety of mammals, cytochrome c from mammals and birds, and triosephosphate dehydrogenase from rabbits and cows), Kimura (assisted by Tomoko Ohta ) calculated an average rate of DNA substitution of one base pair change per 300 base pairs (encoding 100 amino acids) per 28 million years. For mammal genomes, this indicated a substitution rate of one every 1.8 years, which would produce an unsustainably high substitution load unless the preponderance of substitutions was selectively neutral. Kimura argued that neutral mutations occur very frequently, a conclusion compatible with the results of the electrophoretic studies of protein heterozygosity. Kimura also applied his earlier mathematical work on genetic drift to explain how neutral mutations could come to fixation , even in the absence of natural selection; he soon convinced James F. Crow of the potential power of neutral alleles and genetic drift as well. [ 21 ]
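Kimura's central calculation can be sketched in one line (a standard derivation, not quoted from his letter). In a diploid population of size N with neutral mutation rate \mu per generation, 2N\mu new neutral mutations arise each generation and each fixes with probability 1/(2N) , so the substitution rate is

k = 2N\mu \cdot \frac{1}{2N} = \mu,

independent of population size, which is what makes a roughly constant molecular clock plausible under neutrality.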
Kimura's theory—described only briefly in a letter to Nature —was followed shortly after with a more substantial analysis by Jack L. King and Thomas H. Jukes —who titled their first paper on the subject " Non-Darwinian Evolution ". [ 22 ] Though King and Jukes produced much lower estimates of substitution rates and the resulting genetic load in the case of non-neutral changes, they agreed that neutral mutations driven by genetic drift were both real and significant. The fairly constant rates of evolution observed for individual proteins were not easily explained without invoking neutral substitutions (though G. G. Simpson and Emil Smith had tried). Jukes and King also found a strong correlation between the frequency of amino acids and the number of different codons encoding each amino acid. This pointed to substitutions in protein sequences as being largely the product of random genetic drift. [ 23 ]
King and Jukes' paper, especially with its provocative title, was seen as a direct challenge to mainstream neo-Darwinism, and it brought molecular evolution and the neutral theory to the center of evolutionary biology. It provided a mechanism for the molecular clock and a theoretical basis for exploring deeper issues of molecular evolution, such as the relationship between rate of evolution and functional importance. The rise of the neutral theory marked a synthesis of evolutionary biology and molecular biology—though an incomplete one. [ 24 ]
With their work on firmer theoretical footing, in 1971 Emile Zuckerkandl and other molecular evolutionists founded the Journal of Molecular Evolution .
The critical responses to the neutral theory that soon appeared marked the beginning of the neutralist-selectionist debate . In short, selectionists viewed natural selection as the primary or only cause of evolution, even at the molecular level, while neutralists held that neutral mutations were widespread and that genetic drift was a crucial factor in the evolution of proteins. Kimura became the most prominent defender of the neutral theory—which would be his main focus for the rest of his career. With Ohta, he refocused his arguments on the rate at which drift could fix new mutations in finite populations, the significance of constant protein evolution rates, and the functional constraints on protein evolution that biochemists and molecular biologists had described. Though Kimura had initially developed the neutral theory partly as an outgrowth of the classical position within the classical/balance controversy (predicting high genetic load as a consequence of non-neutral mutations), he gradually deemphasized his original argument that segregational load would be impossibly high without neutral mutations (which many selectionists, and even fellow neutralists King and Jukes, rejected). [ 25 ]
From the 1970s through the early 1980s, both selectionists and neutralists could explain the observed high levels of heterozygosity in natural populations, by assuming different values for unknown parameters. Early in the debate, Kimura's student Tomoko Ohta focused on the interaction between natural selection and genetic drift, which was significant for mutations that were not strictly neutral, but nearly so. In such cases, selection would compete with drift: most slightly deleterious mutations would be eliminated by natural selection or chance; some would move to fixation through drift. The behavior of this type of mutation, described by an equation that combined the mathematics of the neutral theory with classical models, became the basis of Ohta's nearly neutral theory of molecular evolution . [ 26 ]
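The selection–drift interplay that Ohta studied is compactly expressed by the standard diffusion result (textbook population genetics, not quoted from her papers; effective size is taken equal to census size for simplicity). A new semidominant mutation with selection coefficient s in a diploid population of effective size N_e has fixation probability approximately

P_{\text{fix}} \approx \frac{1 - e^{-2s}}{1 - e^{-4 N_e s}},

which reduces to the neutral value 1/(2N_e) as s \to 0 ; mutations with |4 N_e s| \ll 1 behave as effectively neutral, so whether a slightly deleterious mutation is "seen" by selection depends on population size.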
In 1973, Ohta published a short letter in Nature [ 27 ] suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. Molecular evolutionists were finding that while rates of protein evolution (consistent with the molecular clock ) were fairly independent of generation time , rates of noncoding DNA divergence were inversely proportional to generation time. Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that most amino acid substitutions are slightly deleterious while noncoding DNA substitutions are more neutral. In this case, the faster rate of neutral evolution in proteins expected in small populations (due to genetic drift) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations). [ 28 ]
Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. [ 29 ] According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because the mathematically simpler neutral theory sufficed for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing . As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift once again became an important focus of research. [ 30 ]
While early work in molecular evolution focused on readily sequenced proteins and relatively recent evolutionary history, by the late 1960s some molecular biologists were pushing further toward the base of the tree of life by studying highly conserved nucleic acid sequences. Carl Woese , a molecular biologist whose earlier work was on the genetic code and its origin, began using small subunit ribosomal RNA to reclassify bacteria by genetic (rather than morphological) similarity. Work proceeded slowly at first, but accelerated as new sequencing methods were developed in the 1970s and 1980s. By 1977, Woese and George Fox announced that some bacteria, such as methanogens , lacked the rRNA units that Woese's phylogenetic studies were based on; they argued that these organisms were actually distinct enough from conventional bacteria and the so-called higher organisms to form their own kingdom, which they called archaebacteria . Though controversial at first (and challenged again in the late 1990s), Woese's work became the basis of the modern three-domain system of Archaea , Bacteria , and Eukarya (replacing the five-kingdom system that had emerged in the 1960s). [ 31 ]
Work on microbial phylogeny also brought molecular evolution closer to cell biology and origin of life research. The differences between archaea and other organisms pointed to the importance of RNA in the early history of life. In his work with the genetic code, Woese had suggested that RNA-based life had preceded the current forms of DNA-based life, as had several others before him—an idea that Walter Gilbert would later call the " RNA world ". In many cases, genomics research in the 1990s produced phylogenies contradicting the rRNA-based results, leading to the recognition of widespread lateral gene transfer across distinct taxa. Combined with the probable endosymbiotic origin of organelle -filled eukarya, this pointed to a far more complex picture of the origin and early history of life, one which might not be describable in the traditional terms of common ancestry. [ 32 ]
Source: https://en.wikipedia.org/wiki/History_of_molecular_evolution
In chemistry , the history of molecular theory traces the origins of the concept or idea of the existence of strong chemical bonds between two or more atoms .
A modern conceptualization of molecules began to develop in the 19th century along with experimental evidence for pure chemical elements and how individual atoms of different chemical elements such as hydrogen and oxygen can combine to form chemically stable molecules such as water molecules.
The modern concept of molecules can be traced back to pre-scientific Greek philosophers such as Leucippus and Democritus , who argued that all the universe is composed of atoms and voids .
Circa 450 BC, Empedocles imagined the fundamental elements fire , earth , air , and water , and "forces" of attraction and repulsion allowing the elements to interact. Prior to this, Heraclitus had claimed that fire or change was fundamental to our existence, created through the combination of opposite properties. [ 1 ]
In the Timaeus , Plato , following Pythagoras , considered mathematical entities such as number, point, line and triangle as the fundamental building blocks or elements of this ephemeral world, and considered the four elements of fire, air, water and earth as states of substances through which the true mathematical principles or elements would pass. [ 2 ] A fifth element, the incorruptible quintessence aether , was considered to be the fundamental building block of the heavenly bodies.
The four-element viewpoint of Empedocles, along with the aether, was accepted by Aristotle and passed to medieval and renaissance Europe, while the atomism of Leucippus was rejected.
The earliest views on the shapes and connectivity of atoms were those proposed by Leucippus , Democritus , and Epicurus , who reasoned that the solidness of a material corresponded to the shape of the atoms involved. Thus, iron atoms are solid and strong, with hooks that lock them into a solid; water atoms are smooth and slippery; salt atoms, because of their taste, are sharp and pointed; and air atoms are light and whirling, pervading all other materials. [ 3 ]
It was Democritus who was the main proponent of this view. Using analogies based on the experiences of the senses , he gave a picture or an image of an atom in which atoms were distinguished from each other by their shape, their size, and the arrangement of their parts. Moreover, connections were explained by material links in which single atoms were supplied with attachments: some with hooks and eyes, others with balls and sockets. [ 4 ]
With the rise of scholasticism and the decline of the Roman Empire, the atomic theory was abandoned for many ages in favor of the various four-element theories and later alchemical theories. The 17th century, however, saw a resurgence of the atomic theory, primarily through the works of Gassendi and Newton .
Among other scientists of that time, Gassendi deeply studied ancient history, wrote major works about Epicurus ' natural philosophy, and was a persuasive propagandist of it. He reasoned that the size and shape of atoms moving in a void could account for the properties of matter. Heat was due to small, round atoms; cold, to pyramidal atoms with sharp points, which accounted for the pricking sensation of severe cold; and solids were held together by interlacing hooks. [ 5 ]
Newton, though he acknowledged the various atom attachment theories in vogue at the time, i.e. "hooked atoms", "glued atoms" (bodies at rest), and the "stick together by conspiring motions" theory, rather believed, as famously stated in "Query 31" of his 1704 Opticks , that particles attract one another by some force, which "in immediate contact is extremely strong, at small distances performs the chemical operations, and reaches not far from particles with any sensible effect." [ 6 ]
In a more concrete manner, however, the concept of aggregates or units of bonded atoms, i.e. " molecules ", traces its origins to Robert Boyle 's 1661 hypothesis, in his famous treatise The Sceptical Chymist , that matter is composed of clusters of particles and that chemical change results from the rearrangement of the clusters. Boyle argued that matter's basic elements consisted of various sorts and sizes of particles, called " corpuscles ", which were capable of arranging themselves into groups.
In 1680, using the corpuscular theory as a basis, French chemist Nicolas Lemery stipulated that the acidity of any substance consisted in its pointed particles, while alkalis were endowed with pores of various sizes. [ 7 ] A molecule, according to this view, consisted of corpuscles united through a geometric locking of points and pores.
An early precursor to the idea of bonded "combinations of atoms" was the theory of "combination via chemical affinity ". For example, in 1718, building on Boyle's conception of combinations of clusters, the French chemist Étienne François Geoffroy developed theories of chemical affinity to explain combinations of particles, reasoning that a certain alchemical "force" draws certain alchemical components together. Geoffroy's name is best known in connection with his tables of " affinities " ( tables des rapports ), which he presented to the French Academy in 1718 and 1720.
These were lists, prepared by collating observations on the actions of substances one upon another, showing the varying degrees of affinity exhibited by analogous bodies for different reagents . These tables retained their vogue for the rest of the century, until displaced by the profounder conceptions introduced by CL Berthollet .
In 1738, Swiss physicist and mathematician Daniel Bernoulli published Hydrodynamica , which laid the basis for the kinetic theory of gases. In this work, Bernoulli advanced the argument, still used to this day, that gases consist of great numbers of molecules moving in all directions, that their impact on a surface causes the gas pressure that we feel, and that what we experience as heat is simply the kinetic energy of their motion. The theory was not immediately accepted, in part because conservation of energy had not yet been established, and it was not obvious to physicists how the collisions between molecules could be perfectly elastic.
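In modern notation (a later formalization rather than Bernoulli's own), the kinetic picture relates pressure directly to molecular motion: for N molecules of mass m in a volume V with mean-square speed ⟨v²⟩,

\[
pV = \tfrac{1}{3}\, N m \langle v^{2} \rangle = \tfrac{2}{3}\, N \left\langle \tfrac{1}{2} m v^{2} \right\rangle ,
\]

so the pressure exerted on the container walls is proportional to the average kinetic energy of the molecules, which is what Bernoulli identified with heat.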
In 1789, William Higgins published views on what he called combinations of "ultimate" particles, which foreshadowed the concept of valency bonds . If, for example, according to Higgins, the force between the ultimate particle of oxygen and the ultimate particle of nitrogen were 6, then the strength of the force would be divided accordingly, and similarly for the other combinations of ultimate particles:
Similar to these views, in 1803 John Dalton took the atomic weight of hydrogen, the lightest element, as unity, and determined, for example, that the ratio for nitrous anhydride was 2 to 3, which gives the formula N2O3. Dalton incorrectly imagined that atoms "hooked" together to form molecules. Later, in 1808, Dalton published his famous diagram of combined "atoms":
Amedeo Avogadro created the word "molecule". [ 8 ] In his 1811 paper "Essay on Determining the Relative Masses of the Elementary Molecules of Bodies", he essentially states, according to Partington 's A Short History of Chemistry , that: [ 9 ]
The smallest particles of gases are not necessarily simple atoms, but are made up of a certain number of these atoms united by attraction to form a single molecule .
Note that this quote is not a literal translation. Avogadro uses the name "molecule" for both atoms and molecules. Specifically, he uses the name "elementary molecule" when referring to atoms and, to complicate the matter further, also speaks of "compound molecules" and "composite molecules".
During his stay in Vercelli, Avogadro wrote a concise note ( memoria ) in which he declared the hypothesis of what we now call Avogadro's law : equal volumes of gases, at the same temperature and pressure, contain the same number of molecules . This law implies that the relationship between the weights of equal volumes of different gases, at the same temperature and pressure, corresponds to the relationship between their respective molecular weights . Hence, relative molecular masses could now be calculated from the masses of gas samples.
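Stated in modern symbols (an illustrative restatement, not Avogadro's notation): if equal volumes at the same temperature and pressure contain the same number N of molecules, then two such samples with masses m₁ and m₂ and molecular weights M₁ and M₂ satisfy

\[
\frac{m_1}{m_2} = \frac{N M_1}{N M_2} = \frac{M_1}{M_2},
\]

so weighing equal volumes of two gases, for example a gas against hydrogen, gives the relative molecular weight directly.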
Avogadro developed this hypothesis to reconcile Joseph Louis Gay-Lussac 's 1808 law on volumes and combining gases with Dalton's 1803 atomic theory . The greatest difficulty Avogadro had to resolve was the huge confusion at that time regarding atoms and molecules—one of the most important contributions of Avogadro's work was clearly distinguishing one from the other, admitting that simple particles too could be composed of molecules and that these are composed of atoms. Dalton, by contrast, did not consider this possibility. Curiously, Avogadro considers only molecules containing even numbers of atoms; he does not say why odd numbers are left out.
In 1826, building on the work of Avogadro, the French chemist Jean-Baptiste Dumas states:
Gases in similar circumstances are composed of molecules or atoms placed at the same distance, which is the same as saying that they contain the same number in the same volume.
In coordination with these concepts, in 1833 the French chemist Marc Antoine Auguste Gaudin presented a clear account of Avogadro's hypothesis, [ 10 ] regarding atomic weights, by making use of "volume diagrams", which clearly show both semi-correct molecular geometries, such as a linear water molecule, and correct molecular formulas, such as H2O:
In two papers outlining his "theory of atomicity of the elements" (1857–58), Friedrich August Kekulé was the first to offer a theory of how every atom in an organic molecule was bonded to every other atom. He proposed that carbon atoms were tetravalent, and could bond to themselves to form the carbon skeletons of organic molecules.
In 1856, Scottish chemist Archibald Couper began research on the bromination of benzene at the laboratory of Charles Wurtz in Paris. [ 11 ] One month after Kekulé's second paper appeared, Couper's independent and largely identical theory of molecular structure was published. He offered a very concrete idea of molecular structure, proposing that atoms joined to each other like modern-day Tinkertoys in specific three-dimensional structures. Couper was the first to use lines between atoms, in conjunction with the older method of using brackets, to represent bonds, and also postulated straight chains of atoms as the structures of some molecules and ring-shaped structures for others, such as tartaric acid and cyanuric acid . [ 12 ] In later publications, Couper's bonds were represented using straight dotted lines (although it is not known whether this was the typesetter's preference), as with alcohol and oxalic acid below:
In 1861, an unknown Vienna high-school teacher named Joseph Loschmidt published, at his own expense, a booklet entitled Chemische Studien I , containing pioneering molecular images which showed both "ringed" structures as well as double-bonded structures, such as: [ 13 ]
Loschmidt also suggested a possible formula for benzene, but left the issue open. The first proposal of the modern structure for benzene was due to Kekulé, in 1865. The cyclic nature of benzene was finally confirmed by the crystallographer Kathleen Lonsdale . Benzene presents a special problem in that, to account for all the bonds, there must be alternating double carbon bonds:
In 1865, German chemist August Wilhelm von Hofmann was the first to make stick-and-ball molecular models, which he used in lectures at the Royal Institution of Great Britain , such as the methane model shown below:
The basis of this model followed the earlier 1855 suggestion by his colleague William Odling that carbon is tetravalent . Hofmann's color scheme, to note, is still used today: carbon = black, nitrogen = blue, oxygen = red, chlorine = green, sulfur = yellow, hydrogen = white. [ 14 ] The deficiencies in Hofmann's model were essentially geometric: carbon bonding was shown as planar rather than tetrahedral, and the atoms were out of proportion, e.g. carbon was depicted as smaller than hydrogen.
In 1864, Scottish organic chemist Alexander Crum Brown began to draw pictures of molecules, in which he enclosed the symbols for atoms in circles, and used broken lines to connect the atoms together in a way that satisfied each atom's valence.
The year 1873, by many accounts, was a seminal point in the history of the development of the concept of the "molecule". In that year, the renowned Scottish physicist James Clerk Maxwell published his famous thirteen-page article 'Molecules' in the September issue of Nature . [ 15 ] In the opening section of this article, Maxwell clearly states:
An atom is a body which cannot be cut in two; a molecule is the smallest possible portion of a particular substance.
After speaking about the atomic theory of Democritus , Maxwell goes on to tell us that the word 'molecule' is a modern word. He states, "it does not occur in Johnson's Dictionary . The ideas it embodies are those belonging to modern chemistry." We are told that an 'atom' is a material point, invested and surrounded by 'potential forces' and that when 'flying molecules' strike against a solid body in constant succession it causes what is called pressure of air and other gases. At this point, however, Maxwell notes that no one has ever seen or handled a molecule.
In 1874, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel independently proposed that the phenomenon of optical activity could be explained by assuming that the chemical bonds between carbon atoms and their neighbors were directed towards the corners of a regular tetrahedron. This led to a better understanding of the three-dimensional nature of molecules.
Emil Fischer developed the Fischer projection technique for viewing 3-D molecules on a 2-D sheet of paper:
In 1898, Ludwig Boltzmann , in his Lectures on Gas Theory , used the theory of valence to explain the phenomenon of gas phase molecular dissociation, and in doing so drew one of the first rudimentary yet detailed atomic orbital overlap drawings. Noting first the known fact that molecular iodine vapor dissociates into atoms at higher temperatures, Boltzmann states that we must explain the existence of molecules composed of two atoms, the "double atom" as Boltzmann calls it, by an attractive force acting between the two atoms. Boltzmann states that this chemical attraction, owing to certain facts of chemical valence, must be associated with a relatively small region on the surface of the atom called the sensitive region .
Boltzmann states that this "sensitive region" will lie on the surface of the atom, or may partially lie inside the atom, and will firmly be connected to it. Specifically, he states "only when two atoms are situated so that their sensitive regions are in contact, or partly overlap, will there be a chemical attraction between them. We then say that they are chemically bound to each other." This picture is detailed below, showing the α-sensitive region of atom-A overlapping with the β-sensitive region of atom-B: [ 16 ]
In the early 20th century, the American chemist Gilbert N. Lewis began to use dots in lecture, while teaching undergraduates at Harvard , to represent the electrons around atoms. His students favored these drawings, which stimulated him in this direction. From these lectures, Lewis noted that elements with a certain number of electrons seemed to have a special stability. This phenomenon was pointed out by the German chemist Richard Abegg in 1904, and Lewis referred to it as "Abegg's law of valence" (now generally known as Abegg's rule ). To Lewis it appeared that once a core of eight electrons has formed around a nucleus, the layer is filled, and a new layer is started. Lewis also noted that various ions with eight electrons also seemed to have a special stability. On these views, he proposed the rule of eight or octet rule : Ions or atoms with a filled layer of eight electrons have a special stability . [ 17 ]
Moreover, noting that a cube has eight corners, Lewis envisioned an atom as having eight sides available for electrons, like the corners of a cube. Subsequently, in 1902 he devised a conception in which cubic atoms can bond on their sides to form cubic-structured molecules.
In other words, electron-pair bonds are formed when two atoms share an edge, as in structure C below. This results in the sharing of two electrons. Similarly, charged ionic bonds are formed by the transfer of an electron from one cube to another, without sharing an edge (structure A ). An intermediate state (structure B ), where only one corner is shared, was also postulated by Lewis.
Hence, double bonds are formed by sharing a face between two cubic atoms. This results in the sharing of four electrons.
In 1913, while working as the chair of the department of chemistry at the University of California, Berkeley , Lewis read a preliminary outline of a paper by an English graduate student, Alfred Lauck Parson , who was visiting Berkeley for a year. In this paper, Parson suggested that the electron is not merely an electric charge but is also a small magnet (or " magneton " as he called it) and, furthermore, that a chemical bond results from two electrons being shared between two atoms. [ 18 ] This, according to Lewis, meant that bonding occurred when two electrons formed a shared edge between two complete cubes.
On these views, in his famous 1916 article The Atom and the Molecule , Lewis introduced the "Lewis structure" to represent atoms and molecules, where dots represent electrons and lines represent covalent bonds . In this article, he developed the concept of the electron-pair bond , in which two atoms may share one to six electrons, thus forming the single electron bond , a single bond , a double bond , or a triple bond .
In Lewis' own words:
An electron may form a part of the shell of two different atoms and cannot be said to belong to either one exclusively.
Moreover, he proposed that an atom tended to form an ion by gaining or losing the number of electrons needed to complete a cube. Thus, Lewis structures show each atom in the structure of the molecule using its chemical symbol. Lines are drawn between atoms that are bonded to one another; occasionally, pairs of dots are used instead of lines. Excess electrons that form lone pairs are represented as a pair of dots and are placed next to the atoms on which they reside:
To summarize his views on his new bonding model, Lewis states: [ 19 ]
Two atoms may conform to the rule of eight, or the octet rule, not only by the transfer of electrons from one atom to another, but also by sharing one or more pairs of electrons...Two electrons thus coupled together, when lying between two atomic centers, and held jointly in the shells of the two atoms, I have considered to be the chemical bond. We thus have a concrete picture of that physical entity, that "hook and eye" which is part of the creed of the organic chemist.
The following year, in 1917, a then-unknown American undergraduate chemical engineer named Linus Pauling was learning the Dalton hook-and-eye bonding method at the Oregon Agricultural College , which was the vogue description of bonds between atoms at the time. Each atom had a certain number of hooks that allowed it to attach to other atoms, and a certain number of eyes that allowed other atoms to attach to it. A chemical bond resulted when a hook and eye connected. Pauling, however, was not satisfied with this archaic method and looked to the newly emerging field of quantum physics for a new approach.
In 1927, the physicists Fritz London and Walter Heitler applied the new quantum mechanics to deal with the saturable, nondynamic forces of attraction and repulsion, i.e., exchange forces, of the hydrogen molecule. Their valence bond treatment of this problem, in their joint paper, [ 20 ] was a landmark in that it brought chemistry under quantum mechanics. Their work was an influence on Pauling, who had just received his doctorate and visited Heitler and London in Zürich on a Guggenheim Fellowship .
Subsequently, in 1931, building on the work of Heitler and London and on theories found in Lewis' famous article, Pauling published his ground-breaking article "The Nature of the Chemical Bond" [ 21 ] (see: manuscript ) in which he used quantum mechanics to calculate properties and structures of molecules, such as angles between bonds and rotation about bonds. On these concepts, Pauling developed hybridization theory to account for bonds in molecules such as CH4, in which four sp³ hybridised orbitals are overlapped by hydrogen 's 1s orbital, yielding four sigma (σ) bonds . The four bonds are of the same length and strength, which yields a molecular structure as shown below:
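In the standard modern formulation (a textbook restatement rather than Pauling's original notation), the four equivalent sp³ hybrids are orthonormal combinations of the carbon 2s and 2p orbitals:

\[
\psi_{1,2,3,4} = \tfrac{1}{2}\left( s \pm p_x \pm p_y \pm p_z \right),
\]

taking the four sign choices with an even number of minus signs among the p terms. The resulting orbitals point toward alternate corners of a cube, i.e. the vertices of a tetrahedron, separated by the tetrahedral angle \( \arccos(-\tfrac{1}{3}) \approx 109.5^{\circ} \).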
Owing to these exceptional theories, Pauling won the 1954 Nobel Prize in Chemistry . Notably, he is the only person ever to have won two unshared Nobel Prizes , also receiving the Nobel Peace Prize for 1962 (awarded in 1963).
In 1926, French physicist Jean Perrin received the Nobel Prize in Physics for proving, conclusively, the existence of molecules. He did this by calculating Avogadro's number using three different methods, all involving liquid phase systems. First, he used a gamboge soap-like emulsion, second by doing experimental work on Brownian motion , and third by confirming Einstein's theory of particle rotation in the liquid phase. [ 22 ]
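The Brownian-motion route relied on Einstein's 1905 displacement relation, which Perrin put to experimental test: for a sphere of radius r suspended in a fluid of viscosity η at temperature T, the mean squared displacement grows linearly with time,

\[
\langle x^{2} \rangle = \frac{RT}{N_A}\,\frac{t}{3\pi \eta r},
\]

so that tracking particles of known size under the microscope yields Avogadro's number N_A directly from measured displacements.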
In 1937, chemist K.L. Wolf introduced the concept of supermolecules ( Übermoleküle ) to describe hydrogen bonding in acetic acid dimers . This would eventually lead to the area of supramolecular chemistry , which is the study of non-covalent bonding.
In 1951, physicist Erwin Wilhelm Müller invented the field ion microscope and was the first to see atoms , e.g. bonded atomic arrangements at the tip of a metal point. [ 23 ]
In 1968–1970, Leroy Cooper of the University of California at Davis completed a PhD thesis that showed what molecules look like. He used X-ray deflection off crystals and a complex computer program written by Bill Pentz of the UC Davis Computer Center. The program took the mapped deflections and used them to calculate the basic shapes of crystal molecules. His work showed that the actual molecular shapes in quartz and other tested crystals resembled the long-envisioned model of merged soap bubbles of various sizes, except that instead of merged spheres of different sizes, the actual shapes were rigid mergers of more teardrop-like forms that stayed fixed in orientation. This work verified for the first time that crystal molecules are actually linked or stacked merged teardrop constructions. [ citation needed ]
In 1999, researchers from the University of Vienna reported results from experiments on wave-particle duality for C60 molecules. [ 24 ] The data published by Anton Zeilinger et al. were consistent with Louis de Broglie 's matter waves . This experiment was noted for extending the applicability of wave–particle duality by about one order of magnitude in the macroscopic direction. [ 25 ]
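The interpretation rests on de Broglie's relation λ = h/mv. For a C60 molecule of mass about 720 u (≈ 1.2 × 10⁻²⁴ kg) at a representative beam velocity of 200 m/s (an illustrative figure of the same order as those reported),

\[
\lambda = \frac{h}{mv} \approx \frac{6.6 \times 10^{-34}\ \mathrm{J\,s}}{(1.2 \times 10^{-24}\ \mathrm{kg})(200\ \mathrm{m/s})} \approx 2.8\ \mathrm{pm},
\]

a wavelength hundreds of times smaller than the molecule itself, which is why demonstrating interference for so massive an object was considered remarkable.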
In 2009, researchers from IBM managed to take the first picture of a real molecule. [ 26 ] Using an atomic force microscope , they were able to image every single atom and bond of a pentacene molecule. | https://en.wikipedia.org/wiki/History_of_molecular_theory |
The development of water treatment and filtration technologies went through many stages. The greatest level of change came in the 19th century as the growth of cities forced the development of new methods for distributing and treating water and the problems of water contamination became more pronounced.
Sushruta of India recommended boiling and heating water under the sun and then filtering with gravel and charcoal prior to drinking. ( Sushruta Samhita , Arabic translation Kitab-i-Susrud). Early water treatment was primarily focused on the aesthetic properties of water, taste and odor. Writings from ancient Greece indicate that boiling and filtering water through charcoal were used along with exposing the water to sunlight and straining. Other cultures such as the Egyptians were using alum as a means of removing suspended particles by 1500 B.C. [ 1 ] Medieval Venice obtained filtered water from cisterns using beds of sand. [ 2 ]
Throughout most of human history, the primary means of acquiring untainted water was to avoid the problem and bring in water from an outside source that did not require treatment. The Romans did this with their aqueducts . London 's New River was constructed beginning in the early 17th century as a means of bringing in clean water from outside the city. The New River was slow-flowing, which helped to increase sedimentation. It also had screens installed every few miles to catch any debris and weeds. These screens required periodic maintenance, and workmen were employed to clean them and cut back the weeds. [ 3 ] The New River would meet London's needs well enough that there were few complaints before the 19th century, although the water supplied was rarely used for drinking directly; rather, it was more likely used for washing or the brewing of beer. [ 4 ]
By the beginning of the 19th century, filtration had become a means of removing debris from the water. Paisley, Scotland, became the first city to be supplied with water through a filter, designed by John Gibb. [ 5 ] London would follow up on Scotland 's initial filter with one of its own at Chelsea in 1828. The Chelsea filter was a slow sand filter which consisted of a two-foot layer of sand with layers of shells, gravel, and bricks beneath. The Chelsea filter was capable of clearing 95 percent of impurities from the water. It was unknown at the time of its construction, but this filter also functioned as a biological filter due to bacteria present in the bed. [ 6 ] The bacteriology of city wastewater would not be understood until the end of the 19th century.
The benefits of filtration were not obvious at the time and adoption of filters was slow. Berlin would install filters in 1856 and other European cities would follow. In America the need for filtration was not readily apparent. The city of Richmond, Virginia attempted to install a slow sand filter in 1832 but the filter did not operate properly. Other American cities considered installing filters but deemed them too expensive at the time. [ 7 ]
A tipping point came in the early 19th century that required attention to be paid to water treatment. The water closet ( toilet ), an improved version of which was introduced by Joseph Bramah in the 1770s, began to grow in popularity. By the 1830s the water closet was widely used in London. Household drains were prohibited from being connected to the city's sewers until 1815, when the prohibition was lifted. Water closets could now empty into the city's sewers, which in turn emptied into the Thames . [ 8 ] This was a disaster for the river. In 1816 salmon could be caught in the Thames; four years later none could be caught. The water closet overloaded the medieval cesspool system, which was still in use. The use of water to dispose of sewage in the water closets filled the cesspools ten to twenty times more quickly; cesspools before this had received mostly solid waste. The rapid filling caused seepage. By 1844 the Metropolitan Buildings Act simply required new buildings to be connected to street sewers. [ 9 ] The tidal nature of the Thames did not help the matter.
A similar situation was occurring in the US. Water consumption was increasing; in Chicago, for example, per capita water consumption rose from 33 gallons per day in 1856 to 144 gallons in 1882 (although this figure also includes industrial uses). This increased water consumption and the growing use of water closets overloaded the existing cesspool system and served to contaminate the surrounding soil and watercourses. [ 10 ]
An increase in the awareness of the transmission of diseases such as cholera , typhoid and yellow fever in the 19th century manifested in a growing need to filter and treat municipal drinking water. The growth of cities and the contamination of nearby water sources by sewage and industrial waste led to an increasing demand for treatment.
The understanding of how disease was transmitted was still developing. The miasmatic theory held that disease was transmitted by smells and foul odors arising from the putrefaction of organic matter. It was advocated by the early sanitarian Edwin Chadwick , who in the 1840s promoted the removal of human waste by means of water; the idea was to remove the foul smells as quickly as possible, with the waste ideally to be deposited on agricultural fields. [ 11 ] The sewage was instead deposited in the Thames, which many water suppliers still used as their source. In this environment John Snow conducted a series of investigations in which he was able to show that cholera was communicable by water, and he linked a London cholera outbreak to a single well on Broad Street. [ 12 ] The link between water and disease was still not well established, and in 1873 the president of the New York board of health declared that "although rivers are great natural sewers, and receive the drainage of towns and cities the natural process of purification, in most cases destroys the offensive bodies derived from sewer and renders them harmless". [ 13 ] This is typical of the lack of understanding of how disease was transmitted and followed the general belief that watercourses such as rivers and lakes were great sinks for the purification of contaminated water.
As understanding of the bacteriological nature of disease became clearer in the late 19th century, the need to filter and treat water became apparent. In Lowell, MA, after a typhoid outbreak in 1890, William Thompson Sedgwick applied bacteriological methods to the investigation and was able to establish a link between contaminated water and the disease. The United States by this time had begun to introduce filtration into its municipal water supplies, as a means of removing sediment from water, in the 1870s through the 1890s. The United States also introduced the rapid sand filter, which was a derivative of filters used in the paper-making industry. The rapid filters and the slow sand filters engaged in competition as to which technology was superior. By the beginning of the 20th century, increased knowledge of bacteriology led to improvements in the slow sand filters as well as in the design of rapid filters. It soon became apparent that slow sand filters could remove typhoid germs. [ 14 ]
Treatment as well as filtration began to be used in the early 20th century. Water chlorination as a means of treatment began to be used in the late 19th century. Bleaching powder was the first material used for chlorination. Middelkerke, Belgium , would become the first city to chlorinate its water, in 1902, and Jersey City, New Jersey , became the first city in the United States to do so, in 1909. Filtration alone was coincidentally able to prevent many cases of typhoid, although filtration's primary purpose was reducing the turbidity of the water. [ 15 ] General concern about dosing the water with chemicals led to some debate on the merits of chlorination. On January 14, 1916, the chlorination equipment in Milwaukee, Wisconsin , ceased functioning for 7 hours. In that time the water pumped from Lake Michigan caused 25,000 to 100,000 cases of diarrhea as well as 500 cases of typhoid fever, with 60 deaths. [ 16 ] By this point the need for treatment was becoming clear.
Drinking water regulations were enacted by the US federal government beginning in 1914, regarding the bacteriological quality of drinking water. This regulation would later be strengthened as it became apparent in the 1960s that industrial processes were contaminating the water. Techniques such as aeration , flocculation , and granular activated carbon adsorption could combat this contamination, and these techniques were known but not universally utilized in United States water supplies. Regulation was passed in the Safe Drinking Water Act of 1974 to address some of these deficiencies. [ 1 ] | https://en.wikipedia.org/wiki/History_of_municipal_treatment_of_drinking_water |
The history of nanotechnology traces the development of the concepts and experimental work falling under the broad category of nanotechnology . Although nanotechnology is a relatively recent development in scientific research, the development of its central concepts happened over a longer period of time. The emergence of nanotechnology in the 1980s was caused by the convergence of experimental advances such as the invention of the scanning tunneling microscope in 1981 and the discovery of fullerenes in 1985, with the elucidation and popularization of a conceptual framework for the goals of nanotechnology beginning with the 1986 publication of the book Engines of Creation . The field was subject to growing public awareness and controversy in the early 2000s, with prominent debates about both its potential implications as well as the feasibility of the applications envisioned by advocates of molecular nanotechnology , and with governments moving to promote and fund research into nanotechnology. The early 2000s also saw the beginnings of commercial applications of nanotechnology , although these were limited to bulk applications of nanomaterials rather than the transformative applications envisioned by the field.
Carbon nanotubes have been found in pottery from Keeladi , India, dating to c. 600–300 BC, though it is not known how they formed or whether the substance containing them was employed deliberately. [ 1 ] Cementite nanowires have been observed in Damascus steel , a material dating back to c. 900 AD, their origin and means of manufacture also unknown. [ 2 ]
Although nanoparticles are associated with modern science, they were used by artisans as far back as the ninth century in Mesopotamia for creating a glittering effect on the surface of pots. [ 3 ] [ 4 ]
In modern times, pottery from the Middle Ages and Renaissance often retains a distinct gold- or copper-colored metallic glitter. This luster is caused by a metallic film that was applied to the transparent surface of a glazing , which contains silver and copper nanoparticles dispersed homogeneously in the glassy matrix of the ceramic glaze. These nanoparticles were created by the artisans by adding copper and silver salts and oxides, together with vinegar , ochre , and clay, to the surface of previously glazed pottery. The technique originated in the Muslim world . As Muslims were not allowed to use gold in artistic representations, they sought a way to create a similar effect without using real gold. The solution they found was using luster. [ 4 ] [ 5 ]
The American physicist Richard Feynman delivered the lecture " There's Plenty of Room at the Bottom " at an American Physical Society meeting at Caltech on December 29, 1959, which is often held to have provided inspiration for the field of nanotechnology . Feynman described a process by which the ability to manipulate individual atoms and molecules might be developed, using one set of precise tools to build and operate another proportionally smaller set, and so on down to the needed scale. In the course of this, he noted, scaling issues would arise from the changing magnitude of various physical phenomena: gravity would become less important, while surface tension and Van der Waals attraction would become more important. [ 6 ]
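The scaling argument can be made explicit (a standard restatement, not Feynman's exact wording): for a characteristic length L, body forces such as weight scale with volume, while surface effects scale with area,

\[
F_{\mathrm{grav}} \propto L^{3}, \qquad F_{\mathrm{surface}} \propto L^{2}, \qquad \frac{F_{\mathrm{surface}}}{F_{\mathrm{grav}}} \propto \frac{1}{L},
\]

so as L shrinks toward molecular dimensions, surface tension and intermolecular attractions increasingly dominate over gravity.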
After Feynman's death, a scholar studying the historical development of nanotechnology concluded that his actual role in catalyzing nanotechnology research was limited, based on recollections from many of the people active in the nascent field in the 1980s and 1990s. Chris Toumey, a cultural anthropologist at the University of South Carolina , found that the published versions of Feynman's talk had a negligible influence in the twenty years after it was first published, as measured by citations in the scientific literature, and not much more influence in the decade after the scanning tunneling microscope was invented in 1981. Subsequently, interest in “Plenty of Room” in the scientific literature greatly increased in the early 1990s. This is probably because the term “nanotechnology” gained serious attention just before that time, following its use by K. Eric Drexler in his 1986 book, Engines of Creation: The Coming Era of Nanotechnology , which took the Feynman concept of a billion tiny factories and added the idea that they could make more copies of themselves via computer control instead of control by a human operator; and in a cover article headlined "Nanotechnology", [ 7 ] [ 8 ] published later that year in a mass-circulation science-oriented magazine, Omni . Toumey's analysis also includes comments from distinguished scientists in nanotechnology who say that “Plenty of Room” did not influence their early work, and in fact most of them had not read it until a later date. [ 9 ] [ 10 ]
These and other developments hint that the retroactive rediscovery of Feynman's “Plenty of Room” gave nanotechnology a packaged history that provided an early date of December 1959, plus a connection to the charisma and genius of Richard Feynman. Feynman's stature as a Nobel laureate and as an iconic figure in 20th century science surely helped advocates of nanotechnology and provided a valuable intellectual link to the past. [ 11 ]
Japanese scientist Norio Taniguchi of Tokyo University of Science was the first to use the term "nano-technology", at a 1974 conference, [ 12 ] to describe semiconductor processes such as thin film deposition and ion beam milling exhibiting characteristic control on the order of a nanometer. His definition was, "'Nano-technology' mainly consists of the processing of, separation, consolidation, and deformation of materials by one atom or one molecule." However, the term was not used again until 1981, when Eric Drexler, who was unaware of Taniguchi's prior use of the term, published his first paper on nanotechnology. [ 13 ] [ 14 ] [ 15 ]
In the 1980s the idea of nanotechnology as a deterministic , rather than stochastic , handling of individual atoms and molecules was conceptually explored in depth by K. Eric Drexler, who promoted the technological significance of nano-scale phenomena and devices through speeches and two influential books.
In 1980, Drexler encountered Feynman's provocative 1959 talk "There's Plenty of Room at the Bottom" while preparing his initial scientific paper on the subject, “Molecular Engineering: An approach to the development of general capabilities for molecular manipulation,” published in the Proceedings of the National Academy of Sciences in 1981. [ 16 ] The term "nanotechnology" (which paralleled Taniguchi's "nano-technology" ) was independently applied by Drexler in his 1986 book Engines of Creation: The Coming Era of Nanotechnology , which proposed the idea of a nanoscale "assembler" which would be able to build a copy of itself and of other items of arbitrary complexity. He also first published the term " grey goo " to describe what might happen if a hypothetical self-replicating machine , capable of independent operation, were constructed and released. Drexler's vision of nanotechnology is often called " Molecular Nanotechnology " (MNT) or "molecular manufacturing."
His 1991 Ph.D. work at the MIT Media Lab was the first doctoral degree on the topic of molecular nanotechnology and (after some editing) his thesis, "Molecular Machinery and Manufacturing with Applications to Computation," [ 17 ] was published as Nanosystems: Molecular Machinery, Manufacturing, and Computation, [ 18 ] which received the Association of American Publishers award for Best Computer Science Book of 1992. Drexler founded the Foresight Institute in 1986 with the mission of "Preparing for nanotechnology.” Drexler is no longer a member of the Foresight Institute. [ citation needed ]
In nanoelectronics , nanoscale thickness was demonstrated in the gate oxide and thin films used in transistors as early as the 1960s, but it was not until the late 1990s that MOSFETs (metal–oxide–semiconductor field-effect transistors) with nanoscale gate length were demonstrated. Nanotechnology and nanoscience got a boost in the early 1980s with two major developments: the birth of cluster science and the invention of the scanning tunneling microscope (STM). These developments led to the discovery of fullerenes in 1985 and the structural assignment of carbon nanotubes in 1991. The development of the FinFET in the 1990s also laid the foundations for modern nanoelectronic semiconductor device fabrication .
The scanning tunneling microscope , an instrument for imaging surfaces at the atomic level, was developed in 1981 by Gerd Binnig and Heinrich Rohrer at IBM Zurich Research Laboratory , for which they were awarded the Nobel Prize in Physics in 1986. [ 19 ] [ 20 ] Binnig, Calvin Quate and Christoph Gerber invented the first atomic force microscope in 1986. The first commercially available atomic force microscope was introduced in 1989.
IBM researcher Don Eigler was the first to manipulate atoms using a scanning tunneling microscope, in 1989. He used 35 xenon atoms to spell out the IBM logo . [ 21 ] He shared the 2010 Kavli Prize in Nanoscience for this work. [ 22 ]
Interface and colloid science had existed for nearly a century before they became associated with nanotechnology. [ 23 ] [ 24 ] The first observations and size measurements of nanoparticles had been made during the first decade of the 20th century by Richard Adolf Zsigmondy , winner of the 1925 Nobel Prize in Chemistry , who made a detailed study of gold sols and other nanomaterials with sizes down to 10 nm using an ultramicroscope which was capable of visualizing particles much smaller than the light wavelength . [ 25 ] Zsigmondy was also the first to use the term "nanometer" explicitly for characterizing particle size. In the 1920s, Irving Langmuir , winner of the 1932 Nobel Prize in Chemistry, and Katharine B. Blodgett introduced the concept of a monolayer , a layer of material one molecule thick. In the early 1950s, Derjaguin and Abrikosova conducted the first measurement of surface forces. [ 26 ]
In 1974 the process of atomic layer deposition for depositing uniform thin films one atomic layer at a time was developed and patented by Tuomo Suntola and co-workers in Finland. [ 27 ]
In another development, the synthesis and properties of semiconductor nanocrystals were studied. This led to a rapidly increasing number of studies of semiconductor nanoparticles, or quantum dots .
Fullerenes were discovered in 1985 by Harry Kroto , Richard Smalley , and Robert Curl , who together won the 1996 Nobel Prize in Chemistry . Smalley's research in physical chemistry investigated the formation of inorganic and semiconductor clusters using pulsed molecular beams and time-of-flight mass spectrometry . As a consequence of this expertise, Curl introduced him to Kroto in order to investigate a question about the constituents of astronomical dust: carbon-rich grains expelled by old stars such as R Coronae Borealis. The result of this collaboration was the discovery of C60 and the fullerenes as the third allotropic form of carbon. Subsequent discoveries included the endohedral fullerenes and the larger family of fullerenes the following year. [ 28 ] [ 29 ]
The discovery of carbon nanotubes is largely attributed to Sumio Iijima of NEC in 1991, although carbon nanotubes have been produced and observed under a variety of conditions prior to 1991. [ 30 ] Iijima's discovery of multi-walled carbon nanotubes in the insoluble material of arc-burned graphite rods in 1991 [ 31 ] and Mintmire, Dunlap, and White's independent prediction that if single-walled carbon nanotubes could be made, then they would exhibit remarkable conducting properties [ 32 ] helped create initial interest in carbon nanotubes. [ 33 ] [ 34 ] Nanotube research accelerated greatly following the independent discoveries [ 35 ] [ 36 ] by Bethune at IBM [ 37 ] and Iijima at NEC [ 38 ] of single-walled carbon nanotubes and methods to specifically produce them by adding transition-metal catalysts to the carbon in an arc discharge.
In the early 1990s, a team of researchers from the Max Planck Institute for Nuclear Physics and the University of Arizona discovered how to synthesize and purify large quantities of fullerenes. [ 39 ] This opened the door to their characterization and functionalization by hundreds of investigators in government and industrial laboratories. [ 40 ] [ 41 ] Shortly after, rubidium-doped C60 was found to be a medium-temperature superconductor. [ 42 ] [ 43 ] [ 44 ] Thomas Ebbesen and Pulickel Ajayan demonstrated a method to produce carbon nanotubes at scales allowing their properties to be measured in a laboratory. [ 45 ] [ 46 ]
Researchers have further developed the field of nanotube-based nanotechnology. [ 47 ] [ 48 ] A 2024 review stated that over 5,000 tons of nanotubes were produced annually, with industrial applications including biosensors, satellite sensors, and marine coatings. [ 49 ] [ 50 ] [ 51 ] [ 52 ] Practical challenges in many applications remain, such as the difficulty of retaining their unique properties in composite materials, ensuring chemical stability, and potential for toxicity. [ 49 ] [ 51 ] [ 53 ] [ 54 ] [ 55 ]
The National Nanotechnology Initiative is a United States federal nanotechnology research and development program. “The NNI serves as the central point of communication, cooperation, and collaboration for all Federal agencies engaged in nanotechnology research, bringing together the expertise needed to advance this broad and complex field." [ 56 ] Its goals are to advance a world-class nanotechnology research and development (R&D) program, foster the transfer of new technologies into products for commercial and public benefit, develop and sustain educational resources, a skilled workforce, and the supporting infrastructure and tools to advance nanotechnology, and support responsible development of nanotechnology. The initiative was spearheaded by Mihail Roco , who formally proposed the National Nanotechnology Initiative to the Office of Science and Technology Policy during the Clinton administration in 1999, and was a key architect in its development. He is currently the Senior Advisor for Nanotechnology at the National Science Foundation , as well as the founding chair of the National Science and Technology Council subcommittee on Nanoscale Science, Engineering and Technology. [ 57 ]
President Bill Clinton advocated nanotechnology development. In a 21 January 2000 speech [ 58 ] at the California Institute of Technology , Clinton said, "Some of our research goals may take twenty or more years to achieve, but that is precisely why there is an important role for the federal government." Feynman's stature and concept of atomically precise fabrication played a role in securing funding for nanotechnology research, as mentioned in President Clinton's speech:
My budget supports a major new National Nanotechnology Initiative, worth $500 million. Caltech is no stranger to the idea of nanotechnology, the ability to manipulate matter at the atomic and molecular level. Over 40 years ago, Caltech's own Richard Feynman asked, "What would happen if we could arrange the atoms one by one the way we want them?" [ 59 ]
President George W. Bush further increased funding for nanotechnology. On December 3, 2003, Bush signed into law the 21st Century Nanotechnology Research and Development Act, [ 60 ] which authorizes expenditures for five of the participating agencies totaling US$ 3.63 billion over four years. [ 61 ] The NNI budget supplement for Fiscal Year 2009 provides $1.5 billion to the NNI, reflecting steady growth in the nanotechnology investment. [ 62 ]
"Why the future doesn't need us" is an article written by Bill Joy , then Chief Scientist at Sun Microsystems , in the April 2000 issue of Wired magazine. In the article, he argues that "Our most powerful 21st-century technologies — robotics , genetic engineering , and nanotech — are threatening to make humans an endangered species ." Joy argues that developing technologies provide a much greater danger to humanity than any technology before it has ever presented. In particular, he focuses on genetics , nanotechnology and robotics . He argues that 20th-century technologies of destruction, such as the nuclear bomb , were limited to large governments, due to the complexity and cost of such devices, as well as the difficulty in acquiring the required materials. He also voices concern about increasing computer power. His worry is that computers will eventually become more intelligent than we are, leading to such dystopian scenarios as robot rebellion . He notably quotes the Unabomber on this topic. After the publication of the article, Bill Joy suggested assessing technologies to gauge their implicit dangers, as well as having scientists refuse to work on technologies that have the potential to cause harm.
In the AAAS Science and Technology Policy Yearbook 2001 article titled A Response to Bill Joy and the Doom-and-Gloom Technofuturists , Bill Joy was criticized for having technological tunnel vision on his prediction, by failing to consider social factors. [ 63 ] In Ray Kurzweil 's The Singularity Is Near , he questioned the regulation of potentially dangerous technology, asking "Should we tell the millions of people afflicted with cancer and other devastating conditions that we are canceling the development of all bioengineered treatments because there is a risk that these same technologies may someday be used for malevolent purposes?".
Prey is a 2002 novel by Michael Crichton which features an artificial swarm of nanorobots which develop intelligence and threaten their human inventors. The novel generated concern within the nanotechnology community that the novel could negatively affect public perception of nanotechnology by creating fear of a similar scenario in real life. [ 64 ]
Richard Smalley, best known for co-discovering the soccer ball-shaped “buckyball” molecule and a leading advocate of nanotechnology and its many applications, was an outspoken critic of the idea of molecular assemblers , as advocated by Eric Drexler. In 2001 he introduced scientific objections to them, [ 65 ] attacking the notion of universal assemblers in a Scientific American article; this led to a rebuttal later that year from Drexler and colleagues, [ 66 ] and eventually to an exchange of open letters in 2003. [ 67 ]
Smalley criticized Drexler's work on nanotechnology as naive, arguing that chemistry is extremely complicated, that reactions are hard to control, and that a universal assembler is science fiction; he believed such assemblers were not physically possible. His two principal technical objections, which he termed the "fat fingers problem" and the "sticky fingers problem", argued against the feasibility of molecular assemblers being able to precisely select and place individual atoms. He also believed that Drexler's speculations about the apocalyptic dangers of molecular assemblers threatened public support for the development of nanotechnology.
Smalley first argued that "fat fingers" made MNT impossible. He later argued that nanomachines would have to resemble chemical enzymes more than Drexler's assemblers and could only work in water. He believed these would exclude the possibility of "molecular assemblers" that worked by precision picking and placing of individual atoms. Also, Smalley argued that nearly all of modern chemistry involves reactions that take place in a solvent (usually water), because the small molecules of a solvent contribute many things, such as lowering binding energies for transition states . Since nearly all known chemistry requires a solvent, Smalley felt that Drexler's proposal to use a high vacuum environment was not feasible.
Smalley also believed that Drexler's speculations about apocalyptic dangers of self-replicating machines that have been equated with "molecular assemblers" would threaten the public support for development of nanotechnology. To address the debate between Drexler and Smalley regarding molecular assemblers, Chemical & Engineering News published a point-counterpoint consisting of an exchange of letters that addressed the issues. [ 67 ]
Drexler and coworkers responded to these two issues [ 66 ] in a 2001 publication. Drexler and colleagues noted that Drexler never proposed universal assemblers able to make absolutely anything, but instead proposed more limited assemblers able to make a very wide variety of things. They challenged the relevance of Smalley's arguments to the more specific proposals advanced in Nanosystems . Drexler maintained that both were straw man arguments, and in the case of enzymes, Prof. Klibanov wrote in 1994, "...using an enzyme in organic solvents eliminates several obstacles..." [ 68 ] Drexler also addressed this in Nanosystems by showing mathematically that well-designed catalysts can provide the effects of a solvent and can fundamentally be made even more efficient than a solvent/enzyme reaction could ever be. Drexler had difficulty getting Smalley to respond, but in December 2003, Chemical & Engineering News carried a four-part debate. [ 67 ]
Ray Kurzweil spends four pages in his book The Singularity Is Near showing that Richard Smalley's arguments are not valid, disputing them point by point. Kurzweil ends by stating that Drexler's visions are very practicable and even already being realized. [ 69 ]
The Royal Society and Royal Academy of Engineering 's 2004 report on the implications of nanoscience and nanotechnologies [ 70 ] was inspired by Prince Charles ' concerns about nanotechnology , including molecular manufacturing . However, the report spent almost no time on molecular manufacturing. [ 71 ] In fact, the word " Drexler " appears only once in the body of the report (in passing), and "molecular manufacturing" or " molecular nanotechnology " not at all. The report covers various risks of nanoscale technologies, such as nanoparticle toxicology. It also provides a useful overview of several nanoscale fields. The report contains an annex (appendix) on grey goo , which cites a weaker variation of Richard Smalley 's contested argument against molecular manufacturing. It concludes that there is no evidence that autonomous, self replicating nanomachines will be developed in the foreseeable future, and suggests that regulators should be more concerned with issues of nanoparticle toxicology.
The early 2000s saw the beginnings of the use of nanotechnology in commercial products, although most applications were limited to the bulk use of passive nanomaterials . Examples include titanium dioxide and zinc oxide nanoparticles in sunscreen, cosmetics and some food products; silver nanoparticles in food packaging, clothing, disinfectants and household appliances such as Silver Nano ; carbon nanotubes for stain-resistant textiles; and cerium oxide as a fuel catalyst. [ 72 ] As of March 10, 2011, the Project on Emerging Nanotechnologies estimated that over 1300 manufacturer-identified nanotech products were publicly available, with new ones hitting the market at a pace of 3–4 per week. [ 73 ]
The National Science Foundation funded researcher David Berube to study the field of nanotechnology [ when? ] . His findings are published in the monograph Nano-Hype: The Truth Behind the Nanotechnology Buzz. This study concludes that much of what is sold as “nanotechnology” is in fact a recasting of straightforward materials science, which is leading to a “nanotech industry built solely on selling nanotubes, nanowires, and the like” which will “end up with a few suppliers selling low margin products in huge volumes." Further applications which require actual manipulation or arrangement of nanoscale components await further research. Though technologies branded with the term 'nano' are sometimes little related to and fall far short of the most ambitious and transformative technological goals of the sort in molecular manufacturing proposals, the term still connotes such ideas. According to Berube, there may be a danger that a "nano bubble" will form, or is forming already, from the use of the term by scientists and entrepreneurs to garner funding, regardless of interest in the transformative possibilities of more ambitious and far-sighted work. [ 74 ]
Invention of ionizable cationic lipids at the turn of the 21st century allowed subsequent development of solid lipid nanoparticles , which in the 2020s became the most successful and well-known non-viral nanoparticle drug delivery system due to their use in several mRNA vaccines during the COVID-19 pandemic . | https://en.wikipedia.org/wiki/History_of_nanotechnology |
The history of neuraxial anaesthesia dates back to the late 1800s [ 1 ] and is closely intertwined with the development of anaesthesia in general. [ 2 ] Neuraxial anaesthesia , in particular, is a form of regional analgesia placed in or around the central nervous system , used for pain management and anaesthesia for certain surgeries and procedures. [ 3 ]
In 1855, Friedrich Gaedcke (1828–1890) became the first to chemically isolate cocaine , the most potent alkaloid of the coca plant. Gaedcke named the compound "erythroxyline". [ 4 ] [ 5 ]
In 1884, Austrian ophthalmologist Karl Koller (1857–1944) instilled a 2% solution of cocaine into his own eye and tested its effectiveness as a local anesthetic by pricking the eye with needles. [ 6 ] His findings were presented a few weeks later at the annual conference of the Heidelberg Ophthalmological Society. [ 7 ] The following year, William Halsted (1852–1922) performed the first brachial plexus block. [ 8 ] Also in 1885, James Leonard Corning (1855–1923) injected cocaine between the spinous processes of the lower lumbar vertebrae , first in a dog and then in a healthy man. [ 9 ] [ 10 ] His experiments are the first published descriptions of the principle of neuraxial blockade . [ 11 ]
On August 16, 1898, German surgeon August Bier (1861–1949) performed surgery under spinal anesthesia in Kiel . [ 12 ] Following the publication of Bier's experiments in 1899, a controversy developed about whether Bier or Corning performed the first successful spinal anesthetic. [ 13 ] [ 14 ]
There is no doubt that Corning's experiments preceded those of Bier. For many years, however, the controversy centered on whether Corning's injection was a spinal or an epidural block. The dose of cocaine used by Corning was eight times higher than that used by Bier and Tuffier . Despite this much higher dose, the onset of analgesia in Corning's human subject was slower and the dermatomal level of ablation of sensation was lower. Also, Corning did not describe seeing the flow of cerebrospinal fluid in his reports, whereas both Bier and Tuffier did make these observations. Based on Corning's own description of his experiments, it is apparent that his injections were made into the epidural space , and not the subarachnoid space . [ 14 ] Finally, Corning was incorrect in his theory on the mechanism of action of cocaine on the spinal nerves and spinal cord . He proposed – mistakenly – that the cocaine was absorbed into the venous circulation and subsequently transported to the spinal cord. [ 14 ]
Although Bier properly deserves credit for the introduction of spinal anesthesia into the clinical practice of medicine, it was Corning who created the experimental conditions that ultimately led to the development of both spinal and epidural anesthesia. [ 14 ]
Romanian surgeon Nicolae Racoviceanu-Pitești (1860–1942) was the first to use opioids for intrathecal analgesia; he presented his experience in Paris in 1901. [ 15 ] [ 16 ]
In 1921, Spanish military surgeon Fidel Pagés (1886–1923) developed the modern technique of lumbar epidural anesthesia, [ 17 ] which was popularized in the 1930s by Italian surgery professor Achille Mario Dogliotti [ it ] (1897–1966). [ 16 ] Dogliotti is known for describing a "loss-of-resistance" technique, involving constant application of pressure to the plunger of a syringe to identify the epidural space whilst advancing the Tuohy needle – a technique sometimes referred to as Dogliotti's principle . [ 18 ] Eugen Bogdan Aburel (1899–1975) was a Romanian surgeon and obstetrician who in 1931 was the first to describe blocking the lumbar plexus during early labor, followed by a caudal epidural injection for the expulsion phase . [ 19 ] [ 20 ]
Beginning in October 1941, Robert Andrew Hingson (1913–1996), Waldo B. Edwards and James L. Southworth, working at the United States Marine Hospital at Stapleton, on Staten Island in New York, developed the technique of continuous caudal anesthesia. [ 21 ] [ 22 ] [ 23 ] [ 24 ] Hingson and Southworth first used this technique in an operation to remove the varicose veins of a Scottish merchant seaman. Rather than removing the caudal needle after the injection as was customary, the two surgeons experimented with a continuous caudal infusion of local anesthetic. Hingson then collaborated with Edwards, the chief obstetrician at the Marine Hospital, to study the use of continuous caudal anesthesia for analgesia during childbirth. Hingson and Edwards studied the caudal region to determine where a needle could be placed to deliver anesthetic agents safely to the spinal nerves without injecting them into the cerebrospinal fluid. [ 23 ]
The first use of continuous caudal anesthesia in a laboring woman was on January 6, 1942, when the wife of a United States Coast Guard sailor was brought into the Marine Hospital for an emergency Caesarean section. Because the woman had rheumatic heart disease ( heart failure following an episode of rheumatic fever during childhood), her doctors believed that she would not survive the stress of labor but they also felt that she would not tolerate general anesthesia due to her heart failure. With the use of continuous caudal anesthesia, the woman and her baby survived. [ 25 ]
The first described placement of a lumbar epidural catheter was performed by Manuel Martínez Curbelo (5 June 1906–1 May 1962) on January 13, 1947. [ 26 ] [ 27 ] Curbelo, a Cuban anesthesiologist, introduced a 16 gauge Tuohy needle into the left flank of a 40-year-old woman with a large ovarian cyst . Through this needle, he introduced a 3.5 French ureteral catheter made of elastic silk into the lumbar epidural space. He then removed the needle, leaving the catheter in place and repeatedly injected 0.5% percaine ( cinchocaine , also known as dibucaine) to achieve anesthesia. Curbelo presented his work on September 9, 1947, at the 22nd Joint Congress of the International Anesthesia Research Society and the International College of Anesthetists, in New York City. [ 20 ] [ 28 ] | https://en.wikipedia.org/wiki/History_of_neuraxial_anesthesia |
The history of nuclear fusion began early in the 20th century as an inquiry into how stars powered themselves and expanded to incorporate a broad inquiry into the nature of matter and energy, as potential applications expanded to include warfare, energy production and rocket propulsion.
In 1920, the British physicist, Francis William Aston , discovered that the mass of four hydrogen atoms is greater than the mass of one helium atom ( He-4 ), which implied that energy can be released by combining hydrogen atoms to form helium. This provided the first hints of a mechanism by which stars could produce energy. Throughout the 1920s, Arthur Stanley Eddington became a major proponent of the proton–proton chain reaction (PP reaction) as the primary system running the Sun . [ 1 ] [ 2 ] Quantum tunneling was discovered by Friedrich Hund in 1929, and shortly afterwards Robert Atkinson and Fritz Houtermans used the measured masses of light elements to show that large amounts of energy could be released by fusing small nuclei.
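A back-of-the-envelope check with modern atomic masses (values more precise than those available to Aston) illustrates the scale of the effect:

```latex
\Delta m = 4\,m(^{1}\mathrm{H}) - m(^{4}\mathrm{He})
         \approx 4(1.00783\,\mathrm{u}) - 4.00260\,\mathrm{u}
         = 0.02872\,\mathrm{u},
\qquad
E = \Delta m\,c^{2} \approx 0.02872 \times 931.5\,\mathrm{MeV} \approx 26.7\,\mathrm{MeV}.
```

About 0.7% of the initial rest mass is converted to energy for every helium nucleus formed, which is why fusion can power a star for billions of years.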
Henry Norris Russell observed that the relationship in the Hertzsprung–Russell diagram suggested that a star's heat came from a hot core rather than from the entire star. Eddington used this to calculate that the temperature of the core would have to be about 40 million K. This became a matter of debate, because the value was much higher than the value suggested by astronomical observations, which was about one-third to one-half as large. George Gamow introduced the mathematical basis for quantum tunnelling in 1928. [ 3 ] In 1929 Atkinson and Houtermans provided the first estimates of the stellar fusion rate. They showed that fusion can occur at lower energies than previously believed, backing Eddington's calculations. [ 4 ]
Nuclear experiments began using a particle accelerator built by John Cockcroft and Ernest Walton at Ernest Rutherford 's Cavendish Laboratory at the University of Cambridge . In 1932, Walton produced the first man-made fission by using protons from the accelerator to split lithium into alpha particles . [ 5 ] The accelerator was then used to fire deuterons at various targets. Working with Rutherford and others, Mark Oliphant discovered the nuclei of helium-3 ( helions ) and tritium ( tritons ), produced in the first case of human-caused fusion. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ]
Neutrons from fusion were first detected in 1933. [ 11 ] The experiment involved the acceleration of protons towards a target [ 12 ] at energies of up to 600,000 electron volts.
A theory verified by Hans Bethe in 1939 showed that beta decay and quantum tunneling in the Sun's core might convert one of the protons into a neutron and thereby produce deuterium rather than a diproton . The deuterium would then fuse through other reactions to further increase the energy output. For this work, Bethe won the 1967 Nobel Prize in Physics . [ 1 ] [ 13 ] [ 14 ]
In 1938, Peter Thonemann developed a detailed plan for a pinch device, but was told to do other work for his thesis. [ 15 ]
The first patent related to a fusion reactor was registered in 1946 [ 16 ] by the United Kingdom Atomic Energy Authority . The inventors were Sir George Paget Thomson and Moses Blackman . This was the first detailed examination of the Z-pinch concept. Starting in 1947, two UK teams carried out experiments based on this concept. [ 1 ]
The first successful man-made fusion device was the boosted fission weapon tested in 1951 in the Greenhouse Item test. The first true fusion weapon was 1952's Ivy Mike , and the first practical example was 1954's Castle Bravo . In these devices, the energy released by a fission explosion compresses and heats the fuel, starting a fusion reaction. Fusion releases neutrons . These neutrons hit the surrounding fission fuel, causing the atoms to split apart much faster than normal fission processes. This increased the effectiveness of bombs: normal fission weapons blow themselves apart before all their fuel is used; fusion/fission weapons do not waste their fuel.
In 1949 expatriate German Ronald Richter proposed the Huemul Project in Argentina, announcing positive results in 1951. These turned out to be fake, but prompted others' interest. Lyman Spitzer began considering ways to solve problems involved in confining a hot plasma, and, unaware of the Z-pinch efforts, he created the stellarator. Spitzer applied to the US Atomic Energy Commission for funding to build a test device.
During this period, James L. Tuck , who had worked with the UK teams on Z-pinch, had been introducing the stellarator concept to his coworkers at LANL. When he heard of Spitzer's pitch, he applied to build a pinch machine of his own, the Perhapsatron . [ 1 ] [ 17 ]
Spitzer's idea won funding and he began work under Project Matterhorn. His work led to the creation of Princeton Plasma Physics Laboratory (PPPL). Tuck returned to LANL and arranged local funding to build his machine. By this time it was clear that the pinch machines were afflicted by instability, stalling progress. In 1953, Tuck and others suggested solutions that led to a second series of pinch machines, such as the ZETA and Sceptre devices. [ 1 ]
Spitzer's first machine, 'A', worked, but his next one, 'B', suffered from instabilities and plasma leakage. [ 18 ] [ 19 ]
In 1954 AEC chair Lewis Strauss foresaw electricity as " too cheap to meter ". [ 20 ] Strauss was likely referring to fusion power, [ 21 ] part of the secret Project Sherwood —but his statement was interpreted as referring to fission. The AEC had issued more realistic testimony regarding fission to Congress months before, projecting that "costs can be brought down... [to]... about the same as the cost of electricity from conventional sources..." [ 22 ]
In 1951 Edward Teller and Stanislaw Ulam at Los Alamos National Laboratory (LANL) developed the Teller-Ulam design for a thermonuclear weapon , allowing for the development of multi-megaton yield fusion bombs. Fusion work in the UK was classified after the Klaus Fuchs affair.
In the mid-1950s the theoretical tools used to calculate the performance of fusion machines were not predicting their actual behavior. Machines invariably leaked plasma at rates far higher than predicted. In 1954, Edward Teller gathered fusion researchers at the Princeton Gun Club. He pointed out the problems and suggested that any system that confined plasma within concave fields was doomed due to what became known as interchange instability . Attendees remember him saying in effect that the fields were like rubber bands, and they would attempt to snap back to a straight configuration whenever the power was increased, ejecting the plasma. He suggested that the only way to predictably confine plasma would be to use convex fields: a "cusp" configuration. [ 23 ] :118
When the meeting concluded, most researchers turned out papers explaining why Teller's concerns did not apply to their devices. Pinch machines did not use magnetic fields in this way, while the mirror and stellarator camps proposed various solutions. However, a paper by Martin David Kruskal and Martin Schwarzschild on pinch machines soon followed, demonstrating that those devices' instabilities were inherent. [ 23 ] :118
The largest "classic" pinch device was the ZETA , which started operation in the UK in 1957. Its name is a take-off on small experimental fission reactors that often had "zero energy" in their name, such as ZEEP .
In early 1958, John Cockcroft announced that fusion had been achieved in the ZETA, an announcement that made headlines around the world. He dismissed US physicists' concerns. US experiments soon produced similar neutrons, although temperature measurements suggested these could not be from fusion. The ZETA neutrons were later demonstrated to be from different versions of the instability processes that had plagued earlier machines. Cockcroft was forced to retract his fusion claims, tainting the entire field for years. ZETA ended in 1968. [ 1 ]
The first experiment to achieve controlled thermonuclear fusion was accomplished using Scylla I at LANL in 1958. [ 24 ] [ 25 ] [ 26 ] Scylla I was a θ-pinch machine, with a cylinder full of deuterium. Electric current shot down the sides of the cylinder. The current made magnetic fields that pinched the plasma, raising temperatures to 15 million degrees Celsius, for long enough that atoms fused and produced neutrons. [ 27 ] [ 24 ] The Sherwood program sponsored a series of Scylla machines at Los Alamos. The program began with 5 researchers and $100,000 in US funding in January 1952. [ 28 ] By 1965, a total of $21 million had been spent. [ 29 ] The θ-pinch approach was abandoned after calculations showed it could not scale up to produce a reactor.
In 1950–1951 in the Soviet Union , Igor Tamm and Andrei Sakharov first discussed a tokamak -like approach. Experimental research on those designs began in 1956 at the Moscow Kurchatov Institute by a group of Soviet scientists led by Lev Artsimovich . The tokamak essentially combined a low-power pinch device with a low-power stellarator. The notion was to combine the fields in such a way that the particles orbited within the reactor a particular number of times, today known as the " safety factor ". The combination of these fields dramatically improved confinement times and densities, resulting in huge improvements over existing devices. [ 1 ]
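As a rough modern illustration (a textbook simplification, not notation from the original Soviet work), the safety factor of a large-aspect-ratio tokamak with major radius R and minor radius r is approximately

```latex
q(r) \approx \frac{r\,B_{\phi}}{R\,B_{\theta}},
```

where B_φ is the toroidal field and B_θ the poloidal field; keeping q above 1 suppresses the kink instability, which is why the combined fields confined plasma so much better than a pure pinch.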
In 1951, the United States completed the Greenhouse Item test of the first boosted fission weapon . A deuterium–tritium gas was used to enhance the fission yield. This became the first instance of artificial thermonuclear fusion, and the first weaponization of fusion. In 1952 Ivy Mike, part of Operation Ivy , became the first detonation of a hydrogen bomb , yielding 10.4 megatons of TNT using liquid deuterium. Cousins and Ware built a toroidal pinch device in England and demonstrated that the plasma in pinch devices is inherently unstable. In 1953, the Soviet Union's RDS-6S test (codenamed " Joe 4 " in the US) demonstrated a fission/fusion/fission ("Layercake") design that yielded 600 kilotons. Igor Kurchatov spoke at Harwell on pinch devices, [ 30 ] revealing that the USSR was working on fusion.
Seeking to generate electricity, Japan , France and Sweden all started fusion research programs.
In 1955, John D. Lawson formulated what is now known as the Lawson criterion , the condition a fusion reactor must satisfy to produce more energy than it loses to the environment through processes such as bremsstrahlung radiation.
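In its commonly quoted modern form for deuterium–tritium fuel (a standard restatement, not Lawson's original notation), the criterion near the optimum temperature of roughly 25 keV reads

```latex
n\,\tau_{E} \gtrsim 1.5 \times 10^{20}\ \mathrm{s\,m^{-3}},
```

where n is the plasma density and τ_E the energy confinement time; the related triple product form, n T τ_E ≳ 3×10²¹ keV·s·m⁻³, is also widely used.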
In 1956 the Soviet Union began publishing articles on plasma physics, leading the US and UK to follow over the next several years.
The Sceptre III z-pinch plasma column remained stable for 300 to 400 microseconds, a dramatic improvement on previous efforts. The team calculated that the plasma had an electrical resistivity around 100 times that of copper, and was able to carry 200 kA of current for 500 microseconds.
In 1960 John Nuckolls published the concept of inertial confinement fusion (ICF). The laser , introduced the same year, turned out to be a suitable "driver".
In 1961 the Soviet Union tested its 50 megaton Tsar Bomba , the most powerful thermonuclear weapon ever.
Spitzer published a key plasma physics text at Princeton in 1963. [ 31 ] He took the ideal gas laws and adapted them to an ionized plasma, developing many of the fundamental equations used to model a plasma.
Laser fusion was suggested in 1962 by scientists at LLNL. Initially lasers had little power, but laser fusion ( inertial confinement fusion ) research nonetheless began as early as 1965.
At the 1964 World's Fair , the public was given its first fusion demonstration. [ 32 ] The device was a Theta-pinch from General Electric. This was similar to the Scylla machine developed earlier at Los Alamos.
By the mid-1960s progress had stalled across the world. All of the major designs were losing plasma at unsustainable rates. The 12-beam "4 pi laser" attempt at inertial confinement fusion developed at LLNL targeted a gas-filled target chamber of about 20 centimeters in diameter.
The magnetic mirror was first published in 1967 by Richard F. Post and many others at LLNL. [ 33 ] The mirror consisted of two large magnets arranged so they had strong fields within them, and a weaker, but connected, field between them. Plasma introduced into the area between the two magnets would "bounce back" from the stronger fields at either end.
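The bounce condition can be stated compactly: by conservation of the magnetic moment (a textbook result rather than anything from Post's paper), a particle at the weak-field midplane is reflected before escaping only if its pitch angle θ satisfies

```latex
\sin^{2}\theta > \frac{B_{\min}}{B_{\max}} = \frac{1}{R_{m}},
```

where R_m is the mirror ratio; particles with smaller pitch angles fall into the "loss cone" and escape, the mirror's fundamental leak.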
A.D. Sakharov 's group constructed the first tokamaks. The most successful were the T-3 and its larger version T-4. T-4 was tested in 1968 in Novosibirsk , producing the first quasistationary fusion reaction. [ 34 ] :90 When this was announced, the international community was skeptical. A British team was invited to see T-3, and confirmed the Soviet claims. A burst of activity followed as many planned devices were abandoned and tokamaks were introduced in their place—the C model stellarator, then under construction after many redesigns, was quickly converted to the Symmetrical Tokamak. [ 1 ]
In his work with vacuum tubes, Philo Farnsworth observed that electric charge accumulated in the tube, an effect that became known as the Multipactor effect . [ 36 ] In 1962, Farnsworth patented a design using a positive inner cage to concentrate plasma and fuse protons. [ 35 ] During this time, Robert L. Hirsch joined Farnsworth Television labs and began work on what became the Farnsworth-Hirsch Fusor . Hirsch patented the design in 1966 [ 37 ] and published it in 1967. [ 38 ]
Plasma temperatures of approximately 40 million degrees Celsius and 10⁹ deuteron-deuteron fusion reactions per discharge were achieved at LANL with Scylla IV. [ 39 ]
In 1968 the Soviets announced results from the T-3 tokamak , claiming temperatures an order of magnitude higher than any other device. A UK team, nicknamed "The Culham Five", confirmed the results, which led many other teams to follow, including the Princeton group, which converted their stellarator to a tokamak.
Princeton's conversion of the Model C stellarator to a tokamak produced results matching the Soviets'. With an apparent solution to the magnetic bottle problem in hand, plans began for a larger machine to test scaling and methods to heat the plasma.
In 1972, John Nuckolls outlined the idea of fusion ignition , [ 40 ] a fusion chain reaction. Hot helium made during fusion reheats the fuel and starts more reactions. Nuckolls's paper started a major development effort. LLNL built laser systems including Argus , Cyclops , Janus , the neodymium- doped glass (Nd:glass) laser Long Path , Shiva laser , and the 10 beam Nova in 1984. Nova would ultimately produce 120 kilojoules of infrared light during a nanosecond pulse.
The UK built the Central Laser Facility in 1976. [ 41 ]
The "advanced tokamak" concept emerged, which included non-circular plasma, internal diverters and limiters, superconducting magnets, and operation in the so-called "H-mode" island of increased stability. [ 42 ] Two other designs became prominent; the compact tokamak sited the magnets on the inside of the vacuum chamber, [ 43 ] [ 44 ] and the spherical tokamak with as small a cross section as possible. [ 45 ] [ 46 ]
In 1974 J.B. Taylor re-visited ZETA and noticed that after an experimental run ended, the plasma entered a short period of stability. This led to the reversed field pinch concept. On May 1, 1974, the KMS fusion company (founded by Kip Siegel ) achieved the world's first laser induced fusion in a deuterium-tritium pellet. [ 47 ]
The Princeton Large Torus (PLT), the follow-on to the Symmetrical Tokamak, surpassed the best Soviet machines and set temperature records above what was needed for a commercial reactor. Soon after, it received funding with the target of breakeven.
In the mid-1970s, Project PACER , carried out at LANL, explored the possibility of exploding small hydrogen bombs (fusion bombs) inside an underground cavity. [ 48 ] :25 As an energy source, it was the only system that could have worked with the technology of the time, but it required a large, continuous supply of nuclear bombs, and its economics were questionable.
In 1976, the two beam Argus laser became operational at LLNL. [ 49 ] In 1977, the 20 beam Shiva laser there was completed, capable of delivering 10.2 kilojoules of infrared energy on target. At a price of $25 million and a size approaching that of a football field, Shiva was the first megalaser. [ 49 ]
At a 1977 workshop at the Claremont Hotel in Berkeley, Dr. C. Martin Stickley, then Director of the Energy Research and Development Administration 's Office of Inertial Fusion, claimed that "no showstoppers" lay on the road to fusion energy.
The DOE selected Princeton's design, the Tokamak Fusion Test Reactor (TFTR), along with the challenge of running it on deuterium-tritium fuel.
In the German/US HIBALL study, [ 50 ] Garching used the high repetition rate of the RF driver to serve four reactor chambers using liquid lithium inside the chamber cavity. In 1982 high-confinement mode (H-mode) was discovered in tokamaks.
The US funded a magnetic mirror program in the late 1970s and early 1980s. This program resulted in a series of magnetic mirror devices including: 2X, [ 51 ] :273 Baseball I, Baseball II, the Tandem Mirror Experiment and upgrade, the Mirror Fusion Test Facility , and MFTF-B. These machines were built and tested at LLNL from the late 1960s to the mid-1980s. [ 52 ] [ 53 ] The final machine, MFTF, cost $372 million and was, at that time, the most expensive project in LLNL history. [ 54 ] It opened on February 21, 1986, and immediately closed, allegedly to balance the federal budget. [ 55 ]
Laser fusion progress: in 1983, the NOVETTE laser was completed. The following December, the ten-beam NOVA laser was finished. Five years later, NOVA produced 120 kilojoules of infrared light during a nanosecond pulse. [ 56 ]
Research focused on either fast delivery or beam smoothness, both aimed at delivering energy to the target more uniformly. One early problem was that light at infrared wavelengths lost energy before hitting the fuel. Breakthroughs were made at the LLE at the University of Rochester , where scientists used frequency-tripling crystals to transform infrared laser beams into ultraviolet beams.
In 1985, Donna Strickland [ 57 ] and Gérard Mourou invented a method to amplify laser pulses by "chirping": a short pulse was stretched in time, spreading its single wavelength into a full spectrum and lowering its peak power; the stretched beam was amplified at each wavelength and then recompressed back into one short pulse. Chirped pulse amplification became instrumental for NIF and the Omega EP system. [ 58 ]
LANL constructed a series of laser facilities. [ 59 ] They included Gemini (a two beam system), Helios (eight beams), Antares (24 beams) and Aurora (96 beams). [ 60 ] [ 61 ] The program ended in the early nineties with a cost on the order of one billion dollars. [ 59 ]
In 1987, Akira Hasegawa [ 62 ] noticed that in a dipolar magnetic field, fluctuations tended to compress the plasma without energy loss. This effect was noticed in data taken by Voyager 2 , when it encountered Uranus . This observation became the basis for a fusion approach known as the levitated dipole .
In tokamaks, the Tore Supra was under construction from 1983 to 1988 in Cadarache , France. [ 63 ] Its superconducting magnets permitted it to generate a strong permanent toroidal magnetic field. [ 64 ] First plasma came in 1988. [ 65 ]
In 1983, JET achieved first plasma. In 1985, the Japanese tokamak JT-60 produced its first plasmas. In 1988, T-15, a Soviet tokamak, was completed, the first to use (helium-cooled) superconducting magnets. [ 66 ]
In 1984, Martin Peng proposed [ 67 ] an alternate arrangement of magnet coils that would greatly reduce the aspect ratio while avoiding the erosion issues of the compact tokamak: a spherical tokamak . Instead of wiring each magnet coil separately, he proposed using a single large conductor in the center, and wiring the magnets as half-rings off of this conductor. What was once a series of individual rings passing through the hole in the center of the reactor was reduced to a single post, allowing for aspect ratios as low as 1.2. [ 68 ] :B247 [ 69 ] :225 The ST concept appeared to represent an enormous advance in tokamak design. The proposal came during a period when US fusion research budgets were dramatically smaller. ORNL was provided with funds to develop a suitable central column built out of a high-strength copper alloy called "Glidcop". However, they were unable to secure funding to build a demonstration machine.
Having failed to win funding at ORNL, Peng began a worldwide effort to interest other teams in the concept and get a test machine built. One approach would be to convert a spheromak. [ 69 ] :225 Peng's advocacy caught the interest of Derek Robinson , of the United Kingdom Atomic Energy Authority . Robinson gathered a team and secured on the order of 100,000 pounds to build an experimental machine, the Small Tight Aspect Ratio Tokamak , or START. Parts of the machine were recycled from earlier projects, while others were loaned from other labs, including a 40 keV neutral beam injector from ORNL. Construction began in 1990 and operation started in January 1991. [ 68 ] :11 It achieved a record beta (plasma pressure compared to magnetic field pressure) of 40% using a neutral beam injector.
The International Thermonuclear Experimental Reactor ( ITER ) coalition formed, involving EURATOM , Japan, the Soviet Union and the United States, and kicked off the conceptual design process.
In 1991 JET's Preliminary Tritium Experiment achieved the world's first controlled release of fusion power. [ 70 ]
In 1992, Physics Today published Robert McCrory's outline of the current state of ICF, advocating for a national ignition facility. [ 71 ] This was followed by a review article from John Lindl in 1995, [ 72 ] making the same point. During this time various ICF subsystems were developed, including target manufacturing, cryogenic handling systems, new laser designs (notably the NIKE laser at NRL ) and improved diagnostics including time-of-flight analyzers and Thomson scattering . This work was done at the NOVA laser system, General Atomics , Laser Mégajoule and the GEKKO XII system in Japan. Through this work and lobbying by groups like Fusion Power Associates and John Sethian at NRL, Congress authorized funding for the NIF project in the late nineties.
In 1992 the United States and the former republics of the Soviet Union stopped testing nuclear weapons.
In 1993, TFTR at PPPL experimented with 50% deuterium , 50% tritium , eventually reaching 10 megawatts of fusion power.
In the early nineties, theory and experimental work regarding fusors and polywells was published. [ 73 ] [ 74 ] In response, Todd Rider at MIT developed general models of these devices, [ 75 ] arguing that all plasma systems at thermodynamic equilibrium were fundamentally limited. In 1995, William Nevins published a criticism [ 76 ] arguing that the particles inside fusors and polywells would acquire angular momentum , causing the dense core to degrade.
In 1995, the University of Wisconsin–Madison built a large fusor , known as HOMER. [ 77 ] Dr George H. Miley at Illinois built a small fusor that produced neutrons using deuterium [ 78 ] [ 79 ] and discovered the " star mode " of fusor operation. At this time in Europe, an IEC device was developed as a commercial neutron source by Daimler-Chrysler and NSD Fusion. [ 80 ] [ 81 ]
The next year, Tore Supra reached a record plasma duration of two minutes with a current of almost 1 million amperes driven non-inductively by 2.3 MW of lower hybrid frequency waves (i.e. 280 MJ of injected and extracted energy), enabled by actively cooled plasma-facing components. [ 63 ] [ 82 ]
The upgraded Z-machine opened to the public in August 1998. [ 83 ] The key attributes were its 18 million ampere current and a discharge time of less than 100 nanoseconds . [ 84 ] This generated a magnetic pulse inside a large oil tank, which struck a liner (an array of tungsten wires). [ 85 ] Firing the Z-machine became a way to test high energy, high temperature (2 billion degrees) conditions. [ 86 ]
In 1997, JET reached 16.1 MW (65% of heat to plasma [ 87 ] ), sustaining over 10 MW for over 0.5 sec. As of 2020 this remained the record output level. Four megawatts of alpha particle self-heating was achieved.
ITER was officially announced as part of a seven-party consortium (six countries and the EU). ITER was designed to produce ten times more fusion power than the input power. ITER was sited in Cadarache. [ 88 ] The US withdrew from the project in 1999.
JT-60 produced a reversed shear plasma with the equivalent fusion amplification factor Q_eq of 1.25 - as of 2021 this remained the world record.
In the late nineties, a team at Columbia University and MIT developed the levitated dipole , [ 89 ] a fusion device that consisted of a superconducting electromagnet, floating in a saucer shaped vacuum chamber. [ 90 ] Plasma swirled around this donut and fused along the center axis. [ 91 ]
In 1999 MAST replaced START .
"Fast ignition" [ 96 ] [ 97 ] appeared in the late nineties, as part of a push by LLE to build the Omega EP system, which finished in 2008. Fast ignition showed dramatic power savings and moved ICF into the race for energy production. The HiPER experimental facility became dedicated to fast ignition.
In 2001 the United States, China and Republic of Korea joined ITER while Canada withdrew.
In April 2005, a UCLA team announced [ 98 ] a way of producing fusion using a machine that "fits on a lab bench", using lithium tantalate to generate enough voltage to fuse deuterium . The process did not generate net power.
The next year, China's EAST test reactor was completed. [ 99 ] This was the first tokamak to use superconducting magnets to generate both toroidal and poloidal fields.
In the early 2000s, LANL researchers claimed that an oscillating plasma could reach local thermodynamic equilibrium. This prompted the POPS and Penning trap designs. [ 100 ] [ 101 ]
In 2005 NIF fired its first bundle of eight beams, achieving the most powerful laser pulse to date - 152.8 kJ (infrared).
MIT researchers became interested in fusors for space propulsion, [ 102 ] using fusors with multiple inner cages. [ 103 ] Greg Piefer founded Phoenix Nuclear Labs and developed the fusor into a neutron source for medical isotope production. [ 104 ] Robert Bussard began speaking openly about the polywell in 2006. [ 105 ] [ 106 ]
In March 2009, NIF became operational. [ 107 ]
In the early 2000s privately backed fusion companies launched to develop commercial fusion power. [ 108 ] Tri Alpha Energy , founded in 1998, began by exploring a field-reversed configuration approach. [ 109 ] [ 110 ] In 2002, Canadian company General Fusion began proof-of-concept experiments based on a hybrid magneto-inertial approach called Magnetized Target Fusion. [ 109 ] [ 108 ] Investors included Jeff Bezos (General Fusion) and Paul Allen (Tri Alpha Energy). [ 109 ] Toward the end of the decade, Tokamak Energy started exploring spherical tokamak devices using reconnection. [ 111 ]
Private and public research accelerated in the 2010s.
In 2017, General Fusion developed its plasma injector technology and Tri Alpha Energy constructed and operated its C-2U device. [ 113 ] Earlier, in August 2014, Phoenix Nuclear Labs had announced the sale of a high-yield neutron generator that could sustain 5×10¹¹ deuterium fusion reactions per second over a 24-hour period. [ 114 ]
In October 2014, Lockheed Martin 's Skunk Works announced the development of a high beta fusion reactor, the Compact Fusion Reactor . [ 115 ] [ 116 ] [ 117 ] Although the original concept was to build a 20-ton, container-sized unit, the team conceded in 2018 that the minimum scale would be 2,000 tons. [ 118 ]
In January 2015, the polywell was presented at Microsoft Research . [ 119 ] TAE Technologies announced that its Norman reactor had achieved plasma. [ 120 ]
In 2017, Helion Energy 's fifth-generation plasma machine went into operation, seeking to compress plasma with a 20 T magnetic field and reach fusion temperatures. [ 118 ] ST40 generated "first plasma". [ 121 ]
In 2018, Eni announced a $50 million investment in Commonwealth Fusion Systems , to attempt to commercialize ARC technology using a test reactor ( SPARC ) in collaboration with MIT. [ 122 ] [ 123 ] [ 124 ] [ 125 ] The reactor planned to employ yttrium barium copper oxide (YBCO) high-temperature superconducting magnet technology. In 2021, Commonwealth Fusion Systems successfully tested a 20 T magnet, making it the strongest high-temperature superconducting magnet in the world. Following the magnet test, CFS raised $1.8 billion from private investors.
General Fusion began developing a 70% scale demo system. [ 69 ] In 2018, TAE Technologies' reactor reached nearly 20 M°C. [ 126 ]
In 2010, NIF researchers conducted a series of "tuning" shots to determine the optimal target design and laser parameters for high-energy ignition experiments with fusion fuel. [ 127 ] [ 128 ] Net fuel energy gain [ 129 ] [ 130 ] was achieved in September 2013. [ 131 ] [ 132 ]
In April 2014, LLNL ended the Laser Inertial Fusion Energy (LIFE) program and directed their efforts towards NIF. [ 133 ]
A 2012 paper demonstrated that a dense plasma focus had achieved temperatures of 1.8 billion degrees Celsius, sufficient for boron fusion , and that fusion reactions were occurring primarily within the contained plasmoid, necessary for net power. [ 134 ]
In August 2014, MIT announced a tokamak it named the ARC fusion reactor , using rare-earth barium-copper oxide (REBCO) superconducting tapes to construct high-magnetic field coils that it claimed produced comparable magnetic field strength in a smaller configuration than other designs. [ 135 ]
In October 2015, researchers at the Max Planck Institute of Plasma Physics completed building the largest stellarator to date, the Wendelstein 7-X . In December they produced the first helium plasma, and in February 2016 produced hydrogen plasma. [ 136 ] With plasma discharges of up to 30 minutes planned, Wendelstein 7-X aimed to demonstrate the essential stellarator attribute: continuous operation of a high-temperature plasma. [ 137 ]
In 2014 EAST achieved a record confinement time of 30 seconds for plasma in the high-confinement mode (H-mode), thanks to improved heat dispersal. This was an order of magnitude improvement vs other reactors. [ 138 ] In 2017 the reactor achieved a stable 101.2-second steady-state high confinement plasma, setting a world record in long-pulse H-mode operation. [ 139 ]
In 2018 MIT scientists formulated a theoretical means to remove the excess heat from compact nuclear fusion reactors via larger and longer divertors . [ 140 ]
In 2019 the United Kingdom announced a planned £200-million (US$248-million) investment to produce a design for a fusion facility named the Spherical Tokamak for Energy Production (STEP), by the early 2040s. [ 141 ] [ 142 ]
In December 2020, the Chinese experimental nuclear fusion reactor HL-2M achieved its first plasma discharge. [ 143 ] In May 2021, Experimental Advanced Superconducting Tokamak (EAST) announced a new world record for superheated plasma, sustaining a temperature of 120 M°C for 101 seconds and a peak of 160 M°C for 20 seconds. [ 144 ] In December 2021 EAST set a new world record by sustaining a high temperature (70 M°C [ 145 ]) plasma for 1,056 seconds. [ 146 ]
In 2020, Chevron Corporation announced an investment in start-up Zap Energy , co-founded by British entrepreneur and investor, Benj Conway, together with physicists Brian Nelson and Uri Shumlak from University of Washington . [ 147 ] In 2021 the company raised $27.5 million in Series B funding led by Addition. [ 148 ]
In 2021, the US DOE launched the INFUSE program, a public-private knowledge sharing initiative involving a PPPL, MIT Plasma Science and Fusion Center and Commonwealth Fusion Systems partnership, [ 149 ] together with partnerships with TAE Technologies, Princeton Fusion Systems, and Tokamak Energy. [ 150 ] In 2021, DOE's Fusion Energy Sciences Advisory Committee approved a strategic plan to guide fusion energy and plasma physics research [ 151 ] [ 152 ] [ 153 ] that included a working power plant by 2040, similar to Canadian, Chinese, and U.K. efforts. [ 154 ] [ 155 ]
In January 2021, SuperOx announced the commercialization of a new superconducting wire, with more than 700 A/mm² current capability. [ 156 ] [ 157 ]
TAE Technologies announced that its Norman device had sustained a temperature of about 60 million degrees C for 30 milliseconds, 8 and 10 times higher, respectively, than the company's previous devices. The duration was claimed to be limited by the power supply rather than the device. [ citation needed ]
In August 2021, the National Ignition Facility recorded a record-breaking 1.3 megajoules of energy released by fusion, the first example of the Lawson criterion being surpassed in a laboratory. [ 158 ]
In February 2022, JET sustained 11 MW and a Q value of 0.33 for over 5 seconds, outputting 59.7 megajoules, using a mix of deuterium and tritium for fuel. [ 159 ] In March 2022 it was announced that Tokamak Energy had achieved a record plasma temperature of 100 million kelvins inside a commercial compact tokamak. [ 160 ]
In October 2022, the Korea Superconducting Tokamak Advanced Research (KSTAR) device reached a record plasma duration of 45 seconds, [ 161 ] sustaining a high-temperature fusion plasma above 100 million degrees Celsius using integrated real-time RMP control for an ELM-less H-mode, i.e. the fast ions regulated enhancement (FIRE) mode, [ 162 ] [ 163 ] a machine learning algorithm, and 3D field optimization via an edge-localized RMP.
In December 2022, the NIF achieved the first scientific breakeven controlled fusion experiment, with an energy gain of 1.5. [ 164 ] [ 165 ]
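Using the widely reported shot energies (press figures, not drawn from the cited references), the gain follows directly:

```latex
G = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}} \approx \frac{3.15\ \mathrm{MJ}}{2.05\ \mathrm{MJ}} \approx 1.5.
```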
In February 2024, the KSTAR tokamak set a new record (shot #34705) for the longest duration (102 seconds) of a magnetically confined plasma. The plasma was operated in the H-mode, with much better control of the error field than was possible previously. KSTAR also set a record (shot #34445) for the longest steady-state duration at a temperature of 100 million degrees Celsius (48 seconds, ELM-LESS FIRE mode). [ 166 ] [ 167 ] [ 168 ] | https://en.wikipedia.org/wiki/History_of_nuclear_fusion |
Differential equations , [ 1 ] in particular Euler equations , [ 2 ] rose in prominence during World War II in calculating the accurate trajectory [ 3 ] of ballistics, [ 4 ] both rocket-propelled and gun or cannon type projectiles. Originally, mathematicians used the simpler calculus [ 5 ] of earlier centuries to determine velocity, thrust, elevation, curve, distance, and other parameters.
New weapons, however, such as Germany's giant cannons, the " Paris Gun [ 6 ] " (Encyclopedia Astronautica) and " Big Bertha ," and the V-2 rocket , meant that projectiles would travel hundreds of miles in distance and dozens of miles in height, in all weathers. As a result, variables such as diminished wind resistance in thin atmospheres and changes in gravitational pull reduced accuracy using the historic methodology. There was the additional problem of planes that could now fly hundreds of miles an hour. Differential equations were applied to stochastic processes . Developing machines that could speed up human calculation of differential equations led in part to the creation of the modern computer through the efforts of Vannevar Bush , John von Neumann and others.
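To make the method concrete, the following is a minimal sketch of Euler's method applied to a projectile with air drag, the kind of step-by-step integration that human computers, and later machines, performed; the muzzle velocity, drag constant and step size are illustrative assumptions, not historical firing-table values:

```python
import math

G = 9.81     # gravitational acceleration, m/s^2
K = 5e-5     # quadratic drag constant per unit mass, 1/m (assumed)
DT = 0.01    # Euler time step, s

def euler_range(v0, elevation_deg):
    """Approximate a projectile's range by stepping Euler's method."""
    vx = v0 * math.cos(math.radians(elevation_deg))
    vy = v0 * math.sin(math.radians(elevation_deg))
    x = y = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # One Euler step: new state = old state + derivative * time step
        x += vx * DT
        y += vy * DT
        vx -= K * speed * vx * DT
        vy -= (G + K * speed * vy) * DT
    return x

# Hypothetical long-range shot: 1600 m/s muzzle velocity at 50 degrees
print(f"estimated range: {euler_range(1600.0, 50.0) / 1000:.1f} km")
```

Performed by hand, every pass through that loop was one row in a firing table, which is what made mechanical and electronic aids so valuable.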
According to Mary Croarken in her paper "Computing in Britain During World War II," by 1945 the Cambridge Mathematical Laboratory created by John Lennard-Jones utilized the latest computing devices to solve such equations. These devices included a model " differential analyser " and the Mallock machine , described as "an electrical simultaneous equation solver." According to Croarken, the Ministry was also interested in the newly arrived differential analyser accommodating eight integrators. This exotic computing device, built by Metropolitan-Vickers in 1939, consisted of wheel and disk mechanisms that could provide solutions for differential equations, with output plotted as a graph.
At the same time, in the United States, analog computer pioneer Vannevar Bush took on a similar role to that of Lennard-Jones in the military effort after President Franklin Delano Roosevelt entrusted him with the bulk of wartime research into automatic control of fire power using machines and computing devices.
According to Sarah Bergbreiter in her paper "Moving from Practice to Theory: Automatic Control after World War II," fire control for the downing of enemy aircraft by anti-aircraft guns was the priority. The analog electro-mechanical computing machines plotted the differential firing data while servos created by H.L. Hazen adapted the data to the guns for precise firing control and accuracy. Other improvements of a similar type by Bell Labs increased firing stability so that output from the differential engines could be fully used to compensate for stochastic behaviors of enemy aircraft and large guns. A new age of intelligent warfare had begun.
This work at MIT and Bell Labs would later feed into Norbert Wiener 's development of the science of cybernetics and into the electronic computer, speeding the differential calculation process exponentially and taking one more giant step toward the creation of the modern digital computer using von Neumann architecture . Dr. von Neumann was one of the original mathematicians employed in the development of differential equations for ballistic warfare.
Computer operating systems (OSes) provide a set of functions needed and used by most application programs on a computer, and the links needed to control and synchronize computer hardware. On the first computers, with no operating system, every program needed the full hardware specification to run correctly and perform standard tasks, and its own drivers for peripheral devices like printers and punched paper card readers . The growing complexity of hardware and application programs eventually made operating systems a necessity for everyday use.
Early computers lacked any form of operating system. Instead, the user, also called the operator, had sole use of the machine for a scheduled period of time. The operator would arrive at the computer with program and data which needed to be loaded into the machine before the program could be run. Loading of program and data was accomplished in various ways including toggle switches, punched paper cards and magnetic or paper tape. [ 1 ] [ 2 ] [ 3 ] Once loaded, the machine would be set to execute the single program until that program completed or crashed. Programs could generally be debugged via a control panel using dials, toggle switches and panel lights, making it a very manual and error-prone process. [ 4 ]
Symbolic languages, assemblers , [ 5 ] [ 6 ] [ 7 ] and compilers were developed for programmers to translate symbolic program code into machine code that previously would have been hand-encoded. Later machines came with libraries of support code on punched cards or magnetic tape, which would be linked to the user's program to assist in operations such as input and output. This was the genesis of the modern-day operating system; however, machines still ran a single program or job at a time. At Cambridge University in England the job queue was at one time a string from which tapes attached to corresponding job tickets were hung with stationery pegs. [ 8 ]
As machines became more powerful the time to run programs diminished, and the time to hand off the equipment to the next user became large by comparison. Accounting for and paying for machine usage moved on from checking the wall clock to automatic logging by the computer. Run queues evolved from a literal queue of people at the door, to a heap of media on a jobs-waiting table, or batches of punched cards stacked one on top of the other in the reader, until the machine itself was able to select and sequence which magnetic tape drives processed which tapes. Where program developers had originally had access to run their own jobs on the machine, they were supplanted by dedicated machine operators who looked after the machine and were less and less concerned with implementing tasks manually. When commercially available computer centers were faced with the implications of data lost through tampering or operational errors, equipment vendors were put under pressure to enhance the runtime libraries to prevent misuse of system resources. Automated monitoring was needed not just for CPU usage but for counting pages printed, cards punched, cards read, disk storage used and for signaling when operator intervention was required by jobs such as changing magnetic tapes and paper forms. Security features were added to operating systems to record audit trails of which programs were accessing which files and to prevent access to a production payroll file by an engineering program, for example.
All these features were building up towards the repertoire of a fully capable operating system. Eventually the runtime libraries became an amalgamated program that was started before the first customer job and could read in the customer job, control its execution, record its usage, reassign hardware resources after the job ended, and immediately go on to process the next job. These resident background programs, capable of managing multi step processes, were often called monitors or monitor-programs before the term "operating system" established itself.
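A conceptual sketch of such a resident monitor's job loop (written here in modern Python purely for illustration; no historical monitor was structured this way):

```python
import time

def run_monitor(job_queue):
    """Conceptual resident monitor: run queued jobs back to back,
    recording usage and surviving individual job failures."""
    accounting_log = []
    while job_queue:
        job = job_queue.pop(0)      # read in the next customer job
        start = time.time()
        try:
            job["program"]()        # control its execution
            status = "completed"
        except Exception:
            status = "abended"      # a crash ends the job, not the monitor
        # Record usage for billing, then go straight on to the next job
        accounting_log.append((job["name"], status, time.time() - start))
    return accounting_log

# Hypothetical job deck: each entry stands in for a deck of punched cards.
jobs = [{"name": "PAYROLL", "program": lambda: sum(range(10**6))},
        {"name": "BADJOB",  "program": lambda: 1 / 0}]
print(run_monitor(jobs))
```

The essential point is the loop itself: control returns to the monitor after every job, with no idle hand-off time between users.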
An underlying program offering basic hardware management, software scheduling and resource monitoring may seem a remote ancestor to the user-oriented OSes of the personal computing era. But there has been a shift in the meaning of OS. Just as early automobiles lacked speedometers, radios, and air conditioners which later became standard, more and more optional software features became standard in every OS package. This has led to the perception of an OS as a complete user system with an integrated graphical user interface , utilities, and some applications such as file managers , text editors , and configuration tools.
The true descendant of the early operating systems is what is now called the " kernel ". In technical and development circles the old restricted sense of an OS persists because of the continued active development of embedded operating systems for all kinds of devices with a data-processing component, from hand-held gadgets up to industrial robots and real-time control systems, which do not run user applications at the front end. An embedded OS in a device today is not so far removed as one might think from its ancestor of the 1950s.
The broader categories of systems and application software are discussed in the computer software article.
The first operating system used for real work was GM-NAA I/O , produced in 1956 by General Motors ' Research division [ 9 ] for its IBM 704 . [ 10 ] [ specify ] Most other early operating systems for IBM mainframes were also produced by customers. [ 11 ]
Early operating systems were very diverse, with each vendor or customer producing one or more operating systems specific to their particular mainframe computer . Every operating system, even from the same vendor, could have radically different models of commands, operating procedures, and such facilities as debugging aids. Typically, each time the manufacturer brought out a new machine, there would be a new operating system, and most applications would have to be manually adjusted, recompiled, and retested.
The state of affairs continued until the 1960s when IBM , already a leading hardware vendor, stopped work on existing systems and put all its effort into developing the System/360 series of machines, all of which used the same instruction and input/output architecture. IBM intended to develop a single operating system for the new hardware, the OS/360 . The problems encountered in the development of the OS/360 are legendary, and are described by Fred Brooks in The Mythical Man-Month —a book that has become a classic of software engineering . Because of performance differences across the hardware range and delays with software development, a whole family of operating systems was introduced instead of a single OS/360. [ 12 ] [ 13 ]
IBM wound up releasing a series of stop-gaps followed by two longer-lived operating systems: DOS/360 for smaller machines and OS/360 (in MFT and MVT variants) for larger ones.
IBM maintained full compatibility with the past, so that programs developed in the sixties can still run under z/VSE (if developed for DOS/360) or z/OS (if developed for MFT or MVT) with no change.
IBM also developed TSS/360 , a time-sharing system for the System/360 Model 67 . Reflecting the perceived importance of the time-sharing project, IBM set hundreds of developers to work on it. Early releases of TSS were slow and unreliable; by the time TSS had acceptable performance and reliability, IBM wanted its TSS users to migrate to OS/360 and OS/VS2; while IBM offered a TSS/370 PRPQ, they dropped it after 3 releases. [ 14 ]
Several operating systems for the IBM S/360 and S/370 architectures were developed by third parties, including the Michigan Terminal System (MTS) and MUSIC/SP .
Control Data Corporation developed the SCOPE operating systems [ NB 1 ] in the 1960s, for batch processing and later developed the MACE operating system for time sharing, which was the basis for the later Kronos . In cooperation with the University of Minnesota , the Kronos and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and time sharing use. Like many commercial time sharing systems, its interface was an extension of the DTSS time sharing system, one of the pioneering efforts in timesharing and programming languages.
In the late 1970s, Control Data and the University of Illinois developed the PLATO system , which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time; the shared memory model of PLATO's TUTOR programming language allowed applications such as real-time chat and multi-user graphical games.
For the UNIVAC 1107 , UNIVAC , the first commercial computer manufacturer, produced the EXEC I operating system, and Computer Sciences Corporation developed the EXEC II operating system and delivered it to UNIVAC. EXEC II was ported to the UNIVAC 1108 . Later, UNIVAC developed the EXEC 8 operating system for the 1108; it was the basis for operating systems for later members of the family. Like all early mainframe systems, EXEC I and EXEC II were batch-oriented systems that managed magnetic drums, disks, card readers and line printers; EXEC 8 supported both batch processing and on-line transaction processing. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
Burroughs Corporation introduced the B5000 in 1961 with the MCP ( Master Control Program ) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no software, not even at the lowest level of the operating system, being written directly in machine language or assembly language ; the MCP was the first [ citation needed ] OS to be written entirely in a high-level language - ESPOL , a dialect of ALGOL 60 - although ESPOL had specialized statements for each "syllable" [ NB 2 ] in the B5000 instruction set. MCP also introduced many other ground-breaking innovations, such as being one of [ NB 3 ] the first commercial implementations of virtual memory . The rewrite of MCP for the B6500 is now marketed as the Unisys ClearPath/MCP.
GE introduced the GE-600 series with the General Electric Comprehensive Operating Supervisor (GECOS) operating system in 1962. After Honeywell acquired GE's computer business, it was renamed to General Comprehensive Operating System (GCOS). Honeywell expanded the use of the GCOS name to cover all its operating systems in the 1970s, though many of its computers had nothing in common with the earlier GE 600 series and their operating systems were not derived from the original GECOS.
Project MAC at MIT, working with GE and Bell Labs , developed Multics , which introduced the concept of ringed security privilege levels.
Digital Equipment Corporation developed TOPS-10 for its PDP-10 line of 36-bit computers in 1967. Before the widespread use of Unix, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community. Bolt, Beranek, and Newman developed TENEX for a modified PDP-10 that supported demand paging ; this was another popular system in the research and ARPANET communities, and was later developed by DEC into TOPS-20 .
Scientific Data Systems /Xerox Data Systems developed several operating systems for the Sigma series of computers, such as the Basic Control Monitor (BCM), Batch Processing Monitor (BPM), and Basic Time-Sharing Monitor (BTM). Later, BPM and BTM were succeeded by the Universal Time-Sharing System (UTS); it was designed to provide multi-programming services for online (interactive) user programs in addition to batch-mode production jobs. It was succeeded by the CP-V operating system, which combined UTS with the heavily batch-oriented Xerox Operating System .
Digital Equipment Corporation created several operating systems for its 16-bit PDP-11 machines, including the simple RT-11 system, the time-sharing RSTS operating systems, and the RSX-11 family of real-time operating systems , as well as the VMS system for the 32-bit VAX machines.
Several competitors of Digital Equipment Corporation such as Data General , Hewlett-Packard , and Computer Automation created their own operating systems. One such, "MAX III", was developed for Modular Computer Systems Modcomp II and Modcomp III computers. It targeted the industrial control market, and its Fortran libraries included one that enabled access to measurement and control devices.
IBM's key innovation in operating systems in this class (which they call "mid-range"), was their "CPF" for the System/38 . This had capability-based addressing , used a machine interface architecture to isolate the application software and most of the operating system from hardware dependencies (including even such details as address size and register size) and included an integrated RDBMS . The succeeding OS/400 (now known as IBM i ) for the IBM AS/400 and later IBM Power Systems has no files, only objects of different types and these objects persist in very large, flat virtual memory, called a single-level store.
The Unix operating system was developed at AT&T Bell Laboratories in the late 1960s, originally for the PDP-7 , and later for the PDP-11. Because it was essentially free in early editions, easily obtainable, and easily modified, it achieved wide acceptance. It also became a requirement within the Bell systems operating companies. Since it was written in the C language , when that language was ported to a new machine architecture, Unix was also able to be ported. This portability permitted it to become the choice for a second generation of minicomputers and the first generation of workstations , and its use became widespread. Unix exemplified the idea of an operating system that was conceptually the same across various hardware platforms. Because of its utility, it inspired many and later became one of the roots of the free software movement and open-source software . Numerous operating systems were based upon it including Minix , GNU/Linux , and the Berkeley Software Distribution . Apple's macOS is also based on Unix via NeXTSTEP [ 15 ] and FreeBSD . [ 16 ]
The Pick operating system was another operating system available on a wide variety of hardware brands. Commercially released in 1973, its core was a BASIC -like language called Data/BASIC and a SQL-style database manipulation language called ENGLISH. Licensed to a large variety of manufacturers and vendors, by the early 1980s observers saw the Pick operating system as a strong competitor to Unix. [ 17 ]
Beginning in the mid-1970s, a new class of small computers came onto the marketplace. Featuring 8-bit processors, typically the MOS Technology 6502 , Intel 8080 , Motorola 6800 or the Zilog Z80 , along with rudimentary input and output interfaces and as much RAM as practical, these systems started out as kit-based hobbyist computers but soon evolved into an essential business tool.
While many eight-bit home computers of the 1980s, such as the BBC Micro , Commodore 64 , Apple II , Atari 8-bit computers , Amstrad CPC , ZX Spectrum series and others could load a third-party disk-loading operating system, such as CP/M or GEOS , they were generally used without one. Their built-in operating systems were designed in an era when floppy disk drives were very expensive and not expected to be used by most users, so the standard storage device on most was a tape drive using standard compact cassettes . Most, if not all, of these computers shipped with a built-in BASIC interpreter on ROM, which also served as a crude command-line interface , allowing the user to load a separate disk operating system to perform file management commands and load and save to disk. The most popular [ citation needed ] home computer, the Commodore 64, was a notable exception, as its DOS was on ROM in the disk drive hardware, and the drive was addressed identically to printers, modems, and other external devices.
Furthermore, those systems shipped with minimal amounts of computer memory —4-8 kilobytes was standard on early home computers—as well as 8-bit processors without specialized support circuitry like an MMU or even a dedicated real-time clock . On this hardware, a complex operating system's overhead supporting multiple tasks and users would likely compromise the performance of the machine without really being needed. As those systems were largely sold complete, with a fixed hardware configuration, there was also no need for an operating system to provide drivers for a wide range of hardware to abstract away differences.
Video games and even the available spreadsheet , database and word processors for home computers were mostly self-contained programs that took over the machine completely. Although integrated software existed for these computers, they usually lacked features compared to their standalone equivalents, largely due to memory limitations. Data exchange was mostly performed through standard formats like ASCII text or CSV , or through specialized file conversion programs.
Since virtually all video game consoles and arcade cabinets designed and built after 1980 were true digital machines based on microprocessors (unlike the earlier Pong clones and derivatives), some of them carried a minimal form of BIOS or built-in game, such as the ColecoVision , the Sega Master System and the SNK Neo Geo .
Modern-day game consoles and videogames, starting with the PC-Engine , all have a minimal BIOS that also provides some interactive utilities such as memory card management, audio or video CD playback, copy protection, and sometimes libraries for developers to use. Few of these cases, however, would qualify as a true operating system.
The most notable exceptions are probably the Dreamcast game console, which includes a minimal BIOS, like the PlayStation , but can load the Windows CE operating system from the game disk, allowing easy porting of games from the PC world, and the Xbox game console, which is little more than a disguised Intel-based PC running a secret, modified version of Microsoft Windows in the background. Furthermore, there are Linux versions that will run on a Dreamcast and later game consoles as well.
Long before that, Sony had released a kind of development kit called the Net Yaroze for its first PlayStation platform, which provided a series of programming and developing tools to be used with a normal PC and a specially modified "Black PlayStation" that could be interfaced with a PC and download programs from it. In general, these operations require a functional OS on both platforms involved.
In general, it can be said that videogame consoles and arcade coin-operated machines used at most a built-in BIOS during the 1970s, 1980s and most of the 1990s, while from the PlayStation era onward they became increasingly sophisticated, to the point of requiring a generic or custom-built OS to aid development and expandability.
The development of microprocessors made inexpensive computing available for the small business and hobbyist, which in turn led to the widespread use of interchangeable hardware components using a common interconnection (such as the S-100 , SS-50, Apple II , ISA , and PCI buses ), and an increasing need for "standard" operating systems to control them. The most important of the early OSes on these machines was Digital Research 's CP/M -80 for the 8080 / 8085 / Z-80 CPUs. It was based on several Digital Equipment Corporation operating systems, mostly for the PDP-11 architecture. Microsoft's first operating system, MDOS/MIDAS , was modelled on many of the PDP-11's features, but designed for microprocessor-based systems. MS-DOS , or PC DOS when supplied by IBM, was designed to be similar to CP/M-80. [ 18 ] Each of these machines had a small boot program in ROM which loaded the OS itself from disk. The BIOS on the IBM-PC class machines was an extension of this idea and accreted more features and functions in the 20 years after the first IBM-PC was introduced in 1981.
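The division of labour described here, in which a tiny fixed program in ROM does nothing but fetch the real system from disk and hand over control, can be illustrated with a toy model. This is a conceptual sketch only; no real ROM, BIOS or firmware interface is being described, and all names are illustrative.

```python
# Toy model of the ROM-boot-stub idea described above: the ROM code is tiny
# and fixed, knows only how to read a well-known location on the disk, and
# then transfers control to whatever it loaded there. Purely illustrative.

from typing import Callable, List

Program = Callable[[], None]  # model a loaded program as a callable


def make_disk(os_entry_point: Program) -> List[Program]:
    """Lay out a toy 'disk' whose first sector holds the OS loader."""
    return [os_entry_point]


def rom_boot_stub(disk: List[Program]) -> None:
    """The ROM stub: fetch the boot sector and jump to it."""
    boot_program = disk[0]   # fixed, well-known location on the disk
    boot_program()           # transfer control; the ROM's job is done


def toy_os() -> None:
    print("toy OS running: initialising drivers, starting shell...")


rom_boot_stub(make_disk(toy_os))
```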
The decreasing cost of display equipment and processors made it practical to provide graphical user interfaces for many operating systems, such as the generic X Window System that is provided with many Unix systems, or other graphical systems such as Apple 's classic Mac OS and macOS , the Radio Shack Color Computer's OS-9 Level II/Multi-Vue , Commodore 's AmigaOS , Atari TOS , IBM 's OS/2 , and Microsoft Windows . The original GUI was developed on the Xerox Alto computer system at Xerox Palo Alto Research Center in the early 1970s and commercialized by many vendors throughout the 1980s and 1990s.
Since the late 1990s, there have been three operating systems in widespread use on personal computers: Apple Inc. 's macOS , the open source Linux , and Microsoft Windows . Since 2005 and the Mac transition to Intel processors , all have been developed mainly on the x86 platform, although macOS retained PowerPC support until 2009 and Linux remains ported to a multitude of architectures, including ones such as 68k , PA-RISC , and DEC Alpha , which have long been superseded and are out of production, and SPARC and MIPS , which are used in servers or embedded systems but no longer for desktop computers. Other operating systems such as AmigaOS and OS/2 remain in use mainly by retrocomputing enthusiasts or for specialized embedded applications.
In the early 1990s, Psion released the Psion Series 3 PDA , a small mobile computing device. It supported user-written applications running on an operating system called EPOC . Later versions of EPOC became Symbian , an operating system used for mobile phones from Nokia , Ericsson , Sony Ericsson , Motorola , Samsung and phones developed for NTT Docomo by Sharp , Fujitsu & Mitsubishi . Symbian was the world's most widely used smartphone operating system until 2010 with a peak market share of 74% in 2006. In 1996, Palm Computing released the Pilot 1000 and Pilot 5000, running Palm OS . Microsoft Windows CE was the base for Pocket PC 2000, renamed Windows Mobile in 2003, which at its peak in 2007 was the most common operating system for smartphones in the U.S.
In 2007, Apple introduced the iPhone and its operating system, known simply as iPhone OS (until the release of iOS 4 ), which, like Mac OS X , is based on the Unix-like Darwin . In addition to these underpinnings, it also introduced a powerful and innovative graphical user interface that was later also used on the tablet computer iPad . A year later, Android , with its own graphical user interface, was introduced, based on a modified Linux kernel , and Microsoft re-entered the mobile operating system market with Windows Phone in 2010, which was replaced by Windows 10 Mobile in 2015.
In addition to these, a wide range of other mobile operating systems are contending in this area.
Operating systems originally ran directly on the hardware itself and provided services to applications, but with virtualization, the operating system itself runs under the control of a hypervisor , instead of being in direct control of the hardware.
On mainframes, IBM introduced the notion of a virtual machine in 1968 with CP/CMS on the IBM System/360 Model 67 , and extended this later in 1972 with Virtual Machine Facility/370 (VM/370) on System/370 .
On x86 -based personal computers , VMware popularized this technology with their 1999 product, VMware Workstation , [ 19 ] and their 2001 VMware GSX Server and VMware ESX Server products. [ 20 ] Later, a wide range of products from others, including Xen , KVM and Hyper-V meant that by 2010 it was reported that more than 80 percent of enterprises had a virtualization program or project in place, and that 25 percent of all server workloads would be in a virtual machine. [ 21 ]
Over time, the line between virtual machines, monitors, and operating systems was blurred:
In many ways, virtual machine software today plays the role formerly held by the operating system, including managing the hardware resources (processor, memory, I/O devices), applying scheduling policies, and allowing system administrators to manage the system.
Source: https://en.wikipedia.org/wiki/History_of_operating_systems
The history of pathology can be traced to the earliest application of the scientific method to the field of medicine , a development which occurred in the Middle East during the Islamic Golden Age and in Western Europe during the Italian Renaissance .
Early systematic human dissections were carried out by the Ancient Greek physicians Herophilus of Chalcedon and Erasistratus of Chios in the early part of the third century BC. [ 1 ] The first physician known to have made postmortem dissections was the Arabian physician Avenzoar (1091–1161). Rudolf Virchow (1821–1902) is generally recognized to be the father of microscopic pathology . Most early pathologists were also practicing physicians or surgeons .
Early understanding of the origins of diseases constitutes the earliest application of the scientific method to the field of medicine , a development which occurred in the Middle East during the Islamic Golden Age [ 2 ] and in Western Europe during the Italian Renaissance . [ 3 ]
The Greek physician Hippocrates , the founder of scientific medicine, was the first to deal with the anatomy and the pathology of the human spine. [ 4 ] Galen developed an interest in anatomy from his studies of Herophilus and Erasistratus . [ 5 ] The concept of studying disease through the methodical dissection and examination of diseased bodies, organs, and tissues may seem obvious today, but there are few if any recorded examples of true autopsies performed prior to the second millennium . Though the pathology of contagion was understood by Muslim physicians since the time of Avicenna (980–1037), who described it in The Canon of Medicine ( c. 1020 ), [ 6 ] the first physician known to have made postmortem dissections was the Arabian physician Avenzoar (1091–1161), who proved that the skin disease scabies was caused by a parasite , followed by Ibn al-Nafis (b. 1213), who used dissection to discover pulmonary circulation in 1242. [ 7 ] In the 15th century, anatomic dissection was repeatedly used by the Italian physician Antonio Benivieni (1443–1502) to determine cause of death. [ 3 ] Antonio Benivieni is also credited with having introduced necropsy to the medical field. [ 8 ] Perhaps the most famous early gross pathologist was Giovanni Morgagni (1682–1771). His magnum opus , De Sedibus et Causis Morborum per Anatomem Indagatis , published in 1761, describes the findings of over 600 partial and complete autopsies, organised anatomically and methodically correlated with the symptoms exhibited by the patients prior to their demise. Although the study of normal anatomy was already well advanced at this date, De Sedibus was one of the first treatises specifically devoted to the correlation of diseased anatomy with clinical illness. [ 9 ] [ 10 ] By the late 1800s, an exhaustive body of literature had been produced on the gross anatomical findings characteristic of known diseases. The extent of gross pathology research in this period can be epitomized by the work of the Viennese pathologist (originally from Hradec Králové in the Czech Republic) Carl Rokitansky (1804–1878), who is said to have performed 20,000 autopsies, and supervised an additional 60,000, in his lifetime. [ 3 ] [ 11 ]
Rudolf Virchow (1821–1902) is generally recognized to be the father of microscopic pathology. While the compound microscope had been invented approximately 150 years prior, Virchow was one of the first prominent physicians to emphasize the study of manifestations of disease which were visible only at the cellular level. [ 3 ] [ 12 ] A student of Virchow's, Julius Cohnheim (1839–1884), combined histology techniques with experimental manipulations to study inflammation , making him one of the earliest experimental pathologists . [ 3 ] Cohnheim also pioneered the use of the frozen section procedure ; a version of this technique is widely employed by modern pathologists to render diagnoses and provide other clinical information intraoperatively. [ 13 ]
As new research techniques, such as electron microscopy , immunohistochemistry , and molecular biology , have expanded the means by which biomedical scientists can study disease, the definition and boundaries of investigative pathology have become less distinct. In the broadest sense, nearly all research which links manifestations of disease to identifiable processes in cells, tissues, or organs can be considered experimental pathology . [ 14 ]
Source: https://en.wikipedia.org/wiki/History_of_pathology
The history of penicillin follows observations and discoveries of evidence of antibiotic activity of the mould Penicillium that led to the development of penicillins , which became the first widely used antibiotics . Following the production of a relatively pure compound in 1942, penicillin became the first naturally derived antibiotic.
Ancient societies used moulds to treat infections, and in the following centuries many people observed the inhibition of bacterial growth by moulds. While working at St Mary's Hospital in London in 1928, Scottish physician Alexander Fleming was the first to experimentally determine that a Penicillium mould secretes an antibacterial substance, which he named "penicillin". The mould was found to be a variant of Penicillium notatum (now called Penicillium rubens ), a contaminant of a bacterial culture in his laboratory. The work on penicillin at St Mary's ended in 1929.
In 1939, a team of scientists at the Sir William Dunn School of Pathology at the University of Oxford , led by Howard Florey and including Edward Abraham , Ernst Chain , Mary Ethel Florey , Norman Heatley and Margaret Jennings , began researching penicillin. They developed a method for cultivating the mould and extracting, purifying and storing penicillin from it, together with an assay for measuring its purity. They carried out experiments on animals to determine penicillin's safety and effectiveness before conducting clinical trials and field tests. They derived penicillin's chemical structure and determined how it works. The private sector and the United States Department of Agriculture located and produced new strains and developed mass production techniques. During the Second World War penicillin became an important part of the Allied war effort, saving thousands of lives. Alexander Fleming, Howard Florey and Ernst Chain shared the 1945 Nobel Prize in Physiology or Medicine for the discovery and development of penicillin.
After the end of the war in 1945, penicillin became widely available. Dorothy Hodgkin determined its chemical structure, for which she received the Nobel Prize in Chemistry in 1964. This led to the development of semisynthetic penicillins that were more potent and effective against a wider range of bacteria. The drug was chemically synthesised in 1957, but cultivation of mould remains the primary means of production. It was discovered that adding penicillin to animal feed increased weight gain, improved feed-conversion efficiency, promoted more uniform growth and facilitated disease control. Agriculture became a major user of penicillin. Shortly after the discovery of penicillin, the Oxford team reported penicillin resistance in many bacteria. Research that aims to circumvent and understand the mechanisms of antibiotic resistance continues today.
Many ancient cultures, including those in Australia, China, Egypt, Greece and India, independently discovered the useful properties of fungi and plants in treating infections . These treatments often worked because many organisms, including many species of mould, naturally produce antibiotics . However, ancient practitioners could not precisely identify or isolate the active components in these organisms. [ 1 ] [ 2 ]
In England in 1640, the idea of using mould as a form of medical treatment was recorded by apothecaries such as the botanist John Parkinson , who documented the use of moulds to treat infections in his book on pharmacology . [ 3 ] In 17th-century Poland, wet bread was mixed with spider webs (which often contained fungal spores ) to treat wounds. The technique was mentioned by Henryk Sienkiewicz in his 1884 novel With Fire and Sword . [ 4 ]
In 1871, Sir John Scott Burdon-Sanderson reported that culture fluid covered with mould would produce no bacterial growth. [ 5 ] Joseph Lister , an English surgeon and the father of modern antisepsis , observed in November 1871 that urine samples contaminated with mould also did not permit the growth of bacteria. He also described the antibacterial action of Penicillium glaucum on human tissue but did not publish his results. [ 6 ] In 1875 John Tyndall demonstrated to the Royal Society the antibacterial action of the Penicillium fungus. [ 7 ]
In 1876, German biologist Robert Koch discovered that a bacterium ( Bacillus anthracis ) was the causative pathogen of anthrax , which became the first demonstration that a specific bacterium caused a specific disease and the first direct evidence of the germ theory of disease . [ 8 ] [ 9 ] In 1877, French biologists Louis Pasteur and Jules Francois Joubert observed that cultures of anthrax bacilli, when contaminated with other bacteria, could be successfully inhibited. [ 10 ] Reporting in the Comptes Rendus de l'Académie des Sciences , they concluded:
Neutral or slightly alkaline urine is an excellent medium for the bacteria. If the urine is sterile and the culture pure the bacteria multiply so fast that in the course of a few hours their filaments fill the fluid with a downy felt. But if when the urine is inoculated with these bacteria an aerobic organism, for example one of the "common bacteria," is sown at the same time, the anthrax bacterium makes little or no growth and sooner or later dies out altogether. It is a remarkable thing that the same phenomenon is seen in the body even of those animals most susceptible to anthrax, leading to the astonishing result that anthrax bacteria can be introduced in profusion into an animal, which yet does not develop the disease; it is only necessary to add some "common bacteria" at the same time to the liquid containing the suspension of anthrax bacteria. These facts perhaps justify the highest hopes for therapeutics. [ 11 ]
The phenomenon was described by Pasteur and Koch as antibacterial activity and was named antibiosis by French biologist Jean Paul Vuillemin in 1877. [ 12 ] [ 13 ] (The term antibiosis , meaning 'against life', was adopted as antibiotic by American biologist and later Nobel laureate Selman Waksman in 1947. [ 14 ] ) However, Paul de Kruif 's 1926 Microbe Hunters notes that Pasteur believed that this was contamination by other bacteria rather than by mould. [ 15 ] In 1887, Swiss physician Carl Garré developed a test method using a glass plate to observe bacterial inhibition and found similar results. [ 13 ] Using his gelatin-based culture plate, he grew two different species of bacteria and found that their growths were inhibited differently, as he reported:
I inoculated on the untouched cooled [gelatin] plate alternate parallel strokes of B. fluorescens [ Pseudomonas fluorescens ] and Staph. pyogenes [ Streptococcus pyogenes ]... B. fluorescens grew more quickly... [This] is not a question of overgrowth or crowding out of one by another quicker-growing species, as in a garden where luxuriantly growing weeds kill the delicate plants. Nor is it due to the utilization of the available foodstuff by the more quickly growing organisms, rather there is an antagonism caused by the secretion of specific, easily diffusible substances which are inhibitory to the growth of some species but completely ineffective against others. [ 16 ]
In 1895, Vincenzo Tiberio , an Italian physician at the University of Naples , published research on moulds initially found in a water well in Arzano ; from his observations, he concluded that these moulds contained soluble substances having antibacterial action. [ 17 ] Two years later, Ernest Duchesne at École du Service de Santé Militaire in Lyon independently discovered the healing properties of a P. glaucum mould, even curing infected guinea pigs of typhoid . He published his results in a dissertation in 1897. [ 18 ] Duchesne was using a discovery made earlier by Arab stable boys, who used moulds to cure sores on horses. He did not claim that the mould contained any antibacterial substance, only that the mould somehow protected the animals. Penicillin does not cure typhoid, and so it remains unknown which substance might have been responsible. A Pasteur Institute scientist, Costa Rican Clodomiro Picado Twight , similarly recorded the antibiotic effect of Penicillium in 1923. In these early stages of penicillin research, most species of Penicillium were non-specifically referred to as P. glaucum , so it is impossible to know the exact species, or whether it was really penicillin, that prevented bacterial growth. [ 10 ]
Andre Gratia and Sara Dath at the Free University of Brussels studied the effects of mould samples on bacteria. In 1924, they found that dead Staphylococcus aureus cultures were contaminated by a mould, a streptomycete . Upon further experimentation, they showed that the mould extract could kill not only S. aureus , but also Pseudomonas aeruginosa , Mycobacterium tuberculosis and Escherichia coli . Gratia called the antibacterial agent "mycolysate". The next year they found another killer mould that could inhibit B. anthracis . Reporting in Comptes rendus des séances de la Société de Biologie et de ses filiales , they identified the mould as P. glaucum . But these findings received little attention as the antibacterial agent and its medical value were not fully understood, and Gratia's samples were lost. [ 19 ] [ 20 ]
While working at St Mary's Hospital, London in 1928, Alexander Fleming , a Scottish physician, was investigating the variation of growth in cultures of S. aureus . [ 21 ] In August, he spent the summer break with his family at his country home The Dhoon at Barton Mills , Suffolk. Before leaving his laboratory, he inoculated several culture plates with S. aureus. He set the plates aside on one corner of the table, away from direct sunlight, to make space for his research student, Stuart Craddock, to work in his absence. He returned to his laboratory on 3 September. [ 22 ] As he and Daniel Merlin Pryce, his former research student, examined the culture plates, they found one with an open lid and the culture contaminated with a blue-green mould. In the contaminated plate the bacteria around the mould did not grow, while those farther away grew normally, meaning that the mould killed the bacteria. [ 23 ] Fleming photographed the culture and took a sample of the mould for identification before preserving the culture with formaldehyde . [ 24 ]
Fleming resumed his vacation and returned in September. He collected the original mould and grew it in culture plates. After four days he found that the plates developed large colonies of the mould. He repeated the experiment with the same bacteria-killing results. [ 21 ] [ 24 ] [ 25 ] He concluded that the mould was releasing a substance that was inhibiting bacterial growth. [ 26 ] On testing against different bacteria, he found that the mould could kill only certain Gram-positive bacteria. [ 27 ] Staphylococcus , Streptococcus , and diphtheria bacillus ( Corynebacterium diphtheriae ) were easily killed, but there was no effect on typhoid bacterium ( Salmonella typhimurium ) and a bacterium once thought to cause influenza ( Haemophilus influenzae ). He prepared a culture method from which he could obtain the mould juice, which he called "penicillin" on 7 March 1929, "to avoid the repetition of the rather cumbersome phrase 'mould broth filtrate'." [ 28 ] [ 22 ] In his Nobel lecture of 1945 he gave a further explanation, saying:
I have been frequently asked why I invented the name "Penicillin". I simply followed perfectly orthodox lines and coined a word which explained that the substance penicillin was derived from a plant of the genus Penicillium just as many years ago the word " Digitalin " was invented for a substance derived from the plant Digitalis . [ 29 ]
After structural comparison with different species of Penicillium , Fleming believed that his specimen was Penicillium chrysogenum , a species described by an American microbiologist Charles Thom in 1910. Charles John Patrick La Touche, an Irish botanist, had recently joined St Mary's as a mycologist , and he identified the specimen as Penicillium rubrum , the identification used by Fleming in his publication. [ 30 ] [ 31 ] In 1931, Thom re-examined different Penicillia , including that of Fleming's specimen, and he came to the conclusion that Fleming's specimen was P. notatum , a member of the P. chrysogenum series. [ 32 ] From then on, Fleming's mould was synonymously referred to as P. notatum and P. chrysogenum . [ 33 ] [ 34 ] To resolve the confusion, the Seventeenth International Botanical Congress held in Vienna, Austria, in 2005 formally adopted P. chrysogenum as the name. [ 35 ] Whole-genome sequence and phylogenetic analysis in 2011 revealed that Fleming's mould belongs to P. rubens , a species described by Belgian microbiologist Philibert Biourge in 1923. [ 33 ] [ 36 ]
The source of the fungal contamination in Fleming's experiment remained a matter of speculation for several decades. Fleming suggested in 1945 that the fungal spores came through the window facing Praed Street , [ 37 ] but this was disputed by his co-workers, who testified much later that Fleming's laboratory window was kept shut, [ 38 ] and that Fleming was unable to reach the window to open it. [ 39 ] A consensus developed that the mould had come from La Touche's laboratory, a floor below Fleming's, and that spores had drifted in through the open doors. [ 40 ]
For the effect on the cultures of staphylococci that Fleming observed, the mould had to be growing before the bacteria began to grow, because penicillin is only effective on bacteria when they are reproducing. Fortuitously, the temperature in the laboratory during that August was optimum first for the growth of the mould, below 20°C, and later in the month for the bacteria, when it reached 25°C. Had Fleming put the cultures in an incubator instead of leaving them on his laboratory bench, the phenomenon would not have occurred. [ 41 ]
Fleming was a bacteriologist, not a chemist, so he left most of the chemical work to Craddock. [ 21 ] In January 1929, Fleming recruited Frederick Ridley, a former research student of his who had studied biochemistry, to study the chemical properties of the mould, [ 23 ] but Craddock and Ridley could not isolate penicillin, and before the experiments were over, both had left for other jobs. It was due to this failure to isolate the compound that Fleming abandoned further research on the chemical aspects of penicillin in 1929. [ 22 ]
Fleming submitted his findings to the British Journal of Experimental Pathology on 10 May 1929, and they were published in the next month's issue, [ 42 ] [ 43 ] but the article failed to attract much attention. Fleming himself was quite unsure of the medical application of his work and was more concerned with its application for bacterial isolation. [ 42 ] The article also contained some important errors. Although Ridley and Craddock had demonstrated that penicillin was soluble in ether , acetone and alcohol as well as in water – information that would be critical to its isolation – Fleming erroneously claimed that it was soluble in alcohol and insoluble in ether and chloroform , which had not been tested. [ 44 ] In fact, penicillin is soluble in ethanol , ether and chloroform. [ 45 ]
In 1939, at the Sir William Dunn School of Pathology at the University of Oxford , Ernst Boris Chain found Fleming's largely forgotten 1929 paper, and suggested to the professor in charge of the school, the Australian scientist Howard Florey , that the study of antibacterial substances produced by micro-organisms might be a fruitful avenue of research. [ 46 ] Howard Florey led an interdisciplinary research team that included Edward Abraham , Mary Ethel Florey , Arthur Duncan Gardner , Norman Heatley , Margaret Jennings , Jean Orr-Ewing and Gordon Sanders. [ 47 ] [ 48 ] Each member of the team tackled a particular aspect of the problem in their own manner, with simultaneous research along different lines building up a complete picture. This sort of collaboration was practically unknown in the United Kingdom at the time. [ 49 ] Three antibacterial agents were initially chosen for investigation: those of Bacillus subtilis and Trueperella pyogenes , and penicillin . [ 50 ] "[The possibility] that penicillin could have practical use in clinical medicine", Chain later recalled, "did not enter our minds when we started our work on penicillin." [ 51 ]
The broad subject area was deliberately chosen to be one requiring long-term funding. [ 46 ] Howard Florey approached the Medical Research Council (MRC) in September 1939, and the secretary of the council, Edward Mellanby, authorized the project, allocating £250 (equivalent to £20,000 in 2023) to launch it, with £300 for salaries (equivalent to £23,000 in 2023) and £100 for expenses (equivalent to £8,000 in 2023) per annum for three years. Florey felt that more would be required. On 1 November 1939, Henry M. "Dusty" Miller Jr from the Natural Sciences Division of the Rockefeller Foundation paid Florey a visit. Miller encouraged Florey to apply for funding from the foundation and supported his application. [ 52 ] [ 53 ] "The work proposed", Florey wrote in the application letter, "in addition to its theoretical importance, may have practical value for therapeutic purposes." [ 54 ] His application was approved, with the foundation allocating US$5,000 (£1,250) per annum for five years. [ 52 ] [ 53 ]
The Oxford team's first task was to obtain a sample of penicillin mould. This turned out to be easy. Georges Dreyer , Florey's predecessor, had obtained a sample of the mould in 1930 for his work on bacteriophages , viruses that infect bacteria. Dreyer had lost interest in penicillin when he discovered that it was not a bacteriophage, but he had continued to cultivate it. [ 55 ] [ 56 ] Dreyer had died in 1934, but his colleague Campbell-Renton had continued to culture the mould and was able to supply it to the Oxford team. [ 57 ] [ 58 ] The next task was to grow sufficient mould to extract enough penicillin for laboratory experiments. The mould was cultured on a surface of liquid Czapek-Dox medium . Over the course of a few days it formed a yellow gelatinous skin covered in green spores. Beneath this, the liquid became yellow and contained penicillin. The team determined that the maximum yield was achieved in ten to twenty days. [ 59 ]
The mould needs air to grow, so cultivation required a container with a large surface area. Initially, glass bottles laid on their sides were used. Most laboratory containers did not provide a large, flat area, and so were an uneconomical use of incubator space. [ 59 ] The bedpan was found to be practical, and was the basis for specially-made ceramic containers fabricated by J. Macintyre and Company in Burslem . These containers were rectangular in shape and could be stacked to save space. [ 60 ] The MRC agreed to Florey's request for £300 (equivalent to £21,000 in 2023) and £2 each per week (equivalent to £138 in 2023) for two (later more) women factory hands. In 1943 Florey asked for their wages to be increased to £2 10s each per week (equivalent to £142 in 2023). [ 61 ] Heatley collected the first 174 of an order for 500 vessels on 22 December 1940, and they were seeded with spores three days later. [ 62 ]
Efforts were made to coax the mould into producing more penicillin. Heatley tried adding various substances to the medium, including sugars, salts, malts, alcohol and even Marmite , without success. [ 63 ] At the suggestion of Paul Fildes , he tried adding brewing yeast . This did not improve the yield either, but it did cut the incubation time by a third. [ 59 ] The team also discovered that if the penicillin-bearing fluid was removed and replaced by fresh fluid, a second batch of penicillin could be prepared, [ 59 ] but this practice was discontinued after eighteen months due to the danger of contamination. The mould had to be grown under sterile conditions. [ 64 ] Abraham and Chain discovered that some airborne bacteria produced penicillinase , an enzyme that destroys penicillin. [ 65 ] It was not known why the mould produced penicillin, as the bacteria penicillin kills are no threat to the mould; it was conjectured to be a byproduct of metabolic processes serving other purposes. [ 64 ]
The next stage of the process was to extract the penicillin. The liquid was filtered through parachute silk to remove the mycelium , spores and other solid debris. [ 66 ] The pH was lowered by the addition of phosphoric acid and the resulting liquid was cooled. [ 67 ] Chain determined that penicillin was stable only with a pH of between 5 and 8, but the process required one lower than that. By keeping the mixture at 0 °C, he could retard the breakdown process. [ 68 ] In this form the penicillin could be drawn off by a solvent. Initially ether was used, as it was the only solvent known to dissolve penicillin, but it is highly inflammable and toxic. At Chain's suggestion, they tried using the much less flammable amyl acetate instead, and found that it also worked. [ 66 ] [ 69 ] [ 70 ] [ 71 ]
Heatley was able to develop a continuous extraction process. The penicillin-bearing solvent was easily separated from the liquid, as it floated on top, but now they encountered the problem that had stymied Craddock and Ridley: recovering the penicillin from the solvent. Heatley reasoned that if the penicillin could pass from water to solvent when the solution was acidic , maybe it would pass back again if the solution was alkaline . Florey told him to give it a try. Sodium hydroxide was added, and this method, which Heatley called "reverse extraction", was found to work. [ 66 ] [ 70 ] The next problem was how to extract the penicillin from the water. The usual means of extracting something from water were through evaporation or boiling, but this would destroy the penicillin. Chain hit upon the idea of freeze drying , a technique recently developed in Sweden. This enabled the water to be removed, resulting in a dry, brown powder. [ 66 ] [ 68 ]
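Heatley's "reverse extraction" exploits ordinary acid-base behaviour. As a rough illustration (a sketch only: penicillin is a carboxylic acid, and the pKa value below is an assumed textbook figure for penicillin G, not something given in this article; the function name is hypothetical), the un-ionized, solvent-soluble fraction of a weak acid falls steeply as the pH rises, which is why acidified penicillin passes into the solvent and alkalised penicillin passes back into water:

```python
# Why "reverse extraction" works, in Henderson-Hasselbalch terms.
# The pKa (~2.7 for penicillin G) is an outside assumption, not from the text.

def unionized_fraction(ph: float, pka: float = 2.7) -> float:
    """Fraction of a weak acid in its neutral (solvent-soluble) form."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

for ph in (2.0, 5.0, 7.5):
    print(f"pH {ph}: {unionized_fraction(ph):.2%} un-ionized")

# pH 2.0: ~83% un-ionized  -> extractable into the organic solvent
# pH 5.0: ~0.5% un-ionized -> mostly ionized, stays in (or returns to) water
# pH 7.5: ~0.002%          -> essentially all in the aqueous phase
```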
Heatley developed a penicillin assay using agar nutrient plates in which bacteria were seeded. Short glass cylinders containing the penicillin-bearing fluid to be tested were then placed on them and incubated for 12 to 16 hours at 37 °C. By then the fluid would have disappeared and the cylinder would be surrounded by a bacteria-free ring, whose diameter indicated the strength of the penicillin. [ 67 ] An Oxford unit was defined as the amount of penicillin required to produce a 25 mm bacteria-free ring. [ 57 ] It was an arbitrary measurement, as the chemistry was not yet known; the first research was conducted with preparations containing four or five Oxford units per milligram. Later, when highly pure penicillin became available, it was found to have 2,000 Oxford units per milligram. [ 72 ] Yet in testing the impure substance, they found it effective against bacteria even at concentrations of one part per million. Penicillin was at least twenty times as active as the most powerful sulfonamide . [ 68 ] The Oxford unit turned out to be very small; treating a single case required about a million units. [ 73 ]
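These figures imply how crude the early preparations were. The following back-of-the-envelope sketch uses only the numbers quoted above (taking four Oxford units per milligram for the early material and one million units per case); the derived purity and masses are illustrative, not sourced figures.

```python
# Rough purity and dosage estimates implied by the Oxford-unit figures above.
# Inputs are the values quoted in the text; outputs are derived, illustrative.

early_potency_u_per_mg = 4      # early impure preparations: ~4-5 Oxford units/mg
pure_potency_u_per_mg = 2000    # highly purified penicillin: 2,000 Oxford units/mg
units_per_case = 1_000_000      # "about a million units" to treat a single case

# Fraction of the early powder that was actually penicillin
purity = early_potency_u_per_mg / pure_potency_u_per_mg
print(f"implied purity of early material: {purity:.2%}")   # ~0.20%

# Mass of material needed for one course of treatment (mg -> g)
early_mass_g = units_per_case / early_potency_u_per_mg / 1000
pure_mass_g = units_per_case / pure_potency_u_per_mg / 1000
print(f"early material per case: ~{early_mass_g:.0f} g")   # ~250 g
print(f"pure penicillin per case: ~{pure_mass_g:.1f} g")   # ~0.5 g
```

At four units per milligram, a single course would thus have required on the order of a quarter-kilogram of the crude material, which puts the production efforts described below into perspective.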
The Oxford team reported details of the isolation method in August 1941, with a scheme for large-scale extraction. [ 67 ] In March 1942, they reported that they could prepare a highly purified compound, [ 74 ] [ 75 ] and had worked out the chemical formula as C₂₄H₃₂O₁₀N₂Ba. [ 76 ]
Howard Florey's team at Oxford showed that Penicillium extract killed different bacteria. Gardner and Orr-Ewing tested it against gonococcus (against which it was most effective), meningococcus , streptococcus, staphylococcus, anthrax bacteria, actinomyces , tetanus bacterium ( Clostridium tetani ) and gangrene bacteria. They observed bacteria attempting to grow in the presence of penicillin, and noted that penicillin was neither an enzyme that broke the bacteria down, nor an antiseptic that killed them; rather, it was a chemical that interfered with the process of cell division . [ 77 ] [ 78 ] Jennings observed that it had no effect on white blood cells , and would therefore reinforce rather than hinder the body's natural defences against bacteria. She also found that unlike the sulphonamides , the first and only effective antibacterial drugs available at the time, it was not destroyed by pus . Medawar found that it did not affect the growth of tissue cells. [ 79 ]
By March 1940 the Oxford team had sufficient impure penicillin to commence testing whether it was toxic. Over the next two months, Florey and Jennings conducted a series of experiments on rats, mice, rabbits and cats in which penicillin was administered in various ways. Their results showed that penicillin was destroyed in the stomach, but that all forms of injection were effective, as indicated by assay of the blood. It was found that penicillin was largely and rapidly excreted unchanged in the urine. [ 80 ] They found no evidence of toxicity in any of their animals. Had they tested it on guinea pigs, research might have halted at this point, for penicillin is toxic to guinea pigs. [ 81 ]
At 11:00 am on Saturday 25 May 1940, Florey injected eight mice with a virulent strain of Streptococcus , and then injected four of them with the penicillin solution. These four were divided into two groups: two of them received 10 milligrams once, and the other two received 5 milligrams at regular intervals. By 3:30 am on Sunday all four of the untreated mice were dead. All of the treated ones were still alive, although one died two days later. [ 82 ] [ 83 ] Florey described the result to Jennings as "a miracle." [ 84 ]
Jennings and Florey repeated the experiment on Monday with ten mice; this time, all six of the treated mice survived, as did one of the four controls. On Tuesday, they repeated it with sixteen mice, administering different doses of penicillin. All six of the control mice died within 24 hours but the treated mice survived for several days, although all were dead within nineteen days. [ 83 ] On 1 July, the experiment was performed with fifty mice, half of whom received penicillin. All twenty-five of the control mice died within sixteen hours while all but one of the treated mice were alive ten days later. Over the following weeks they performed experiments with batches of 50 or 75 mice, but using different bacteria. They found that penicillin was also effective against staphylococcus and gas gangrene . [ 85 ] Florey reminded his staff that promising as their results were, a human being weighed 3,000 times as much as a mouse. [ 86 ]
The Oxford team reported their results in the 24 August 1940 issue of The Lancet , a prestigious medical journal, as "Penicillin as a Chemotherapeutic Agent" with names of the seven joint authors listed alphabetically. [ 82 ] [ 87 ] They concluded:
The results are clear cut, and show that penicillin is active in vivo against at least three of the organisms inhibited in vitro. It would seem a reasonable hope that all organisms in high dilution in vitro will be found to be dealt with in vivo. Penicillin does not appear to be related to any chemotherapeutic substance at present in use and is particularly remarkable for its activity against the anaerobic organisms associated with gas gangrene . [ 82 ]
The publication attracted little attention; Florey would spend much of the next two years attempting to convince people of the significance of their results. One reader was Fleming, who paid them a visit on 2 September 1940. Florey and Chain gave him a tour of the production, extraction and testing laboratories, but he made no comment and did not congratulate them on the work they had done. Some members of the Oxford team suspected that he was trying to claim some credit for it. [ 88 ] [ 89 ]
Unbeknown to the Oxford team, their Lancet article was read by Martin Henry Dawson , Gladys Hobby and Karl Meyer at Columbia University, and they were inspired to replicate the Oxford team's results. They obtained a culture of penicillium mould from Roger Reid at Johns Hopkins Hospital , grown from a sample he had received from Fleming in 1935. They began growing the mould on 23 September, and on 30 September tested it against viridans streptococci , and confirmed the Oxford team's results. Meyer duplicated Chain's processes, and they obtained a small quantity of penicillin. On 15 October 1940, doses of penicillin were administered to two patients at the Presbyterian Hospital in New York City, Aaron Alston and Charles Aronson. They became the first persons to receive penicillin treatment in the United States. Dawson then treated two patients with endocarditis . [ 90 ] [ 91 ] The Columbia team presented the results of their penicillin treatment of the four patients at the annual meeting of the American Society for Clinical Investigation in Atlantic City, New Jersey , on 5 May 1941. Their paper was reported on by William L. Laurence in The New York Times and generated great public interest. [ 91 ] [ 92 ] [ 93 ]
At Oxford, Charles Fletcher volunteered to find test cases for human trials. Elva Akers, an Oxford woman dying from incurable cancer, agreed to be a test subject for the toxicity of penicillin. On 17 January 1941, he intravenously injected her with 100 mg of penicillin. Her temperature briefly rose, but otherwise she had no ill-effects. Florey reckoned that the fever was caused by pyrogens in the penicillin; these were removed with improved chromatography . [ 94 ] Fletcher next identified an Oxford policeman, Albert Alexander , who had a severe facial infection involving streptococci and staphylococci which had developed from a small sore at the corner of his mouth. His whole face, eyes and scalp were swollen to the extent that he had an eye removed to relieve the pain. [ 94 ] [ 95 ]
On 12 February, Fletcher administered 200 mg of penicillin, followed by 100 mg doses every three hours. Within a day of being given penicillin, Alexander started to recover; his temperature dropped and discharge from his suppurating wounds declined. By 17 February, his right eye had become normal. However, the researchers did not have enough penicillin to help him to a full recovery. Penicillin was recovered from his urine, but it was not enough. In early March he relapsed, and he died on 15 March. Because of this experience and the difficulty in producing penicillin, Florey changed the focus to treating children, who could be treated with smaller quantities of penicillin. [ 94 ] [ 95 ]
Subsequently, several patients were treated successfully. The second was Arthur Jones, a 15-year-old boy with a streptococcal infection from a hip operation. He was given 100 mg every three hours for five days and recovered. Percy Hawkin, a 42-year-old labourer, had a 100-millimetre (4 in) carbuncle on his back. He was given an initial 200 mg on 3 May followed by 100 mg every hour. The carbuncle completely disappeared. John Cox, a semi-comatose 4-year-old boy, was treated starting on 16 May. He died on 31 May, but the post-mortem indicated this was from a ruptured artery in the brain, and there was no sign of infection. The fifth case, on 16 June, was a 14-year-old boy with an infection from a hip operation who made a full recovery. [ 96 ]
In addition to increased production at the Dunn School, commercial production from a pilot plant established by Imperial Chemical Industries became available in January 1942, and Kemball, Bishop and Company delivered its first batch of 910 litres (200 imp gal) on 11 September. Florey decided that the time was ripe to conduct a second series of clinical trials. Ethel Florey was placed in charge, but while Howard Florey was a consulting pathologist at Oxford hospitals, and therefore entitled to use their wards and services, Ethel, to his annoyance, was accredited merely as his assistant. Doctors tended to refer patients to the trial who were in desperate circumstances rather than the most suitable, but when penicillin did succeed, confidence in its efficacy rose. [ 97 ]
Ethel and Howard Florey published the results of clinical trials of penicillin in The Lancet on 27 March 1943, reporting the treatment of 187 cases of sepsis with penicillin. [ 98 ] It was upon this medical evidence that the British War Cabinet set up the Penicillin Committee on 5 April 1943. The committee consisted of Cecil Weir , Director General of Equipment, as chairman; Alexander Fleming; Howard Florey; V. D. Allison, another one of Fleming's former research students; Sir Percival Hartley , the head of the MRC; and representatives from pharmaceutical companies. [ 99 ] This led to the mass production of penicillin by the next year. [ 100 ]
Knowing that large-scale production for medical use was impractical in a laboratory, the Oxford team tried to convince the wartime British government and private companies to engage in mass production, but the initial response was muted. Dr Blount, director of research at Glaxo Laboratories , wrote to Florey at Oxford in September 1940 but received no reply. It appeared that Florey had already appealed for assistance to two British pharmaceutical companies but had been turned down by them, and had become disillusioned with the British pharmaceutical industry. [ 101 ]
In April 1941, Warren Weaver met with Florey, and they discussed the difficulty of producing sufficient penicillin to conduct clinical trials. Weaver arranged for the Rockefeller Foundation to fund a three-month visit to the United States for Florey and a colleague to explore the possibility of production of penicillin there. [ 102 ] Florey and Heatley left for the United States by air on 27 June 1941. [ 103 ] Knowing that mould samples kept in vials could be easily lost, they smeared their coat pockets with the mould. [ 78 ]
Florey met with neurophysiologist John Fulton , who introduced him to Ross Harrison , the Chairman of the National Research Council (NRC). Harrison referred Florey to Thom, the chief mycologist at the Bureau of Plant Industry of the United States Department of Agriculture (USDA) in Beltsville, Maryland , and the man who had identified the mould reported by Fleming. On 9 July, Thom took Florey and Heatley to Washington, D.C. , to meet Percy Wells, the acting assistant chief of the USDA Bureau of Agricultural and Industrial Chemistry and as such the head of the USDA's four laboratories. Wells sent an introductory telegram to Orville May, the director of the USDA's Northern Regional Research Laboratory (NRRL) in Peoria, Illinois . They met with May on 14 July, and he arranged for them to meet Robert D. Coghill, the chief of the NRRL's fermentation division, who raised the possibility that fermentation in large vessels might be the key to large-scale production. [ 104 ] [ 105 ] [ 106 ]
On 17 August, Florey met with Alfred Newton Richards , the chairman of the Committee for Medical Research (CMR) of the Office of Scientific Research and Development (OSRD), who promised his support. [ 107 ] On 8 October, Richards held a meeting with representatives of four major pharmaceutical companies: Squibb , Merck , Pfizer and Lederle . Vannevar Bush , the director of OSRD, was present, as was Thom, who represented the NRRL. Richards told them that antitrust laws would be suspended, allowing them to share information about penicillin. This was not legalized until 7 December 1943, and it covered only penicillin and no other drug. [ 108 ] [ 109 ] OSRD arranged with the War Production Board (WPB) for them to have priority for equipment for laboratories and pilot plants. [ 110 ]
Coghill made Andrew J. Moyer available to work on penicillin with Heatley, while Florey left to see if he could arrange for a pharmaceutical company to manufacture penicillin. As a first step to increasing yield, Moyer replaced sucrose in the growth media with lactose . An even larger increase occurred when Moyer added corn steep liquor , a byproduct of the corn industry that the NRRL routinely tried in the hope of finding more uses for it. The effect on penicillin was dramatic; Heatley and Moyer found that it increased the yield tenfold. [ 103 ]
At the Yale New Haven Hospital in March 1942, Anne Sheafe Miller, the wife of Yale University 's athletics director, Ogden D. Miller, was succumbing to a streptococcal septicaemia contracted after a miscarriage . Her doctor, John Bumstead, was also treating John Fulton for an infection at the time. He knew that Fulton knew Florey, and that Florey's children were staying with him. He went to Fulton to plead for some penicillin. Florey had returned to the UK, but Heatley was still in the United States, working with Merck. A phone call to Richards released 5.5 grams of penicillin earmarked for a clinical trial, which was despatched from Washington, D.C., by air. The effect was dramatic; within 48 hours her 41 °C (106 °F) fever had abated and she was eating again. Her blood culture count had dropped from 100 to 150 bacteria colonies per millilitre to just one. Bumstead suggested reducing the penicillin dose from 200 milligrams; Heatley told him not to. Heatley subsequently came to New Haven, where he collected her urine; about 3 grams of penicillin were recovered. Miller made a full recovery, and lived until 1999. [ 111 ] [ 112 ] [ 113 ]
Until May 1943, almost all penicillin was produced using the shallow-pan method pioneered by the Oxford team, [ 114 ] but NRRL mycologist Kenneth Bryan Raper experimented with deep submergence production, in which penicillin mould was grown in a vat instead of a shallow dish. The initial results were disappointing; penicillin cultured in this manner yielded only three to four Oxford units per cubic centimetre, compared to twenty for surface cultures. [ 115 ] He enlisted the help of the U.S. Army's Air Transport Command to search for similar moulds in different parts of the world. The best moulds were found to be those from Chongqing , Bombay and Cape Town . The best sample of all, however, was from a cantaloupe sold in a Peoria fruit market in 1943. The mould was identified as Penicillium chrysogenum and designated NRRL 1951, also known as the cantaloupe strain . [ 106 ] [ 116 ] The spores may have escaped from the NRRL. [ 117 ] [ a ] [ b ]
Between 1941 and 1943, Moyer, Coghill and Raper developed methods for industrialized penicillin production and isolated higher-yielding strains of the Penicillium fungus. To improve upon that strain, researchers at the Carnegie Institution of Washington subjected NRRL 1951 to X-rays to produce a mutant strain designated X-1612, which produced 300 milligrams of penicillin per litre of culture, twice as much as NRRL 1951. In turn, researchers at the University of Wisconsin used ultraviolet radiation on X-1612 to produce a strain designated Q-176. This produced more than twice the penicillin of X-1612, but in the form of the less desirable penicillin K. [ c ] Phenylacetic acid was added to switch it to producing the highly potent penicillin G. This strain could produce up to 550 milligrams of penicillin per litre. [ 122 ] [ 116 ]
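Taken together, the yield figures quoted in this paragraph imply a roughly 3.7-fold improvement from the cantaloupe strain to Q-176 producing penicillin G. A small sketch of that chain (the NRRL 1951 figure is inferred from "twice as much"; everything else is as stated, and the numbers are illustrative only):

```python
# Implied yield chain from the strain-improvement figures quoted above.
# X-1612's 300 mg/L and the 550 mg/L penicillin G figure are stated in the
# text; NRRL 1951's yield is inferred from "twice as much as NRRL 1951".

x_1612 = 300                 # mg of penicillin per litre (stated)
nrrl_1951 = x_1612 / 2       # implied: ~150 mg/L
q_176_pen_g = 550            # mg/L of penicillin G (stated upper figure)

print(f"NRRL 1951 (implied): {nrrl_1951:.0f} mg/L")
print(f"X-1612 (stated):     {x_1612} mg/L")
print(f"Q-176 + phenylacetic acid (stated): {q_176_pen_g} mg/L")
print(f"overall improvement: {q_176_pen_g / nrrl_1951:.1f}x")   # ~3.7x
```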
Pfizer was a small Brooklyn company that specialised in making citric acid , for which it had developed deep submergence techniques. This involved converting molasses to citric acid by fermenting it in a large tank in which it was stirred and the pH was carefully controlled. [ 123 ] Pfizer's vice president, John L. Smith , whose daughter had died from an infection, put all of Pfizer's resources into the development of a practical deep submergence technique. [ 124 ] The company invested $2.98 million in penicillin in 1943 and 1944 (equivalent to $53 million in 2024). Pfizer scientists Jasper H. Kane , G. M. Shull, E. M. Weber, A. C. Finlay and E. J. Ratajak worked on the fermentation process while R. Pasternak, W. J. Smith, V. Bogert and P. Regna developed extraction techniques. [ 125 ]
Now that they had a mould that grew well submerged and produced an acceptable amount of penicillin, the next challenge was to supply the submerged mould with the air it needed to grow. This was solved using an aerator , but aeration caused severe foaming of the corn steep. The foaming problem was solved by the introduction of an anti-foaming agent, glyceryl monoricinoleate. The technique also involved cooling and mixing. [ 126 ]
Pfizer opened a pilot plant with a 7,600-litre (2,000 US gal) fermentor in August 1943 and Ratajak delivered the first penicillin liquor from it on 27 August. The one tank was soon producing half the company's output. Smith then decided to construct a full-scale production plant. The nearby Rubel Ice plant was acquired on 20 September 1943 and converted into the first deep-submergence production plant, with fourteen 130,000-litre (34,000 US gal) tanks. The work was carried out in five months under the leadership of John E. McKeen and Edward J. Goett, and the plant opened on 1 March 1944. [ 124 ] [ 125 ] [ 127 ]
In mid-1943 the Australian War Cabinet decided to produce penicillin in Australia. Colonel E. V. (Bill) Keogh , the Australian Army's Director of Hygiene and Pathology, was placed in charge of the effort. Keogh summoned Captain Percival Bazeley , with whom he had worked at the Commonwealth Serum Laboratories (CSL) before the war, and Lieutenant H. H. Kretchmar, a chemist, and directed them to establish a production facility by Christmas. They set off on a fact-finding mission to the United States, where they visited NRRL and obtained penicillin cultures from Coghill. They also inspected the Pfizer plant in Brooklyn and the Merck plant at Rahway, New Jersey . A production plant was established at the CSL facilities in Parkville, Victoria , and the first Australian-made penicillin began reaching the troops in New Guinea in December 1943. By 1944, CSL was producing 400 million Oxford units per week, and there was sufficient penicillin production to allocate some for civilian use. [ 128 ] [ 129 ]
Wartime production in Australia was in bottles and flasks, but Bazeley made a second tour of facilities in the United States between September 1944 and March 1945 and was impressed by the progress made on deep submergence technology. In 1946 and 1947 he created a pilot deep-submergence plant at CSL using small 45-litre (10 imp gal) tanks to gain experience with the technique. Two 23,000-litre (5,000 imp gal) tanks became operational in 1948, followed by eight more. During the 1950s and 1960s, CSL produced semisynthetic penicillin as well. Penicillin was also produced by F.H. Faulding in South Australia, Abbott Laboratories in New South Wales and Glaxo in Victoria. By the 1970s there was a worldwide glut of penicillin, and Glaxo ceased production in 1975 and CSL in 1980. [ 130 ]
During his visit to North America in August 1941, Howard Florey approached the Connaught Laboratories at the University of Toronto , where he met with the director, R. D. Defries, and Ronald Hare. Florey was rebuffed; Defries argued that the laboratories did not have the space, and he expressed his belief that constructing facilities to culture penicillin would be a waste as it would soon be synthesised. The results of clinical trials caused a change of heart, and in August 1943 the Canadian government asked the Connaught Laboratories to initiate mass production of penicillin. The Spadina Building was purchased by the University of Toronto for the purpose, and refurbished at a cost of Canadian $1.2 million (equivalent to Canadian $21 million in 2023), split equally between the university and the government. Penicillin was initially cultured in 200,000 bottles occupying 740 square metres (8,000 sq ft) of air-conditioned laboratory space. Production was switched to the deep submergence method in November 1945. [ 131 ] [ 132 ]
A translation of the Oxford team's 1941 report reached Germany via Sweden the following year. [ 133 ] Like most research in wartime Germany, research into penicillin was carried out in a fragmentary fashion with little coordination. [ 133 ] On 6 December 1943, the Reich Health Ministry ordered the medical community to conduct research into penicillin and other antibiotics. [ 134 ] Three vials of penicillin captured by the Afrika Korps reached Germany in 1943 and one was sent to Heinz Öppinger at Hoechst in Frankfurt , and he began conducting experiments with moulds. Penicillin was produced there in 300-litre batches, and Öppinger developed a rotating drum for a deep-tank fermentation process. [ 133 ] [ 135 ]
Research was also carried out by Schering in Berlin using a sample of Fleming's mould, which they failed to cultivate; their efforts to determine the chemical structure of penicillin were also unsuccessful. [ 136 ] Maria Brommelhues at IG Farben 's Bacteriological Laboratory in Elberfeld catalogued different species of Penicillium . [ 137 ] Hitler's personal physician, Theodor Morell , treated Hitler with penicillin for injuries sustained in the 20 July 1944 assassination attempt . [ 134 ] Information about penicillin research in Germany was gathered by the Manhattan Project 's Alsos Mission and forwarded to Florey in the UK. [ 138 ] [ d ]
Much of Germany's penicillin came from Czechoslovakia, where research was carried out at Charles University in Prague and the Fragner Pharmaceutical Company by a team that included chemist Karel Wiesner . Work was also conducted in secret in France and at the Delft University of Technology in the Netherlands. [ 140 ] In 1946 and 1947, penicillin factories were established in Belarus, Ukraine, Poland, Italy and Yugoslavia with plant and expertise from Canada through the United Nations Relief and Rehabilitation Administration (UNRRA), of which Canadian Lester B. Pearson was the head of its supply committee. UNRRA was wound up in 1948, and its penicillin responsibilities were transferred to the World Health Organization (WHO). [ 141 ]
In Italy, Domenico Marotta negotiated with UNRRA for a penicillin plant to be built in Rome near the Sapienza University of Rome . This took longer than expected and construction did not commence until 1948. In the meantime, Chain came to the Istituto Superiore di Sanità to deliver a series of lectures on penicillin and Marotta took the opportunity to recruit him as a colleague. Chain suggested that instead of building a pilot plant, they use the UNRRA money to build an institute for research into penicillin. This became the largest of its kind in the world, with over one hundred chemists, biochemists, microbiologists and technicians, and was soon at the forefront of research into semisynthetic penicillin. [ 142 ]
Manfred Kiese at the Pharmacological Institute in Berlin published a survey of literature on antibiotics in the 7 August 1943 issue of Klinische Wochenschrift that included the Oxford team's publications. A copy was acquired by the Japanese embassy in Berlin and taken to Japan on the Japanese submarine I-8 , which docked at Kure, Hiroshima , on 21 December 1943. The article was translated into Japanese, and production of penicillin was underway by 1 February 1944. By mid-May, a research team under Hamao Umezawa had tested 750 different strains of mould and found that 75 exhibited antibiotic activity. Experiments were conducted on mice to determine efficacy and toxicity. The Morinaga Milk company had a small penicillin production plant in operation in Mishima, Shizuoka , by the end of the year, and the Banyu Pharmaceutical Company opened a small plant in Okazaki, Aichi , in January 1945. The penicillin was called "Hekiso" after its blue colour. By 1948 Japan had become the third country, after the US and UK, to become self-sufficient in penicillin, and exports to China and Korea began the following year. [ 143 ] [ 144 ]
In the UK, the firm Kemball, Bishop & Co. was asked in early 1941 if it could produce 45,000 litres (10,000 imp gal) of raw penicillin brew. [ 145 ] Like Pfizer, with which it had a commercial relationship, it was a small firm, but one with experience in fermentation techniques as a manufacturer of citric acid. [ 146 ] It was unable to do so at the time, [ 145 ] but on 23 February 1942, Florey received an offer from Kemball, Bishop & Co. of a more modest effort of 910 litres (200 imp gal) every ten days. [ 147 ] Work commenced at its Bromley-by-Bow plant on 5 March 1942 and the first trays of mould were seeded on 25 March. [ 146 ]
Wartime conditions, including German bombing, made progress difficult. The 55-litre (12 imp gal) milk churns needed for shipment were in short supply, and special arrangements were made with the Ministry of Supply . The brew was initially despatched by rail to minimise the use of rationed petrol. [ 147 ] The first 680 litres (150 imp gal) of brew, containing 6.1 million units at 9 units per mL, were delivered to Florey on 28 October 1942. [ 146 ] Kemball, Bishop & Co. built an extraction plant, which became operational on 24 November 1943. [ 147 ]
In the meantime, Imperial Chemical Industries (ICI) had established a small production unit at its plant in Blackley and had begun shipments in December 1941. In May 1942, production moved to a purpose-built plant at Trafford Park , which initially produced two million Oxford units of penicillin per week. Production was ramped up to sixty million units per week by the time the plant was closed in March 1944; production shifted thereafter to a new plant that produced 300 million units per week. [ 147 ] [ 148 ] In 1947 ICI decided to construct a new plant to produce 32,000 litres (7,000 imp gal) of penicillin per day by the deep submergence method. [ 149 ]
Glaxo Laboratories opened a small production plant at Greenford in December 1942 that produced 70 litres of penicillin broth per week. In February 1943, it opened a second plant at Aylesbury . Initially it used the techniques developed at Oxford, but in September 1943 it switched to corn steep liquor as a medium and to the NRRL 1249.B21 strain of mould provided by Coghill. In 1943, Glaxo was responsible for 2,570 million of the 3,500 million Oxford units produced in the UK. Glaxo opened a third factory at Watford in February 1944 and a fourth at Stratford, London , in January 1945. The company was responsible for 80 per cent of the UK's output up to June 1944. [ 150 ]
In 1944 the Ministry of Supply arranged for the Commercial Solvents Company to install the first deep submergence plant at Speke , and it asked Glaxo to build one too. This new Glaxo plant opened at Barnard Castle in January 1946 and produced more penicillin over the next nine months than its surface plants had produced in all of 1945. The surface plants were all closed in 1946. [ 151 ] Penicillin production in the UK increased from 25 million units per week in March 1943 to 30 billion per week in 1946. [ 152 ]
The WPB placed penicillin under a wartime allocation system on 16 July 1943. All supplies were designated for use by the armed forces and the Public Health Service . [ 153 ] Penicillin production in the United States ramped up from 800 million Oxford units in the first half of 1943 to 20 billion units in the second half. [ 154 ] The US government built six production plants at a cost of $7.6 million (equivalent to $136 million in 2024). These were sold after the war to the companies that operated them for $3.4 million (equivalent to $55 million in 2024). Another sixteen plants were built by the private sector for $22.6 million (equivalent to $404 million in 2024), although $14.5 million (equivalent to $259 million in 2024) was approved for accelerated depreciation under which the cost could be written off in five years instead of the usual twelve to fifteen. [ 155 ]
US penicillin production rose from 21.192 billion units in 1943 to 1,663 billion units in 1944 and an estimated 6,852 billion units in 1945. [ 156 ] By June 1944, Pfizer alone was producing 70 billion units per month. [ 157 ] Monthly production dropped off after July 1945 due to a shortage of corn steep liquor. The price offered by the CMR for a million units fell from $200 in 1943 (equivalent to $4,000 in 2024), which was below its manufacturing cost, to $6 in 1945 (equivalent to $105 in 2024). [ 154 ] [ 153 ]
The chairman of the NRC committee on chemotherapy, Chester Keefer , was responsible for administering the equitable distribution of penicillin for civilian use on behalf of the CMR. As news of the effectiveness of penicillin spread, he had to deal with a large volume of requests for the drug. Supplies for civilian use were small at first, and penicillin was provided only for cases with a high mortality rate that did not respond to other forms of treatment. [ 158 ] [ 159 ] [ 73 ] In January 1943, he reported to OSRD on the results of the treatment of the first 100 patients; by August, 500 patients had been treated. [ 153 ] Military requirements consumed 85 per cent of production in 1944. This dropped to 30 per cent in 1945, but civilian demands for penicillin exceeded allocations. [ 73 ]
By April 1944 supply and demand had grown beyond what one man could administer, and the task was handed over to a Penicillin Producers Industry Advisory Committee that distributed supplies through a network of depot hospitals. By 1945, there were 2,700 depot hospitals holding supplies of penicillin, and another 5,000 hospitals receiving supplies through them. Penicillin became commercially available by the end of the year, by which time the United States was exporting 200 billion units a month. [ 73 ] By 1956, only twelve of the twenty-one firms that produced penicillin during the war were still involved in its manufacture. [ 160 ]
In 1943, the Medical Research Council decided that the time had come for field trials of penicillin. The location of centres to receive the drug was kept secret so as not to provoke demand for the drug while it was still in short supply. [ 161 ] Howard Florey was sent to North Africa, where the campaign was ongoing. On 29 June he was joined by Hugh Cairns , another Rhodes Scholar from Adelaide, now a brigadier in the British Army in charge of the Military Hospital for head injuries in Oxford, who brought with him a stockpile of 40 million units of penicillin. [ 162 ] [ 163 ]
Over the next two months Florey and Cairns treated more than one hundred cases and compiled a report that ran to over a hundred pages. Florey gave lectures on penicillin, and his report contained recommendations for the training of medical officers in its use. The fighting in North Africa ended in May 1943, so most of the cases Florey saw were not recently wounded soldiers, but ones with old wounds that had not healed; battle casualties began arriving again after the Allied invasion of Sicily in July. [ 164 ]
Florey considered that the source of infection in many cases was the hospital rather than the battlefield, and advocated changes to the way that patients were treated to take advantage of the properties of penicillin. He argued that wounds should be cleaned and sealed up promptly. This was a radical idea; normally it would have invited gas gangrene, but he proposed leaving that to the penicillin. Florey's recommendations were acted upon; the War Office established a training course for pathologists and clinicians at the Royal Herbert Hospital , which made use of film that Florey shot in North Africa. [ 164 ]
Although he intended penicillin to be used to treat the seriously wounded, there were large numbers of venereal disease cases, against which penicillin was particularly effective; from a military point of view, being able to cure gonorrhea in 48 hours was a breakthrough. The supply situation improved, and 20 million units per day were made available for the Allied invasion of Italy in September. [ 164 ] [ 165 ]
During the campaign in Western Europe in 1944–1945, penicillin was widely used both to treat infected wounds and as a prophylactic to prevent wounds from becoming infected. Gas gangrene had killed 150 out of every 1,000 casualties in the First World War, but the incidence of the disease now almost completely disappeared. Open fractures now had a recovery rate of better than 94 per cent, and recovery from burns of one-fifth of the body or less was 100 per cent. [ 166 ]
The chemical structure of penicillin was first suggested by Abraham in 1942. [ 167 ] Dorothy Hodgkin determined the correct chemical structure of penicillin using X-ray crystallography at Oxford in 1945. [ 168 ] [ 169 ] In 1945, the US Committee on Medical Research and the British Medical Research Council jointly published in Science the chemical analyses done at different universities, pharmaceutical companies and government research departments. The report announced the existence of different forms of penicillin compounds, all of which shared the same structural component, called β-lactam . [ 170 ] The penicillins were designated by Roman numerals in the UK (penicillin I, II, III and IV) in order of their discovery, and known by letters (F, G, X, and K) referring to their origins or sources in the US.
The chemical names were based on the side chains of the compounds. In 1948, Chain introduced the chemical names as standard nomenclature, remarking that this would "make the nomenclature as far as possible unambiguous". [ 171 ]
In Kundl , Tyrol , Austria , in 1952, Hans Margreiter and Ernst Brandl of Biochemie developed the first acid-stable penicillin for oral administration, penicillin V . [ 172 ] American chemist John C. Sheehan at the Massachusetts Institute of Technology (MIT) completed the first chemical synthesis of penicillin in 1957. [ 173 ] [ 174 ] [ 175 ] Sheehan had started his studies into penicillin synthesis in 1948, and during these investigations developed new methods for the synthesis of peptides , as well as new protecting groups —groups that mask the reactivity of certain functional groups. [ 175 ] [ 176 ] Although the initial synthesis developed by Sheehan was not appropriate for mass production of penicillins, one of the intermediate compounds in Sheehan's synthesis was 6-aminopenicillanic acid (6-APA), the nucleus of penicillin. [ 177 ] [ 178 ]
An important development was the discovery of 6-APA itself. In 1957, researchers at the Beecham Research Laboratories in Surrey isolated 6-APA from the culture media of P. chrysogenum . 6-APA was found to constitute the core nucleus of penicillin (and subsequently many β-lactam antibiotics) and was easily chemically modified by attaching side chains through chemical reactions. [ 179 ] [ 180 ] The discovery was published in Nature in 1959. [ 181 ] This paved the way for new and improved drugs, as all semisynthetic penicillins are produced by chemical manipulation of 6-APA. [ 182 ]
The second-generation semisynthetic β-lactam antibiotic methicillin , designed to resist the penicillinases that inactivated the first-generation penicillins, was introduced in the United Kingdom in 1959. Methicillin-resistant forms of Staphylococcus aureus (MRSA) were first observed in the UK in 1960, less than a year later. It is likely that MRSA strains already existed many years before methicillin was introduced. This demonstrated that new drugs intended to circumvent known resistance mechanisms could be rendered ineffective by bacterial adaptations caused by the widespread use of other antibiotics. [ 183 ]
Penicillin patents became a matter of concern and conflict. Chain had wanted to apply for a patent but Florey had objected, arguing that penicillin should benefit all. [ 78 ] Florey sought the advice of Sir Henry Dale , the chairman of the Wellcome Trust and a member of the Scientific Advisory Panel to the British Cabinet , and John William Trevan, the director of the Wellcome Trust Research Laboratory. On 26 and 27 March 1941, Dale and Trevan met at Oxford University's Sir William Dunn School of Pathology to discuss the issue. Dale advised that patenting penicillin would be unethical. Undeterred, Chain approached Sir Edward Mellanby , then Secretary of the Medical Research Council, who also objected on ethical grounds. As Chain later admitted, he had "many bitter fights" with Mellanby, [ 184 ] [ 185 ] but Mellanby's decision was accepted as final. [ 186 ]
In 1945, Moyer patented the methods for production and isolation of penicillin. [ 187 ] [ 188 ] He could not obtain patents in the US as an employee of the NRRL, but filed for patents with the British Patent Office. He licensed them to a US company, Commercial Solvents Corporation . [ 187 ] When Fleming learnt of the American patents on penicillin production, he was incensed and commented:
I found penicillin and have given it free for the benefit of humanity. Why should it become a profit-making monopoly of manufacturers in another country? [ 189 ]
The patenting of penicillin-related technologies by US companies gave rise to a myth in the UK that British scientists had done the work but American ones garnered the rewards. [ 187 ] When the Rockefeller Foundation published its annual report in 1944, The Evening News contrasted the foundation's generous support of the Oxford team's work with that of the parsimonious MRC. [ 190 ] [ 191 ] In April 1945, the British firm Glaxo signed agreements with Squibb and Merck under which it paid 5 per cent royalties on its sales of penicillin for five years in return for the use of their deep submergence fermentation techniques. Glaxo paid almost £500,000 (equivalent to £15,763,091 in 2023) in royalties between 1946 and 1956. [ 186 ] [ 187 ] The controversy over patents led to the establishment of the UK National Research Development Corporation (NRDC) in June 1948. This organisation collected government patents and charged royalties on them. [ 192 ]
After the news about the curative properties of penicillin broke in an editorial in The Times on 27 August 1942, [ 193 ] Fleming enjoyed the publicity, but Howard Florey did not: he feared that it would create a demand for penicillin that he could not yet meet. [ 194 ] When the press arrived at the Sir William Dunn School, he told his secretary to "send them packing". [ 195 ] He also prohibited his team from speaking to the press. [ 78 ] Confusion resulted from the fact that the mould juice and the drug produced from it were both called penicillin . [ 194 ] Distorted and inaccurate accounts were published and broadcast giving Fleming credit for the development of penicillin, accounts that Fleming and St. Mary's Hospital made little or no effort to correct. [ 195 ] [ 196 ] The story the media wished to tell was the familiar one of the lone scientist and the serendipitous discovery. The British medical historian Bill Bynum later wrote:
The discovery and development of penicillin is an object lesson of modernity: the contrast between an alert individual (Fleming) making an isolated observation and the exploitation of the observation through teamwork and the scientific division of labour (Florey and his group). The discovery was old science, but the drug itself required new ways of doing science. [ 197 ]
In 1943, the Nobel committee received a single nomination for the Nobel Prize in Physiology or Medicine for Fleming and Florey from the British biochemist Rudolph Peters . The secretary of the Nobel committee, Göran Liljestrand , made an assessment of Fleming and Florey in the same year, but little was known about penicillin in Sweden at the time, and he concluded that more information was required. The following year, there was one nomination for Fleming alone and one for Fleming, Florey and Chain. Liljestrand and Nanna Svartz , the professor of medicine at the Karolinska Institute , considered their work, and while both judged Fleming and Florey equally worthy of a Nobel Prize, the Nobel committee was divided, and decided to award the prize that year instead to Joseph Erlanger and Herbert S. Gasser "for their discoveries relating to the highly differentiated functions of single nerve fibres". [ 198 ] [ 199 ]
There were a large number of nominations for Fleming, Florey, or both in 1945, and one for Chain, from Liljestrand, who nominated all three. [ 200 ] Liljestrand noted that thirteen of the first sixteen nominations that came in mentioned Fleming, but only three mentioned him alone. [ 201 ] This time evaluations were made by Liljestrand, Sven Hellerström and Anders Kristenson , who endorsed all three. [ 199 ]
There were rumours that the committee would award the prize to Fleming alone, or half to Fleming and one-quarter each to Florey and Chain. Fulton and Dale lobbied for the award to be given to Florey. [ 201 ] The Nobel Assembly at the Karolinska Institute did consider awarding half to Fleming and one-quarter each to Florey and Chain, but in the end decided to divide it equally three ways. [ 199 ] On 25 October 1945, it announced that Fleming, Florey and Chain equally shared the 1945 Nobel Prize in Physiology or Medicine "for the discovery of penicillin and its curative effect in various infectious diseases." [ 202 ] [ 203 ] When The New York Times announced that "Fleming and Two Co-Workers" had won the prize, Fulton demanded – and received – a correction in an editorial the next day. [ 204 ] [ 205 ] [ 206 ]
Dorothy Hodgkin received the 1964 Nobel Prize in Chemistry "for her determinations by X-ray techniques of the structures of important biochemical substances." She became only the third woman to receive the Nobel Prize in Chemistry, after Marie Curie in 1911 and Irène Joliot-Curie in 1935. [ 207 ]
The narrow range of treatable diseases, or "spectrum of activity", of the penicillins, along with the poor activity of the orally active penicillin V, led to the search for derivatives of penicillin that could treat a wider range of infections. The isolation of 6-APA , the nucleus of penicillin, allowed for the preparation of semisynthetic penicillins with various improvements over benzylpenicillin . Ampicillin was developed by the Beecham Research Laboratories in London. When introduced into clinical use in 1961, it was the first orally active semisynthetic penicillin with a broad spectrum of activity against both Gram-positive and Gram-negative organisms, whereas the original penicillin was effective only against Gram-positive bacteria. [ 208 ]
Further development yielded β-lactamase-resistant penicillins , including flucloxacillin , dicloxacillin , and methicillin . These were significant for their activity against β-lactamase-producing bacterial species, but were ineffective against the MRSA strains. [ 209 ]
Another line of development was the antipseudomonal penicillins, such as carbenicillin , ticarcillin , and piperacillin , useful for their activity against Gram-negative bacteria. The usefulness of the β-lactam ring was such that related antibiotics, including the mecillinams , the carbapenems and, most important, the cephalosporins , still retain it at the centre of their structures. [ 180 ] [ 210 ]
β-lactam penicillins became the most widely used antibiotics in the world. [ 211 ] Amoxicillin , a semisynthetic penicillin developed by Beecham Research Laboratories in 1970, [ 212 ] [ 213 ] was the most commonly used of all. [ 214 ] [ 215 ] In the early 21st century, antibiotic preferences differed from country to country: in Europe, amoxicillin was widely used in the UK and Germany; France, Italy and Spain preferred broad-spectrum combinations like co-amoxiclav; and the Scandinavian countries relied on narrow-spectrum penicillin V. [ 216 ]
In 1940, Ernst Chain and Edward Abraham reported the first indication of antibiotic resistance to penicillin: an E. coli strain that produced the penicillinase enzyme, which was capable of breaking down penicillin and negating its antibacterial effect. [ 217 ] [ 43 ] [ 218 ] Chain and Abraham worked out the chemical nature of penicillinase, which they reported in Nature as:
The conclusion that the active substance is an enzyme is drawn from the fact that it is destroyed by heating at 90° for 5 minutes and by incubation with papain activated with potassium cyanide at pH 6, and that it is non-dialysable through " cellophane " membranes. [ 219 ]
In his Nobel lecture, Fleming warned of the possibility of penicillin resistance in clinical conditions:
The time may come when penicillin can be bought by anyone in the shops. Then there is the danger that the ignorant man may easily underdose himself and by exposing his microbes to non-lethal quantities of the drug make them resistant. [ 220 ]
At the time, only poisons required a doctor's prescription, and self-treatment was a real possibility. Legislation was passed in the UK in 1947 to require a prescription for antibiotics. The United States followed in 1951. [ 221 ] Elsewhere in the world, the export of Western pharmaceuticals diffused faster than Western medical knowledge and practices, and penicillin was often dispensed by practitioners of traditional medicine. [ 222 ] As late as 1999, a study in the UK found that 39 per cent of respondents erroneously believed that antibiotics could cure colds and flu, and 12 per cent believed that they were the best treatment for them. [ 223 ] The misplaced faith in antibiotics had serious consequences. It reduced the status of doctors to providers of pills. Many more people sought medical attention for ailments they would have ignored before, and they often demanded antibiotics. For their part, overworked doctors were increasingly willing to provide them even if not asked to do so. [ 224 ]
By 1942, some strains of Staphylococcus aureus had developed a strong resistance to penicillin, and many strains were resistant by the 1960s. [ 225 ] In 1946, bacteriologist Mary Barber began a study of penicillin resistance through natural selection at Hammersmith Hospital in London. She found that in 1946, seven out of eight bacterial infections were susceptible to penicillin, but two years later only three out of eight were. Nurses were exposed to both bacteria and penicillin, and harboured and transmitted bacterial infections. Barber found that three out of ten student midwives were colonized by bacteria when they arrived; after three months, seven out of ten were. The problem lay in sloppy hygiene by health care workers, poor medical practices like the prophylactic use of antibiotics, and slipshod administration, such as taking babies from their mothers to large hospital nurseries where they could infect each other. [ 226 ]
Antibiotic-resistant infections were reported in Australia in 1952. [ 226 ] During the 1957–1958 influenza pandemic there were 16,000 deaths in the UK and 80,000 in the US from bacterial complications; 28 per cent of those who contracted pneumonia died. Most cases of pneumonia were contracted in hospitals, and many of these involved antibiotic-resistant strains that had been nurtured there. [ 227 ] In 1965, the first case of penicillin resistance in Streptococcus pneumoniae was reported from Boston. [ 228 ] [ 229 ] Since then other strains and species of bacteria have developed resistance. [ 217 ]
Research conducted by the American Cyanamid laboratories in the late 1940s and early 1950s demonstrated that adding penicillin to chicks' feed increased their weight gain by 10 per cent. The reasons for this were still subject to debate in the twenty-first century. Subsequent research indicated that adding penicillin to animal feed also improved feed conversion efficiency , promoted more uniform growth and facilitated disease control. After the Food and Drug Administration approved the use of penicillin as feed additives for poultry and livestock in 1951, the pharmaceutical companies ramped up production to meet the demand. [ 230 ]
By 1954, the United States was producing 910 t (2 million lb) of antibiotics each year, of which 220 t (490,000 lb) was going into animal feed; in the 1990s, the United States was producing 23,000 t (50 million lb) of antibiotics per year, of which half was going to livestock. The largest user remained the poultry industry, which consumed 4,800 t (10.5 million lb) of antibiotics each year, compared to 4,700 t (10.3 million lb) for hogs and 1,700 t (3.7 million lb) for cattle. A 1981 study by the Council for Agricultural Science and Technology estimated that banning their use in animal feed could cost American consumers up to $3.5 billion a year (equivalent to $12.11 billion in 2024) in increased food prices. [ 230 ] The story was similar in the UK, where 44 per cent of antibiotic production was consumed by animals by 1963. [ 231 ]
By the mid-1950s, there were reports in the United States that milk was not curdling to make cheese. The FDA found that the milk was contaminated with penicillin, which was killing the bacteria required for cheesemaking . In 1963 the World Health Organization reported high levels of penicillin in milk worldwide. People who were allergic to penicillin could now get a reaction from drinking milk. [ 232 ] A committee chaired by Lord Netherthorpe was established in the UK in 1960 to inquire into the use of antibiotics in animal feed. In 1962, the committee recommended that restrictions on the use of antibiotics in animals be relaxed. It contended that the benefits were substantial and that even if bacteria became resistant, new antibiotics would soon be developed, and there was no evidence that bacterial resistance in animals impacted human health. [ 233 ] [ 234 ]
The Netherthorpe committee's conclusions were undermined by new research even before they were published, and the committee was recalled to reconsider the issue in 1965. New studies had shown that bacteria were not only able to inherit the genes for antibiotic resistance, but could also transfer them to each other. [ 235 ] In 1967, a multiresistant strain of E. coli killed fifteen children in the UK. The use of antibiotics in animals for nontherapeutic purposes was banned there in 1971. Many other European countries soon followed. [ 236 ]
When Sweden acceded to the European Union (EU) in 1995, a total ban on antibiotic growth promoters (AGPs) had been in place there for ten years. This would be superseded by more relaxed EU rules unless Sweden could demonstrate scientific evidence in favour of a ban. Two Swedish scientists, Anders Franklin and Christina Greko, and two Danish scientists, Frank Aarestrup and Henrik Wegener , took up the fight. The odds seemed against them, but their campaign coincided with the United Kingdom BSE outbreak , which resulted in intense political pressure. In December 1996, the European Parliament 's Standing Committee on Health and Welfare voted to ban the use of AGPs. The EU went further and recommended broad restrictions on the use of antibiotics. [ 237 ] [ 238 ]
The history of phagocytosis is an account of the discoveries of cells, known as phagocytes , that are capable of eating other cells or particles, and of how that knowledge eventually established the science of immunology . [ 1 ] [ 2 ] Phagocytosis serves two broad purposes in different organisms: feeding in unicellular organisms (protists) and immune defence against infections in metazoans. [ 3 ] Although it is found in a variety of organisms with different functions, its fundamental process is the cellular ingestion of foreign (external) material, and it is thus considered an evolutionarily conserved process. [ 4 ]
The biological theory and concept, the experimental observations, and the name phagocyte (from Ancient Greek φαγεῖν (phagein) ' to eat ' and κύτος (kytos) ' cell ' ) were introduced by the Ukrainian zoologist Élie Metchnikoff in 1883, a moment regarded as the foundation or birth of immunology. [ 5 ] [ 6 ] The discovery of phagocytes and the process of innate immunity earned Metchnikoff the 1908 Nobel Prize in Physiology or Medicine , and the epithet "father of natural immunity". [ 7 ]
However, the cellular process was known before Metchnikoff's work, albeit from inconclusive descriptions. The first scientific description was from Albert von Kölliker, who in 1849 reported an alga eating a microbe. In 1862, Ernst Haeckel experimentally showed that some blood cells in a slug could ingest external particles. [ 8 ] By then evidence was mounting that leucocytes could perform cell eating just like protists, but it was not until Metchnikoff showed that specific leucocytes (in his case macrophages ) eat cells that the role of phagocytosis in immunity was realised. [ 9 ] [ 10 ]
Phagocytosis was first observed as a process by which unicellular organisms eat their food, usually smaller organisms like protists and bacteria. The earliest definitive account was given by the Swiss scientist Albert von Kölliker in 1849. [ 8 ] As he reported in the journal Zeitschrift für Wissenschaftliche Zoologie , Kölliker described the feeding process of an amoeba-like alga, Actinophrys sol (a heliozoan ). Under the microscope, he noticed that the protist engulfed and swallowed (the process now called endocytosis) a small organism, which he called infusoria (a generic name for microbes at the time). A modern translation of his description reads:
The creature [infusoria] which is destined for food [i.e., trapped by the spines], gradually reaches the surface of the animal [i.e., Actinophrys ], in particular, the thread that caught it is shortened to nothing, or, as it often happens, once trapped in the thread space, the thread unwinds from around the prey when close together and at the surface of the cell body... The place on the cell surface where the caught animal is, gradually becomes a deeper and deeper pit into which the prey, which is attached everywhere to the cell surface, comes to rest. Now, by continuing to draw in the body wall, the pit gets deeper, and the prey which was previously on the edge of the Actinophrys , disappears completely, and at the same time the catching threads, which still lay with their points against each other, cancel each other out and extend again. Finally, the edges "choke" the pit, so that it is flask-shaped ( flaschenformig ), all sides increasingly merging together, so that the pit completely closes and the prey is completely within the cortical cytoplasm. [ 11 ]
The general process given by Kölliker correlates with the modern understanding of phagocytosis as a feeding method. The thread and thread space are pseudopodia , the gradually deepening pit is the endocytosis, and the flaschenformig structure is the phagosome . [ 11 ] [ 12 ] [ 13 ]
The first demonstration of phagocytosis as a property of leucocytes, the immune cells, was from the German zoologist Ernst Haeckel . [ 14 ] [ 15 ] In 1846, the English physician Thomas Wharton Jones had discovered that a group of leucocytes , which he called the "granule-cell" (later renamed and identified as the eosinophil [ 16 ] ), could change shape, a phenomenon later called amoeboid movement . Jones studied the blood of different animals, from invertebrates to mammals, [ 17 ] [ 18 ] [ 19 ] and noticed that the blood of a marine fish (a skate ) had cells that could move by themselves, remarking that "the granule-cells at first presented most remarkable changes of shape." [ 20 ] Other scientists confirmed his findings; however, among them, the German physician Johann Nathanael Lieberkühn concluded in 1854 that the movement was not for ingesting food or particles. [ 8 ]
Disproving Lieberkühn's conclusion, Haeckel discovered that such cells could indeed ingest particles, even experimentally introduced ones. In 1862, Haeckel injected Indian ink (or indigo [ 21 ] ) into a sea slug, Tethys , and observed how the colour was taken up by the tissues. When he extracted the blood, he found that the colour particles had accumulated in the cytoplasm of some blood cells. [ 8 ] It was direct evidence of phagocytosis by immune cells. [ 14 ] [ 21 ] Haeckel reported his experiment in the monograph Die Radiolarien (Rhizopoda Radiaria): Eine Monographie. [ 22 ]
In 1869, Joseph Gibbon Richardson at the Pennsylvania Hospital observed amoeboid leukocytes in his own saliva, in the urine of an individual hospitalised for kidney and bladder problems, and in urine from a cystitis case. He noticed in the pus sample that one cell had a moving "molecule" inside; the cell gradually enlarged and ultimately ruptured like "that of swarm of bees from a hive". [ 23 ] He hypothesised: "[It] seems not improbably that the white corpuscles, either in the capillaries or lymphatic glands, collect during their amoebaform [sic] movements, those germs of bacteria, which my own experiments indicate always exist in the blood to a greater or less amount." [ 24 ] [ 25 ] Although generally overlooked in the study of phagocytosis, [ 26 ] the report, originally published in the Pennsylvania Hospital Report , [ 27 ] was reproduced in other journals. [ 23 ] [ 28 ] [ 29 ]
In 1869, the Russian physician Kranid Slavjansky published his research on the injection of guinea pigs and rabbits with indigo and cinnabar in Archiv für pathologische Anatomie und Physiologie und für klinische Medicin (later renamed Virchows Archiv ) . [ 30 ] Slavjansky found that leukocytes easily take up indigo and cinnabar, as do the cells of the respiratory tract ( alveoli ). He noticed that the alveolar cells behaved like the leukocytes as they became distributed in the alveoli and the bronchial mucus, [ 31 ] an observation that led him to suggest that the tissue cells were the source of particle uptake in the lungs. [ 26 ] He concluded:
Da jene Zellen zinnoberhaltig sind, so liegt es auf der Hand, sie als weisse Blutzellen anzunehmen, welche aus den Gefässen herauswandernd und kein freies Pigment in den Lungen-Alveolen findend, wie das der Fall in den Versuchen ist, wo man Zinnober in das Blut injicirt, nachdem man zwei Tage früher Indigo in die Lunge eingeführt hat, als zinnoberhaltige Zellen erscheinen... entweder sind es ausgewanderte weisse Blutkörperchen, welche die Schleim-metamorphose durchgemacht haben und auf diese Weise in Schleimkörperchen übergegangen sind, oder sie können von den metamorphosirten Cylinderepithelien der Bronchialschleimhaut stammen. [As those cells contain cinnabar, it is natural to suppose them to be white blood cells migrating out of the vessels and finding no free pigment in the pulmonary alveoli, as is the case in the experiments in which cinnabar is introduced into the blood after introducing indigo into the lungs two days before cinnabar cells appear... either they are migrated white blood cells which have undergone mucus metamorphosis and have thus become mucus corpuscles, or they can come from the metamorphosed columnar epithelium of the bronchial mucosa.] [ 30 ]
The Canadian physician William Osler at McGill College reported "On the pathology of miner's lung" in the Canada Medical and Surgical Journal in 1875. [ 32 ] Osler had examined cases of black lung disease ( pneumoconiosis ) in two miners. From an autopsy of one who died from the disease, he found leukocytes and lung cells (alveolar cells) that contained coal (carbon) particles. [ 26 ] For the blood cells, he was not convinced that the coal particles had been taken up by the cells, suggesting instead that "they must be regarded as the original cell elements of the alveoli", while conceding that he lacked "the necessary knowledge to decide." But on the lung cells his observation was clear, and he remarked:
Inside all of these [lung cells] the carbon particles exist in extraordinary numbers, filling the cells in different degrees. Some are so densely crowded that not a trace of cell substance can be detected, more commonly a rim of protoplasm remains free, or at a spot near the circumference, the nucleus, which in these cells is almost always eccentric, is seen uncovered... One most curious specimen was observed: on an elongated piece of carbon three cells were attached, one at either end, and a third in the middle; so that the whole had a striking resemblance to a dumbbell. I could hardly credit this at first, until, by touching top-cover with a needle and causing the whole to roll over, I quite satisfied myself that the ends of the rod were completely imbedded in the corpuscles, and the middle portion entirely surrounded by another. [ 33 ]
Osler's report continued with his experimental observations. He injected Indian ink into the axillae and lungs of kittens. [ 32 ] On autopsy of a two-day-old kitten, he noticed leukocytes and large tissue cells, which showed amoeboid movements , containing the ink. However, he could not work out how the ink spread inside the cells, as he accidentally dropped and broke his slide. In a four-week-old kitten, he found that the ink had also accumulated in almost all the blood and lung cells, and such cells were so crowded that under a microscope "hardly anything could be seen". [ 33 ] He was convinced that there was a cellular process of taking up particles ("irritating materials", as he called them [ 26 ] ), which he considered an "intravasation" or "ingestion", as he concluded:
Here we have to do with an intravasation, or rather an ingestion of the coloured corpuscles within others. Many deny this, but as far as my observation goes there can be no doubt of the fact. In these corpuscles as many as six to ten were seen, in others again the outlines of the red corpuscles could not be detected, as if the cells had absorbed only the colouring matter. [ 33 ]
The phagocytic property of the macrophage, a specialised leukocyte, and its role in immunity were discovered by the Ukrainian zoologist Élie Metchnikoff. However, he did not discover phagocytes or phagocytosis, as is often depicted in books. [ 34 ] Metchnikoff had been working as professor of zoology and comparative anatomy at the University of Odessa , Ukraine (then part of the Russian Empire), since 1870. [ 35 ] In 1880, he had a nervous breakdown, partly due to his wife Olga Belokopytova's severe typhoid fever, and attempted suicide by injecting himself with blood from an individual with relapsing fever. [ 36 ] By then he had a keen interest in Charles Darwin's theory of natural selection, and had been investigating the origin of metazoans. [ 37 ]
Based on the knowledge of cell eating in primitive metazoans, Metchnikoff believed that the common ancestor of metazoans must have been a simple cell-eating organism. His initial experimental observations in 1880 in Naples, Italy, showed that such intracellular digestion does occur in the parenchyma (tissue cells) of coelenterates, and he became convinced that the original metazoan must have been like that. [ 38 ] He called this hypothetical metazoan ancestor parenchymella [ 34 ] (later commonly known as phagocytella ; [ 39 ] the term parenchymella was adopted as the name of the larvae of demosponges [ 40 ] [ 41 ] ). This directly contradicted the hypothesis of Ernst Haeckel , a German zoologist and staunch supporter of Darwin's theory. In 1872, Haeckel had formulated a theory (as part of his evolutionary theory called the biogenetic law ) that the metazoan ancestor must have been like a gastrula , an embryonic stage undergoing invagination as seen in chordates. [ 42 ] He named this hypothetical ancestor gastrea. [ 38 ]
To strengthen his parenchymella theory, Metchnikoff thought about several ways to look for cell eating as a fundamental process in metazoans. [ 39 ] In the summer of 1880, he resigned from the University of Odessa and moved to Messina , a seashore city in Sicily, where he could conduct private research. His initial study on sponges indicated that the mesodermal and endodermal (body tissue wall) cells performed amoeboid movements and cell eating. His earlier experiments on planarian worms had already shown that the endoderm is formed by migrating cells, and not by invagination. [ 43 ] His critical study came from the larvae ( bipinnaria ) of a starfish, Astropecten pentacanthus (later reclassified as Astropecten irregularis ). [ 44 ]
Metchnikoff observed that the body covering of the transparent starfish consisted of outer ( ectoderm ) and internal (endoderm) layers, and that the space between the layers was filled with moving endodermal cells. When he injected carmine stain (a red dye) into the starfish, he found that the stain was taken up (eaten) by the amoeboid cells, which turned red in colour. [ 43 ] He remarked: "I found it an easy matter to demonstrate that these elements seized foreign bodies of very varied nature by means of their living processes, and certain of these bodies underwent a true digestion within the amoeboid cells." [ 2 ] He then conceived the novel idea that if the cells could eat external particles, they must also be responsible for eating harmful materials and pathogens like bacteria to protect the body – the key process of immunity. [ 43 ]
It was one afternoon in December 1880, while he stayed home alone as his family went to a circus show, that he realised his idea could be put to the test by piercing live starfish larvae. He collected fresh specimens from the seashore and a few rose thorns on the way home. [ 45 ] He found what he had hypothesised: the amoeboid cells gathered round the rose thorn, as if to eat it, when it pierced through the skin, and he predicted that the same would be true in humans as a form of body defence. [ 2 ] Recapitulating the experiment, he said:
I hypothesized that if my presumption was correct, a thorn introduced into the body of a starfish larva, devoid of blood vessels and nervous system, would have to be rapidly encircled by the motile cells, similarly to what happens to a human finger with a splinter. No sooner said than done. In the shrubbery of our home, the same shrubbery where we had just a few days before assembled a 'Christmas tree' for the children on a mandarin bush, I picked up some rose thorns to introduce them right away under the skin of the superb starfish larva, as transparent as water. I was so excited I couldn't fall asleep all night in trepidation of the result of my experiment, and the next morning, at a very early hour, I observed with immense joy that the experiment was a perfect success! This experiment formed the basis for the theory of phagocytosis, to whose elaboration I devoted the next 25 years of my life.
Thus, it was in Messina that the turning point in my scientific life took place. [ 10 ]
The history of pharmacy as a modern and independent science dates back to the first third of the 19th century. Before then, pharmacy evolved from antiquity as part of medicine . Before the advent of pharmacists, there were apothecaries, who worked alongside priests and physicians in patient care.
Paleopharmacological studies attest to the use of medicinal plants in pre-history. [ 1 ] [ 2 ] For example, herbs were discovered in the Shanidar Cave , and remains of the areca nut ( Areca catechu ) in the Spirit Cave . [ 3 ] : 8 Prehistoric man learned pharmaceutical techniques through instinct, by watching birds and beasts, and using cool water, leaves, dirt, or mud as a soothing agent. [ 4 ]
Sumerian cuneiform tablets record prescriptions for medicine. [ 5 ] Ancient Egyptian pharmacological knowledge was recorded in various papyri, such as the Ebers Papyrus of 1550 BC and the Edwin Smith Papyrus of the 16th century BC.
The very beginnings of pharmaceutical texts were written on clay tablets by Mesopotamians. Some texts included formulas and instructions for pulverization, infusion, boiling, filtering and spreading; herbs were mentioned as well. [ 6 ] Babylon, a state within Mesopotamia, provides the earliest known record of the practice of running an apothecary, i.e. a pharmacy. The ill person was attended by a priest, a physician, and a pharmacist. [ 4 ]
In Ancient Greece , there existed a separation between physician and herbalist. The duties of the herbalist were to supply physicians with raw materials, including plants, to make medicines. [ 7 ] According to Edward Kremers and Glenn Sonnedecker, "before, during and after the time of Hippocrates there was a group of experts in medicinal plants. Probably the most important representative of these rhizotomoi was Diocles of Carystus (4th century BC). He is considered to be the source for all Greek pharmacotherapeutic treatises between the time of Theophrastus and Dioscorides." [ 8 ]
Between 60 and 78 AD, [ 3 ] : 21–22 the Greek physician Pedanius Dioscorides wrote a five-volume book, De materia medica , covering over 600 plants and coining the term materia medica . It formed the basis for many medieval texts, and was built upon by many Middle Eastern scientists during the Islamic Golden Age . [ 3 ] : 21–22
The earliest known Chinese manual on materia medica is the Shennong Ben Cao Jing ( The Divine Farmer's Herb-Root Classic ), dating back to the first century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong . Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui tombs, sealed in 168 BC. Present-day Chinese pharmacy is a result of pharmaceutical exchanges between China and the rest of the world in the past centuries. [ 9 ]
The earliest known compilation of medicinal substances in Indian traditional medicine dates to the third or fourth century AD (attributed to Sushruta , who is recorded as a physician of the sixth century BC).
There is a stone sign for a pharmacy with a tripod, a mortar, and a pestle opposite one for a doctor in the Arcadian Way in Ephesus , Turkey.
In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre- Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor. [ 10 ]
In Baghdad the first pharmacies, or drug stores, were established in 754, [ 11 ] under the Abbasid Caliphate during the Islamic Golden Age . By the ninth century, these pharmacies were state-regulated. [ 12 ]
The advances made in the Middle East in botany and chemistry led medicine in medieval Islam to develop pharmacology substantially. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation . His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which were compounded the complex drugs then generally used. Shapur ibn Sahl (d. 869) was, however, the first physician to initiate a pharmacopoeia, describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology, entitled Kitab al-Saydalah ( The Book of Drugs ), in which he gave detailed knowledge of the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist.
Ibn Sina (Avicenna), too, described no less than 700 preparations, their properties, modes of action and their indications. He devoted in fact a whole volume to simple drugs in The Canon of Medicine . Of great impact were also the works by al-Maridini of Baghdad and Cairo , and Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by ' Mesue ' the younger, and the Medicamentis Simplicibus by ' Abenguefit '. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris . Al-Muwaffaq's contributions in the field were also pioneering. Living in the tenth century, he wrote The Foundations of the True Properties of Remedies , amongst others describing arsenious oxide , and being acquainted with silicic acid . He made a clear distinction between sodium carbonate and potassium carbonate , and drew attention to the poisonous nature of copper compounds, especially copper vitriol , and also lead compounds. He also described the distillation of sea-water for drinking. [ 13 ]
Middle Eastern pharmaceutical practitioners experienced an upheaval of their craft at the beginning of the 13th and 14th centuries, as pharmacists realigned their interests from developing medicinal theories to establishing practical and therapeutic applications of pharmaceuticals. For example, in 1260 CE a Cairene pharmacist named Abu ‘I-Munā al-Kuhín al-‘Attār published a 25-chapter manual, the Minhāj al-dukkān (How to run a pharmacy), in which he documented titles of drugs, their ingredients and quantities, preparation methods, and dosages. The manual noticeably lacks any chapters highlighting the desirable characteristics and qualities aspiring physicians should display, and al-Kuhín al-‘Attār covers very little of the ethical dilemmas or basic concepts that most pharmacists of the time would normally discuss. This suggests that after 1260 CE interest lessened in discussing the culture and values surrounding pharmacy, replaced by a growing interest in developing archives of pharmacy knowledge for the public. It is worth mentioning that written manuals were not commonly produced by pharmacists in the Middle East. [ 14 ] It is also during the 13th and 14th centuries that Middle Eastern pharmacopoeias begin to resemble cookbooks more than medical encyclopedias, which Thomas Allsen attributes to the extensive cultural exchange between China, Iran, and the greater Mongol Empire. [ 15 ]
After the fifth-century fall of the Western Roman Empire , medicinal knowledge in Europe suffered due to the loss of Greek medicinal texts and a strict adherence to tradition, although an area of Southern Italy near Salerno remained under Byzantine control and developed a hospital and medical school, which became famous by the 11th century. [ 3 ] : 30
In the early 11th century, Salerno scholar Constantinos Africanus translated many Arabic books into Latin, driving a shift from Hippocratic medicine towards a pharmaceutical-driven approach advocated by Galen. [ 3 ] : 30 In medieval Europe, monks typically did not speak Greek, leaving only Latin texts such as the works of Pliny available until these translations by Constantinos. [ 3 ] : 30 In addition, Arabic medicine became more widely known due to Muslim Spain . [ 3 ] : 30
In the 15th century, the printing press spread medicinal textbooks and formularies; the Antidotarium was the first printed drug formulary. [ 3 ] : 30
In Europe pharmacy-like shops began to appear during the 12th century. In 1240 emperor Frederic II issued a decree by which the physician's and the apothecary's professions were separated. [ 16 ]
One of the oldest pharmacies still in operation, located inside the Franciscan monastery in Dubrovnik, Croatia, opened in 1317. The Town Hall Pharmacy in Tallinn, Estonia, which dates back to at least 1422, is the oldest continuously run pharmacy in the world still operating in the original premises. [ 17 ]
The trend towards pharmacy specialization started to take effect in Bruges, Belgium, where a new law was passed that forbade physicians from preparing medications for patients. [ 7 ]
The oldest pharmacy is claimed to have been set up in 1221 in the Church of Santa Maria Novella in Florence, Italy, which now houses a perfume museum. Florence is also the birthplace of the first official pharmacopeia , called the Nuovo Receptario , which all pharmacies used as guidance for caring for the sick. [ 4 ]
The Royal College of Apothecaries of the City and Kingdom of Valencia was founded in 1441 and is considered the oldest in the world, with full administrative and legislative powers. The apothecaries of Valencia were the first in the world to prepare their medicines according to the same criteria that are currently required in the official pharmacopoeias. [ 18 ]
The Republic of Venice was the first state with health policies requiring that the nature of a drug be made public. Thirteen such secret remedies that were offered for sale to the Venetian Republic survive. [ 19 ]
The 1800s brought increased technical sophistication. By the late 1880s, German dye manufacturers had perfected the purification of individual organic compounds from tar and other mineral sources and had also established rudimentary methods in organic chemical synthesis . [ 20 ]
Chloroform was first used as an anesthetic in 1847. [ 22 ] Chloral hydrate was introduced as a sleeping aid and sedative in 1869. [ 21 ]
Derivatives of phenothiazines had an important impact on various aspects of medicine, beginning with methylene blue , which was originally used as a dye after its synthesis from aniline in 1876. [ 23 ] Phenothiazines were used as antimalarials, antiseptics, and antihelminthics up to 1940. [ 24 ] The "psychopharmacological revolution" began in 1950 when chlorpromazine was discovered. [ 24 ]
The American Pharmaceutical Association was formed in the United States in 1852, [ 25 ] with the main purpose of advancing pharmacists' roles in patient care, assisting in career development, spreading information about tools and resources, and raising awareness of pharmacists' contributions to patient care. [ 26 ]
In 1921, Frederick Banting and Charles Best found that the hormone insulin lowered the blood sugar of dogs. This inspired further work by James Collip, who purified insulin for human testing and dramatically changed the prospects for diabetics.
In 1929 Alexander Fleming published his discovery of the first antibiotic, penicillin , produced by a fungus that was able to kill bacteria. [ 27 ]
Singapore's first pharmaceutical infrastructure was the Medical Stores and Dispensary, organized by Sub-Assistant Surgeon Thomas Prendergast during Raffles ' expedition to Singapore . British settlement in Singapore led to the establishment of three General Hospitals . Two served the military and sailors respectively, while the third was for civilian use. [ 28 ]
Medical staff were ranked as Senior Surgeon, Assistant Surgeon, and Apothecaries. Apothecaries were medical subordinates; they were doctors who had graduated from Indian Medical Colleges . To address staffing shortages at the General Hospital, a proposal to select local male students from Penang Free School to become assistant apothecaries was approved. The proposal outlined five years of training and rigorous requirements for qualifying through the apprenticeship. Those who completed the apprenticeship faced not only strict expectations and responsibilities but also little pay. Many apothecaries resigned and left for private practice, and private practices advertised heavily in newspapers. This was considered the first system of pharmacy operations. [ 28 ]
In the 1820s, James Isaiah (J.I.) Woodford trained to be an apothecary in Penang . [ 29 ] He later founded the Kampong Glam Dispensary. Another company was Martin & Line of the Singapore Dispensary. Both establishments were considered chemists and druggists. [ 28 ] In addition to traditional Chinese medicine , Western medicine and practices were also established. Dr. Christopher Trebing arrived in Singapore and opened a dispensary called the German Medicine Deity Medical Office. Following Dr. Trebing's death, the Medical Office continued to be operated by German owners until its demolition in 1970. [ 30 ]
Singaporean consumers suffered from misinformation in drug advertisements, and there were no standards or qualifications for dispensing drugs. Opium abuse was also a problem, and poisons were easily accessible for criminal purposes. This led to the Medical Ordinance of 1904 and the Poisons Ordinance of 1905, which created standards for qualified chemists and druggists to handle these substances. The Poisons Ordinance established regulation of the retail of poisons; one criterion was possessing certificates from the Principal Civil Medical Officer. In the same year, the King Edward VII College of Medicine was established. The Medical School hosted classes and exams for the pharmacy certificate. [ 28 ]
Throughout the twentieth century, the government amended its ordinances on education and licensing. These early legal efforts led to Ordinance No. 30 of 1933, which formally required training and registration for pharmacists. The King Edward VII College of Medicine admitted its first pharmacy students in 1935. [ 28 ] | https://en.wikipedia.org/wiki/History_of_pharmacy
The history of phycology is the history of the scientific study of algae . Human interest in plants as food goes back to the origins of the species , and knowledge of algae can be traced back more than two thousand years. However, only in the last three hundred years has that knowledge evolved into a rapidly developing science.
The study of botany goes back to prehistory, as plants have been eaten since the beginning of the human race. The first attempts at plant cultivation are believed to have been made shortly before 10,000 BC in Western Asia (Morton, 1981), [ 1 ] and the first references to algae are to be found in early Chinese literature . Records as far back as 3000 BC indicate that algae were used by the emperor of China as food (Huisman, 2000 p. 13). [ 2 ] The use of Porphyra in China dates back to at least AD 533–544 (Mumford and Miura, 1988); [ 3 ] there are also references in Roman and Greek literature. The Greek word for algae was phycos , whilst in Latin the name became fucus . There are early references to the use of algae for manure . The first coralline algae to be recognized as living organisms were probably Corallina , by Pliny the Elder in the 1st century AD (Irvine and Chamberlain, 1994 p. 11). [ 4 ]
The classification of plants has undergone many changes since Theophrastus (372–287 BC) and Aristotle (384–322 BC) grouped them as " trees ", " shrubs " and " herbs " (Smith, 1955 p. 1). [ 5 ]
Little is known of botany during the Middle Ages —it was the Dark Ages of botany. [ 1 ]
The development of the study of phycology follows a pattern comparable with, and parallel to, other biological fields, but at a different rate. After the invention of the printing press in the 15th century, [ 6 ] more people learned to read and knowledge spread more quickly.
Written accounts of the algae of South Africa were made by the Portuguese explorers of the 15th and 16th centuries; however, it is not clear to which species they refer. (Huisman, 2000 p. 7) [ 2 ]
In the 17th century, there was a great awakening of scientific interest all over Europe, and after the invention of the printing press, books on botany were published. Among them was the work of John Ray , who in 1660 wrote Catalogus Plantarum circa Cantabrigiam ; this initiated a new era in the study of botany (Smith, 1975 p. 4). [ 7 ] Ray "influenced both the theory and the practice of botany more decisively than any other single person in the latter half of the seventeenth century" (Morton, 1981). [ 1 ]
However, no real progress was made in the scientific study of algae until the invention of the microscope —in about 1600. It was Anton van Leeuwenhoek (1632–1723) who discovered bacteria and saw the cell structure of plants. His unsystematic glimpses of plant structure, reported to the Royal Society between 1678 and his death in 1723, produced no significant advances (Morton, 1981 p. 180). [ 1 ]
As adventurers explored the world, more species of animals and plants were discovered, demanding efforts to bring order to this quickly accumulating knowledge.
The first Australian marine plant recorded in print was collected from Shark Bay on the Western Australian coast by William Dampier , who described many new species of Australian wildlife in the 17th century (Huisman, 2000 p. 7). [ 2 ]
Before Carl von Linné (or Carl Linnaeus, 1707–1778), animals and plants had names, but it took him to arrange the names and group the Earth's plants into some sort of order. [ 8 ] Linnaeus was a Swedish botanist, the son of a pastor of the Lutheran church, and a physician and zoologist . He laid the foundations of modern biological systematics and nomenclature in his Species Plantarum (1753). [ 9 ] He adopted and popularized a binomial (or binary) system of designation (Morton, 1981), [ 1 ] using one name for the genus and a second name for the species , both in Latin or in Latinised form. He referred to the specific (species) name as a "trivial" name ( nomen triviale ). It consisted of a single word, normally a Latin adjective, though any single word would suffice, intended to identify a particular species but not to describe it. He developed a coherent system for naming organisms and divided the plant kingdom into 25 classes (according to Smith, 1955 p. 1; 24 according to Dixon, 1973), [ 5 ] [ 10 ] one of which, the Cryptogamia , included all plants with "concealed" reproductive organs. He divided the Cryptogamia into four orders: Filices (ferns), Musci ( mosses ), Algae (including lichens and liverworts ), and fungi (Smith, 1955 p. 1). [ 5 ]
Examination of the reproductive structures had already started. In 1711, R.A.F. de Réaumur gave an account of Fucus in which he noted the two types of external openings in the thallus: the non-sexual cryptostomata (sterile surface cavities) and the conceptacles (fertile cavities, immersed but with a surface opening) containing the sexual organs, which he thought were female flowers. With a lens, he was able to see the oogonia (the female sex organs) and the antheridia (the male sex organs) within the conceptacles, but he interpreted these as seeds (Morton, 1981 p. 245). [ 1 ] Johann Hedwig (1730–1799) provided further evidence of the sexual process in algae, and figured conjugation in Spirogyra in 1797. He also illustrated Chara ( Charales ) and identified the antheridia and oogonia as male and female sexual organs (Morton, 1981 p. 323 & 357). [ 1 ]
Harvey commented on "...motion, apparently spontaneous, among the seeds at the period of germination. Some found it difficult...to account for these anomalous motions. ...that the seeds becomes (how is not said) a perfect animalcule, which after enjoying an animal existence for a time ceases to live animally, and, reverting to its original nature, gives birth to a vegetable. Thus, this seed was first vegetable, then animal, and then again vegetable,...". [ 11 ] During the 18th century, there was a stormy controversy as to whether coralline algae were plants or animals. Up to the mid-18th century, coralline algae (and coral animals) were generally treated as plants; by 1768, many, but by no means all, authorities considered them animal. Harvey later concluded that they were certainly vegetable, noting: "The question of the vegetable nature of Corallines, among which the Melobesia take rank, may now be considered as finally set at rest, by the researches of Kützing, Phillipi and Decaisne." (Harvey, 1847, pl. 73). [ 12 ] [ 13 ]
The first scientific species description of a South African seaweed accepted for most nomenclatural purposes is that of Ecklonia maxima , published in 1757 as Fucus maximus (Stegenga et al. , 1997). [ 14 ]
Knowledge of North American Pacific algae begins with the 1791–95 expedition of Captain George Vancouver (Papenfuss, 1976 p. 21). [ 15 ]
Archibald Menzies (1754–1842) was the appointed botanist on the expedition led by Captain George Vancouver in the ships Discovery and Chatham of 1791–1795 to the Pacific coast of North America and south-western Australia. The algae collected by Menzies were passed to Dawson Turner (1775–1858), who described and illustrated them in a four-volume work published in 1808–1819. However, Turner only treated the taxa referable to Fucus ; either Menzies collected very few or he gave only a few to Turner. Three of the species described by Turner later became the types of new genera (Papenfuss, 1976; Huisman, 2000). [ 15 ] [ 2 ] Turner also received plants from Robert Brown (1773–1858), the botanist who accompanied Captain Matthew Flinders on the Investigator (1801–1805). This collection also included many plants from Australia (Huisman, 2000). [ 2 ]
The real awakening of interest in American algae resulted from a visit by William Henry Harvey in 1849–1850, during which he covered areas from Florida to Nova Scotia and produced the three-volume Nereis Boreali-Americana . This gave an incentive to others to study algae (Taylor, 1972 p. 21). [ 16 ]
The first collector of marine algae in Greenland waters seems to have been J. M. Vahl , who lived in Greenland from 1828 to 1836. Vahl's East Greenland species were not recorded until 1893, when Rosenvinge included them in his work of that year together with the species collected by Sylow (Lund, 1959). [ 17 ] F. R. Kjellman records only 12 species from East Greenland, 4 of which are doubtful; these records are based on Zeller's list (Lund, 1959). [ 17 ]
Carl Adolph Agardh was one of the most prominent algologists of all time. He was born in Sweden on 23 January 1785 and died on 28 January 1859. He was Professor of Botany at the University of Lund and later Bishop of Karlstad Diocese (Papenfuss, 1976). [ 15 ] Many species still show his name as the authority of the scientific name. He traveled widely in Europe, visiting Germany, Poland , Denmark , the Netherlands , Belgium , France and Italy , and was the first to emphasize the importance of the reproductive characters of algae and use them to distinguish the different genera and families. His son, Jacob Georg Agardh (1813–1901), who became Professor of Botany at Lund in 1839, studied the life-histories of algae and described many new genera and species. It was to him that many workers sent specimens for determination and as donations; because of this, the herbarium at Lund is the most important algal herbarium in the world (Papenfuss, 1976). [ 15 ]
The first records of algae from the Faroe Islands were made by Jørgen Landt in his book of 1800, where he mentions about 30 species. Following this, Hans Christian Lyngbye visited the Faroe Islands in 1817 and published his work in 1819, in which he described several new genera and listed some 100 new species. Emil Rostrup , who visited the Faroe Islands in 1867, listed ten new species and a total not far from 100. In 1895, Herman G. Simmons mentioned 125 species. In that year, F. Børgesen (1866–1956) started work, and in 1902 he published his results (Børgesen, 1902). [ 18 ]
Jean Vincent Félix Lamouroux (1779–1825) was the first, in 1813, to separate the algae into groups on the basis of colour (Dixon and Irvine, 1977 p. 59). [ 19 ] At this time, all coralline algae were considered animals. It was R. Philippi who, in 1837, published his paper in which he finally recognized that coralline algae were not animals and he proposed the generic names Lithophyllum and Lithothamnion (Irvine and Chamberlain, 1994 p. 11). [ 20 ]
Freshwater algae are commonly treated separately from marine algae, and some would place their study outside phycology proper. Lewis Weston Dillwyn 's (1778–1855) British Confervae (1809) was one of the earliest attempts to bring together all that was then known of the British freshwater algae . [ 21 ]
Specimens of Anne E. Ball (1808–1872) have been found in both the Herbarium of the Irish National Botanic Gardens , Dublin, and the Ulster Museum (BEL). A.E. Ball was an Irish algologist who corresponded with W.H. Harvey and whose records appear in his Phycologia Britannica . The specimens in Dublin do not contain any unusual or rare items; however, they are well documented. [ 22 ]
William Henry Harvey (1811–1866), Keeper of the Herbarium and professor of botany at Trinity College, Dublin , was one of the most distinguished algologists of his time (Papenfuss, 1976 p. 26). [ 15 ] Apart from Ireland , he visited South Africa and the Atlantic seaboard of America as far south as the Florida Keys; between 1853 and 1856, he visited Ceylon , Australia , New Zealand and various parts of the South Pacific (Huisman, 2000; Papenfuss, 1976). [ 2 ] [ 15 ] His collecting in Australia resulted in one of the most extensive collections of marine plants, and it inspired others (Huisman, 2000). [ 2 ] He published Nereis Australis Or Algae of the Southern Ocean in 1847–1849, and in 1846–1851 his Phycologia Britannica appeared. His Nereis Boreali-Americana was published in three parts (1852–1858); this was the first, and (as of 1976) still the only, marine algal flora of North America, as it includes taxa from the Pacific coast (Papenfuss, 1976 p. 27). [ 15 ] His five-volume Phycologia Australica was published in 1858–1863. These volumes remain to this day a most important reference on Australian algae (Huisman, 2000). [ 2 ]
His primary herbarium is in Trinity College, Dublin (TCD). However, large collections of Harvey's material are to be found in the Ulster Museum (BEL) (Morton, 1977; Morton, 1981); [ 23 ] [ 24 ] the University of St Andrews (STA); and the National Herbarium of Victoria (MEL), Melbourne, Australia (May, 1977). [ 25 ] Many of the collectors of this period sent and exchanged specimens freely with one another; as a result, Harvey's books show a remarkable knowledge of the distribution of algae elsewhere in the world. His Phycologia Britannica lists species recorded and collected from various parts of the British Isles . For example, he notes William Thompson (1805–1852), William McCalla (c. 1814–1849), John Templeton (1766–1825) and D. Landsborough (1779–1854), who collected, as he did, from distinct sites in Ireland. The collections of these botanists, and many others, are represented separately in the Ulster Museum.
Sir William Jackson Hooker (1785–1865) was a lifelong friend of Harvey (Papenfuss, 1976 p. 26); he was appointed Professor of Botany at Glasgow University in 1820 and was director of Kew from 1841 to 1865. Hooker recognized Harvey's talent, lent him books, and encouraged and invited him to write the section on algae in his British Flora , as well as the section on algae for The Botany of Captain Beechey's Voyage (Papenfuss, 1976). [ 15 ] Margaret Gatty (1809–1873) (née Margaret Scott), author of British Seaweeds (1863), and others corresponded with Harvey (Desmond, 1977 and Evans, 2003). [ 26 ] [ 27 ]
Much work was done in this period by many workers, and the specimens they gathered became very valuable. Harvey's specimens are to be found in several herbaria, as are those of other phycologists whose names appear in historic publications.
In the same period, Friedrich Traugott Kützing (1807–1893) in Germany described more new genera than anyone before or since (Chapman, 1968 p. 13). [ 28 ] His publications span the period 1841 to 1869 and added materially to knowledge of the algae of the cold waters of the Arctic seas. Some of his specimens are stored in the Ulster Museum Herbarium (BEL), catalogued F1171 and F10281–F10318. In 1883, Frans Reinhold Kjellman , professor of botany at Uppsala University , published The Algae of the Arctic Sea , in which he divided the "Arctic Sea" into different regions surrounding the North Pole (Kjellman, 1883). [ 29 ] Further research on the marine algae of the world included that of Charles Lewis Anderson (1827–1910), who collaborated with William Gilson Farlow and Professor Daniel Cady Eaton to produce the first exsiccata of North American algae (Papenfuss, 1976). [ 15 ] Edward Morell Holmes (1843–1930) was an expert on seaweeds , mosses , liverworts and lichens ; specimens were sent to him from all over the British Isles , as well as from Norway , Sweden , Florida, Tasmania , France, the Cape of Good Hope , Ceylon and Australia. He also exchanged specimens (Furley, 1989), [ 30 ] and some are in the herbarium of the Ulster Museum (BEL). George Clifton (1823–1913), an Australian phycologist mentioned in Harvey 's Memoirs as the superintendent of the Water Police in Perth, Western Australia, sent algal specimens to Harvey (Blackler, 1977). [ 31 ] In these years, there were many workers in this field: W.G. Farlow, mentioned above, was appointed Professor of Cryptogamic Botany at Harvard University in 1879 and published, among other works, the Marine Algae of New England and Adjacent Coasts ; in 1876, John Erhard Areschoug, a Swedish professor of botany at Uppsala University, reported on some brown algae collected in California by Gustavus A. Eisen (Papenfuss, 1976). [ 15 ] George W. Traill (1836–1897) was a clerk in the Standard Life Company in Edinburgh, where he worked long hours, yet he was one of the greatest authorities on Scottish algae. Despite bad health, he was an indefatigable collector. In 1892, he gave his collection to the Herbarium of the Edinburgh Botanic Gardens (Furley, 1989). [ 30 ]
Mikael Heggelund Foslie ( M. Foslie ) (1855–1909) published 69 papers between 1887 and 1909. During this time, he increased the number of species and forms (of corallines) from 175 to 650 (Irvine and Chamberlain, 1994). [ 20 ] After his death, his collection of specimens was purchased by the Museum of the Royal Norwegian Society for Sciences and Letters (Thor et al. , 2005), [ 32 ] and there is a small collection of his in the Ulster Museum Herbarium (Collection No. 42), entitled Algae Norvegicae (Ulster Museum Herbarium catalogue (BEL): F10319–F10334). F. Heydrich, a bitter foe of Foslie, described a further 84 taxa; this left a legacy of complicated and still unresolved problems. [ 13 ]
It was in the 19th century that the true nature of lichens , as organisms consisting of an alga and a fungus in specific association, was demonstrated by Schwendener in 1867. This removed a source of confusion in morphology and classification (Morton, 1981 p. 432). [ 1 ] It was in this period (1859) that Charles Darwin (1809–1882) published his book on evolution : On the Origin of Species by Means of Natural Selection .
In 1895, Børgesen started his study of the Faroe Islands and published his work in 1902. [ 18 ] [ 33 ] Later, between 1920 and 1936, he published his research on the algae of the Canary Islands . [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ]
In 1935 and 1945, Felix Eugen Fritsch (1879–1954) published his two-volume treatise The Structure and Reproduction of the Algae . These volumes detail virtually all that was then known about the morphology and reproduction of the algae. Knowledge of algae has increased so greatly since then that it would be impossible to bring them up to date; nevertheless, reference is still often made to them. Other valuable works published in the 1950s include Cryptogamic Botany by Gilbert Morgan Smith (1885–1959), whose algal volume (vol. 1) was published in 1955. In the following year (1956), Die Gattungen der Rhodophyceen by Johan Harald Kylin (1879–1949) was published posthumously. Other phycologists who contributed massively to the knowledge of algae include Elmer Yale Dawson (1918–1966), who published over 60 papers on the algae of the North American Pacific seas (Papenfuss, 1976). [ 15 ]
The number of books published in the mid to late 19th century shows how interest in the natural world developed. Books on algae were written by Isabella Gifford (1853), The Marine Botanist , some of whose specimens are in the Ulster Museum; D. Landsborough (c. 1779–1854), A Popular History of British Seaweeds , third edition published in 1857; Louisa Lane Clarke (c. 1812–1883), The Common Seaweeds of the British Coast and Channel Islands (1865); S.O. Gray (1828–1902), British Seaweeds (1867); and W.H. Grattann, British Marine Algae , published about 1874. These books were written for the general public.
In 1902, Edward Arthur Lionel Batters (1860–1907) published "A catalogue of the British marine algae" (Batters, 1902). [ 40 ] In this, he detailed records of algae found on the shores of the British Isles, with their localities. This was the start of a new approach: the bringing together of records, detailed keys, checklists and mapping schemes.
The process accelerated in the 20th century. Lily Newton (1893–1981), professor of botany at the University College of Wales, Aberystwyth, wrote A Handbook of the British Seaweeds (1931). [ 41 ] This was the first, and for quite a time the only, book for the identification of seaweeds in the British Isles using a botanical key . In 1962, Eifion Jones published A key to the genera of the British seaweeds . [ 42 ] This small booklet provided a useful resource in the period before the series Seaweeds of the British Isles was produced by the British Museum (Natural History), now The Natural History Museum .
Research advanced so quickly that the need for an up-to-date checklist became apparent. Mary Parke (1908–1989), a founder member of the British Phycological Society, produced a preliminary checklist of British marine algae in 1953; corrections and additions were published in 1956, 1957 and 1959. In 1964, M. Parke and Peter Stanley Dixon (1929–1993) published a revised checklist; a second revision was produced in 1968 and a third in 1976. Distribution was added in 1986 with G.R. South and I. Tittley's A Checklist and Distributional Index of the Benthic Marine Algae of the North Atlantic Ocean . In 2003, A Check-list and Atlas of the Seaweeds of Britain and Ireland was published by Gavin Hardy and Michael Guiry, with a revised edition in 2006. This shows how rapidly knowledge of algae, at least in the British Isles, advanced. First efforts had been made by interested biologists and people capable of identifying the algae; this required books using the botanical names. Botanical keys to identify the plants then developed, followed by checklists. As more information was brought to light by interested workers, some of them volunteers, the checklists were improved and eventually a mapping scheme brought all this information together. The same pattern developed with birds, mammals and flowering plants , though on a different time-scale, and knowledge in other parts of the world has not everywhere developed to this degree.
As records were collected, the need to draw all the information together grew. Checklists and annotated checklists were produced and updated, so the actual numbers of different species became more precise. At first this was quite local. Threlkeld, in 1726, produced the first attempt at an enumeration of Irish algae, and in 1802 William Tighe published his "Marine plants observed at the County of Wexford", which included 58 marine and 2 freshwater species. In 1804, Wade published Plantae Rariores in Hibernia Inventae , in which 51 species of marine and 4 species of freshwater algae were enumerated. In the north of Ireland, John Templeton and William Thompson were at work publishing on the algae of Ireland. In 1836, Mackay published his Flora Hibernica , including 296 species. Adams, in his synopsis of 1908, listed a total of 843 marine species. [ 43 ]
In more localised lists, Adams (in 1907), listing the species of County Antrim, [ 44 ] noted that of the 747 species included in "Batters' List" [ 40 ] he recorded 211 from the Co. Antrim coast. In 1907, a list of marine algae from Lambay Island (County Dublin) was published by Batters. [ 45 ] In 1960, a preliminary list of the marine algae of the Galloway coast was published. [ 46 ]
At the international level, there are well over 3,000 species of algae in Australia. [ 2 ]
As the study and identification of the different species became more extensive, it became clear that identification was not at all easy. Harvey's Phycologia Britannica (1846–1851), along with his other publications, makes no effort to provide "keys" to help in identification. Newton's Handbook of 1931 [ 41 ] gave the first key to assist in the identification of the algae of the British Isles; in the same year, Knight and Parke gave a key in their Manx Algae . [ 47 ] Eifion Jones, in 1962, wrote a key to the genera of British seaweeds. [ 48 ] Others soon followed: Dickinson wrote one entitled British Seaweeds , [ 49 ] and Adey and Adey (1973) gave keys to the identification of the Corallinaceae of the British Isles. [ 50 ] Abbott and Hollenberg, in 1976, published keys to the identification of the algae of California. [ 51 ]
Linnaeus's "sexual system" (Linnaeus, 1754), [ 52 ] in which he grouped plants according to the number of stamens and carpels in their flowers, although wholly artificial, was advantageous in that a newly discovered plant could be fitted in amongst those already known. He divided the plant kingdom into 25 classes, one of which was the Cryptogamia, plants with "concealed reproductive organs" (see above) (Smith, 1955). [ 5 ] Linnaeus accepted 14 genera of algae, of which only four, Conferva , Ulva , Fucus and Chara , contained organisms now regarded as algae (Dixon, 1973 p. 231). [ 53 ] As a consequence of the great increase in the number of species, the artificiality of the Linnaean system came to be appreciated, so that during the 18th and early 19th centuries considerable numbers of new genera were described. J.V.F. Lamouroux, in 1813, [ 54 ] was the first to separate the groups on the basis of colour; however, this was not taken up by other botanists, and it was Harvey who, in 1836, divided the algae into four major divisions solely on the basis of their pigmentation: Rhodospermae (red algae), Melanospermae (brown algae), Chlorospermae (green algae) and Diatomaceae (Dixon, 1973 p. 232). [ 53 ]
In 1883 and 1897, Schmitz separated the Rhodophyceae into two main groups: the first contained the Bangiales and the second the Nemalionales, Cryptonemiales , Gigartinales and Rhodymeniales (Newton, 1931). [ 41 ] The Rhodophyta are now arranged in the orders Porphyridiales, Goniotrichales, Erythropeltidales, Bangiales, Acrochaetiales, Colaconematales, Palmariales, Ahnfeltiales, Nemaliales, Gelidiales, Gracilariales, Bonnemaisoniales, Cryptonemiales, Hildenbrandiales, Corallinales, Gigartinales, Plocamiales, Rhodymeniales and Ceramiales. The Chlorophyta are arranged in the orders Chlorococcales, Microsporales, Chaetophorales, Phaeophilales, Ulvales, Prasiolales, Acrosiphoniales, Cladophorales, Bryopsidales, Chlorocystidales, Klebsormidiales and Ulotrichales; and the Heterokontophyta in the orders Sphacelariales, Dictyotales, Ectocarpales, Ralfsiales, Cutleriales, Sporochniales, Tilopteridales, Desmarestiales, Laminariales and Fucales (Hardy and Guiry, 2006). [ 55 ]
In the 1990s, the kingdom Protoctista was recommended; [ 56 ] however, this has not been accepted by many authors.
| https://en.wikipedia.org/wiki/History_of_phycology
Plant breeding started with sedentary agriculture , particularly the domestication of the first agricultural plants, a practice estimated to date back 9,000 to 11,000 years. Initially, early human farmers selected food plants with particular desirable characteristics and used these as a seed source for subsequent generations, resulting in an accumulation of characteristics over time. In time, however, experiments began with deliberate hybridization, the science and understanding of which was greatly enhanced by the work of Gregor Mendel . Mendel's work ultimately led to the new science of genetics . Modern plant breeding is applied genetics, but its scientific basis is broader, covering molecular biology , cytology , systematics , physiology , pathology , entomology , chemistry , and statistics ( biometrics ). It has also developed its own technology. The history of plant breeding can be divided into a number of landmark stages.
Domestication of plants is an artificial selection process conducted by humans to produce plants that have more desirable traits than wild plants, and which renders them dependent on artificial usually enhanced environments for their continued existence. The practice is estimated to date back 9,000–11,000 years. Many crops in present-day cultivation are the result of domestication in ancient times, about 5,000 years ago in the Old World and 3,000 years ago in the New World . In the Neolithic period, domestication took a minimum of 1,000 years and a maximum of 7,000 years. Today, all principal food crops come from domesticated varieties. Almost all the domesticated plants used today for food and agriculture were domesticated in the centers of origin . In these centers there is still a great diversity of closely related wild plants, so-called crop wild relatives , that can also be used for improving modern cultivars by plant breeding.
A plant whose origin or selection is due primarily to intentional human activity is called a cultigen , and a cultivated crop species that has evolved from wild populations due to selective pressures from traditional farmers is called a landrace . Landraces, which can be the result of natural forces or domestication, are plants or animals that are suited to a particular region or environment.
In some cases, such as rice , different subspecies were domesticated in different regions; Oryza sativa subspecies indica was domesticated in South Asia , while Oryza sativa subspecies japonica was developed in China .
For more on the mechanisms of domestication, see Hybrid (biology) .
Humans have traded useful plants from distant lands for centuries, and plant hunters have been sent to bring plants back for cultivation. Human agriculture has had two important results: (1) the plants most favoured by humans came to be grown in many places, and (2) gardens and farms have provided opportunities for plants to interbreed that would not have existed for their wild ancestors. Columbus 's arrival in America in 1492 triggered an unprecedented transfer of plant resources between Europe and the New World .
Thomas Fairchild (? 1667 – 10 October 1729) was an English gardener, "the leading nurseryman of his day", working in London. [ 1 ] He corresponded with Carl Linnæus , and helped by experiments to establish the existence of sex in plants . In 1716–17 (the cross made in summer 1716, the new plant appearing the next spring), he was the first person recorded to have deliberately produced an artificial hybrid , Dianthus Caryophyllus barbatus , known as "Fairchild's Mule", a cross between a sweet william and a carnation pink . [ 2 ] [ 3 ]
Gregor Mendel 's experiments with plant hybridization led to his laws of inheritance . This work became well known in the early 1900s, after its rediscovery, and formed the basis of the new science of genetics , which stimulated research by many plant scientists dedicated to improving crop production through plant breeding.
However, successful commercial plant breeding concerns began to be founded from the late 19th century. Gartons Agricultural Plant Breeders in England was established in the 1890s by John Garton, who was one of the first to cross-pollinate agricultural plants and commercialize the newly created varieties. He began experimenting with artificial cross-pollination, first of cereal plants, then of herbage species and root crops, and developed far-reaching techniques in plant breeding. [ 4 ] [ 5 ] [ 6 ]
William Farrer revolutionized wheat farming in Australia with the widespread release in 1903 of the fungus-resistant "Federation" strain of wheat, developed as a result of his plant breeding work over a period of twenty years using Mendel's theories. [ 7 ] [ 8 ]
From 1904 to World War II in Italy , Nazareno Strampelli created a number of wheat hybrids. His work allowed Italy to increase crop production during the so-called " Battle for Grain " (1925–1940) and some varieties were exported to foreign countries, such as Argentina, Mexico, and China. Strampelli's work laid the foundations for Norman Borlaug and the Green Revolution .
In 1908, George Harrison Shull described heterosis , also known as hybrid vigor. Heterosis describes the tendency of the progeny of a specific cross to outperform both parents. The detection of the usefulness of heterosis for plant breeding has led to the development of inbred lines that reveal a heterotic yield advantage when they are crossed. Maize was the first species where heterosis was widely used to produce hybrids.
By the 1920s, statistical methods were developed to analyze gene action and distinguish heritable variation from variation caused by environment. In 1933 another important breeding technique, cytoplasmic male sterility (CMS), developed in maize, was described by Marcus Morton Rhoades . CMS is a maternally inherited trait that makes the plant produce sterile pollen . This enables the production of hybrids without the need for labor-intensive detasseling .
These early breeding techniques resulted in large yield increases in the United States in the early 20th century. Similar yield increases were not produced elsewhere until after World War II , when the Green Revolution increased crop production in the developing world in the 1960s. This remarkable improvement was based on three essential crops: first came the development of hybrid maize , then high-yielding and input-responsive " semi-dwarf wheat " (for which the CIMMYT breeder N.E. Borlaug received the Nobel Peace Prize in 1970), and third came high-yielding "short-statured rice" cultivars. [ 9 ] Similarly notable improvements were achieved in other crops such as sorghum and alfalfa .
Intensive research in molecular genetics has led to the development of recombinant DNA technology (popularly called genetic engineering ). Advances in biotechnological techniques have opened many possibilities for breeding crops. Thus, while Mendelian genetics allowed plant breeders to perform genetic transformations in a few crops, molecular genetics has provided the key both to the manipulation of the internal genetic structure and to the "crafting" of new cultivars according to a pre-determined plan.
Most approaches to crop improvement, including conventional breeding, genome modification and gene editing, rely primarily on the fundamental processes of DNA repair and recombination . [ 10 ] Our current understanding of DNA repair and recombination mechanisms in plants was derived largely from prior studies in prokaryotes , yeast and animals, so our present knowledge remains rooted in this history. [ 10 ] This approach has left gaps in our understanding of the basic processes of DNA repair and recombination in plants, and further progress in this area of plant research should contribute to significant crop improvement. | https://en.wikipedia.org/wiki/History_of_plant_breeding
The history of the polymerase chain reaction (PCR) has variously been described as a classic "Eureka!" moment, [ 1 ] or as an example of cooperative teamwork between disparate researchers. [ 2 ] Following is a list of events before, during, and after its development:
By 1980 all of the components needed to perform PCR amplification were known to the scientific community. The use of DNA polymerase to extend oligonucleotide primers was a common procedure in DNA sequencing and the production of cDNA for cloning and expression . The use of DNA polymerase for nick translation was the most common method used to label DNA probes for Southern blotting .
In December 1985, a joint venture between Cetus and Perkin-Elmer was established to develop instruments and reagents for PCR. Complex thermal cyclers were constructed to perform the Klenow-based amplifications, but never marketed. Simpler machines for Taq-based PCR were developed, and on November 19, 1987, a press release announced the commercial availability of the "PCR-1000 Thermal Cycler" and "AmpliTaq DNA Polymerase".
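As a quantitative aside (standard PCR arithmetic, not a detail from the timeline itself): each thermal cycle of denaturation, primer annealing, and polymerase extension can at most double the number of copies of the target sequence, so the yield after n cycles is

$$ N_n = N_0 (1+E)^n \le N_0 \cdot 2^n , $$

where N₀ is the starting copy number and E (between 0 and 1) is the per-cycle efficiency. For an ideal reaction (E = 1), the roughly 30 cycles of a typical protocol turn a single template molecule into about 2³⁰ ≈ 10⁹ copies, which is what makes the single-copy applications described below feasible.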
In the spring of 1985 John Sninsky at Cetus began to use PCR for the difficult task of measuring the amount of HIV circulating in blood. A viable test was announced on April 11, 1986, and published in May 1987. [ 26 ] Donated blood could then be screened for the virus, and the effect of antiviral drugs directly monitored.
In 1985, Norm Arnheim, also a member of the development team, concluded his sabbatical at Cetus and assumed an academic position at the University of Southern California . He began to investigate the use of PCR to amplify samples containing just a single copy of the target sequence. By 1989, his lab had developed multiplex-PCR on single sperm to directly analyze the products of meiotic recombination. [ 27 ] These single-copy amplifications, which had first been run during the characterization of Taq polymerase, [ 24 ] became vital to the study of ancient DNA , as well as the genetic typing of preimplantation embryos.
In 1986, Edward Blake, a forensic scientist working in the Cetus building, collaborated with Henry Erlich, a researcher at Cetus, to apply PCR to the analysis of criminal evidence. A panel of DNA samples from old cases was collected, coded, and analyzed blind by Saiki using the HLA DQα assay. When the code was broken, all of the evidence and perpetrators matched. Blake and Erlich's group used the technique almost immediately in Pennsylvania v. Pestinikas , [ 28 ] the first use of PCR in a criminal case. This DQα test was developed by Cetus as one of their "Ampli-Type" kits and became part of early protocols for the testing of forensic evidence, such as in the O. J. Simpson murder case .
By 1989, Alec Jeffreys , who had earlier developed and applied the first DNA fingerprinting tests, used PCR to increase their sensitivity. [ 29 ] With further modification, the amplification of highly polymorphic variable number tandem repeat (VNTR) loci became the standard protocol for national DNA databases such as the Combined DNA Index System (CODIS).
In 1987 Russ Higuchi succeeded in amplifying DNA from a human hair. [ 30 ] This work expanded to develop methods for the amplification of DNA from highly degraded samples, such as from ancient DNA and in forensic evidence. | https://en.wikipedia.org/wiki/History_of_polymerase_chain_reaction |
The history of radiation protection begins at the turn of the 19th and 20th centuries with the realization that ionizing radiation from natural and artificial sources can have harmful effects on living organisms. As a result, the study of radiation damage also became a part of this history.
While radioactive materials and X-rays were once handled carelessly, increasing awareness of the dangers of radiation in the 20th century led to the implementation of various preventive measures worldwide, resulting in the establishment of radiation protection regulations. Although radiologists were the first victims, they also played a crucial role in advancing radiological progress and their sacrifices will always be remembered. Radiation damage caused many people to suffer amputations or die of cancer. The use of radioactive substances in everyday life was once fashionable, but over time, the health effects became known. Investigations into the causes of these effects have led to increased awareness of protective measures. The dropping of atomic bombs during World War II brought about a drastic change in attitudes towards radiation. The effects of natural cosmic radiation , radioactive substances such as radon and radium found in the environment, and the potential health hazards of non-ionizing radiation are well-recognized. Protective measures have been developed and implemented worldwide, monitoring devices have been created, and radiation protection laws and regulations have been enacted.
In the 21st century, regulations are becoming even stricter. The permissible limits for ionizing radiation intensity are consistently being revised downward. The concept of radiation protection now includes regulations for the handling of non-ionizing radiation.
In the Federal Republic of Germany, radiation protection regulations are developed and issued by the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV). The Federal Office for Radiation Protection is involved in the technical work. [ 2 ] In Switzerland , the Radiation Protection Division of the Federal Office of Public Health is responsible, [ 3 ] and in Austria , the Ministry of Climate Action and Energy . [ 4 ]
The discovery of X-rays by Wilhelm Conrad Röntgen (1845–1923) in 1895 led to extensive experimentation by scientists, physicians, and inventors. The first X-ray machines produced radiation spectra highly unfavorable for imaging, with extremely high skin doses. [ 5 ] In February 1896, John Daniel and William Lofland Dudley (1859–1914) of Vanderbilt University conducted an experiment in which Dudley's head was X-rayed, resulting in hair loss. Herbert D. Hawks , a graduate of Columbia University , suffered severe burns on his hands and chest during demonstration experiments with X-rays. [ 6 ] [ 7 ] Burns and hair loss were reported in scientific journals. Nikola Tesla (1856–1943) was one of the first researchers to explicitly warn of the potential dangers of X-rays, in the Electrical Review of May 5, 1897, after initially claiming them to be completely harmless. He suffered massive radiation damage in his own experiments. [ 8 ] Nevertheless, some doctors at the time still claimed that X-rays had no effect on humans. [ 9 ] Until the 1940s, X-ray machines were operated without any protective safeguards. [ 9 ]
Röntgen himself was spared the fate of many other X-ray users by a habit: he always carried unexposed photographic plates in his pockets and found that they became exposed if he remained in the same room during an exposure, so he regularly left the room when X-rays were being taken.
The use of X-rays for diagnostic purposes in dentistry was made possible by the pioneering work of C. Edmund Kells (1856–1928), a New Orleans dentist who demonstrated them to dentists in Asheville, North Carolina, in July 1896. [ 10 ] Kells committed suicide after suffering from radiation-induced cancer for many years. His fingers had been amputated one at a time, later his entire hand, followed by his forearm and then his entire arm.
Otto Walkhoff (1860–1934), one of the most important German dentists in history, took X-rays of himself in 1896 and is considered a pioneer of dental radiology. He described the required exposure time of 25 minutes as an "ordeal". Braunschweig's medical community later commissioned him to set up and supervise a central X-ray facility. In 1898, the year radium was discovered, he also tested the use of radium in medicine in a self-experiment, using 0.2 grams of radium bromide . Walkhoff observed that cancerous mice exposed to radium radiation died significantly later than an untreated control group. He thus initiated the development of radiation research for the treatment of tumors . [ 11 ] [ 12 ]
The Armenian-American radiologist Mihran Krikor Kassabian (1870–1910), vice president of the American Roentgen Ray Society (ARRS), was concerned about the irritating effects of X-rays. In a publication, he mentioned his increasing problems with his hands. Although Kassabian recognized X-rays as the cause, he avoided stating the connection publicly so as not to hinder the progress of radiology. In 1902, he suffered a severe radiation burn on his hand; six years later, the hand became necrotic, and two fingers of his left hand were amputated. Kassabian kept a diary and photographed his hands as the tissue damage progressed. He died of cancer in 1910. [ 13 ]
Many of the early X-ray and radioactivity researchers went down in history as "martyrs for science". In her article The Miracle and the Martyrs , Sarah Zobel of the University of Vermont tells of a 1920 banquet held to honor many of the pioneers of X-rays. Chicken was served for dinner: "Shortly after the meal was served, it could be seen that some of the participants were unable to enjoy the meal. After years of working with X-rays, many of the participants had lost fingers or hands due to radiation exposure and were unable to cut the meat themselves". [ 14 ] The first American to die from radiation exposure was Clarence Madison Dally (1865–1904), an assistant to Thomas Alva Edison (1847–1931). Edison began studying X-rays almost immediately after Röntgen's discovery and delegated the task to Dally. Over time, Dally underwent more than 100 skin operations due to radiation damage. Eventually, both of his arms had to be amputated. His death led Edison to abandon all further X-ray research in 1904.
One of the pioneers was the Austrian Gustav Kaiser (1871–1954), who in 1896 succeeded in photographing a double toe with an exposure time of 1½–2 hours. Owing to the limited knowledge at the time, he suffered severe radiation damage to his hands, losing several fingers and his right metacarpus. His work was the basis for, among other things, the construction of lead rubber aprons. [ 15 ] Heinrich Albers-Schönberg (1865–1921), the world's first professor of radiology, recommended gonadal protection for testicles and ovaries in 1903. He was one of the first to protect germ cells not only from acute radiation damage but also from small doses of radiation that could accumulate over time and cause late damage. Albers-Schönberg died at the age of 56 from radiation damage, [ 16 ] as did Guido Holzknecht and Elizabeth Fleischman .
Since April 4, 1936, a radiology memorial in the garden of Hamburg's St. Georg Hospital has commemorated the 359 victims from 23 countries who were among the first medical users of X-rays. [ 17 ]
In 1896, the engineer Wolfram Fuchs, based on his experience with numerous X-ray examinations, recommended keeping the exposure time as short as possible, staying away from the tube, and covering the skin with Vaseline . [ 18 ] In 1897, Chicago doctors William Fuchs and Otto Schmidt became the first users to have to pay compensation to a patient for radiation damage. [ 19 ] [ 20 ]
In 1901, dentist William Herbert Rollins (1852–1929) called for lead-glass goggles to be worn when working with X-rays, for the X-ray tube to be encased in lead, and for all areas of the body to be covered with lead aprons. He published over 200 articles on the potential dangers of X-rays, but his suggestions were long ignored. A year later, Rollins wrote in despair that his warnings about the dangers of X-rays were not being heeded by either the industry or his colleagues. By this time, Rollins had demonstrated that X-rays could kill laboratory animals and induce miscarriages in guinea pigs. Rollins' achievements were not recognized until later; he has since gone down in the history of radiology as the "father of radiation protection". He became a member of the Radiological Society of North America and its first treasurer. [ 21 ]
Radiation protection continued to develop with the invention of new measuring devices such as the chromoradiometer by Guido Holzknecht (1872–1931) in 1902, [ 22 ] the radiometer by Raymond Sabouraud (1864–1938) and Henri Noiré (1878–1937) [ 23 ] in 1904/05, and the quantimeter by Robert Kienböck (1873–1951) in 1905, [ 24 ] which made it possible to determine maximum doses at which there was a high probability that no skin changes would occur. Radium was also addressed by the British Roentgen Society , which published its first memorandum on radium protection in 1921.
From the 1920s, pedoscopes were installed in many shoe stores in North America and Europe, more than 10,000 in the U.S. alone, following the invention of Jacob Lowe, a Boston physicist. They were X-ray machines used to check the fit of shoes and to promote sales, especially to children, who were particularly fascinated by the sight of their foot bones. Feet were often X-rayed several times in a day to evaluate the fit of different shoes. Most remained in shoe stores until the early 1970s. The energy dose absorbed by the customer was up to 116 rads , or 1.16 grays . In the 1950s, when medical knowledge of the health risks was already available, pedoscopes came with warnings that shoe-buyers should not be scanned more than three times a day and twelve times a year. [ 25 ]
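For readers more familiar with SI units, the conversion behind the figure just quoted is straightforward: the rad is the older absorbed-dose unit, defined as 0.01 gray, so

$$ 116\ \text{rad} \times 0.01\ \tfrac{\text{Gy}}{\text{rad}} = 1.16\ \text{Gy}. $$

For comparison, the skin dose of a modern chest X-ray is on the order of a tenth of a milligray, several orders of magnitude less.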
By the early 1950s, several professional organizations issued warnings against the continued use of shoe-mounted fluoroscopes, including the American Conference of Governmental Industrial Hygienists, the American College of Surgeons, the New York Academy of Medicine, and the American College of Radiology. At the same time, the District of Columbia enacted regulations requiring that shoe-mounted fluoroscopes be operated only by a licensed physical therapist. A few years later, the state of Massachusetts passed regulations stating that these machines could only be operated by a licensed physician. In 1957, the use of shoe-mounted fluoroscopes was banned by court order in Pennsylvania . By 1960, these measures and pressure from insurance companies led to the disappearance of the shoe-mounted fluoroscope, at least in the United States. [ 26 ]
In Switzerland, there were 1,500 shoe-mounted fluoroscopes in use, of which 850 were required to be inspected by the Swiss Electrotechnical Association under a decree of the Federal Department of Home Affairs of October 7, 1963. The last one was decommissioned in 1990. [ 27 ]
In Germany, the machines were not banned until 1976. The fluoroscope emitted uncontrolled X-rays, continuously exposing children, parents, and sales staff. The all-wood cabinet of the machine did not prevent the X-rays from passing through, resulting in particularly high cumulative radiation levels for the cashier when the pedoscope was placed near the cash register; the machines were clearly not designed with proper safety measures in place. The well-established long-term effects of X-rays, including genetic damage and carcinogenicity, suggest that the use of pedoscopes worldwide over several decades may have contributed to health effects, although this cannot be definitively proven. [ 28 ] [ 29 ] For example, a direct link has been discussed in the case of basal cell carcinoma of the foot. [ 30 ] In 1950, a case was published in which a shoe model had to have a leg amputated as a result of the radiation exposure. [ 31 ]
In 1896, the Viennese dermatologist Leopold Freund (1868–1943) used X-rays to treat patients for the first time, successfully irradiating the hairy nevus of a young girl. In 1897, Hermann Gocht (1869–1931) published on the treatment of trigeminal neuralgia with X-rays, and Alexei Petrovich Sokolov (1854–1928) wrote about radiotherapy for arthritis in the oldest radiology journal, Advances in the Field of X-rays ( RöFo ). In 1922, X-rays were recommended as safe for many diseases and for diagnostic purposes; radiation protection was limited to recommending doses that would not cause erythema (reddening of the skin). For example, X-rays were promoted as an alternative to tonsillectomy , and it was boasted that in 80% of diphtheria carriers, Corynebacterium diphtheriae was no longer detectable within two to four days. [ 32 ] In the 1930s, Günther von Pannewitz (1900–1966), a radiologist from Freiburg, Germany, perfected what he called X-ray stimulation therapy for degenerative diseases: low-dose radiation reduces the inflammatory response of tissues. Until about 1960, children with diseases such as ankylosing spondylitis or favus (head fungus) were irradiated, which was effective but led to increased cancer rates among patients decades later. [ 33 ] [ 34 ] In 1926, the American pathologist James Ewing (1866–1943) was the first to observe bone changes as a result of radiotherapy, [ 35 ] which he described as radiation osteitis (now osteoradionecrosis ). [ 36 ] In 1983, Robert E. Marx stated that osteoradionecrosis is radiation-induced aseptic bone necrosis. [ 37 ] [ 38 ] The acute and chronic inflammatory processes of osteoradionecrosis are prevented by the administration of steroidal anti-inflammatory drugs; in addition, the administration of pentoxifylline and antioxidant treatments, such as superoxide dismutase and tocopherol (vitamin E), is recommended. [ 39 ]
Sonography (ultrasound diagnostics) is a versatile and widely used imaging modality in medical diagnostics, and ultrasound is also used in therapy . However, it uses mechanical waves rather than ionizing or non-ionizing electromagnetic radiation. Patient safety is ensured if the recommended limits for avoiding cavitation and overheating are observed; see also Safety Aspects of Sonography .
Devices that use alternating magnetic fields in the radiofrequency range , such as magnetic resonance imaging (MRI), likewise do not use ionizing radiation. MRI was developed as an imaging technique in 1973 by Paul Christian Lauterbur (1929–2007), with significant contributions from Sir Peter Mansfield (1933–2017). [ 40 ] Jewelry and piercings can become very hot in the scanner; in addition, a strong tensile force is exerted on ferromagnetic jewelry, which in the worst case can tear it out. To avoid pain and injury, jewelry containing ferromagnetic metals should be removed beforehand. Pacemakers , defibrillator systems, and large tattoos containing metallic color pigments in the examination area may heat up, cause second-degree burns, or cause the implants to malfunction. [ 41 ] [ 42 ]
Photoacoustic tomography (PAT) is a hybrid imaging modality that utilizes the photoacoustic effect without the use of ionizing radiation. It works contact-free, using very short laser pulses that generate ultrasound in the tissue under examination: the local absorption of the light leads to sudden local heating and thermal expansion, producing broadband acoustic waves. The original distribution of absorbed energy can be reconstructed by measuring the outgoing ultrasound waves with appropriate ultrasound transducers.
In order to better assess radiation protection, the number of X-ray examinations, including the dose, has been recorded annually in Germany since 2007, although the Federal Statistical Office does not have complete data for conventional X-ray examinations. In 2014, the total number of X-ray examinations in Germany was estimated at about 135 million, of which about 55 million were dental. The average effective dose from X-ray examinations per inhabitant in Germany in 2014 was about 1.55 mSv (about 1.7 X-ray examinations per inhabitant per year). Dental X-rays make up 41% of examinations but account for only 0.4% of the collective effective dose. [ 43 ]
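A rough average dose per examination follows from the two figures just quoted (a back-of-the-envelope division, not a value stated by the source):

$$ \bar D \approx \frac{1.55\ \text{mSv per inhabitant per year}}{1.7\ \text{examinations per inhabitant per year}} \approx 0.9\ \text{mSv per examination}, $$

averaged over all examination types; the very low dental doses pull this mean far below the several millisieverts of a typical CT scan.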
In Germany, Section 28 of the X-ray Ordinance (RöV) required from 2002 that the attending physician have an X-ray pass available for X-ray examinations and offer it to the patient. The pass records the patient's X-rays to avoid unnecessary examinations and to allow comparison with previous images. With the entry into force of the new Radiation Protection Ordinance on December 31, 2018, this obligation no longer applies. In Austria and Switzerland, X-ray passes have so far been voluntary. [ 44 ] [ 45 ] In principle, there must always be both a justifiable indication for the use of X-rays and the informed consent of the patient. In the context of medical treatment, informed consent refers to the patient's agreement to all types of interventions and other medical measures (§ 630d of the German Civil Code).
Over the years, there have been increasing efforts to reduce the radiation exposure of practitioners and patients.
Following Rollins' demonstration that lead aprons protect against X-rays, aprons with a lead thickness of 0.5 mm were introduced around 1920. Due to their weight, lead-free and lead-reduced aprons were subsequently developed. In 2005, it was recognized that in some cases these offered significantly less protection than lead aprons. [ 46 ] The lead-free aprons contain tin , antimony and barium , which produce intense secondary radiation ( X-ray fluorescence ) when irradiated. In Germany, the Radiology Standards Committee took up the issue and introduced a German standard (DIN 6857-1) in 2009; the international standard IEC 61331-3:2014 was finally published in 2014. Protective aprons that comply with neither DIN 6857-1 of 2009 nor the new IEC 61331-1 [ 47 ] of 2014 may result in higher exposures. There are two lead-equivalence classes: 0.25 mm and 0.35 mm. The manufacturer must specify the area weight in kg/m² at which the protective effect of a pure lead apron of 0.25 or 0.35 mm Pb is achieved. The protective effect of an apron must be appropriate to the energy range used: up to 110 kV for low-energy aprons and up to 150 kV for high-energy aprons. [ 48 ]
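To illustrate what the area-weight specification means (a worked example using the standard density of lead, about 11.34 g/cm³; the figures are illustrative and not taken from the standard itself): a sheet of pure lead of thickness d = 0.25 mm has an area density of

$$ m'' = \rho_{\text{Pb}} \, d = 11\,340\ \tfrac{\text{kg}}{\text{m}^3} \times 0.25 \times 10^{-3}\ \text{m} \approx 2.8\ \tfrac{\text{kg}}{\text{m}^2}, $$

so a lead-reduced apron advertised as equivalent to 0.25 mm Pb must state the area weight at which it actually reaches that attenuation; if the stated weight is close to the pure-lead figure, the weight saving is largely illusory.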
If necessary, lead glass panels must also be used, with the front panels having a lead equivalent of 0.5-1.0 mm, depending on the application, and the side shields having a lead equivalent of 0.5-0.75 mm.
Outside the useful beam, radiation exposure is primarily caused by scattered radiation from the tissue being scanned. During examinations of the head and torso, this scattered radiation can spread throughout the body and is difficult to shield with radiation protective clothing. Fears that a lead apron will prevent radiation from leaving the body are unfounded, however, because lead absorbs radiation rather than scattering it. [ 49 ]
When preparing an orthopantomogram (OPG) for a dental overview radiograph, it is sometimes recommended not to wear a lead apron, as it does little to shield scattered radiation from the jaw area, but may hinder the rotation of the imaging device. [ 50 ] However, according to the 2018 X-ray regulation, it is still mandatory to wear a lead apron when taking an OPG.
In the same year as the discovery of X-rays, Mihajlo Idvorski Pupin (1858-1935) invented the method of placing a sheet of paper coated with fluorescent substances on the photographic plate, drastically reducing the exposure time and thus the radiation exposure. 95% of the film blackening was produced by the intensifying screen and only the remaining 5% directly by the X-rays. Thomas Alva Edison identified the blue-emitting calcium tungstate (CaWO4) as a suitable phosphor, which quickly became the standard for X-ray intensifying screens. In the 1970s, calcium tungstate was replaced by even better and finer intensifying screens with rare-earth phosphors (terbium-activated lanthanum oxybromide, gadolinium oxysulfide). [ 51 ] The use of intensifying screens did not become widespread in dental radiography because of the loss of image quality. [ 52 ] The combination with high-sensitivity films further reduced radiation exposure.
An anti-scatter grid is a device in X-ray technology that is placed in front of the image receiver (screen, detector, or film) and reduces the incidence of scattered radiation on it. The first anti-scatter grid was developed in 1913 by Gustav Peter Bucky (1880-1963). The US radiologist Hollis Elmer Potter (1880-1964) improved it in 1917 by adding a moving mechanism. [ 53 ] The radiation dose must be increased when a grid is used; for this reason, anti-scatter grids should not be used on children. In digital radiography, a grid may be omitted under certain conditions to reduce radiation exposure to the patient. [ 54 ]
Radiation protection measures may also be necessary against the scattered radiation that arises during tumor irradiation of the head and neck at metal parts of the dentition (dental fillings, bridges, etc.). Since the 1990s, soft-tissue retractors known as radiation protection splints have been used to prevent or reduce mucositis, an inflammation of the mucous membranes, which is the most significant acute adverse side effect of radiation. [ 55 ] The radiation protection splint is a spacer that keeps the mucosa away from the teeth and, in accordance with the inverse-square law, reduces the amount of scattered radiation that reaches the mucosa. Mucositis, which is extremely painful, is one of the most significant detriments to a patient's quality of life and often limits radiation therapy, thereby reducing the chances of curing the tumor. [ 56 ] The splint reduces the oral mucosal reactions that typically occur in the middle and final thirds of a radiation series and are usually reversible.
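The distance effect the splint exploits can be made concrete with the inverse-square law: moving the mucosa from roughly 2 mm to roughly 10 mm from a scattering filling cuts the scattered dose rate to a few percent. A short sketch, in which the distances are illustrative assumptions rather than clinical specifications:

```python
# Inverse-square law: dose rate falls with the square of the distance
# from the (approximately point-like) scatter source.

def relative_dose_rate(d_ref_mm: float, d_mm: float) -> float:
    """Dose rate at distance d_mm relative to the rate at d_ref_mm."""
    return (d_ref_mm / d_mm) ** 2

# mucosa resting on a metal filling (~2 mm) vs. held away by a splint (~10 mm)
print(relative_dose_rate(2.0, 10.0))  # 0.04 -> about 4% of the contact value
```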
The Japanese Hisatugu Numata developed the first panoramic radiograph in 1933/34. This was followed by the development of intraoral panoramic X-ray units, in which the X-ray tube is placed intraorally (inside the mouth) and the X-ray film extraorally (outside the mouth). Along the same lines, Horst Beger from Dresden in 1943 and the Swiss dentist Walter Ott in 1946 worked on the Panoramix (Koch & Sterzel), Status X (Siemens) and Oralix (Philips) units. [ 57 ] Intraoral panoramic devices were discontinued at the end of the 1980s because the intraoral tube, in direct contact with the tongue and oral mucosa, caused excessive radiation exposure.
Eastman Kodak filed the first patent for digital radiography in 1973. [ 58 ] The first commercial CR (computed radiography) solution was offered by Fujifilm in Japan in 1983 under the device name CR-101. [ 59 ] X-ray imaging plates are used in X-ray diagnostics to record the shadow image of the X-rays. The first commercial digital X-ray system for use in dentistry was introduced in 1986 by Trophy Radiology (France) under the name Radiovisiography. [ 60 ] Digital X-ray systems help reduce radiation exposure. Instead of film, the machines contain detectors that convert the incident X-ray photons either into visible light (via a scintillator) or directly into electrical signals.
In 1972, the first commercial CT scanner for clinical use went into operation at Atkinson Morley Hospital in London. Its inventor was the English engineer Godfrey Newbold Hounsfield (1919-2004), who shared the 1979 Nobel Prize in Medicine with Allan McLeod Cormack (1924-1998) for his pioneering work in the field of computed tomography. The first steps toward dose reduction were taken in 1989 in the era of single-slice spiral CT. The introduction of multi-slice spiral computed tomography in 1998 and its continuous development made it possible to reduce the dose by means of dose modulation. The tube current is adapted to the body region, for example by reducing the power for images of the lungs compared to the abdomen, and is also modulated during rotation: because the human body has an approximately oval cross-section, the radiation intensity is reduced when radiation is delivered from the front or back and increased when it is delivered from the side. This dose control also depends on the body mass index. For example, the use of dose modulation in the head and neck region reduces total exposure and the organ doses to the thyroid and eye lens by up to 50% without significantly compromising diagnostic image quality. [ 61 ] The Computed Tomography Dose Index (CTDI) is used to quantify radiation exposure during a CT scan. The CTDI was first defined by the Food and Drug Administration (FDA) in 1981; its unit of measurement is the mGy (milligray). Multiplying the CTDI by the length of the examination volume yields the dose-length product (DLP), which quantifies the total radiation exposure of the patient during a CT scan. [ 62 ]
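The relationship between the two descriptors can be written out directly. A minimal sketch with illustrative values (the CTDI, scan length, and conversion factor below are assumptions for demonstration, not data from the cited studies):

```python
# Dose-length product (DLP) = CTDI (here the volume CTDI, in mGy)
# multiplied by the scan length.

ctdi_vol = 12.0      # assumed volume CTDI of an abdominal protocol, mGy
scan_length = 40.0   # assumed scan length, cm

dlp = ctdi_vol * scan_length
print(f"DLP = {dlp:.0f} mGy*cm")  # 480 mGy*cm

# A commonly cited conversion factor of about 0.015 mSv/(mGy*cm) for the
# adult abdomen then gives a rough effective dose estimate.
print(f"effective dose ~ {dlp * 0.015:.1f} mSv")  # ~7 mSv
```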
An X-ray room must be shielded on all sides with 1 mm lead equivalent. Calcium silicate or solid brick masonry is recommended. A steel door jamb should be used, not only because of the weight of the heavy shielding door but also for shielding; wooden frames must be shielded separately. The shielding door must be lined with a 1 mm thick lead foil, and a lead glass window must be installed as a visual connection. A keyhole must be avoided. All installations (sanitary or electrical) that penetrate the radiation shielding must be lined with lead (§ 20 and Annex 2 (to § 8 para. 1 sentence 1) of the X-ray Ordinance of 1987, in German).
Depending on the application, nuclear medicine requires even more extensive protective measures, up to and including concrete walls several meters thick. [ 63 ] In addition, since December 31, 2018, when the latest amendments to Section 14 (1) No. 2b of the Radiation Protection Act (StrlSchG, in German) came into force, an expert in medical physics must be consulted for X-ray diagnostics and therapy, for the optimization and quality assurance of the application and for advice on radiation protection issues.
Each facility operating an X-ray unit must have sufficient personnel with appropriate expertise. The person responsible for radiation protection or one or more radiation safety officers must have appropriate qualifications, which must be regularly updated. X-ray examinations may be technically performed by any other staff member of a medical or dental practice if they work under the direct supervision and responsibility of the responsible person and have knowledge of radiation protection.
This knowledge of radiation protection has been required since the amendment of the X-ray Ordinance in 1987; medical and dental assistants (then called medical assistants or dental assistants) received this additional training in 1990. [ 64 ] The regulations for the specialty of radiology were tightened by the Radiation Protection Act, which came into force on October 1, 2017. [ 65 ]
The handling of radioactive substances and ionizing radiation (where not covered by the X-ray Ordinance) is regulated by the Radiation Protection Ordinance (StrlSchV). Section 30 StrlSchV (in German) defines the "required expertise and knowledge in radiation protection".
The Association of German Radiation Protection Physicians ( VDSÄ ) was formed in the late 1950s from a working group of radiation protection physicians of the German Red Cross and was founded in 1964. It was dedicated to the promotion of radiation protection and the representation of medical, dental, and veterinary radiation protection concerns to the public and the health care system. In 2017, it was merged into the Professional Association for Radiation Protection. The Austrian Association for Radiation Protection ( ÖVS ), [ 66 ] founded in 1966, pursues the same goals as the Association for Medical Radiation Protection in Austria. [ 67 ] The Professional Association for Radiation Protection for Germany and Switzerland is networked worldwide. [ 68 ]
In radiotherapy, radiation protection of the patient often receives less attention than structural safeguards and the protection of the therapist. The benefit/risk assessment must weigh the therapeutic goal of treating the patient's cancer against the safety of all involved; appropriate treatment planning ensures that radiation is delivered only where it is needed, so that effective treatment can be provided while risks are minimized. Linear accelerators, available since about 1970, replaced cobalt and caesium emitters in routine therapy because of their superior technical characteristics and risk profile. Unlike X-ray and telecurie systems, linear accelerators require the presence of a medical physicist responsible for technical quality control. Radiation necrosis, the death of cells in an organism caused by ionizing radiation, is a serious complication of radiosurgical treatment that becomes clinically apparent months or years after irradiation. [ 69 ] The incidence of radionecrosis has fallen significantly since the early days of radiation therapy: modern radiation techniques prioritize the sparing of healthy tissue while irradiating as much of the area around the tumor as possible to prevent recurrence. Nevertheless, patients undergoing radiotherapy face a certain level of radiation risk.
The literature on radiation injury in animals is limited. Diagnostic radiation has been shown to cause local burns in animals, typically resulting from prolonged exposure of body parts or from sparks from old X-ray tubes; there is no evidence of other types of radiation injury. Injuries among veterinarians and veterinary staff are significantly less frequent than in human medicine. In veterinary medicine, fewer images are taken than in human medicine, particularly fewer CT scans. However, because animals are often restrained manually to avoid anesthesia, at least one person is present in the control area, resulting in significantly higher radiation exposure than for human medical staff. Since the 1970s, dosimeters have been used to measure the radiation exposure of veterinary personnel.
Feline hyperthyroidism (overactive thyroid) is a common disease in older cats. Radioiodine therapy is considered by many authors to be the treatment of choice. Following the administration of radioactive iodine, cats are kept in an isolation pen. The cat's radioactivity is measured to determine the time of discharge, which is typically 14 days after the start of therapy. The therapy requires significant radiation protection measures and is currently only offered at two veterinary facilities in Germany (as of 2010). After the start of treatment, cats must be kept indoors for four weeks, and contact with pregnant women and children under the age of 16 must be avoided due to residual radioactivity. [ 70 ]
Just like a medical practice, any veterinary practice operating an X-ray machine must have sufficient staff with the appropriate expertise, as required by Section 18 of the X-Ray Ordinance 2002. The corresponding training for paraveterinary workers (then called veterinary nurses) took place in 1990. [ 64 ]
In 2017, Linsengericht (Hesse) saw the opening of Europe's first clinic for horses with cancer. Radiation therapy is administered in a treatment room eight meters wide, on a specially designed table that can bear the animals' weight. The surrounding area is protected from radiation by three-meter-thick walls. Mobile equipment is used to irradiate tumors in small animals at various locations. [ 71 ]
Radon is a naturally occurring radioactive noble gas discovered in 1900 by Friedrich Ernst Dorn (1848-1916) and is considered carcinogenic. Radon occurs increasingly in areas with high levels of uranium and thorium in the soil, mainly areas with large granitic rock deposits. According to studies by the World Health Organization, the incidence of lung cancer increases significantly at levels of 100-200 Bq per cubic meter of indoor air: the likelihood of developing lung cancer rises by 10% with each additional 100 Bq/m³ of indoor air. [ 72 ]
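The WHO figure describes a linear dose-response relationship. The sketch below is a direct transcription of the "10% per 100 Bq/m³" statement into code, not an implementation of the WHO's full risk model:

```python
# Linear model: lung cancer risk rises by ~10% per additional 100 Bq/m^3.

def relative_risk(radon_bq_m3: float) -> float:
    """Relative lung cancer risk versus an unexposed baseline."""
    return 1.0 + 0.10 * radon_bq_m3 / 100.0

for c in (100, 200, 300):
    print(f"{c} Bq/m^3 -> relative risk {relative_risk(c):.1f}")
# 100 -> 1.1, 200 -> 1.2, 300 -> 1.3
```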
Elevated radon levels have been measured in numerous areas in Germany, particularly in southern Germany, Austria and Switzerland.
The Federal Office for Radiation Protection has developed a radon map of Germany. [ 73 ] The EU Directive 2013/59/Euratom (Radiation Protection Basic Standards Directive) introduced reference levels and the possibility for workers to have their workplace tested for radon exposure. In Germany, it was implemented in the Radiation Protection Act (Chapter 2, Sections 124-132 StrlSchG, in German) and the amended Radiation Protection Ordinance (Part 4 Chapter 1, Sections 153-158 StrlSchV, in German).
The new radon protection regulations for workplaces and new residential buildings have been binding since January 2019. Radon contamination areas and radon precautionary areas have been designated by the environment ministries of the federal states (as of June 15, 2021). [ 74 ]
The highest radon concentrations in Austria were measured in 1991 in the municipality of Umhausen in Tyrol. Umhausen has about 2300 inhabitants and is located in the Ötztal valley. Some of the houses there were built on bedrock of granite gneiss. From this porous subsoil, the radon present in the rock seeped freely into the unsealed cellars, which were contaminated with up to 60,000 becquerels of radon per cubic meter of air. [ 75 ] Radon levels in the apartments in Umhausen have been systematically monitored since 1992. Since then, extensive radon mitigation measures have been implemented in the buildings: new construction, sealing of cellar floors, forced ventilation of cellars, or relocation. Queries in the Austrian Health Information System (ÖGIS) have shown that the incidence of new cases of lung cancer has declined sharply since then. The Austrian National Radon Project (ÖNRAP) has studied radon exposure throughout the country. [ 76 ] Austria also has a Radiation Protection Act as a legal basis. [ 77 ] Indoor limits were set in 2008. [ 78 ] The Austrian Ministry of the Environment states:
"Precautionary measures in radiation protection use the generally accepted model that the risk of lung cancer increases uniformly (linearly) with radon concentration. This means that an increased risk of lung cancer does not only occur above a certain value, but that a guideline or limit value only adjusts the magnitude of the risk in a meaningful way to other existing risks. Achieving a guideline or limit therefore means taking a risk that is still (socially) acceptable. It therefore makes perfect sense to take simple measures to reduce radon levels, even if they are below the guideline values."
In Austria, the Radon Protection Ordinance in its version of September 10, 2021 is currently in force, which also defines the radon protection areas and radon precautionary areas. [ 79 ]
The aim of the Radon Action Plan 2012-2020 in Switzerland was to incorporate the new international recommendations into the Swiss strategy for protection against radon and thus reduce the number of lung cancer cases attributable to radon in buildings. [ 80 ]
On 1 January 2018, the limit value of 1000 Bq/m³ was replaced by a reference value of 300 becquerels per cubic meter (Bq/m³) for the radon gas concentration averaged over a year in "rooms in which people regularly spend several hours a day".
Subsequently, on May 11, 2020, the Federal Office of Public Health FOPH issued the Radon Action Plan 2021-2030. [ 81 ] The provisions on radon protection are primarily laid down in the Radiation Protection Ordinance (RPO). [ 82 ]
In 1879, Walther Hesse (1846-1911) and Friedrich Hugo Härting published the study "Lung Cancer, the Miners' Disease in the Schneeberg Mines". Hesse, a pathologist, was shocked by the poor health and young age of the miners. [ 83 ] This particular form of bronchial carcinoma was named Schneeberg disease because it occurred among miners in the Schneeberg mines in the Saxon Ore Mountains (Erzgebirge).
When Hesse's report was published, radioactive radiation and the existence of radon were unknown. It was not until 1898 that Marie Curie-Skłodowska (1867-1934) and her husband Pierre Curie (1859-1906) discovered radium and created the concept of radioactivity . [ 84 ] Beginning in the fall of 1898, Marie Curie suffered from inflammation of the fingertips, the first known symptoms of radiation sickness .
In the Jáchymov mines, where silver and non-ferrous metals had been mined from the 16th to the 19th century, uranium ore was mined in abundance in the 20th century. It was only during the Second World War that restrictions were imposed on ore mining in the Schneeberg and Jáchymov mines. After World War II, uranium mining was accelerated for the Soviet atomic bomb project and the emerging Soviet nuclear industry, using forced labor. Initially these were German prisoners of war and displaced persons; after the Communist seizure of power in February 1948, political prisoners of the regime in Czechoslovakia were used, as well as conscripted civilian workers. [ 85 ] Several "Czechoslovak gulags" were established in the area to house these workers. In all, about 100,000 political prisoners and more than 250,000 forced laborers passed through the camps; about half of them probably did not survive the mining work. [ 86 ] Uranium mining ceased in 1964. The number of further victims who died as a result of radiation can only be guessed at. Radon-bearing springs discovered during mining in the early 20th century established a spa industry that is still important today, and gave the town its status as the oldest radium brine spa in the world.
The approximately 200,000 uranium miners employed by Wismut AG in the former Soviet occupation zone of East Germany were exposed to very high levels of radiation, particularly between 1946 and 1955, but also in later years. This exposure was caused by the inhalation of radon and its radioactive decay products, which attached to a considerable extent to the inhaled dust. Radiation exposure was expressed in the historical unit of the working level month (WLM). This unit of measurement was introduced in the 1950s specifically for occupational safety in U.S. uranium mines [ 87 ] to record the exposure resulting from radon and its decay products in the breathing air. [ 88 ] Approximately 9000 workers at Wismut AG have been diagnosed with lung cancer.
Until the 1930s, radium compounds were not only considered relatively harmless, but also beneficial to health, and were advertised as medicines for a variety of ailments or used in products that glowed in the dark. Processing took place without any safeguards.
Until the 1960s, radioactivity was often handled naively and carelessly. From 1940 to 1945, the Berlin-based Auergesellschaft, founded by Carl Auer von Welsbach (1858-1929, Osram), produced a radioactive toothpaste called Doramad that contained thorium-X and was sold internationally. It was advertised with the statement: "Its radioactive radiation strengthens the defenses of the teeth and gums. The cells are charged with new life energy and the destructive effect of bacteria is inhibited." This gave the claim of radiant white teeth a double meaning. By 1930, there were also bath additives and eczema ointments under the brand name "Thorium-X". Radium was also added to toothpastes, such as Kolynos toothpaste. After World War I, radioactivity became a symbol of modern achievement and was considered "chic". Radioactive substances were added to mineral water, condoms, and cosmetic powders; even chocolate laced with radium was sold. [ 89 ] The toy manufacturer Märklin in the Swabian town of Göppingen tested the sale of an X-ray machine for children. [ 90 ] At upper-class parties, people "photographed" each other's bones for fun. A system called Trycho (Ancient Greek: τριχο-, romanized: tricho-, lit. 'concerning the hair') for epilation (hair removal) of the face and body was franchised in the USA; as a result, thousands of women suffered skin burns, ulcers and tumors. [ 25 ] It was not until the atomic bombings of Hiroshima and Nagasaki that the public became aware of the dangers of ionizing radiation, and these products were banned. [ 91 ] [ 92 ] [ 93 ]
A radium industry developed, using radium in creams, beverages, chocolates, toothpastes, and soaps. [ 94 ] [ 95 ] It took a relatively long time for radium and its decay product radon to be recognized as the cause of the observed effects. Radithor , a radioactive agent consisting of triple- distilled water in which the radium isotopes 226 Ra and 228 Ra were dissolved so that it had an activity of at least one microcurie , was marketed in the United States. [ 96 ] It was not until 1932, when the prominent American athlete Eben Byers , who by his own account had taken about 1,400 vials of Radithor as medicine on the recommendation of his physician, fell seriously ill with cancer, lost many of his teeth, and died shortly thereafter in great agony, that strong doubts were raised about the healing powers of Radithor and radium water. [ 97 ]
The year 1908 saw a boom in the use of radioactive water for therapeutic purposes. The discovery of springs in Oberschlema and Bad Brambach paved the way for the establishment of radium spas, which relied on the healing properties attributed to radium. During the cures, people bathed in radium water, took drinking cures with radium water, and inhaled radon in emanatoriums. The baths were visited by tens of thousands of people every year, hoping for a hormesis effect.
To this day, therapeutic applications are carried out in spas and healing tunnels, using the natural release of radon from the ground. According to the German Spa Association, the activity in water must be at least 666 Bq/liter; the requirement for inhalation treatments is at least 37,000 Bq/m³ of air. This form of therapy is not scientifically accepted, and the potential risk of radiation exposure is criticized. The equivalent dose of a radon cure in Germany is given by the individual health resorts as about one to two millisieverts, depending on the location. In 2010, doctors in Erlangen, using the (outdated) LNT (linear no-threshold) model, concluded that five percent of all lung cancer deaths in Germany are caused by radon. [ 98 ] There are radon baths in Bad Gastein, Bad Hofgastein and Bad Zell in Austria, in Niška Banja in Serbia, in the radon revitalization bath in Menzenschwand and in Bad Brambach, Bad Münster am Stein-Ebernburg, Bad Schlema, Bad Steben, Bad Schmiedeberg and Sibyllenbad in Germany, in Jáchymov in the Czech Republic, in Hévíz in Hungary, in Świeradów-Zdrój (Bad Flinsberg) in Poland, in Naretschen and Kostenez in Bulgaria, and on the island of Ischia in Italy. There are radon tunnels in Bad Kreuznach and Bad Gastein. [ 99 ]
The dangers of radium were recognized in the early 1920s and first described in 1924 by the New York dentist and oral surgeon Theodor Blum (1883-1962). [ 100 ] He was particularly aware of the use of radium in the watch industry for luminous dials. He published an article on the clinical picture of the so-called radium jaw, a disease he observed in female patients who, as dial painters, came into contact with luminous paint similar in composition to Radiomir, a luminous material invented in 1914 consisting of a mixture of zinc sulfide and radium bromide. As they painted, the workers used their lips to shape the tip of the phosphor-laden brush into the desired point, and in this way the radioactive radium entered their bodies. In the U.S. and Canada alone, about 4,000 workers were affected over the years. [ 101 ] In retrospect, the factory workers were called the Radium Girls. They also played with the paint, painting their fingernails, teeth and faces, which made them glow at night to the surprise of their companions.
After Harrison Stanford Martland (1883-1954), chief medical examiner in Essex County, detected the radioactive noble gas radon (a decay product of radium) in the breath of the Radium Girls, he turned to Charles Norris (1867-1935) and Alexander Oscar Gettler (1883-1968). In 1928, Gettler was able to detect a high concentration of radium in the bones of Amelia Maggia, one of the young women, even five years after her death. [ 102 ] [ 103 ] In 1931, a method was developed for determining radium dosage using a film dosimeter: a standard preparation is irradiated through a hardwood cube onto an X-ray film, which is thereby blackened. For a long time, the cube minute was an important unit of radium dosage. [ 104 ] It was calibrated by ionometric measurements. The radiologists Hermann Georg Holthusen (1886-1971) and Anna Hamann (1894-1969) found a calibration value of 0.045 r/min in 1932/1935: the calibration film receives a gamma-ray dose of 0.045 r per minute through the wooden cube from a preparation of 13.33 mg of radium. In 1933, the physicist Robley D. Evans (1907-1995) made the first measurements of radon and radium in the excretions of female workers. [ 105 ] On this basis, the National Bureau of Standards, the predecessor of the National Institute of Standards and Technology (NIST), set the limit for radium at 0.1 microcuries (about 3.7 kilobecquerels) in 1941.
A Radium Action Plan 2015-2019 aimed to solve the problem of radiological contamination in Switzerland, mainly in the Jura Mountains, caused by the use of radium luminous paint in the watch industry until the 1960s. [ 106 ]
In France, a line of cosmetics called Tho-Radia , which contained both thorium and radium, was created in 1932 and lasted until the 1960s. [ 107 ]
Terrestrial radiation is the ubiquitous radiation on Earth caused by natural radionuclides occurring in the Earth's soil, rocks, hydrosphere, and atmosphere; these nuclides were formed billions of years ago by stellar nucleosynthesis and have not yet decayed because of their long half-lives. Natural radionuclides can be divided into cosmogenic and primordial nuclides, of which the cosmogenic nuclides do not contribute significantly to the ambient radiation at the Earth's surface. The sources of terrestrial radiation are the natural radioactive nuclides found in the uppermost layers of the Earth, in water and in air, in particular 40 potassium and the nuclides of the uranium and thorium decay series. [ 108 ]
According to the World Nuclear Association , coal from all deposits contains traces of various radioactive substances, particularly radon, uranium and thorium. These substances are released during coal mining, especially from surface mines, through power plant emissions, or power plant ash, and contribute to terrestrial radiation exposure through their exposure pathways. [ 109 ]
In December 2009, it was revealed that oil and gas production generates millions of tons of radioactive waste each year, much of which, containing among other things 226 radium and 210 polonium, is improperly disposed of without detection. [ 110 ] [ 111 ] The specific activity of the waste ranges from 0.1 to 15,000 becquerels per gram. In Germany, according to the Radiation Protection Ordinance of 2001, the material is subject to monitoring above one becquerel per gram and would have to be disposed of separately. The implementation of this regulation was left to the industry, which disposed of the waste carelessly and improperly for decades.
Every building material contains traces of natural radioactive substances, especially 238 uranium, 232 thorium and their decay products, and 40 potassium. Igneous rocks such as granite, tuff, and pumice have higher levels of radioactivity; in contrast, sand, gravel, limestone, and natural gypsum (calcium sulfate dihydrate) have low levels. The European Union's Activity Concentration Index (ACI), developed in 1999, can be used to assess radiation exposure from building materials. [ 112 ] It replaced the Leningrad summation formula, which had been used in Leningrad (St. Petersburg) since 1971 to determine how much radiation exposure from building materials is permissible for humans. The ACI is calculated from the sum of the weighted activities of 40 potassium, 226 radium, and 232 thorium, the weighting taking into account the relative harmfulness to humans. According to official recommendations, building materials with a European ACI value greater than 1 should not be used in large quantities. [ 113 ]
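The index itself is a weighted sum. The sketch below uses the formula published in the EU's Radiation Protection 112 guidance, which this passage appears to describe; treat the exact divisors as an assumption of this sketch, and the concrete values as illustrative:

```python
# EU activity concentration index: activities in Bq/kg weighted by
# nuclide-specific divisors reflecting relative harmfulness.

def activity_concentration_index(c_ra226: float, c_th232: float, c_k40: float) -> float:
    """ACI as given in EU Radiation Protection 112 (inputs in Bq/kg)."""
    return c_ra226 / 300.0 + c_th232 / 200.0 + c_k40 / 3000.0

# illustrative concentrations for an ordinary concrete
aci = activity_concentration_index(40.0, 30.0, 400.0)
print(f"ACI = {aci:.2f}")  # ~0.42, below the recommended threshold of 1
```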
Uranium pigments are used to color ceramic tiles with uranium glazes (red, yellow, brown), in which 2 mg of uranium per cm² is permitted. Between 1900 and 1943, large quantities of uranium-containing ceramics were produced in the United States, as well as in Germany and Austria. It is estimated that between 1924 and 1943, 50-150 tons of uranium(V,VI) oxide were used annually in the U.S. to produce uranium-containing glazes. In 1943, the U.S. government imposed a ban on the civilian use of uranium-containing substances, which remained in effect until 1958. Beginning in 1958, the U.S. government, and from 1969 the United States Atomic Energy Commission, sold depleted uranium in the form of uranium(VI) fluoride for civilian use. [ 114 ] In Germany, uranium-glazed ceramics were produced by the Rosenthal porcelain factory and were commercially available until the early 1980s. [ 115 ] Uranium-glazed ceramics should only be kept as collector's items and not used for everyday purposes because of possible abrasion.
The Federal Office for Radiation Protection's monitoring network measures natural radiation exposure as the local dose rate (ODL), expressed in microsieverts per hour (μSv/h). In Germany, the natural ODL ranges from approximately 0.05 to 0.18 μSv/h, depending on local conditions. The ODL monitoring network has been operational since 1973 and currently comprises 1800 fixed, automatically operating measuring points. Its primary function is to provide early warning by rapidly detecting increased radiation from radioactive substances in the air over Germany. Since 2008, spectroscopic probes have also been used to determine the contribution of artificial radionuclides in addition to the local dose rate. [ 116 ] In addition to the ODL monitoring network of the Federal Office for Radiation Protection, there are other federal monitoring networks at the Federal Maritime and Hydrographic Agency and the Federal Institute of Hydrology, which measure gamma radiation in water; the German Meteorological Service measures air activity with aerosol samplers. [ 117 ] To monitor nuclear facilities, the relevant federal states operate their own ODL monitoring networks. The data from these networks are automatically fed into the Integrated Measurement and Information System (IMIS), where they are used to analyze the current situation.
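Converted to an annual dose, the quoted ODL range brackets the typical German natural background. A one-line calculation, assuming for illustration a constant dose rate over the whole year:

```python
# Annual dose from a constant local dose rate (ODL) in uSv/h.
HOURS_PER_YEAR = 24 * 365  # 8760

for odl in (0.05, 0.18):   # natural range in Germany, uSv/h
    print(f"{odl:.2f} uSv/h -> {odl * HOURS_PER_YEAR / 1000:.1f} mSv/year")
# 0.05 uSv/h -> 0.4 mSv/year; 0.18 uSv/h -> 1.6 mSv/year
```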
Many countries operate their own ODL monitoring networks to protect the public. In Europe, these data are collected and published on the EURDEP platform of the European Atomic Energy Community . The European monitoring networks are based on Articles 35 and 37 of the Euratom Treaty . [ 118 ]
Nuclear medicine is the use of open radionuclides for diagnostic and therapeutic purposes (radionuclide therapy). [ 119 ] It also includes the use of other radioactive substances and nuclear physics techniques for functional and localization diagnostics. In 1923, George de Hevesy (1885-1966), then living as a lodger, suspected that his landlady was re-serving him leftovers of a pudding he had left uneaten the week before. He mixed a small amount of a radioactive isotope into the leftovers, and when she served him the pudding again a week later, he was able to detect radioactivity in a sample of the dish. When he showed this to his landlady, she immediately gave him notice. The method he used made him the father of nuclear medicine: it became known as the tracer method and is still used today in nuclear medicine diagnostics. [ 120 ] A small amount of a radioactive substance is administered, and its distribution in the organism and its path through the human body can be tracked externally, providing information about various metabolic functions of the body. The continuous development of radionuclides has improved radiation protection. For example, the mercury compounds 203 chloro-merodrin and 197 chloro-merodrin were abandoned in the 1960s as substances were developed that offered a higher photon yield with less radiation exposure. Beta emitters such as 131 I and 90 Y are used in radionuclide therapy. In nuclear medicine diagnostics, the beta+ emitters 18 F, 11 C, 13 N, and 15 O are used as radioactive markers for tracers in positron emission tomography (PET). [ 121 ] Radiopharmaceuticals (isotope-labeled drugs) are being developed on an ongoing basis.
Radiopharmaceutical residues, such as empty application syringes and contaminated residues from the patient's toilet, shower and washing water, are collected in tanks and stored until they can be safely pumped into the sewer system. The storage time depends on the half-life and ranges from a few weeks to a few months, depending on the radionuclide. Since 2001, under § 29 of the Radiation Protection Ordinance (StrlSchV, in German), the specific radioactivity in the waste containers has been recorded at release measuring stations and the release time is calculated automatically. This requires measurements of the sample activity in Bq/g and of the surface contamination in Bq/cm². In addition, the behavior of patients after their discharge from the clinic is prescribed. [ 122 ] To protect personnel, syringe filling systems, borehole measurement stations for nuclide-specific measurement of low-activity, small-volume individual samples, lift systems into the measurement chamber to reduce radiation exposure when handling highly active samples, probe measurement stations, and ILP (isolated limb perfusion) measurement stations, which monitor activity with one or more detectors during surgery and report leakage to the surgical oncologist, are used.
Radioiodine therapy (RIT) is a nuclear medicine procedure used to treat thyroid hyperfunction, Graves' disease, thyroid enlargement, and certain forms of thyroid cancer. The radioactive iodine isotope used is 131 iodine, predominantly a beta emitter with a half-life of eight days, which in the human body is stored only in thyroid cells. In 1942, Saul Hertz (1905-1950) of the Massachusetts General Hospital and the physicist Arthur Roberts published their report on the first radioiodine therapy (1941) for Graves' disease, [ 123 ] [ 124 ] at that time still performed predominantly with the 130 iodine isotope, which has a half-life of 12.4 hours. [ 125 ] At about the same time, Joseph Gilbert Hamilton (1907-1957) and John Hundale Lawrence (1904-1991) performed the first therapy with 131 iodine, the isotope still used today. [ 125 ]
Radioiodine therapy is subject to special legal regulations in many countries, and in Germany may only be performed on an inpatient basis. There are approximately 120 treatment centers in Germany (as of 2014), performing approximately 50,000 treatments per year. [ 126 ] In Germany, the minimum length of stay is 48 hours. Discharge depends on the residual activity remaining in the body. In 1999, the limit for residual activity was raised. The dose rate may not exceed 3.5 μSv per hour at a distance of 2 meters from the patient, which means that a radiation exposure of 1 mSv may not be exceeded within one year at a distance of 2 meters. This corresponds to a residual activity of about 250 MBq . Similar regulations exist in Austria.
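The link between the 3.5 μSv/h discharge criterion and the 1 mSv annual limit can be reproduced by integrating an exponentially decaying dose rate. A sketch under the simplifying assumption of purely physical 131 I decay (biological excretion, which shortens the effective half-life, is ignored here):

```python
import math

# Integrating an exponentially decaying dose rate from discharge to
# infinity: total dose = initial rate * half-life / ln(2).

rate = 3.5                # dose rate at 2 m at discharge, uSv/h
half_life_h = 8 * 24      # physical half-life of 131-I, hours

total_dose_msv = rate * half_life_h / math.log(2) / 1000
print(f"integrated dose at 2 m: {total_dose_msv:.2f} mSv")  # ~0.97 mSv
```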
In Switzerland, a maximum radiation exposure of 1 mSv per year applies to the general public, and a maximum of 5 mSv per year to the patient's relatives. [ 127 ] After discharge following radioiodine therapy, a maximum dose rate of 5 μSv per hour at a distance of 1 meter is permitted, which corresponds to a residual activity of approximately 150 MBq. [ 128 ] In the event of early discharge, the supervisory authority must be notified up to a dose rate of 17.5 μSv/h; above 17.5 μSv/h, permission must be obtained. If the patient is transferred to another ward, the responsible radiation protection officer must ensure that appropriate radiation protection measures are taken there, e.g. that a temporary control area is set up.
Scintigraphy is a nuclear medicine procedure in which substances of low radioactivity are injected into the patient for diagnostic purposes. It includes bone scintigraphy, thyroid scintigraphy, octreotide scintigraphy, and, as a further development of the procedure, single photon emission computed tomography (SPECT). For example, 201 Tl thallium(I) chloride, technetium compounds (99m Tc tracer, 99m technetium tetrofosmin), and PET tracers (with administered activities of 1100 MBq for 15 O-water, 555 MBq for 13 N ammonia, or 1850 MBq for 82 Rb rubidium chloride) are used in myocardial scintigraphy to diagnose the blood flow and function of the heart muscle (myocardium). An examination with 74 MBq 201 thallium chloride causes a radiation exposure of about 16 mSv (effective dose equivalent), an examination with 740 MBq 99m technetium-MIBI about 7 mSv. [ 129 ] Metastable 99m Tc is by far the most important nuclide used as a tracer in scintigraphy because of its short half-life, the 140 keV gamma radiation it emits, and its ability to bind to many biologically active molecules. Most of the tracer is excreted after the examination; the remaining 99m Tc decays with a half-life of 6 hours to 99 Tc. The latter has a very long half-life of 212,000 years and, because of the relatively weak beta radiation released during its decay, contributes only a small amount of additional radiation exposure over the remaining lifetime. [ 130 ] In the United States alone, approximately seven million individual doses of 99m Tc are administered each year for diagnostic purposes.
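The rapid decay mentioned above follows the usual exponential law; a minimal sketch for 99m Tc:

```python
# Fraction of administered 99m-Tc activity remaining after t hours
# (half-life about 6 hours), ignoring biological excretion.

def remaining_fraction(t_hours: float, half_life: float = 6.0) -> float:
    return 0.5 ** (t_hours / half_life)

for t in (6, 12, 24, 48):
    print(f"after {t:>2} h: {remaining_fraction(t):.4f} of initial activity")
# after 24 h only ~6% remains; after 48 h ~0.4%
```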
To reduce radiation exposure, the American Society of Nuclear Cardiology (ASNC) issued dosage recommendations in 2010. The effective dose is 2.4 mSv for 13 N-ammonia, 2.5 mSv for 15 O-water, 7 mSv for 18 F-fluorodeoxyglucose, and 13.5 mSv for 82 Rb-rubidium chloride. [ 131 ] Compliance with these recommendations is expected to reduce the average radiation exposure to no more than 9 mSv. The Ordinance on Radioactive Drugs or Drugs Treated with Ionizing Radiation (§ 2 AMRadV, in German) regulates the approval procedures for the marketability of radioactive drugs. [ 132 ]
Brachytherapy is used to place a sealed radioactive source inside or near the body to treat cancer, such as prostate cancer. Afterloading brachytherapy is often combined with teletherapy, external radiation delivered from a greater distance than in brachytherapy. It is not classified as a nuclear medicine procedure, although, like nuclear medicine, it uses the radiation emitted by radionuclides. After initial interest in brachytherapy in the early 20th century, its use declined in the mid-20th century because of the radiation exposure to physicians from manual handling of the radiation sources. [ 133 ] [ 134 ] It was not until the development of remote-controlled afterloading systems and the use of new radiation sources in the 1950s and 1960s that the risk of unnecessary radiation exposure to physicians and patients was reduced. [ 135 ] In the afterloading procedure, an empty, tubular applicator is inserted into the target volume (e.g., the uterus) before the actual therapy and, after its position has been checked, loaded with a radioactive preparation. The preparation is located at the tip of a steel wire that is advanced and retracted step by step under computer control. After the pre-calculated time, the source is withdrawn into a safe and the applicator is removed. The procedure is used for breast cancer, bronchial carcinoma and oral floor carcinoma, among others. Beta emitters such as 90 Sr, 106 Ru, or 192 Ir are used. As a precaution, patients undergoing permanent brachytherapy are advised not to hold small children immediately after treatment and not to be in the vicinity of pregnant women, since low-dose radioactive sources (seeds) remain in the body after the treatment. This is to protect the particularly radiation-sensitive tissues of a fetus or infant.
Radioactive thorium was used in the 1950s and 60s to treat tuberculosis and other benign diseases, including in children, with serious consequences (see Peteosthor). A stabilized suspension of colloidal thorium(IV) oxide, co-developed by António Egas Moniz (1874-1954), [ 136 ] was used from 1929 under the trade name Thorotrast as an X-ray contrast agent for angiography in several million patients worldwide until it was banned in the mid-1950s. It accumulates in the reticulohistiocytic system and, through the locally increased radiation exposure, can lead to cancers, in particular cholangiocarcinoma and angiosarcoma of the liver, two otherwise rare liver cancers. Carcinomas of the paranasal sinuses have also been described following administration of Thorotrast. The disease typically begins 30-35 years after exposure. The biological half-life of Thorotrast is approximately 400 years. [ 137 ] [ 138 ] The largest study in this area, conducted in Germany in 2004, showed a particularly high mortality rate among patients exposed in this way: the median life expectancy over a seventy-year observation period was 14 years shorter than in the comparison group. [ 139 ]
After the U.S. atomic bombs were dropped on Hiroshima and Nagasaki on August 6 and 9, 1945, an additional 130,000 people, beyond the roughly 100,000 immediate victims, died from the effects of radiation by the end of 1945. Some experienced the so-called walking ghost phase of acute radiation sickness, which follows a lethal whole-body equivalent dose of 6 to 20 sievert: the period of apparent recovery between the onset of the first massive symptoms and the inevitable death. [ 140 ] In the years that followed, further deaths from radiation-induced diseases occurred. In Japan, the radiation-damaged survivors are called hibakusha (Japanese: 被爆者, lit. 'explosion victim') and are conservatively estimated to number about 100,000. [ 141 ]
In 1946, the Atomic Bomb Casualty Commission (ABCC) was established by the National Research Council of the National Academy of Sciences by order of U.S. President Harry S. Truman to study the long-term effects of radiation on survivors of the atomic bombings. In 1975, the ABCC was replaced by the Radiation Effects Research Foundation (RERF). [ 142 ] Organizations such as the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), founded in 1955, [ 143 ] and the National Academy of Sciences - Advisory Committee on the Biological Effects of Ionizing Radiation (BEIR Committee), [ 144 ] founded in 1972, analyze the effects of radiation exposure on humans on the basis of atomic bomb victims who have been examined and, in some cases, medically monitored for decades. They determine the course of the mortality rate as a function of the age of the radiation victims in comparison with the spontaneous rate, and also the dose-dependency of the number of additional deaths. To date, 26 UNSCEAR reports have been published and are available online, most recently in 2017 on the effects of the Fukushima nuclear accident. [ 145 ]
By 1949, Americans felt increasingly threatened by the possibility of nuclear war with the Soviet Union and sought ways to survive a nuclear attack. The U.S. Federal Civil Defense Administration (USFCDA) was created by the government to educate the public on how to prepare for such an attack. In 1951, with the help of this agency, a children's educational film was produced in the U.S. called Duck and Cover , in which a turtle demonstrates how to protect oneself from the immediate effects of an atomic bomb explosion by using a coat, tablecloths, or even a newspaper. [ 146 ]
Recognizing that existing medical capacity would not be sufficient in an emergency, dentists were called upon to either assist physicians in an emergency or, if necessary, to provide assistance themselves. To mobilize the profession with the help of a prominent representative, dentist Russell Welford Bunting (1881-1962), dean of the University of Michigan Dental School, was recruited in July 1951 as a dental consultant to the USFCDA. [ 147 ] [ 148 ]
The American physicist Karl Ziegler Morgan (1907-1999) was one of the founders of radiation health physics. In later life, after a long career with the Manhattan Project and Oak Ridge National Laboratory (ORNL), he became a critic of nuclear power and nuclear weapons production. Morgan was Director of Health Physics at ORNL from the late 1940s until his retirement in 1972. In 1955, he became the first president of the Health Physics Society and served as editor of the journal Health Physics from 1955 to 1977. [ 149 ]
Nuclear fallout shelters are designed to provide protection for an extended period. Because of the radioactive contamination of the surrounding area after a nuclear strike, such a facility must be completely self-sufficient and allow its occupants to survive inside for several weeks. In 1959, top-secret construction began in Germany on a government bunker in the Ahr valley. In June 1964, 144 test persons survived for six days in a civilian nuclear bunker in Dortmund that had been built during the Second World War and converted at great expense in the early 1960s into a building designed to withstand nuclear weapons. Building bunkers for millions of German citizens, however, would be impossible. [ 150 ] The Swiss Army built about 7800 nuclear fallout shelters in 1964. In the United States in particular, but also in Europe, citizens built private fallout shelters in their front yards on their own initiative. Such construction was largely kept secret because the owners feared that third parties might take possession of the bunker in the event of a crisis.
On July 16, 1945, the first atomic bomb test took place near the town of Alamogordo (New Mexico, USA). As a result of the atmospheric nuclear weapons tests carried out by the United States, the Soviet Union, France, Great Britain, and China, the Earth's atmosphere became increasingly contaminated with fission products from the 1950s onwards. The radioactive fallout settled on the Earth's surface and ended up in plants and, via animal feed, in food of animal origin. Ultimately, these fission products entered the human body and could be detected in bones and teeth, for example as strontium-90. [ 152 ] Radioactivity in the field was measured with a gamma scope, as shown at the air raid equipment exhibition in Bad Godesberg in 1954. [ 153 ] Around 180 tests were carried out in 1962 alone. The extent of the radioactive contamination of food sparked worldwide protests in the early 1960s.
During World War II and the Cold War, the Hanford Site produced plutonium for U.S. nuclear weapons for more than 50 years; the plutonium for the Fat Man bomb also came from there. Hanford is considered the most radioactively contaminated site in the Western Hemisphere. [ 154 ] A total of 110,000 tons of nuclear fuel was produced there. In 1948, a radioactive cloud escaped from the plant; the amount of 131 I alone was 5500 curies. Most of the reactors at Hanford were shut down in the 1960s, but no disposal or decontamination was carried out. After preliminary work, the world's largest decontamination operation began at Hanford in 2001 to safely dispose of the radioactive and toxic waste. In 2006, some 11,000 workers were still cleaning up contaminated buildings and soil to reduce radiation at the site to acceptable levels; this work is expected to continue until 2052. [ 155 ] It is estimated that more than four million liters of radioactive liquid have leaked from storage tanks.
It was only after the two superpowers agreed on the Partial Test Ban Treaty in 1963, which allowed only underground nuclear weapons testing, that the level of radioactivity in food began to decline. Shields Warren (1896-1980), one of the authors of a report on the effects of the atomic bombs dropped on Japan, was criticized for downplaying the effects of residual radiation in Hiroshima and Nagasaki, [ 156 ] but later warned of the dangers of fallout, the radioactive material that descends from the atmosphere and spreads according to the prevailing meteorological situation. A model experiment on this was conducted in 2008. [ 157 ]
The International Campaign to Abolish Nuclear Weapons (ICAN) is an international alliance of non-governmental organizations committed to the elimination of all nuclear weapons through a binding international treaty - a Nuclear Weapons Convention. ICAN was founded in 2007 by IPPNW ( International Physicians for the Prevention of Nuclear War ) and other organizations at the Nuclear Non-Proliferation Treaty Conference in Vienna and launched in twelve countries. Today, 468 organizations in 101 countries are involved in the campaign (as of 2017). [ 158 ] ICAN was awarded the 2017 Nobel Peace Prize . [ 159 ]
A radioprotector is a pharmacon that, when administered, selectively protects healthy cells from the toxic effects of ionizing radiation . The first work with radioprotectors began as part of the Manhattan Project , a military research project to develop and build an atomic bomb.
Iodine absorbed by the body is stored almost completely in the thyroid gland and has a biological half-life of about 120 days. If the iodine is radioactive (131 I), it can irradiate and damage the thyroid gland in high doses during this time. Because the thyroid gland can only absorb a limited amount of iodine, the prophylactic administration of non-radioactive iodine can produce an iodine blockade. Potassium iodide in tablet form (colloquially known as "iodine tablets") reduces the uptake of radioactive iodine into the thyroid by a factor of 90 or more, thus acting as a radioprotector. [ 160 ] All other radiation damage remains unaffected by taking iodine tablets. In Germany, the Potassium Iodide Ordinance (KIV) was enacted in 2003 to ensure "the supply of the population with potassium iodide-containing medicines in the event of radiological incidents" (§ 1 KIV, in German).
Potassium iodide is usually stored in communities near nuclear facilities for distribution to the population in the event of a disaster. [ 161 ] People over the age of 45 should not take iodine tablets because the risk of side effects is higher than the risk of developing thyroid cancer. In Switzerland, as a precautionary measure, tablets have been distributed every five years since 2004 to the population living within 20 km of nuclear power plants (from 2014, 50 km). [ 162 ] [ 163 ] In Austria, large stocks of iodine tablets have been kept in pharmacies, kindergartens, schools, the army and the federal reserve since 2002. [ 164 ]
Thanks to the protective function of radioprotectors, the dose of radiation used to treat malignant tumors (cancer) can be increased, thereby increasing the effectiveness of the therapy. [ 165 ] There are also radiosensitizers , which increase the sensitivity of malignant tumor cells to ionizing radiation. [ 166 ] As early as 1921, the German radiologist Hermann Holthusen (1886-1971) described that oxygen increases the sensitivity of cells. [ 167 ]
Founded in 1957 as a sub-organization of the Organization for Economic Cooperation and Development (OECD), the Nuclear Energy Agency (NEA) pools the scientific and financial resources of participating countries' nuclear research programs. It operates various databases and also manages the International Reporting System for Operating Experience (IRS or IAEA/NEA Incident Reporting System) of the International Atomic Energy Agency (IAEA). The IAEA records and investigates radiation accidents that have occurred worldwide in connection with nuclear medical procedures and the disposal of related materials. [ 168 ]
The International Nuclear and Radiological Event Scale (INES) is a scale for safety-related events, in particular nuclear incidents and accidents in nuclear facilities. It was developed by an international group of experts and officially adopted in 1990 by the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD). [ 169 ] The purpose of the scale is to inform the public quickly about the safety significance of an event by means of a comprehensible classification of events.
At the end of a radiation source's useful life, the proper disposal of the remaining high activity is of paramount importance. Improper disposal of the radionuclide cobalt-60, used in cobalt guns for radiotherapy, has led to serious radiation accidents, such as the Ciudad Juárez (Mexico) radiological accident in 1983/84, [ 170 ] the Goiânia (Brazil) accident in 1987, the Samut Prakan (Thailand) nuclear accident in 2000, and the Mayapuri (India) accident in 2010. [ 171 ]
Eleven Therac-25 linear accelerators were built by the Canadian company Atomic Energy of Canada Limited (AECL) between 1982 and 1985 and installed in clinics in the United States and Canada. Software errors and a lack of quality assurance led to serious malfunctions that killed three patients and seriously injured three others between June 1985 and 1987, before appropriate countermeasures were taken. The radiation exposure in the six cases was subsequently estimated at between 40 and 200 gray; normal treatment corresponds to a dose of less than 2 gray. [ 172 ] [ 173 ]
Around 1990, about one hundred cobalt guns were still in use in Germany. In the meantime, electron linear accelerators were introduced and the last cobalt gun was decommissioned in 2000. [ 174 ]
The Fukushima nuclear accident in 2011 reinforced the need for proper safety management and the derivation of safety indicators regarding the frequency of errors and incorrect actions by personnel, i.e., the human factor . [ 175 ] The Nuclear Safety Commission of Japan ( Japanese : 原子力安全委員会 ) was a body of scientists that advised the Japanese government on nuclear safety issues. The commission was established in 1978, [ 176 ] but was dissolved after the Fukushima nuclear disaster on September 19, 2012, and replaced by the Genshiryoku Kisei Iinkai [ 177 ] ( Japanese : 原子力規制委員会 , lit. 'Nuclear Regulatory Committee'). It is an independent agency ( gaikyoku , "external office") of the Japanese Ministry of the Environment that regulates and monitors the safety of Japan's nuclear power plants and related facilities.
As a result of the Chernobyl nuclear disaster in 1986, the IAEA coined the term " safety culture " in 1991 to draw attention to the importance of human and organizational factors for the safe operation of nuclear power plants.
After this nuclear disaster, the sand in children's playgrounds in Germany was removed and replaced with uncontaminated sand to protect children, who were most vulnerable to radioactivity. Some families temporarily left Germany to escape the fallout. Infant mortality increased significantly, by 5%, in 1987, the year after Chernobyl. [ 178 ] In total, 316 more newborns died that year than statistically expected. In Germany, the caesium-137 inventories from the Chernobyl nuclear disaster in soil and food decrease by 2-3% each year; however, the contamination of game and mushrooms was still comparatively high in 2015, especially in Bavaria, and there have been several cases of game meat , especially wild boar , exceeding the limits. [ 179 ] However, controls are insufficient. [ 180 ] [ 181 ]
"In particular, wild boars in southern Bavaria are repeatedly found to have a very high radioactive contamination of over 10,000 Becquerel/kg. The limit is 600 Becquerel/kg. For this reason, the Bavarian Consumer Center advises against eating wild boar from the Bavarian Forest and south of the Danube too often. Whoever buys wild boar from a hunter, should ask for the measurement protocol."
Between 1969 and 1982, conditioned low- and intermediate-level radioactive waste was disposed of in the Atlantic Ocean at a depth of about 4,000 meters under the supervision of the Nuclear Energy Agency (NEA) of the Organization for Economic Cooperation and Development (OECD) in accordance with the provisions of the European Convention on the Prevention of Marine Pollution by Dumping of Waste of All Kinds (London Dumping Convention of June 11, 1974). This was carried out jointly by several European countries. [ 27 ] Since 1993, international treaties have prohibited the dumping of radioactive waste in the oceans. [ 183 ] For decades, this dumping of nuclear waste went largely unnoticed by the public until Greenpeace denounced it in the 1980s.
Since the commissioning of the first commercial nuclear power plants (USA 1956, Germany 1962), various final storage concepts for radioactive materials have been proposed in the following decades, of which only storage in deep geological formations appeared to be safe and feasible within a reasonable period of time and was pursued further. Due to the high activity of the short-lived fission products, spent fuel is initially handled only under water and stored for several years in a decay pool. The water is used for cooling and also shields much of the emitted radiation. This is followed either by reprocessing or by decades of interim storage. Waste from reprocessing must also be stored temporarily until the heat has decreased enough to allow final disposal. Casks are special containers for the storage and transport of highly radioactive materials. Their maximum permissible dose rate is 0.35 mSv/h, of which a maximum of 0.25 mSv/h is due to neutron radiation. The safety of these transport containers has been discussed every three years since 1980 at the International Symposium on the Packaging and Transportation of Radioactive Materials (PATRAM). [ 184 ]
Following various experiments, such as the Gorleben exploratory mine or the Asse mine , a working group on the selection procedure for repository sites (AkEnd) developed recommendations for a new selection procedure for repository sites between 1999 and 2002. [ 185 ] In Germany, the Site Selection Act was passed in 2013 and the Act on the Further Development of the Site Search was passed on March 23, 2017. A suitable site is to be sought throughout Germany and identified by 2031. In principle, crystalline (granite), salt or clay rock types can be considered for a repository. There will be no "ideal" site. The "best possible" site will be sought. Mining areas and regions where volcanoes have been active or where there is a risk of earthquakes are excluded. Internationally, experts are advocating storage in rock formations several hundred meters below the earth's surface. This involves building a repository mine and storing the waste there. It is then permanently sealed. Geological and technical barriers surrounding the waste are designed to keep it safe for thousands of years. For example, 300 meters of rock will separate the repository from the earth's surface. [ 186 ] It will be surrounded by a 100-meter-thick layer of granite, salt or clay. The first waste is not expected to be stored until 2050. [ 187 ]
The Federal Office for the Safety of Nuclear Waste Management (BfE) took up its activities on September 1, 2014. [ 188 ] Its remit includes tasks relating to nuclear safety, the safety of nuclear waste management, the site selection procedure including research activities in these areas and, later on, further tasks in the area of licensing and supervision of repositories.
In the USA, Yucca Mountain was initially selected as the final storage site, but this project was temporarily halted in February 2009. Yucca Mountain was the starting point for an investigation into atomic semiotics.
The operation of nuclear power plants and other nuclear facilities produces radioactive materials that can have lethal health effects for thousands of years. No institution is capable of maintaining the necessary knowledge of the dangers over such periods, or of ensuring that warnings about the dangers of nuclear waste in repositories will still be understood by posterity in the distant future. Even appropriately labeled capsules of the radionuclide cobalt-60 have gone unnoticed; their improper disposal led to the capsules being opened, with fatal consequences. The time scales involved exceed all previous human standards. Cuneiform writing, for instance, which is only about 5000 years old (about 150 human generations), can be understood only by experts after long research. In 1981, research into the development of atomic semiotics began in the USA; [ 189 ] in the German-speaking world, Roland Posner (1942-2020) of the Center for Semiotics at Technische Universität Berlin worked on the problem in 1982/83. [ 190 ] In the USA, the time horizon for such warning signs was initially set at 10,000 years; later, as in Germany, it was set at one million years, corresponding to about 30,000 (human) generations. To date, no satisfactory solution to this problem has been found.
In 1912, Victor Franz Hess (1883-1964) discovered (secondary) cosmic rays in the Earth's atmosphere using balloon flights. For this discovery he received the Nobel Prize in Physics in 1936. He was also one of the "martyrs" of early radiation research and had to undergo a thumb amputation and larynx surgery due to radium burns. [ 191 ] In the United States and the Soviet Union, balloon flights to altitudes of about 30 km, followed by parachute jumps from the stratosphere , were conducted before 1960 to study human exposure to cosmic radiation in space. The American Manhigh and Excelsior projects with Joseph Kittinger (1928-2022) became particularly well known, but the Soviet parachutist Yevgeny Andreyev (1926-2000) also set new records. [ 192 ]
High-energy radiation from space is much stronger at high altitudes than at sea level. The radiation exposure of flight crews and air travelers is therefore increased. The International Commission on Radiological Protection (ICRP) has issued recommendations for dose limits, which were incorporated into European law in 1996 and into the German Radiation Protection Ordinance in 2001. Radiation exposure is particularly high when flying in the polar regions or over the polar route . [ 193 ] The average annual effective dose for aviation personnel was 1.9 mSv in 2015 and 2.0 mSv in 2016. The highest annual personal dose was 5.7 mSv in 2015 and 6.0 mSv in 2016. [ 194 ] The collective dose for 2015 was about 76 person-Sv. This means that flight personnel are among the occupational groups in Germany with the highest radiation exposure in terms of collective dose and average annual dose. [ 195 ] This group also includes frequent flyers , with Thomas Stuker holding the "record" - also in terms of radiation exposure - by reaching the 10 million mile mark with United Airlines MileagePlus on 5,900 flights between 1982 and the summer of 2011. [ 196 ] In 2017, he passed the 18 million mile mark.
The program EPCARD (European Program Package for the Calculation of Aviation Route Dose), developed at the University of Siegen and Helmholtz Munich, can be used to calculate the dose from all components of natural penetrating cosmic radiation on any flight route and flight profile - also online. [ 197 ]
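The route-dose arithmetic behind such programs can be illustrated with a toy calculation. The sketch below is not EPCARD's model: the per-segment dose rates are rough assumptions of the magnitude reported for cruising altitudes, and the flight profile is invented.

```python
# Toy estimate of a route dose by integrating an assumed dose-rate profile.
# The altitude dependence below is illustrative only; EPCARD uses validated
# physical models of cosmic radiation that are not reproduced here.

# (segment duration in hours, assumed effective dose rate in microsieverts/hour)
flight_profile = [
    (0.5, 1.0),   # climb through lower altitudes
    (7.0, 5.0),   # cruise at ~11 km on a polar route (assumed rate)
    (0.5, 1.0),   # descent
]

route_dose_usv = sum(hours * rate for hours, rate in flight_profile)
print(f"Estimated route dose: {route_dose_usv:.1f} microsieverts")
# About 36 uSv here; roughly 55 such flights would reach the crew average
# annual effective dose of about 2 mSv cited above.
```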
From the earliest crewed space flights to the first moon landing and the construction of the International Space Station (ISS), radiation protection has been a major concern. Spacesuits used for extravehicular activities are coated on the outside with aluminum , which largely protects against cosmic radiation. The largest international research project to determine the effective dose or effective dose equivalent was the Matroshka experiment in 2010, named after the Russian matryoshka dolls because it uses a human-sized phantom that can be cut into slices. [ 198 ] As part of Matroshka, an anthropomorphic phantom was exposed to the outside of the space station for the first time to simulate an astronaut performing an extravehicular activity (spacewalk) and determine their exposure to radiation. [ 199 ] [ 200 ] Microelectronics on satellites must also be protected from radiation.
Japanese scientists from the Japan Aerospace Exploration Agency (JAXA) have discovered a huge cave on the moon with their Kaguya lunar probe, which could offer astronauts protection from dangerous radiation during future lunar landings, especially during the planned stopover of a Mars mission. [ 201 ] [ 202 ]
As part of a human mission to Mars , astronauts must be protected from cosmic radiation. During Curiosity 's mission to Mars, a Radiation Assessment Detector (RAD) was used to measure radiation exposure. [ 203 ] The radiation exposure of 1.8 millisieverts per day was mainly due to the constant presence of high-energy galactic particle radiation. In contrast, radiation from the sun accounted for only about three to five percent of the radiation levels measured during Curiosity's flight to Mars. On the way to Mars, the RAD instrument detected a total of five major radiation events caused by solar flares . [ 204 ] One concept for protecting the astronauts envisions surrounding the spacecraft with a plasma bubble as an energy shield, whose magnetic field would shield the crew from cosmic radiation. This would eliminate the need for conventional radiation shields, which are several centimeters thick and correspondingly heavy. [ 205 ] In the Space Radiation Superconducting Shield (SR2S) project, completed in December 2015, magnesium diboride was found to be a suitable material for generating such a protective magnetic field. [ 206 ]
Dosimeters are instruments used to measure radiation dose - as absorbed dose or dose equivalent - and are an important cornerstone of radiation protection.
At the October 1907 meeting of the American Roentgen Ray Society , Rome Vernon Wagner , an X-ray tube manufacturer, reported that he had begun carrying a photographic plate in his pocket and developing it every evening. This allowed him to determine how much radiation he had been exposed to. This was the forerunner of the film dosimeter . His efforts came too late, as he had already developed cancer and died six months after the conference.
In the 1920s, the physical chemist John Eggert (1891-1973) played a key role in the introduction of film dosimetry for routine personal monitoring. Since then, it has been successively improved; in particular, the evaluation technique has been automated since the 1960s. [ 207 ] In the same period, Hermann Joseph Muller (1890-1967) discovered mutations as genetic consequences of X-rays, for which he was awarded the Nobel Prize in 1946, and the roentgen (R) was introduced as a unit for the quantitative measurement of radiation exposure.
A film dosimeter is divided into multiple segments, each containing a radiation-sensitive film surrounded by copper and lead filters of varying thickness. The degree to which radiation penetrates these filters determines whether and how strongly each segment is blackened. The radiation absorbed during the measurement period accumulates in the film, and the dose can be determined from the blackening. Guidelines for evaluation exist, with those for Germany published in 1994 and last updated on December 8, 2003. [ 208 ]
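The evaluation step can be sketched as interpolation along a calibration curve that maps net blackening (optical density) to dose. The calibration points below are hypothetical; in practice each film batch and filter segment is calibrated individually.

```python
# Minimal sketch: converting film blackening (net optical density) to dose
# via a calibration curve. All calibration values are hypothetical.

# hypothetical calibration: (net optical density, dose in millisieverts)
calibration = [(0.0, 0.0), (0.2, 0.5), (0.5, 2.0), (1.0, 6.0), (2.0, 20.0)]

def density_to_dose(od: float) -> float:
    """Linearly interpolate the dose between neighboring calibration points."""
    for (od_lo, d_lo), (od_hi, d_hi) in zip(calibration, calibration[1:]):
        if od_lo <= od <= od_hi:
            frac = (od - od_lo) / (od_hi - od_lo)
            return d_lo + frac * (d_hi - d_lo)
    raise ValueError("optical density outside calibrated range")

print(f"{density_to_dose(0.35):.2f} mSv")  # a blackening between two calibration points
```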
With the invention of the Geiger gaseous ionization detector in 1913, which became the Geiger-Müller gaseous ionization detector in 1928 - named after the physicists Hans Geiger (1882-1945) and Walther Müller (1905-1979) - the individual particles or quanta of ionizing radiation could be detected and measured. Detectors developed later, such as proportional counters or scintillation counters , which not only "count" but also measure energy and distinguish between types of radiation, also became important for radiation protection. Scintillation measurement is one of the oldest methods of detecting ionizing radiation or X-rays; originally, a zinc sulfide screen was held in the path of the beam and the scintillation events were either counted as flashes or, in the case of X-ray diagnostics, viewed as an image. A scintillation counter known as a spinthariscope was developed in 1903 by William Crookes (1832-1919) [ 209 ] and used by Ernest Rutherford (1871-1937) to study the scattering of alpha particles from atomic nuclei.
Lithium fluoride had already been proposed in the USA in 1950 by Farrington Daniels (1889-1972), Charles A. Boyd and Donald F. Saunders (1924-2013) for solid-state dosimetry using thermoluminescent dosimeters . The intensity of the thermoluminescent light is proportional to the amount of radiation previously absorbed. This type of dosimetry has been used since 1953 in the treatment of cancer patients and wherever people are occupationally exposed to radiation. [ 210 ] The thermoluminescence dosimeter was followed by OSL dosimetry, which is not based on heat but on optically stimulated luminescence and was developed by Zenobia Jacobs and Richard Roberts at the University of Wollongong (Australia). [ 211 ] The detector emits the stored energy as light. The light output, measured with photomultipliers , is then a measure of the dose. [ 212 ]
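The readout principle of a thermoluminescent dosimeter can be sketched as follows; the glow-curve counts, the background, and the calibration factor are hypothetical, standing in for values a laboratory would obtain from reference irradiations.

```python
# Sketch of TLD readout: the integrated glow-curve light output is taken as
# proportional to the previously absorbed dose. All numbers are hypothetical.

glow_curve_counts = [120, 480, 1900, 3500, 2600, 900, 150]  # photomultiplier counts per interval
background_counts = 100 * len(glow_curve_counts)            # assumed constant background

calibration_factor_mgy_per_count = 2.0e-4  # hypothetical, from reference irradiations

net_light_sum = sum(glow_curve_counts) - background_counts
dose_mgy = net_light_sum * calibration_factor_mgy_per_count
print(f"Estimated absorbed dose: {dose_mgy:.2f} mGy")
```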
Since 2003, whole-body counters have been used in radiation protection to monitor the absorption (incorporation) of radionuclides in people who handle gamma-emitting open radioactive materials and who may be contaminated through food, inhalation of dusts and gases, or open wounds (alpha and beta emitters are not measurable this way). [ 213 ]
Constancy testing is the verification of reference values as part of quality assurance in x-ray diagnostics , nuclear medicine diagnostics, and radiotherapy . National regulations specify [ 214 ] [ 215 ] which parameters are to be tested, which limits are to be observed, and which test methods and test samples are to be used. In Germany, the Radiation Protection in Medicine Directive and the relevant DIN 6855 standard in nuclear medicine require regular (in some cases daily) constancy testing. Test sources are used to check the response of probe measuring stations as well as in vivo and in vitro measuring stations. Before starting the tests, the background count rate and the setting of the energy window must be checked every working day, and the settings and the yield must be checked with reproducible geometry at least once a week with a suitable test source, e.g. caesium-137 (DIN 6855-1). [ 216 ] The reference values for the constancy test are determined during the acceptance test.
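A daily constancy check of this kind amounts to comparing measured values against the acceptance-test references. In the sketch below, the count rates and the 10% tolerance are assumptions for illustration, not values taken from DIN 6855-1.

```python
# Sketch of a daily constancy check on a probe measuring station: background
# and test-source count rates are compared with reference values from the
# acceptance test. Tolerances and rates are assumed, not normative.

reference_background_cps = 2.5    # from the acceptance test
reference_source_cps = 1450.0     # Cs-137 check source (decay-corrected reference)

def within_tolerance(measured: float, reference: float, tol: float = 0.10) -> bool:
    """True if the measured value lies within +/- tol of the reference."""
    return abs(measured - reference) <= tol * reference

measured_background_cps = 2.7
measured_source_cps = 1398.0

ok = (within_tolerance(measured_background_cps, reference_background_cps)
      and within_tolerance(measured_source_cps, reference_source_cps))
print("constancy test passed" if ok else "constancy test failed - investigate before clinical use")
```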
Compact test specimens for medical X-ray images were not created until 1982. Prior to this, patients themselves often served as test objects for producing X-ray test images. Prototypes of such an X-ray phantom with integrated structures were developed by Thomas Bronder at the Physikalisch-Technische Bundesanstalt . [ 217 ] [ 218 ]
A water phantom is a Plexiglas container filled with distilled water that is used as a substitute for living tissue to test electron linear accelerators used in radiation therapy. According to regulatory requirements, water phantom testing must be performed approximately every three months to ensure that the radiation dose delivered by the treatment system is consistent with the radiation planning . [ 219 ]
The Alderson-Rando phantom, invented by Samuel W. Alderson (1914-2005), has become the standard X-ray phantom. It was followed by the Alderson Radio Therapy (ART) phantom, which he patented in 1967. The ART phantom is cut horizontally into 2.5 cm thick slices. Each slice has holes sealed with bone-equivalent, soft-tissue-equivalent, or lung-equivalent pins that can be replaced by thermoluminescent dosimeters. Alderson is also known as the inventor of the crash test dummy . [ 220 ]
As a result of accidents or improper use and disposal of radiation sources, a significant number of people are exposed to varying degrees of radiation. Radioactivity and local dose measurements are not sufficient to fully assess the effects of radiation. To retrospectively determine the individual radiation dose, measurements are made on teeth, i.e. on biological, endogenous materials. Tooth enamel is particularly suitable for the detection of ionizing radiation due to its high mineral content ( hydroxyapatite ), which has been known since 1968 thanks to the research of John M. Brady, Norman O. Aarestad and Harold M. Swartz. [ 221 ] The measurements are performed on milk teeth , preferably molars, using electron paramagnetic resonance spectroscopy (ESR, EPR). The concentration of radicals generated by ionizing radiation is measured in the mineral part of the tooth. Due to the high stability of the radicals, this method can be used for dosimetry of long past exposures. [ 222 ] [ 223 ]
Since about 1988, in addition to physical dosimetry, biological dosimetry has made it possible to reconstruct the individual dose of ionizing radiation. This is especially important for unforeseen and accidental exposures, where radiation exposure occurs without physical dose monitoring. Biological markers, particularly cytogenetic markers in blood lymphocytes, are used for this purpose. Techniques for detecting radiation damage include analyzing dicentric chromosomes after acute radiation exposure. Dicentric chromosomes result from the defective repair of breaks in two chromosomes and carry two centromeres instead of the usual one. Symmetric translocations, detected through fluorescence in situ hybridization (FISH), are used after chronic or long-term exposure to radiation. The micronucleus test and the premature chromosome condensation (PCC) test are available to measure acute exposure. [ 224 ] [ 225 ]
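For dicentrics after acute exposure, laboratories typically fit a linear-quadratic calibration curve Y = c + αD + βD² to reference irradiations and invert it to estimate the dose from an observed yield. A minimal sketch, with illustrative coefficients rather than any laboratory's real calibration:

```python
import math

# Dose reconstruction from an observed dicentric yield using a
# linear-quadratic calibration curve Y = c + alpha*D + beta*D^2.
# The coefficients are illustrative; real laboratories establish their own
# calibration curves for each radiation quality.

c = 0.001       # background dicentrics per cell
alpha = 0.03    # dicentrics per cell per gray (assumed)
beta = 0.06     # dicentrics per cell per gray squared (assumed)

def dose_from_yield(y: float) -> float:
    """Solve beta*D^2 + alpha*D + (c - y) = 0 for the positive root."""
    disc = alpha**2 - 4 * beta * (c - y)
    return (-alpha + math.sqrt(disc)) / (2 * beta)

dicentrics, cells = 48, 500    # scored after a suspected acute exposure
y = dicentrics / cells         # observed yield: 0.096 dicentrics per cell
print(f"Estimated whole-body dose: {dose_from_yield(y):.2f} Gy")  # about 1 Gy
```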
In principle, reducing the exposure of the human organism to ionizing radiation to zero is not possible and perhaps not even sensible. The human organism has always been exposed to natural radioactivity, which ultimately also triggers mutations (changes in genetic material ) - a driving force in the development of life on earth. The mutation-inducing effect of high-energy radiation was first demonstrated in 1927 by Hermann Joseph Muller (1890-1967). [ 226 ]
Three years after its establishment in 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation adopted the Linear No-Threshold (LNT) model - a linear dose-effect relationship without a threshold - largely at the instigation of the Soviet Union. The dose-response relationship measured at high doses was extrapolated linearly to low doses. There would be no threshold, since even the smallest amounts of ionizing radiation would trigger some biological effect. [ 227 ] The LNT model ignores not only possible radiation hormesis , but also the known ability of cells to repair genetic damage and the ability of the organism to remove damaged cells. [ 228 ] [ 229 ] [ 230 ] Between 1963 and 1969, John W. Gofman (1918-2007) and Arthur R. Tamplin of the University of California, Berkeley , conducted research for the United States Atomic Energy Commission (USAEC, 1946-1974) investigating the relationship between radiation doses and cancer incidence. Their findings sparked a fierce controversy in the United States beginning in 1969. Starting in 1970, Ernest J. Sternglass , a radiologist at the University of Pittsburgh , published several studies describing the effect of radiation from nuclear tests and the vicinity of nuclear power plants on infant mortality. In 1971, the USAEC reduced the maximum allowable radiation dose by a factor of 100. Subsequently, nuclear technology was based on the principle of "As Low As Reasonably Achievable" ( ALARA ). This was a coherent principle as long as it was assumed that there was no threshold and that all doses were additive. In the meantime, a transition to "As High As Reasonably Safe" (AHARS) is increasingly being discussed; for the question of evacuation after accidents, proponents argue that such a transition is absolutely necessary. [ 231 ] In both the Chernobyl and Fukushima cases, hasty, poorly organized and poorly communicated evacuations caused psychological and physical damage to those affected - including documented deaths in the case of Fukushima. [ 232 ] [ 233 ] [ 234 ] By some estimates, this damage is greater than would have been expected had the evacuation not taken place. [ 235 ] [ 236 ] [ 237 ] Voices such as Geraldine Thomas therefore question such evacuations in principle and call for a transition to shelter-in-place wherever possible. [ 238 ] [ 239 ]
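The arithmetic that makes LNT controversial is its strict additivity: tiny doses to very many people project to a definite number of excess cases. A minimal sketch, assuming a nominal risk coefficient on the order of the ICRP's roughly 5% per sievert and an invented population:

```python
# LNT arithmetic: risk is assumed strictly proportional to dose, with no
# threshold, so small doses to many people are treated as additive.
# The coefficient is the order of magnitude used in such estimates; the
# population figures are invented for illustration.

risk_per_sievert = 0.05        # assumed lifetime stochastic risk coefficient
individual_dose_sv = 0.001     # 1 mSv to each person
population = 1_000_000

collective_dose_person_sv = individual_dose_sv * population  # 1000 person-Sv
excess_cases = collective_dose_person_sv * risk_per_sievert
print(f"LNT-projected excess cases: {excess_cases:.0f}")
# Yields ~50 projected cases; critics argue that summing such tiny individual
# risks ignores the repair mechanisms discussed above.
```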
The British physicist and radiologist Louis Harold Gray (1905-1965), a founder of radiobiology , introduced the unit rad (acronym for radiation absorbed dose) in the 1930s; the corresponding SI unit was named the gray (Gy) after him in 1978. One gray is a mass-specific quantity corresponding to an energy of one joule absorbed per kilogram of matter. Acute whole-body exposures in excess of four Gy are usually fatal to humans.
The different types of radiation ionize to different degrees. Ionization is any process in which one or more electrons are removed from an atom or molecule, leaving the atom or molecule as a positively charged ion ( cation ). Each type of radiation is therefore assigned a dimensionless weighting factor that expresses its biological effectiveness. For X-rays, gamma and beta radiation, the factor is one, alpha radiation reaches a factor of twenty, and for neutron radiation it is between five and twenty, depending on the energy.
Multiplying the absorbed dose in Gy by the weighting factor gives the equivalent dose , expressed in Sievert (Sv). It is named after the Swedish physician and physicist Rolf Maximilian Sievert (1896-1966). Sievert was the founder of radiation protection research and developed the Sievert chamber in 1929 to measure the intensity of X-rays. He founded the International Commission on Radiation Units and Measurements (ICRU) and later became chairman of the International Commission on Radiological Protection (ICRP). [ 240 ] The ICRU and ICRP specify differently defined weighting factors that apply to environmental measurements (quality factor) and body-related dose equivalent data (radiation weighting factor).
In relation to the body, the relevant dose term is the Organ Equivalent Dose (formerly "Organ Dose"). This is the dose equivalent averaged over an organ. Multiplied by organ-specific tissue weighting factors and summed over all organs, the effective dose is obtained, which represents a dose balance. In relation to environmental measurements, the ambient dose equivalent or local dose is relevant. Its increase over time is called the local dose rate.
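The chain from absorbed dose to effective dose can be sketched numerically. The radiation weighting factors and the ICRP 103 tissue weighting factors below are published values, but only a few tissues are included and the organ doses are invented, so the result is a partial sum for illustration only.

```python
# Sketch of the dose chain described above: absorbed organ doses (Gy) are
# weighted by the radiation weighting factor w_R to give organ equivalent
# doses (Sv), which are weighted by tissue weighting factors w_T and summed
# to an effective dose. A real calculation sums over all ICRP tissues.

w_R = {"photon": 1.0, "alpha": 20.0}  # radiation weighting factors

# tissue weighting factors for these tissues, as published in ICRP 103
w_T = {"lung": 0.12, "stomach": 0.12, "liver": 0.04, "thyroid": 0.04}

# absorbed dose per organ and radiation type, in gray (invented numbers)
absorbed_gy = {
    "lung":    {"photon": 0.002, "alpha": 0.0001},
    "stomach": {"photon": 0.001},
    "liver":   {"photon": 0.001},
    "thyroid": {"photon": 0.003},
}

effective_sv = 0.0
for organ, doses in absorbed_gy.items():
    h_organ = sum(w_R[rad] * d for rad, d in doses.items())  # organ equivalent dose, Sv
    effective_sv += w_T[organ] * h_organ

print(f"Effective dose (partial sum): {effective_sv * 1000:.3f} mSv")
```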
Even at very low effective doses, stochastic effects (genetic and cancer risk) are expected. At effective doses above 0.1 Sv, deterministic effects also occur (tissue damage up to radiation sickness at very high doses). Correspondingly high radiation doses are now only given in units of Gy. Natural radiation exposure in Germany, with an annual average effective dose of about 0.002 Sv, is well below this range. [ 241 ]
In 1931, the U.S. Advisory Committee on X-Ray and Radium Protection (ACXRP, now the National Council on Radiation Protection and Measurements, NCRP), founded in 1929, published the results of a study on the so-called tolerance dose, which formed the basis of a scientifically grounded radiation protection guideline. Exposure limits were gradually lowered; in 1936 the tolerance dose was 0.1 R/day. [ 9 ] The unit "R" (röntgen) from the CGS unit system has been obsolete since the end of 1985. Since then, the SI unit of ion dose has been " coulomb per kilogram".
After World War II, the concept of tolerance dose was replaced by that of maximum permissible dose, and the concept of relative biological effectiveness was introduced. The limit was set in 1956 by the National Council on Radiation Protection & Measurements (NCRP) and the International Commission on Radiological Protection (ICRP) at 5 rem (50 mSv ) per year for radiation workers and 0.5 rem per year for the general population. The unit rem, a physical measure of radiation dose (from the English roentgen equivalent in man), was replaced by the unit Sv (sievert) in 1978. This was due to the advent of nuclear energy and its associated dangers. [ 242 ] Prior to 1991, the equivalent dose was used both as a measure of dose and as a term for the body dose that determines the course and survival of radiation sickness. ICRP Publication 60 [ 243 ] introduced the radiation weighting factor w_R.
The origin of the concept of using a banana equivalent dose (BED) as a benchmark is unknown. In 1995, Gary Mansfield of the Lawrence Livermore National Laboratory found the banana equivalent dose to be very useful in explaining radiation risks to the public. [ 244 ] It is not a formally used dose.
The banana equivalent dose is the dose of ionizing radiation to which a person is exposed by eating one banana. Bananas contain potassium . Natural potassium contains 0.0117% of the radioactive isotope 40 K (potassium-40) and has a specific activity of 30,346 becquerels per kilogram, or about 30 becquerels per gram. The radiation dose from eating a banana is about 0.1 μSv. [ 244 ] The value of this reference dose is set to "1" and thus becomes the "unit of measurement" banana equivalent dose. Consequently, other radiation exposures can be compared to the consumption of one banana. For example, the average daily total radiation exposure of a person is about 100 banana equivalent doses.
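The banana arithmetic can be reproduced in a few lines. The potassium content of a single banana (here 0.4 g) and the comparison flight dose are assumptions for illustration.

```python
# Reproducing the banana arithmetic above. The potassium content of a
# typical banana and the flight dose are assumed round numbers.

potassium_per_banana_g = 0.4
activity_bq_per_g_k = 30            # natural potassium, from its K-40 fraction
banana_activity_bq = potassium_per_banana_g * activity_bq_per_g_k  # ~12 Bq

dose_per_banana_usv = 0.1           # reference value cited above

flight_dose_usv = 36.0              # e.g. one long-haul flight (assumed)
print(f"Banana activity: about {banana_activity_bq:.0f} Bq")
print(f"That flight corresponds to about {flight_dose_usv / dose_per_banana_usv:.0f} bananas")
```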
At 0.17 mSv per year, almost 10 percent of natural radioactive exposure in Germany (an average of 2.1 mSv per year) is caused by the body's own (vital) potassium. [ 245 ] [ 246 ]
The banana equivalent dose does not take into account the fact that no radioactive nuclide is accumulated in the body through the consumption of potassium-containing foods. The potassium content of the body is in homeostasis and is kept constant. [ 247 ] [ 248 ]
The Trinity test was the first nuclear weapon explosion conducted as part of the US Manhattan Project . There were no warnings to residents about the fallout, nor information about shelters or possible evacuations. [ 249 ]
This was followed in 1946 by tests in the Marshall Islands (Operation Crossroads), [ 250 ] as recounted by chemist Harold Carpenter Hodge (1904-1990), toxicologist for the Manhattan Project, in his lecture (1947) as president of the International Association for Dental Research. [ 251 ] Hodge's reputation was severely damaged by historian Eileen Welsome's 1999 Pulitzer Prize-winning book The Plutonium Files: America's Secret Medical Experiments in the Cold War. She documents horrific human experiments in which the subjects (including Hodge) were unaware that they were being used as "guinea pigs" to test the safety limits of uranium and plutonium. The experiments on the unidentified subjects were continued by the United States Atomic Energy Commission (AEC) into the 1970s. [ 252 ]
The abuse of radiation continues to this day. [ 253 ] During the Cold War, ethically reprehensible radiation experiments were conducted in the United States on uninformed human subjects to determine the detailed effects of radiation on human health. Between 1945 and 1947, 18 people were injected with plutonium by Manhattan Project doctors. In Nashville, pregnant women were given radioactive mixtures. In Cincinnati, about 200 patients were irradiated over a 15-year period. In Chicago, 102 people received injections of strontium and caesium solutions. In Massachusetts, 57 children with developmental disorders were given oatmeal with radioactive markers. These radiation experiments were not stopped until 1993, under President Bill Clinton , and the injustice committed was never atoned for. [ 254 ] [ 255 ] For years, uranium hexafluoride caused radiation damage at a DuPont Company plant and to local residents. [ 256 ] At times, the plant even deliberately released uranium hexafluoride in its heated gaseous state into the surrounding area to study the effects of the radioactive and chemically aggressive gas.
Between 1978 and 1989, vehicles were checked with 137 Cs gamma sources at 17 border crossings between the German Democratic Republic and the Federal Republic of Germany. According to the Transit Agreement, vehicles could only be screened if there was reasonable suspicion. For this reason, the Ministry for State Security (Stasi) installed and operated a secret radioactive screening technology , codenamed "Technik V," which was generally used to screen all transit passengers to detect " deserters from the Republic ." Ordinary GDR customs officers were unaware of the secret radioactive screening technology and were subject to strict "entry regulations" designed to "protect" them as much as possible from radiation exposure. Lieutenant General Heinz Fiedler (1929-1993), as the highest ranking border guard of the MfS, was responsible for all radiation controls. [ 257 ] On February 17, 1995, the Radiation Protection Commission published a statement in which it said: "Even if we assume that individual persons stopped more frequently in the radiation field and that a fluoroscopy lasting up to three minutes increases the annual radiation exposure by one to a few mSv, this does not result in a dose that is harmful to health". [ 258 ] In contrast, the designer of this type of border control calculated 15 nSv per crossing. Lorenz of the former State Office for Radiation Protection and Nuclear Safety of the GDR came up with a dose estimate of 1000 nSv, which was corrected to 50 nSv a few weeks later. [ 257 ]
Radar equipment is used at airports, in airplanes, at missile sites , on tanks, and on ships. The radar technology commonly used in the 20th century produced X-rays as a technically unavoidable by-product in the high-voltage electronics of the equipment. [ 259 ] In the 1960s and 1970s, German soldiers and technicians were largely unaware of the dangers, as were those in the GDR's National People's Army . [ 259 ] The problem had been known internationally since the 1950s, and to the German Armed Forces since at least 1958. [ 260 ] However, no radiation protection measures were taken, such as the wearing of lead aprons. Until about the mid-1980s, radiation shielding was inadequate, especially for pulse switch tubes. [ 259 ] Particularly affected were maintenance technicians (radar mechanics) who were exposed to the X-ray generating parts for hours without any protection. The permissible annual limit value could be exceeded after just 3 minutes. It was not until 1976 that warning notices were put up and protective measures taken in the German Navy, and not until the early 1980s in general. [ 259 ] As late as the 1990s, the German Armed Forces denied any connection between radar equipment and cancer or genetic damage. [ 261 ] The number of victims amounted to several thousand. The connection was later acknowledged by the German Armed Forces and in many cases a supplementary pension was paid. In 2012, a foundation was set up to provide unbureaucratic compensation for the victims. [ 262 ]
The harmful effects of X-rays were deliberately exploited during the National Socialist era. The function of the gonads ( ovaries or testicles ) was destroyed by ionizing radiation, leading to infertility . In July 1942, Heinrich Himmler (1900-1945) decided to conduct forced sterilization experiments at the Auschwitz-Birkenau concentration camp , which were carried out by Horst Schumann (1906-1983), previously a doctor in Aktion T4 . [ 263 ] Each test victim had to stand between two X-ray machines, which were arranged so that the victim had just enough space between them. Opposite the X-ray machines was a booth with lead walls and a small window, from which Schumann could direct X-rays at the test victims' sexual organs without endangering himself. [ 264 ] Human radiation castration experiments were also conducted in concentration camps under the direction of Viktor Brack (1904-1948). As part of the "Law for the Prevention of Hereditary Diseases," people were often subjected to radiation castration during interrogations without their knowledge. [ 265 ] Approximately 150 radiologists from hospitals throughout Germany participated in the forced castration of approximately 7,200 people using X-rays or radium. [ 266 ]
On November 23, 2006, Alexander Alexanderovich Litvinenko (1962-2006) was murdered by means of polonium -induced radiation sickness; the circumstances have never been fully explained. [ 267 ] Polonium poisoning was also briefly suspected in the case of Yasser Arafat (1929-2004), who died in 2004.
The misuse of ionizing radiation is a radiation offence under German criminal law . The use of ionizing radiation to harm persons or property is punishable. Since 1998, the regulations can be found in § 309 StGB (previously § 311a StGB old version); they go back to § 41 AtG old version. In the Austrian Criminal Code, relevant criminal offenses are defined in the seventh section, " Criminal acts dangerous to the public " and " Criminal acts against the environment ". In Switzerland, endangerment by nuclear energy, radioactive substances or ionizing radiation is punishable under Art. 326 of the Swiss Criminal Code and disregard of safety regulations under Chapter 9 of the Nuclear Energy Act of 21 March 2003.
Originally, the term radiation protection referred only to ionizing radiation. Today, non-ionizing radiation is also included and falls within the remit of the Federal Office for Radiation Protection (Germany), the Radiation Protection Division [ 2 ] of the Federal Office of Public Health (Switzerland), [ 3 ] and the Ministry of Climate Action and Energy (Austria). [ 4 ] A comparative research project collected, evaluated and compared data on the legal situation regarding electric, magnetic and electromagnetic fields (EMF) and optical radiation in all European countries (47 countries plus Germany) and major non-European countries (China, India, Australia, Japan, Canada, New Zealand and the USA). The results varied widely and in some cases deviated from the recommendations of the International Commission on Non-Ionizing Radiation Protection (ICNIRP). [ 268 ]
For many centuries, the Inuit ( Eskimos ) have used snow goggles with narrow slits, carved from seal bones or reindeer antlers, to protect against snow blindness (photokeratitis).
In the 1960s, Australia - particularly Queensland - launched the first awareness campaign on the dangers of ultraviolet (UV) radiation in the spirit of primary prevention. In the 1980s, many countries in Europe and overseas initiated similar UV protection campaigns. UV radiation has a thermal effect on the skin and eyes and can lead to skin cancer (malignant melanoma) and eye inflammation or cataracts. [ 269 ] To protect the skin from harmful UV radiation and resulting conditions such as photodermatosis , acne aestivalis , actinic keratosis or urticaria solaris , normal clothing, special UV protective clothing (UPF 40-50) and high-SPF sunscreen can be used. The Australian-New Zealand Standard (AS/NZS 4399) of 1996 measures new textile materials in an unstretched and dry state for the manufacture of protective clothing worn while bathing, especially by children, and for the manufacture of shading textiles (sunshades, awnings). The UV Standard 801, by contrast, assumes a maximum radiation intensity with the solar spectrum of Melbourne, Australia, on January 1 (the height of the Australian summer), the most sensitive skin type of the wearer, and realistic wearing conditions. As the solar spectrum in the northern hemisphere differs from that in Australia, the measurement method according to the European standard EN 13758-1 is based on the solar spectrum of Albuquerque (New Mexico, USA), which corresponds approximately to that of southern Europe. [ 270 ]
The eyes can be protected with UV-filtering sunglasses or special goggles that also shield the sides, preventing snow blindness. A defensive reaction of the skin is the formation of a light callus, the skin's own sun protection, which corresponds to a protection factor of about 5. At the same time, the production of brown skin pigments ( melanin ) in the corresponding cells ( melanocytes ) is stimulated.
A solar control film is usually a film made of polyethylene terephthalate (PET) that is applied to windows to reduce the light and heat from the sun's rays. The film filters UV-A and UV-B radiation. Polyethylene terephthalate goes back to an invention by the two Englishmen John Rex Whinfield (1902-1966) and James Tennant Dickson in 1941.
The fact that UV-B radiation (Dorno radiation, after Carl Dorno (1865-1942)) is a proven carcinogen but is also required for the body's own synthesis of vitamin D 3 (cholecalciferol) leads to internationally conflicting recommendations regarding health-promoting UV exposure. [ 271 ] In 2014, based on the scientific evidence of recent decades, 20 scientific authorities, professional societies and associations from the fields of radiation protection, health, risk assessment, medicine and nutrition published a recommendation on "UV exposure for the formation of the body's own vitamin D" - the first interdisciplinary recommendation on this topic worldwide. Using a solarium for the first time at a young age (under 35) almost doubles the risk of developing malignant melanoma. In Germany, the use of tanning beds by minors has been prohibited by law since March 2010. As of August 1, 2012, sunbeds must not exceed a maximum erythema-effective irradiance of 0.3 watts per square meter of skin, and they must be labeled accordingly. This irradiance limit corresponds to the highest UV irradiance that can be measured on Earth at 12 noon under a cloudless sky at the equator. [ 272 ]
The minimum erythema dose (MED) is determined for medical applications. The MED is defined as the lowest dose of radiation that produces a barely visible erythema ; it is read 24 hours after the test irradiation. The test is performed with the type of lamp intended for the therapy by applying a graded series of test doses (a "light staircase") to skin that is not normally exposed to light (for example, on the buttocks). [ 273 ]
Richard Küch (1860-1915) was able to melt quartz glass - the basis for UV radiation sources - for the first time in 1890 and founded the Heraeus Quarzschmelze . He developed the first quartz lamp (sun lamp) for generating UV radiation in 1904, thus laying the foundation for this form of light therapy.
Despite the dosage problems, doctors increasingly used quartz lamps in the early 20th century. Internal medicine specialists and dermatologists were among the most eager testers. After successful treatment of skin tuberculosis , internists began to treat tuberculous pleurisy , glandular tuberculosis and intestinal tuberculosis. In addition, doctors tested the effect of quartz lamps on other infectious diseases such as syphilis , metabolic diseases , cardiovascular diseases , nerve pain such as sciatica , and nervous diseases such as neurasthenia and hysteria . In dermatology, fungal diseases , ulcers and wounds, psoriasis , acne , freckles and hair loss were also treated, while in gynecology, quartz lamps were used against abdominal diseases. Rejuvenation specialists used artificial high-altitude sunlight to stimulate gonadal activity and treated infertility, impotentia generandi (inability to conceive), and lack of sexual desire by irradiating the genitals. For this purpose, Philipp Keller (1891-1973) developed an erythema dosimeter with which he measured the amount of radiation not in Finsen units (UV radiation with a wavelength λ of 296.7 nm and an irradiance E of 10⁻⁵ W/m²), but in sun-lamp units (Höhensonneneinheiten, HSE). It was the only such instrument in use around 1930, but it was not widely accepted in medical circles. [ 274 ] [ 275 ]
Treatment of acne with ultraviolet radiation is still controversial. Although UV radiation can have an antibacterial effect, it can also induce proliferative hyperkeratosis . This can lead to the formation of comedones ("blackheads"). Phototoxic effects may also occur. In addition, it is carcinogenic and promotes skin aging. UV therapy is increasingly being abandoned in favor of photodynamic therapy . [ 276 ]
The ruby laser was developed in 1960 by Theodore Maiman (1927-2007) as the first laser, based on the ruby maser . Soon after, the dangers of lasers were recognized, especially for the eyes and skin, due to the laser's low penetration depth. Lasers have numerous applications in technology and research as well as in everyday life, from simple laser pointers to distance measuring devices , cutting and welding tools , playback of optical storage media such as CDs, DVDs and Blu-ray discs, communication, laser scalpels and other devices using laser light in everyday medical practice. The Radiation Protection Commission requires that laser applications on human skin be performed only by a specially trained physician. Lasers are also used for show effects in discotheques and at events.
Lasers can cause biological damage due to the properties of their radiation and their sometimes extremely concentrated electromagnetic power. For this reason, lasers must be labeled with standardized warnings depending on the laser class . The classification is based on the DIN standard EN 60825-1 , which distinguishes between ranges of wavelengths and exposure times that lead to characteristic injuries and injury thresholds for power or energy density .
The CO2 laser was developed in 1964 by the Indian electrical engineer and physicist Chandra Kumar Naranbhai Patel (born 1938) [ 277 ] at Bell Laboratories, at the same time as the Nd:YAG laser (neodymium-doped yttrium aluminum garnet laser) by LeGrand Van Uitert (1922-1999) and Joseph E. Geusic (born 1931) and the Er:YAG laser (erbium-doped yttrium aluminum garnet laser); it has been used in dentistry since the early 1970s. In the hard-laser field, two systems in particular have emerged for use in the oral cavity: the CO2 laser for soft tissue and the Er:YAG laser for dental hard and soft tissue. The goal of soft-laser treatment is to achieve biostimulation with low energy densities. [ 278 ]
The Commission on Radiological Protection strongly recommends that the possession and purchase of class 3B and 4 laser pointers be regulated by law to prevent misuse. [ 279 ] This is due to the increase in dangerous dazzle attacks caused by high-power laser pointers. In addition to pilots, these include truck and car drivers, train operators, soccer players, referees, and even spectators at soccer games. [ 280 ] Such glare can lead to serious accidents and, in the case of pilots and truck drivers, to occupational disability due to eye damage. The first accident prevention regulation was published on April 1, 1988 as BGV B2, followed on January 1, 1997 by DGUV Regulation 11 of the German Social Accident Insurance. [ 281 ] Between January and mid-September 2010, the German Federal Aviation Office registered 229 dazzle attacks on helicopters and airplanes of German airlines nationwide. [ 282 ] On October 18, 2017, a perpetrator of a dazzle attack on a federal police helicopter was sentenced to one year and six months in prison without parole. [ 283 ]
Electrosmog is colloquially understood as the exposure of humans and the environment to electric, magnetic and electromagnetic fields , some of which are believed to have undesirable biological effects. [ 284 ] Electromagnetic environmental compatibility (EMC) refers to the effects on living organisms, some of which are considered electrosensitive . Fears of such effects have existed since the beginning of technological use in the mid-19th century. In 1890, for example, officials of the Royal General Directorate in Bavaria were forbidden to attend the opening ceremony of Germany's first alternating current power plant, the Reichenhall Electricity Works, or to enter the machine room. With the establishment of the first radio telegraphy and its telegraph stations, the U.S. magazine The Atlanta Constitution reported in April 1911 on the potential dangers of radio telegraph waves, which, in addition to "tooth loss," were said to cause hair loss and make people "crazy" over time. [ 285 ] Full-body protection was recommended as a preventive measure.
During the second half of the 20th century, other sources of electromagnetic fields have become the focus of health concerns, such as power lines, photovoltaic systems , microwave ovens, computer and television screens, security devices, radar equipment, and more recently, cordless telephones ( DECT ), cell phones, their base stations , energy-saving lamps , and Bluetooth connections. Electrified railroad lines, tram overhead lines and subway tracks are also strong sources of electrosmog. In 1996, the World Health Organization (WHO) launched the EMF (ElectroMagnetic Fields) Project to bring together current knowledge and available resources from key international and national organizations and scientific institutions on electromagnetic fields. [ 286 ] [ 287 ] The German Federal Office for Radiation Protection ( BfS ) published the following recommendation in 2006:
"In order to avoid possible health risks, the German Federal Office for Radiation Protection recommends that you minimize your personal exposure to radiation through your own initiative."
As of 2016, the EMF Guideline 2016 of EUROPAEM ( European Academy For Environmental Medicine ) on the prevention, diagnosis and treatment of EMF-related complaints and diseases applies. [ 289 ]
A microwave oven , invented in 1950 by U.S. researcher Percy Spencer (1894-1970), is used to quickly heat food using microwave radiation at a frequency of 2.45 gigahertz. In an intact microwave oven, leakage radiation is relatively low due to the shielding of the cooking chamber. An "emission limit of five milliwatts per square centimeter (equivalent to 50 watts per square meter) at a distance of five centimeters from the surface of the appliance" (radiation density or power flux density) is specified. Children should not stand directly in front of or next to the appliance while food is being prepared. In addition, the Federal Office for Radiation Protection lists pregnant women as particularly at risk. [ 290 ]
In microwave therapy, electromagnetic waves are generated for heat treatment. The penetration depth and energy distribution vary depending on the frequency of application (short waves, ultra short waves, microwaves). To achieve greater penetration, pulsed microwaves are used, each of which delivers high energy to the tissue. A pulse pause ensures that no burns occur. Metal implants and pacemakers are contraindications. [ 291 ]
The discussion about possible health risks from mobile phone radiation remains controversial, as conclusive results are still lacking. According to the German Federal Office for Radiation Protection ,
"there are still uncertainties in the risk assessment that could not be completely eliminated by the German Mobile Telecommunication Research Program, in particular possible health risks of long-term exposure to high-frequency electromagnetic fields from cell phone calls in adults (intensive cell phone use over more than 10 years) and the question of whether the use of cell phones by children could have an effect on health. For these reasons, the Federal Office for Radiation Protection still considers preventive health protection (precaution) to be necessary: exposure to electromagnetic fields should be kept as low as possible."
The German Federal Office for Radiation Protection recommends, among other things, mobile phones with a low SAR (Specific Absorption Rate) [ 292 ] [ 293 ] and the use of headsets or hands-free devices to keep the mobile phone away from the head. There is some discussion that mobile phone radiation may increase the incidence of acoustic neuroma , a benign tumor arising from the vestibulocochlear nerve ; exposure should therefore be kept low. [ 294 ] In everyday life, a mobile phone transmits at maximum power only in exceptional cases. As soon as it is near a cell where maximum power is no longer needed, it is instructed by that cell to reduce its power. Electrosmog or cell phone radiation filters built into cell phones are supposed to protect against radiation, but their effect is doubtful from the point of view of electromagnetic environmental compatibility, because the phone then increases its radiated power disproportionately in order to maintain the necessary link. The same is true for use in a car without an external antenna, as the radiation can only penetrate through the windows, or in areas with poor network coverage. Since 2004, radio network repeaters have been developed for mobile phone networks ( GSM , UMTS , Tetrapol ) that can amplify the reception of a mobile phone cell in shaded buildings. This reduces the SAR value of the mobile phone when making calls.
The SAR value of a WLAN router is only a tenth of that of a cell phone, and it drops by a further 80% at a distance of just one meter. The router can be set to switch off when not in use, for example at night. [ 295 ]
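The relative numbers quoted here are simple arithmetic, sketched below with an assumed round-number phone SAR as the baseline.

```python
# Relative-exposure arithmetic for the figures quoted above: a router at one
# tenth of a phone's SAR, falling a further 80% at one meter distance.
# The baseline phone SAR of 1.0 W/kg is an assumed round number.

phone_sar = 1.0                                # W/kg, assumed baseline near the head
router_sar_close = 0.1 * phone_sar             # one tenth of the phone value
router_sar_1m = router_sar_close * (1 - 0.80)  # a further 80% lower at 1 m

print(f"Router near body: {router_sar_close:.2f} W/kg-equivalent")
print(f"Router at 1 m:    {router_sar_1m:.3f} W/kg-equivalent")  # 2% of the phone value
```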
Until now, electrical energy has been transported from the power plant to the consumer almost exclusively via high-voltage lines , in which alternating current flows at a frequency of 50 Hertz . As part of the energy transition , high-voltage direct current (HVDC) transmission systems are also planned in Germany. Since the amendment of the 26th Federal Immission Control Ordinance (BImSchV) in 2013, emissions from HVDC systems are also regulated by law. The limit is set to prevent interference with electronic implants caused by static magnetic fields . No limit has been set for static electric fields.
Mains disconnect switches (demand switches) are available to reduce electric fields and (when current flows) magnetic fields from residential electrical installations. With flush-mounted wiring, only a small part of the electric field escapes from the wall. A mains disconnect switch automatically de-energizes the relevant line as long as no electrical load is switched on; as soon as a load is switched on, the mains voltage is restored. [ 296 ] Mains disconnect switches were introduced in 1973 and have been continuously improved over the decades. [ 297 ] In 1990, for example, it became possible to disconnect the PEN conductor (formerly known as the neutral conductor). [ 298 ] The switches can be installed in several circuits, preferably those supplying bedrooms. However, they only switch off when no continuous consumers such as air conditioners, fans, humidifiers, electric alarm clocks, night lights, standby devices, alarm systems, chargers, and similar devices are switched on. Instead of the mains voltage, a low monitoring voltage (2-12 volts) is applied, which is used to detect when a consumer is switched on.
Rooms can also be shielded with copper wallpaper or special wall paints containing metal, thus applying the Faraday cage principle.
Since about 2005, body scanners have been used primarily at airports for security (passenger) screening. Passive scanners detect the natural radiation emitted by a person's body and use it to locate objects worn or concealed on the body. Active systems also use artificial radiation to improve detection by analyzing the backscatter . A distinction is made between body scanners that use ionizing radiation (usually X-rays) and those that use non-ionizing radiation ( terahertz radiation ).
The integrated components operating in the lower terahertz range emit less than 1 mW (0 dBm), [ 299 ] so no health effects are expected. There are conflicting studies from 2009 on whether genetic damage can be detected as a result of terahertz radiation. [ 300 ] In the U.S., backscatter X-ray scanners make up the majority of devices used. Scientists fear that a future increase in cancer could pose a greater threat to the life and limb of passengers than terrorism itself. [ 301 ] It is not clear to the passenger whether the body scanners used at a particular checkpoint use only terahertz radiation or also X-rays.
According to the Federal Office for Radiation Protection, the few available results from investigations in the frequency range of active whole-body scanners that work with millimeter wave or terahertz radiation do not yet allow a conclusive assessment from a radiation protection perspective (as of 24 May 2017). [ 302 ]
In the vicinity of such scanner installations, where employees or other third parties may be present, the limit of the permissible annual dose for an individual member of the population of one millisievert (1 mSv, including pregnant women and children) is not exceeded, even in the case of permanent presence.
In the case of X-ray scanners for hand luggage, it is not necessary to set up a radiation protection area under § 19 RöV (X-ray Ordinance), as the radiation exposure for passengers during a hand-luggage check does not exceed 0.2 microsievert (μSv), even under unfavorable assumptions. For this reason, employees involved in baggage screening are not considered occupationally exposed to radiation under § 31 RöV and therefore do not have to wear a dosimeter. [ 303 ]
Electromagnetic alternating fields have been used in medicine since 1764, [ 304 ] mainly for heating and increasing blood circulation ( diathermy , short-wave therapy ) to improve wound and bone healing. [ 305 ] The relevant radiation protection is regulated by the Medical Devices Act together with the Medical Devices Operator Ordinance. [ 306 ] The Medical Devices Ordinance came into force in Germany on January 14, 1985. It divided the medical devices known at that time into groups according to their degree of risk to the patient and regulated the handling of medical devices until January 1, 2002, when it was replaced by the Medical Devices Act. When ionizing radiation is used in medicine, the benefit must outweigh the potential risk of tissue damage (justifiable indication). For this reason, radiation protection is of great importance. The design should be optimized according to the ALARA (As Low As Reasonably Achievable) principle as soon as an application is deemed suitable. Since 1996, the European ALARA Network (EAN), founded by the European Commission , has been working on the further implementation of the ALARA principle in radiation protection. [ 307 ]
Discovered around 1800 by the German-British astronomer, engineer and musician Friedrich Wilhelm Herschel (1738-1822), infrared radiation primarily produces heat. If the increase in body temperature and the duration of exposure exceed critical limits, heat damage and even heat stroke can result. Due to the still unsatisfactory data situation and the partly contradictory results, it is not yet possible to give clear recommendations for radiation protection with regard to infrared radiation. However, the findings regarding the acceleration of skin aging by infrared radiation are sufficient to describe the use of infrared radiation against wrinkles as counterproductive. [ 308 ]
In 2011, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) established exposure limit values to protect the skin from burns caused by thermal radiation . The IFA recommends that, in addition to the limit specified in EU Directive 2006/25/EC to protect the skin from burns for exposure times up to 10 seconds, a limit for exposure times between 10 and 1,000 seconds should be applied. In addition, all radiation components in the wavelength range from 380 to 20,000 nm should be considered for comparison with the limit values. [ 309 ]
A leaflet published by the German Radiological Society (DRG) in 1913 was the first systematic approach to radiation protection . [ 310 ] [ 311 ] The physicist and co-founder of the society, Bernhard Walter (1861-1950), was one of the pioneers of radiation protection.
The International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) were established at the Second International Congress of Radiology in Stockholm in 1928. In the same year, the first international radiation protection recommendations were adopted and each country represented was asked to develop a coordinated radiation control program. The United States representative, Lauriston Taylor of the US National Bureau of Standards (NBS), formed the Advisory Committee on X-Ray and Radium Protection, later renamed the National Committee on Radiation Protection and Measurements (NCRP). The NCRP received a Congressional charter in 1964 and continues to develop guidelines to protect individuals and the public from excessive radiation. In the years that followed, numerous other radiation protection organizations were established under almost every US president. [ 312 ]
Individuals in professions such as pilots, nuclear medicine physicians, and nuclear power plant workers are regularly exposed to ionizing radiation. In Germany, over 400,000 workers undergo occupational radiation monitoring to safeguard against the harmful effects of radiation. Approximately 70,000 individuals employed across various industries hold a radiation pass (distinct from an X-ray pass; see below). Individuals who may receive an annual effective dose of more than 1 millisievert during their work are required to undergo radiation protection monitoring. In Germany, the effective dose from natural radiation averages 2.1 millisieverts per year. Radiation dose is measured using dosimeters, and the occupational dose limit is 20 millisieverts per year. [ 313 ] Monitoring also applies to buildings, plant components and (radioactive) substances. These can be exempted from the scope of the Radiation Protection Ordinance by a special administrative act, the so-called exemption (clearance) in radiation protection. To this end, it must be ensured that the resulting radiation exposure for an individual member of the public does not exceed 10 μSv per calendar year and that the resulting collective dose does not exceed 1 person-sievert per year. [ 314 ]
According to § 170 StrlSchG (German Radiation Protection Act), all occupationally exposed persons and holders of radiation passports have required a radiation protection register number (SSR number or SSRN), a unique personal identification number, since December 31, 2018. The SSR number facilitates and improves the allocation and balancing of individual dose values from occupational radiation exposure in the radiation protection register. It replaces the former radiation passport number and is used to monitor dose limits. Companies are obliged to deploy their employees in such a way that the radiation dose to which they are exposed does not exceed the limit of 20 millisieverts per calendar year. In Germany, about 440,000 people were classified as occupationally exposed to radiation in 2016. According to § 145 (1) sentence 1 StrlSchG, "in the case of remediation and other measures to prevent and reduce exposure at radioactively contaminated sites, the person who carries out the measures himself or has them carried out by workers under his supervision must carry out an assessment of the body dose of the workers before starting the measures". Applications for SSR numbers must be submitted to the Federal Office for Radiation Protection ( BfS ) by March 31, 2019 for all employees currently under surveillance. [ 315 ]
The application for the SSR number at the Federal Office and the transmission of the necessary data must be ensured, under § 170 (4) sentence 4 StrlSchG, by the persons responsible under paragraph 1 or under § 145 (1) sentence 1 StrlSchG, or under § 115 (2) or § 153 (1) StrlSchG. The SSR numbers must then be available for further use as part of normal communication with monitoring stations or radiation pass authorities. [ 316 ] The SSR number is derived from the social security number and personal data using non-traceable encryption, and the transmission takes place online. Approximately 420,000 persons are monitored for radiation protection in Germany (as of 2019).
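As an aside, the "non-traceable" derivation described above amounts to a one-way function: a fixed identifier is computed from the social security number and personal data, but the inputs cannot be recovered from it. The sketch below illustrates the general idea with SHA-256; the actual BfS procedure, its exact input fields, and its output format are not specified in the text, so the hash choice, field layout, and truncated output are purely illustrative assumptions.

```python
# Hypothetical illustration of a one-way ("non-traceable") identifier
# derivation, in the spirit described above. The real BfS algorithm is not
# described in this text; SHA-256, the input fields, and the truncated
# output below are illustrative assumptions, not the actual SSR scheme.
import hashlib

def derive_identifier(social_security_number: str, surname: str,
                      birth_date: str) -> str:
    record = "|".join([social_security_number, surname.upper(), birth_date])
    digest = hashlib.sha256(record.encode("utf-8")).hexdigest()
    return digest[:16].upper()  # truncated for readability only

print(derive_identifier("12 160894 X 123", "MUSTERMANN", "1994-08-16"))
```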
Emergency responders (including volunteers) who are not occupationally exposed persons within the meaning of the Radiation Protection Act also require an SSR number retrospectively, i.e. after an operation in which they were exposed to radiation above the limits specified in the Radiation Protection Ordinance, as all relevant exposures must be recorded in the Radiation Protection Register.
Radiation protection areas are spatial areas in which either people can receive certain body doses during their stay or in which a certain local dose rate is exceeded. They are defined in § 36 of the Radiation Protection Ordinance and in §§ 19 and 20 of the X-Ray Ordinance. According to the Radiation Protection Ordinance, radiation protection areas are divided into restricted areas (local dose rate ≥ 3 mSv/hour), control areas (effective dose > 6 mSv/year) and monitoring areas (effective dose > 1 mSv/year), depending on the hazard.
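The tiered definitions above lend themselves to a simple decision rule. The following Python sketch is illustrative only: it encodes just the dose thresholds quoted from the ordinance, not the further legal conditions (such as expected occupancy times) that the full definitions contain.

```python
# Illustrative sketch: classify a location using only the thresholds quoted
# above from the German Radiation Protection Ordinance. The legal definitions
# contain additional conditions that are omitted here.

def classify_area(local_dose_rate_msv_per_h: float,
                  effective_dose_msv_per_year: float) -> str:
    """Return the radiation protection area category for the given doses."""
    if local_dose_rate_msv_per_h >= 3.0:
        return "restricted area"       # Sperrbereich
    if effective_dose_msv_per_year > 6.0:
        return "control area"          # Kontrollbereich
    if effective_dose_msv_per_year > 1.0:
        return "monitoring area"       # Ueberwachungsbereich
    return "no designated radiation protection area"

print(classify_area(0.001, 8.0))  # -> control area
```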
Germany, Austria and Switzerland, among many other countries, have early warning systems in place to protect the population.
The local dose rate measurement network (ODL measurement network) is a measurement system for radioactivity operated by the German Federal Office for Radiation Protection, which determines the local dose rate at the measurement site. [ 317 ]
In Austria, the Radiation Early Warning System is a measurement and reporting system established in the late 1970s to provide early detection of elevated levels of ionizing radiation in the country and to enable the necessary measures to be taken. The readings are automatically sent to the central office at the Ministry, where they can be accessed by the relevant departments, such as the Federal Warning Center or the warning centers of the federal states. [ 318 ]
NADAM (Network for Automatic Dose Alerting and Measurement) is the gamma radiation monitoring network of the Swiss National Emergency Operations Center. The monitoring network is complemented by the MADUK stations (Monitoring Network for Automatic Dose Rate Monitoring in the Environment of Nuclear Power Plants) of the Swiss Federal Nuclear Safety Inspectorate (ENSI).
In 2011-2014, the NERIS-TP project aimed to discuss the lessons learned from the European EURANOS project on nuclear emergency response with all relevant stakeholders . [ 319 ]
The European PREPARE project aims to fill gaps in nuclear and radiological emergency preparedness identified after the Fukushima accident. The project aims to review emergency response concepts for long-lived releases, to address issues of measurement methods and food safety in the case of transboundary contamination, and to fill gaps in decision support systems (source term reconstruction, improved dispersion modeling, consideration of aquatic dispersion pathways in European river systems). [ 320 ]
Environmental radioactivity has been monitored in Germany since the 1950s. Until 1986, this was carried out by various authorities that did not coordinate with each other. Following the confusion during the Chernobyl reactor disaster in April 1986, measurement activities were pooled in the IMIS (Integrated Measurement and Information System) project, an environmental information system for monitoring radioactivity in Germany. [ 321 ] Previously, the measuring equipment was affiliated to the warning offices under the name WADIS ("Warning service information system").
The aim of the CONCERT (European Joint Programme for the Integration of Radiation Protection Research) project is to establish a joint European program for radiation protection research in 2018, based on the current strategic research programs of the European research platforms MELODI (radiation effects and radiation risks), ALLIANCE (radioecology), NERIS (nuclear and radiological emergency response), EURADOS (radiation dosimetry) and EURAMED (medical radiation protection). [ 322 ]
The REWARD (Real time wide area radiation surveillance system) project was established to address the threats of nuclear terrorism, missing radioactive sources, radioactive contamination and nuclear accidents. The consortium developed a mobile system for real-time, wide-area radiation monitoring based on the integration of new miniaturized solid-state sensors. Two sensors are used: a cadmium zinc telluride (CdZnTe) detector for gamma radiation and a high-efficiency neutron detector based on novel silicon technologies. The gamma and neutron detectors are integrated into a single monitoring device called a tag. The sensor unit includes a wireless communication interface to remotely transmit data to a monitoring base station, which also uses a GPS system to calculate the tag's position. [ 323 ]
The Nuclear Emergency Support Team (NEST) is a program of the National Nuclear Security Administration (NNSA) of the United States Department of Energy for all types of nuclear emergencies; it also serves as a counter-terrorism unit that responds to incidents involving radioactive materials or nuclear weapons in US possession abroad. [ 324 ] [ 325 ] It was founded in 1974/75 under US President Gerald Ford and renamed the Nuclear Emergency Support Team in 2002. [ 326 ] [ 327 ] In 1988, a secret agreement from 1976 between the USA and the Federal Republic of Germany became known, which stipulates the deployment of NEST in the Federal Republic. In Germany, a similar unit, the Central Federal Support Group for Serious Cases of Nuclear-Specific Emergency Response ( ZUB ), has existed since 2003. [ 328 ]
As early as 1905, the Frenchman Viktor Hennecart [ 329 ] called for special legislation to regulate the use of X-rays. In England, Sidney Russ (1879-1963) suggested to the British Roentgen Society in 1915 that it should develop its own set of safety standards, which it did in July 1921 with the formation of the British X-Ray and Radium Protection Committee. [ 330 ] In the United States, the American X-Ray Society developed its own guidelines in 1922. In the German Reich, a special committee of the German X-Ray Society under Franz Maximilian Groedel (1881-1951), Hans Liniger (1863-1933) and Heinz Lossen (1893-1967) formulated the first guidelines after the First World War. In 1953, the employers' liability insurance associations issued the accident prevention regulation "Use of X-rays in medical facilities" on the legal basis of § 848a of the Reich Insurance Code (RVO). In the GDR, Occupational Safety and Health Regulation (ASAO) 950 was in effect from 1954 to 1971, when it was replaced by ASAO 980 on April 1, 1971.
The European Atomic Energy Community (EURATOM) was founded on March 25, 1957, by the Treaty of Rome between France, Italy, the Benelux countries and the Federal Republic of Germany, and remains almost unchanged to this day. Chapter 3 of the Euratom Treaty regulates measures to protect the health of the population. Article 35 requires facilities for the continuous monitoring of soil, air and water for radioactivity. As a result, monitoring networks have been set up in all Member States and the data collected is sent to the EU's central database (EURDEP, European Radiological Data Exchange Platform). [ 331 ] The platform is part of the EU's ECURIE system for the exchange of information in the event of radiological emergencies and became operational in 1995. [ 332 ] Switzerland also participates in this information system. [ 333 ] [ 334 ]
In Germany, the first X-ray regulation ( RGBl . I p. 88) was issued in 1941 and originally applied to non-medical companies. The first medical regulations were issued in October 1953 by the Main Association of Industrial Employers' Liability Insurance Associations as accident prevention regulations under the Reich Insurance Code. Basic standards for radiation protection were introduced by directives of the European Atomic Energy Community ( EURATOM ) on February 2, 1959. The Atomic Energy Act of December 23, 1959 is the national legal basis for all radiation protection legislation in the Federal Republic of Germany (West), with the Radiation Protection Ordinance of June 24, 1960 (only for radioactive substances), the Radiation Protection Ordinance of July 18, 1964 (for the medical sector) and the X-ray Ordinance of March 1, 1973. [ 335 ] Radiation protection was formulated in § 1, according to which life, health and property are to be protected from the dangers of nuclear energy and the harmful effects of ionizing radiation, and damage caused by nuclear energy or ionizing radiation is to be compensated. The Radiation Protection Ordinance sets dose limits for the general population and for occupationally exposed persons. In general, any use of ionizing radiation must be justified, and radiation exposure must be kept as low as possible even below the limit values. To this end, physicians, dentists and veterinarians, for example, must provide proof every five years, under § 18a (2) of the X-ray Ordinance in the version dated April 30, 2003, that their specialist knowledge in radiation protection has been updated, and must complete a full-day course with a final examination. Specialist knowledge in radiation protection is also required, under the Technical Knowledge Guideline for the X-ray Ordinance (R3), for persons who work with baggage screening equipment, industrial measuring equipment and interfering emitters. Since 2019, the regulatory areas of the previous X-ray and radiation protection ordinances have been merged in the amended Radiation Protection Ordinance.
The Radiation Protection Commission ( SSK ) was founded in 1974 as an advisory body to the Federal Ministry of the Interior. [ 336 ] It emerged from Commission IV "Radiation Protection" of the German Atomic Energy Commission, which was founded on January 26, 1956. After the Chernobyl nuclear disaster in 1986, the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety was established in the Federal Republic of Germany. The creation of this ministry was primarily a response to the perceived lack of coordination in the political response to the Chernobyl disaster and its aftermath. On December 11, 1986, the German Bundestag passed the Precautionary Radiation Protection Act ( StrVG ) to protect the population, to monitor radioactivity in the environment, and to minimize human exposure to radiation and radioactive contamination of the environment in the event of radioactive accidents or incidents. The last revision of the X-Ray Ordinance was issued on January 8, 1987. As part of a comprehensive modernization of German radiation protection law, [ 337 ] which is largely based on Directive 2013/59/Euratom, [ 338 ] the provisions of the X-Ray Ordinance have been incorporated into the revised Radiation Protection Ordinance.
Among many other measures, contaminated food was withdrawn from the market on a large scale. Parents were strongly advised not to let their children play in sandboxes, and some of the contaminated sand was replaced. In 1989, the Federal Office for Radiation Protection was incorporated into the Ministry of the Environment. On April 30, 2003, a new precautionary radiation protection law was promulgated to implement two EU directives on the health protection of persons against the dangers of ionizing radiation during medical exposure. [ 339 ] [ 340 ] The protection of workers from optical radiation (infrared radiation (IR), visible light (VIS) and ultraviolet radiation (UV)), which falls under the category of non-ionizing radiation, is regulated by the Ordinance on the Protection of Workers from Artificial Optical Radiation of 19 July 2010 . [ 341 ] It is based on EU Directive 2006/25/EC of April 27, 2006. [ 342 ] On March 1, 2010, the Act on the Protection of Humans from Non-Ionizing Radiation ( NiSG ), [ 343 ] BGBl . I p. 2433, came into force; under § 4 NiSG, the use of sunbeds by minors has been prohibited since August 4, 2009.
A new Radiation Protection Act came into force in Germany on October 1, 2017. [ 344 ]
In Germany, a radiation protection officer directs and supervises activities to ensure radiation protection when handling radioactive materials or ionizing radiation. Their duties are described in §§ 31-33 of the Radiation Protection Ordinance (StrlSchV) and §§ 13-15 of the X-Ray Ordinance (RöV). The radiation protection officer is appointed by the radiation protection supervisor (Strahlenschutzverantwortlicher), who is responsible for ensuring that all radiation protection regulations are observed.
Since 2002, the X-ray pass has been a document in which the examining physician or dentist enters information about the X-ray examinations performed on the patient; its main aim was to avoid unnecessary repeat examinations. According to the new Radiation Protection Ordinance ( StrlSchV ), [ 345 ] practices and clinics are no longer obliged to offer their patients X-ray passes or to enter examinations in them. The Radiation Protection Ordinance came into force on December 31, 2018, together with the Radiation Protection Act (StrlSchG) passed in 2017, replacing the previous Radiation Protection Ordinance and the X-ray Ordinance. The Federal Office for Radiation Protection ( BfS ) nevertheless continues to advise patients to keep records of their own radiation diagnostic examinations, and on its website provides a downloadable document that can be used for personal documentation. [ 346 ]
In Switzerland, institutionalized radiation protection began in 1955 with the issuance of guidelines for protection against ionizing radiation in medicine, laboratories, industry and manufacturing plants, although these were only recommendations. The legal basis was created by a new constitutional article (Art. 24), according to which the federal government issues regulations on protection against the dangers of ionizing radiation. On this basis, a corresponding federal law entered into force on July 1, 1960. The first Swiss ordinance on radiation protection entered into force on May 1, 1963. On October 7, 1963, the Federal Department of Home Affairs (EDI) issued a number of decrees to supplement the ordinance.
Around 40 further regulations followed. Owing to a lack of personnel, the monitoring of such facilities took many years. From 1963, dosimeters were to be used for personal protection, but this met with great resistance. It was not until 1989 that an updated radiation protection law was passed, accompanied by radiation protection training for the persons concerned. [ 347 ]
The legal basis for radiation protection in Austria is the Radiation Protection Act (BGBl. 277/69 as amended) of June 11, 1969. [ 348 ] The tasks of radiation protection extend to the fields of medicine, commerce and industry, research, schools, worker protection and food. The General Radiation Protection Ordinance, Federal Law Gazette II No. 191/2006, has been in force since June 1, 2006. [ 349 ] Based on the Radiation Protection Act, it regulates the handling of radiation sources and measures for protection against ionizing radiation. The Optical Radiation Ordinance ( VOPST ) is a detailed ordinance to the Occupational Safety and Health Act ( ASchG ).
On August 1, 2020, a new radiation protection law came into force in Austria, which largely harmonized the radiation protection regulations for artificial radioactive substances and for terrestrial natural radioactive substances; they are now enshrined in the General Radiation Protection Ordinance 2020. Companies that carry out activities involving naturally occurring radioactive substances are now subject to the licensing or notification requirements of Sections 15 to 17 of the Radiation Protection Act 2020, unless an exemption under Sections 7 or 8 of the General Radiation Protection Ordinance 2020 applies. Cement production (including maintenance of clinker kilns), the production of primary iron, and tin, lead and copper smelting fall within its scope. If a company falls within the scope of the General Radiation Protection Ordinance 2020, its owner must commission an officially authorized monitoring body. The mandate includes dose assessment for workers who may be exposed to increased radiation exposure and, if necessary, determination of the activity concentration of residues and of radioactive substances discharged with the air or waste water. [ 350 ] | https://en.wikipedia.org/wiki/History_of_radiation_protection
Arabidopsis thaliana is a first-class model organism and the single most important species for fundamental research in plant molecular genetics .
A. thaliana was the first plant for which a high-quality reference genome sequence was determined and a worldwide research community has developed many other genetic resources and tools.
The experimental advantages of A. thaliana have enabled many important discoveries. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] These advantages have been extensively reviewed, [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] as has its role in fundamental discoveries about the plant immune system, [ 22 ] natural variation, [ 23 ] [ 24 ] root biology, [ 25 ] and other areas. [ 26 ]
A. thaliana was first described by Johannes Thal, and later renamed in his honor. [ 24 ] (See the Taxonomy section of the main article.) Friedrich Laibach outlined why A. thaliana might be a good experimental system in 1943 [ 27 ] and collected a large number of natural accessions. [ 6 ] [ 12 ] [ 13 ] [ 24 ] A. thaliana is largely self-pollinating , so these accessions represent inbred strains with high homozygosity, which simplifies genetic analysis. Natural A. thaliana accessions are often referred to as " ecotypes ".
Laibach had earlier (1907) determined the A. thaliana chromosome number (5) as part of his PhD research. [ 28 ] Laibach's student Erna Reinholz described mutagenesis of A. thaliana with X-ray radiation in 1945. [ 29 ]
George Rédei pioneered the use of A. thaliana for fundamental studies, mutagenizing plants with ethyl methanesulfonate (EMS) and screening them for auxotrophic defects, [ 5 ] and wrote an influential review in 1975. [ 6 ] Rédei distributed the standard laboratory accessions 'Columbia-0' and 'Landsberg erecta' . [ 8 ] [ 18 ]
Gerhard Röbbelen organized the first International Arabidopsis Symposium in 1965. [ 13 ] Röbbelen also started the 'Arabidopsis Information Service', a newsletter for sharing information in the community. [ 30 ] This newsletter was maintained by A.R. Kranz starting in 1974, and was published until 1990. [ 13 ]
As molecular biology methods progressed, many investigators sought to focus community effort on a common model plant species such as petunia or tomato . [ 12 ] [ 13 ] This marked a change of emphasis from the long tradition of research on diverse agronomically important species such as maize , barley , and peas . [ 13 ] The A. thaliana subcommunity espoused an ethos of freely sharing information and materials, and investigators were attracted by the perceived wide-open nature of plant molecular genetics relative to other fields that were better established and thus more "crowded" and competitive. [ 15 ] The A. thaliana genome was shown to be relatively small and nonrepetitive, [ 31 ] [ 32 ] [ 33 ] an important advantage for early molecular methods. [ 13 ] Pioneering A. thaliana studies have used its natural filamentous pathogen Hyaloperonospora arabidopsidis , the model plant-pathogenic bacterium Pseudomonas syringae , and many other microbes. [ 22 ] A. thaliana roots are transparent and have a relatively simple radially symmetric cellular structure, facilitating analysis by microscopy. [ 34 ]
Cloning of an A. thaliana gene, an alcohol dehydrogenase -encoding locus, was described in 1986, [ 35 ] by which time mutations at over 200 loci had been defined. [ 7 ]
Development of genetic maps based on scorable phenotypes [ 36 ] and molecular genetic markers facilitated map-based cloning of mutant loci from classical " forward genetic " screens. [ 13 ] [ 14 ] [ 17 ] Growing amounts of DNA sequence data facilitated the development and application of such molecular markers. [ 37 ] [ 38 ] Descriptions of the first successful map-based cloning projects were published in 1992. [ 39 ] [ 40 ] Recombinant inbred strain/line (RIL) populations were developed, notably from a cross of Columbia-0 × Landsberg erecta , [ 41 ] and used to map and clone a wide variety of quantitative trait loci .
A. thaliana can be genetically transformed using Agrobacterium tumefaciens ; transformation was first reported in 1986. [ 42 ] Later work showed that transgenic seed can be obtained by simply dipping flowers into a suitable bacterial suspension. The invention/discovery of this 'floral dip' method, published in 1998, [ 43 ] made A. thaliana arguably the most easily transformed multicellular organism, and has been essential to many subsequent investigations. [ 13 ] Efficient transformation facilitated insertional mutagenesis [ 44 ] as described further below.
A. thaliana geneticists made important contributions to the development of the ABC model of flower development via genetic analysis of floral homeotic mutants . [ 45 ] [ 46 ] [ 47 ] [ 48 ]
The plant homeodomain (PHD) finger is so named because it was discovered in an Arabidopsis homeodomain protein: in 1993, Schindler et al. identified the PHD finger in the protein HAT3.1 . [ 49 ] It has since proven to be important to chromatin regulation in a wide variety of taxa. [ 50 ]
KNOTTED-like homeobox genes, homologs of the maize KNOTTED1 gene that control shoot apical meristem identity, were described in 1994, [ 51 ] and cloning of the SHOOT-MERISTEMLESS locus was published in 1996. [ 52 ]
An international consortium began developing a physical map for A. thaliana in 1990, and DNA sequencing and assembly efforts were formalized in the Arabidopsis Genome Initiative (AGI) in 1996. [ 4 ] [ 10 ] This work paralleled the Human Genome Project and related projects for other model organisms, including the budding yeast S. cerevisiae , the nematode C. elegans , and the fly Drosophila melanogaster , which were published in 1996, [ 53 ] 1998, [ 54 ] and 2000, [ 55 ] respectively.
The project built on efforts to sequence expressed sequence tags from A. thaliana . [ 56 ] [ 57 ] Descriptions of the sequences of chromosomes 4 and 2 were published in 1999, [ 58 ] [ 59 ] and the project was completed in 2000. [ 60 ] [ 61 ] [ 62 ] [ 63 ] This represented the first reference genome for a flowering plant and facilitated comparative genomics .
A series of meetings led to an ambitious long-term NSF -funded initiative to determine the function of every A. thaliana gene by the year 2010. [ 64 ] [ 65 ] The rationale for this project was to combine new high-throughput technologies with systematic gene-family-wide studies and community resources to accelerate progress beyond what was possible via piecemeal single-laboratory studies.
DNA microarray technology was rapidly adopted for A. thaliana research and led to the development of "atlases" of gene expression in different tissues and under different conditions.
The A. thaliana genome sequence, low-cost Sanger sequencing , and ease of transformation facilitated genome-wide mutagenesis, yielding collections of sequence-indexed transposon mutant and (especially) T-DNA mutant lines. [ 66 ] [ 67 ] The ease and speed of ordering mutant seed from stock centers dramatically accelerated "reverse genetic" study of many gene families; the Arabidopsis Biological Resource Center and the Nottingham Arabidopsis Stock Centre were important in this regard, and information on stock availability was integrated into The Arabidopsis Information Resource database. [ 26 ]
Syngenta developed and publicly shared a significant T-DNA mutant population, the Syngenta Arabidopsis Insertion Library (SAIL) collection. Industry investment in A. thaliana research suffered a setback with the closure of Syngenta's Torrey Mesa Research Institute (TMRI), [ 68 ] but remained robust. Mendel Biotechnology overexpressed the vast majority of A. thaliana transcription factors to generate leads for genetic engineering. Cereon Genomics, a subsidiary of Monsanto , sequenced the Landsberg erecta accession (at lower coverage than the Col-0 project) and shared the assembly, along with other sequence marker data. [ 38 ] [ 69 ] [ 70 ]
A. thaliana quickly became an important model for the study of plant small RNAs .
The argonaute1 mutant, named for its resemblance to an Argonauta octopus, [ 71 ] was the namesake for the Argonaute protein family central to RNA silencing. [ 16 ] Forward genetic screens focused on vegetative phase change uncovered many genes controlling small RNA biogenesis.
Multiple groups identified mutations in the DICER-LIKE1 gene (encoding the main DICER protein controlling microRNA biogenesis in plants) that cause strong developmental defects. [ 72 ] A. thaliana became an important model for RNA-directed DNA methylation (transcriptional silencing), partly because many A. thaliana methylation mutants are viable, which is not the case for several model animals (in which such mutations cause lethality). [ 16 ]
As the NSF 2010 project neared completion, there was a perceived decrease in funding agency interest in A. thaliana , evidenced by the cessation of USDA funding for A. thaliana research [ citation needed ] and the end of NSF funding for the TAIR database. [ 73 ] This trend coincided with the progress of the (US NSF-supported) National Plant Genome Initiative, which began in 1998 and put an increased emphasis on crops.
Draft genome sequences for rice were published in 2002, [ 74 ] [ 75 ] followed by publications for sorghum [ 76 ] and maize [ 77 ] in 2009.
A draft genome of the model tree Populus trichocarpa was published in 2006. [ 78 ] The draft genome of Brachypodium distachyon , a short-statured model grass ( Poaceae ), was published in 2010. [ 79 ] The Joint Genome Institute of the United States Department of Energy identified poplar, sorghum, B. distachyon , the model C4 grass Setaria viridis (foxtail millet), the model moss Physcomitrella patens , the model alga Chlamydomonas reinhardtii , and soybean as its "flagship" species for plant genomics geared towards bioenergy applications. [ 80 ]
Well-established investigators including Ronald W. Davis , Gerald Fink , and Frederick M. Ausubel adopted A. thaliana as a model in the 1980s, attracting interest. [ 81 ] [ 9 ]
Elliot Meyerowitz and Chris R. Somerville were awarded the Balzan Prize in 2006 for their work developing A. thaliana as a model. [ 82 ] Thirteen prominent American A. thaliana geneticists were selected in 2011 as investigators of the prestigious Howard Hughes Medical Institute and the Gordon and Betty Moore Foundation: [ 83 ] [ 84 ] Philip Benfey , Dominique Bergmann , Simon Chan, Xuemei Chen , Jeff Dangl , Xinnian Dong , Joseph R. Ecker , Mark Estelle , Sheng Yang He , Robert A. Martienssen , Elliot Meyerowitz , Craig Pikaard , and Keiko Torii . (Also selected were wheat geneticist Jorge Dubcovsky and photosynthesis researcher Krishna Niyogi , who has extensively used A. thaliana along with the alga Chlamydomonas reinhardtii . [ 85 ] )
Prior to this, a handful of A. thaliana geneticists had become HHMI investigators: Joanne Chory (1997, [ 86 ] also awarded a 2018 Breakthrough Prize in Life Sciences [ 87 ] ), Daphne Preuss (2000-2006), [ 88 ] and Steve Jacobsen (2005). [ 89 ] Caroline Dean was awarded many honors, including the 2020 Wolf Prize in Agriculture for "pioneering discoveries in flowering time control and epigenetic basis of vernalization" made with A. thaliana . [ 90 ]
A. thaliana continues to be the subject of intense study using new technologies such as high-throughput sequencing.
Direct sequencing of cDNA (" RNA-Seq ") largely replaced microarray analysis of gene expression, and several studies sequenced cDNA from single cells ( scRNA-seq ), particularly from root tissue. [ 25 ] Mapping of mutations from forward screens is increasingly done with direct genome sequencing, combined in some cases with bulked segregant analysis or backcrossing . [ 91 ] A. thaliana is a premier model for studies of the plant microbiome and natural genetic variation, [ 16 ] [ 23 ] [ 24 ] including genome-wide association studies .
Short RNA-guided DNA editing with CRISPR tools has been applied to A. thaliana since 2013. [ 92 ] | https://en.wikipedia.org/wiki/History_of_research_on_Arabidopsis_thaliana |
Middlewich , a town in northwest England, lies at the confluence of three rivers – the Dane , the Croco and the Wheelock . Most importantly for the history of salt making, it also lies on the site of a prehistoric brine spring.
Following the Roman invasion, Middlewich was named Salinae on account of the salt deposits around it, as it was one of their major sites of salt production. [ 1 ] During this time the Romans built a fort at Harbutts Field (SJ70216696), to the north of the town. [ 2 ] [ 3 ] Recent excavations to the south of the fort have found evidence of further Roman activity, [ 4 ] [ 5 ] including a well and part of a preserved Roman road . [ 1 ]
Salt manufacture has remained the principal industry for the past 2,000 years. Salt making is mentioned in the Domesday Book , and by the 13th century there were approximately 100 " wich houses" packed around the town's two brine pits. [ 6 ] By 1908 there were nine industrial-scale salt manufacturers in the town, with a number of open-pan salt works close to the canal; however, salt manufacture in Middlewich is now concentrated in one manufacturer, British Salt . The salt is sold as the Saxa brand by RHM , and by others, e.g. as supermarket own brands. Salt produced by British Salt in Middlewich has 57% of the UK market for salt used in cooking. [ 7 ]
"From thence runneth Wever down by Nantwich , not far from Middlewich, and so to Northwich . These are very famous Salt-Witches, five or six miles distant, where brine or salt water is drawn out of pits, which they pour not upon wood while it burneth as the ancient Gauls and Germans were wont to do, but boil it over a fire to make salt thereof. Neither doubt I that these were known unto the Romans, and that from hence was usually paid the Custom of Salt, called Salarium .
"For, there went a notable highway from Middlewich to Northwich, raised with gravel to such a height, that a man may easily acknowledge that it was the work of the Romans, seeing that all this country over, gravel is so scare: and from then at this day it is carried to private men’s uses. Mather Paris writeth that Henry III stopped up these salt-pits when in hostile manner he wasted this shire; because the Welshmen, so tumultuous in those days should not have any victuals or provisions from thence. But when the fair beams of peace began once more to shine out, they were opened again.
"Then runneth the Dane under Kinderton , the old seat of the ancient race of the Venebles; who, ever since the first coming of the Normans have been commonly called the Barons of Kinderton. Beneath this southwards, the little river croco, runnerth also into the Dan...Croke, the river aforesaid being past Brereton, within a little while visiteth Middlewich, very near unto his confluence with Dan, where there be two wells of salt water, parted one from the other by a small brook: Sheathes they call them. The one stands not open, but at certain set times, because folk willingly steal the water thereof, as being of greater virtue and efficacy. From thence runneth Dan to Bostoke, in time past Botestock, the ancient sear of the family of the Bostokes, Knights . Out of this ancient house of the Bostoks, as out of a stock, sprung a goodly number of the same name, in Cheshire, Shropshire, Berkshire and elsewhere."
The Croxton Works were located on the Trent and Mersey canal, approximately halfway between the Big Lock and the Croxton Lane Bridge at SJ699669. The works were established by the Dairy and Domestic Salt Company, probably in 1892. It was taken over by Henry Seddon before 1905 and worked until closed by subsidence in the 1920s. Until the early 1990s a derelict canal-side warehouse still existed on the site; however, this has now been demolished. All that remains now is a canal-side flash (proposed as the site of a Middlewich Marina in the 1970s) and the foundations of the warehouse. Both the flash and warehouse foundations are now overgrown and hardly visible.
It is likely that this was the only saltworks next to the Roman fort on Harbutt's Field. Salt making sites in Cheshire [ 8 ] places this site at SJ703668; however, the 1882 Ordnance Survey map places the salt pans at approximately SJ7032266605, whilst Middlewich 900–1900 [ 9 ] mentions the salt workings being yards away from the stone houses off King Street (i.e. the location given in the 1882 OS map).
It is likely that this was the salt works of the Baron of Kinderton, Peter Venables, in 1671, and it is listed in documents of 1682 as producing a weekly output of 2,210 bushels of salt from its seven pans. By the mid-eighteenth century this was the only saltworks on the Kinderton side of the River Croco. In the mid-nineteenth century Ralph Seddon owned the works, and on his death it was sold to the Salt Union in 1888. Sometime between 1888 and 1919 the site was dismantled; however, a capped-off shaft which once formed part of the works could be seen from the path running from King Street to the Big Lock until the new housing estate was built.
In around 1913 the Pepper Street salt works were rebuilt by Henry Seddon. Following a merger between Seddon and Sons and Cerebos in the late 1950s, the open pans at Pepper Street and those at the Cerebos site on Booth Lane were worked together as a single department, before being closed in 1968–1970. The Pepper Street works were demolished in the mid 1970s (at around the same time as the gas works on the opposite side of the canal), and the site is now a housing estate.
In the 14th century the area around the current Wych House Lane was occupied by many salt houses. [ 9 ] In 1892 a new salt works was established to the north of Wych House Lane, owned by the Dairy and Domestic Company. [ 8 ] In common with many of the works this was taken over by Henry Seddon in the early 1900s. The works continued to be used until around 1969, and were used by the town council as a depot until the 1980s. [ 8 ] The land is currently a green field running down to the canal.
Salt making sites in Cheshire [ 8 ] locates Newtons Salt Works at the same site as Wych House Lane Salt works (SJ705662). However the 1898 map places Newtons Salt Works to the south side of Wych House Lane, at approximately SJ706661.
Aman's Salt Works was opened on Brooks Lane shortly after the discovery in 1889 of rock salt and brine at the adjacent Murgatroyd's site, with the earliest entry in the accounts book being 16 November 1892. [ 8 ] The location of Aman's Works, between the Trent and Mersey canal and the railway branch line between Sandbach and Northwich , is indicative of the move from canals to railways for transport during the nineteenth century. To this end the works had its own siding and platform for loading trains adjacent to the branch line (around 300 metres from Middlewich railway station ).
The only remaining salt works in Middlewich is the British Salt works at Cledford (SJ716644). This salt works obtains its brine from Warmingham nearby, rather than Middlewich. [ 8 ] Salt from this works is sold by RHM under the Saxa brand.
All of the old town-centre salt works are now closed. Because of the aggressive chemicals that were handled, salt houses were 'temporary' structures, and so, unlike the mills constructed in the mill towns of Lancashire , they were unsuitable for conversion to other uses. Consequently, almost all the structures of the old salt works have been pulled down and the land has been put to other uses. | https://en.wikipedia.org/wiki/History_of_salt_in_Middlewich
The history of science and technology ( HST ) is a field of history that examines the development of the understanding of the natural world (science) and humans' ability to manipulate it ( technology ) at different points in time. This academic discipline also examines the cultural, economic, and political context and impacts of scientific practices; it likewise may study the consequences of new technologies on existing scientific fields.
History of science is an academic discipline with an international community of specialists. Main professional organizations for this field include the History of Science Society , the British Society for the History of Science , and the European Society for the History of Science.
Much of the study of the history of science has been devoted to answering questions about what science is , how it functions , and whether it exhibits large-scale patterns and trends. [ 1 ]
Histories of science were originally written by practicing and retired scientists, [ 2 ] starting primarily with William Whewell's History of the Inductive Sciences (1837), as a way to communicate the virtues of science to the public. [ 3 ]
Auguste Comte proposed that there should be a specific discipline to deal with the history of science. [ 4 ]
The development of the distinct academic discipline of the history of science and technology did not occur until the early 20th century. [ citation needed ] Historians have suggested that this was bound to the changing role of science during the same time period. [ citation needed ]
After World War I , extensive resources were put into teaching and researching the discipline, with the hope that it would help the public better understand both science and technology as they came to play an exceedingly prominent role in the world. [ citation needed ]
In the decades since the end of World War II , history of science became an academic discipline, with graduate schools , research institutes , public and private patronage, peer-reviewed journals , and professional societies. [ citation needed ]
In the United States , a more formal study of the history of science as an independent discipline was initiated by George Sarton 's publications, Introduction to the History of Science (1927) and the journal Isis (founded in 1912). [ citation needed ] Sarton exemplified the early 20th-century view of the history of science as the history of great men and great ideas. [ citation needed ] He shared with many of his contemporaries a Whiggish belief in history as a record of the advances and delays in the march of progress. [ citation needed ]
The study of the history of science continued to be a small effort until the rise of Big Science after World War II. [ 5 ] With the work of I. Bernard Cohen at Harvard University , the history of science began to become an established subdiscipline of history in the United States. [ 6 ]
In the United States, the influential bureaucrat Vannevar Bush , and the president of Harvard, James Conant , both encouraged the study of the history of science as a way of improving general knowledge about how science worked, and why it was essential to maintain a large scientific workforce. [ citation needed ]
History of science and technology is a well-developed field in India. At least three generations of scholars can be identified.
The first generation includes D.D.Kosambi, Dharmpal, Debiprasad Chattopadhyay and Rahman. The second generation mainly consists of Ashis Nandy , Deepak Kumar , Dhruv Raina , S. Irfan Habib , Shiv Visvanathan , Gyan Prakash , Stan Lourdswamy, V.V. Krishna, Itty Abraham , Richard Grove, Kavita Philip, Mira Nanda and Rob Anderson. There is an emergent third generation that includes scholars like Abha Sur and Jahnavi Phalkey. [ 16 ]
Departments and Programmes
The National Institute of Science, Technology and Development Studies had a research group active in the 1990s which consolidated social history of science as a field of research in India.
Currently there are several institutes and university departments offering HST programmes.
In this period, the history of science was not yet a recognized subfield of American history, and most of the work was carried out by interested scientists and physicians rather than professional historians. [ 40 ]
See also the list of George Sarton medalists .
Historiography of science
History of science as a discipline | https://en.wikipedia.org/wiki/History_of_science_and_technology |
Software is a set of programmed instructions stored in the memory of stored-program digital computers for execution by the processor. Software is a recent development in human history and is fundamental to the Information Age .
Ada Lovelace 's programs for Charles Babbage 's analytical engine in the 19th century are often considered the origin of the discipline. However, these efforts remained theoretical only, as the technology of Lovelace and Babbage's day proved insufficient to build the engine. Alan Turing is credited with being the first person to propose a theory for software, in his 1936 paper, which led to the two academic fields of computer science and software engineering .
The first generation of software for early stored-program digital computers in the late 1940s had its instructions written directly in binary code , generally for mainframe computers . Later, the development of modern programming languages alongside the advancement of the home computer would greatly widen the scope and breadth of available software, beginning with assembly language , and continuing through functional programming and object-oriented programming paradigms.
Computing as a concept goes back to ancient times, with devices such as the abacus , the Antikythera mechanism , astrolabes , mechanical astronomical clocks and mechanical calculators . [ 1 ] The Antikythera mechanism is an example of a highly complex ancient mechanical astronomical device. [ 2 ]
However, these devices were pure hardware and had no software: their computing power was directly tied to their specific form and engineering. Software requires the concept of a general-purpose processor - what is now described as a Turing machine - as well as computer memory in which reusable sets of routines and mathematical functions comprising programs can be stored, started, and stopped individually; it therefore appears only recently in human history.
The first known computer algorithm was written by Ada Lovelace in the 19th century for the analytical engine , to compute Bernoulli numbers ; it appeared in her notes to a translation of Luigi Menabrea 's paper on the engine. [ 3 ] However, this remained theoretical only - the lesser state of engineering in the lifetimes of these two mathematicians proved insufficient [ citation needed ] to construct the analytical engine.
The first modern theory of software was proposed by Alan Turing in his 1936 essay On Computable Numbers, with an Application to the Entscheidungsproblem (decision problem). [ 4 ]
This eventually led to the creation of the twin academic fields of computer science and software engineering , which both study software and its creation. Computer science is more theoretical (Turing's essay is an example of computer science), whereas software engineering is focused on more practical concerns.
However, prior to 1946, software as we now understand it – programs stored in the memory of stored-program digital computers – did not yet exist. The very first electronic computing devices were instead rewired in order to "reprogram" them. The ENIAC , one of the first electronic computers, was programmed largely by women who had been previously working as human computers . [ 5 ] [ 6 ] Engineers would give the programmers blueprints of the ENIAC wiring and expected them to figure out how to program the machine. [ 7 ] The women who worked as programmers prepped the ENIAC for its first public reveal, wiring the patch panels together for the demonstrations. [ 8 ] [ 9 ] [ 10 ] Kathleen Booth developed assembly language in 1950 to make it easier to program the computers she worked on at Birkbeck College . [ 11 ]
Grace Hopper worked as one of the first programmers of the Harvard Mark I . [ 12 ] She later created a 500-page manual for the computer. [ 13 ] Hopper is often falsely credited with coining the terms "bug" and " debugging ", when she found a moth in the Mark II, causing a malfunction; [ 14 ] however, the term was in fact already in use when she found the moth. [ 14 ] Hopper developed the first compiler and brought her idea from working on the Mark computers to working on UNIVAC in the 1950s. [ 15 ] Hopper also developed the programming language FLOW-MATIC to program the UNIVAC. [ 14 ] Frances E. Holberton , also working at UNIVAC, developed a code [ clarification needed ] , C-10, which let programmers use keyboard inputs and created the Sort-Merge Generator in 1951. [ 16 ] [ 17 ] Adele Mildred Koss and Hopper also created the precursor to a report generator . [ 16 ]
In his manuscript "A Mathematical Theory of Communication", Claude Shannon (1916–2001) provided an outline for how binary logic could be implemented to program a computer. Subsequently, the first computer programmers used binary code to instruct computers to perform various tasks. Nevertheless, the process was very arduous. Computer programmers had to provide long strings of binary code to tell the computer what kind of data it should store. Code and data had to be loaded onto computers using various tedious mechanisms, including flicking switches or punching holes at predefined positions in cards and loading these punched cards into a computer. With such methods, if a mistake was made, the whole program might have to be loaded again from the beginning.
The very first time a stored-program computer held a piece of software in electronic memory and executed it successfully was at 11 a.m. on 21 June 1948, at the University of Manchester, on the Manchester Baby computer. The program was written by Tom Kilburn , and calculated the highest proper factor of the integer 2^18 = 262,144. Starting with a large trial divisor, it performed a division of 262,144 by repeated subtraction and then checked if the remainder was zero. If not, it decremented the trial divisor by one and repeated the process. Google released a tribute to the Manchester Baby, celebrating it as the "birth of software".
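The logic of that first program is easy to restate in a modern language. The sketch below is a Python illustration of the repeated-subtraction approach just described, not a transcription of Kilburn's original machine code; on the Baby the full run reportedly took about 52 minutes.

```python
# Python illustration of the first stored program's algorithm (Manchester
# Baby, 21 June 1948). The Baby had no division instruction, so it tested
# divisibility by repeated subtraction, decrementing the trial divisor
# until the remainder came out to zero.

def highest_proper_factor(n: int) -> int:
    divisor = n - 1                  # start with the largest trial divisor
    while True:
        remainder = n
        while remainder >= divisor:  # "divide" by repeated subtraction
            remainder -= divisor
        if remainder == 0:           # exact division: divisor is a factor
            return divisor
        divisor -= 1                 # otherwise try the next smaller value

print(highest_proper_factor(2 ** 18))  # -> 131072
```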
FORTRAN was developed by a team led by John Backus at IBM in the 1950s. The first compiler was released in 1957. The language proved so popular for scientific and technical computing that by 1963 all major manufacturers had implemented or announced FORTRAN for their computers. [ 18 ] [ 19 ]
COBOL was first conceived of when Mary K. Hawes convened a meeting (which included Grace Hopper ) in 1959 to discuss how to create a computer language to be shared between businesses. [ 16 ] Hopper's innovation with COBOL was developing a new symbolic way to write programming. [ 13 ] Her programming was self-documenting. [ 20 ] Betty Holberton helped edit the language which was submitted to the Government Printing Office in 1960. [ 21 ] FORMAC was developed by Jean E. Sammet in the 1960s. [ 21 ] Her book, Programming Languages: History and Fundamentals (1969), became an influential text. [ 21 ] [ 22 ]
The Apollo Mission to the moon depended on software to program the computers in the landing modules. [ 23 ] [ 24 ] The computers were programmed with a language called "Basic" (no relation to the BASIC programming language developed at Dartmouth at about the same time). [ 25 ] The software also had an interpreter which was made up of a series of routines and an executive (like a modern-day operating system ), which specified which programs to run and when. [ 25 ] Both were designed by Hal Laning . [ 25 ] Margaret Hamilton , who had previously been involved with software reliability issues when working on the US SAGE air defense system, was also part of the Apollo software team. [ 23 ] [ 26 ] Hamilton was in charge of the onboard flight software for the Apollo computers. [ 23 ] Hamilton felt that software operations were not just part of the machine, but also intricately involved with the people who operated the software. [ 25 ] Hamilton also coined the term " software engineering " while she was working at NASA. [ 27 ]
The actual "software" for the computers in the Apollo missions was made up of wires that were threaded through magnetic cores. [ 28 ] Where the wire went through a magnetic core, that represented a "1" and where the wire went around the core, that represented a "0." [ 28 ] Each core stored 64 bits of information. [ 28 ] Hamilton and others would create the software by punching holes in punch cards, which were then later processed on a Honeywell mainframe where the software could be simulated. [ 23 ] When the code was "solid," then it was sent to be woven into the magnetic cores at Raytheon , where women known as "Little Old Ladies" worked on the wires. [ 23 ] The program itself was "indestructible" and could even withstand lightning strikes, which happened to Apollo 12 . [ 28 ] Wiring the computers took several weeks to do, freezing software development during that time. [ 29 ]
While using the simulators to test the programming, Hamilton discovered ways that code could produce dangerous errors when human mistakes were made while using it. [ 23 ] NASA believed that the astronauts would not make mistakes due to their training. [ 30 ] Hamilton was not allowed to program code to prevent errors that would lead to system crash, so she annotated the code in the program documentation. [ 23 ] Her idea to add error-checking code was rejected as "excessive." [ 23 ] However, exactly what Hamilton predicted would happen occurred on the Apollo 8 flight, when human error caused the computer to wipe out all of the navigational data. [ 23 ]
Later, software was sold to multiple customers by being bundled with the hardware by original equipment manufacturers (OEMs) such as Data General , Digital Equipment and IBM. When a customer bought a minicomputer , at that time the smallest computer on the market, it did not come with pre-installed software ; the software had to be installed by engineers employed by the OEM. [ citation needed ]
This bundling attracted the attention of US antitrust regulators, who sued IBM for improper "tying" in 1969, alleging that it was an antitrust violation that customers who wanted to obtain its software had to also buy or lease its hardware in order to do so. However, the case was dropped by the US Justice Department, after many years of attrition, as it concluded it was "without merit". [ 31 ]
Data General also encountered legal problems related to bundling – although in this case, it was due to a civil suit from a would-be competitor. When Data General introduced the Data General Nova , a company called Digidyne wanted to use its RDOS operating system on its own hardware clone . Data General refused to license their software and claimed their "bundling rights". The US Supreme Court set a precedent called Digidyne v. Data General in 1985 by letting a 9th circuit appeal court decision on the case stand, and Data General was eventually forced into licensing the operating system because it was ruled that restricting the license to only DG hardware was an illegal tying arrangement . [ 32 ] Even though the District Court noted that "no reasonable juror could find that within this large and dynamic market with much larger competitors", Data General "had the market power to restrain trade through an illegal tie-in arrangement", the tying of the operating system to the hardware was ruled as per se illegal on appeal. [ 33 ]
In 2008, Psystar Corporation was sued by Apple Inc. for distributing unauthorized Macintosh clones with OS X preinstalled, and Psystar countersued. One of the arguments in the countersuit - citing the Data General case - was that Apple dominated the market for OS X compatible computers by illegally tying the operating system to Apple computers. District Court Judge William Alsup rejected this argument, saying, as the District Court had ruled in the Data General case over 20 years prior, that the relevant market was not simply one operating system (Mac OS) but all PC operating systems, including Mac OS, and noting that Mac OS did not enjoy a dominant position in that broader market. Alsup's judgement also noted that the surprising Data General precedent, under which the tying of copyrighted products was always illegal, had since been "implicitly overruled" by the verdict in the Illinois Tool Works Inc. v. Independent Ink, Inc. case. [ 34 ]
An industry producing independently packaged software - software that was neither produced as a "one-off" for an individual customer, nor "bundled" with computer hardware - started to develop in the late 1960s. [ 35 ]
Unix was an early operating system which became popular and very influential, and still exists today. The most popular variant of Unix today is macOS (previously called OS X and Mac OS X), while Linux is closely related to Unix.
In January 1975, Micro Instrumentation and Telemetry Systems began selling its Altair 8800 microcomputer kit by mail order. Microsoft released its first product Altair BASIC later that year, and hobbyists began developing programs to run on these kits. Tiny BASIC was published as a type-in program in Dr. Dobb's Journal , and developed collaboratively.
In 1976, for instance, Peter R. Jennings created his Microchess program for MOS Technology 's KIM-1 kit; since the kit did not come with a tape drive, he would send the source code in a little booklet to his mail-order customers, and they would have to type the whole program in by hand. In 1978, Kathe and Dan Spracklen released the source of their Sargon (chess) program in a computer magazine. Jennings later switched to selling paper tape, and eventually compact cassettes with the program on them.
It was an inconvenient and slow process to type in source code from a computer magazine, and a single mistyped – or worse, misprinted – character could render the program inoperable, yet people still did so. ( Optical character recognition technology, which could theoretically have been used to scan in the listings rather than transcribe them by hand, was not yet in wide use.)
Even with the spread of cartridges and cassette tapes in the 1980s for distribution of commercial software, free programs (such as simple educational programs for the purpose of teaching programming techniques) were still often printed, because it was cheaper than making and attaching cassette tapes to magazines.
However, a combination of four factors eventually brought this practice of printing complete source code listings of entire programs in computer magazines to an end.
Very quickly, commercial software started to be pirated , and commercial software producers were very unhappy about this. Bill Gates , cofounder of Microsoft , was an early moraliser against software piracy with his famous Open Letter to Hobbyists in 1976. [ 36 ]
Before the microcomputer, a successful software program typically sold up to 1,000 units at $50,000–60,000 each. By the mid-1980s, personal computer software sold thousands of copies for $50–700 each. Companies like Microsoft, MicroPro , and Lotus Development had tens of millions of dollars in annual sales. [ 37 ] They similarly dominated the European market with localized versions of already successful products. [ 38 ]
A pivotal moment in computing history was the publication in 1981, under IBM employee Philip Don Estridge , of the specifications for the IBM Personal Computer , which quickly led to the dominance of the PC in the worldwide desktop and later laptop markets – a dominance which continues to this day. Microsoft, by successfully negotiating with IBM to develop the first operating system for the PC ( MS-DOS ), profited enormously from the PC's success over the following decades, via the success of MS-DOS and its add-on-cum-successor, Microsoft Windows . Winning that negotiation proved a defining moment in Microsoft's history.
Free and Open Source Software (FOSS) refers to software that is both freely available for use and distributed under licenses that grant users the freedom to access, modify, and share the software's source code. This approach contrasts with proprietary software, where the source code is typically closed and usage is restricted by licensing agreements. FOSS promotes collaboration and transparency, enabling developers and users worldwide to contribute to the software's improvement, tailor it to their needs, and share enhancements without legal or financial barriers. Popular examples of FOSS include operating systems like Linux, web browsers like Mozilla Firefox, and programming languages like Python. The philosophy behind FOSS not only drives technological innovation but also fosters a global community committed to creating accessible and adaptable software for diverse needs.
Applications for mobile devices (cellphones and tablets) have been termed "apps" in recent years. Apple chose to funnel iPhone and iPad app sales through its App Store , and thus both vets apps and takes a cut of every paid app sold. Apple does not allow apps which could be used to circumvent its App Store (e.g. virtual machines such as the Java or Flash virtual machines).
The Android platform, by contrast, has multiple app stores available for it, and users can generally select which to use (although Google Play requires a compatible or rooted device).
This move was replicated for desktop operating systems with GNOME Software (for Linux), the Mac App Store (for macOS), and the Windows Store (for Windows). All of these platforms remain, as they have always been, non-exclusive: they allow applications to be installed from outside the app store, and indeed from other app stores.
The explosive rise in popularity of apps, for the iPhone in particular but also for Android, led to a kind of " gold rush ", with some hopeful programmers dedicating a significant amount of time to creating apps in the hope of striking it rich. As in real gold rushes, not all of these hopeful entrepreneurs were successful.
The development of curricula in computer science has resulted in improvements in software development.
As more and more programs enter the realm of firmware, and the hardware itself becomes smaller, cheaper and faster as predicted by Moore's law , an increasing amount of computing functionality first carried out by software has joined the ranks of hardware, as for example with graphics processing units . (However, the change has sometimes gone the other way for cost or other reasons, as for example with softmodems and microcode .)
Most hardware companies today have more software programmers on the payroll than hardware designers [ citation needed ] , since software tools have automated many tasks of printed circuit board (PCB) engineers.
The following tables include year by year development of many different aspects of computer software. [Table contents not preserved; surviving fragments include WordStar 3.0, WordPerfect for DOS, and CryEngine 3 ( BeamNG.drive ).] | https://en.wikipedia.org/wiki/History_of_software
The history of software configuration management (SCM) can be traced back as early as the 1950s, when CM (configuration management), originally used for hardware development and production control, was being applied to software development. Early software had a physical footprint, such as cards , tapes , and other media. The first software configuration management was a manual operation. With advances in language and complexity, software engineering , involving configuration management and other methods, became a major concern due to issues like schedule, budget, and quality. Over the years, practical lessons led to the definition and establishment of procedures and tools. Eventually, the tools became systems to manage software changes. [ 1 ] Industry-wide practices were offered as solutions, either in an open or proprietary manner (such as Revision Control System ). With the growing use of computers, systems emerged that handled a broader scope, including requirements management , design alternatives, quality control, and more; later tools followed the guidelines of organizations, such as the Capability Maturity Model of the Software Engineering Institute .
Until the 1980s, SCM could only be understood as CM applied to software development. [ 5 ] Some basic concepts, such as identification and baseline (a well-defined point in the evolution of a project), were already clear, but what was at stake was a set of techniques oriented towards the control of the activity, using formal processes, documents, request forms, control boards, etc.
Only after this date did the use of software tools applied directly to software artefacts (the actual resources under management) allow SCM to grow into an autonomous entity, distinct from traditional CM.
The use of different tools has actually led to very distinct emphases.
SCCS (first released in 1973) and DSEE (considered a predecessor of Atria ClearCase ), described in 1984, [ 6 ] are two other notable VCS software tools. These tools, along with Revision Control System (RCS), are generally considered the first generation of VCS as automated software tools. [ 7 ]
After the first-generation VCS tools, tools such as CVS and Subversion , which feature a locally centralized repository, can be considered the second generation of VCS. Specifically, CVS (Concurrent Versions System) was developed on top of the RCS structure, improving the scalability of the tool for larger groups; it was followed by PRCS, [ 8 ] a simpler CVS-like tool which also uses RCS-like files but improves upon the delta compression by using Xdelta instead.
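The delta-storage idea that RCS-family tools rely on can be sketched in a few lines. The toy below is not the RCS or Xdelta format itself, just the general strategy of keeping the newest revision in full and reconstructing older ones from stored differences; all names are invented for the example.

```python
# Toy reverse-delta storage in the spirit of RCS-family tools: the latest
# revision is kept whole, and older revisions are rebuilt from deltas.
from difflib import SequenceMatcher

def make_delta(new, old):
    """Record the operations needed to rebuild `old` from `new`."""
    ops = SequenceMatcher(a=new, b=old, autojunk=False).get_opcodes()
    # Only text that cannot be copied out of `new` is stored verbatim.
    return [(tag, i1, i2, old[j1:j2]) for tag, i1, i2, j1, j2 in ops]

def apply_delta(new, delta):
    """Replay the stored operations against the full latest revision."""
    out = []
    for tag, i1, i2, stored in delta:
        out.append(new[i1:i2] if tag == "equal" else stored)
    return "".join(out)

rev1 = 'int main() { return 0; }\n'
rev2 = 'int main() { puts("hi"); return 0; }\n'
delta_to_rev1 = make_delta(rev2, rev1)   # store rev2 whole, rev1 as a delta
assert apply_delta(rev2, delta_to_rev1) == rev1
print("rev1 rebuilt from", len(delta_to_rev1), "delta operations")
```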
By about 2006, Subversion was considered the most popular and most widely used VCS tool of this generation, and it addressed important weaknesses of CVS. [ according to whom? ] SVK was developed later, with the goal of supporting remote contribution, but the foundation of its design was still similar to that of its predecessors. [ 7 ]
As Internet connectivity improved and geographically distributed software development became more common, tools emerged that did not rely on a shared central project repository. These allow users to maintain independent repositories (or forks ) of a project and communicate revisions via changesets . BitKeeper , Git , Monotone , darcs , Mercurial , and bzr are some examples of third generation version control systems. [ 7 ] | https://en.wikipedia.org/wiki/History_of_software_configuration_management |
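The distinguishing mechanism of these third-generation tools (revisions identified by their content and ancestry rather than by a number in one central database) can be sketched in miniature. The Python toy below is a loose illustration of that concept, not Git's or Mercurial's actual object model; every name in it is invented.

```python
# Conceptual sketch of distributed version control: each changeset is
# identified by a hash of its content plus its parent's identity, so two
# independent repositories can exchange changesets without a central server.
import hashlib

def changeset_id(parent_id, diff_text):
    payload = (parent_id or "root") + "\n" + diff_text
    return hashlib.sha1(payload.encode()).hexdigest()

class Repo:
    def __init__(self):
        self.changesets = {}                 # id -> (parent_id, diff_text)

    def commit(self, parent_id, diff_text):
        cid = changeset_id(parent_id, diff_text)
        self.changesets[cid] = (parent_id, diff_text)
        return cid

    def pull(self, other):
        """Copy over changesets we lack; identical work gets identical ids."""
        self.changesets.update(other.changesets)

alice, bob = Repo(), Repo()
base = alice.commit(None, "+ hello.c")
bob.pull(alice)                              # bob forks alice's repository
a2 = alice.commit(base, "+ Makefile")        # both sides work independently
b2 = bob.commit(base, "+ README")
alice.pull(bob)                              # revisions travel as changesets
assert {base, a2, b2} <= set(alice.changesets)
print("alice now holds", len(alice.changesets), "changesets")
```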
The history of software engineering begins around the 1960s. Writing software has evolved into a profession concerned with how best to maximize the quality of software and how to create it. Quality can refer to how maintainable software is, to its stability, speed, usability, testability, readability, size, cost, security, and number of flaws or "bugs", as well as to less measurable qualities like elegance, conciseness, and customer satisfaction, among many other attributes. How best to create high-quality software is a separate and controversial problem, covering software design principles, so-called "best practices" for writing code, as well as broader management issues such as optimal team size, process, how best to deliver software on time and as quickly as possible, work-place "culture", hiring practices, and so forth. All this falls under the broad rubric of software engineering . [ 1 ]
The evolution of software engineering is notable in a number of areas.
Early usages of the term software engineering include a 1965 letter from ACM president Anthony Oettinger [ 6 ] [ 7 ] and lectures given by Douglas T. Ross at MIT in the 1950s. [ 8 ] Margaret H. Hamilton came up with the idea of naming the discipline software engineering as a way of giving it legitimacy during the development of the Apollo Guidance Computer . [ 9 ] [ 10 ]
I fought to bring the software legitimacy so that it—and those building it—would be given its due respect and thus I began to use the term 'software engineering' to distinguish it from hardware and other kinds of engineering, yet treat each type of engineering as part of the overall systems engineering process. When I first started using this phrase, it was considered to be quite amusing. It was an ongoing joke for a long time. They liked to kid me about my radical ideas. Software eventually and necessarily gained the same respect as any other discipline.
The NATO Science Committee sponsored two conferences [ 12 ] on software engineering, in 1968 ( Garmisch , Germany) and 1969 ( Rome , Italy), which gave the field its initial boost. Many believe these conferences marked the official start of the profession of software engineering . [ 6 ] [ 13 ]
Software engineering was spurred by the so-called software crisis of the 1960s, 1970s, and 1980s, which identified many of the problems of software development. Many projects ran over budget and schedule. Some projects caused property damage, and some even cost lives when software with critical bugs was negligently published. One of the most striking examples of harm through software bugs was the Therac-25 race condition bug, which caused a radiation therapy machine to administer overdoses of radiation in cases where low doses should have been used. The software crisis was originally defined in terms of productivity , but evolved to emphasize quality . Some used the term software crisis to refer to their inability to hire enough qualified programmers. [ citation needed ] During this time, Silicon Valley cemented itself as the best location for software engineers to work. [ 14 ]
Peter G. Neumann has kept a contemporary list of software problems and disasters. [ 16 ] The software crisis has been fading from view, because it is psychologically extremely difficult to remain in crisis mode for a protracted period (more than 20 years). Nevertheless, software – especially real-time embedded software – remains risky and is pervasive, and it is crucial not to give in to complacency. Over the last 10–15 years Michael A. Jackson has written extensively about the nature of software engineering, has identified the main source of its difficulties as lack of specialization, and has suggested that his problem frames provide the basis for a "normal practice" of software engineering, a prerequisite if software engineering is to become an engineering science. [ 17 ]
One of the largest projects undertaken by software engineers during this time period was the development of modern operating systems . Starting at Bell Labs and then moving to UC Berkeley, Ken Thompson and Dennis Ritchie , among other software engineers, worked to create Unix V6 in 1975. Unix V6 was a landmark operating system that set standards for future operating systems and is used today to educate students about proper operating system principles. [ 18 ] Moreover, future operating systems built on Unix V6's methods, and its descendants can be grouped into five types of operating system paradigms: Grassroots Systems, Large-Scale Systems, Hybrid Systems, Experimental Systems, and Minor Systems. [ 19 ] In contrast with Unix, software engineers at MIT in 1983 built GNU (literally "GNU's Not Unix") as an open-source alternative to Unix. As early open-source software, GNU was beloved by a small group of developers, and its development grew the open-source software community in the 80s. [ 20 ]
For decades, solving the software crisis was paramount to researchers and companies producing software tools.
The cost of owning and maintaining software in the 1980s was twice as expensive as developing the software. [ citation needed ]
Seemingly, every new technology and practice from the 1970s through the 1990s was trumpeted as a silver bullet to solve the software crisis. Tools, discipline, formal methods , process, and professionalism were all touted as silver bullets. [ citation needed ]
In 1986, Fred Brooks published his No Silver Bullet article, arguing that no individual technology or practice would ever make a 10-fold improvement in productivity within 10 years. [ citation needed ]
Debate about silver bullets raged over the following decade. Advocates for Ada , components , and processes continued arguing for years that their favorite technology would be a silver bullet. Skeptics disagreed. Eventually, almost everyone accepted that no silver bullet would ever be found. Yet, claims about silver bullets pop up now and again, even today. [ citation needed ]
Some [ who? ] interpret [ why? ] no silver bullet to mean that software engineering failed. [ clarification needed ] However, with further reading, Brooks goes on to say: "We will surely make substantial progress over the next 40 years; an order of magnitude over 40 years is hardly magical ..." [ citation needed ]
The search for a single key to success never worked. All known technologies and practices have only made incremental improvements to productivity and quality. Yet, there are no silver bullets for any other profession, either. Others interpret no silver bullet as proof that software engineering has finally matured and recognized that projects succeed due to hard work. [ citation needed ]
However, it could also be said that there are, in fact, a range of silver bullets today, including lightweight methodologies (see " Project management "), spreadsheet calculators, customized browsers , in-site search engines, database report generators, integrated design-test coding-editors with memory/differences/undo, and specialty shops that generate niche software, such as information web sites, at a fraction of the cost of totally customized web site development. Nevertheless, the field of software engineering appears too complex and diverse for a single "silver bullet" to improve most issues, and each issue accounts for only a small portion of all software problems. [ citation needed ]
The rise of the Internet led to very rapid growth in the demand for international information display/e-mail systems on the World Wide Web. Programmers were required to handle illustrations, maps, photographs, and other images, plus simple animation, at a rate never before seen, with few well-known methods to optimize image display/storage (such as the use of thumbnail images). [ citation needed ]
The growth of browser usage, running on the HyperText Markup Language (HTML), changed the way in which information-display and retrieval was organized. Widespread network connections led to the growth of international computer viruses on MS Windows computers and to efforts to prevent them, and the vast proliferation of spam e-mail became a major design issue in e-mail systems, flooding communication channels and requiring semi-automated pre-screening. Keyword-search systems evolved into web-based search engines , and many software systems had to be re-designed for international searching, depending on search engine optimization (SEO) . Human natural-language translation systems were needed to attempt to translate the information flow in multiple foreign languages, with many software systems being designed for multi-language usage, based on design concepts from human translators. Typical computer-user bases went from hundreds or thousands of users to, often, many millions of international users. [ citation needed ]
With the expanding demand for software in many smaller organizations, the need for inexpensive software solutions led to the growth of simpler, faster methodologies that developed running software, from requirements to deployment, more quickly and easily. The use of rapid prototyping evolved into entire lightweight methodologies , such as Extreme Programming (XP), which attempted to simplify many areas of software engineering, including requirements gathering and reliability testing, for the growing, vast number of small software systems. Very large software systems still used heavily documented methodologies, with many volumes in the documentation set; however, smaller systems had a simpler, faster alternative approach to managing the development and maintenance of software calculations and algorithms, information storage/retrieval and display. [ citation needed ]
Software engineering is a young discipline, and is still developing. The directions in which software engineering is developing include: [ citation needed ]
Aspects help software engineers deal with quality attributes by providing tools to add or remove boilerplate code from many areas in the source code . Aspects describe how all objects or functions should behave in particular circumstances. For example, aspects can add debugging , logging , or locking control into all objects of particular types. Researchers are currently working to understand how to use aspects to design general-purpose code. Related concepts include generative programming and templates .
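In Python terms, the flavor of this cross-cutting idea can be suggested with a decorator that weaves logging into many functions at once. This is only an analogy for aspect-oriented programming, not any particular AOP framework's API; the function names are invented.

```python
# A plain-Python analogy for an aspect: cross-cutting logging is defined
# once and woven into many functions, rather than written inside each one.
import functools
import logging

logging.basicConfig(level=logging.INFO)

def logged(func):
    """The 'aspect': uniform entry/exit logging wherever it is applied."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        logging.info("entering %s args=%r", func.__name__, args)
        result = func(*args, **kwargs)
        logging.info("leaving %s -> %r", func.__name__, result)
        return result
    return wrapper

@logged
def transfer(amount):
    return f"transferred {amount}"

@logged
def balance():
    return 42

transfer(10)   # both calls gain identical logging with no local boilerplate
balance()
```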
Experimental software engineering is a branch of software engineering interested in devising experiments on software, in collecting data from the experiments, and in devising laws and theories from this data.
Software product lines, aka product family engineering , is a systematic way to produce families of software systems, instead of creating a succession of completely individual products. This method emphasizes extensive, systematic, formal code reuse , to try to industrialize the software development process.
The Future of Software Engineering conference (FOSE), held at ICSE 2000, documented the state of the art of SE in 2000 and listed many problems to be solved over the next decade. The FOSE tracks at the ICSE 2000 [ 22 ] and the ICSE 2007 [ 23 ] conferences also help identify the state of the art in software engineering. [ citation needed ]
The profession is trying to define its boundary and content. The Software Engineering Body of Knowledge SWEBOK has been tabled as an ISO standard during 2006 (ISO/IEC TR 19759). [ citation needed ]
In 2006, Money Magazine and Salary.com rated software engineering as the best job in America in terms of growth, pay, stress levels, flexibility in hours and working environment, creativity, and how easy it is to enter and advance in the field. [ 24 ]
A wide variety of platforms has allowed different aspects of AI to develop, ranging from expert systems such as Cyc to deep learning to robot platforms such as the Roomba with open interface. [ 25 ] Recent advances in deep artificial neural networks and distributed computing have led to a proliferation of software libraries, including Deeplearning4j , TensorFlow , Theano and Torch .
A 2011 McKinsey Global Institute study found a shortage of 1.5 million highly trained data and AI professionals and managers [ 26 ] and a number of private bootcamps have developed programs to meet that demand, including free programs like The Data Incubator or paid programs like General Assembly . [ 27 ]
Early symbolic AI inspired Lisp and Prolog , which dominated early AI programming. Modern AI development often uses mainstream languages such as Python or C++ , [ 28 ] or niche languages such as Wolfram Language . [ 29 ] | https://en.wikipedia.org/wiki/History_of_software_engineering |
The history of sound recording - which has progressed in waves, driven by the invention and commercial introduction of new technologies - can be roughly divided into four main periods: the acoustic, electrical, magnetic, and digital eras.
Experiments in capturing sound on a recording medium for preservation and reproduction began in earnest during the Industrial Revolution of the 1800s. Many pioneering attempts to record and reproduce sound were made during the latter half of the 19th century – notably Édouard-Léon Scott de Martinville 's phonautograph of 1857 – and these efforts culminated in the invention of the phonograph by Thomas Edison in 1877. Digital recording emerged in the late 20th century and has since flourished with the popularity of digital music and online streaming services. [ 1 ]
The earliest practical recording technologies were entirely mechanical devices. These recorders typically used a large conical horn to collect and focus the physical air pressure of the sound waves produced by the human voice or musical instruments. A sensitive membrane or diaphragm, located at the apex of the cone, was connected to an articulated scriber or stylus, and as the changing air pressure moved the diaphragm back and forth, the stylus scratched or incised an analog of the sound waves onto a moving recording medium, such as a roll of coated paper, or a cylinder or disc coated with a soft material such as wax or a soft metal.
These early recordings were necessarily of low fidelity and volume and captured only a narrow segment of the audible sound spectrum — typically only from around 250 Hz up to about 2,500 Hz — so musicians and engineers were forced to adapt to these sonic limitations. Musical ensembles of the period often favored louder instruments such as trumpet , cornet , and trombone ; lower-register brass instruments such as the tuba and the euphonium doubled or replaced the double bass , and blocks of wood stood in for bass drums . Performers also had to arrange themselves strategically around the horn to balance the sound, and to play as loudly as possible. The reproduction of domestic phonographs was similarly limited in both frequency-range and volume.
By the end of the acoustic era, the disc had become the standard medium for sound recording, and its dominance in the domestic audio market lasted until the end of the 20th century. [ 2 ]
The second wave of sound recording history was ushered in by the introduction of Western Electric 's integrated system of electrical microphones , electronic signal amplifiers and electromechanical recorders, which was adopted by major US record labels in 1925. Sound recording now became a hybrid process: sound could be captured, amplified , filtered , and balanced electronically, and the disc-cutting head was electrically powered, but the actual recording process remained essentially mechanical - the signal was still physically inscribed into a wax master disc, and consumer discs were mass-produced mechanically by stamping a metal electroform made from the wax master into a suitable substance, originally a shellac -based compound and later polyvinyl plastic.
The Western Electric system greatly improved the fidelity of sound recording, increasing the reproducible frequency range to a much wider band (between 60 Hz and 6000 Hz) and allowing a new class of professional – the audio engineer – to capture a fuller, richer, and more detailed and balanced sound on record, using multiple microphones connected to multi-channel electronic amplifiers, compressors, filters and mixers . Electrical microphones led to a dramatic change in the performance style of singers, ushering in the age of the crooner , while electronic amplification had a wide-ranging impact in many areas, enabling the development of broadcast radio, public address systems, and electronically amplified home record players.
In addition, the development of electronic amplifiers for musical instruments now enabled quieter instruments such as the guitar and the string bass to compete on equal terms with the naturally louder wind and horn instruments, and musicians and composers also began to experiment with entirely new electronic musical instruments such as the Theremin , the Ondes Martenot , the electronic organ , and the Hammond Novachord , the world's first analog polyphonic synthesizer .
Contemporaneous with these developments, several inventors were engaged in a race to develop practical methods of providing synchronised sound with films. Some early sound films — such as the landmark 1927 film The Jazz Singer – used large soundtrack records which were played on a turntable mechanically interlocked with the projector . By the early 1930s, the movie industry had almost universally adopted sound-on-film technology, in which the audio signal to be recorded was used to modulate a light source that was imaged onto the moving film through a narrow slit, allowing it to be photographed as variations in the density or width of a soundtrack running along a dedicated area of the film. The projector used a steady light and a photoelectric cell to convert the variations back into an electrical signal, which was amplified and sent to loudspeakers behind the screen.
The adoption of sound-on-film also helped movie-industry audio engineers to make rapid advances in the process we now know as multi-tracking , by which multiple separately-recorded audio sources (such as voices, sound effects and background music) can be replayed simultaneously, mixed together, and synchronized with the action on film to create new blended audio tracks of great sophistication and complexity. One of the best-known examples of a constructed composite sound from that era is the famous " Tarzan yell " created for the series of Tarzan movies starring Johnny Weissmuller .
Among the vast and often rapid changes that have taken place over the last century of audio recording, it is notable that there is one crucial audio device, invented at the start of the Electrical Era, [ 3 ] which has survived virtually unchanged since its introduction in the 1920s: the electro-acoustic transducer , or loudspeaker . The most common form is the dynamic loudspeaker – effectively a dynamic microphone in reverse. This device typically consists of a shallow conical diaphragm, usually of a stiff paper-like material concentrically pleated to make it more flexible, firmly fastened at its perimeter, with the coil of a moving-coil electromagnetic driver attached around its apex. When an audio signal from a recording, a microphone, or an electrified instrument is fed through an amplifier to the loudspeaker, the varying electromagnetic field created in the coil causes it and the attached cone to move backwards and forward, and this movement generates the audio-frequency pressure waves that travel through the air to our ears, which hear them as sound.
Although there have been numerous refinements to the technology, and other related technologies have been introduced (e.g. the electrostatic loudspeaker ), the basic design and function of the dynamic loudspeaker has not changed substantially in 90 years, and it remains overwhelmingly the most common, sonically accurate and reliable means of converting electronic audio signals back into audible sound.
The third wave of development in audio recording began in 1945 when the allied nations gained access to a new German invention: magnetic tape recording. The technology was invented in the 1930s but remained restricted to Germany (where it was widely used in broadcasting) until the end of World War II. Magnetic tape provided another dramatic leap in audio fidelity—indeed, Allied observers first became aware of the existence of the new technology because they noticed that the audio quality of obviously pre-recorded programs was practically indistinguishable from live broadcasts.
From 1950 onwards, magnetic tape quickly became the standard medium of audio master recording in the radio and music industries and led to the development of the first hi-fi stereo recordings for the domestic market, the development of multi-track tape recording for music, and the demise of the disc as the primary mastering medium for sound. Magnetic tape also brought about a radical reshaping of the recording process—it made possible recordings of far longer duration and much higher fidelity than ever before, and it offered recording engineers the same exceptional plasticity that film gave to cinema editors—sounds captured on tape could now easily be manipulated sonically, edited, and combined in ways that were simply impossible with disc recordings.
These experiments reached an early peak in the 1950s with the recordings of Les Paul and Mary Ford , who pioneered the use of tape editing and multi-tracking to create large virtual ensembles of voices and instruments, constructed entirely from multiple taped recordings of their own voices and instruments. Magnetic tape fueled a rapid and radical expansion in the sophistication of popular music and other genres, allowing composers, producers, engineers and performers to realize previously unattainable levels of complexity. Other concurrent advances in audio technology led to the introduction of a range of new consumer audio formats and devices, on both disc and tape. These included the development of full-frequency-range disc reproduction, the change from shellac to polyvinyl plastic for disc manufacture, the invention of the 33 rpm, 12-inch (300 mm) long-playing (LP) disc and the 45 rpm 7-inch (180 mm) single , the introduction of domestic and professional portable tape recorders (which enabled high-fidelity recordings of live performances), the popular 4-track cartridge and compact cassette formats, and even the world's first sampling keyboards : the pioneering tape-based keyboard instrument the Chamberlin , and its more famous successor, the Mellotron .
The fourth and current phase, the digital era, has seen a rapid, dramatic and far-reaching series of changes. In a period of fewer than 20 years, all previous recording technologies were rapidly superseded by digital sound encoding; the Japanese electronics corporation Sony , which had been developing PCM recording since the 1970s, was instrumental in this shift, introducing the first consumer PCM encoder, the PCM-F1, in 1981. [ 4 ] Unlike all previous technologies, which captured a continuous analog of the sounds being recorded, digital recording captured sound by means of a very dense and rapid series of discrete samples of the sound. [ 5 ] When played back through a digital-to-analog converter , these audio samples are recombined to form a continuous flow of sound. The first all-digitally-recorded popular music album, Ry Cooder 's Bop Till You Drop , was released in 1979, and from that point, digital sound recording and reproduction quickly became the new standard at every level, from the professional recording studio to the home hi-fi.
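The sampling principle can be made concrete in a few lines. The sketch below is a toy illustration of pulse-code modulation, not any product's actual encoder; the 44,100 Hz rate and 16-bit depth are the familiar CD-era parameters, and everything else (the test tone, the names) is invented.

```python
# Toy pulse-code modulation (PCM): sample a continuous waveform at discrete
# instants and quantize each sample to a 16-bit signed integer.
import math

SAMPLE_RATE = 44_100                     # samples per second (CD-era value)
FULL_SCALE = 2 ** 15 - 1                 # 32767, the 16-bit signed maximum

def analog(t):
    """Stand-in for the continuous input signal: a 440 Hz sine tone."""
    return math.sin(2 * math.pi * 440 * t)

def record(duration):
    """Capture `duration` seconds as a dense series of discrete samples."""
    n = int(duration * SAMPLE_RATE)
    return [round(analog(i / SAMPLE_RATE) * FULL_SCALE) for i in range(n)]

samples = record(0.01)                   # 10 ms -> 441 discrete samples
print(len(samples), "samples; first five:", samples[:5])
# A digital-to-analog converter reverses the mapping on playback,
# recombining the samples into a continuous signal.
```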
Although a number of short-lived hybrid studio and consumer technologies appeared in this period (e.g. Digital Audio Tape or DAT, which recorded digital signal samples onto standard magnetic tape), Sony assured the preeminence of its new digital recording system by introducing, together with Philips , the digital compact disc (CD) . The compact disc rapidly replaced both the 12 in (300 mm) album and the 7 in (180 mm) single as the new standard consumer format and ushered in a new era of high-fidelity consumer audio.
CDs were small, portable and durable, and they could reproduce the entire audible sound spectrum, with a large dynamic range (~96 dB), perfect clarity and no distortion. Because CDs were encoded and read optically, using a laser beam, there was no physical contact between the disc and the playback mechanism, so a well-cared-for CD could be played over and over, with absolutely no degradation or loss of fidelity. CDs also represented a considerable advance in both the physical size of the medium and its storage capacity. LPs could only practically hold about 20–25 minutes of audio per side because they were physically limited by the size of the disc itself and the density of the grooves that could be cut into it - the longer the recording, the closer together the grooves and thus the lower the overall fidelity. CDs, on the other hand, were less than half the overall size of the old 12 in (300 mm) LP format, but offered about double the duration of the average LP, with up to 80 minutes of audio. [ 6 ]
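The quoted ~96 dB figure follows directly from the CD's 16-bit samples, assuming the usual rule of thumb that each bit of linear PCM resolution contributes about 6 dB of dynamic range:

```latex
\mathrm{DR} \approx 20 \log_{10}\left(2^{16}\right)
            = 16 \times 20 \log_{10} 2
            \approx 96.3~\mathrm{dB}
```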
The compact disc almost totally dominated the consumer audio market by the end of the 20th century, but within another decade, rapid developments in computing technology saw it rendered virtually redundant in just a few years by the most significant new invention in the history of audio recording - the digital audio file (.wav, .mp3 and other formats). When combined with newly developed digital signal compression algorithms, which greatly reduced file sizes, digital audio files came to dominate the domestic market, thanks to commercial innovations such as Apple's iTunes media application and its popular iPod portable media player.
However, the introduction of digital audio files, in concert with the rapid developments in home computing, soon led to an unforeseen consequence — the widespread unlicensed distribution of audio and other digital media files. The uploading and downloading of large volumes of digital media files at high speed was facilitated by freeware file-sharing technologies such as Napster and BitTorrent .
Although infringement remains a significant issue for copyright owners, the development of digital audio has had considerable benefits for consumers and labels. In addition to facilitating the high-volume, low-cost transfer and storage of digital audio files, this new technology has also powered an explosion in the availability of so-called back-catalog titles stored in the archives of recording labels, thanks to the fact that labels can now convert old recordings and distribute them digitally at a fraction of the cost of physically reissuing albums on LP or CD. Digital audio has also enabled dramatic improvements in the restoration and remastering of acoustic and pre-digital electric recordings, and even freeware consumer-level digital software can very effectively eliminate scratches, surface noise and other unwanted sonic artifacts from old 78rpm and vinyl recordings and greatly enhance the sound quality of all but the most badly damaged records. In the field of consumer-level digital data storage, the continuing trend towards increasing capacity and falling costs means that consumers can now acquire and store vast quantities of high-quality digital media (audio, video, games and other applications), and build up media libraries consisting of tens or even hundreds of thousands of songs, albums, or videos — collections which, for all but the wealthiest, would have been both physically and financially impossible to amass in such quantities if they were on 78 or LP, yet which can now be contained on storage devices no larger than the average hardcover book.
The digital audio file marked the end of one era in recording and the beginning of another. Digital files effectively eliminated the need to create or use a discrete, purpose-made physical recording medium (a disc, or a reel of tape, etc.) as the primary means of capturing, manufacturing and distributing commercial sound recordings. Concurrent with the development of these digital file formats, dramatic advances in home computing and the rapid expansion of the Internet mean that digital sound recordings can now be captured, processed, reproduced, distributed and stored entirely electronically, on a range of magnetic and optical recording media, and these can be distributed anywhere in the world, with no loss of fidelity, and crucially, without the need to first transfer these files to some form of permanent recording medium for shipment and sale.
Music streaming services have gained popularity since the late 2000s. [ 7 ] Streaming audio does not require the listener to own the audio files; instead, they listen over the internet. [ 8 ] Streaming services offer an alternative method of consuming music, and some follow a freemium business model . Under the freemium model used by many music streaming services, such as Spotify and Apple Music , a limited amount of content is provided for free, with premium services available for payment. [ 9 ] Streaming services fall into two categories, radio and on-demand. Services such as Pandora use the radio model, allowing users to select playlists but not specific songs, while services such as Apple Music allow users to listen to both individual songs and pre-made playlists. [ 10 ]
The earliest method of sound recording and reproduction involved the live recording of a performance directly to a recording medium by an entirely mechanical process, often called acoustical recording . In the standard procedure used until the mid-1920s, the sounds generated by the performance vibrated a diaphragm with a recording stylus connected to it, and the stylus cut a groove into a soft recording medium rotating beneath it. To make this process as efficient as possible, the diaphragm was located at the apex of a hollow cone that served to collect and focus the acoustical energy, with the performers crowded around the other end. Recording balance was achieved empirically: a performer who recorded too strongly or not strongly enough would be moved away from or nearer to the mouth of the cone. The number and kind of instruments that could be recorded were limited. Brass instruments, which recorded well, often substituted for instruments such as cellos and bass fiddles, which did not. In some early jazz recordings, a block of wood was used in place of the snare drum , which could easily overload the recording diaphragm.
In 1857, Édouard-Léon Scott de Martinville invented the phonautograph , the first device that could record sound waves as they passed through the air. It was intended only for visual study of the recording and could not play back the sound. The recording medium was a sheet of soot-coated paper wrapped around a rotating cylinder carried on a threaded rod. A stylus , attached to a diaphragm through a series of levers, traced a line through the soot, creating a graphic record of the motions of the diaphragm as it was minutely propelled back and forth by the audio-frequency variations in air pressure.
In the spring of 1877 another inventor, Charles Cros , suggested that the process could be reversed by using photoengraving to convert the traced line into a groove that would guide the stylus, causing the original stylus vibrations to be recreated, passed on to the linked diaphragm, and sent back into the air as sound. Edison's invention of the phonograph soon eclipsed this idea, and it was not until 1887 that yet another inventor, Emile Berliner , actually photoengraved a phonautograph recording into metal and played it back.
Scott's early recordings languished in French archives until 2008 when scholars keen to resurrect the sounds captured in these and other types of early experimental recordings tracked them down. Rather than using rough 19th-century technology to create playable versions, they were scanned into a computer and software was used to convert their sound-modulated traces into digital audio files. Brief excerpts from two French songs and a recitation in Italian, all recorded in 1860, are the most substantial results. [ 11 ]
The phonograph , invented by Thomas Edison in 1877, [ 12 ] could both record sound and play it back. The earliest type of phonograph sold recorded on a thin sheet of tinfoil wrapped around a grooved metal cylinder. A stylus connected to a sound-vibrated diaphragm indented the foil into the groove as the cylinder rotated. The stylus vibration was at a right angle to the recording surface, so the depth of the indentation varied with the audio-frequency changes in air pressure that carried the sound. This arrangement is known as vertical or hill-and-dale recording. The sound could be played back by tracing the stylus along the recorded groove and acoustically coupling its resulting vibrations to the surrounding air through the diaphragm and a so-called amplifying horn.
The crude tinfoil phonograph proved to be of little use except as a novelty. It was not until the late 1880s that an improved and much more useful form of the phonograph was marketed. The new machines recorded on easily removable hollow wax cylinders and the groove was engraved into the surface rather than indented. The targeted use was business communication, and in that context, the cylinder format had some advantages. When entertainment use proved to be the real source of profits, one seemingly negligible disadvantage became a major problem: the difficulty of making copies of a recorded cylinder in large quantities.
At first, cylinders were copied by acoustically connecting a playback machine to one or more recording machines through flexible tubing, an arrangement that degraded the audio quality of the copies. Later, a pantograph mechanism was used, but it could only produce about 25 fair copies before the original was too worn down. During a recording session, as many as a dozen machines could be arrayed in front of the performers to record multiple originals. Still, a single take would ultimately yield only a few hundred copies at best, so performers were booked for marathon recording sessions in which they had to repeat their most popular numbers over and over again. By 1902, successful molding processes for manufacturing prerecorded cylinders had been developed.
The wax cylinder got a competitor with the advent of the Gramophone, which was patented by Emile Berliner in 1887. The vibration of the Gramophone's recording stylus was horizontal, parallel to the recording surface, resulting in a zig-zag groove of constant depth. This is known as lateral recording. Berliner's original patent showed a lateral recording etched around the surface of a cylinder, but in practice, he opted for the disc format. The Gramophones he soon began to market were intended solely for playing prerecorded entertainment discs and could not be used to record. The spiral groove on the flat surface of a disc was relatively easy to replicate: a negative metal electrotype of the original record could be used to stamp out hundreds or thousands of copies before it wore out. Early on, the copies were made of hard rubber , and sometimes of celluloid , but soon a shellac -based compound was adopted.
Gramophone , Berliner's trademark name, was abandoned in the US in 1900 because of legal complications, with the result that in American English Gramophones and Gramophone records, along with disc records and players made by other manufacturers, were long ago brought under the umbrella term phonograph , a word which Edison's competitors avoided using but which was never his trademark, simply a generic term he introduced and applied to cylinders, discs, tapes and any other formats capable of carrying a sound-modulated groove. In the UK, proprietary use of the name Gramophone continued for another decade until, in a court case, it was adjudged to have become genericized and so could be used freely by competing disc record makers, with the result that in British English a disc record is called a gramophone record and phonograph record is traditionally assumed to mean a cylinder.
Not all cylinder records are alike. They were made of various soft or hard waxy formulations or early plastics, sometimes in unusual sizes; did not all use the same groove pitch; and were not all recorded at the same speed. Early brown wax cylinders were usually cut at about 120 rpm , whereas later cylinders ran at 160 rpm for clearer and louder sound at the cost of reduced maximum playing time. As a medium for entertainment, the cylinder was already losing the format war with the disc by 1910, but the production of entertainment cylinders did not entirely cease until 1929 and use of the format for business dictation purposes persisted into the 1950s.
Disc records, too, were sometimes made in unusual sizes, or from unusual materials, or otherwise deviated from the format norms of their eras in some substantial way. The speed at which disc records were rotated was eventually standardized at about 78 rpm, but other speeds were sometimes used. Around 1950, slower speeds became standard: 45, 33⅓, and the rarely used 16⅔ rpm. The standard material for discs changed from shellac to vinyl , although vinyl had been used for some special-purpose records since the early 1930s and some 78 rpm shellac records were still being made in the late 1950s.
Until the mid-1920s, records were played on purely mechanical record players usually powered by a wind-up spring motor. The sound was amplified by an external or internal horn that was coupled to the diaphragm and stylus , although there was no real amplification: the horn simply improved the efficiency with which the diaphragm's vibrations were transmitted into the open air. The recording process was, in essence, the same non-electronic setup operating in reverse, but with a recording stylus engraving a groove into a soft waxy master disc while being carried slowly inward across it by a feed mechanism.
The advent of electrical recording in 1925 made it possible to use sensitive microphones to capture the sound and greatly improved the audio quality of records. A much wider range of frequencies could be recorded, the balance of high and low frequencies could be controlled by elementary electronic filters, and the signal could be amplified to the optimum level for driving the recording stylus. The leading record labels switched to the electrical process in 1925 and the rest soon followed, although one straggler in the US held out until 1929.
There was a period of nearly five years, from 1925 to 1930, when the top audiophile technology for home sound reproduction consisted of a combination of electrically recorded records with the specially developed Victor Orthophonic Victrola , an acoustic phonograph that used waveguide engineering and a folded horn to provide a reasonably flat frequency response . The first electronically amplified record players reached the market only a few months later, around the start of 1926, but at first they were much more expensive and their audio quality was impaired by their primitive loudspeakers ; they did not become common until the late 1930s.
Electrical recording increased the flexibility of the process, but the performance was still cut directly to the recording medium, so if a mistake was made the whole recording was spoiled. Disc-to-disc editing was possible, by using multiple turntables to play parts of different takes and recording them to a new master disc, but switching sources with split-second accuracy was difficult and lower sound quality was inevitable, so except for use in editing some early sound films and radio recordings it was rarely done.
Electrical recording made it more feasible to record one part to disc and then play that back while playing another part, recording both parts to a second disc. This and conceptually related techniques, known as overdubbing , enabled studios to create recorded performances that feature one or more artists each singing multiple parts or playing multiple instrument parts and that therefore could not be duplicated by the same artist or artists performing live. The first commercially issued records using overdubbing were released by the Victor Talking Machine Company in the late 1920s. However, overdubbing was of limited use until the advent of audio tape . Use of tape overdubbing was pioneered by Les Paul in the 1940s.
Wire recording or magnetic wire recording is an analog type of audio storage in which a magnetic recording is made on thin steel or stainless steel wire.
The wire is pulled rapidly across a recording head, which magnetizes each point along the wire in accordance with the intensity and polarity of the electrical audio signal being supplied to the recording head at that instant. By later drawing the wire across the same or a similar head while the head is not being supplied with an electrical signal, the varying magnetic field presented by the passing wire induces a similarly varying electric current in the head, recreating the original signal at a reduced level.
Magnetic wire recording was replaced by magnetic tape recording, but devices employing one or the other of these media had been more or less simultaneously under development for many years before either came into widespread use. The principles and electronics involved are nearly identical. Wire recording initially had the advantage that the recording medium itself was already fully developed, while tape recording was held back by the need to improve the materials and methods used to manufacture the tape.
Magnetic recording was demonstrated in principle as early as 1898 by Valdemar Poulsen in his telegraphone . Magnetic wire recording, and its successor, magnetic tape recording , involve the use of a magnetized medium that moves with a constant speed past a recording head . An electrical signal, which is analogous to the sound that is to be recorded, is fed to the recording head, inducing a pattern of magnetization similar to the signal. A playback head can then pick up the changes in the magnetic field from the tape and convert it into an electrical signal.
With the addition of electronic amplification developed by Curt Stille in the 1920s, the telegraphone evolved into wire recorders which were popular for voice recording and dictation during the 1940s and into the 1950s. The reproduction quality of wire recorders was significantly lower than that achievable with phonograph disk recording technology. There were also practical difficulties, such as the tendency of the wire to become tangled or snarled. Splicing could be performed by knotting together the cut wire ends, but the results were not very satisfactory.
On Christmas Day, 1932 the British Broadcasting Corporation first used a steel tape recorder for their broadcasts. The device used was a Marconi-Stille recorder, [ 13 ] a huge and dangerous machine which used steel tape that had sharp edges. The tape was 0.1 inches (2.5 mm) wide and 0.003 inches (0.076 mm) thick running at 5 feet per second (1.5 m/s) past the recording and reproducing heads. This meant that the length of tape required for a half-hour program was nearly 1.8 miles (2.9 km) and a full reel weighed 55 pounds (25 kg).
Engineers at AEG , working with the chemical giant IG Farben , created the world's first practical magnetic tape recorder, the 'K1', which was first demonstrated in 1935. During World War II , an engineer at the Reichs-Rundfunk-Gesellschaft discovered the AC biasing technique. With this technique, an inaudible high-frequency signal, typically in the range of 50 to 150 kHz, is added to the audio signal before being applied to the recording head. Biasing radically improved the sound quality of magnetic tape recordings. By 1943 AEG had developed stereo tape recorders.
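The biasing operation itself is a simple summation before the recording head; the linearizing effect comes from how tape magnetization responds to the combined signal. The Python sketch below only illustrates the mixing step; the 100 kHz bias sits in the 50-150 kHz range cited above, while the amplitudes and names are invented.

```python
# Toy sketch of AC biasing: an inaudible high-frequency tone is summed with
# the audio signal before the combined current drives the recording head.
import math

BIAS_HZ = 100_000        # within the 50-150 kHz range used in practice
BIAS_LEVEL = 2.0         # bias is typically much stronger than the audio

def head_current(t, audio):
    """Signal applied to the recording head: audio plus the bias tone."""
    return audio(t) + BIAS_LEVEL * math.sin(2 * math.pi * BIAS_HZ * t)

audio = lambda t: 0.5 * math.sin(2 * math.pi * 1_000 * t)  # 1 kHz test tone
for i in range(3):                                         # a few instants
    t = i / 400_000
    print(f"t={t:.6f}s  head current={head_current(t, audio):+.3f}")
```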
During the war, the Allies became aware of radio broadcasts that seemed to be transcriptions (much of this due to the work of Richard H. Ranger ), but their audio quality was indistinguishable from that of a live broadcast and their duration was far longer than was possible with 78 rpm discs. At the end of the war, the Allies captured a number of German Magnetophon recorders from Radio Luxembourg which aroused great interest. These recorders incorporated all of the key technological features of analog magnetic recording, particularly the use of high-frequency bias.
American audio engineer John T. Mullin served in the U.S. Army Signal Corps and was posted to Paris in the final months of World War II. His unit was assigned to find out everything they could about German radio and electronics, including investigating claims that the Germans had been experimenting with high-energy directed radio beams as a means of disabling the electrical systems of aircraft. Mullin's unit soon amassed a collection of hundreds of low-quality magnetic dictating machines, but it was a chance visit to a studio at Bad Nauheim near Frankfurt while investigating radio beam rumors that yielded the real prize.
Mullin was given two suitcase-sized AEG 'Magnetophon' high-fidelity recorders and fifty reels of recording tape. He had them shipped home and over the next two years, he worked on the machines constantly, modifying them and improving their performance. His major aim was to interest Hollywood studios in using magnetic tape for movie soundtrack recording.
Mullin gave two public demonstrations of his machines, and they caused a sensation among American audio professionals—many listeners could not believe that what they were hearing was not a live performance. By luck, Mullin's second demonstration was held at MGM studios in Hollywood and in the audience that day was Bing Crosby's technical director, Murdo Mackenzie. He arranged for Mullin to meet Crosby and in June 1947 he gave Crosby a private demonstration of his magnetic tape recorders.
Crosby was stunned by the amazing sound quality and instantly saw the huge commercial potential of the new machines. Live music was the standard for American radio at the time and the major radio networks did not permit the use of disc recording in many programs because of their comparatively poor sound quality. But Crosby disliked the regimentation of live broadcasts, preferring the relaxed atmosphere of the recording studio . He had asked NBC to let him pre-record his 1944–45 series on transcription discs , but the network refused, so Crosby had withdrawn from live radio for a year, returning for the 1946–47 season only reluctantly.
Mullin's tape recorder came along at precisely the right moment. Crosby realized that the new technology would enable him to pre-record his radio show with a sound quality that equaled live broadcasts and that these tapes could be replayed many times with no appreciable loss of quality. Mullin was asked to tape one show as a test and was immediately hired as Crosby's chief engineer to pre-record the rest of the series.
Crosby became the first major American music star to use tape to pre-record radio broadcasts and the first to master commercial recordings on tape. The taped Crosby radio shows were painstakingly edited through tape-splicing to give them a pace and flow that was wholly unprecedented in radio. Mullin even claimed to have been the first to use canned laughter : at the insistence of Crosby's head writer, Bill Morrow, he inserted a segment of raucous laughter from an earlier show after a joke in a later show that had not worked well.
Keen to make use of the new recorders as soon as possible, Crosby invested $50,000 of his own money into Ampex , and the tiny six-man concern soon became the world leader in the development of tape recording, revolutionizing radio and recording with its famous Ampex Model 200 tape deck, issued in 1948 and developed directly from Mullin's modified Magnetophones.
Development of magnetic tape recorders in the late 1940s and early 1950s is associated with the Brush Development Company and its licensee, Ampex ; the equally important development of magnetic tape media itself was led by Minnesota Mining and Manufacturing corporation (now known as 3M).
The next major development in magnetic tape was multitrack recording , in which the tape is divided into multiple tracks parallel to each other. Because they are carried on the same medium, the tracks stay in perfect synchronization. The first development in multitracking was stereo sound, which divided the recording head into two tracks. First developed by German audio engineers ca. 1943, two-track recording was rapidly adopted for modern music in the 1950s because it enabled signals from two or more microphones to be recorded separately at the same time, allowing stereophonic recordings to be made and edited conveniently; the use of several microphones to record onto a single track had been common since the emergence of the electrical era in the 1920s. (The first stereo recordings, on disks, had been made in the 1930s but were never issued commercially.) Stereo (either true two-microphone stereo or mixed from multiple tracks) quickly became the norm for commercial classical recordings and radio broadcasts, although many pop music and jazz recordings continued to be issued in monophonic sound until the mid-1960s.
Much of the credit for the development of multitrack recording goes to guitarist, composer and technician Les Paul , who also helped design the famous electric guitar that bears his name . His experiments with tapes and recorders in the early 1950s led him to order the first custom-built eight-track recorder from Ampex, and his pioneering recordings with his then-wife, singer Mary Ford , were the first to make use of the technique of multitracking to record separate elements of a musical piece asynchronously — that is, separate elements could be recorded at different times. Paul's technique enabled him to listen to the tracks he had already taped and record new parts in time alongside them.
Multitrack recording was immediately taken up in a limited way by Ampex, which soon produced a commercial 3-track recorder. These proved extremely useful for popular music, since they enabled backing music to be recorded on two tracks (either to allow the overdubbing of separate parts or to create a full stereo backing track) while the third track was reserved for the lead vocalist. Three-track recorders remained in widespread commercial use until the mid-1960s, and many famous pop recordings, including many of Phil Spector 's so-called Wall of Sound productions and early Motown hits, were taped on Ampex 3-track recorders. Engineer Tom Dowd was among the first to use multitrack recording for popular music production while working for Atlantic Records during the 1950s.
The next important development was 4-track recording. The advent of this improved system gave recording engineers and musicians vastly greater flexibility for recording and overdubbing, and 4-track was the studio standard for most of the later 1960s. Many of the most famous recordings by The Beatles and The Rolling Stones were recorded on 4-track, and the engineers at London's Abbey Road Studios became particularly adept at a technique called reduction mixes in the UK and bouncing down in the United States, in which several tracks were recorded onto one 4-track machine and then mixed together and transferred (bounced down) to one track of a second 4-track machine. In this way, it was possible to record literally dozens of separate tracks and combine them into finished recordings of great complexity.
All of the Beatles' classic mid-1960s recordings, including the albums Revolver and Sgt. Pepper's Lonely Hearts Club Band , were recorded in this way. There were limitations, however, because of the build-up of noise during the bouncing-down process, and the Abbey Road engineers are still famed for their ability to create dense multitrack recordings while keeping background noise to a minimum.
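The noise penalty of bouncing can be illustrated with a toy calculation: if each bounce adds an independent layer of hiss, noise power grows linearly with the number of generations. The Python sketch below uses an arbitrary 440 Hz test tone and hiss level as assumptions; real bounces also suffered distortion and frequency-response losses that this model ignores.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000
signal = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # one second of test tone

def bounce(track, hiss=0.01):
    """Copying a track to another machine adds a fresh layer of tape hiss."""
    return track + hiss * rng.standard_normal(track.shape)

def snr_db(clean, noisy):
    """Signal-to-noise ratio of a degraded copy, in decibels."""
    noise = noisy - clean
    return 10 * np.log10(np.mean(clean ** 2) / np.mean(noise ** 2))

track = signal
for generation in range(1, 5):
    track = bounce(track)
    print(f"after bounce {generation}: SNR = {snr_db(signal, track):.1f} dB")
```

Each doubling of the number of bounces costs about 3 dB of signal-to-noise ratio, which is why reduction mixes had to be planned carefully.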
4-track tape also enabled the development of quadraphonic sound, in which the four tracks were used to create a 360-degree surround sound field. A number of albums were released both in stereo and quadraphonic format in the 1970s, but 'quad' failed to gain wide commercial acceptance. Although it is now considered a gimmick, it was the direct precursor of the surround sound technology that has become standard in many modern home theatre systems.
In a professional setting today, such as a studio, audio engineers may use 24 tracks or more for their recordings, using one or more tracks for each instrument played.
The combination of the ability to edit via tape splicing and the ability to record multiple tracks revolutionized studio recording. It became common studio recording practice to record on multiple tracks and bounce down afterward. The convenience of tape editing and multitrack recording led to the rapid adoption of magnetic tape as the primary technology for commercial musical recordings. Although 33⅓ rpm and 45 rpm vinyl records were the dominant consumer format, recordings were customarily made first on tape, then transferred to disc, with Bing Crosby leading the way in the adoption of this method in the United States.
Analog magnetic tape recording introduces noise, usually called tape hiss , caused by the finite size of the magnetic particles in the tape. There is a direct tradeoff between noise and economics. Signal-to-noise ratio is increased at higher speeds and with wider tracks, and decreased at lower speeds and with narrower tracks.
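A common back-of-envelope model, offered here as an illustration rather than a claim from the source, treats hiss as averaging over the magnetic particles passing the head, so that the signal-to-noise ratio scales with track width w and tape speed v:

$$\mathrm{SNR_{dB}} \approx \mathrm{SNR_{ref}} + 10\log_{10}\!\left(\frac{w\,v}{w_{\text{ref}}\,v_{\text{ref}}}\right),$$

meaning that doubling either the width or the speed buys roughly 3 dB, at the cost of tape consumed.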
By the late 1960s, disk reproducing equipment became so good that audiophiles soon became aware that some of the noise audible on recordings was not surface noise or deficiencies in their equipment, but reproduced tape hiss. A few specialist companies started making direct to disc recordings , made by feeding microphone signals directly to a disk cutter (after amplification and mixing), in essence reverting to the pre-War direct method of recording. These recordings never became popular, but they dramatically demonstrated the magnitude and importance of the tape hiss problem.
Before 1963, when Philips introduced the Compact audio cassette , almost all tape recording had used the reel-to-reel format. Previous attempts to package the tape in a convenient cassette that required no threading had met with limited success; the most successful was the 8-track cartridge , used primarily in automobiles for playback only. The Philips Compact audio cassette added much-needed convenience to the tape recording format, and a decade or so later it had begun to dominate the consumer market, although it remained lower in quality than open-reel formats.
In the 1970s, advances in solid-state electronics made the design and marketing of more sophisticated analog circuitry economically feasible. This led to a number of attempts to reduce tape hiss through the use of various forms of volume compression and expansion, the most notable and commercially successful being several systems developed by Dolby Laboratories . These systems divided the frequency spectrum into several bands and applied volume compression/expansion independently to each band (engineers now often use the term compansion to refer to this process). The Dolby systems were very successful at increasing the effective dynamic range and signal-to-noise ratio of analog audio recording; to all intents and purposes, audible tape hiss could be eliminated. The original Dolby A was used only in professional recording. Successors found use in both professional and consumer formats; Dolby B became almost universal for prerecorded music on cassette. Subsequent forms, including Dolby C and the short-lived Dolby S , were developed for home use.
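The principle can be shown with a toy single-band compander in Python. Real Dolby processors split the signal into several frequency bands with carefully matched level-dependent gains, so the simple power law, the 440 Hz tone, and the hiss level below are illustrative assumptions only.

```python
import numpy as np

def compress(x, ratio=2.0):
    """Encode: a power law lifts quiet passages well above the tape's hiss floor."""
    return np.sign(x) * np.abs(x) ** (1.0 / ratio)

def expand(y, ratio=2.0):
    """Decode: the inverse law restores levels, pushing the added hiss down with them."""
    return np.sign(y) * np.abs(y) ** ratio

rng = np.random.default_rng(1)
fs = 48_000
quiet = 0.05 * np.sin(2 * np.pi * 440 * np.arange(fs) / fs)   # a quiet passage

taped = compress(quiet) + 0.01 * rng.standard_normal(fs)      # the tape adds hiss
restored = expand(taped)       # expansion attenuates the hiss added in between
```

Because the hiss enters after compression but before expansion, decoding pushes it below its unprocessed level while restoring the program material.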
In the 1980s, digital recording methods were introduced, and analog tape recording was gradually displaced, although it has not disappeared by any means. (Many professional studios, particularly those catering to big-budget clients, use analog recorders for multitracking and/or mixdown.) Digital audio tape never became important as a consumer recording medium, partially due to legal complications arising from piracy fears on the part of the record companies. They had opposed magnetic tape recording when it first became available to consumers, but the technical difficulty of juggling recording levels, overload distortion, and residual tape hiss was sufficiently high that unlicensed reproduction of magnetic tape never became an insurmountable commercial problem. With digital methods, copies of recordings could be exact, and copyright infringement might have become a serious commercial problem. Digital tape is still used in professional situations, and the DAT variant has found a home in computer data backup applications. Many professional and home recordists now use hard-disk-based systems for recording, burning the final mixes to recordable CDs (CD-Rs).
Most police forces in the United Kingdom (and possibly elsewhere) still use analog compact cassette systems to record police interviews, as the medium is less prone to accusations of tampering. [ 14 ]
The first attempts to record sound to an optical medium occurred around 1900. Prior to the use of recorded sound in film, theatres would have live orchestras present during silent films. The musicians would sit in the pit below the screen and would provide the background music and set the mood for whatever was occurring in the movie. [ 15 ] In 1906, Eugene Augustin Lauste applied for a patent to record sound-on-film , but he was ahead of his time. In 1923, Lee de Forest applied for a patent to record to film; he also made a number of short experimental films, mostly of vaudeville performers. William Fox began releasing sound-on-film newsreels in 1926, the same year that Warner Bros. released Don Juan with music and sound effects recorded on discs, as well as a series of short films with fully synchronized sound on discs. In 1927, the sound film The Jazz Singer was released; while not the first sound film, it was a tremendous hit and made the public and the film industry realize that sound film was more than a mere novelty.
The Jazz Singer used a process called Vitaphone that involved synchronizing the projected film to sound recorded on a disc. It essentially amounted to playing a phonograph record, but one that was recorded with the best electrical technology of the time. Audiences used to acoustic phonographs and recordings would, in the theatre, have heard something resembling 1950s high fidelity .
However, in the days of analog technology, no process involving a separate disk could hold synchronization precisely or reliably. Vitaphone was quickly supplanted by technologies that recorded an optical soundtrack directly onto the side of the strip of motion picture film . This was the dominant technology from the 1930s through the 1960s and remained in use as of 2013, although the analog soundtrack is being replaced by digital sound on film formats .
There are two types of synchronized film soundtrack: optical and magnetic. Optical soundtracks are visual renditions of sound waveforms and provide sound through a light beam and optical sensor within the projector. Magnetic soundtracks are essentially the same as those used in conventional analog tape recording.
Magnetic soundtracks can be joined with the moving image, but splices create an abrupt discontinuity in the sound because the audio track is offset from the corresponding picture. Whether optical or magnetic, the audio pickup must be located several inches ahead of the projection lamp, shutter, and drive sprockets . There is usually a flywheel as well, to smooth out the film's motion and eliminate the flutter that would otherwise result from the intermittent pulldown mechanism. Films with a magnetic track should be kept away from strong magnetic sources, such as televisions, which can weaken or erase the magnetic sound signal.
Two methods are used for optical recording on film. Variable density recording uses changes in the darkness of the soundtrack side of the film to represent the sound wave. Variable area recording uses changes in the width of a dark strip to represent the sound wave.
In both cases, the light sent through the part of the film corresponding to the soundtrack changes in intensity in proportion to the original sound; this light is not projected on the screen but is converted into an electrical signal by a light-sensitive device.
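The two geometries can be compared in a short Python sketch; the 2 mm track width and the sample values are assumptions for illustration. Both encodings modulate the total transmitted light in the same way, which is why the projector's photocell cannot tell them apart.

```python
import numpy as np

samples = np.sin(2 * np.pi * np.linspace(0.0, 4.0, 200))   # audio normalized to [-1, 1]

# Variable density: the darkness of the full-width track follows the waveform.
transmittance = 0.5 + 0.5 * samples        # 0 = fully opaque, 1 = fully clear

# Variable area: a clear region whose width follows the waveform on an opaque track.
track_width_mm = 2.0
clear_width_mm = 0.5 * track_width_mm * (1.0 + samples)

# The photocell signal is proportional to the transmitted light in either case.
photocell = transmittance                  # equals clear_width_mm / track_width_mm
```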
Optical soundtracks are prone to the same sorts of degradation that affect the picture, such as scratching and copying.
Unlike the film image, a rapid succession of discrete frames that creates the illusion of continuity, the soundtrack is continuous. This means that if film with a combined soundtrack is cut and spliced, the image will cut cleanly, but the soundtrack will most likely produce a cracking sound. Fingerprints on the film may also produce crackling or interference.
In the late 1950s, the cinema industry, desperate to provide a theatre experience that would be overwhelmingly superior to television, introduced widescreen processes such as Cinerama , Todd-AO and CinemaScope . These processes at the same time introduced technical improvements in sound, generally involving the use of multitrack magnetic sound , recorded on an oxide stripe laminated onto the film. In subsequent decades, a gradual evolution occurred with more and more theatres installing various forms of magnetic-sound equipment.
In the 1990s, digital audio systems were introduced and began to prevail. In some of them, the sound recording is again recorded on a separate disk, as in Vitaphone; others use a digital, optical sound track on the film itself. Digital processes can now achieve reliable and perfect synchronization.
The first digital audio recorders were reel-to-reel decks introduced by companies such as Denon (1972), Soundstream (1979) and Mitsubishi. They used a digital technology known as PCM recording. Within a few years, however, many studios were using devices that encoded the digital audio data into a standard video signal, which was then recorded on a U-matic or other videotape recorder, using the rotating-head technology that was standard for video. A similar technology was used for a consumer format, known as Digital Audio Tape (DAT), which used rotating heads on a narrow tape contained in a cassette. DAT records at sampling rates of 48 kHz or 44.1 kHz, the latter being the same rate used on compact discs. Bit depth is 16 bits, also the same as on compact discs. DAT was a failure in the consumer-audio field (too expensive, too finicky, and crippled by anti-copying regulations), but it became popular in studios (particularly home studios) and radio stations. Another failed digital tape recording system was the Digital Compact Cassette (DCC).
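At its core, PCM encoding at these specifications rounds each sample of the analog waveform to one of 65,536 (2^16) integer levels at a fixed rate. A minimal Python sketch, with an assumed 1 kHz test tone:

```python
import numpy as np

fs = 48_000                                    # DAT's standard sampling rate
t = np.arange(0, 0.001, 1 / fs)                # one millisecond of signal
analog = 0.8 * np.sin(2 * np.pi * 1_000 * t)   # test tone, normalized to [-1, 1]

# 16-bit PCM: round each sample to one of 2**16 integer levels.
pcm = np.clip(np.round(analog * 32767), -32768, 32767).astype(np.int16)

decoded = pcm / 32767.0                        # playback maps the integers back
```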
Within a few years after the introduction of digital recording, multitrack recorders (using stationary heads) were being produced for use in professional studios. In the early 1990s, relatively affordable multitrack digital recorders were introduced for use in home studios; these returned to recording on videotape. The most notable of this type of recorder is the ADAT . Developed by Alesis and first released in 1991, the ADAT machine is capable of recording 8 tracks of digital audio onto a single S-VHS video cassette. The ADAT machine, followed by the Tascam equivalent, the DA-88, which used a smaller Hi-8 video cassette, was a common fixture in professional and home studios around the world until approximately 2000, when it was supplanted by various interfaces and DAWs ( digital audio workstations ), which allowed a computer's hard drive to serve as the recording medium.
In the consumer market, tapes and gramophone records were largely displaced by the compact disc (CD) and, to a lesser extent, the MiniDisc . These recording media are fully digital and require complex electronics to play back. Digital recording has progressed towards higher fidelity, with formats such as DVD-Audio offering sampling rates of up to 192 kHz.
Digital sound files can be stored on any computer storage medium. The development of the MP3 audio file format and the legal issues involved in copying such files have driven most of the innovation in music distribution since the format's introduction in the late 1990s.
As hard disk capacities and computer CPU speeds increased at the end of the 1990s, hard disk recording became more popular. As of early 2005, hard disk recording takes two forms. One is the use of standard desktop or laptop computers, with adapters for encoding audio into two or many tracks of digital audio. These adapters can either be internal soundcards or external devices, either connecting to internal interface cards or connecting to the computer via USB or FireWire cables. The other common form of hard disk recording uses a dedicated recorder which contains analog-to-digital and digital-to-analog converters as well as one or two removable hard drives for data storage. Such recorders, packing 24 tracks into a few units of rack space, are actually single-purpose computers, which can in turn be connected to standard computers for editing.
Vinyl records, or long-playing (LP) records, have become popular again as a way to consume music despite the rise of digital media. Over 15 thousand units were sold between 2008 and 2012, [ 16 ] and sales in 2012 reached their highest level since 1993. Popular artists have begun releasing their albums on vinyl, and stores such as Urban Outfitters and Whole Foods Market have started selling them. [ 17 ] Major music corporations, such as Sony, have started manufacturing LPs for the first time since 1989 as the medium becomes more popular. However, some companies are facing production problems, as only 16 record-pressing plants are currently functioning in the United States. [ 18 ]
The analog tape recorder made it possible to erase or record over a previous recording so that mistakes could be fixed. Another advantage of recording on tape is the ability to cut the tape and join it back together, which allows the recording to be edited: pieces of the recording can be removed or rearranged. See also audio editing , audio mixing , multitrack recording .
The advent of electronic instruments (especially keyboards and synthesizers ), effects units, and other instruments has increased the importance of MIDI in recording. For example, using MIDI timecode , it is possible to have different equipment 'trigger' without direct human intervention at the time of recording.
In more recent times, computers ( digital audio workstations ) have found an increasing role in the recording studio , as their use eases the tasks of cutting and looping , as well as allowing for instantaneous changes, such as the duplication of parts, the addition of effects, and the rearranging of parts of the recording.
The scientific study of speciation — how species evolve to become new species — began around the time of Charles Darwin in the middle of the 19th century. Many naturalists at the time recognized the relationship between biogeography (the way species are distributed) and the evolution of species. The 20th century saw the growth of the field of speciation, with major contributors such as Ernst Mayr researching and documenting species' geographic patterns and relationships. The field grew in prominence with the modern evolutionary synthesis in the early part of that century. Since then, research on speciation has expanded immensely.
The language of speciation has grown more complex. Debates over classification schemes for the mechanisms of speciation and reproductive isolation continue. The 21st century has seen a resurgence in the study of speciation, with new techniques such as molecular phylogenetics and systematics . Speciation has largely been divided into discrete modes that correspond to the rates of gene flow between two incipient populations. Current research has driven the development of alternative schemes and the discovery of new processes of speciation.
Charles Darwin introduced the idea that species could evolve and split into separate lineages, referring to it as specification in his 1859 book On the Origin of Species . [ 2 ] It was not until 1906 that the modern term speciation was coined by the biologist Orator F. Cook . [ 2 ] [ 3 ] Darwin, in his 1859 publication, focused primarily on the changes that can occur within a species, and less on how species may divide into two. [ 4 ] : 1 It is almost universally accepted that Darwin's book did not directly address its title. [ 1 ] Darwin instead saw speciation as occurring by species entering new ecological niches . [ 4 ] : 125
Controversy exists as to whether Charles Darwin recognized a truly geography-based model of speciation in his publication On the Origin of Species . [ 5 ] In chapter 11, "Geographical Distribution", Darwin discusses geographic barriers to migration, stating for example that "barriers of any kind, or obstacles to free migration, are related in a close and important manner to the differences between the productions of various regions [of the world]". [ 6 ] F. J. Sulloway contends that Darwin's position on speciation was "misleading" at the least [ 7 ] and may have later misinformed Moritz Wagner and David Starr Jordan into believing that Darwin viewed sympatric speciation as the most important mode of speciation. [ 4 ] : 83 Nevertheless, Darwin never fully accepted Wagner's concept of geographical speciation. [ 5 ]
The evolutionary biologist James Mallet maintains that the oft-repeated claim that Darwin's Origin of Species never actually discussed speciation is specious. [ 1 ] The claim began with Thomas Henry Huxley and George Romanes (contemporaries of Darwin's), who declared that Darwin failed to explain the origins of inviability and sterility in hybrids. [ 1 ] [ 8 ] Similar claims were promulgated by the mutationist school of thought during the early 20th century, and even after the modern evolutionary synthesis by Richard Goldschmidt . [ 1 ] [ 8 ] Another strong proponent of this view was Mayr. [ 1 ] [ 8 ] Mayr maintained that Darwin was unable to address the problem of speciation, as he did not define species using the biological species concept. [ 9 ] However, Mayr's view has not been entirely accepted, as Darwin's transmutation notebooks contain writings concerning the role of isolation in the splitting of species. [ 9 ] Furthermore, many of Darwin's ideas on speciation largely match the modern theories of both adaptive radiation and ecological speciation . [ 5 ]
In addressing the question of the origin of species, there are two key issues: (1) what are the evolutionary mechanisms of speciation, and (2) what accounts for the separateness and individuality of species in the biota? Since Charles Darwin's time, efforts to understand the nature of species have primarily focused on the first aspect, and it is now widely agreed that a critical factor behind the origin of new species is reproductive isolation. [ 10 ] Darwin also considered the second aspect of the origin of species.
Darwin was perplexed by the clustering of organisms into species. [ 11 ] Chapter 6 of Darwin's book is entitled "Difficulties of the Theory." In discussing these "difficulties" he noted "Firstly, why, if species have descended from other species by insensibly fine gradations, do we not everywhere see innumerable transitional forms? Why is not all nature in confusion instead of the species being, as we see them, well defined?" This dilemma can be referred to as the absence or rarity of transitional varieties in habitat space. [ 12 ]
Another dilemma, [ 13 ] related to the first one, is the absence or rarity of transitional varieties in time. Darwin pointed out that by the theory of natural selection "innumerable transitional forms must have existed," and wondered "why do we not find them embedded in countless numbers in the crust of the earth." That clearly defined species actually do exist in nature in both space and time implies that some fundamental feature of natural selection operates to generate and maintain species. [ 11 ]
A possible explanation for how these dilemmas can be resolved is discussed in the article Speciation in the section "Effect of sexual reproduction on species formation."
Recognition of the geographic factors involved in species populations was present even before Darwin, with many naturalists aware of the role of isolation in species relationships. [ 14 ] : 482 In 1833, C. L. Gloger published The Variation of Birds Under the Influence of Climate, in which he described geographic variations but did not recognize that geographic isolation was an indicator of past speciation events. [ 14 ] : 482 In 1856, another naturalist, Wollaston , studied island beetles in comparison to mainland species. [ 14 ] : 482 He saw isolation as key to their differentiation. [ 14 ] : 482 However, he did not recognize that the pattern was due to speciation. [ 14 ] : 483 One naturalist, Leopold von Buch (1825), did recognize the geographic patterns and explicitly stated that geographic isolation may lead to species separating into new species. [ 14 ] : 483 Mayr suggests that von Buch was likely the first naturalist to truly suggest geographic speciation. [ 15 ] Other naturalists, such as Henry Walter Bates (1863), recognized and accepted the patterns as evidence of speciation but, in Bates's case, did not propose a coherent model. [ 14 ] : 484
In 1868, Moritz Wagner was the first to propose the concept of geographic speciation [ 16 ] [ 14 ] : 484 in which he used the term Separationstheorie . [ 5 ] Edward Bagnall Poulton , the evolutionary biologist and a strong proponent of the importance of natural selection, highlighted the role of geographic isolation in promoting speciation, [ 17 ] in the process coining the term "sympatric speciation" in 1904. [ 18 ] [ 19 ]
Wagner and other naturalists who studied the geographic distributions of animals, such as Karl Jordan and David Starr Jordan , noticed that closely related species were often geographically isolated from one another (allopatrically distributed), which led them to advocate the importance of geographic isolation in the origin of species. [ 4 ] : 2 Karl Jordan is thought to have recognized the unification of mutation and isolation in the origin of new species, in stark contrast to the prevailing views at the time. [ 14 ] : 486 David Starr Jordan reiterated Wagner's proposal in 1905, providing a wealth of evidence from nature to support the theory [ 16 ] [ 20 ] [ 4 ] : 2 and asserting that geographic isolation was obvious but had unfortunately been ignored by most geneticists and experimental evolutionary biologists at the time. [ 14 ] : 487 Joel Asaph Allen suggested that the observed pattern of geographic separation of closely related species be called " Jordan's Law " (or Wagner's Law). [ 14 ] : 487 Despite some contention, most taxonomists accepted the geographic model of speciation. [ 14 ] : 487
Many of the early terms used to describe speciation were outlined by Ernst Mayr. [ 21 ] He was the first to encapsulate the then-contemporary literature in his 1942 publication Systematics and the Origin of Species, from the Viewpoint of a Zoologist and in his subsequent 1963 publication Animal Species and Evolution . Like Jordan's works, they relied on direct observations of nature, documenting the occurrence of geographic speciation. [ 4 ] : 86 He described three modes: geographic, semi-geographic, and non-geographic, which today are referred to as allopatric, parapatric, and sympatric respectively. [ 21 ] Mayr's 1942 publication, influenced heavily by the ideas of Karl Jordan and Poulton, was regarded as the authoritative review of speciation for over 20 years and is still valuable today. [ 19 ]
A major focus of Mayr's works was the importance of geography in facilitating speciation, with islands often acting as a central theme in many of the speciation concepts put forth. [ 22 ] One of these was the concept of peripatric speciation , a variant of allopatric speciation [ 23 ] [ 24 ] (he has since distinguished the two modes by referring to them as peripatric and dichopatric [ 25 ] ). This concept arose from an interpretation of Wagner's Separationstheorie as a form of founder-effect speciation focused on small, geographically isolated populations. [ 5 ] The model was later expanded and modified to incorporate sexual selection by Kenneth Y. Kaneshiro in 1976 and 1980. [ 26 ] [ 27 ] [ 28 ]
Many geneticists at the time did little to bridge the gap between the genetics of natural selection and the origin of reproductive barriers between species. [ 4 ] : 3 Ronald Fisher proposed a model of speciation in his 1930 publication The Genetical Theory of Natural Selection , where he described disruptive selection acting on sympatric or parapatric populations — with reproductive isolation completed by reinforcement. [ 29 ] Other geneticists such as J. B. S. Haldane did not even recognize that species were real, while Sewall Wright ignored the topic, despite accepting allopatric speciation. [ 4 ] : 3
The primary contributors to the incorporation of speciation into the modern evolutionary synthesis were Ernst Mayr and Theodosius Dobzhansky . [ 29 ] Dobzhansky, a geneticist, published Genetics and the Origin of Species in 1937, in which he formulated the genetic framework for how speciation could occur. [ 4 ] : 2 He recognized that speciation was an unsolved problem in biology at the time, rejecting Darwin's position that new species arose by the occupation of new niches and contending that reproductive isolation was instead based on barriers to gene flow. [ 4 ] : 2 Subsequently, Mayr conducted extensive work on the geography of species, emphasizing the importance of geographic separation and isolation, filling the gaps Dobzhansky had left concerning the origin of biodiversity (in his 1942 book). [ 30 ] Both of their works gave rise, not without controversy, to the modern understanding of speciation, stimulating a wealth of research on the topic. [ 4 ] : 3 Furthermore, this work extended to plants as well as animals, with G. Ledyard Stebbins's book Variation and Evolution in Plants and, much later, Verne Grant's 1981 book Plant Speciation .
At a 1947 Princeton University conference, "a consensus had been achieved among geneticists, paleontologists and systematists" and "evolutionary biology as an independent biological discipline had been established". [ 32 ] This 20th-century synthesis incorporated speciation. Since then, its ideas have been consistently and repeatedly confirmed. [ 30 ]
After the synthesis, speciation research continued largely within natural history and biogeography — with much less emphasis on genetics. [ 4 ] : 4 The study of speciation has seen its largest increase since the 1980s [ 4 ] : 4 with an influx of publications and a host of new terms, methods, concepts, and theories. [ 21 ] This "third phase" of work — as Jerry A. Coyne and H. Allen Orr put it — has led to a growing complexity of the language used to describe the many processes of speciation. [ 21 ] The research and literature on speciation have become, "enormous, scattered, and increasingly technical". [ 4 ] : 1
From the 1980s, new research tools increased the robustness of research, [ 4 ] : 4 assisted by new methods, theoretical frameworks, models, and approaches. [ 21 ] Coyne and Orr discuss the modern, post-1980s developments as centering on five major themes, several of which are taken up below.
Ecologists became aware that the ecological factors behind speciation were under-represented in research. This led to the growth of research on ecology's role in facilitating speciation, aptly designated ecological speciation . [ 4 ] : 4 This focus on ecology generated a host of new terms relating to barriers to reproduction [ 21 ] (e.g. allochronic speciation , in which gene flow is reduced or removed by the timing of breeding periods, or habitat isolation, in which species occupy different habitats within the same area). Sympatric speciation , regarded by Mayr as unlikely, has become widely accepted. [ 33 ] [ 34 ] [ 35 ] Research on the influence of natural selection on speciation, including the process of reinforcement , has grown. [ 36 ]
Researchers have long debated the roles of sexual selection , natural selection, and genetic drift in speciation. [ 4 ] : 383 Darwin extensively discussed sexual selection, with his work greatly expanded on by Ronald Fisher; however, it was not until 1983 that the biologist Mary Jane West-Eberhard recognized the importance of sexual selection in speciation. [ 37 ] [ 4 ] : 3 Natural selection plays a role in that any selection towards reproductive isolation can result in speciation — whether indirectly or directly. Genetic drift has been widely researched from the 1950s onwards, especially with peak-shift models of speciation by genetic drift. [ 4 ] : 388 Mayr championed founder effects , in which isolated individuals, like those found on islands near a mainland, experience a strong population bottleneck, as they contain only a small sample of the genetic variation in the main population. [ 4 ] : 390 [ 38 ] Later, other biologists such as Hampton L. Carson , Alan Templeton , Sergey Gavrilets , and Alan Hastings developed related models of speciation by genetic drift, noting that islands were inhabited mostly by endemic species. [ 39 ] Selection's role in speciation is widely supported, whereas founder effect speciation is not, [ 4 ] : 410 having been subject to a number of criticisms. [ 40 ]
Throughout the history of research concerning speciation, classification and delineation of modes and processes have been debated. Julian Huxley divided speciation into three separate modes: geographical speciation, genetic speciation, and ecological speciation. [ 14 ] : 427 Sewall Wright proposed ten different, varying modes. [ 14 ] : 427 Ernst Mayr championed the importance of physical, geographic separation of species populations, maintaining it to be of major importance to speciation. He originally proposed the three primary modes known today: geographic, semi-geographic, non-geographic; [ 21 ] corresponding to allopatric, parapatric, and sympatric respectively.
The phrase "modes of speciation" is imprecisely defined, most often indicating speciation occurring as a result of a species' geographic distribution. [ 41 ] More succinctly, the modern classification of speciation is often described as occurring on a gene flow continuum (i.e., allopatry at m = 0 and sympatry at m = 0.5). [ 42 ] [ 43 ] This gene flow concept views speciation as based on the exchange of genes between populations instead of seeing a purely geographic setting as necessarily relevant. Despite this, concepts of biogeographic modes can be translated into models of gene flow; however, this translation has led to some confusion of language in the scientific literature. [ 21 ]
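The endpoints of this continuum have a simple population-genetic reading. In the standard two-population migration model (a textbook formulation, not taken from the source), the allele frequency p in one population after a generation of migration at rate m is

$$p_{t+1} = (1 - m)\,p_t + m\,q_t,$$

where q_t is the frequency in the other population. At m = 0 the populations evolve independently (allopatry); at m = 0.5 an individual's parents are equally likely to come from either population, so the two form a single randomly mating pool (sympatry).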
As research has expanded over the decades, the geographic scheme has been challenged. The traditional classification is considered by some researchers to be obsolete, [ 44 ] while others argue for its merits. Proponents of non-geographic schemes often justify non-geographic classifications not by rejecting the importance of reproductive isolation (or even of the processes themselves), but by the fact that they simplify the complexity of speciation. [ 45 ] One major critique of the geographic framework is that it arbitrarily separates a biological continuum into discontinuous groups. [ 45 ] Another criticism rests with the fact that, when speciation is viewed as a continuum of gene flow, parapatric speciation becomes unreasonably represented by the entire continuum, [ 46 ] with allopatric and sympatric speciation existing at the extremes. [ 45 ] Coyne and Orr argue that the geographic classification scheme is valuable in that biogeography controls the strength of the evolutionary forces at play, as gene flow and geography are clearly linked. [ 44 ] James Mallet and colleagues contend that the sympatric vs. allopatric dichotomy is valuable in determining the degree to which natural selection acts on speciation. [ 47 ] Kirkpatrick and Ravigné categorize speciation in terms of its genetic basis or by the forces driving reproductive isolation. [ 4 ] : 85 Here, the geographic modes of speciation are classified as types of assortative mating. [ 48 ] Fitzpatrick and colleagues believe that the biogeographic scheme "is a distraction that could be positively misleading if the real goal is to understand the influence of natural selection on divergence." [ 44 ] They maintain that, to fully understand speciation, "the spatial, ecological, and genetic factors" involved in divergence must be explored. [ 44 ] Sara Via recognizes the importance of geography in speciation but suggests that classification under this scheme be abandoned. [ 34 ]
Sympatric speciation , from its beginnings with Darwin (who did not coin the term), has been a contentious issue. [ 41 ] [ 4 ] : 125 Mayr, along with many other evolutionary biologists, interpreted Darwin's view of speciation and the origin of biodiversity as arising by species entering new ecological niches , a form of sympatric speciation. [ 1 ] Before Mayr, sympatric speciation was regarded as the primary mode of speciation. In 1963, Mayr provided a strong criticism, citing various flaws in the theory. [ 4 ] : 126 After that, sympatric speciation fell out of favor with biologists and has only recently seen a resurgence in interest. [ 4 ] : 126 Some biologists, such as James Mallet, believe that Darwin's view on speciation was misunderstood and misconstrued by Mayr. [ 1 ] [ 49 ] Today, sympatric speciation is supported by evidence from laboratory experiments and observations from nature. [ 4 ] : 127 [ 33 ]
For most of the history of speciation research, hybridization (polyploidy) has been a contentious issue, as botanists and zoologists have traditionally viewed hybridization's role in speciation differently. [ 21 ] Carl Linnaeus was the earliest to suggest, in 1760, that new species could arise by hybridization; [ 50 ] Øjvind Winge was the first to confirm allopolyploidy, in 1917, [ 50 ] [ 51 ] and a later experiment conducted by Clausen and Goodspeed in 1925 confirmed the findings. [ 50 ] Today hybridization is widely recognized as a common mechanism of speciation. [ 52 ]
Historically, zoologists considered hybridization to be a rare phenomenon, while botanists found it to be commonplace in plant species. [ 21 ] G. Ledyard Stebbins and Verne Grant were two of the best-known botanists who championed the idea of hybrid speciation from the 1950s to the 1980s. [ 21 ] Hybrid speciation, also called polyploid speciation (or polyploidy), is speciation that results from an increase in the number of sets of chromosomes. [ 4 ] : 321 It is effectively a form of sympatric speciation that happens instantly. [ 4 ] : 322 Grant coined the term recombinational speciation in 1981 for a special form of hybrid speciation in which a new species results from hybridization and is itself reproductively isolated from both of its parents. [ 4 ] : 337 More recently, biologists have increasingly recognized that hybrid speciation can occur in animals as well. [ 53 ]
The concept of speciation by reinforcement has a complex history, its popularity among scholars changing significantly over time. [ 36 ] [ 4 ] : 353 The theory of reinforcement passed through three phases of historical development. [ 4 ] : 366
It was originally proposed by Alfred Russel Wallace in 1889 [ 4 ] : 353 and termed the Wallace effect, a term rarely used by scientists today. [ 54 ] Wallace's hypothesis differed from the modern conception in that it focused on post-zygotic isolation, strengthened by group selection . [ 4 ] : 353 [ 55 ] [ 56 ] Dobzhansky was the first to provide a thorough, modern description of the process in 1937, [ 4 ] : 353 though the actual term itself was not coined until 1955, by W. Frank Blair . [ 57 ]
In 1930, Ronald Fisher laid out the first genetic description of the process of reinforcement in The Genetical Theory of Natural Selection , and in 1965 and 1970 the first computer simulations were run to test its plausibility. [ 4 ] : 366 Later, population genetic [ 58 ] and quantitative genetic [ 59 ] studies were conducted showing that completely unfit hybrids lead to an increase in pre-zygotic isolation. [ 4 ] : 368 After Dobzhansky's idea rose to the forefront of speciation research, it garnered significant support, with Dobzhansky suggesting that it illustrated the final step in speciation (e.g. after an allopatric population comes into secondary contact ). [ 4 ] : 353 In the 1980s, many evolutionary biologists began to doubt the plausibility of the idea, [ 4 ] : 353 based not on empirical evidence but largely on the growth of theory that deemed it an unlikely mechanism of reproductive isolation. [ 60 ] A number of theoretical objections arose at the time. Since the early 1990s, reinforcement has seen a revival in popularity, with evolutionary biologists increasingly accepting its plausibility, owing primarily to a sudden increase in data: empirical evidence from laboratory studies and from nature, complex computer simulations, and theoretical work. [ 4 ] : 372–375
The scientific language concerning reinforcement has also differed over time, with different researchers applying various definitions to the term. [ 54 ] First used to describe the observed mating call differences in Gastrophryne frogs within a secondary contact hybrid zone, [ 54 ] reinforcement has also been used to describe geographically separated populations that experience secondary contact. [ 61 ] Roger Butlin demarcated incomplete post-zygotic isolation from complete isolation, referring to incomplete isolation as reinforcement and to completely isolated populations as experiencing reproductive character displacement . [ 62 ] Daniel J. Howard considered reproductive character displacement to represent either assortative mating or the divergence of traits for mate recognition (specifically between sympatric populations). [ 54 ] Under this definition, it includes pre-zygotic divergence and complete post-zygotic isolation. [ 63 ] Maria R. Servedio and Mohamed Noor consider any detected increase in pre-zygotic isolation to be reinforcement, as long as it is a response to selection against mating between two different species. [ 64 ] Coyne and Orr contend that, "true reinforcement is restricted to cases in which isolation is enhanced between taxa that can still exchange genes". [ 4 ] : 354
Modern spectroscopy in the Western world started in the 17th century. New designs in optics , specifically prisms , enabled systematic observations of the solar spectrum . Isaac Newton first applied the word spectrum to describe the rainbow of colors that combine to form white light. During the early 1800s, Joseph von Fraunhofer conducted experiments with dispersive spectrometers that enabled spectroscopy to become a more precise and quantitative scientific technique. Since then, spectroscopy has played and continues to play a significant role in chemistry , physics and astronomy . Fraunhofer observed and measured dark lines in the Sun's spectrum , [ 1 ] which now bear his name, although several of them had been observed earlier by Wollaston . [ 2 ]
The Romans were already familiar with the ability of a prism to generate a rainbow of colors. [ 3 ] [ 4 ] Newton is traditionally regarded as the founder of spectroscopy, but he was not the first scientist who studied and reported on the solar spectrum. The works of Athanasius Kircher (1646), Jan Marek Marci (1648), Robert Boyle (1664), and Francesco Maria Grimaldi (1665) predate Newton's optics experiments (1666–1672). [ 5 ] Newton published his experiments and theoretical explanations of dispersion of light in his Opticks . His experiments demonstrated that white light could be split up into component colors by means of a prism and that these components could be recombined to generate white light. He demonstrated that the prism is not imparting or creating the colors but rather separating constituent parts of the white light. [ 6 ] Newton's corpuscular theory of light was gradually succeeded by the wave theory . It was not until the 19th century that the quantitative measurement of dispersed light was recognized and standardized. As with many subsequent spectroscopy experiments, Newton's sources of white light included flames and stars , including the Sun . Subsequent studies of the nature of light include those of Hooke , [ 7 ] Huygens , [ 8 ] and Young . [ 9 ] [ 10 ] Subsequent experiments with prisms provided the first indications that spectra were associated uniquely with chemical constituents. Scientists observed the emission of distinct patterns of colour when salts were added to alcohol flames. [ 11 ] [ 12 ]
In 1802, William Hyde Wollaston built a spectrometer, improving on Newton's model, that included a lens to focus the Sun's spectrum on a screen. [ 2 ] Using it, Wollaston realized that the colors were not spread uniformly, but instead had missing patches of color, which appeared as dark bands in the Sun's spectrum. [ 13 ] At the time, Wollaston believed these lines to be natural boundaries between the colors, [ 14 ] but this hypothesis was ruled out in 1815 by Fraunhofer's work. [ 15 ]
Joseph von Fraunhofer made a significant experimental leap forward by replacing a prism with a diffraction grating as the source of wavelength dispersion . Fraunhofer built off the theories of light interference developed by Thomas Young , François Arago and Augustin-Jean Fresnel . He conducted his own experiments to demonstrate the effect of passing light through a single rectangular slit, two slits, and so forth, eventually developing a means of closely spacing thousands of slits to form a diffraction grating. The interference achieved by a diffraction grating both improves the spectral resolution over a prism and allows for the dispersed wavelengths to be quantified. Fraunhofer's establishment of a quantified wavelength scale paved the way for matching spectra observed in multiple laboratories, from multiple sources (flames and the sun) and with different instruments. Fraunhofer made and published systematic observations of the solar spectrum, and the dark bands he observed and specified the wavelengths of are still known as Fraunhofer lines . [ 16 ]
Throughout the early 1800s, a number of scientists pushed the techniques and understanding of spectroscopy forward. [ 13 ] [ 17 ] In the 1820s, both John Herschel and William H. F. Talbot made systematic observations of salts using flame spectroscopy . [ 18 ] [ 19 ] [ 20 ]
In 1835, Charles Wheatstone reported that different metals could be easily distinguished by the different bright lines in the emission spectra of their sparks , thereby introducing an alternative to flame spectroscopy. [ 21 ] [ 22 ] In 1849, J. B. L. Foucault experimentally demonstrated that absorption and emission lines appearing at the same wavelength are both due to the same material, with the difference between the two originating from the temperature of the light source. [ 23 ] [ 24 ] In 1853, the Swedish physicist Anders Jonas Ångström presented observations and theories about gas spectra in his work Optiska Undersökningar (Optical investigations) to the Royal Swedish Academy of Sciences . [ 25 ] Ångström postulated that an incandescent gas emits luminous rays of the same wavelength as those it can absorb. Ångström was unaware of Foucault's experimental results. At the same time, George Stokes and William Thomson (Kelvin) were discussing similar postulates. [ 23 ] Ångström also measured the emission spectrum of hydrogen, later labeled the Balmer lines . [ 26 ] [ 27 ] In 1854 and 1855, David Alter published observations on the spectra of metals and gases, including an independent observation of the Balmer lines of hydrogen. [ 28 ] [ 29 ]
The systematic attribution of spectra to chemical elements began in the 1860s with the work of German physicists Robert Bunsen and Gustav Kirchhoff , [ 30 ] who found that Fraunhofer lines correspond to emission spectral lines observed in laboratory light sources. This laid the way for spectrochemical analysis in laboratory and astrophysical science. Bunsen and Kirchhoff applied the optical techniques of Fraunhofer, Bunsen's improved flame source and a highly systematic experimental procedure to a detailed examination of the spectra of chemical compounds. They established the linkage between chemical elements and their unique spectral patterns. In the process, they established the technique of analytical spectroscopy. In 1860, they published their findings on the spectra of eight elements and identified these elements' presence in several natural compounds. [ 31 ] [ 32 ] They demonstrated that spectroscopy could be used for trace chemical analysis, and several of the chemical elements they discovered were previously unknown. Kirchhoff and Bunsen also definitively established the link between absorption and emission lines, including attributing solar absorption lines to particular elements based on their corresponding spectra. [ 33 ] Kirchhoff went on to contribute fundamental research on the nature of spectral absorption and emission, including what is now known as Kirchhoff's law of thermal radiation . Kirchhoff's applications of this law to spectroscopy are captured in three laws of spectroscopy : a hot, dense object produces a continuous spectrum; a hot, diffuse gas produces bright spectral lines at discrete wavelengths; and a cool, diffuse gas in front of a source of continuous spectrum produces dark absorption lines at those same wavelengths.
In the 1860s the husband-and-wife team of William and Margaret Huggins used spectroscopy to determine that the stars were composed of the same elements as found on Earth. They also applied the non-relativistic Doppler shift ( redshift ) equation to the spectrum of the star Sirius in 1868 to determine its radial velocity. [ 34 ] [ 35 ] They were the first to take a spectrum of a planetary nebula , when the Cat's Eye Nebula (NGC 6543) was analyzed. [ 36 ] [ 37 ] Using spectral techniques, they were able to distinguish nebulae from stars.
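The underlying relation is the non-relativistic Doppler formula: a spectral line of rest wavelength λ₀ observed displaced by Δλ implies a line-of-sight velocity

$$v_r = c\,\frac{\Delta\lambda}{\lambda_0},$$

with a shift toward longer wavelengths (redshift) indicating recession.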
August Beer observed a relationship between light absorption and concentration [ 38 ] and created the color comparator, which was later replaced by a more accurate device, the spectrophotometer . [ 39 ]
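In its modern form this relationship is the Beer–Lambert law, which makes the measured absorbance A linear in concentration c and path length l:

$$A = \log_{10}\frac{I_0}{I} = \varepsilon\,c\,l,$$

where I₀ and I are the incident and transmitted intensities and ε is the molar absorption coefficient; a concentration can therefore be read off a measured absorbance.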
In the 19th century, new developments such as the invention of photography, Rowland's [ 40 ] invention of the concave diffraction grating , and Schumann's [ 41 ] work on the vacuum ultraviolet (fluorite for prisms and lenses, low-gelatin photographic plates, and the absorption of UV in air below 185 nm ) made the advance to shorter wavelengths very rapid.
In 1871, Stoney suggested using a wavenumber scale for spectra, and Hartley [ 42 ] followed up, finding constant wavenumber differences in the triplets of zinc. [ 43 ] : I:375 Liveing and Dewar [ 44 ] observed that alkali spectra appeared to form series, and Alfred Cornu found similar structure in the spectra of thallium and aluminum, setting the stage for Balmer [ 45 ] to discover a relation connecting the wavelengths in the visible hydrogen spectrum. [ 43 ] : 376 In 1890, Kayser and Runge organized the series reported by Liveing and Dewar using names like 'principal', 'diffuse', and 'sharp'. Rydberg [ 46 ] gave a formula for the wavenumbers of all spectral series of all the alkalis and of hydrogen. [ 43 ] : 376
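Rydberg's relation expresses the wavenumber of every line in these series in terms of two integers; for hydrogen,

$$\tilde{\nu} = \frac{1}{\lambda} = R_{\mathrm{H}}\left(\frac{1}{n_1^{2}} - \frac{1}{n_2^{2}}\right), \qquad n_2 > n_1,$$

with R_H ≈ 1.097 × 10^7 m^−1. The Balmer series of the visible hydrogen spectrum is the special case n₁ = 2.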
In 1895, the German physicist Wilhelm Conrad Röntgen discovered and extensively studied X-rays , which were later used in X-ray spectroscopy . One year later, in 1896, French physicist Antoine Henri Becquerel discovered radioactivity, and Dutch physicist Pieter Zeeman observed spectral lines being split by a magnetic field. [ 47 ] [ 13 ]
In 1897, the theoretical physicist Joseph Larmor explained the splitting of spectral lines in a magnetic field by the oscillation of electrons. [ 48 ] [ 49 ]
Larmor also created the first solar-system model of the atom, in 1897. He postulated the proton as well, calling it a "positive electron", and said that the destruction of this type of atom making up matter "is an occurrence of infinitely small probability". [ 50 ]
The first decade of the 20th century brought the basics of quantum theory ( Planck , Einstein ) [ 51 ] [ 52 ] and interpretation of spectral series of hydrogen by Lyman [ 53 ] in VUV and by Paschen [ 54 ] in infrared . Ritz [ 55 ] formulated the combination principle .
John William Nicholson had created an atomic model in 1912, a year before Niels Bohr , that was both nuclear and quantum, in which he showed that electron oscillations in his atom matched the solar and nebular spectral lines. [ 56 ] Bohr had been working on his own model during this period, but it had only a single ground state and no spectra until he incorporated the Nicholson model and referenced the Nicholson papers in his model of the atom. [ 56 ] [ 57 ] [ 58 ]
In 1913, Bohr [ 59 ] formulated his quantum mechanical model of the atom. This stimulated empirical term analysis. [ 60 ] : 83 Bohr published a theory of hydrogen-like atoms that could explain the observed wavelengths of spectral lines as due to electrons transitioning between energy states. In 1937, "E. Lehrer created the first fully-automated spectrometer" to help more accurately measure spectral lines. [ 61 ] With the development of more advanced instruments, such as photo-detectors, scientists were able to measure the wavelength-specific absorption of substances more accurately. [ 39 ]
Between 1920 and 1930, the fundamental concepts of quantum mechanics were developed by Pauli , [ 62 ] Heisenberg , [ 63 ] Schrödinger , [ 64 ] and Dirac . [ 65 ] Understanding of spin and the exclusion principle made it possible to conceive how the electron shells of atoms are filled with increasing atomic number .
This branch of spectroscopy deals with radiation related to atoms that are stripped of several electrons (multiply ionized atoms (MIA), multiply charged ions, highly charged ions ). These are observed in very hot plasmas (laboratory or astrophysical) or in accelerator experiments ( beam-foil , electron beam ion trap (EBIT)). The lowest excited electron shells of such ions decay into stable ground states, producing photons in the VUV , EUV and soft X-ray spectral regions (so-called resonance transitions).
Further progress in the study of atomic structure was tightly connected with the advance to shorter wavelengths in the EUV region. Millikan , [ 66 ] Sawyer , [ 67 ] and Bowen [ 68 ] used electric discharges in vacuum to observe emission spectral lines down to 13 nm, which they ascribed to stripped atoms. In 1927, Osgood [ 69 ] and Hoag [ 70 ] reported on grazing-incidence concave grating spectrographs and photographed lines down to 4.4 nm (the K α line of carbon). Dauvillier [ 71 ] used a fatty-acid crystal with a large lattice spacing to extend soft X-ray spectra up to 12.1 nm, and the gap was closed. In the same period, Manne Siegbahn constructed a very sophisticated grazing-incidence spectrograph that enabled Ericson and Edlén [ 72 ] to obtain high-quality spectra of vacuum sparks and to reliably identify lines of multiply ionized atoms up to O VI, with five electrons stripped. Grotrian [ 73 ] developed his graphic presentation of the energy structure of atoms. Russell and Saunders [ 74 ] proposed their coupling scheme for the spin-orbit interaction and their generally recognized notation for spectral terms .
Theoretical quantum-mechanical calculations became accurate enough to describe the energy structure of some simple electronic configurations. The results of these theoretical developments were summarized by Condon and Shortley [ 75 ] in 1935.
Edlén thoroughly analyzed the spectra of MIA for many chemical elements and derived regularities in the energy structures of MIA for many isoelectronic sequences (ions with the same number of electrons but different nuclear charges). Spectra of rather high ionization stages (e.g. Cu XIX) were observed.
The most exciting event came in 1942, when Edlén [ 76 ] proved the identification of some solar coronal lines on the basis of his precise analyses of MIA spectra. This implied that the solar corona has a temperature of a million degrees, and it strongly advanced the understanding of solar and stellar physics.
After World War II, experiments on balloons and rockets were started to observe the VUV radiation of the Sun (see X-ray astronomy ). Research intensified from 1960 onward, including spectrometers on satellites.
In the same period, the laboratory spectroscopy of MIA became relevant as a diagnostic tool for the hot plasmas of thermonuclear devices (see Nuclear fusion ), beginning with the construction of the stellarator by Spitzer in 1951 and continuing with tokamaks , z-pinches and laser-produced plasmas. [ 77 ] [ 78 ] Progress in ion accelerators stimulated beam-foil spectroscopy as a means to measure the lifetimes of excited states of MIA. [ 79 ] A wide variety of data on highly excited energy levels, autoionization and inner-core ionization states was obtained.
Simultaneously, theoretical and computational approaches provided the data necessary for the identification of new spectra and the interpretation of observed line intensities. [ 80 ] The new laboratory and theoretical data became very useful for spectral observation in space. [ 81 ] There was a real surge of work on MIA in the USA, England, France, Italy, Israel, Sweden, Russia and other countries. [ 82 ] [ 83 ]
A new page in the spectroscopy of MIA may be dated to 1986, with the development of EBIT (Levine and Marrs, LLNL ), enabled by a favorable combination of modern high technologies such as cryogenics , ultra-high vacuum , superconducting magnets , powerful electron beams and semiconductor detectors . EBIT sources were very quickly created in many countries (see the NIST summary [ 84 ] for many details, as well as reviews). [ 85 ] [ 86 ]
EBIT enables a wide field of spectroscopic research, including the achievement of the highest grades of ionization (U 92+ ), wavelength measurements, the hyperfine structure of energy levels, quantum electrodynamic studies, measurements of ionization cross-sections (CS), electron-impact excitation CS, X-ray polarization , relative line intensities, dielectronic recombination CS, magnetic octupole decay, lifetimes of forbidden transitions , charge-exchange recombination, etc.
Many early scientists who studied the IR spectra of compounds had to develop and build their own instruments to record their measurements, making it very difficult to obtain accurate results. During World War II , the U.S. government contracted different companies to develop a method for the polymerization of butadiene to create rubber , which required analysis of C4 hydrocarbon isomers. The contracted companies started developing optical instruments and eventually created the first infrared spectrometers. With the development of these commercial spectrometers, infrared spectroscopy became a more popular method to determine the "fingerprint" of a molecule. [ 39 ] Raman spectroscopy was first observed in 1928 by Sir Chandrasekhara Venkata Raman in liquid substances, and also by "Grigory Landsberg and Leonid Mandelstam in crystals". [ 61 ] Raman spectroscopy is based on the observation of the Raman effect , defined as "The intensity of the scattered light is dependent on the amount of the polarization potential change". [ 61 ] The Raman spectrum records light intensity vs. light frequency (wavenumber), and the wavenumber shift is characteristic of each individual compound. [ 61 ]
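Since the wavenumber shift is what fingerprints a compound, a minimal sketch of the conversion may help; the laser and Stokes-line wavelengths below are hypothetical values chosen only for illustration:

```python
# Raman shift in wavenumbers (cm^-1) from incident and scattered wavelengths (nm).
def raman_shift_cm1(lambda_in_nm, lambda_scat_nm):
    # Wavenumber is 1/lambda; the shift is the difference, converted nm^-1 -> cm^-1.
    return (1.0 / lambda_in_nm - 1.0 / lambda_scat_nm) * 1e7

# Hypothetical example: 532 nm laser, Stokes line observed at 563.5 nm.
print(round(raman_shift_cm1(532.0, 563.5), 1), "cm^-1")  # ~1050.8 cm^-1
```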
Laser spectroscopy is a spectroscopic technique that uses lasers to determine the emitted frequencies of matter. [ 87 ] The laser was invented because spectroscopists took the concept of its predecessor, the maser , and applied it to the visible and infrared ranges of light. [ 87 ] The maser was invented by Charles Townes and other spectroscopists to stimulate matter so as to determine the radiative frequencies that specific atoms and molecules emit. [ 87 ] While working on the maser, Townes realized that more accurate detections were possible as the frequency of the emitted microwave increased. [ 87 ] This led, a few years later, to the idea of using the visible and eventually the infrared ranges of light for spectroscopy, which became a reality with the help of Arthur Schawlow . [ 87 ] Since then, lasers have gone on to significantly advance experimental spectroscopy. Laser light allowed for much higher precision experiments, specifically in studying the collisional effects of light and in accurately detecting specific wavelengths and frequencies of light, allowing for the invention of devices such as laser atomic clocks. Lasers also made time-based spectroscopy more accurate by using the speeds or decay times of photons at specific wavelengths and frequencies to keep time. [ 88 ] Laser spectroscopic techniques have been used for many different applications. One example is using laser spectroscopy to detect compounds in materials. One such method, laser-induced fluorescence spectroscopy, uses spectroscopic methods to detect what materials are present in a solid, liquid, or gas in situ . This allows direct testing of materials, instead of having to take a sample to a lab to determine what the solid, liquid, or gas is made of. [ 89 ] | https://en.wikipedia.org/wiki/History_of_spectroscopy
The idea that matter consists of smaller particles and that there exists a limited number of sorts of primary, smallest particles in nature has existed in natural philosophy at least since the 6th century BC. Such ideas gained physical credibility beginning in the 19th century, but the concept of the "elementary particle" underwent some changes in its meaning: notably, modern physics no longer deems elementary particles indestructible. Even elementary particles can decay or collide destructively; they can cease to exist and create other particles as a result.
Increasingly small particles have been discovered and researched: they include molecules , which are constructed of atoms , which in turn consist of subatomic particles , namely atomic nuclei and electrons . Many more types of subatomic particles have been found. Most such particles (but not electrons) were eventually found to be composed of even smaller particles such as quarks . Particle physics studies these smallest particles; nuclear physics studies atomic nuclei and their (immediate) constituents: protons and neutrons .
The idea that all matter is composed of elementary particles dates back as far as the 6th century BCE. [ 1 ] The Jains in ancient India were the earliest to advocate the particulate nature of material objects, between the 9th and 5th centuries BCE. According to Jain leaders like Parshvanatha and Mahavira , the ajiva (the non-living part of the universe) consists of matter or pudgala , of definite or indefinite shape, which is made up of tiny, uncountable and invisible particles called permanu . Each permanu occupies one space-point and has a definite colour, smell, taste and texture. Infinite varieties of permanu unite and form pudgala . [ 2 ] The philosophical doctrine of atomism and the nature of elementary particles were also studied by ancient Greek philosophers such as Leucippus , Democritus , and Epicurus ; ancient Indian philosophers such as Kanada , Dignāga , and Dharmakirti ; Muslim scientists such as Ibn al-Haytham , Ibn Sina , and Mohammad al-Ghazali ; and in early modern Europe by physicists such as Pierre Gassendi , Robert Boyle , and Isaac Newton . The particle theory of light was also proposed by Ibn al-Haytham, Ibn Sina, Gassendi, and Newton.
Those early ideas were founded on abstract, philosophical reasoning rather than experimentation and empirical observation, and they represented only one line of thought among many. In contrast, certain ideas of Gottfried Wilhelm Leibniz (see Monadology ) contradict almost everything known in modern physics.
In the 19th century, John Dalton , through his work on stoichiometry , concluded that each chemical element was composed of a single, unique type of particle. Dalton and his contemporaries believed those were the fundamental particles of nature and thus named them atoms, after the Greek word atomos , meaning "indivisible" [ 3 ] or "uncut".
However, near the end of the 19th century, physicists discovered that Dalton's atoms are not, in fact, the fundamental particles of nature, but conglomerates of even smaller particles.
Throughout the 1800s scientists explored many phenomena of electricity and magnetism, culminating in an accurate theory by James Clerk Maxwell . [ 4 ] This theory was a continuous field model developed around the ideas of luminiferous aether . When no experiment could produce evidence of such an ether, and in view of the growing evidence supporting the atomic model, Hendrik Antoon Lorentz developed a theory of electromagnetism based on "ions" that reproduced Maxwell's model. [ 5 ] : 77
The electron was discovered between 1879 and 1897 in the works of William Crookes , Arthur Schuster , J. J. Thomson , and other physicists; its charge was carefully measured by Robert Andrews Millikan and Harvey Fletcher in their oil drop experiment of 1909. Physicists theorized that negatively charged electrons are a constituent part of " atoms ", along with some (yet unknown) positively charged substance, and this was later confirmed. The electron thus became the first elementary, truly fundamental particle to be discovered.
Studies of radioactivity, which soon revealed the phenomenon of radioactive decay , provided another argument against considering chemical elements as nature's fundamental elements. Despite these discoveries, the term atom stuck to Dalton's (chemical) atoms and now denotes the smallest particle of a chemical element, not something truly indivisible.
Early 20th-century physicists knew only two fundamental forces : electromagnetism and gravitation , and the latter could not explain the structure of atoms. So it was natural to assume that the unknown positively charged substance attracts electrons by the Coulomb force .
In 1909, Ernest Rutherford and Thomas Royds demonstrated that an alpha particle combines with two electrons and forms a helium atom. In modern terms, alpha particles are doubly ionized helium (more precisely, 4 He ) atoms. Speculation about the structure of atoms was severely constrained by the 1909 gold foil experiment of Geiger and Marsden, conducted under Rutherford's direction, which showed that the atom is mainly empty space, with almost all its mass concentrated in a tiny atomic nucleus .
By 1914, experiments by Ernest Rutherford, Henry Moseley , James Franck and Gustav Hertz had largely established the structure of an atom as a dense nucleus of positive charge surrounded by lower-mass electrons. [ 6 ] These discoveries shed light on the nature of radioactive decay and other forms of transmutation of elements, as well as on the nature of the elements themselves. It turned out that the atomic number is nothing other than the (positive) electric charge of the atomic nucleus of a particular atom. Chemical transformations, governed by electromagnetic interactions , do not change nuclei, which is why elements are chemically indestructible. But when a nucleus changes its charge and/or mass (by emitting or capturing a particle ), the atom can become one of another element. Special relativity explained how the mass defect is related to the energy produced or consumed in reactions. The branch of physics that studies transformations and the structure of nuclei is now called nuclear physics , in contrast to atomic physics , which studies the structure and properties of atoms while ignoring most nuclear aspects. Developments in the nascent quantum physics , such as the Bohr model , led to the understanding of chemistry in terms of the arrangement of electrons in the mostly empty volume of atoms.
In 1918, Rutherford confirmed that the hydrogen nucleus was a particle with a positive charge, which he named the proton . By then, Frederick Soddy 's research on radioactive elements, and the experiments of J. J. Thomson and F. W. Aston , had conclusively demonstrated the existence of isotopes , whose nuclei have different masses in spite of identical atomic numbers. This prompted Rutherford to conjecture that all nuclei other than hydrogen contain chargeless particles, which he named the neutron .
Evidence grew that atomic nuclei consist of smaller particles (now called nucleons ); it became obvious that, while protons repel each other electrostatically , nucleons attract each other by some new force (the nuclear force ). This culminated in the demonstration of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn ), and the explanation of nuclear fusion by Hans Bethe in that same year. Those discoveries gave rise to an active industry of generating one atom from another, even rendering possible (although probably never profitable) the transmutation of lead into gold ; and those same discoveries also led to the development of nuclear weapons .
Further understanding of atomic and nuclear structures became impossible without improving the knowledge about the essence of particles. Experiments and improved theories (such as Erwin Schrödinger 's "electron waves") gradually revealed that there is no fundamental difference between particles and waves . For example, electromagnetic waves were reformulated in terms of particles called photons . It was also revealed that physical objects do not change their parameters, such as total energy , position and momentum , as continuous functions of time , as was thought in classical physics: see atomic electron transition for an example.
Another crucial discovery was that of identical particles or, more generally, quantum particle statistics . It was established that all electrons are identical: two or more electrons can exist simultaneously with different parameters, but they do not keep separate, distinguishable histories. This also applies to protons, neutrons, and (with certain differences) to photons as well. It suggested that there is a limited number of sorts of smallest particles in the universe .
The spin–statistics theorem established that any particle in our spacetime may be either a boson (meaning its statistics is Bose–Einstein ) or a fermion (meaning its statistics is Fermi–Dirac ). It was later found that all fundamental bosons transmit forces, like the photon that transmits light. Some non-fundamental bosons (namely, mesons ) may also transmit forces (see below ), although non-fundamental ones. Fermions are particles "like electrons and nucleons" and generally comprise matter. Note that any subatomic or atomic particle composed of an even total number of fermions (such as protons, neutrons, and electrons) is a boson, so a boson is not necessarily a force transmitter and can perfectly well be an ordinary material particle.
Spin is the quantity that distinguishes bosons and fermions. Practically, it appears as an intrinsic angular momentum of a particle, which is unrelated to its motion but is linked to some other features such as a magnetic dipole . Theoretically, it is explained by different types of representations of symmetry groups , namely tensor representations (including vectors and scalars) for bosons, with their integer (in ħ ) spins, and spinor representations for fermions, with their half-integer spins.
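In formulas, the statistics is expressed by the symmetry of the many-particle wavefunction under exchange of two identical particles (a standard textbook statement, added here for clarity):

$$\psi(x_2, x_1) = \pm\,\psi(x_1, x_2), \qquad + \text{ for bosons (integer spin)}, \quad - \text{ for fermions (half-integer spin)}.$$

The minus sign for fermions already contains the exclusion principle: setting x_1 = x_2 forces the wavefunction to vanish, so two identical fermions cannot occupy the same state.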
Improved understanding of the world of particles prompted physicists to make bold predictions, such as Dirac 's positron in 1928 (founded on the Dirac Sea model) and Pauli 's neutrino in 1930 (founded on conservation of energy and angular momentum in beta decay ). Both were later confirmed.
This culminated in the formulation of the ideas of quantum field theory . The first (and the only mathematically complete) of these theories, quantum electrodynamics , made it possible to explain thoroughly the structure of atoms, including the Periodic Table and atomic spectra . Ideas of quantum mechanics and quantum field theory were applied to nuclear physics too. For example, α decay was explained as quantum tunneling through the nuclear potential, the fermionic statistics of nucleons explained nucleon pairing , and Hideki Yukawa proposed certain virtual particles (now known as π-mesons ) as an explanation of the nuclear force.
The development of nuclear models (such as the liquid-drop model and the nuclear shell model ) made the prediction of properties of nuclides possible. No existing model of the nucleon–nucleon interaction can analytically compute anything more complex than 4 He from the principles of quantum mechanics, though (note that the complete computation of electron shells in atoms is likewise still impossible).
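As an illustration of what the liquid-drop model does make possible, the sketch below evaluates the semi-empirical (Bethe–Weizsäcker) binding-energy formula; the coefficients are one common textbook fit, quoted only for illustration:

```python
# Semi-empirical mass formula (liquid-drop model); energies in MeV.
def binding_energy(Z, A):
    a_v, a_s, a_c, a_a, a_p = 15.8, 18.3, 0.714, 23.2, 12.0  # one textbook fit
    N = A - Z
    if Z % 2 == 0 and N % 2 == 0:
        pairing = a_p / A**0.5       # even-even nuclei are extra bound
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -a_p / A**0.5      # odd-odd nuclei are less bound
    else:
        pairing = 0.0
    return (a_v * A - a_s * A**(2 / 3) - a_c * Z * (Z - 1) / A**(1 / 3)
            - a_a * (N - Z)**2 / A + pairing)

# Iron-56 lies near the maximum of binding energy per nucleon (~8.8 MeV):
print(round(binding_energy(26, 56) / 56, 2), "MeV per nucleon")
```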
The most developed branch of nuclear physics in the 1940s was the study of nuclear fission , due to its military significance. The main focus of fission-related problems is the interaction of atomic nuclei with neutrons: the process that occurs in a fission bomb and in a nuclear fission reactor . It gradually drifted away from the rest of subatomic physics and virtually became nuclear engineering . The first synthesised transuranium elements were also obtained in this context, through neutron capture and subsequent β − decay .
The elements beyond fermium cannot be produced in this way. To make a nuclide with more than 100 protons per nucleus, one has to use the tools and methods of particle physics (see details below), namely to accelerate and collide atomic nuclei. The production of progressively heavier synthetic elements continued into the 21st century as a branch of nuclear physics, but only for scientific purposes.
The third important stream in nuclear physics is research related to nuclear fusion . This is related to thermonuclear weapons (and conceived peaceful thermonuclear energy ), as well as to astrophysical research, such as stellar nucleosynthesis and Big Bang nucleosynthesis .
In the 1950s, with the development of particle accelerators and studies of cosmic rays , inelastic scattering experiments on protons (and other atomic nuclei) with energies of hundreds of MeV became feasible. They created some short-lived resonance "particles" , but also hyperons and K-mesons with unusually long lifetimes. The cause of the latter was found in a new quasi- conserved quantity, named strangeness , which is conserved in all circumstances except for the weak interaction . The strangeness of heavy particles and the μ-lepton were the first two signs of what is now known as the second generation of fundamental particles.
The weak interaction soon revealed yet another mystery. In 1957, Chien-Shiung Wu proved that it does not conserve parity . In other words, mirror symmetry was disproved as a fundamental symmetry law .
Throughout the 1950s and 1960s, improvements in particle accelerators and particle detectors led to a bewildering variety of particles found in high-energy experiments. The term elementary particle came to refer to dozens of particles, most of them unstable . It prompted Wolfgang Pauli's remark: "Had I foreseen this, I would have gone into botany". The entire collection was nicknamed the " particle zoo ". It became evident that some smaller constituents, as yet invisible, form the mesons and baryons that accounted for most of the then-known particles.
The interaction of these particles by scattering and decay provided a key to new fundamental quantum theories. Murray Gell-Mann and Yuval Ne'eman brought some order to mesons and baryons, the most numerous classes of particles, by classifying them according to certain qualities. It began with what Gell-Mann referred to as the " Eightfold Way " and proceeded to several different "octets" and "decuplets" which could predict new particles, most famously the Ω − , which was detected at Brookhaven National Laboratory in 1964, and which gave rise to the quark model of hadron composition. While the quark model at first seemed inadequate to describe strong nuclear forces , allowing the temporary rise of competing theories such as S-matrix theory , the establishment of quantum chromodynamics in the 1970s finalized a set of fundamental and exchange particles ( Kragh 1999 ). It postulated the fundamental strong interaction , experienced by quarks and mediated by gluons . These particles were proposed as the building material for hadrons (see hadronization ). This theory is unusual because individual (free) quarks cannot be observed (see color confinement ), unlike the situation with composite atoms, where electrons and nuclei can be isolated by transferring ionization energy to the atom.
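The bookkeeping behind the quark model is simple enough to state in a few lines; the following sketch uses the standard charge assignments for the three light quarks (added purely for illustration):

```python
# Electric charge of a hadron from its quark content, in units of e.
from fractions import Fraction

CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3), "s": Fraction(-1, 3)}

def hadron_charge(quark_content):
    return sum(CHARGE[q] for q in quark_content)

print(hadron_charge("uud"))  # proton: 1
print(hadron_charge("udd"))  # neutron: 0
print(hadron_charge("sss"))  # Omega-minus: -1, the particle found at Brookhaven in 1964
```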
The old, broad denotation of the term elementary particle was then deprecated, and the replacement term subatomic particle covered all of the "zoo", with its hyponym " hadron " referring to composite particles directly explained by the quark model. The designation of an "elementary" (or "fundamental") particle was reserved for leptons , quarks, their antiparticles , and the quanta of fundamental interactions (see below) only.
Because quantum field theory (see above ) postulates no difference between particles and interactions , the classification of elementary particles also made it possible to classify interactions and fields .
Now a large number of particles and (non-fundamental) interactions is explained as combinations of a (relatively) small number of fundamental substances, thought to be fundamental interactions (embodied in fundamental bosons ), quarks (including antiparticles), and leptons (including antiparticles). As the theory distinguished several fundamental interactions, it became possible to see which elementary particles participate in which interaction: quarks participate in the strong, electromagnetic and weak interactions; charged leptons in the electromagnetic and weak interactions; and neutrinos in the weak interaction only.
The next step was a reduction in the number of fundamental interactions, envisaged by early 20th-century physicists as the " unified field theory ". The first successful modern unified theory was the electroweak theory , developed by Abdus Salam , Steven Weinberg and, subsequently, Sheldon Glashow . This development culminated in the completion of the theory called the Standard Model in the 1970s, which also included the strong interaction, thus covering three fundamental forces. After the discovery, made at CERN , of the existence of neutral weak currents , [ 7 ] [ 8 ] [ 9 ] [ 10 ] mediated by the Z boson foreseen in the Standard Model, the physicists Salam, Glashow and Weinberg received the 1979 Nobel Prize in Physics for their electroweak theory. [ 11 ] The discovery of the weak gauge bosons (the quanta of the weak interaction ) through the 1980s, and the verification of their properties through the 1990s, is considered an age of consolidation in particle physics.
While accelerators have confirmed most aspects of the Standard Model by detecting expected particle interactions at various collision energies, no theory reconciling general relativity with the Standard Model has yet been found, although supersymmetry and string theory were believed by many theorists to be a promising avenue forward. The Large Hadron Collider , however, which began operating in 2008, has failed to find any evidence supporting supersymmetry or string theory, [ 12 ] and appears unlikely to do so, meaning "the current situation in fundamental theory is one of a serious lack of any new ideas at all." [ 13 ] This state of affairs should not be viewed as a crisis in physics, but rather, as David Gross has said, "the kind of acceptable scientific confusion that discovery eventually transcends." [ 14 ]
Gravitation , the fourth fundamental interaction, is not yet integrated into particle physics in a consistent way.
As of 2011, the Higgs boson , the quantum of a field that is thought to provide particles with rest masses , remained the only particle of the Standard Model yet to be verified.
On July 4, 2012, physicists working at CERN's Large Hadron Collider announced that they had discovered a new subatomic particle greatly resembling the Higgs boson, a potential key to an understanding of why elementary particles have masses and indeed to the existence of diversity and life in the universe. [ 15 ] Rolf-Dieter Heuer , the director general of CERN, said that it was too soon to know for sure whether it is an entirely new massive particle – one of the heaviest subatomic particles yet – or, indeed, the elusive particle predicted by the Standard Model , the theory that has ruled physics for the last half-century. [ 15 ] It is unknown if this particle is an impostor, a single particle or even the first of many particles yet to be discovered. The latter possibilities are particularly exciting to physicists since they could point the way to new deeper ideas, beyond the Standard Model , about the nature of reality. For now, some physicists are calling it a "Higgslike" particle. [ 15 ] Joe Incandela , of the University of California, Santa Barbara , said, "It's something that may, in the end, be one of the biggest observations of any new phenomena in our field in the last 30 or 40 years, going way back to the discovery of quarks, for example." [ 15 ] The groups operating the large detectors in the collider said that the likelihood that their signal was a result of a chance fluctuation was less than one chance in 3.5 million, so-called "five sigma," which is the gold standard in physics for a discovery. Michael Turner , a cosmologist at the University of Chicago and the chairman of the physics center board, said
This is a big moment for particle physics and a crossroads — will this be the high water mark or will it be the first of many discoveries that point us toward solving the really big questions that we have posed?
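As an aside, the "one chance in 3.5 million" quoted above is simply the one-sided tail probability of a five-standard-deviation Gaussian fluctuation; a quick numerical check (added for illustration):

```python
# One-sided Gaussian tail probability for a 5-sigma excess.
from math import erfc, sqrt

p = 0.5 * erfc(5 / sqrt(2))
print(p)             # ~2.87e-7
print(round(1 / p))  # ~3.5 million, i.e. "one chance in 3.5 million"
```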
Confirmation of the Higgs boson or something very much like it would constitute a rendezvous with destiny for a generation of physicists who have believed the boson existed for half a century without ever seeing it. Further, it affirms a grand view of a universe ruled by simple, elegant and symmetrical laws, in which everything interesting is a result of flaws or breaks in that symmetry. [ 15 ] According to the Standard Model, the Higgs boson is the only visible and particular manifestation of an invisible force field that permeates space and imbues elementary particles that would otherwise be massless with mass. Without this Higgs field, or something like it, physicists say all the elementary forms of matter would zoom around at the speed of light; there would be neither atoms nor life. The Higgs boson achieved a notoriety rare for abstract physics. [ 15 ] To the eternal dismay of his colleagues, Leon Lederman, the former director of Fermilab , called it the "God particle" in his book of the same name, later quipping that he had wanted to call it "the goddamn particle". [ 15 ] Professor Incandela also stated,
This boson is a very profound thing we have found. We're reaching into the fabric of the universe at a level we've never done before. We've kind of completed one particle's story [...] We're on the frontier now, on the edge of a new exploration. This could be the only part of the story that's left, or we could open a whole new realm of discovery.
Dr. Peter Higgs was one of six physicists, working in three independent groups, who in 1964 invented the notion of the cosmic molasses, or Higgs field. The others were Tom Kibble of Imperial College, London ; Carl Hagen of the University of Rochester ; Gerald Guralnik of Brown University ; and François Englert and Robert Brout , both of Université Libre de Bruxelles . [ 15 ] One implication of their theory was that this Higgs field would produce its own quantum particle if hit hard enough with the right amount of energy. The particle would be fragile and would fall apart within a millionth of a second in a dozen different ways, depending upon its own mass. Unfortunately, the theory did not predict the particle's mass, making it difficult to find. The particle eluded researchers at a succession of particle accelerators. [ 15 ]
Further experiments continued, and in March 2013 it was tentatively confirmed that the newly discovered particle was a Higgs boson.
Although they have never been seen, Higgslike fields play an important role in theories of the universe and in string theory. Under certain conditions, according to the strange accounting of Einsteinian physics, they can become suffused with energy that exerts an antigravitational force. Such fields have been proposed as the source of an enormous burst of expansion, known as inflation, early in the universe and, possibly, as the secret of the dark energy that now seems to be speeding up the expansion of the universe. [ 15 ] | https://en.wikipedia.org/wiki/History_of_subatomic_physics |
Superconductivity is the phenomenon of certain materials exhibiting zero electrical resistance and the expulsion of magnetic fields below a characteristic temperature . The history of superconductivity began with Dutch physicist Heike Kamerlingh Onnes 's discovery of superconductivity in mercury in 1911. Since then, many other superconducting materials have been discovered and the theory of superconductivity has been developed. These subjects remain active areas of study in the field of condensed matter physics .
The study of superconductivity has a fascinating history, with several breakthroughs having dramatically accelerated publication and patenting activity in this field, as described in detail below. Throughout its 100+ year history, the number of non-patent publications per year about superconductivity has been a factor of 10 larger than the number of patent families, which is characteristic of a technology that has not achieved substantial commercial success (see Technological applications of superconductivity ).
James Dewar initiated research into electrical resistance at low temperatures. Dewar and John Ambrose Fleming predicted that at absolute zero , pure metals would become perfect electromagnetic conductors (though Dewar later altered his opinion on the disappearance of resistance, believing that there would always be some resistance). Walther Hermann Nernst developed the third law of thermodynamics and stated that absolute zero was unattainable. Carl von Linde and William Hampson , both commercial researchers, filed for patents on the Joule–Thomson effect for the liquefaction of gases at nearly the same time. Linde's patent was the climax of 20 years of systematic investigation of established facts, using a regenerative counterflow method. Hampson's design also used a regenerative method. The combined process became known as the Hampson–Linde liquefaction process .
Onnes purchased a Linde machine for his research. On March 21, 1900, Nikola Tesla was granted a patent for a means of increasing the intensity of electrical oscillations by lowering the temperature, the effect being attributed to lowered resistance. The patent describes the increased intensity and duration of the electric oscillations of a low-temperature resonating circuit. It is believed that Tesla intended Linde's machine to be used to attain the cooling agents.
A milestone was achieved on July 10, 1908, when Heike Kamerlingh Onnes at Leiden University in the Netherlands produced, for the first time, liquefied helium , which has a boiling point of 4.2 K (−269 °C) at atmospheric pressure.
Heike Kamerlingh Onnes and Jacob Clay reinvestigated Dewar's earlier experiments on the reduction of resistance at low temperatures. Onnes began the investigations with platinum and gold , replacing these later with mercury (a more readily refinable material). Onnes's research into the resistivity of solid mercury at cryogenic temperatures was accomplished by using liquid helium as a refrigerant. On April 8, 1911, at 16:00 hours, Onnes noted "Kwik nagenoeg nul", which translates as "[Resistance of] mercury almost zero." [ 4 ] At a temperature of 4.19 K, he observed that the resistivity abruptly disappeared (the measuring device Onnes was using did not indicate any resistance). Onnes disclosed his research in 1911, in a paper titled " On the Sudden Rate at Which the Resistance of Mercury Disappears ". Onnes stated in that paper that the "specific resistance" became thousands of times smaller relative to the best conductor at ordinary temperature. Onnes later reversed the process and found that at 4.2 K, the resistance returned to the material. The next year, Onnes published more articles about the phenomenon. Initially, Onnes called the phenomenon " supraconductivity " (1913) and only later adopted the term " superconductivity ". For his research, he was awarded the Nobel Prize in Physics in 1913.
In 1912, Onnes conducted an experiment on the usability of superconductivity. He introduced an electric current into a superconductive ring and removed the battery that generated it. Upon measuring the electric current, Onnes found that its intensity did not diminish with time. [ 5 ] The current persisted due to the superconductive state of the conductive medium.
In subsequent decades, superconductivity was found in several other materials: lead at 7 K in 1913, niobium at 10 K in the 1930s, and niobium nitride at 16 K in 1941.
The next important step in understanding superconductivity occurred in 1933, when Walther Meissner and Robert Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon that has come to be known as the Meissner effect . In 1935, brothers Fritz London and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
In 1937, Lev Shubnikov discovered a new type of superconductor (later called type-II superconductors ), which presented a mixed phase between ordinary and superconductive properties.
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Lev Landau and Vitaly Ginzburg . The Ginzburg–Landau theory, which combined Landau's theory of second-order phase transitions with a Schrödinger -like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Alexei Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as type I and type II superconductivity. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize in Physics for their work (Landau had died in 1968). Also in 1950, Emanuel Maxwell and, almost simultaneously, C. A. Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element . This important discovery pointed to the electron-phonon interaction as the microscopic mechanism responsible for superconductivity.
On the experimental side, collaborations of Bernd T. Matthias in the 1950s with John Kenneth Hulm and Theodore H. Geballe led to the discovery of hundreds of low-temperature superconductors, using a technique based on the Meissner effect. Drawing on this experience, Matthias formulated Matthias' rules in 1954, a set of empirical guidelines on how to find such superconductors. [ 6 ]
The complete microscopic theory of superconductivity was finally proposed in 1957 by John Bardeen , Leon N. Cooper , and Robert Schrieffer . This BCS theory explained the superconducting current as a superfluid of Cooper pairs , pairs of electrons interacting through the exchange of phonons . For this work, the authors were awarded the Nobel Prize in Physics in 1972. The BCS theory was set on a firmer footing in 1958, when Nikolay Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian . In 1959, Lev Gor'kov showed that the BCS theory reduces to the Ginzburg–Landau theory close to the critical temperature. Gor'kov was the first to derive the superconducting phase evolution equation $2eV = \hbar \, \partial\phi/\partial t$.
The Little–Parks effect was discovered in 1962 in experiments with empty and thin-walled superconducting cylinders subjected to a parallel magnetic field . The electrical resistance of such cylinders shows a periodic oscillation with the magnetic flux through the cylinder, the period being h /2 e = 2.07×10 −15 V·s. The explanation provided by William Little and Ronald Parks is that the resistance oscillation reflects a more fundamental phenomenon, namely a periodic oscillation of the superconducting critical temperature ( T c ), the temperature at which the sample becomes superconducting. The Little–Parks effect is a result of the collective quantum behavior of superconducting electrons. It reflects the general fact that it is the fluxoid rather than the flux that is quantized in superconductors. The Little–Parks effect demonstrates that the vector potential couples to an observable physical quantity, namely the superconducting critical temperature.
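The oscillation period quoted above is the superconducting flux quantum Φ0 = h/2e, where the factor of two reflects the pairing of electrons; a quick numerical check using the exact SI values of the constants:

```python
# Superconducting magnetic flux quantum, Phi_0 = h / (2e).
h = 6.62607015e-34   # Planck constant, J*s (exact in SI since 2019)
e = 1.602176634e-19  # elementary charge, C (exact in SI since 2019)

phi_0 = h / (2 * e)
print(phi_0)  # ~2.068e-15 Wb (1 Wb = 1 V*s), the Little-Parks period
```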
Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings, but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, George Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. [ 7 ] Then, in 1961, J. E. Kunzler , E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that at 4.2 kelvin, a compound consisting of three parts niobium and one part tin was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 teslas. [ 8 ] Despite being brittle and difficult to fabricate, niobium-tin has since proved extremely useful in supermagnets generating magnetic fields as high as 20 teslas. In 1962, Ted Berlincourt and Richard Hake discovered that less brittle alloys of niobium and titanium are suitable for applications up to 10 teslas. [ 9 ] [ 10 ] Promptly thereafter, commercial production of niobium-titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation . Although niobium-titanium boasts less impressive superconducting properties than niobium-tin, it has nevertheless become the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. Both niobium-tin and niobium-titanium find wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy particle accelerators, and a host of other applications. Conectus, a European consortium for superconductivity, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.
In 1962, Brian Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect , is exploited by superconducting devices such as SQUIDs . It is used in the most accurate available measurements of the magnetic flux quantum h /2 e , and thus (coupled with the quantum Hall resistivity ) of the Planck constant h . Josephson was awarded the Nobel Prize in Physics for this work in 1973.
In 1973, Nb 3 Ge was found to have a T c of 23 K, which remained the highest ambient-pressure T c until the discovery of the cuprate high-temperature superconductors in 1986 (see below).
In 1979, two new classes of superconductors were discovered that could not be explained by BCS theory: heavy fermion superconductors and organic superconductors . [ 11 ]
The first heavy fermion superconductor, CeCu 2 Si 2 , was discovered by Frank Steglich . [ 12 ] Since then, over 30 heavy fermion superconductors have been found (in materials based on Ce and U), with critical temperatures up to 2.3 K (in CeCoIn 5 ). [ 13 ]
Klaus Bechgaard and Denis Jérome synthesized the first organic superconductor, (TMTSF) 2 PF 6 (the corresponding material class was later named after Bechgaard), with a transition temperature of T C = 0.9 K at an external pressure of 11 kbar. [ 14 ]
In 1986, J. Georg Bednorz and K. Alex Müller discovered superconductivity in a lanthanum -based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987) and was the first of the high-temperature superconductors . It was soon found (by Ching-Wu Chu ) that replacing the lanthanum with yttrium , i.e. making YBCO , raised the critical temperature to 92 K, which was important because liquid nitrogen could then be used as a refrigerant (at atmospheric pressure, the boiling point of nitrogen is 77 K). This is important commercially because liquid nitrogen can be produced cheaply on-site with no raw materials, and is not prone to some of the problems (solid air plugs, etc.) of helium in piping. Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed-matter physics .
In March 2001, superconductivity was found in magnesium diboride ( MgB 2 ), with T c = 39 K.
In 2008, the oxypnictide or iron-based superconductors were discovered, which led to a flurry of work in the hope that studying them would provide a theory of the cuprate superconductors.
In 2013, room-temperature superconductivity was attained in YBCO for picoseconds, using short pulses of infrared laser light to deform the material's crystal structure. [ 15 ]
In 2017, it was suggested that undiscovered superhard materials (e.g. critically doped beta-titanium–gold alloys) might be candidates for new superconductors with T c substantially higher than that of HgBaCuO (138 K), possibly up to 233 K, which would be higher even than that of H 2 S. Research also suggests that nickel could replace copper in some perovskites, offering another route to room temperature. Li + -doped materials can also be used, e.g. the spinel battery material LiTi 2 O x , where lattice pressure can raise T c to over 13.8 K. LiH x has also been theorized to metallise at a substantially lower pressure than H and could be a candidate for a type I superconductor. [ 16 ] [ 17 ] [ 18 ] [ 19 ]
| https://en.wikipedia.org/wiki/History_of_superconductivity
The history of synthetic-aperture radar begins in 1951, with the invention of the technology by mathematician Carl A. Wiley , and its development in the following decade. Initially developed for military use, the technology has since been applied in the field of planetary science .
Carl A. Wiley , [ 1 ] a mathematician at Goodyear Aircraft Company in Litchfield Park, Arizona , invented synthetic-aperture radar in June 1951 while working on a correlation guidance system for the Atlas ICBM program. [ 2 ] In early 1952, Wiley, together with Fred Heisley and Bill Welty, constructed a concept validation system known as DOUSER (" Doppler Unbeamed Search Radar"). During the 1950s and 1960s, Goodyear Aircraft (later Goodyear Aerospace) introduced numerous advancements in SAR technology, many with help from Don Beckerleg. [ 3 ]
Independently of Wiley's work, experimental trials in early 1952 by Sherwin and others at the University of Illinois ' Control Systems Laboratory showed results that they pointed out "could provide the basis for radar systems with greatly improved angular resolution" and might even lead to systems capable of focusing at all ranges simultaneously. [ 4 ]
In both of those programs, processing of the radar returns was done by electrical-circuit filtering methods. In essence, signal strength in isolated discrete bands of Doppler frequency defined image intensities that were displayed at matching angular positions within proper range locations. When only the central (zero-Doppler band) portion of the return signals was used, the effect was as if only that central part of the beam existed. That led to the term Doppler Beam Sharpening. Displaying returns from several adjacent non-zero Doppler frequency bands accomplished further "beam-subdividing" (sometimes called "unfocused radar", though it could have been considered "semi-focused"). Wiley's patent, applied for in 1954, still proposed similar processing. The bulkiness of the circuitry then available limited the extent to which those schemes might further improve resolution.
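The mapping that makes this work is the dependence of the Doppler shift on a ground target's angle off the aircraft velocity vector; a minimal sketch with hypothetical radar parameters (not those of any historical system):

```python
# Two-way Doppler shift of a ground return vs. angle off the velocity vector.
from math import cos, radians

def doppler_hz(v_mps, wavelength_m, angle_deg):
    return 2.0 * v_mps * cos(radians(angle_deg)) / wavelength_m

v, lam = 120.0, 0.03  # hypothetical: 120 m/s aircraft, 3 cm (X-band) wavelength
for angle in (88, 89, 90, 91, 92):  # 90 deg = broadside, the zero-Doppler band
    print(f"{angle} deg -> {doppler_hz(v, lam, angle):7.1f} Hz")
# Filtering a narrow band around zero Doppler selects a narrow angular slice
# near broadside, which is exactly the "beam sharpening" described above.
```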
The principle was included in a memorandum [ 5 ] authored by Walter Hausz of General Electric that was part of the then-secret report of a 1952 Dept. of Defense summer study conference called TEOTA ("The Eyes of the Army"), [ 6 ] which sought to identify new techniques useful for military reconnaissance and technical gathering of intelligence. A follow-on summer program in 1953 at the University of Michigan , called Project Wolverine, identified several of the TEOTA subjects, including Doppler-assisted sub-beamwidth resolution, as research efforts to be sponsored by the Department of Defense (DoD) at various academic and industrial research laboratories. In that same year, the Illinois group produced a "strip-map" image exhibiting a considerable amount of sub-beamwidth resolution.
A more advanced focused-radar project was among several remote sensing schemes assigned in 1953 to Project Michigan, a tri-service-sponsored (Army, Navy, Air Force) program at the University of Michigan 's Willow Run Research Center (WRRC), that program being administered by the Army Signal Corps . Initially called the side-looking radar project, it was carried out by a group first known as the Radar Laboratory and later as the Radar and Optics Laboratory. It proposed to take into account, not just the short-term existence of several particular Doppler shifts, but the entire history of the steadily varying shifts from each target as the latter crossed the beam. An early analysis by Dr. Louis J. Cutrona, Weston E. Vivian, and Emmett N. Leith of that group showed that such a fully focused system should yield, at all ranges, a resolution equal to the width (or, by some criteria, the half-width) of the real antenna carried on the radar aircraft and continually pointed broadside to the aircraft's path. [ 7 ]
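The striking consequence of that analysis, resolution independent of range and equal to half the real antenna width, is easiest to see against the real-aperture case; a sketch under the same kind of hypothetical parameters as above:

```python
# Real-aperture vs. fully focused synthetic-aperture azimuth resolution.
def real_aperture_res_m(wavelength_m, antenna_m, range_m):
    return wavelength_m * range_m / antenna_m  # beam footprint grows with range

def focused_sar_res_m(antenna_m):
    return antenna_m / 2.0  # independent of range (half-width criterion)

lam, D = 0.03, 1.5  # hypothetical 3 cm wavelength, 1.5 m (5 ft) antenna
for rng in (5e3, 20e3, 80e3):
    print(f"{rng/1e3:3.0f} km: real aperture {real_aperture_res_m(lam, D, rng):6.0f} m,"
          f" focused SAR {focused_sar_res_m(D):.2f} m")
```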
The required data processing amounted to calculating cross-correlations of the received signals with samples of the forms of signals to be expected from unit-amplitude sources at the various ranges. At that time, even large digital computers had capabilities somewhat near the levels of today's four-function handheld calculators, hence were nowhere near able to do such a huge amount of computation. Instead, the device for doing the correlation computations was to be an optical correlator .
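In modern terms, what the optical correlator computed is a matched filter; a minimal digital sketch with a synthetic quadratic-phase ("chirp") signal, illustrative only:

```python
# Matched-filter compression of a synthetic azimuth chirp -- the operation the
# optical correlator of the 1950s performed in analog form.
import numpy as np

n = 512
t = np.linspace(-1.0, 1.0, n)
reference = np.exp(1j * np.pi * 80.0 * t**2)              # quadratic-phase chirp
echo = np.roll(reference, 37) + 0.1 * np.random.randn(n)  # shifted target + noise

# Cross-correlate the received signal with the expected signal form.
compressed = np.abs(np.correlate(echo, reference, mode="same"))
print("target located at sample", int(np.argmax(compressed)))  # ~ n//2 + 37
```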
It was proposed that signals received by the traveling antenna and coherently detected be displayed as a single range-trace line across the diameter of the face of a cathode-ray tube , the line's successive forms being recorded as images projected onto a film traveling perpendicular to the length of that line. The information on the developed film was to be subsequently processed in the laboratory on equipment still to be devised as a principal task of the project. In the initial processor proposal, an arrangement of lenses was expected to multiply the recorded signals point-by-point with the known signal forms by passing light successively through both the signal film and another film containing the known signal pattern. The subsequent summation, or integration, step of the correlation was to be done by converging appropriate sets of multiplication products by the focusing action of one or more spherical and cylindrical lenses. The processor was to be, in effect, an optical analog computer performing large-scale scalar arithmetic calculations in many channels (with many light "rays") at once. Ultimately, two such devices would be needed, their outputs to be combined as quadrature components of the complete solution.
A desire to keep the equipment small had led to recording the reference pattern on 35 mm film . Trials promptly showed that the patterns on the film were so fine as to show pronounced diffraction effects that prevented sharp final focusing. [ 8 ]
That led Leith, a physicist who was devising the correlator, to recognize that those effects could themselves, by natural processes, perform a significant part of the needed processing, since along-track strips of the recording operated like diametrical slices of a series of circular optical zone plates. Any such plate performs somewhat like a lens, each plate having a specific focal length for any given wavelength. The recording, which had been considered as scalar, became recognized as pairs of opposite-sign vector quantities of many spatial frequencies plus a zero-frequency "bias" quantity. The needed correlation summation changed from a pair of scalar ones to a single vector one.
Each zone plate strip has two equal but oppositely signed focal lengths, one real, where a beam through it converges to a focus, and one virtual, where another beam appears to have diverged from, beyond the other face of the zone plate. The zero-frequency ( DC bias ) component has no focal point, but overlays both the converging and diverging beams. The key to obtaining, from the converging wave component, focused images that are not overlaid with unwanted haze from the other two is to block the latter, allowing only the wanted beam to pass through a properly positioned frequency-band selecting aperture.
Each radar range yields a zone plate strip with a focal length proportional to that range. This fact became a principal complication in the design of optical processors . Consequently, technical journals of the time contain a large volume of material devoted to ways for coping with the variation of focus with range.
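For reference, the focal length of a zone plate whose innermost zone has radius r1, illuminated at wavelength λ, is approximately (a standard optics result, added here for clarity):

$$f \approx \frac{r_1^2}{\lambda}$$

so the recorded signal histories, whose zone spacing depends on radar range, focus at distances proportional to that range, which is precisely the range-dependent focusing problem described above.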
For that major change in approach, the light used had to be both monochromatic and coherent, properties that were already a requirement on the radar radiation. Lasers then still being in the future, the best available approximation to a coherent light source was the output of a mercury vapor lamp , passed through a color filter matched to the lamp spectrum's green band and then concentrated as well as possible onto a very small beam-limiting aperture. While the resulting amount of light was so weak that very long exposure times had to be used, a workable optical correlator was assembled in time to be used when appropriate data became available.
Although creating that radar was a more straightforward task based on already-known techniques, that work did demand the achievement of signal linearity and frequency stability that were at the extreme state of the art. An adequate instrument was designed and built by the Radar Laboratory and was installed in a C-46 ( Curtiss Commando ) aircraft. Because the aircraft was bailed to WRRC by the U. S. Army and was flown and maintained by WRRC's own pilots and ground personnel, it was available for many flights at times matching the Radar Laboratory's needs, a feature important for allowing frequent re-testing and "debugging" of the continually developing complex equipment. By contrast, the Illinois group had used a C-46 belonging to the Air Force and flown by AF pilots only by pre-arrangement, resulting, in the eyes of those researchers, in limitation to a less-than-desirable frequency of flight tests of their equipment, hence a low bandwidth of feedback from tests. (Later work with newer Convair aircraft continued the Michigan group's local control of flight schedules.)
Michigan's chosen 5-foot (1.5 m)-wide World War II-surplus antenna was theoretically capable of 5-foot (1.5 m) resolution, but data from only 10% of the beamwidth was used at first, the goal at that time being to demonstrate 50-foot (15 m) resolution. It was understood that finer resolution would require the added development of means for sensing departures of the aircraft from an ideal heading and flight path, and for using that information for making needed corrections to the antenna pointing and to the received signals before processing. After numerous trials in which even small atmospheric turbulence kept the aircraft from flying straight and level enough for good 50-foot (15 m) data, one pre-dawn flight in August 1957 [ 9 ] yielded a map-like image of the Willow Run Airport area which did demonstrate 50-foot (15 m) resolution in some parts of the image, whereas the illuminated beam width there was 900 feet (270 m). Although the program had been considered for termination by DoD due to what had seemed to be a lack of results, that first success ensured further funding to continue development leading to solutions to those recognized needs.
The SAR principle was first acknowledged publicly via an April 1960 press release about the U. S. Army experimental AN/UPD-1 system, which consisted of an airborne element made by Texas Instruments and installed in a Beech L-23D aircraft and a mobile ground data-processing station made by WRRC and installed in a military van. At the time, the nature of the data processor was not revealed. A technical article in the journal of the IRE ( Institute of Radio Engineers ) Professional Group on Military Electronics in February 1961 [ 10 ] described the SAR principle and both the C-46 and AN/UPD-1 versions, but did not tell how the data were processed, nor that the UPD-1's maximum resolution capability was about 50 feet (15 m). However, the June 1960 issue of the IRE Professional Group on Information Theory had contained a long article [ 11 ] on "Optical Data Processing and Filtering Systems" by members of the Michigan group. Although it did not refer to the use of those techniques for radar, readers of both journals could quite easily understand the existence of a connection between articles sharing some authors.
An operational system to be carried in a reconnaissance version of the F-4 "Phantom" aircraft was quickly devised and was used briefly in Vietnam, where it failed to favorably impress its users, due to the combination of its low resolution (similar to the UPD-1's), the speckled nature of its coherent-wave images (similar to the speckle of laser images), and the poorly understood dissimilarity of its range/cross-range images from the angle/angle optical ones familiar to military photo interpreters. The lessons it provided were well learned by subsequent researchers, operational system designers, image-interpreter trainers, and the DoD sponsors of further development and acquisition.
In subsequent work the technique's latent capability was eventually achieved. That work, depending on advanced radar circuit designs and precision sensing of departures from ideal straight flight, along with more sophisticated optical processors using laser light sources and specially designed very large lenses made from remarkably clear glass, allowed the Michigan group to advance system resolution, at about 5-year intervals, first to 15 feet (4.6 m), then 5 feet (1.5 m), and, by the mid-1970s, to 1 foot (the latter only over very short range intervals while processing was still being done optically). The latter levels and the associated very wide dynamic range proved suitable for identifying many objects of military concern as well as soil, water, vegetation, and ice features being studied by a variety of environmental researchers having security clearances allowing them access to what was then classified imagery. Similarly improved operational systems soon followed each of those finer-resolution steps.
Even the 5-foot (1.5 m) resolution stage had over-taxed the ability of cathode-ray tubes (limited to about 2000 distinguishable items across the screen diameter) to deliver fine enough details to signal films while still covering wide range swaths, and taxed the optical processing systems in similar ways. However, at about the same time, digital computers finally became capable of doing the processing without similar limitation, and the consequent presentation of the images on cathode ray tube monitors instead of film allowed for better control over tonal reproduction and for more convenient image mensuration.
Achievement of the finest resolutions at long ranges was aided by adding the capability to swing a larger airborne antenna so as to more strongly illuminate a limited target area continually while collecting data over several degrees of aspect, removing the previous limitation of resolution to the antenna width. This was referred to as the spotlight mode, which no longer produced continuous-swath images but, instead, images of isolated patches of terrain.
It was understood very early in SAR development that the extremely smooth orbital path of an out-of-the-atmosphere platform made it ideally suited to SAR operation. Early experience with artificial earth satellites had also demonstrated that the Doppler frequency shifts of signals traveling through the ionosphere and atmosphere were stable enough to permit very fine resolution to be achievable even at ranges of hundreds of kilometers. [ 12 ] The first spaceborne SAR images of Earth were demonstrated by a project now referred to as Quill (declassified in 2012). [ 13 ]
Several of the capabilities needed for creating useful classified systems did not exist until roughly two decades after the initial work began.
That seemingly slow rate of advances was often paced by the progress of other inventions, such as the laser, the digital computer , circuit miniaturization, and compact data storage. Once the laser appeared, optical data processing became a fast process because it provided many parallel analog channels, but devising optical chains suited to matching signal focal lengths to ranges proceeded by many stages and turned out to call for some novel optical components. Since the process depended on diffraction of light waves, it required anti-vibration mountings , clean rooms , and highly trained operators. Even at its best, its use of CRTs and film for data storage placed limits on the range depth of images.
At several stages, attaining the frequently over-optimistic expectations for digital computation equipment proved to take far longer than anticipated. For example, the SEASAT system was ready to orbit before its digital processor became available, so a quickly assembled optical recording and processing scheme had to be used to obtain timely confirmation of system operation. In 1978, the first digital SAR processor was developed by the Canadian aerospace company MacDonald Dettwiler (MDA) . [ 14 ] When its digital processor was finally completed and used, the digital equipment of that time took many hours to create one swath of image from each run of a few seconds of data. [ 15 ] Still, while that was a step down in speed, it was a step up in image quality. Modern methods now provide both high speed and high quality.
Highly accurate data can be collected by aircraft overflying the terrain in question. In the 1980s, as a prototype for instruments to be flown on the NASA Space Shuttles, NASA operated a synthetic aperture radar on a NASA Convair 990 . In 1986, this plane caught fire on takeoff. In 1988, NASA rebuilt a C, L, and P-band SAR to fly on the NASA DC-8 aircraft. Called AIRSAR , it flew missions at sites around the world until 2004. Another such aircraft, the Convair 580 , was flown by the Canada Centre for Remote Sensing until about 1996, when it was handed over to Environment Canada for budgetary reasons. Most land-surveying applications are now carried out by satellite observation. Satellites such as ERS-1 /2, JERS-1 , Envisat ASAR, and RADARSAT-1 were launched explicitly to carry out this sort of observation. Their capabilities differ, particularly in their support for interferometry, but all have collected tremendous amounts of valuable data. The Space Shuttle also carried synthetic aperture radar equipment during the SIR-A and SIR-B missions during the 1980s, the Shuttle Radar Laboratory (SRL) missions in 1994 and the Shuttle Radar Topography Mission in 2000.
The Venera 15 and Venera 16 probes, followed later by the Magellan space probe, mapped the surface of Venus over several years using synthetic aperture radar.
Synthetic aperture radar was first used by NASA on JPL's Seasat oceanographic satellite in 1978 (this mission also carried an altimeter and a scatterometer ); it was later developed more extensively on the Spaceborne Imaging Radar (SIR) missions on the space shuttle in 1981, 1984 and 1994. The Cassini mission to Saturn used SAR to map the surface of the planet's major moon Titan , whose surface is partly hidden from direct optical inspection by atmospheric haze. The SHARAD sounding radar on the Mars Reconnaissance Orbiter and MARSIS instrument on Mars Express have observed bedrock beneath the surface of the Mars polar ice and also indicated the likelihood of substantial water ice in the Martian middle latitudes. The Lunar Reconnaissance Orbiter , launched in 2009, carries a SAR instrument called Mini-RF , which was designed largely to look for water ice deposits on the poles of the Moon .
The Mineseeker Project is designing a system for determining whether regions contain landmines based on a blimp carrying ultra-wideband synthetic aperture radar. Initial trials show promise; the radar is able to detect even buried plastic mines.
The National Reconnaissance Office maintains a fleet of (now declassified) synthetic aperture radar satellites commonly designated as Lacrosse or Onyx .
In February 2009, the Sentinel R1 surveillance aircraft entered service in the RAF, equipped with the SAR-based Airborne Stand-Off Radar ( ASTOR ) system.
The German Armed Forces' ( Bundeswehr ) military SAR-Lupe reconnaissance satellite system has been fully operational since 22 July 2008.
As of January 2021, multiple commercial companies have started launching constellations of satellites for collecting SAR imagery of Earth. [ 16 ]
The Alaska Satellite Facility provides production, archiving and distribution to the scientific community of SAR data products and tools from active and past missions, including the June 2013 release of newly processed, 35-year-old Seasat SAR imagery.
The Center for Southeastern Tropical Advanced Remote Sensing (CSTARS) downlinks and processes SAR data (as well as other data) from a variety of satellites and supports the University of Miami 's Rosenstiel School of Marine, Atmospheric, and Earth Science . CSTARS also supports disaster relief operations, oceanographic and meteorological research, and port and maritime security research projects. | https://en.wikipedia.org/wiki/History_of_synthetic-aperture_radar |
The tablet computer and its associated special operating software are an example of pen computing technology, and thus the development of tablets has deep historical roots. [ 1 ] The first patent for a system that recognized handwritten characters by analyzing the handwriting motion was granted in 1914. [ 2 ] The first publicly demonstrated system using a tablet and handwriting recognition instead of a keyboard for working with a modern digital computer dates to 1956. [ 3 ]
In addition to many academic and research systems, there were several companies with commercial products in the 1980s: Pencept and Communications Intelligence Corporation were among the best known of a crowded field.
Tablet computers appeared in a number of works of science fiction in the second half of the 20th century, with the depiction of Arthur C. Clarke 's NewsPad [ 4 ] appearing in Stanley Kubrick's 1968 film 2001: A Space Odyssey , the description of the Calculator Pad in the 1951 novel Foundation by Isaac Asimov, the Opton in the 1961 novel Return from the Stars by Stanislaw Lem, and The Hitchhiker's Guide to the Galaxy in Douglas Adams's 1978 comedy of the same name, all helping to promote and disseminate the concept to a wider audience. [ 5 ]
In 1968, Alan Kay, then a PhD candidate, envisioned a KiddiComp. [ 6 ] [ 7 ] He developed and described the concept as the Dynabook in his 1972 proposal "A personal computer for children of all ages". [ 8 ] The paper outlines the requirements for a conceptual portable educational device that would offer functionality similar to that supplied via a laptop computer or, in some of its other incarnations, a tablet or slate computer, except that any Dynabook device was required to offer near-eternal battery life. Adults could also use a Dynabook, but the target audience was children.
Steve Jobs of Apple envisioned in a 1983 speech an "incredibly great computer in a book that you can carry around with you and learn how to use in 20 minutes". [ 9 ] In 1985, as the home-computer market significantly declined after several years of strong growth, Dan Bricklin said that a successful home computer needed to be the size of and as convenient to carry as a spiral notebook. He and others urged the industry to research the Dynabook concept. [ 10 ] In 1988, Apple, as part of the "Design the Personal Computer of the Year 2000" contest, awarded a project named TABLET, [ 11 ] inspired by the Dynabook. This tablet included all the features found in "modern" smartphones: camera, video recorder, microphone, speaker, cellular communication, GPS, and more.
Star Trek: The Next Generation featured extensive use of tablet computers. [ 12 ]
In 1986, Hindsight, a startup in Enfield CT, developed the Letterbug, an 8086-based tablet computer for the educational market. Prototypes were shown at trade shows in New England in 1987, but no production models ever came out. [ 13 ]
In 1987 Linus Technologies released the Write-Top , the first tablet computer with pen input and handwriting recognition. It weighed 9 pounds and was based on MS-DOS with an electroluminescent backlit CGA display and a "resistive type touch screen in which a voltage is applied to the screen edges, and a stylus detects the voltage at the touched location." The handwriting recognition had to be trained individually for each user. Around 1500 units were sold. [ 14 ] [ 15 ]
In 1988, Hermann Hauser , co-founder of Acorn Computers , established the Active Book Company Ltd with Olivetti to develop an ARM -based pen computer with GSM connectivity, utilising a Smalltalk -based touch OS. [ 16 ] [ 17 ] The company was bought by AT&T, and some of its technology was borrowed for AT&T's 1991 EO Personal Communicator . [ 18 ]
In 1989, Grid Systems released the GridPad 1900, the first commercially successful tablet computer. It weighed 4.5 pounds and had a tethered-pen resistive screen like the Write-Top's. The handwriting recognition was created by Jeff Hawkins , who led the GridPad development and later created the PalmPilot . Its GRiDPen software ran on MS-DOS and was later licensed as PenRight.
In 1991, the Atari ST -PAD Stylus was demonstrated but did not enter production. [ 19 ]
In 1991, AT&T released their first EO Personal Communicator . This was one of the first commercially available tablets and ran the GO Corporation 's PenPoint OS on AT&T's own hardware, including their own AT&T Hobbit CPU.
In 1992, Samsung introduced the PenMaster. [ 20 ] It was based around the Intel i386SL CPU. As the OS, it used the newly released Windows for Pen Computing from Microsoft. The touchscreen relied on a chipset by Wacom and used a battery-powered pen. GRID Systems licensed the design from Samsung, and it was also sold as the better-known GRiDPad SL. [ 21 ]
In 1993, Apple Computer released the Apple Newton , with a 6-inch screen and a weight of 800 grams. [ 22 ] It utilized Apple's own new Newton OS , initially running on hardware manufactured by Motorola and incorporating an ARM CPU that Apple had specifically co-developed with Acorn Computers. The operating system and platform design were later licensed to Sharp and Digital Ocean , who went on to manufacture their own variants.
The Compaq Concerto was released in 1993 with a Compaq-modified version of MS-DOS 6.2 and Windows 3.1, a.k.a. Windows for PEN, with pen-entry and Wacom compatibility. Functionally, the Concerto was a full-featured laptop that could operate in pen mode when the keyboard was removed.
In 1994 the media company Knight Ridder made a concept video of a tablet device with a color display and a focus on media consumption . [ 23 ] The company did not bring it to market as a commercial product because the display technology of the time made such a device too heavy and power-hungry.
In 1994, the European Union initiated the 'OMI-NewsPAD' project (EP9252), requiring a consumer device be developed for the receipt and consumption of electronically delivered news / newspapers and associated multi-media. [ 24 ] The NewsPad name and project goals were borrowed from and inspired by Arthur C. Clarke's 1965 screenplay and Stanley Kubrick's 1968 film: 2001: A Space Odyssey. [ 25 ] [ 26 ] Acorn Computers developed and delivered an ARM based touch screen tablet computer for this program, branded the NewsPad. The device was supplied for the duration of the Barcelona-based trial, which ended in 1997. [ 27 ] [ 28 ]
In 1996, The Webbook Company announced the first Internet-based tablet, then referred to as a Web Surfboard, that would run Java and utilize a RISC processor . [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] However, it never went into production. [ citation needed ]
Also in 1996, Palm, Inc. released the first of its Palm OS -based PalmPilot touch- and stylus-based PDAs, the devices initially incorporating a Motorola Dragonball (68000) CPU.
Again in 1996, Fujitsu released the Stylistic 1000 tablet-format PC, running Microsoft Windows 95 on a 100 MHz AMD486 DX4 CPU with 8 MB RAM, offering stylus input with the option of connecting a conventional keyboard and mouse.
In 1999, Intel announced a StrongARM -based touch screen tablet computer under the name WebPAD; the tablet was later re-branded as the "Intel Web Tablet". [ 34 ] [ 35 ] [ 36 ]
In April 2000, Microsoft launched the Pocket PC 2000, utilising their touch-capable Windows CE 3.0 operating system. The devices were produced by several manufacturers, based on a mix of x86 , MIPS , ARM , and SuperH hardware.
In 1999, Microsoft attempted to re-institute the then decades-old tablet concept by assigning two well-known experts in the field, from Xerox Palo Alto Research Center , to the project. [ 37 ]
In 2000, Microsoft coined the term " Microsoft Tablet PC " for tablet computers built to Microsoft's specification, and running a licensed specific tablet enhanced version of its Microsoft Windows OS, popularizing the term tablet PC for this class of devices. [ 38 ] [ 39 ] [ 40 ] Microsoft Tablet PCs were targeted to address business needs mainly as note-taking devices, and as rugged devices for field work. [ 41 ] In the health care sector, tablet computers were intended for data capture – such as registering feedback on the patient experience at the bedside – as well as supporting data collection through digital survey instruments. [ 42 ]
In 2002, original equipment manufacturers released the first tablet PCs designed to the Microsoft Tablet PC specification. This generation of Microsoft Tablet PCs were designed to run Windows XP Tablet PC Edition, the Tablet PC version of Windows XP . [ 43 ] This version of Microsoft Windows superseded Microsoft's earlier pen computing operating environment, Windows for Pen Computing 2.0 . After releasing Windows XP Tablet PC Edition, Microsoft designed the successive desktop computer versions of Windows, Windows Vista and Windows 7 , to support pen computing intrinsically.
Tablet PCs failed to gain popularity in the consumer space because of unresolved problems. [ 44 ] The existing devices were too heavy to be held with one hand for extended periods, the specific software features designed to support usage as a tablet (such as finger and virtual keyboard support) were not present in all contexts, [ 45 ] [ 46 ] and there were not enough applications specific to the platform [ 47 ] – legacy applications created for desktop interfaces were not well adapted to the slate format.
One early implementation of a Linux tablet was the ProGear by FrontPath. The ProGear used a Transmeta chip and a resistive digitizer.
The ProGear initially came with a version of Slackware Linux , but could later be bought with Windows 98 . Because these computers are general purpose IBM PC compatible machines, they can run many different operating systems. However, the device is no longer for sale and FrontPath has ceased operations. Many touch screen sub-notebook computers can run any of several Linux distributions with little customization. [ citation needed ]
X.org supports screen rotation and tablet input through Wacom drivers, and handwriting recognition software from both the Qt -based Qtopia and GTK+ -based Internet Tablet OS provide promising free and open source systems for future development.
Open source note taking software in Linux includes applications such as Xournal (which supports PDF file annotation), Gournal (a Gnome-based note taking application), and the Java-based Jarnal (which supports handwriting recognition as a built-in function). Before the advent of the aforementioned software, many users had to rely on on-screen keyboards and alternative text input methods like Dasher . There is a stand-alone handwriting recognition program available, CellWriter, in which users must write letters separately in a grid.
A number of Linux-based OS projects are dedicated to tablet PCs. Since all these are open source, they are freely available and can be run or ported to devices that conform to the tablet PC design. Maemo (rebranded MeeGo in 2010), a Debian GNU/Linux based graphical user environment, was developed for the Nokia Internet Tablet devices (770, N800, N810 & N900). The Ubuntu Netbook Remix edition , as well as the Intel-sponsored Moblin project, both have touchscreen support integrated into their user interfaces. Canonical Ltd has started a program to better support tablets with the Unity UI for Ubuntu 10.10 . [ 50 ] On the hardware side, in 2003 Hitachi introduced the VisionPlate rugged tablet, [ 48 ] which was used as a point-of-sale device. [ 49 ]
TabletKiosk offered a hybrid digitizer / touch device running openSUSE . [ 51 ]
Initially developed by Palm, Inc. and introduced in January 2009, webOS was later purchased by HP to be their proprietary operating system running on the Linux kernel. Versions 1.0 to 2.1 of webOS use the patched Linux 2.6.24 kernel. HP has continued to develop the webOS platform for use in multiple products, including smartphones, tablet PCs, and printers. In March 2011, HP announced plans for a version of webOS to run within the Microsoft Windows operating system by the end of 2011, to be used in HP desktop and notebook computers in 2012.
The HP TouchPad, the first addition to HP's tablet family, shipped with webOS version 3.0.2. [ 52 ] Version 3.0.2 gave the tablet support for multitasking, applications, and HP Synergy. HP also claimed in its web catalog to support over 200 apps at the device's release. [ 53 ]
On 18 August 2011, HP announced that it would discontinue production of all webOS devices. [ 54 ] [ 55 ]
Nokia entered the tablet space with the Nokia 770 running Maemo , a Debian-based Linux distribution custom-made for their Nokia Internet Tablet line. The product line continued with the N900, the first to add phone capabilities. Intel, following the launch of the UMPC, started the Mobile Internet Device initiative, which took the same hardware and combined it with a Linux operating system custom-built for portable tablets. Intel co-developed the lightweight Moblin operating system following the successful launch of the Atom CPU series on netbooks.
MeeGo is an operating system developed by Intel and Nokia to support Netbooks, Smartphones and tablet PCs. In 2010, Nokia and Intel combined the Maemo and Moblin projects to form MeeGo. The first [ clarification needed ] MeeGo-powered tablet PC was the Neofonie WeTab . The WeTab uses an extended version of the MeeGo operating system called WeTab OS. WeTab OS adds runtimes for Android and Adobe AIR and provides a proprietary user interface optimized for the WeTab device. [ citation needed ]
Apple has never sold a tablet computer running Mac OS X , although OS X does have support for handwriting recognition via Inkwell . However, Apple sells the iOS -based iPad tablet computer , introduced in 2010.
Before the introduction of the iPad, Axiotron introduced the Modbook , a heavily modified Apple MacBook , Mac OS X-based tablet computer at Macworld in 2007. [ 56 ] The Modbook used Apple's Inkwell handwriting and gesture recognition, and used digitization hardware from Wacom . To support the digitizer on the integrated tablet, the Modbook was supplied with a third-party driver called TabletMagic . Wacom does not provide drivers for this device.
The tablet computer market was reinvigorated by Apple through the introduction of the iPad device in 2010. [ 57 ] While the iPad places restrictions on the owner's ability to install software, [ 58 ] [ 59 ] [ 60 ] thus deviating from the PC tradition, its attention to detail for the touch interface [ 61 ] is considered a milestone in the history of the development of the tablet computer [ 44 ] that defined the tablet computer as a new class of portable device, different from a laptop PC or netbook. [ 62 ] A WiFi-only model of the tablet was released in April 2010, and a WiFi+3G model was introduced about a month later, using a no-contract data plan from AT&T . Since then, the iPad 2 has launched, bringing 3G support from both AT&T and Verizon Wireless . The iPad has been characterized by some as a tablet computer that mainly focuses on media consumption such as web browsing, email, photos, videos, and e-reading, even though full-featured, Microsoft Office-compatible software for word processing ( Pages ), spreadsheets ( Numbers ), and presentations ( Keynote ) was released alongside the initial model. One month after the iPad's release, Apple subsidiary FileMaker Inc. released a version of the Bento database software for it. [ 63 ] With the introduction of the iPad 2, Apple also released full-featured first-party software for multi-track music composition ( GarageBand ) and video editing ( iMovie ). As of the release of iOS 5 in October 2011, iPads no longer needed to be plugged into a separate personal computer for initial activation and backups, eliminating one of the drawbacks of using a non-PC-architecture-based tablet computer.
On 20 May 2010, IDC published a press release defining the term media tablet as personal devices with screens from 7 to 12 inches, lightweight operating systems "currently based on ARM processors" which "provide a broad range of applications and connectivity, differentiating them from primarily single-function devices such as ereaders". [ 64 ] IDC also predicted a market growth for tablets from 7.6 million units in 2010, to more than 46 million units in 2014. More recent reports show predictions from various analysts in the range from 26 to 64 million units in 2013. [ 65 ] On 2 March 2011, Apple announced that 15 million iPads had been sold in three fiscal quarters of 2010, [ 66 ] double the number that IDC then predicted.
Early competitors to Apple's iPad in the market for tablet computers not based on the traditional PC architecture were the 5-inch Dell Streak , released in June 2010; the original 7-inch Samsung Galaxy Tab , released in September 2010; [ citation needed ] and the Fusion5 X220 Tablet PC, also released in September 2010.
At the Consumer Electronics Show in January 2011, over 80 new tablets were announced to compete with the iPad. Companies that announced tablets included Dell with the Streak Tablet , Acer with the new Acer Tab , Motorola with its Xoom tablet ( Android 3.0 ), Samsung with a new Samsung Galaxy Tab ( Android 2.2 ), Research in Motion demonstrating their BlackBerry Playbook , Vizio with the Via Tablet, Toshiba with the Android 3.0-based Toshiba Thrive , and others including Asus and the startup company Notion Ink. Many of these tablets were designed to run Android 3.0 Honeycomb, Google 's mobile operating system for tablets, while others ran older versions of Android like 2.3 , or a completely different OS such as the BlackBerry Playbook's QNX . [ 67 ] Other than the Motorola Xoom, by the time most competitors released devices comparable in size and price to the original iPad, Apple had already released its second-generation iPad 2 in March 2011.
Hewlett-Packard announced its TouchPad based on the WebOS system in June 2011. HP released it a month later in July, only to discontinue it after less than 49 days of sales, becoming the first casualty in the post-PC tablet computer market. [ 68 ] [ 69 ] The fire sale that followed its discontinuation, when the TouchPad's price was dropped from US$499 to as low as $99, resulted in a surge of interest. [ 70 ] This dramatic increase in its popularity [ 71 ] potentially raised its market share above all other non-Apple tablets, at least temporarily.
In September 2011, Amazon.com announced the Kindle Fire , a 7-inch tablet deeply tied into their Kindle ebook service, Amazon Appstore , and other Amazon services for digital music, video, and other content. The Kindle Fire runs on Amazon's custom fork of v2.3 of the Android operating system. [ 72 ] Using Amazon's cloud services for accelerated web browsing and remote storage, Amazon has set it up to have very little other connection back to Google, aside from supporting Gmail as one of the several webmail services it can access. [ 72 ] With the Kindle Fire priced at only US$199, it has been suggested that Amazon's business strategy is to make its money selling content through the device, as well as having it act as a storefront for physical goods sold through Amazon. [ 73 ] [ 74 ] Besides the Kindle Fire's low price, reviewers also noted that it was polished at its initial release, in comparison to other tablets that often needed software updates. [ 75 ]
Despite the large number of competing tablets released in 2011, none of them had managed to gain considerable traction as the market continued to be dominated by the iPad and iPad 2. Several manufacturers had to resort to deep discounts to move excess inventory, as happened with the HP TouchPad (after its announced discontinuation) and the BlackBerry Playbook. It has been suggested that many companies, in their rush to jump on the "tablet bandwagon", had released products that might have had decent hardware but lacked refinement and came with software bugs that needed updates. [ 75 ] [ 76 ]
According to IDC, Android had 63% of all "media tablet" sales in 2013, a share that was still rising, and Windows was also rising in market share. Apple's iPad had 83% of all "media tablet" sales in 2010 and a 28% market share in 2013. [ 77 ] At the unveiling of the iPad 2 in March 2011, Steve Jobs claimed that the iPad held more than 90% market share, but the difference between the figures could be explained by the difference between the amount of hardware shipped into the channel versus the number actually sold. [ 78 ]
In August 2011, the iPad and iPad 2 dominated sales, outselling Android and other rival OS tablets by a ratio of eight to one. [ 80 ] [ 81 ] Apple's iPad held 66 percent of the global tablet market in Q1 of 2011, [ citation needed ] but the share was predicted to drop to 58 percent by the end of the year [ citation needed ] due to the influx of new products, mostly Android tablets. Technology experts [ who? ] suggest that Apple is seeking court injunctions to stop the slide, although these injunctions are only preliminary measures: to make any bans permanent, Apple has to provide more substantial evidence in subsequent court proceedings that the design of competing products infringed its patents or copied their designs. These cases take months or even years to come to court unless there is a settlement, and if Apple loses it will be liable for the business lost by a competitor due to the injunction. Although risky, experts [ who? ] say that this kind of strategy buys time for Apple to hold off rivals and grab even greater market share with the iPad, since it is a fast-developing market in which Apple leads, regardless of the damages it may have to pay if it loses a case. Google's David Drummond complained: "They (Apple) want to make it harder for manufacturers to sell Android devices. Instead of competing by building new features or devices, they are fighting through litigation." [ 82 ]
On 14 September 2011, IDC announced that in the second calendar quarter of 2011, the market share of the iPad increased to 68.3% from 65.7% in the previous quarter, while market share for Android-based tablets decreased from 34.0% the previous quarter down to 26.8% in the second quarter. Besides being affected by the introduction of the iPad 2 in March 2011, this can also be partially attributed to the introduction of RIM's PlayBook tablet, which took 4.9% share of the market in the quarter. [ 79 ]
On 22 September 2011, Gartner lowered their forecast for sales of tablet computers based on the Android OS by 28 percent from the previous quarter's projection, [ 83 ] explaining that "Android’s appeal in the tablet market has been constrained by high prices, weak user interface and limited tablet applications." Further, they state that they expect the iPad to have a "free run" through the 2011 holiday season and that Apple will "maintain a market share lead throughout our forecast period by commanding more than 50 percent of the market until 2014." [ 84 ] Gartner revised their projection of Apple's worldwide tablet market share at the end of 2011, up to 73.4% after their previous projection of 68.7% for the year.
In October 2011, at the Launch Pad conference Ryan Block from gadget site gdgt showed slides identifying the makeup of the site's users who bought tablets in 2011 consisting of 76% iPad (39% iPad 2, 37% original iPad), 6% HP TouchPad, and no other tablet at over 4%. He noted that the numbers did not include previous purchases of the iPad or other tablets in 2010. In a breakdown by platform he showed a chart indicating Apple's iOS at 76%, Google's Android at 17%, HP's webOS at 6%, and RIM's PlayBook OS at 2%. [ 85 ]
A report by Strategy Analytics showed that the share of Android tablet computers had risen sharply at the expense of Apple's iOS in the fourth quarter of 2011. According to Strategy Analytics, Android accounted for 39% of the global tablet market in the final three months of 2011, up from 29% a year earlier. Apple's share fell to 58% from 68%. A total of 26.8 million tablet computers were sold in the quarter, up from 10.7 million a year earlier, the report said. [ 86 ]
In China, according to an AlphaWise survey of 1,553 Chinese consumers across 16 cities over the summer of 2011, Apple's iPad currently holds a 65% share of that nation's tablet market. When asked about future purchases, 68% of those surveyed indicated an intent to buy an iPad, versus other brands' shares of 10% for Asus, 8% for Lenovo, 6% for Samsung, and 3% or less for any other brand. [ 87 ]
According to eMarketer & Forbes , advertisers will spend nearly $1.23 billion on mobile advertising in 2011 in the US, up from $743 million last year. By 2015, the US mobile advertising market is set to reach almost $4.4 billion. This includes spending on display ads (such as banners, rich media and video), search and messaging-based advertising, and covers ads viewed on both mobile phones and tablets. [ 88 ] | https://en.wikipedia.org/wiki/History_of_tablet_computers |
In computer science , the Actor model , first published in 1973, is a mathematical model of concurrent computation .
A fundamental challenge in defining the Actor model is that it does not provide for global states, so a computational step cannot be defined as going from one global state to the next global state, as had been done in all previous models of computation.
In 1963 in the field of Artificial Intelligence , John McCarthy introduced situation variables in logic in the Situational Calculus. In McCarthy and Hayes 1969, a situation is defined as "the complete state of the universe at an instant of time." In this respect, the situations of McCarthy are not suitable for use in the Actor model since it has no global states.
From the definition of an Actor, it can be seen that numerous events take place: local decisions, creating Actors, sending messages, receiving messages, and designating how to respond to the next message received. Partial orderings on such events have been axiomatized in the Actor model and their relationship to physics explored (see Actor model theory ).
According to Hewitt (2006), the Actor model is based on physics in contrast with other models of computation that were based on mathematical logic, set theory, algebra, etc. Physics influenced the Actor model in many ways, especially quantum physics and relativistic physics . One issue is what can be observed about Actor systems. The question does not have an obvious answer because it poses both theoretical and observational challenges similar to those that had arisen in constructing the foundations of quantum physics. In concrete terms for Actor systems, typically we cannot observe the details by which the arrival order of messages for an Actor is determined (see Indeterminacy in concurrent computation ). Attempting to do so affects the results and can even push the indeterminacy elsewhere. e.g. , see metastability in electronics . Instead of observing the insides of arbitration processes of Actor computations, we await the outcomes.
The Actor model builds on previous models of computation.
The lambda calculus of Alonzo Church can be viewed as the earliest message passing programming language (see Hewitt, Bishop, and Steiger 1973; Abelson and Sussman 1985 ). For example, the lambda expression below implements a tree data structure when supplied with parameters for a leftSubTree and rightSubTree . When such a tree is given a parameter message "getLeft" , it returns leftSubTree and likewise when given the message "getRight" it returns rightSubTree .
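The lambda expression itself is not reproduced above. The following is a minimal illustrative sketch in Python rather than the original lambda-calculus notation; the message strings "getLeft" and "getRight" follow the text, while the name make_tree is invented for this example.

    # Hypothetical sketch: a tree is just a function closed over its two
    # subtrees, and "sending" it a message selects one of them.
    make_tree = lambda left_subtree, right_subtree: (
        lambda message: left_subtree if message == "getLeft" else right_subtree
    )

    t = make_tree("L", "R")
    assert t("getLeft") == "L" and t("getRight") == "R"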
However, the semantics of the lambda calculus were expressed using variable substitution in which the values of parameters were substituted into the body of an invoked lambda expression. The substitution model is unsuitable for concurrency because it does not allow the capability of sharing of changing resources. Inspired by the lambda calculus, the interpreter for the programming language Lisp made use of a data structure called an environment so that the values of parameters did not have to be substituted into the body of an invoked lambda expression. This allowed for sharing of the effects of updating shared data structures but did not provide for concurrency.
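As a hedged illustration of the environment idea, the following Python sketch shows two closures sharing one mutable environment instead of receiving substituted copies of a value; the names make_counter, increment, and read are invented for this example and do not come from the Lisp literature.

    # Both closures capture the same environment, so an update made
    # through one is visible to the other -- the "sharing of the effects
    # of updating shared data structures" described above.
    def make_counter():
        env = {"count": 0}          # the shared environment
        def increment():
            env["count"] += 1
            return env["count"]
        def read():
            return env["count"]
        return increment, read

    inc, read = make_counter()
    inc(); inc()
    assert read() == 2              # both closures observe the shared update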
Simula 67 pioneered using message passing for computation, motivated by discrete event simulation applications. These applications had become large and unmodular in previous simulation languages. At each time step, a large central program would have to go through and update the state of each simulation object that changed depending on the state of whichever simulation objects it interacted with on that step. Kristen Nygaard and Ole-Johan Dahl developed the idea (first described in an IFIP workshop in 1967) of having methods on each object that would update its own local state based on messages from other objects. In addition they introduced a class structure for objects with inheritance . Their innovations considerably improved the modularity of programs.
However, Simula used coroutine control structure instead of true concurrency.
Alan Kay was influenced by message passing in the pattern-directed invocation of Planner in developing Smalltalk -71. Hewitt was intrigued by Smalltalk-71 but was put off by the complexity of communication that included invocations with many fields including global , sender , receiver , reply-style , status , reply , operator selector , etc.
In 1972 Kay visited MIT and discussed some of his ideas for Smalltalk-72 building on the Logo work of Seymour Papert and the "little person" model of computation used for teaching children to program. However, the message passing of Smalltalk-72 was quite complex: code in the language was viewed by the interpreter as simply a stream of tokens, a complexity that Dan Ingalls later described in detail.
Thus the message-passing model in Smalltalk-72 was closely tied to a particular machine model and programming-language syntax that did not lend itself to concurrency. Also, although the system was bootstrapped on itself, the language constructs were not formally defined as objects that respond to Eval messages (see discussion below). This led some to believe that a new mathematical model of concurrent computation based on message passing should be simpler than Smalltalk-72.
Subsequent versions of the Smalltalk language largely followed the path of using the virtual methods of Simula in the message-passing structure of programs. However Smalltalk-72 made primitives such as integers, floating point numbers, etc. into objects . The authors of Simula had considered making such primitives into objects but refrained largely for efficiency reasons. Java at first used the expedient of having both primitive and object versions of integers, floating point numbers, etc. The C# programming language (and later versions of Java, starting with Java 1.5) adopted using boxing and unboxing , a variant of which had been used earlier in some Lisp implementations.
The Smalltalk system went on to become very influential, innovating in bitmap displays, personal computing, the class browser interface, and many other ways. For details see Kay's The Early History of Smalltalk . [ 1 ] Meanwhile, the Actor efforts at MIT remained focused on developing the science and engineering of higher level concurrency. (See the paper by Jean-Pierre Briot for ideas that were developed later on how to incorporate some kinds of Actor concurrency into later versions of Smalltalk.)
Prior to the development of the Actor model, Petri nets were widely used to model nondeterministic computation. However, they were widely acknowledged to have an important limitation: they modeled control flow but not data flow. Consequently, they were not readily composable, thereby limiting their modularity. Hewitt pointed out another difficulty with Petri nets: simultaneous action. I.e. , the atomic step of computation in Petri nets is a transition in which tokens simultaneously disappear from the input places of a transition and appear in the output places. The physical basis of using a primitive with this kind of simultaneity seemed questionable to him. Despite these apparent difficulties, Petri nets continue to be a popular approach to modelling concurrency, and are still the subject of active research.
Prior to the Actor model, concurrency was defined in low-level machine terms of threads , locks and buffers ( channels ). It certainly is the case that implementations of the Actor model typically make use of these hardware capabilities. However, there is no reason that the model could not be implemented directly in hardware without exposing any hardware threads and locks. Also, there is no necessary relationship between the number of Actors, threads, and locks that might be involved in a computation. Implementations of the Actor model are free to make use of threads and locks in any way that is compatible with the laws for Actors.
An important challenge in defining the Actor model was to abstract away implementation details.
For example, consider the following question: "Does each Actor have a queue in which its communications are stored until received by the Actor to be processed?" Carl Hewitt argued against including such queues as an integral part of the Actor model. One consideration was that such queues could themselves be modeled as Actors that received messages to enqueue and dequeue the communications. Another consideration was that some Actors would not use such queues in their actual implementation. E.g., an Actor might have a network of arbiters instead. Of course, there is a mathematical abstraction which is the sequence of communications that have been received by an Actor. But this sequence emerged only as the Actor operated. In fact the ordering of this sequence can be indeterminate (see Indeterminacy in concurrent computation ).
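The following Python sketch, using the invented name QueueActor, illustrates the first consideration above: a communications queue can itself be modeled as something that merely responds to "enqueue" and "dequeue" messages. It is a sequential toy under that assumption, not an implementation of the Actor model.

    from collections import deque

    class QueueActor:
        """A mailbox modeled as an entity that itself answers messages."""
        def __init__(self):
            self._items = deque()

        def receive(self, message, payload=None):
            # Behaviour is defined entirely by the messages accepted.
            if message == "enqueue":
                self._items.append(payload)
            elif message == "dequeue":
                return self._items.popleft() if self._items else None
            else:
                raise ValueError("unknown message: " + repr(message))

    mailbox = QueueActor()
    mailbox.receive("enqueue", "hello")
    assert mailbox.receive("dequeue") == "hello"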
Another example of abstracting away implementation detail was the question of interpretation : "Should interpretation be an integral part of the Actor model?" The idea of interpretation is that an Actor would be defined by how its program script processed eval messages. (In this way Actors would be defined in a manner analogous to Lisp which was "defined" by a meta-circular interpreter procedure named eval written in Lisp.) Hewitt argued against making interpretation integral to the Actor model. One consideration was that to process the eval messages, the program script of an Actor would itself have a program script (which in turn would have ...)! Another consideration was that some Actors would not use interpretation in their actual implementation. E.g., an Actor might be implemented in hardware instead. Of course there is nothing wrong with interpretation per se . Also implementing interpreters using eval messages is more modular and extensible than the monolithic interpreter approach of Lisp.
Nevertheless, progress developing the model was steady. In 1975, Irene Greif published the first operational model in her dissertation.
Gerald Sussman and Guy Steele then took an interest in Actors and published a paper on their Scheme interpreter in which they concluded "we discovered that the 'actors' and the lambda expressions were identical in implementation." According to Hewitt, the lambda calculus is capable of expressing some kinds of parallelism but, in general, not the concurrency expressed in the Actor model. On the other hand, the Actor model is capable of expressing all of the parallelism in the lambda calculus.
Two years after Greif published her operational model, Carl Hewitt and Henry Baker published the Laws for Actors.
Using the laws of the Actor model, Hewitt and Baker proved that any Actor that behaves like a function is continuous in the sense defined by Dana Scott (see denotational semantics ).
Aki Yonezawa published his specification and verification techniques for Actors. Russ Atkinson and Carl Hewitt published a paper on specification and proof techniques for serializers providing an efficient solution to encapsulating shared resources for concurrency control .
Finally eight years after the first Actor publication, Will Clinger (building on the work of Irene Greif 1975, Gordon Plotkin 1976, Michael Smyth 1978, Henry Baker 1978, Francez, Hoare , Lehmann, and de Roever 1979, and Milne and Milnor 1979) published the first satisfactory mathematical denotational model incorporating unbounded nondeterminism using domain theory in his dissertation in 1981 (see Clinger's model ). Subsequently, Hewitt [2006] augmented the diagrams with arrival times to construct a technically simpler denotational model that is easier to understand. See History of denotational semantics . | https://en.wikipedia.org/wiki/History_of_the_Actor_model |
The history of the Berkeley Software Distribution began in the 1970s when University of California, Berkeley received a copy of Unix . Professors and students at the university began adding software to the operating system and released it as BSD to select universities. Since it contained proprietary Unix code, it originally had to be distributed subject to AT&T licenses. The bundled software from AT&T was then rewritten and released as free software under the BSD license . However, this resulted in a lawsuit with Unix System Laboratories, the AT&T subsidiary responsible for Unix. Eventually, in the 1990s, the final versions of BSD were publicly released without any proprietary licenses, which led to many descendants of the operating system that are still maintained today.
The earliest distributions of Unix from Bell Labs in the 1970s included the source code to the operating system, allowing researchers at universities to modify and extend Unix. The operating system arrived at Berkeley in 1974, at the request of computer science professor Bob Fabry who had been on the program committee for the Symposium on Operating Systems Principles where Unix was first presented. A PDP-11/45 was bought to run the system, but for budgetary reasons, this machine was shared with the mathematics and statistics groups at Berkeley, who used RSTS , so that Unix only ran on the machine eight hours per day (sometimes during the day, sometimes during the night). A larger PDP-11/70 was installed at Berkeley the following year, using money from the Ingres database project. [ 1 ]
Also in 1975, Ken Thompson took a sabbatical from Bell Labs and came to Berkeley as a visiting professor. He helped to install Version 6 Unix and started working on a Pascal implementation for the system. Graduate students Chuck Haley and Bill Joy improved Thompson's Pascal and implemented an improved text editor, ex . [ 1 ] Other universities became interested in the software at Berkeley, and so in 1977 Joy started compiling the first Berkeley Software Distribution (1BSD), which was released on March 9, 1978. [ 2 ] 1BSD was an add-on to Version 6 Unix rather than a complete operating system in its own right. Some thirty copies were sent out. [ 1 ]
The Second Berkeley Software Distribution (2BSD), released in May 1979, [ 3 ] included updated versions of the 1BSD software as well as two new programs by Joy that persist on Unix systems to this day: the vi text editor (a visual version of ex ) and the C shell . Some 75 copies of 2BSD were sent out by Bill Joy. [ 1 ] A further feature was a networking package called Berknet , developed by Eric Schmidt as part of his master's thesis work, that could connect up to twenty-six computers and provided email and file transfer. [ 4 ]
After 3BSD (see below) had come out for the VAX line of computers, new releases of 2BSD for the PDP-11 were still issued and distributed through USENIX ; for example, 1982's 2.8.1BSD included a collection of fixes for performance problems in Version 7 Unix , [ 5 ] and later releases contained ports of changes from the VAX-based releases of BSD back to the PDP-11 architecture. 2.9BSD from 1983 included code from 4.1cBSD, and was the first release that was a full OS (a modified V7 Unix) rather than a set of applications and patches.
The most recent release, 2.11BSD , was first issued in 1991. [ 6 ] Unlike the previous releases, it required split instruction/data space , to accommodate the ever-increasing size of its utility programs.
In the 21st century, maintenance updates from volunteers continued: patch #490 was released on April 16, 2025. [ 7 ]
A DEC VAX computer was installed at Berkeley in 1978, but the port of Version 7 Unix to the VAX architecture, UNIX/32V , did not take advantage of the VAX's virtual memory capabilities. The kernel of 32V was largely rewritten by Berkeley graduate student Özalp Babaoğlu to include a virtual memory implementation, and a complete operating system including the new kernel, ports of the 2BSD utilities to the VAX, and the utilities from 32V was released as 3BSD at the end of 1979. 3BSD was also alternatively called Virtual VAX/UNIX or VMUNIX (for Virtual Memory Unix), and BSD kernel images were normally called /vmunix until 4.4BSD.
The success of 3BSD was a major factor in the Defense Advanced Research Projects Agency 's (DARPA) decision to fund Berkeley's Computer Systems Research Group (CSRG), which would develop a standard Unix platform for future DARPA research in the VLSI Project .
4BSD (November 1980) offered a number of enhancements over 3BSD, notably job control in the previously released csh , delivermail (the ancestor of sendmail ), "reliable" signals , and the Curses programming library. In a 1985 review of BSD releases, John Quarterman et al. wrote: [ 8 ]
4BSD was the operating system of choice for VAXs from the beginning until the release of System III (1979–1982) [...] Most organizations would buy a 32V license and order 4BSD from Berkeley without ever bothering to get a 32V tape. Many installations inside the Bell System ran 4.1BSD (many still do, and many others run 4.2BSD).
4.1BSD (June 1981) was a response to criticisms of BSD's performance relative to the dominant VAX operating system, VMS . The 4.1BSD kernel was systematically tuned up by Bill Joy until it could perform as well as VMS on several benchmarks. The release would have been called 5BSD , but after objections from AT&T the name was changed; AT&T feared confusion with AT&T 's UNIX System V . [ 9 ] Several tapes have turned up, all with a label that says 4.1BSD, yet differences between the tapes are present. [ 10 ] The software development that would lead from 4.1BSD to 4.2BSD was funded from sources including ARPA, Order Number 4031, Contract N00039-82-C-0235, which was in effect at least from November 15, 1981 through September 30, 1983. [ 11 ] [ 12 ]
4.2BSD (August 1983) would take over two years to implement and contained several major overhauls. Before its official release came three intermediate versions: 4.1a from April 1982 [ 13 ] incorporated a modified version of BBN's preliminary TCP/IP implementation; 4.1b from June 1982 included the new Berkeley Fast File System , implemented by Marshall Kirk McKusick ; and 4.1c in April 1983 was an interim release during the last few months of 4.2BSD's development. Back at Bell Labs, 4.1cBSD became the basis of the 8th Edition of Research Unix , and a commercially supported version was available from mt Xinu .
To guide the design of 4.2BSD, Duane Adams of DARPA formed a "steering committee" consisting of Bob Fabry , Bill Joy and Sam Leffler from UCB , Alan Nemeth and Rob Gurwitz from BBN, Dennis Ritchie from Bell Labs , Keith Lantz from Stanford , Rick Rashid from Carnegie Mellon , Bert Halstead from MIT , Dan Lynch from ISI , and Gerald J. Popek of UCLA . The committee met from April 1981 to June 1983.
Apart from the Fast File System, several features from outside contributors were accepted, including disk quotas and job control. Sun Microsystems provided testing on its Motorola 68000 machines prior to release, improving portability of the system. [ 8 ] Sun hardware support is plainly visible in the 4.1c BSD artifacts in the CSRG ISO. [ 14 ]
The official 4.2BSD release came in August 1983. It was notable as the first version released after the 1982 departure of Bill Joy to co-found Sun Microsystems; Mike Karels and Marshall Kirk McKusick took on leadership roles within the project from that point forward. On a lighter note, it also marked the debut of BSD's daemon mascot in a drawing by John Lasseter that appeared on the cover of the printed manuals distributed by USENIX .
4.3BSD was released in June 1986. Its main changes were to improve the performance of many of the new contributions of 4.2BSD that had not been as heavily tuned as the 4.1BSD code. Prior to the release, BSD's implementation of TCP/IP had diverged considerably from BBN's official implementation. After several months of testing, DARPA determined that the 4.2BSD version was superior and would remain in 4.3BSD. (See also History of the Internet .)
After 4.3BSD, it was determined that BSD would move away from the aging VAX platform. The Power 6/32 platform (codenamed "Tahoe") developed by Computer Consoles Inc. seemed promising at the time, but was abandoned by its developers shortly thereafter. Nonetheless, the 4.3BSD-Tahoe port (June 1988) proved valuable, as it led to a separation of machine-dependent and machine-independent code in BSD which would improve the system's future portability.
Apart from portability, the CSRG worked on an implementation of the OSI network protocol stack, improvements to the kernel virtual memory system and (with Van Jacobson of LBL ) new TCP/IP algorithms to accommodate the growth of the Internet. [ 15 ]
Until then, all versions of BSD incorporated proprietary AT&T Unix code and were, therefore, subject to an AT&T software license. Source code licenses had become very expensive and several outside parties had expressed interest in a separate release of the networking code, which had been developed entirely outside AT&T and would not be subject to the licensing requirement. This led to Networking Release 1 ( Net/1 ), which was made available to non-licensees of AT&T code and was freely redistributable under the terms of the BSD license . It was released in June 1989.
4.3BSD-Reno came in early 1990. It was an interim release during the early development of 4.4BSD, and its use was considered a "gamble", hence the naming after the gambling center of Reno, Nevada . This release explicitly moved towards POSIX compliance. [ 15 ] Among the new features were an NFS implementation from the University of Guelph , a status key ("Ctrl-T") and support for the HP 9000 range of computers, originating in the University of Utah 's "HPBSD" port. [ 16 ]
In August 2006, InformationWeek magazine rated 4.3BSD as the "Greatest Software Ever Written". [ 17 ] They commented: "BSD 4.3 represents the single biggest theoretical undergirder of the Internet."
After Net/1, BSD developer Keith Bostic proposed that more non-AT&T sections of the BSD system be released under the same license as Net/1. To this end, he started a project to reimplement most of the standard Unix utilities without using the AT&T code. For example, vi , which had been based on the original Unix version of ed , was rewritten as nvi (new vi). Within eighteen months, all of the AT&T utilities had been replaced, and it was determined that only a few AT&T files remained in the kernel. These files were removed, and the result was the June 1991 release of Networking Release 2 , aka Network(ing) 2 or Net/2 , a nearly complete operating system that was freely distributable.
Net/2 was the basis for two separate ports of BSD to the Intel 80386 architecture: the free 386BSD by William Jolitz and the proprietary BSD/386 (later renamed BSD/OS) by Berkeley Software Design (BSDi). 386BSD itself was short-lived, but became the initial code base of the NetBSD and FreeBSD projects that were started shortly thereafter.
BSDi soon found itself in legal trouble with AT&T's Unix System Laboratories (USL) subsidiary, then the owners of the System V copyright and the Unix trademark. The USL v. BSDi lawsuit was filed in April 1992 and led to an injunction on the distribution of Net/2 until the validity of USL's copyright claims on the source could be determined.
The lawsuit slowed development of the free-software descendants of BSD for nearly two years while their legal status was in question, and as a result systems based on the Linux kernel , which did not have such legal ambiguity, gained greater support. Although not released until 1992, development of 386BSD predated that of Linux. Linus Torvalds has said that if 386BSD or the GNU kernel had been available at the time, he probably would not have created Linux. [ 18 ] [ 19 ]
In August 1992, 4.4BSD-Alpha was released. In June 1993, 4.4BSD-Encumbered was released only to USL licensees.
The lawsuit was settled in January 1994, largely in Berkeley's favor. Of the 18,000 files in the Berkeley distribution, only three had to be removed and 70 modified to show USL copyright notices. A further condition of the settlement was that USL would not file further lawsuits against users and distributors of the Berkeley-owned code in the upcoming 4.4BSD release. Marshall Kirk McKusick summarizes the lawsuit and its outcome: [ 20 ]
Code copying and theft of trade secrets was alleged. The actual infringing code was not identified for nearly two years. The lawsuit could have dragged on for much longer but for the fact that Novell bought USL from AT&T and sought a settlement. In the end, three files were removed from the 18,000 that made up the distribution, and a number of minor changes were made to other files. In addition, the University agreed to add USL copyrights to about 70 files, with the stipulation that those files continued to be freely redistributed.
In March 1994, 4.4BSD-Lite was released; it no longer required a USL source license and also contained many other changes over the original 4.4BSD-Encumbered release.
The final release from Berkeley was 1995's 4.4BSD-Lite Release 2 , after which the CSRG was dissolved and development of BSD at Berkeley ceased. Since then, several variants based directly or indirectly on 4.4BSD-Lite (such as FreeBSD , NetBSD , OpenBSD and DragonFly BSD ) have been maintained.
In addition, the permissive nature of the BSD license has allowed many other operating systems, both free and proprietary, to incorporate BSD code. For example, Microsoft Windows has used BSD-derived code in its implementation of TCP/IP [ 21 ] and bundles recompiled versions of BSD's command-line networking tools since Windows 2000 . [ 22 ] Also Darwin , the system on which Apple's macOS is built, is a derivative of 4.4BSD-Lite2 and FreeBSD. Various commercial Unix operating systems, such as Solaris , also contain varying amounts of BSD code.
BSD has been the base of a large number of operating systems. Most notable among these today are perhaps the major open source BSDs: FreeBSD, NetBSD and OpenBSD, which are all derived from 386BSD and 4.4BSD-Lite by various routes. Both NetBSD and FreeBSD started life in 1993, initially derived from 386BSD, but migrated to a 4.4BSD-Lite code base in 1994. OpenBSD was forked in 1995 from NetBSD. A number of commercial operating systems are also partly or wholly based on BSD or its descendants, including Sun's SunOS and Apple Inc. 's macOS .
Most of the current BSD operating systems are open source and available for download, free of charge, under the BSD License , the most notable exception being macOS . They also generally use a monolithic kernel architecture, apart from macOS and DragonFly BSD which feature hybrid kernels . The various open source BSD projects generally develop the kernel and userland programs and libraries together, the source code being managed using a single central source repository.
In the past, BSD was also used as a basis for several proprietary versions of Unix, such as Sun 's SunOS , Sequent 's Dynix , NeXT 's NeXTSTEP , DEC 's Ultrix and OSF/1 AXP (now Tru64 UNIX ). Parts of NeXT's software became the foundation for macOS , among the most commercially successful BSD variants in the general market.
A selection of significant Unix versions and Unix-like operating systems that descend from BSD includes: | https://en.wikipedia.org/wiki/History_of_the_Berkeley_Software_Distribution |
The history of the Big Bang theory began with the Big Bang 's development from observations and theoretical considerations. Much of the theoretical work in cosmology now involves extensions and refinements to the basic Big Bang model. The theory itself was originally formalised by Father Georges Lemaître in 1927. [ 1 ] Hubble's law of the expansion of the universe provided foundational support for the theory.
In medieval philosophy , there was much debate over whether the universe had a finite or infinite past (see Temporal finitism ). The philosophy of Aristotle held that the universe had an infinite past, which caused problems for past Jewish and Islamic philosophers who were unable to reconcile the Aristotelian conception of the eternal with the Abrahamic view of creation . [ 2 ] As a result, a variety of logical arguments for the universe having a finite past were developed by John Philoponus , Al-Kindi , Saadia Gaon , Al-Ghazali and Immanuel Kant , among others. [ 3 ]
English theologian Robert Grosseteste explored the nature of matter and the cosmos in his 1225 treatise De Luce ( On Light ). He described the birth of the universe in an explosion and the crystallization of matter to form stars and planets in a set of nested spheres around Earth. De Luce is the first attempt to describe the heavens and Earth using a single set of physical laws. [ 4 ]
In 1610, Johannes Kepler used the dark night sky to argue for a finite universe. Seventy-seven years later, Isaac Newton described large-scale motion throughout the universe.
The description of a universe that expanded and contracted in a cyclic manner was first put forward in a poem published in 1791 by Erasmus Darwin . Edgar Allan Poe presented a similar cyclic system in his 1848 essay titled Eureka: A Prose Poem ; it is obviously not a scientific work, but Poe, while starting from metaphysical principles, tried to explain the universe using contemporary physical and mental knowledge. Ignored by the scientific community and often misunderstood by literary critics, its scientific implications have been reevaluated in recent times.
According to Poe, the initial state of matter was a single "Primordial Particle". "Divine Volition", manifesting itself as a repulsive force, fragmented the Primordial Particle into atoms. Atoms spread evenly throughout space until the repulsive force stops and attraction appears as a reaction: matter then begins to clump together, forming stars and star systems, while the material universe is drawn back together by gravity, finally collapsing and eventually returning to the Primordial Particle stage so that the process of repulsion and attraction can begin once again. This part of Eureka describes a Newtonian evolving universe which shares a number of properties with relativistic models, and for this reason Poe anticipates some themes of modern cosmology. [ 5 ]
Observationally, in the 1910s, Vesto Slipher and later, Carl Wilhelm Wirtz , determined that most spiral nebulae (now called spiral galaxies ) were receding from Earth. Slipher used spectroscopy to investigate the rotation periods of planets and the composition of planetary atmospheres, and was the first to observe the radial velocities of galaxies. Wirtz observed a systematic redshift of nebulae, which was difficult to interpret in terms of a cosmology in which the universe is filled more or less uniformly with stars and nebulae. They were not aware of the cosmological implications, nor that the supposed nebulae were actually galaxies outside our own Milky Way . [ 6 ]
Also in that decade, Albert Einstein 's theory of general relativity was found to admit no static cosmological solutions , given the basic assumptions of cosmology described in the Big Bang's theoretical underpinnings . The universe (i.e., the space-time metric) was described by a metric tensor that was either expanding or shrinking (i.e., was not constant or invariant). This result, coming from an evaluation of the field equations of the general theory, at first led Einstein himself to consider that his formulation of the field equations of the general theory may be in error, and he tried to correct it by adding a cosmological constant . This constant would restore to the general theory's description of space-time an invariant metric tensor for the fabric of space/existence. The first person to seriously apply general relativity to cosmology without the stabilizing cosmological constant was Alexander Friedmann . Friedmann derived the expanding-universe solution to general relativity field equations in 1922. Friedmann's 1924 papers included " Über die Möglichkeit einer Welt mit konstanter negativer Krümmung des Raumes " ( About the possibility of a world with constant negative curvature ) which was published by the Berlin Academy of Sciences on 7 January 1924. [ 7 ] Friedmann's equations describe the Friedmann–Lemaître–Robertson–Walker universe.
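For reference (a standard textbook form, not quoted from the source), the Friedmann equations relate the scale factor $a(t)$ of such a universe to its energy density $\rho$, pressure $p$, spatial curvature $k$ and the cosmological constant $\Lambda$:

$$\left(\frac{\dot{a}}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}, \qquad \frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}.$$

Setting $\Lambda = 0$ recovers Friedmann's original 1922 expanding solutions, while Einstein's added cosmological constant was chosen precisely to permit the static solution he initially sought.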
In 1927, the Belgian physicist Georges Lemaître proposed an expanding model for the universe to explain the observed redshifts of spiral nebulae, and derived what is now known as the Hubble law . He based his theory on the work of Einstein and de Sitter , and independently derived Friedmann's equations for an expanding universe. Also, the redshifts themselves were not constant, but varied in such a manner as to lead to the conclusion that there was a definite relationship between the amount of redshift of nebulae, and their distance from observers.
In 1929, Edwin Hubble provided a comprehensive observational foundation for Lemaître's theory. Hubble's observations showed that, relative to the Earth and all other observed bodies, galaxies are receding in every direction at velocities (calculated from their observed redshifts) directly proportional to their distance from the Earth and each other. In the same year, Hubble and Milton Humason formulated the empirical Redshift Distance Law of galaxies, nowadays known as Hubble's law , which, once the redshift is interpreted as a measure of recession speed, is consistent with the solutions of Einstein's general relativity equations for a homogeneous, isotropic expanding universe. The law states that the greater the distance between any two galaxies, the greater their relative speed of separation. If everything is moving away from everything else, then everything must once have been closer together. The logical conclusion is that at some point, all matter started from a single point a few millimetres across before exploding outward; it was so hot that it consisted of only raw energy for hundreds of thousands of years before matter could form. Whatever happened had to unleash an unfathomable force, since the universe is still expanding billions of years later. [ citation needed ]
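In modern notation (a standard formulation rather than Hubble's own), the law is written as

$$v = H_{0}\,D,$$

where $v$ is a galaxy's recession velocity, $D$ its proper distance, and $H_{0}$ the Hubble constant. Hubble's own 1929 estimate of $H_{0}$, roughly 500 km s$^{-1}$ Mpc$^{-1}$, was far larger than the approximately 70 km s$^{-1}$ Mpc$^{-1}$ measured today.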
In 1931, Lemaître proposed in his " hypothèse de l'atome primitif " (hypothesis of the primeval atom) that the universe began with the "explosion" of the "primeval atom " – what was later called the Big Bang. Lemaître first took cosmic rays to be the remnants of the event, although it is now known that they originate within the local galaxy . Lemaître had to wait until shortly before his death to learn of the discovery of cosmic microwave background radiation , the remnant radiation of a dense and hot phase in the early universe. [ 8 ]
Hubble's law suggested that the universe was expanding, challenging the prevailing picture of a static universe. Combined with the cosmological principle , whereby the universe, when viewed on sufficiently large distance scales, has no preferred directions or preferred places, Hubble's result allowed for two opposing hypotheses. One was Lemaître's Big Bang, advocated and developed by George Gamow . The other model was Fred Hoyle 's steady-state model , in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time. It was actually Hoyle who coined the name of Lemaître's theory, referring to it as "this 'big bang' idea" during a radio broadcast on 28 March 1949, on BBC 's Third Programme . It is popularly reported that Hoyle, who favored an alternative "steady-state" cosmological model, intended this to be pejorative, but Hoyle explicitly denied this and said it was just a striking image meant to highlight the difference between the two models. [ 9 ] Hoyle repeated the term in further broadcasts in early 1950, as part of a series of five lectures entitled The Nature of The Universe . The text of each lecture was published in The Listener a week after the broadcast, the first time that the term "big bang" appeared in print. [ 10 ] As evidence in favour of the Big Bang model mounted, and the consensus became widespread, Hoyle himself, albeit somewhat reluctantly, accepted it by formulating a new cosmological model that other scientists later referred to as the "steady Bang". [ 11 ]
From around 1950 to 1965, support for these theories was evenly divided, with a slight imbalance arising from the fact that the Big Bang theory could explain both the formation and the observed abundances of hydrogen and helium , whereas the steady-state model could explain how they were formed, but not why they should have the observed abundances. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. Objects such as quasars and radio galaxies were observed to be much more common at large distances (therefore in the distant past) than in the nearby universe, whereas the steady-state model predicted that the average properties of the universe should be unchanging with time. In addition, the discovery of the cosmic microwave background radiation in 1964 was considered the death knell of the steady-state model, although the Big Bang's prediction of this radiation had been only qualitative and failed to predict its exact temperature. (The key Big Bang prediction is the black-body spectrum of the CMB, which was not measured with high accuracy until COBE in 1990.) After some reformulation, the Big Bang has been regarded as the best theory of the origin and evolution of the cosmos.

Before the late 1960s, many cosmologists thought the infinitely dense and physically paradoxical singularity at the starting time of Friedmann's cosmological model could be avoided by allowing for a universe which was contracting before entering the hot dense state, and starting to expand again. This was formalized as Richard Tolman 's oscillating universe . In the 1960s, Stephen Hawking and others demonstrated that this idea was unworkable, [ citation needed ] and that the singularity is an essential feature of the physics described by Einstein's gravity. This led the majority of cosmologists to accept the notion that the universe as currently described by the physics of general relativity has a finite age. However, due to the lack of a theory of quantum gravity , there is no way to say whether the singularity is an actual origin point for the universe, or whether the physical processes that govern the regime cause the universe to be effectively eternal in character.
Through the 1970s and 1980s, most cosmologists accepted the Big Bang, but several puzzles remained, including the non-discovery of anisotropies in the CMB, and occasional observations hinting at deviations from a black-body spectrum; thus the theory was not very strongly confirmed.
Huge advances in Big Bang cosmology were made in the 1990s and the early 21st century, as a result of major advances in telescope technology in combination with large amounts of satellite data, such as COBE , the Hubble Space Telescope and WMAP .
In 1990, measurements from the COBE satellite showed that the spectrum of the CMB matches a 2.725 K black-body to very high precision; deviations do not exceed 2 parts in 100 000 . This showed that earlier claims of spectral deviations were incorrect, and essentially proved that the universe was hot and dense in the past, since no other known mechanism can produce a black-body to such high accuracy. Further observations from COBE in 1992 discovered the very small anisotropies of the CMB on large scales, approximately as predicted from Big Bang models with dark matter . From then on, models of non-standard cosmology without some form of Big Bang became very rare in the mainstream astronomy journals.
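The "black-body" benchmark here is the Planck spectrum (standard physics, not a quotation from the COBE results): a body in thermal equilibrium at temperature $T$ emits the spectral radiance

$$B_{\nu}(T) = \frac{2h\nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/(k_{B}T)} - 1},$$

and COBE found the CMB to follow this single-parameter curve, with $T \approx 2.725\,\mathrm{K}$, across its measured frequency range.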
In 1998, measurements of distant supernovae indicated that the expansion of the universe is accelerating, and this was supported by other observations including ground-based CMB observations and large galaxy red-shift surveys. In 1999–2000, the Boomerang and Maxima balloon-borne CMB observations showed that the geometry of the universe is close to flat; then in 2001 the 2dFGRS galaxy red-shift survey estimated the mean matter density at around 25–30 percent of the critical density.
From 2001 to 2010, NASA 's WMAP spacecraft took very detailed pictures of the universe by means of the cosmic microwave background radiation. The images can be interpreted to indicate that the universe is 13.7 billion years old (within one percent error) and that the Lambda-CDM model and the inflationary theory are correct. No other cosmological theory can yet explain such a wide range of observed parameters, from the ratio of the elemental abundances in the early universe to the structure of the cosmic microwave background, the observed higher abundance of active galactic nuclei in the early universe and the observed masses of clusters of galaxies .
In 2013 and 2015, ESA's Planck spacecraft released even more detailed images of the cosmic microwave background, showing consistency with the Lambda-CDM model to still higher precision.
Much of the current work in cosmology includes understanding how galaxies form in the context of the Big Bang, understanding what happened in the earliest times after the Big Bang, and reconciling observations with the basic theory. Cosmologists continue to calculate many of the parameters of the Big Bang to a new level of precision, and carry out more detailed observations which are hoped to provide clues to the nature of dark energy and dark matter , and to test the theory of General Relativity on cosmic scales. | https://en.wikipedia.org/wiki/History_of_the_Big_Bang_theory |
The history of the Church–Turing thesis ("thesis") involves the history of the development of the study of the nature of functions whose values are effectively calculable; or, in more modern terms, functions whose values are algorithmically computable. It is an important topic in modern mathematical theory and computer science, particularly associated with the work of Alonzo Church and Alan Turing .
The debate and discovery of the meaning of "computation" and " recursion " has been long and contentious. This article provides detail of that debate and discovery from Peano's axioms in 1889 through recent discussion of the meaning of " axiom ".
In 1889, Giuseppe Peano presented his The principles of arithmetic, presented by a new method , based on the work of Dedekind . Soare proposes that the origination of "primitive recursion" began formally with the axioms of Peano, although:
Observe that in fact Peano's axioms are 9 in number and axiom 9 is the recursion/induction axiom. [ 2 ]
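As an illustration (a hypothetical sketch, not drawn from the source), addition defined by primitive recursion in the spirit of that recursion/induction axiom can be written in Python, with succ as the Peano successor:

```python
def succ(n):
    # Peano successor: the only primitive needed besides zero.
    return n + 1

def add(m, n):
    # Primitive recursion on n:
    #   add(m, 0)       = m
    #   add(m, succ(k)) = succ(add(m, k))
    if n == 0:
        return m
    return succ(add(m, n - 1))

assert add(3, 4) == 7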
At the International Congress of Mathematicians (ICM) in 1900 in Paris the famous mathematician David Hilbert posed a set of problems – now known as Hilbert's problems – his beacon illuminating the way for mathematicians of the twentieth century. Hilbert's 2nd and 10th problems introduced the " Entscheidungsproblem " (the "decision problem"). In his 2nd problem he asked for a proof that "arithmetic" is " consistent ". Kurt Gödel would prove in 1931 that, within what he called "P" (nowadays called Peano Arithmetic ), "there exist undecidable sentences [propositions]". [ 4 ] Because of this, "the consistency of P is unprovable in P, provided P is consistent". [ 5 ] While Gödel’s proof would display the tools necessary for Alonzo Church and Alan Turing to resolve the Entscheidungsproblem , he himself would not answer it.
It is within Hilbert's 10th problem where the question of an Entscheidungsproblem actually appears. The heart of the matter was the following question: "What do we mean when we say that a function is 'effectively calculable'"? The answer would be something to this effect: "When the function is calculated by a mechanical procedure (process, method)." Although stated easily nowadays, the question (and answer) would float about for almost 30 years before it was framed precisely.
Hilbert's original description of problem 10 begins as follows:
By 1922, the specific question of an Entscheidungsproblem applied to Diophantine equations had developed into the more general question about a "decision method" for any mathematical formula . Martin Davis explains it this way: Suppose we are given a "calculational procedure" that consists of (1) a set of axioms and (2) a logical conclusion written in first-order logic , that is—written in what Davis calls " Frege's rules of deduction" (or the modern equivalent of Boolean logic ). Gödel’s doctoral dissertation [ 7 ] proved that Frege's rules were complete "... in the sense that every valid formula is provable". [ 8 ] Given that encouraging fact, could there be a generalized "calculational procedure" that would tell us whether a conclusion can be derived from its premises? Davis calls such calculational procedures " algorithms ". The Entscheidungsproblem would be an algorithm as well. "In principle, an algorithm for [the] Entscheidungsproblem would have reduced all human deductive reasoning to brute calculation". [ 9 ]
In other words: Is there an "algorithm" that can tell us if any formula is "true" (i.e., an algorithm that always correctly yields a judgment "truth" or "falsehood")?
Indeed: What about our Entscheidungsproblem algorithm itself? Can it determine, in a finite number of steps, whether it, itself, is "successful" and "truthful" (that is, it does not get hung up in an endless "circle" or " loop ", and it correctly yields a judgment "truth" or "falsehood" about its own behavior and results)?
At the 1928 Congress [in Bologna, Italy ] Hilbert refines the question very carefully into three parts. The following is Stephen Hawking's summary:
Gabriel Sudan (1927) and Wilhelm Ackermann (1928) display recursive functions that are not primitive recursive:
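A minimal Python sketch of such a function, shown in Péter's later two-argument simplification rather than Ackermann's original three-argument form: it is total and effectively calculable, yet grows too fast to be primitive recursive.

```python
def ackermann(m, n):
    # Defined by nested (not primitive) recursion on both arguments.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Small values are tame; growth in m is explosive.
assert ackermann(2, 3) == 9     # 2n + 3 for m == 2
assert ackermann(3, 3) == 61    # 2^(n+3) - 3 for m == 3
```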
In subsequent years Kleene [ 13 ] observes that Rózsa Péter (1935) and Raphael Robinson (1948) simplified Ackermann's example ("cf. also Hilbert- Bernays 1934"). Péter exhibited another example (1935) that employed Cantor's diagonal argument . Péter (1950) and Ackermann (1940) also displayed " transfinite recursions ", and this led Kleene to wonder:
Kleene concludes [ 15 ] that all "recursions" involve (i) the formal analysis he presents in his §54 Formal calculations of primitive recursive functions and (ii) the use of mathematical induction . He immediately goes on to state that the Gödel-Herbrand definition does indeed "characterize all recursive functions" – see the quote in 1934, below .
In 1930, mathematicians gathered for a mathematics meeting and retirement event for Hilbert . As luck would have it,
Gödel announced that the answer to the first two of Hilbert's three questions of 1928 was NO.
Subsequently in 1931 Gödel published his famous paper On Formally Undecidable Propositions of Principia Mathematica and Related Systems I. In his preface to this paper Martin Davis delivers a caution:
To quote Kleene (1952), "The characterization of all "recursive functions" was accomplished in the definition of ' general recursive function ' by Gödel 1934, who built on a suggestion of Herbrand " (Kleene 1952:274). Gödel delivered a series of lectures at the Institute for Advanced Study (IAS), Princeton NJ. In a preface written by Martin Davis [ 19 ] Davis observes that
Dawson states that these lectures were meant to clarify concerns that the "incompleteness theorems were somehow dependent on the particularities of formalization": [ 21 ]
Kleene and Rosser transcribed Gödel's 1934 lectures in Princeton. In his paper General Recursive Functions of Natural Numbers [ 25 ] Kleene states:
Church's paper An Unsolvable Problem of Elementary Number Theory (1936) proved that the Entscheidungsproblem was undecidable within the λ-calculus and Gödel-Herbrand's general recursion; moreover Church cites two theorems of Kleene's that proved that the functions defined in the λ-calculus are identical to the functions defined by general recursion:
The paper opens with a very long footnote, 3. Another footnote, 9, is also of interest. Martin Davis states that "This paper is principally important for its explicit statement (since known as Church's thesis ) that the functions which can be computed by a finite algorithm are precisely the recursive functions, and for the consequence that an explicit unsolvable problem can be given": [ 28 ]
Footnote 9 is in section §4 Recursive functions :
Some time prior to Church's paper An Unsolvable Problem of Elementary Number Theory (1936) a dialog occurred between Gödel and Church as to whether or not λ-definability was sufficient for the definition of the notion of "algorithm" and "effective calculability".
In Church (1936) we see, under the chapter §7 The notion of effective calculability , a footnote 18 which states the following:
By "identifying" Church means – not "establishing the identity of" – but rather "to cause to be or become identical", "to conceive as united" (as in spirit, outlook or principle) (vt form), and (vi form) as "to be or become the same". [ 32 ]
Post's doubts as to whether or not recursion was an adequate definition of "effective calculability", plus the publishing of Church's paper, encouraged him in the fall of 1936 to propose a "formulation" with "psychological fidelity": A worker moves through "a sequence of spaces or boxes" [ 33 ] performing machine-like "primitive acts" on a sheet of paper in each box. The worker is equipped with "a fixed unalterable set of directions". [ 33 ] Each instruction consists of three or four symbols: (1) an identifying label/number, (2) an operation, (3) next instruction j i ; however, if the instruction is of type (e) and the determination is "yes" THEN instruction j i ' ELSE if it is "no" instruction j i . The "primitive acts" [ 33 ] are of only 1 of 5 types: (a) mark the paper in the box he's in (or over-mark a mark already there), (b) erase the mark (or over-erase), (c) move one room to the right, (d) move one room to the left, (e) determine if the paper is marked or blank. The worker starts at step 1 in the starting-room, and does what the instructions instruct them to do. (See more at Post–Turing machine .)
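A minimal Python sketch of such a worker (a hypothetical encoding, not Post's own notation): the row of boxes is a dictionary, the "fixed unalterable set of directions" is a numbered instruction table, and the five primitive acts appear as operations, with act (e) branching to one of two next instructions.

```python
from collections import defaultdict

def run_worker(directions, steps=1000):
    # Boxes are blank (' ') until marked; the row extends both ways.
    boxes = defaultdict(lambda: ' ')
    pos, label = 0, 1
    for _ in range(steps):
        instr = directions[label]
        act = instr[0]
        if act == 'stop':
            break
        if act == 'mark':        # (a) mark the box the worker is in
            boxes[pos] = '*'
        elif act == 'erase':     # (b) erase the mark
            boxes[pos] = ' '
        elif act == 'right':     # (c) move one box to the right
            pos += 1
        elif act == 'left':      # (d) move one box to the left
            pos -= 1
        elif act == 'test':      # (e) determine marked or blank, then branch
            label = instr[1] if boxes[pos] == '*' else instr[2]
            continue
        label = instr[1]
    return ''.join(boxes[i] for i in sorted(boxes))

# Illustrative directions only: mark, step right, mark again, stop.
directions = {1: ('mark', 2), 2: ('right', 3), 3: ('mark', 4), 4: ('stop',)}
print(run_worker(directions))   # '**'
```

Post's point is that such a fixed, finite list of directions over these five acts suffices for any "effective" computation, which is why the formulation is now read as equivalent to a Turing machine.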
This matter, mentioned in the introduction about "intuitive theories" caused Post to take a potent poke at Church:
In other words Post is saying "Just because you defined it so doesn't make it truly so; your definition is based on no more than an intuition." Post was searching for more than a definition: "The success of the above program would, for us, change this hypothesis not so much to a definition or to an axiom but to a natural law . Only so, it seems to the writer, can Gödel's theorem ... and Church's results ... be transformed into conclusions concerning all symbolic logics and all methods of solvability." [ 35 ]
This contentious stance finds grumpy expression in Alan Turing 1939, and it will reappear with Gödel, Gandy , and Sieg.
A. M. Turing's paper On Computable Numbers, With an Application to the Entscheidungsproblem was delivered to the London Mathematical Society in November 1936. Again the reader must bear in mind a caution: as used by Turing, the word "computer" means a human being, and the action of a "computer" he calls "computing"; for example, he states "Computing is normally done by writing certain symbols on paper" (p. 135). But he uses the word "computation" [ 36 ] in the context of his machine-definition, and his definition of "computable" numbers is as follows:
What is Turing's definition of his "machine"? Turing gives two definitions, the first a summary in §1 Computing machines and another, very similar, in §9.I, derived from his more detailed analysis of the actions of a human "computer". With regard to his definition in §1 he says that "justification lies in the fact that the human memory is necessarily limited", [ 38 ] and he concludes §1 with a bald assertion about his proposed machine, marked by his use of the word "all":
The emphasis of the word one in the above brackets is intentional. With regards to §9.I he allows the machine to examine more squares; it is this more-square sort of behavior that he claims typifies the actions of a computer (person):
Turing goes on in §2 to define a "computing machine" as (i) an "a-machine" ("automatic machine") as defined in §1, with the added restriction (ii): it prints two kinds of symbols – figures 0 and 1 – and other symbols. The figures 0 and 1 will represent "the sequence computed by the machine". [ 36 ]
Furthermore, for the number to be considered "computable", the machine must print an infinite number of 0's and 1's; if it does not, it is considered to be "circular"; otherwise it is considered to be "circle-free":
Although he doesn't call it his "thesis", Turing proposes a proof that his "computability" is equivalent to Church's "effective calculability":
The Appendix: Computability and effective calculability begins in the following manner; observe that he does not mention recursion here, and in fact his proof-sketch has his machine munch strings of symbols in the λ-calculus and the calculus munch "complete configurations" of his machine, and nowhere is recursion mentioned. The proof of the equivalence of machine-computability and recursion must wait for Kleene 1943 and 1952:
Gandy (1960) seems to confuse this bold proof-sketch with Church's Thesis ; see 1960 and 1995 below. Moreover a careful reading of Turing's definitions leads the reader to observe that Turing was asserting that the "operations" of his proposed machine in §1 are sufficient to compute any computable number, and the machine that imitates the action of a human "computer" as presented in §9.I is a variety of this proposed machine. This point will be reiterated by Turing in 1939.
Alan Turing's massive Princeton PhD thesis (under Alonzo Church ) appears as Systems of Logic Based on Ordinals . In it he summarizes the quest for a definition of "effectively calculable". He proposes a definition as shown in the boldface type that specifically identifies (renders identical) the notions of "machine computation" and "effectively calculable".
This is a powerful expression, because "identicality" is actually an unequivocal statement of necessary and sufficient conditions; in other words, there are no other contingencies to the identification except what interpretation is given to the words "function", "machine", "computable", and "effectively calculable":
J. B. Rosser's paper An Informal Exposition of Proofs of Gödel's Theorem and Church's Theorem [ 44 ] states the following:
Kleene defines "general recursive" functions and "partial recursive functions" in his paper Recursive Predicates and Quantifiers . The representing function, mu-operator, etc. make their appearance. He goes on in §12 Algorithm theories to state his famous Thesis I, what he would come to call Church's Thesis in 1952:
In his chapter §60, Kleene defines the " Church's thesis " as follows:
On page 317 he explicitly calls the above thesis "Church's thesis":
About Turing's "formulation", Kleene says:
Kleene proposes that what Turing showed: "Turing's computable functions (1936-1937) are those which can be computed by a machine of a kind which is designed, according to his analysis, to reproduce all the sorts of operations which a human computer could perform, working according to preassigned instructions." [ 53 ]
Kleene defines Turing's Thesis as follows:
Indeed immediately before this statement, Kleene states the Theorem XXX:
To his 1931 paper On Formally Undecidable Propositions , Gödel added a Note added 28 August 1963 which clarifies his opinion of the alternative forms/expression of "a formal system ". He reiterates his opinions even more clearly in 1964 (see below):
Gödel 1964 – In Gödel's Postscriptum to his lecture's notes of 1934 at the IAS at Princeton , [ 55 ] he repeats, but reiterates in even more bold terms, his less-than-glowing opinion about the efficacy of computability as defined by Church's λ-definability and recursion (we have to infer that both are denigrated because of his use of the plural "definitions" in the following). This was in a letter to Martin Davis (presumably as he was assembling The Undecidable ). The repeat of some of the phrasing is striking:
Footnote 3 is in the body of the 1934 lecture notes:
Davis does observe that "in fact the equivalence between his [Gödel's] definition [of recursion] and Kleene's [1936] is not quite trivial. So, despite appearances to the contrary, footnote 3 of these lectures is not a statement of Church's thesis ." [ 59 ]
Robin Gandy 's influential paper titled Church's Thesis and Principles for Mechanisms appears in Barwise et al. Gandy starts off with an unlikely expression of Church's Thesis , framed as follows:
Robert Soare (1995, see below) had issues with this framing, given that Church's paper (1936) was published prior to Turing's "Appendix proof" (1937).
Gandy attempts to "analyze mechanical processes and so to provide arguments for the following:
Gandy "exclude[s] from consideration devices which are essentially analogue machines ... .The only physical presuppositions made about mechanical devices (Cf Principle IV below) are that there is a lower bound on the linear dimensions of every atomic part of the device and that there is an upper bound (the velocity of light) on the speed of propagation of change". [ 62 ] But then he restricts his machines even more:
He in fact makes an argument for this "Thesis M" that he calls his "Theorem", the most important "Principle" of which is "Principle IV: Principle of local causation":
In 1985 the "Thesis M" was adapted for Quantum Turing machine , resulting in a Church–Turing–Deutsch principle . [ 64 ]
Soare 's thorough examination of Computability and Recursion appears. He quotes Gödel's 1964 opinion (above) with respect to the "much less suitable" definition of computability, and goes on to add:
Soare's footnote 7 (1995) also catches Gandy's "confusion", but apparently it continues into Gandy (1988). This confusion represents a serious error of research and/or thought and remains a cloud hovering over his whole program:
Breger points out a problem when one is approaching a notion "axiomatically": an "axiomatic system" may have embedded in it one or more tacit axioms that are unspoken when the axiom-set is presented.
For example, an active agent with knowledge (and capability) may be a (potential) fundamental axiom in any axiomatic system: "the know-how of a human being is necessary – a know-how which is not formalized in the axioms. ¶ ... Mathematics as a purely formal system of symbols without a human being possessing the know-how with the symbols is impossible ..." [ 67 ]
He quotes Hilbert :
Breger further supports his argument with examples from Giuseppe Veronese (1891) and Hermann Weyl (1930-1). He goes on to discuss the problem of the expression of an axiom-set in a particular language: i.e., a language known by the agent, e.g. German. [ 69 ] [ 70 ]
See more about this at Algorithm characterizations , in particular Searle 's opinion that outside any computation there must be an observer that gives meaning to the symbols used.
At the "Feferfest" – Solomon Feferman's 70th birthday – Wilfried Sieg first presents a paper written two years earlier titled "Calculations By Man and Machine: Conceptual Analysis", reprinted in (Sieg et al. 2002:390–409). Earlier Sieg published "Mechanical Procedures and Mathematical Experience" (in George 1994, p. 71ff) presenting a history of "calculability" beginning with Richard Dedekind and ending in the 1950s with the later papers of Alan Turing and Stephen Cole Kleene . The Feferfest paper distills the prior paper to its major points and dwells primarily on Robin Gandy 's paper of 1980. Sieg extends Turing's "computability by string machine" (human "computor") as reduced to mechanism "computability by letter machine" [ 71 ] to the parallel machines of Gandy.
Sieg cites more recent work, including "Kolmogorov and Uspensky's work on algorithms" and (De Pisapia 2000) (in particular, the KU-pointer machine-model), and artificial neural networks , [ 72 ] and asserts:
He claims to "step toward a more satisfactory stance ... [by] abstracting further away from particular types of configurations and operations ..." [ 72 ]
Whether the above statement is true or not is left to the reader to ponder. Sieg goes on to describe Gandy's analysis (see above 1980). In doing so he attempts to formalize what he calls " Gandy machines " (with a detailed analysis in an Appendix). About the Gandy machines: | https://en.wikipedia.org/wiki/History_of_the_Church–Turing_thesis |
The history of the Dylan programming language is presented here first as continuous text. The second section gives a timeline overview of the history and presents several milestones and watersheds. The third section presents quotations related to the history of the Dylan programming language.
Dylan was originally developed by Apple Cambridge, then a part of the Apple Advanced Technology Group (ATG). Its initial goal was to produce a new systems programming and application development language for the Apple Newton PDA, but it soon became clear that this would take too much time. Walter Smith developed NewtonScript for scripting and application development, and systems programming was done in C . Development continued on Dylan for the Macintosh. The group produced an early Technology Release of its Apple Dylan product, but the group was dismantled due to internal restructuring before it could finish any real usable products.
According to Apple Confidential by Owen W. Linzmayer, the original code name for the Dylan project was Ralph, for Ralph Ellison , author of the novel Invisible Man , to reflect its status as a secret research project.
The initial killer application for Dylan was the Apple Newton PDA, but the initial implementation came too late for it. Also, the performance and size objectives were missed. So Dylan was retargeted toward a general computer programming audience. To compete in this market, it was decided to switch to infix notation .
Andrew Shalit (along with David A. Moon and Orca Starbuck) wrote the Dylan Reference Manual, which served as a basis for work at Harlequin and Carnegie Mellon University . When Apple Cambridge was closed, several members went to Harlequin, which produced a working compiler and development environment for Microsoft Windows . When Harlequin was bought and split, some of the developers founded Functional Objects. In 2003, the firm contributed its repository to the Dylan open source community . This repository was the foundation of the free and open-source software Dylan implementation Open Dylan.
By 2003, the Dylan community had already proven its commitment to Dylan. In summer 1998, the community had taken over the code of the Carnegie Mellon University (CMU) Dylan implementation, named after the Gwydion project, and founded the open-source project Gwydion Dylan. At that time, CMU had already stopped working on its Dylan implementation because Apple, in its financial crisis, could no longer sponsor the project. CMU thus shifted its research toward the mainstream and toward Java .
Today, Gwydion Dylan and Open Dylan are the only working Dylan compilers. While the first is still a Dylan-to-C compiler, Open Dylan produces native code for Intel processors. Open Dylan was designed to account for the Architecture Neutral Distribution Format (ANDF).
Dylan was created by the same group at Apple that was responsible for Macintosh Common Lisp. The first implementation had a Lisp-like syntax.
The acknowledgments from the First Dylan Manual (1992) states:
The two non-Apple collaborators were CMU Gwydion and Harlequin.
CMU still provides an information page about Gwydion .
The developers at the Cambridge lab and CMU thought they would get a better reception from the C/C++ community if they changed the syntax to make it look more like those languages.
Rob MacLachlan, at Carnegie Mellon during the Dylan project, from comp.lang.dylan:
Bruce Hoult replied:
Oliver Steele in a ll1-discuss :
Raffael Cavallaro once provided some insights:
Gabor Greif:
Oliver Steele:
From Mike Lockwood, a former member of the Apple Cambridge Labs (originally published on apple.computerhistory.org ): [ 9 ]
A picture of the shirt can be seen here . [ 10 ]
Gary M. Palter about Functional Objects and the history of the Dylan project at Harlequin:
CMU's Gwydion Project became open source in 1998. Eric Kidd in a message to the Gwydion Hackers about the process:
The Gwydion website is http://www.gwydiondylan.org .
Before Functional Objects (formerly Harlequin Dylan) ceased operating in January 2006, it open-sourced its repository in 2004 to the Gwydion Dylan maintainers. The repository included white papers, design papers, documentation once written for the commercial product, and the code for:
The project is now known as Open Dylan and its website is https://opendylan.org . | https://en.wikipedia.org/wiki/History_of_the_Dylan_programming_language |
The Hindu–Arabic numeral system is a decimal place-value numeral system that uses a zero glyph as in "205". [ 1 ]
Its glyphs are descended from the Indian Brahmi numerals . The full system emerged by the 8th to 9th centuries, and is first described outside India in Al-Khwarizmi 's On the Calculation with Hindu Numerals (ca. 825) and then in Al-Kindi 's four-volume work On the Use of the Indian Numerals (ca. 830). [ 2 ] Today the name Hindu–Arabic numerals is usually used.
Historians trace modern numerals in most languages to the Brahmi numerals , which were in use around the middle of the 3rd century BC. [ 3 ] The place value system, however, developed later. The Brahmi numerals have been found in inscriptions in caves and on coins in regions near Pune, Maharashtra [ 2 ] and Uttar Pradesh in India. These numerals (with slight variations) were in use up to the 4th century. [ 3 ]
During the Gupta period (early 4th century to the late 6th century), the Gupta numerals developed from the Brahmi numerals and were spread over large areas by the Gupta empire as it conquered territory. [ 3 ] Beginning around the 7th century, the Gupta numerals developed into the Nagari numerals.
During the Vedic period (1500–500 BCE), motivated by geometric construction of the fire altars and astronomy, the use of a numerical system and of basic mathematical operations developed in northern India. [ 4 ] [ 5 ] Hindu cosmology required the mastery of very large numbers such as the kalpa (the lifetime of the universe) said to be 4,320,000,000 years and the "orbit of the heaven" said to be 18,712,069,200,000,000 yojanas . [ 6 ] Numbers were expressed using a "named place-value notation", using names for the powers of 10, like dasa , shatha , sahasra , ayuta , niyuta , prayuta , arbuda , nyarbuda , samudra , madhya , anta , parardha etc., the last of these being the name for a trillion (10 12 ). [ 7 ] For example, the number 26,432 was expressed as "2 ayuta , 6 sahasra , 4 shatha , 3 dasa , 2." [ 8 ] In the Buddhist text Lalitavistara , the Buddha is said to have narrated a scheme of numbers up to 10 53 . [ 9 ] [ 10 ]
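A short Python sketch (a hypothetical helper, using only the power names quoted above) showing how such a named place-value expression decomposes a number:

```python
# Names for powers of ten as given in the text:
# 10^1 dasa, 10^2 shatha, 10^3 sahasra, 10^4 ayuta, ...
NAMES = {4: 'ayuta', 3: 'sahasra', 2: 'shatha', 1: 'dasa'}

def named_place_value(n):
    parts = []
    for power in sorted(NAMES, reverse=True):
        digit, n = divmod(n, 10 ** power)
        if digit:
            parts.append(f"{digit} {NAMES[power]}")
    if n:
        parts.append(str(n))   # remaining units digit
    return ', '.join(parts)

print(named_place_value(26432))  # 2 ayuta, 6 sahasra, 4 shatha, 3 dasa, 2
```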
The form of numerals in Ashoka 's inscriptions in the Brahmi script (middle of the third century BCE) involved separate signs for the numbers 1 to 9, 10 to 90, 100 and 1000. A multiple of 100 or 1000 was represented by a modification (or "enciphering" [ 11 ] ) of the sign for the number using the sign for the multiplier number. [ 12 ] Such enciphered numerals directly represented the named place-value numerals used verbally. They continued to be used in inscriptions until the end of the 9th century.
In his seminal text of 499 CE, Aryabhata devised a novel positional number system, using Sanskrit consonants for small numbers and vowels for powers of 10. Using the system, numbers up to a billion could be expressed using short phrases, e.g., khyu-ghṛ representing the number 4,320,000. The system did not catch on because it produced quite unpronounceable phrases, but it might have driven home the principle of a positional number system (called dasa-gunottara , exponents of 10) to later mathematicians. [ 13 ] A more elegant katapayadi scheme was devised in later centuries representing a place-value system including zero. [ 14 ]
While the numerals in texts and inscriptions used a named place-value notation, a more efficient notation might have been employed in calculations, possibly from the 1st century CE. Computations were carried out on clay tablets covered with a thin layer of sand, giving rise to the term dhuli-karana ('sand-work') for higher computation. Karl Menninger believes that, in such computations, they must have dispensed with the enciphered numerals and written down just sequences of digits to represent the numbers. A zero would have been represented as a "missing place", such as a dot. [ 15 ] The single manuscript with worked examples available to us, the Bakhshali manuscript (of unclear date), uses a place value system with a dot to denote the zero. The dot was called the shunya-sthāna 'empty-place'. The same symbol was also used in algebraic expressions for the unknown (as in the canonical x in modern algebra). [ 16 ]
Textual references to a place-value system are seen from the 5th century CE onward. A commentary on Patanjali 's Yoga Sutras from the 5th century reads, "Just as a line in the hundreds place [means] a hundred, in the tens place ten, and one in the ones place, so one and the same woman is called mother, daughter and sister." [ 17 ]
A system called bhūta-sankhya ('object numbers' or 'concrete numbers') was employed for representing numerals in Sanskrit verses, by using a concept representing a digit to stand for the digit itself. The Jain text entitled the Lokavibhaga , dated 458 CE, [ 18 ] mentions the objectified numeral
" panchabhyah khalu shunyebhyah param dve sapta chambaram ekam trini cha rupam cha "
meaning 'five voids, then two and seven, the sky, one and three and the form', i.e., the number 13107200000. [ 19 ] [ 20 ] Such objectified numbers were used extensively from the 6th century onward, especially after Varāhamihira ( c. 5th century CE). Zero is explicitly represented in such numbers as "the void" ( sunya ) or the "heaven-space" ( ambara akasha ). [ 21 ] Correspondingly, the dot used in place of zero in written numerals was referred to as a sunya-bindu . [ 22 ]
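The digits in such verses were enumerated with the lowest place first, so the written numeral is obtained by reading them in reverse; a short Python check of the decoding described above (an illustration, not a historical algorithm):

```python
# Digits as the verse enumerates them, lowest place first:
# five voids, two, seven, sky (0), one, three, form (1)
digits_low_to_high = [0, 0, 0, 0, 0, 2, 7, 0, 1, 3, 1]

value = sum(d * 10 ** i for i, d in enumerate(digits_low_to_high))
print(value)  # 13107200000
```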
In 628 CE, astronomer-mathematician Brahmagupta wrote his text Brahma Sphuta Siddhanta which contained the first mathematical treatment of zero. He defined zero as the result of subtracting a number from itself, postulated negative numbers and discussed their properties under arithmetical operations. His word for zero was shunya (void), the same term previously used for the empty spot in the 9-digit place-value system. [ 25 ] This provided a new perspective on the shunya-bindu as a numeral and paved the way for the eventual evolution of a zero digit. The dot continued to be used for at least 100 years afterwards, and was transmitted to Southeast Asia and Arabia. Kashmir's Sharada script has retained the dot for zero to this day.
By the end of the 7th century, decimal numbers begin to appear in inscriptions in Southeast Asia as well as in India. [ 22 ] Some scholars hold that they appeared even earlier. A 6th century copper-plate grant at Mankani bearing the numeral 346 (corresponding to 594 CE) is often cited. [ 26 ] But its reliability is subject to dispute. [ 22 ] [ 27 ] The first indisputable occurrence of 0 in an inscription occurs at Gwalior in 876 CE, containing a numeral "270" in a notation surprisingly similar to the modern numerals. [ 28 ] Throughout the 8th and 9th centuries, both the old Brahmi numerals and the new decimal numerals were used, sometimes appearing in the same inscriptions. In some documents, a transition is seen to occur around 866 CE. [ 22 ]
Before the rise of the Caliphate , the Hindu–Arabic numeral system was already moving West and was mentioned in Syria in 662 AD by the Syriac Nestorian scholar Severus Sebokht who wrote the following:
According to Al-Qifti 's History of Learned Men : [ 29 ]
The work was most likely to have been Brahmagupta 's Brāhmasphuṭasiddhānta (The Opening of the Universe) which was written in 628. [ 29 ] [ 30 ] Whether or not this identification is correct, since all Indian texts after Aryabhata 's Aryabhatiya used the Indian number system, the Arabs certainly had from this time a translation of a text written in the Indian number system. [ 29 ]
In his text The Arithmetic of Al-Uqlîdisî (Dordrecht: D. Reidel, 1978), A.S. Saidan 's studies were unable to answer in full how the numerals reached the Arab world:
Al-Uqlidisi developed a notation to represent decimal fractions. [ 31 ] [ 32 ] The numerals came to fame due to their use in the pivotal work of the Persian mathematician Al-Khwarizmi , whose book On the Calculation with Hindu Numerals was written about 825, and the Arab mathematician Al-Kindi , who wrote four volumes (see [2]) "On the Use of the Indian Numerals" (Ketab fi Isti'mal al-'Adad al-Hindi) about 830. They, amongst other works, contributed to the diffusion of the Indian system of numeration in the Middle East and the West.
The development of the numerals in early Europe is shown below:
In the last few centuries, the European variety of Arabic numerals spread around the world and gradually became the most commonly used numeral system in the world.
Even in many countries whose languages have their own numeral systems, the European Arabic numerals are widely used in commerce and mathematics.
The significance of the development of the positional number system is described by the French mathematician Pierre-Simon Laplace (1749–1827) who wrote:
It is India that gave us the ingenious method of expressing all numbers by the means of ten symbols, each symbol receiving a value of position, as well as an absolute value; a profound and important idea which appears so simple to us now that we ignore its true merit, but its very simplicity, the great ease which it has lent to all computations, puts our arithmetic in the first rank of useful inventions, and we shall appreciate the grandeur of this achievement when we remember that it escaped the genius of Archimedes and Apollonius , two of the greatest minds produced by antiquity. [ 34 ] | https://en.wikipedia.org/wiki/History_of_the_Hindu–Arabic_numeral_system |
The history of the programming language Scheme begins with the development of earlier members of the Lisp family of languages during the second half of the twentieth century. During the design and development period of Scheme, language designers Guy L. Steele and Gerald Jay Sussman released an influential series of Massachusetts Institute of Technology (MIT) AI Memos known as the Lambda Papers (1975–1980). This resulted in the growth of popularity in the language and the era of standardization from 1990 onward. Much of the history of Scheme has been documented by the developers themselves. [ 1 ]
The development of Scheme was heavily influenced by two predecessors that were quite different from one another: Lisp provided its general semantics and syntax, and ALGOL provided its lexical scope and block structure. Scheme is a dialect of Lisp but Lisp has evolved; the Lisp dialects from which Scheme evolved—although they were in the mainstream at the time—are quite different from any modern Lisp.
Lisp was invented by John McCarthy in 1958 while he was at the Massachusetts Institute of Technology (MIT). McCarthy published its design in a paper in Communications of the ACM in 1960, entitled "Recursive Functions of Symbolic Expressions and Their Computation by Machine, Part I" [ 2 ] (Part II was never published). He showed that with a few simple operators and a notation for functions, one can build a Turing-complete language for algorithms.
The use of s-expressions which characterize the syntax of Lisp was initially intended to be an interim measure pending the development of a language employing what McCarthy called " m-expressions ". As an example, the m-expression car[cons[A,B]] is equivalent to the s-expression (car (cons A B)) . S-expressions proved popular, however, and the many attempts to implement m-expressions failed to catch on.
The first implementation of Lisp was on an IBM 704 by Steve Russell , who read McCarthy's paper and coded the eval function he described in machine code. The familiar (but puzzling to newcomers) names CAR and CDR used in Lisp to describe the head element of a list and its tail, evolved from two IBM 704 assembly language commands: Contents of Address Register and Contents of Decrement Register, each of which returned the contents of a 15-bit register corresponding to segments of a 36-bit IBM 704 instruction word .
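The behaviour of these operators can be sketched in Python (a hypothetical illustration; the original implementation worked directly on the two register halves of a 704 word), here representing a pair as a closure over its two fields:

```python
def cons(a, d):
    # A pair as a closure remembering its two halves.
    return lambda field: a if field == 'car' else d

def car(pair):
    return pair('car')   # head: "Contents of Address Register"

def cdr(pair):
    return pair('cdr')   # tail: "Contents of Decrement Register"

p = cons('A', 'B')
assert car(p) == 'A' and cdr(p) == 'B'
```

This mirrors the s-expression (car (cons A B)) evaluating to A, as in the example given earlier.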
The first complete Lisp compiler, written in Lisp, was implemented in 1962 by Tim Hart and Mike Levin at MIT. [ 3 ] This compiler introduced the Lisp model of incremental compilation, in which compiled and interpreted functions can intermix freely.
The two variants of Lisp most significant in the development of Scheme were both developed at MIT: LISP 1.5 [ 4 ] developed by McCarthy and others, and Maclisp [ 5 ] – developed for MIT's Project MAC – a direct descendant of LISP 1.5, which ran on the PDP-10 and Multics systems.
Since its inception, Lisp was closely connected with the artificial intelligence (AI) research community, especially on PDP-10 . The 36-bit word size of the PDP-6 and PDP-10 was influenced by the usefulness of having two Lisp 18-bit pointers in one word. [ 6 ]
ALGOL 58 , originally to be called IAL for "International Algorithmic Language", was developed jointly by a committee of European and American computer scientists in a meeting in 1958 at ETH Zurich . ALGOL 60 , a later revision developed at the ALGOL 60 meeting in Paris and now commonly named ALGOL , became the standard for the publication of algorithms and had a profound effect on future language development, despite the language's lack of commercial success and its limitations. Tony Hoare has remarked: "Here is a language so far ahead of its time that it was not only an improvement on its predecessors but also on nearly all its successors." [ 7 ]
ALGOL introduced the use of block structure and lexical scope. It was also notorious for its difficult call by name default parameter passing mechanism, which was defined so as to require textual substitution of the expression representing the working parameter in place of the formal parameter during execution of a procedure or function, causing it to be re-evaluated each time it is referenced during execution. ALGOL implementors developed a mechanism they called a thunk , which captured the context of the working parameter, enabling it to be evaluated during execution of the procedure or function.
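A rough Python sketch of the idea (hypothetical names, not ALGOL itself): the caller wraps the argument expression in a zero-argument function, the thunk, so that each reference in the callee re-evaluates it:

```python
def double(x_thunk):
    # Call by name: 'x' is re-evaluated at every reference.
    return x_thunk() + x_thunk()

counter = {'n': 0}
def next_value():
    counter['n'] += 1
    return counter['n']

print(double(lambda: next_value()))  # 1 + 2 == 3, not 2 * 1
```

The surprising result 3 (rather than twice a fixed value) is exactly the re-evaluation behaviour that made call by name notorious.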
In 1971 Sussman, Drew McDermott , and Eugene Charniak had developed a system called Micro-Planner which was a partial and somewhat unsatisfactory implementation of Carl Hewitt 's ambitious Planner project. Sussman and Hewitt worked together along with others on Muddle, later renamed MDL , an extended Lisp which formed a component of Hewitt's project. In 1972, McDermott and Sussman developed the Lisp-based language Conniver , which revised the use of automatic backtracking in Planner, which they thought was unproductive. Hewitt was dubious that the "hairy control structure" in Conniver was a solution to the problems with Planner. Pat Hayes remarked: "Their [Sussman and McDermott] solution, to give the user access to the implementation primitives of Planner, is however, something of a retrograde step (what are Conniver's semantics?)" [ 8 ]
In November 1972, Hewitt and his students invented the Actor model of computation as a solution to the problems with Planner. [ 9 ] A partial implementation of Actors was developed called Planner-73 (later called PLASMA). Steele, then a graduate student at MIT, had been following these developments, and he and Sussman decided to implement a version of the Actor model in their own "tiny Lisp" developed on Maclisp , to understand the model better. Using this basis they then began to develop mechanisms for creating actors and sending messages. [ 10 ]
PLASMA's use of lexical scope was similar to the lambda calculus . Sussman and Steele decided to try to model Actors in the lambda calculus. They called their modeling system Schemer, eventually changing it to Scheme to fit the six-character limit on the ITS file system on their DEC PDP-10 . They soon concluded Actors were essentially closures that never return but instead invoke a continuation , and thus they decided that the closure and the Actor were, for the purposes of their investigation, essentially identical concepts. They eliminated what they regarded as redundant code and, at that point, discovered that they had written a very small and capable dialect of Lisp. Hewitt remained critical of the "hairy control structure" in Scheme [ 11 ] [ 12 ] and considered primitives (e.g., START!PROCESS , STOP!PROCESS , and EVALUATE!UNINTERRUPTIBLY ) used in the Scheme implementation to be a backward step.
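A toy Python sketch (hypothetical, not code from Sussman and Steele) of the observation: an actor receiving a message is just a closure that, instead of returning, passes its result onward to a continuation:

```python
def make_adder(k):
    # The 'actor': on receiving a message (a, b), it never returns a
    # value to the sender; it sends the result to the continuation k.
    def receive(a, b):
        k(a + b)
    return receive

adder = make_adder(print)   # the continuation here just prints
adder(2, 3)                 # prints 5
```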
25 years later, in 1998, Sussman and Steele reflected that the minimalism of Scheme was not a conscious design goal, but rather the unintended outcome of the design process. "We were actually trying to build something complicated and discovered, serendipitously, that we had accidentally designed something that met all our goals but was much simpler than we had intended... we realized that the lambda calculus—a small, simple formalism—could serve as the core of a powerful and expressive programming language." [ 10 ]
On the other hand, Hewitt remained critical of the lambda calculus as a foundation for computation writing "The actual situation is that the λ-calculus is capable of expressing some kinds of sequential and parallel control structures but, in general, not the concurrency expressed in the Actor model. On the other hand, the Actor model is capable of expressing everything in the λ-calculus and more." He has also been critical of aspects of Scheme that derive from the lambda calculus such as reliance on continuation functions and the lack of exceptions. [ 13 ]
Between 1975 and 1980 Sussman and Steele worked on developing their ideas about using the lambda calculus, continuations and other advanced programming concepts such as optimization of tail recursion , and published them in a series of AI Memos which have become collectively termed the Lambda Papers . [ 14 ]
Scheme was the first dialect of Lisp to choose lexical scope . It was also one of the first programming languages after Reynolds's Definitional Language [ 15 ] to support first-class continuations . It had a large impact on the effort that led to the development of its sister-language, Common Lisp , to which Guy Steele was a contributor. [ 16 ]
The Scheme language is standardized in the official Institute of Electrical and Electronics Engineers (IEEE) standard, [ 17 ] and a de facto standard called the Revisedⁿ Report on the Algorithmic Language Scheme (RⁿRS). The most widely implemented standard is R5RS (1998), [ 18 ] and a new standard, R6RS , [ 19 ] was ratified in 2007. [ 20 ] Besides the RⁿRS standards, there are also Scheme Requests for Implementation (SRFI) documents, which contain additional libraries that may be added by Scheme implementations. | https://en.wikipedia.org/wiki/History_of_the_Scheme_programming_language
History of the Theory of Numbers is a three-volume work by Leonard Eugene Dickson summarizing work in number theory up to about 1920. The style is unusual in that Dickson mostly just lists results by various authors, with little further discussion. The central topic of quadratic reciprocity and higher reciprocity laws is barely mentioned; this was apparently going to be the topic of a fourth volume that was never written ( Fenster 1999 ). | https://en.wikipedia.org/wiki/History_of_the_Theory_of_Numbers
The World Wide Web ("WWW", "W3" or simply "the Web") is a global information medium that users can access via computers connected to the Internet . The term is often mistakenly used as a synonym for the Internet, but the Web is a service that operates over the Internet, just as email and Usenet do. The history of the Internet and the history of hypertext date back significantly further than that of the World Wide Web.
Tim Berners-Lee invented the World Wide Web while working at CERN in 1989. He proposed a "universal linked information system" using several concepts and technologies, the most fundamental of which was the connections that existed between information. [ 1 ] [ 2 ] He developed the first web server , the first web browser , and a document markup language called Hypertext Markup Language (HTML). After publishing the markup language in 1991 and releasing the browser source code for public use in 1993, many other web browsers were soon developed, with Marc Andreessen 's Mosaic (later Netscape Navigator ) being particularly easy to use and install, and often credited with sparking the Internet boom of the 1990s. It was a graphical browser which ran on several popular office and home computers, bringing multimedia content to non-technical users by including images and text on the same page.
Websites for use by the general public began to emerge in 1993–94. This spurred competition in server and browser software, highlighted in the browser wars , which were initially dominated by Netscape Navigator and Internet Explorer . Following the complete removal of commercial restrictions on Internet use by 1995, commercialization of the Web amidst macroeconomic factors led to the dot-com boom and bust in the late 1990s and early 2000s.
The features of HTML evolved over time, leading to HTML version 2 in 1995, HTML 3 and HTML 4 in 1997, and HTML5 in 2014. The language was extended with advanced formatting in Cascading Style Sheets (CSS) and with programming capability by JavaScript . AJAX programming delivered dynamic content to users, which sparked a new era in Web design , styled Web 2.0 . The use of social media , becoming commonplace in the 2010s, allowed users to compose multimedia content without programming skills, making the Web ubiquitous in everyday life.
The underlying concept of hypertext as a user interface paradigm originated in projects in the 1960s, from research such as the Hypertext Editing System (HES) by Andries van Dam at Brown University, IBM Generalized Markup Language , Ted Nelson's Project Xanadu , and Douglas Engelbart's oN-Line System (NLS). [ 3 ] Both Nelson and Engelbart were in turn inspired by Vannevar Bush 's microfilm -based memex , which was described in the 1945 essay " As We May Think ". [ 4 ] [ 5 ] Other precursors were FRESS and Intermedia . Paul Otlet's project Mundaneum has also been named as an early 20th-century precursor of the Web.
In 1980, Tim Berners-Lee , at the European Organization for Nuclear Research (CERN) in Switzerland, built ENQUIRE , as a personal database of people and software models, but also as a way to experiment with hypertext; each new page of information in ENQUIRE had to be linked to another page. [ 6 ] [ 7 ] [ 8 ] When Berners-Lee built ENQUIRE, the ideas developed by Bush, Engelbart, and Nelson did not influence his work, since he was not aware of them. However, as Berners-Lee began to refine his ideas, the work of these predecessors would later help to confirm the legitimacy of his concept. [ 9 ] [ 10 ]
During the 1980s, many packet-switched data networks emerged based on various communication protocols (see Protocol Wars ). One of these standards was the Internet protocol suite , which is often referred to as TCP/IP. As the Internet grew through the 1980s, many people realized the increasing need to be able to find and organize files and use information. By 1985, the Domain Name System (upon which the Uniform Resource Locator is built) came into being. [ 11 ] Many small, self-contained hypertext systems were created, such as Apple Computer's HyperCard (1987).
Berners-Lee's contract in 1980 was from June to December, but in 1984 he returned to CERN in a permanent role, and considered its problems of information management: physicists from around the world needed to share data, yet they lacked common machines and any shared presentation software. Shortly after Berners-Lee's return to CERN, TCP/IP protocols were installed on Unix machines at the institution, turning it into the largest Internet site in Europe. In 1988, the first direct IP connection between Europe and North America was established and Berners-Lee began to openly discuss the possibility of a web-like system at CERN. [ 12 ] He was inspired by a book, Enquire Within upon Everything . Many online services existed before the creation of the World Wide Web, such as CompuServe , Usenet , [ 13 ] Internet Relay Chat , [ 14 ] Telnet [ 15 ] and bulletin board systems . [ 16 ] Before the Internet, UUCP was used for online services such as e-mail , [ 17 ] and BITNET was another popular network. [ 18 ]
While working at CERN , Tim Berners-Lee became frustrated with the inefficiencies and difficulties posed by finding information stored on different computers. [ 19 ] On 12 March 1989, he submitted a memorandum, titled "Information Management: A Proposal", [ 1 ] [ 20 ] to the management at CERN. The proposal used the term "web" and was based on "a large hypertext database with typed links". It described a system called "Mesh" that referenced ENQUIRE , the database and software project he had built in 1980, with a more elaborate information management system based on links embedded as text: "Imagine, then, the references in this document all being associated with the network address of the thing to which they referred, so that while reading this document, you could skip to them with a click of the mouse." Such a system, he explained, could be referred to using one of the existing meanings of the word hypertext , a term that he says was coined in the 1950s. Berners-Lee notes the possibility of multimedia documents that include graphics, speech and video, which he terms hypermedia . [ 1 ] [ 2 ]
Although the proposal attracted little interest, Berners-Lee was encouraged by his manager, Mike Sendall, to begin implementing his system on a newly acquired NeXT workstation. He considered several names, including Information Mesh , The Information Mine or Mine of Information , but settled on World Wide Web . Berners-Lee found an enthusiastic supporter in his colleague and fellow hypertext enthusiast Robert Cailliau who began to promote the proposed system throughout CERN. Berners-Lee and Cailliau pitched Berners-Lee's ideas to the European Conference on Hypertext Technology in September 1990, but found no vendors who could appreciate his vision.
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web , he explains that he had repeatedly suggested to members of both technical communities that a marriage between the two technologies was possible. But, when no one took up his invitation, he finally assumed the project himself. In the process, he developed three essential technologies: a system of globally unique identifiers for resources (the URI, later the URL), the publishing language HTML, and the Hypertext Transfer Protocol (HTTP).
With help from Cailliau he published a more formal proposal on 12 November 1990 to build a "hypertext project" called WorldWideWeb (abbreviated "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture . [ 22 ] [ 23 ] The proposal was modelled after the Standard Generalized Markup Language (SGML) reader Dynatext by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University . The Dynatext system, licensed by CERN, was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.
At this point HTML and HTTP had already been in development for about two months and the first web server was about a month from completing its first successful test. Berners-Lee's proposal estimated that a read-only Web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available".
By December 1990, Berners-Lee and his work team had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP), the HyperText Markup Language (HTML), the first web browser (named WorldWideWeb , which was also a web editor ), the first web server (later known as CERN httpd ) and the first website ( https://info.cern.ch/ ), whose pages described the project itself; it was published on 20 December 1990. [ 24 ] [ 25 ] The browser could access Usenet newsgroups and FTP files as well. A NeXT Computer was used by Berners-Lee as the web server and also to write the web browser. [ 26 ]
Working with Berners-Lee at CERN, Nicola Pellow developed the first cross-platform web browser, the Line Mode Browser . [ 27 ]
In January 1991, the first web servers outside CERN were switched on. On 6 August 1991, Berners-Lee published a short summary of the World Wide Web project on the newsgroup alt.hypertext , inviting collaborators. [ 28 ]
Paul Kunz from the Stanford Linear Accelerator Center (SLAC) visited CERN in September 1991, and was captivated by the Web. He brought the NeXT software back to SLAC, where librarian Louise Addis adapted it for the VM/CMS operating system on the IBM mainframe as a way to host the SPIRES -HEP database and display SLAC's catalog of online documents. [ 29 ] [ 30 ] [ 31 ] [ 32 ] This was the first web server outside of Europe and the first in North America. [ 33 ]
The World Wide Web had several differences from other hypertext systems available at the time. The Web required only unidirectional links rather than bidirectional ones, making it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn, presented the chronic problem of link rot .
The WorldWideWeb browser ran only on the NeXTSTEP operating system. This shortcoming was discussed in January 1992, [ 34 ] and alleviated in April 1992 by the release of Erwise , an application developed at the Helsinki University of Technology , and in May by ViolaWWW , created by Pei-Yuan Wei , which included advanced features such as embedded graphics, scripting, and animation. ViolaWWW was originally an application for HyperCard . [ 35 ] Both programs ran on the X Window System for Unix . In 1992, the first tests between browsers on different platforms were concluded successfully between buildings 513 and 31 at CERN, between browsers on the NeXT station and the X11-ported Mosaic browser. ViolaWWW became the recommended browser at CERN. To encourage use within CERN, Bernd Pollermann put the CERN telephone directory on the web; previously, users had to log onto the mainframe in order to look up phone numbers. The Web was successful at CERN and spread to other scientific and academic institutions.
Students at the University of Kansas adapted an existing text-only hypertext browser, Lynx , to access the web in 1992. Lynx was available on Unix and DOS, and some web designers, unimpressed with glossy graphical websites, held that a website not accessible through Lynx was not worth visiting.
In these earliest browsers, images opened in a separate "helper" application.
In the early 1990s, Internet-based projects such as Archie , Gopher , Wide Area Information Servers (WAIS), and the FTP Archive list attempted to create ways to organize distributed data. Gopher was a document browsing system for the Internet, released in 1991 by the University of Minnesota . Invented by Mark P. McCahill , it became the first commonly used hypertext interface to the Internet. While Gopher menu items were examples of hypertext, they were not commonly perceived in that way. In less than a year, there were hundreds of Gopher servers. [ 36 ] It offered a viable alternative to the World Wide Web in the early 1990s, and the consensus was that Gopher would be the primary way that people would interact with the Internet. [ 37 ] [ 38 ] However, in 1993, the University of Minnesota declared that Gopher was proprietary and would have to be licensed. [ 36 ]
In response, on 30 April 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due, and released their code into the public domain. [ 39 ] This made it possible to develop servers and clients independently and to add extensions without licensing restrictions. Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this spurred the development of various browsers which precipitated a rapid shift away from Gopher. [ 40 ] By releasing Berners-Lee's invention for public use, CERN encouraged and enabled its widespread use. [ 41 ]
Early websites intermingled links for both the HTTP web protocol and the Gopher protocol , which provided access to content through hypertext menus presented as a file system rather than through HTML files. Early Web users would navigate either by bookmarking popular directory pages or by consulting updated lists such as the NCSA "What's New" page. Some sites were also indexed by WAIS, enabling users to submit full-text searches similar to the capability later provided by search engines .
After 1993 the World Wide Web saw many advances to indexing and ease of access through search engines, which often neglected Gopher and Gopherspace. As its popularity increased through ease of use, incentives for commercial investment in the Web also grew. By the middle of 1994, the Web was outcompeting Gopher and the other browsing systems for the Internet. [ 42 ]
The National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana–Champaign (UIUC) established a website in November 1992. After Marc Andreessen , a student at UIUC, was shown ViolaWWW in late 1992, [ 35 ] he began work on Mosaic with another UIUC student Eric Bina , using funding from the High-Performance Computing and Communications Initiative , a US-federal research and development program initiated by US Senator Al Gore . [ 43 ] [ 44 ] [ 45 ] Andreessen and Bina released a Unix version of the browser in February 1993; Mac and Windows versions followed in August 1993. The browser gained popularity due to its strong support of integrated multimedia , and the authors' rapid response to user bug reports and recommendations for new features. [ 35 ] Historians generally agree that the 1993 introduction of the Mosaic web browser was a turning point for the World Wide Web. [ 46 ] [ 47 ] [ 48 ]
Before the release of Mosaic in 1993, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols such as Gopher and WAIS. Mosaic could display inline images [ 49 ] and submit forms [ 50 ] [ 51 ] on Windows, Macintosh and the X Window System. NCSA also developed HTTPd , a Unix web server that used the Common Gateway Interface to process forms and Server Side Includes for dynamic content. Both the client and server were free to use with no restrictions. [ 52 ] Mosaic was an immediate hit; [ 53 ] its graphical user interface allowed the Web to become by far the most popular protocol on the Internet. Within a year, web traffic surpassed Gopher's. [ 36 ] Wired declared that Mosaic made non-Internet online services obsolete, [ 54 ] and the Web became the preferred interface for accessing the Internet.
The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularising use of the Internet. [ 55 ] Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet . [ 56 ] The Web is an information space containing hyperlinked documents and other resources , identified by their URIs. [ 57 ] It is implemented as both client and server software using Internet protocols such as TCP/IP and HTTP .
In keeping with its origins at CERN, early adopters of the Web were primarily university-based scientific departments or physics laboratories such as SLAC and Fermilab . By January 1993 there were fifty web servers across the world. [ 58 ] By October 1993 there were over five hundred servers online, including some notable websites . [ 59 ]
Practical media distribution and streaming media over the Web was made possible by advances in data compression , due to the impractically high bandwidth requirements of uncompressed media. Following the introduction of the Web, several media formats based on discrete cosine transform (DCT) were introduced for practical media distribution and streaming over the Web, including the MPEG video format in 1991 and the JPEG image format in 1992. The high level of image compression made JPEG a good format for compensating slow Internet access speeds, typical in the age of dial-up Internet access . JPEG became the most widely used image format for the World Wide Web. A DCT variation, the modified discrete cosine transform (MDCT) algorithm, led to the development of MP3 , which was introduced in 1991 and became the first popular audio format on the Web.
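The core transform behind these formats is the discrete cosine transform. In its most common form, the DCT-II (shown here as the standard textbook definition, not the exact variant each codec uses), a block of $N$ samples $x_n$ is mapped to frequency coefficients:

$$X_k = \sum_{n=0}^{N-1} x_n \cos\!\left[\frac{\pi}{N}\left(n + \tfrac{1}{2}\right)k\right], \qquad k = 0, \ldots, N-1$$

For typical image and audio signals, most of the energy concentrates in the low-frequency coefficients, which is what makes aggressive quantization, and hence compression, possible.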
In 1992 the Computing and Networking Department of CERN, headed by David Williams, withdrew support for Berners-Lee's work. A two-page email sent by Williams stated that the work of Berners-Lee, with the goal of creating a facility to exchange information such as results and comments from CERN experiments with the scientific community, was not the core activity of CERN and was a misallocation of CERN's IT resources. Following this decision, Tim Berners-Lee left CERN for the Massachusetts Institute of Technology (MIT), where he continued to develop HTTP.
The first Microsoft Windows browser was Cello , written by Thomas R. Bruce for the Legal Information Institute at Cornell Law School to provide legal information, since access to Windows was more widespread amongst lawyers than access to Unix. Cello was released in June 1993.
The rate of web site deployment increased sharply around the world, and fostered development of international standards for protocols and content formatting. [ 60 ] Berners-Lee continued to stay involved in guiding web standards, such as the markup languages to compose web pages, and he advocated his vision of a Semantic Web (sometimes known as Web 3.0) based around machine-readability and interoperability standards.
In May 1994, the first International WWW Conference , organized by Robert Cailliau , was held at CERN; the conference has been held every year since.
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in September/October 1994 in order to create open standards for the Web. [ 61 ] It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet. A year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission ; and in 1996, a third continental site was created in Japan at Keio University .
W3C comprised various companies that were willing to create standards and recommendations to improve the quality of the Web. Berners-Lee made the Web available freely, with no patent and no royalties due. The W3C decided that its standards must be based on royalty-free technology, so they can be easily adopted by anyone. Netscape and Microsoft, in the middle of a browser war , ignored the W3C and added elements to HTML ad hoc (e.g., blink and marquee ). Finally, in 1995, Netscape and Microsoft agreed to abide by the W3C's standards. [ 62 ]
The W3C published the standard for HTML 4 in 1997, which included Cascading Style Sheets (CSS), giving designers more control over the appearance of web pages without the need for additional HTML tags. The W3C could not enforce compliance, so none of the browsers were fully compliant. This frustrated web designers, who formed the Web Standards Project (WaSP) in 1998 with the goal of pressing browser makers to comply with the standards. [ 63 ] A List Apart and CSS Zen Garden were influential websites that promoted good design and adherence to standards. [ 64 ] Nevertheless, AOL halted development of Netscape [ 65 ] and Microsoft was slow to update IE. [ 66 ] Mozilla and Apple both released browsers that aimed to be more standards compliant ( Firefox and Safari ), but were unable to dislodge IE as the dominant browser.
As the Web grew in the mid-1990s, web directories and primitive search engines were created to index pages and allow people to find things. Commercial use restrictions on the Internet were lifted in 1995 when NSFNET was shut down.
In the US, the online service America Online (AOL) offered their users a connection to the Internet via their own internal browser, using a dial-up Internet connection. In January 1994, Yahoo! was founded by Jerry Yang and David Filo , then students at Stanford University . Yahoo! Directory became the first popular web directory . Yahoo! Search , launched the same year, was the first popular search engine on the World Wide Web. Yahoo! became the quintessential example of a first mover on the Web.
Online shopping began to emerge with the launch of Amazon 's shopping site by Jeff Bezos in 1995 and eBay by Pierre Omidyar the same year.
By 1994, Marc Andreessen's Netscape Navigator had superseded Mosaic in popularity, a position it held for several years. Bill Gates outlined Microsoft's strategy to dominate the Internet in his Tidal Wave memo in 1995. [ 67 ] With the release of Windows 95 and the popular Internet Explorer browser, many public companies began to develop a Web presence. At first, people mainly anticipated the possibilities of free publishing and instant worldwide information. By the late 1990s, the directory model had given way to search engines, corresponding with the rise of Google Search , which developed new approaches to relevancy ranking . Directory features, while still commonly available, became afterthoughts to search engines.
Netscape had a very successful IPO in 1995, valuing the company at $2.9 billion despite its lack of profits, and it helped trigger the dot-com bubble . [ 68 ] Increasing familiarity with the Web led to the growth of direct Web-based commerce ( e-commerce ) and instantaneous group communications worldwide. Many dot-com companies , displaying products on hypertext webpages, were added to the Web. Over the next five years, over a trillion dollars was raised to fund thousands of startups consisting of little more than a website.
During the dot-com boom , many companies vied to create a dominant web portal in the belief that such a website would best be able to attract a large audience that in turn would attract online advertising revenue. While most of these portals offered a search engine, they were not interested in encouraging users to find other websites and leave the portal and instead concentrated on "sticky" content. [ 69 ] In contrast, Google was a stripped-down search engine that delivered superior results. [ 70 ] It was a hit with users who switched from portals to Google. Furthermore, with AdWords , Google had an effective business model. [ 71 ] [ 72 ]
AOL bought Netscape in 1998. [ 73 ] In spite of its early success, Netscape was unable to fend off Microsoft, [ 74 ] and Internet Explorer and a variety of other browsers almost completely replaced it.
Faster broadband internet connections replaced many dial-up connections from the beginning of the 2000s.
With the bursting of the dot-com bubble, many web portals either scaled back operations, floundered, [ 75 ] or shut down entirely. [ 76 ] [ 77 ] [ 78 ] AOL disbanded Netscape in 2003. [ 79 ]
Web server software was developed to allow computers to act as web servers . The first web servers supported only static files, such as HTML (and images), but now they commonly allow embedding of server side applications. Web framework software enabled building and deploying web applications. Content management systems (CMS) were developed to organize and facilitate collaborative content creation. Many of them were built on top of separate content management frameworks .
After Robert McCool joined Netscape, development on the NCSA HTTPd server languished. In 1995, Brian Behlendorf and Cliff Skolnick created a mailing list to coordinate efforts to fix bugs and make improvements to HTTPd . [ 80 ] They called their version of HTTPd Apache . [ 81 ] Apache quickly became the dominant server on the Web. [ 82 ] After adding support for modules, Apache allowed developers to handle web requests with a variety of languages including Perl , PHP and Python . Together with Linux and MySQL , it became known as the LAMP platform.
Following the success of Apache, the Apache Software Foundation was founded in 1999 and produced many open source web software projects in the same collaborative spirit.
After graduating from UIUC, Andreessen and Jim Clark , former CEO of Silicon Graphics , met and formed Mosaic Communications Corporation in April 1994 to develop the Mosaic Netscape browser commercially. The company later changed its name to Netscape , and the browser was developed further as Netscape Navigator , which soon became the dominant web client. They also released the Netsite Commerce web server which could handle SSL requests, thus enabling e-commerce on the Web. [ 83 ] SSL became the standard method to encrypt web traffic. Navigator 1.0 also introduced cookies , but Netscape did not publicize this feature. Netscape followed up with Navigator 2 in 1995 introducing frames , Java applets and JavaScript . In 1998, Netscape made Navigator open source and launched Mozilla . [ 84 ]
Microsoft licensed Mosaic from Spyglass and released Internet Explorer 1.0 in 1995, with IE2 following later the same year. IE2 added features pioneered at Netscape such as cookies, SSL, and JavaScript. The browser wars became a competition for dominance when Explorer was bundled with Windows. [ 85 ] [ 86 ] This led to the United States v. Microsoft Corporation antitrust lawsuit.
IE3 , released in 1996, added support for Java applets, ActiveX , and CSS . At this point, Microsoft began bundling IE with Windows. IE3 managed to increase Microsoft's share of the browser market from under 10% to over 20%. [ 87 ] IE4 , released the following year, introduced Dynamic HTML setting the stage for the Web 2.0 revolution. By 1998, IE was able to capture the majority of the desktop browser market. [ 74 ] It would be the dominant browser for the next fourteen years.
Google released their Chrome browser in 2008 with V8 , one of the first JavaScript engines to use just-in-time (JIT) compilation. Chrome overtook IE to become the dominant desktop browser in four years, [ 88 ] and overtook Safari to become the dominant mobile browser in two. [ 89 ] At the same time, Google open sourced Chrome's codebase as Chromium . [ 90 ]
Ryan Dahl used Chromium's V8 engine in 2009 to power an event-driven runtime system , Node.js , which allowed JavaScript code to be used on servers as well as browsers. This led to the development of new software stacks such as MEAN . Thanks to frameworks such as Electron , developers can bundle up Node applications as standalone desktop applications such as Slack .
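A minimal sketch of the idea (illustrative only; the port number and response text are arbitrary choices for the example): with Node.js, the same language that scripts a browser page can accept HTTP requests on a server, using Node's built-in http module with V8 underneath.

```typescript
// Minimal Node.js HTTP server: JavaScript/TypeScript running
// server-side on the V8 engine rather than in a browser.
import { createServer } from "node:http";

const server = createServer((req, res) => {
  // Respond to every request with a plain-text greeting.
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from server-side JavaScript\n");
});

// Port 3000 is an arbitrary choice for this example.
server.listen(3000, () => {
  console.log("Listening on http://localhost:3000/");
});
```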
Acer and Samsung began selling Chromebooks , cheap laptops running ChromeOS capable of running web apps, in 2011. Over the next decade, more companies offered Chromebooks. Chromebooks outsold MacOS devices in 2020 to become the second most popular OS in the world. [ 91 ]
Other notable web browsers emerged including Mozilla's Firefox , Opera's Opera browser and Apple's Safari .
Web 1.0 is a retronym referring to the first stage of the World Wide Web 's evolution, from roughly 1989 to 2004. According to Graham Cormode and Balachander Krishnamurthy, "content creators were few in Web 1.0 with the vast majority of users simply acting as consumers of content". [ 92 ] Personal web pages were common, consisting mainly of static pages hosted on ISP -run web servers , or on free web hosting services such as Tripod and the now-defunct GeoCities . [ 93 ] [ 94 ]
Some common design elements of a Web 1.0 site included static pages rather than dynamically generated content, material served directly from the server's filesystem, GIF buttons and graphics, and HTML forms submitted via email. [ 95 ]
Terry Flew , in his third edition of New Media, described the differences between Web 1.0 and Web 2.0 as a
"move from personal websites to blogs and blog site aggregation, from publishing to participation, from web content as the outcome of large up-front investment to an ongoing and interactive process, and from content management systems to links based on "tagging" website content using keywords ( folksonomy )."
Flew believed these factors formed the trends that resulted in the onset of the Web 2.0 "craze". [ 98 ]
Web pages were initially conceived as structured documents based upon HTML. They could include images, video, and other content, although the use of media was initially relatively limited and the content was mainly static. By the mid-2000s, new approaches to sharing and exchanging content, such as blogs and RSS , rapidly gained acceptance on the Web. The video-sharing website YouTube launched the concept of user-generated content. [ 99 ] As new technologies made it easier to create websites that behaved dynamically, the Web attained greater ease of use and gained a sense of interactivity which ushered in a period of rapid popularization. This new era also brought into existence social networking websites , such as Friendster , MySpace , Facebook , and Twitter , and photo- and video-sharing websites such as Flickr and, later, Instagram which gained users rapidly and became a central part of youth culture . Wikipedia 's user-edited content quickly displaced the professionally-written Microsoft Encarta . [ 100 ] The popularity of these sites, combined with developments in the technology that enabled them, and the increasing availability and affordability of high-speed connections made video content far more common on all kinds of websites. This new media-rich model for information exchange, featuring user-generated and user-edited websites, was dubbed Web 2.0 , a term coined in 1999 by Darcy DiNucci [ 101 ] and popularized in 2004 at the Web 2.0 Conference . The Web 2.0 boom drew investment from companies worldwide and saw many new service-oriented startups catering to a newly "democratized" Web. [ 102 ] [ 103 ] [ 104 ] [ 105 ] [ 106 ] [ 107 ]
JavaScript made the development of interactive web applications possible. Web pages could run JavaScript and respond to user input, but they could not interact with the network once loaded. Browsers could submit data to servers via forms and receive new pages, but this was slow compared to traditional desktop applications. Developers who wanted to offer sophisticated applications over the Web used Java or nonstandard solutions such as Adobe Flash or Microsoft's ActiveX .
Microsoft added a little-noticed feature called XMLHttpRequest to Internet Explorer in 1999, which enabled a web page to communicate with the server while remaining visible. Developers at Oddpost used this feature in 2002 to create the first Ajax application, a webmail client that performed as well as a desktop application. [ 108 ] Ajax apps were revolutionary. Web pages evolved beyond static documents to full-blown applications. Websites began offering APIs in addition to webpages. Developers created a plethora of Ajax apps including widgets , mashups and new types of social apps . Analysts called it Web 2.0 . [ 109 ]
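A minimal sketch of the pattern (the URL and element id are invented for the example): the page issues an asynchronous request via XMLHttpRequest and updates itself with the response, without a full page reload.

```typescript
// Classic Ajax: fetch data in the background and update the page
// in place, instead of navigating to a new page.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/api/inbox", true); // true = asynchronous

xhr.onreadystatechange = () => {
  // readyState 4 means the request is finished and the response is ready.
  if (xhr.readyState === 4 && xhr.status === 200) {
    const inbox = document.getElementById("inbox");
    if (inbox) {
      inbox.textContent = xhr.responseText; // update without a reload
    }
  }
};

xhr.send();
```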
Browser vendors improved the performance of their JavaScript engines [ 110 ] and dropped support for Flash and Java. [ 111 ] [ 112 ] Traditional client server applications were replaced by cloud apps . Amazon reinvented itself as a cloud service provider .
The use of social media on the Web has become ubiquitous in everyday life. [ 113 ] [ 114 ] The 2010s also saw the rise of streaming services, such as Netflix .
In spite of the success of Web 2.0 applications, the W3C forged ahead with their plan to replace HTML with XHTML and represent all data in XML . In 2004, representatives from Mozilla, Opera , and Apple formed an opposing group, the Web Hypertext Application Technology Working Group (WHATWG), dedicated to improving HTML while maintaining backward compatibility. [ 115 ] For the next several years, websites did not transition their content to XHTML; browser vendors did not adopt XHTML2; and developers eschewed XML in favor of JSON . [ 116 ] By 2007, the W3C conceded and announced they were restarting work on HTML [ 117 ] and in 2009, they officially abandoned XHTML. [ 118 ] In 2019, the W3C ceded control of the HTML specification, now called the HTML Living Standard, to WHATWG. [ 119 ]
Microsoft rewrote its Edge browser to use Chromium as its code base in order to be more compatible with Chrome, releasing the Chromium-based version in 2020. [ 120 ]
The increasing use of encrypted connections ( HTTPS ) enabled e-commerce and online banking . Nonetheless, the 2010s saw the emergence of various controversial trends, such as internet censorship and the growth of cybercrime , including web-based cyberattacks and ransomware . [ 121 ] [ 122 ]
Early attempts to allow wireless devices to access the Web used simplified formats such as i-mode and WAP . Apple's iPhone, introduced in 2007, was the first smartphone with a full-featured browser. Other companies followed suit, and in 2011 smartphone sales overtook PCs. [ 123 ] Since 2016, most visitors have accessed websites with mobile devices, [ 124 ] which has led to the adoption of responsive web design .
Apple, Mozilla, and Google have taken different approaches to integrating smartphones with modern web apps. Apple initially promoted web apps for the iPhone, but then encouraged developers to make native apps . [ 125 ] Mozilla announced Web APIs in 2011 to allow webapps to access hardware features such as audio, camera or GPS. [ 126 ] Frameworks such as Cordova and Ionic allow developers to build hybrid apps . Mozilla released a mobile OS designed to run web apps in 2012, [ 127 ] but discontinued it in 2015. [ 128 ]
Google announced specifications for Accelerated Mobile Pages (AMP), [ 129 ] and progressive web applications (PWA) in 2015. [ 130 ] AMPs use a combination of HTML, JavaScript, and Web Components to optimize web pages for mobile devices; and PWAs are web pages that, with a combination of web workers and manifest files , can be saved to a mobile device and opened like a native app.
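As a sketch of the PWA side (the file path is a placeholder, not a real site's script): a page registers a service worker, which lets the app intercept requests and keep working offline once saved to the device.

```typescript
// Progressive enhancement: register only if the browser
// supports service workers.
if ("serviceWorker" in navigator) {
  // "/sw.js" is a placeholder path to the site's worker script.
  navigator.serviceWorker
    .register("/sw.js")
    .then((registration) => {
      console.log("Service worker registered, scope:", registration.scope);
    })
    .catch((error) => {
      console.error("Service worker registration failed:", error);
    });
}
```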
The extension of the Web to facilitate data exchange was explored as an approach to create a Semantic Web (sometimes called Web 3.0). This involved using machine-readable information and interoperability standards to enable context-understanding programs to intelligently select information for users. [ 131 ] Continued extension of the Web has focused on connecting devices to the Internet, coined Intelligent Device Management . As Internet connectivity becomes ubiquitous, manufacturers have started to leverage the expanded computing power of their devices to enhance their usability and capability. Through Internet connectivity, manufacturers are now able to interact with the devices they have sold and shipped to their customers, and customers are able to interact with the manufacturer (and other providers) to access new content and services. [ 132 ]
This phenomenon has led to the rise of the Internet of Things (IoT), [ 133 ] where modern devices are connected through sensors, software, and other technologies that exchange information with other devices and systems on the Internet. This creates an environment where data can be collected and analyzed instantly, providing better insights and improving the decision-making process. Additionally, the integration of AI with IoT devices continues to improve their capabilities, allowing them to predict customer needs and perform tasks, increasing efficiency and user satisfaction.
Web3 (sometimes also referred to as Web 3.0) is an idea for a decentralized Web based on public blockchains , smart contracts , digital tokens and digital wallets . [ 134 ]
The next generation of the Web is often termed Web 4.0, but its definition is not clear. According to some sources, it is a Web that involves artificial intelligence , [ 135 ] the internet of things , pervasive computing , ubiquitous computing and the Web of Things among other concepts. [ 136 ] According to the European Union, Web 4.0 is "the expected fourth generation of the World Wide Web. Using advanced artificial and ambient intelligence, the internet of things, trusted blockchain transactions, virtual worlds and XR capabilities, digital and real objects and environments are fully integrated and communicate with each other, enabling truly intuitive, immersive experiences, seamlessly blending the physical and digital worlds". [ 137 ]
Historiography of the Web poses specific challenges, including disposable data, missing links, lost content and archived websites, which have consequences for web historians. Sites such as the Internet Archive aim to preserve content. [ 138 ] [ 139 ] | https://en.wikipedia.org/wiki/History_of_the_World_Wide_Web |
The center of the Universe is a concept that lacks a coherent definition in modern astronomy ; according to standard cosmological theories on the shape of the universe , it has no distinct spatial center.
Historically, different people have suggested various locations as the center of the Universe. Many mythological cosmologies included an axis mundi , the central axis of a flat Earth that connects the Earth, heavens, and other realms together. In the 4th century BC Greece, philosophers developed the geocentric model , based on astronomical observation; this model proposed that the center of the Universe lies at the center of a spherical, stationary Earth, around which the Sun, Moon, planets, and stars rotate. With the development of the heliocentric model by Nicolaus Copernicus in the 16th century, the Sun was believed to be the center of the Universe, with the planets (including Earth) and stars orbiting it.
In the early 20th century, the discovery of other galaxies and the development of the Big Bang theory led to the development of cosmological models of a homogeneous, isotropic Universe that lacks a distinct spatial center; in a sense, the center is everywhere, [ 1 ] since space itself expands from a shared origin in time, the Big Bang. [ 2 ]
In religion and mythology, the axis mundi (also cosmic axis, world axis, world pillar, columna cerului, center of the world) is a point described as the center of the world, the connection between it and Heaven, or both.
Mount Hermon was regarded as the axis mundi in Canaanite tradition, from where the sons of God are introduced descending in 1 Enoch (1En6:6). [ 3 ] The ancient Greeks regarded several sites as places of earth's omphalos (navel) stone, notably the oracle at Delphi , while still maintaining a belief in a cosmic world tree and in Mount Olympus as the abode of the gods. Judaism has the Temple Mount and Mount Sinai , Christianity has the Mount of Olives and Calvary , Islam has Mecca , said to be the place on earth that was created first, and the Temple Mount ( Dome of the Rock ). In Shinto , the Ise Shrine is the omphalos. In addition to the Kun Lun Mountains , where it is believed the peach tree of immortality is located, the Chinese folk religion recognizes four other specific mountains as pillars of the world.
Sacred places constitute world centers ( omphalos ) with the altar or place of prayer as the axis. Altars, incense sticks, candles and torches form the axis by sending a column of smoke, and prayer, toward heaven. The architecture of sacred places often reflects this role. "Every temple or palace--and by extension, every sacred city or royal residence--is a Sacred Mountain, thus becoming a Centre." [ 4 ] The stupa of Hinduism , and later Buddhism , reflects Mount Meru . Cathedrals are laid out in the form of a cross , with the vertical bar representing the union of Earth and heaven as the horizontal bars represent union of people to one another, with the altar at the intersection. Pagoda structures in Asian temples take the form of a stairway linking Earth and heaven. A steeple in a church or a minaret in a mosque also serve as connections of Earth and heaven. Structures such as the maypole , derived from the Saxons ' Irminsul , and the totem pole among indigenous peoples of the Americas also represent world axes. The calumet , or sacred pipe, represents a column of smoke (the soul) rising from a world center. [ 5 ] A mandala creates a world center within the boundaries of its two-dimensional space analogous to that created in three-dimensional space by a shrine. [ 6 ]
In medieval times some Christians thought of Jerusalem as the center of the world (Latin: umbilicus mundi , Greek: Omphalos ), and it was so represented in the so-called T and O maps . Byzantine hymns speak of the Cross being "planted in the center of the earth."
The Flat Earth model is a belief that the Earth 's shape is a plane or disk covered by a firmament containing heavenly bodies. Most pre-scientific cultures have had conceptions of a Flat Earth, including Greece until the classical period , the Bronze Age and Iron Age civilizations of the Near East until the Hellenistic period , India until the Gupta period (early centuries AD) and China until the 17th century. It was also typically held in the aboriginal cultures of the Americas , and a flat Earth domed by the firmament in the shape of an inverted bowl is common in pre-scientific societies. [ 7 ]
"Center" is well-defined in a Flat Earth model. A flat Earth would have a definite geographic center. There would also be a unique point at the exact center of a spherical firmament (or a firmament that was a half-sphere).
The Flat Earth model gave way to an understanding of a spherical Earth . Aristotle (384–322 BC) provided observational arguments supporting the idea of a spherical Earth: different stars are visible from different locations, travelers going south see southern constellations rise higher above the horizon, and the shadow of the Earth on the Moon during a lunar eclipse is round, as spheres cast circular shadows while discs generally do not.
This understanding was accompanied by models of the Universe that depicted the Sun , Moon , stars , and naked eye planets circling the spherical Earth, including the noteworthy models of Aristotle (see Aristotelian physics ) and Ptolemy . [ 8 ] This geocentric model was the dominant model from the 4th century BC until the 17th century AD.
Heliocentrism, or heliocentricism, [ 9 ] [ note 1 ] is the astronomical model in which the Earth and planets revolve around a relatively stationary Sun at the center of the Solar System . The word comes from the Greek ( ἥλιος helios "sun" and κέντρον kentron "center").
The notion that the Earth revolves around the Sun had been proposed as early as the 3rd century BC by Aristarchus of Samos , [ 10 ] [ 11 ] [ note 2 ] but had received no support from most other ancient astronomers.
Nicolaus Copernicus ' major theory of a heliocentric model was published in De revolutionibus orbium coelestium ( On the Revolutions of the Celestial Spheres ), in 1543, the year of his death, though he had formulated the theory several decades earlier. Copernicus' ideas were not immediately accepted, but they did begin a paradigm shift away from the Ptolemaic geocentric model to a heliocentric model. The Copernican Revolution , as this paradigm shift would come to be called, would last until Isaac Newton's work over a century later.
Johannes Kepler published his first two laws about planetary motion in 1609, having found them by analyzing the astronomical observations of Tycho Brahe . [ 12 ] Kepler's third law was published in 1619. [ 12 ] The first law was "The orbit of every planet is an ellipse with the Sun at one of the two foci ."
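In modern notation (a standard reconstruction, not Kepler's own formulation), the first law gives the orbit in polar coordinates centered on the Sun, and the third law relates the orbital period $T$ to the semi-major axis $a$:

$$r(\theta) = \frac{a\,(1 - e^{2})}{1 + e\cos\theta}, \qquad \frac{T^{2}}{a^{3}} = \text{constant for all planets}$$

Here $e$ is the orbital eccentricity and $\theta$ is measured from perihelion; for $e = 0$ the orbit reduces to a circle with the Sun at its center.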
On 7 January 1610 Galileo used his telescope, with optics superior to those previously available. He described "three fixed stars, totally invisible [ 13 ] by their smallness", all close to Jupiter, and lying on a straight line through it. [ 14 ] Observations on subsequent nights showed that the positions of these "stars" relative to Jupiter were changing in a way that would have been inexplicable if they had really been fixed stars. On 10 January Galileo noted that one of them had disappeared, an observation which he attributed to its being hidden behind Jupiter. Within a few days he concluded that they were orbiting Jupiter: [ 15 ] Galileo stated that he had reached this conclusion on 11 January. [ 14 ] He had discovered three of Jupiter's four largest satellites (moons), and he discovered the fourth on 13 January.
His observations of the satellites of Jupiter created a revolution in astronomy: a planet with smaller planets orbiting it did not conform to the principles of Aristotelian cosmology , which held that all heavenly bodies should circle the Earth. [ 14 ] [ 16 ] Many astronomers and philosophers initially refused to believe that Galileo could have discovered such a thing. By demonstrating that, like Earth, other planets could have moons of their own that followed prescribed paths, and hence that orbital mechanics did not apply only to the Earth, planets, and Sun, Galileo had essentially shown that other planets might be "like Earth". [ 14 ]
Newton made clear his heliocentric view of the Solar System – developed in a somewhat modern way, because already in the mid-1680s he recognised the "deviation of the Sun" from the centre of gravity of the Solar System. [ 17 ] For Newton, it was not precisely the centre of the Sun or any other body that could be considered at rest, but rather "the common centre of gravity of the Earth, the Sun and all the Planets is to be esteem'd the Centre of the World", and this centre of gravity "either is at rest or moves uniformly forward in a right line" (Newton adopted the "at rest" alternative in view of common consent that the centre, wherever it was, was at rest). [ 18 ]
Before the 1920s, it was generally believed that there were no galaxies other than the Milky Way (see for example The Great Debate ). Thus, to astronomers of previous centuries, there was no distinction between a hypothetical center of the galaxy and a hypothetical center of the universe.
In 1750 Thomas Wright , in his work An original theory or new hypothesis of the Universe , correctly speculated that the Milky Way might be a body of a huge number of stars held together by gravitational forces rotating about a Galactic Center , akin to the Solar System but on a much larger scale. The resulting disk of stars can be seen as a band on the sky from the Earth's perspective inside the disk. [ 19 ] In a treatise in 1755, Immanuel Kant elaborated on Wright's idea about the structure of the Milky Way. In 1785, William Herschel proposed such a model based on observation and measurement, [ 20 ] leading to scientific acceptance of galactocentrism , a form of heliocentrism with the Sun at the center of the Milky Way.
The 19th century astronomer Johann Heinrich von Mädler proposed the Central Sun Hypothesis, according to which the stars of the universe revolved around a point in the Pleiades .
In 1917, Heber Doust Curtis observed a nova within what was then called the " Andromeda Nebula". Searching the photographic record, he found 11 more novae. Curtis noticed that these novae were drastically fainter than novae in the Milky Way . Based on this, Curtis was able to estimate that Andromeda was 500,000 light-years away. As a result, he became a proponent of the so-called "island universes" hypothesis, which held that objects previously believed to be spiral nebulae within the Milky Way were actually independent galaxies. [ 21 ]
In 1920, the Great Debate between Harlow Shapley and Curtis took place, concerning the nature of the Milky Way, spiral nebulae, and the dimensions of the Universe. To support his claim that the Great Andromeda Nebula (M31) was an external galaxy, Curtis noted the appearance of dark lanes resembling the dust clouds of the Milky Way, as well as the significant Doppler shift . In 1922 Ernst Öpik presented an elegant and simple astrophysical method to estimate the distance of M31. His result put the Andromeda Nebula far outside the Milky Way, at a distance of about 450,000 parsecs , which is about 1,500,000 ly . [ 22 ] Edwin Hubble settled the debate about whether other galaxies exist in 1925 when he identified extragalactic Cepheid variable stars for the first time on astronomical photos of M31. These were made using the 2.5-metre (100 in) Hooker telescope , and they enabled the distance of the Great Andromeda Nebula to be determined. His measurement demonstrated conclusively that this feature was not a cluster of stars and gas within the Milky Way, but an entirely separate galaxy located a significant distance away. This proved the existence of other galaxies. [ 23 ]
Hubble also demonstrated that the redshift of other galaxies is approximately proportional to their distance from Earth ( Hubble's law ). This created the appearance that our galaxy was at the center of an expanding Universe; however, Hubble rejected that interpretation on philosophical grounds:
...if we see the nebulae all receding from our position in space, then every other observer, no matter where he may be located, will see the nebulae all receding from his position. However, the assumption is adopted. There must be no favoured location in the Universe, no centre, no boundary; all must see the Universe alike. And, in order to ensure this situation, the cosmologist postulates spatial isotropy and spatial homogeneity, which is his way of stating that the Universe must be pretty much alike everywhere and in all directions." [ 24 ]
The redshift observations of Hubble, in which galaxies appear to be moving away from us at a rate proportional to their distance from us, are now understood to be associated with the expansion of the universe . All observers anywhere in the Universe will observe the same effect.
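In modern notation (the standard form, not Hubble's original), the law states that a galaxy's recession velocity $v$ is proportional to its distance $D$:

$$v = H_{0}\,D$$

where $H_{0}$ is the Hubble constant. Because this linear relation holds identically for every observer moving with the expansion, it singles out no privileged center.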
The Copernican principle , named after Nicolaus Copernicus, states that the Earth is not in a central, specially favored position. [ 25 ] Hermann Bondi named the principle after Copernicus in the mid-20th century, although the principle itself dates back to the 16th-17th century paradigm shift away from the geocentric Ptolemaic system .
The cosmological principle is an extension of the Copernican principle which states that the Universe is homogeneous (the same observational evidence is available to observers at different locations in the Universe) and isotropic (the same observational evidence is available by looking in any direction in the Universe). A homogeneous, isotropic Universe does not have a center. [ 26 ] | https://en.wikipedia.org/wiki/History_of_the_center_of_the_universe |
The chemical industry comprises the companies and other organizations that develop and produce industrial, specialty and other chemicals . Central to the modern world economy , the chemical industry converts raw materials ( oil , natural gas , air , water , metals , and minerals ) into commodity chemicals for industrial and consumer products . It includes industries for petrochemicals such as polymers for plastics and synthetic fibers ; inorganic chemicals such as acids and alkalis ; agricultural chemicals such as fertilizers , pesticides and herbicides ; and other categories such as industrial gases , speciality chemicals and pharmaceuticals .
Various professionals are involved in the chemical industry including chemical engineers, chemists and lab technicians.
Although chemicals were made and used throughout history, the birth of the heavy chemical industry (production of chemicals in large quantities for a variety of uses) coincided with the beginnings of the Industrial Revolution .
One of the first chemicals to be produced in large amounts through industrial processes was sulfuric acid . In 1736 the pharmacist Joshua Ward developed a process for its production that involved heating sulfur with saltpeter, allowing the sulfur to oxidize and combine with water. It was the first practical production of sulfuric acid on a large scale. John Roebuck and Samuel Garbett were the first to establish a large-scale factory in Prestonpans, Scotland , in 1749, which used leaden condensing chambers for the manufacture of sulfuric acid. [ 1 ] [ 2 ]
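In modern notation (a simplified reconstruction; the 18th-century chemists did not describe it this way), the underlying chemistry of these early processes amounts to burning sulfur to sulfur dioxide, oxidizing it to sulfur trioxide with nitrogen oxides from the saltpeter acting as a catalyst, and absorbing the trioxide in water:

$$\mathrm{S + O_2 \rightarrow SO_2}$$
$$\mathrm{SO_2 + NO_2 \rightarrow SO_3 + NO, \qquad 2\,NO + O_2 \rightarrow 2\,NO_2}$$
$$\mathrm{SO_3 + H_2O \rightarrow H_2SO_4}$$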
In the early 18th century, cloth was bleached by treating it with stale urine or sour milk and exposing it to sunlight for long periods of time, which created a severe bottleneck in production. Sulfuric acid began to be used as a more efficient agent as well as lime by the middle of the century, but it was the discovery of bleaching powder by Charles Tennant that spurred the creation of the first great chemical industrial enterprise. His powder was made by reacting chlorine with dry slaked lime and proved to be a cheap and successful product. He opened the St Rollox Chemical Works , north of Glasgow , and production went from just 52 tons in 1799 to almost 10,000 tons just five years later. [ 3 ]
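In modern terms (a reconstruction, not Tennant's own notation), the reaction passes chlorine over dry slaked lime to give the mixed calcium chloride–hypochlorite salt that constitutes bleaching powder:

$$\mathrm{Cl_2 + Ca(OH)_2 \rightarrow Ca(OCl)Cl + H_2O}$$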
Soda ash was used since ancient times in the production of glass , textiles , soap , and paper , and the source of the alkali had traditionally been wood ashes in Western Europe . By the 18th century, this source was becoming uneconomical due to deforestation, and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt ( sodium chloride ). The Leblanc process was patented in 1791 by Nicolas Leblanc , who then built a Leblanc plant at Saint-Denis . [ 4 ] He was denied his prize money because of the French Revolution . [ 5 ]
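The Leblanc process itself can be summarized by two overall reactions (modern notation): salt is first converted to sodium sulfate ("salt cake") with sulfuric acid, and the sulfate is then roasted with limestone and coal to yield soda ash:

$$\mathrm{2\,NaCl + H_2SO_4 \rightarrow Na_2SO_4 + 2\,HCl}$$
$$\mathrm{Na_2SO_4 + CaCO_3 + 2\,C \rightarrow Na_2CO_3 + CaS + 2\,CO_2}$$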
In Britain, the Leblanc process became popular. [ 5 ] William Losh built the first soda works in Britain at the Losh, Wilson and Bell works on the River Tyne in 1816, but it remained on a small scale due to heavy taxes on salt production until 1824. When these taxes were repealed, the British soda industry was able to expand rapidly. James Muspratt 's chemical works in Liverpool and Charles Tennant's complex near Glasgow became the largest chemical production centres anywhere. By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined.
These huge factories began to produce a greater diversity of chemicals as the Industrial Revolution matured. Originally, large quantities of alkaline waste were vented into the environment from the production of soda, provoking one of the first pieces of environmental legislation to be passed in 1863. This provided for close inspection of the factories and imposed heavy fines on those exceeding the limits on pollution. Methods were devised to make useful byproducts from the alkali.
The Solvay process was developed by the Belgian industrial chemist Ernest Solvay in 1861. In 1864, Solvay and his brother Alfred constructed a plant in Charleroi, Belgium. In 1874, they expanded into a larger plant in Nancy, France. The new process proved more economical and less polluting than the Leblanc method, and its use spread. In the same year, Ludwig Mond visited Solvay to acquire the rights to use his process; he and John Brunner formed Brunner, Mond & Co. and built a Solvay plant at Winnington , England. Mond was instrumental in making the Solvay process a commercial success: between 1873 and 1880 he made several refinements that removed byproducts which could inhibit the production of sodium carbonate.
The manufacture of chemical products from fossil fuels began at scale in the early 19th century. The coal tar and ammoniacal liquor residues of coal gas manufacture for gas lighting began to be processed in 1822 at the Bonnington Chemical Works in Edinburgh to make naphtha , pitch oil (later called creosote ), pitch , lampblack ( carbon black ) and sal ammoniac ( ammonium chloride ). [ 6 ] Ammonium sulphate fertiliser, asphalt road surfacing , coke oil and coke were later added to the product line.
The late 19th century saw an explosion in both the quantity of production and the variety of chemicals that were manufactured. Large chemical industries arose in Germany and later in the United States.
Production of synthetic fertilizer for agriculture was pioneered by Sir John Lawes at his purpose-built Rothamsted Research facility. In the 1840s, he established large works near London for the manufacture of superphosphate of lime . Processes for the vulcanization of rubber were patented by Charles Goodyear in the United States and Thomas Hancock in England in the 1840s. The first synthetic dye was discovered by William Henry Perkin in London in 1856. He partly transformed aniline into a crude mixture which, when extracted with alcohol, produced a substance with an intense purple colour. He also developed the first synthetic perfumes. German industry quickly came to dominate the field of synthetic dyes: the three major firms BASF , Bayer , and Hoechst produced several hundred different dyes, and by 1913 German industry produced almost 90% of the world's supply of dyestuffs and sold approximately 80% of its production abroad. [ 7 ] In the United States, Herbert Henry Dow 's use of electrochemistry to produce chemicals from brine was a commercial success that helped to promote the country's chemical industry. [ 8 ]
The petrochemical industry can be traced back to the oil works of the Scottish chemist James Young and the Canadian Abraham Pineo Gesner . The first plastic was invented by Alexander Parkes , an English metallurgist. In 1856, he patented Parkesine , a celluloid based on nitrocellulose treated with a variety of solvents. [ 9 ] This material, exhibited at the 1862 London International Exhibition, anticipated many of the modern aesthetic and utility uses of plastics. The industrial production of soap from vegetable oils was started by William Lever and his brother James in 1885 in Lancashire, based on a modern chemical process invented by William Hough Watson that used glycerin and vegetable oils. [ 10 ]
By the 1920s, chemical firms had consolidated into large conglomerates: IG Farben in Germany, Rhône-Poulenc in France, and Imperial Chemical Industries in Britain. DuPont became a major chemicals firm in the early 20th century in the United States.
Polymers and plastics such as polyethylene , polypropylene , polyvinyl chloride , polyethylene terephthalate , polystyrene and polycarbonate comprise about 80% of the industry's output worldwide. [ 11 ] Chemicals are used in many different consumer goods, and are also used in many different sectors. This includes agriculture manufacturing, construction, and service industries. [ 11 ] Major industrial customers include rubber and plastic products, textiles , apparel, petroleum refining, pulp and paper , and primary metals. Chemicals are nearly a $5 trillion global enterprise, and the EU and U.S. chemical companies are the world's largest producers. [ 12 ]
Sales of the chemical business can be divided into a few broad categories, including basic chemicals (about 35–37% of dollar output), life sciences (30%), specialty chemicals (20–25%) and consumer products (about 10%). [ 13 ]
Basic chemicals, or "commodity chemicals", are a broad chemical category including polymers, bulk petrochemicals and intermediates, other derivatives and basic industrials, inorganic chemicals , and fertilizers .
Polymers, the largest revenue segment, include all categories of plastics and human-made fibers. The major markets for plastics are packaging , followed by home construction, containers, appliances, pipe, transportation, toys, and games.
Principal raw materials for polymers are bulk petrochemicals like ethylene, propylene and benzene.
Petrochemicals and intermediate chemicals are primarily made from liquefied petroleum gas (LPG), natural gas and crude oil fractions. Large volume products include ethylene , propylene , benzene , toluene , xylenes , methanol , vinyl chloride monomer (VCM), styrene , butadiene , and ethylene oxide . These basic or commodity chemicals are the starting materials used to manufacture many polymers and other more complex organic chemicals particularly those that are made for use in the specialty chemicals category.
Other derivatives and basic industrials include synthetic rubber , surfactants , dyes and pigments , turpentine , resins , carbon black , explosives , and rubber products and contribute about 20 percent of the basic chemicals' external sales.
Inorganic chemicals (about 12% of the revenue output) make up the oldest of the chemical categories. Products include salt , chlorine , caustic soda , soda ash , acids (such as nitric acid , phosphoric acid , and sulfuric acid ), titanium dioxide , and hydrogen peroxide .
Fertilizers are the smallest category (about 6 percent) and include phosphates , ammonia , and potash chemicals.
Life sciences (about 30% of the dollar output of the chemistry business) include differentiated chemical and biological substances, pharmaceuticals , diagnostics, animal health products , vitamins , and pesticides . While much smaller in volume than other chemical sectors, their products tend to have high prices – over ten dollars per pound – growth rates of 1.5 to 6 times GDP , and research and development spending at 15 to 25% of sales. Life science products are usually produced with high specifications and are closely scrutinized by government agencies such as the Food and Drug Administration. Pesticides, also called "crop protection chemicals", are about 10% of this category and include herbicides , insecticides , and fungicides . [ 13 ]
Specialty chemicals are a category of relatively high-valued, rapidly growing chemicals with diverse end product markets. Typical growth rates are one to three times GDP with prices over a dollar per pound. They are generally characterized by their innovative aspects. Products are sold for what they can do rather than for what chemicals they contain. Products include electronic chemicals, industrial gases , adhesives and sealants as well as coatings, industrial and institutional cleaning chemicals, and catalysts. In 2012, excluding fine chemicals, the $546 billion global specialty chemical market was 33% paints, coatings and surface treatments; 27% advanced polymers; 14% adhesives and sealants; 13% additives; and 13% pigments and inks. [ 14 ]
Speciality chemicals are sold as effect or performance chemicals. They are sometimes formulated mixtures, unlike " fine chemicals ", which are almost always single-molecule products.
Consumer products include direct product sales of chemicals such as soaps , detergents , and cosmetics . Typical growth rates are 0.8 to 1.0 times GDP. [ citation needed ]
Consumers rarely come into contact with basic chemicals, but they encounter polymers and specialty chemicals everywhere in daily life: plastics, cleaning materials, cosmetics, paints and coatings, electronics, automobiles and the materials used in home construction. [ 14 ] These specialty products are marketed by chemical companies to the downstream manufacturing industries as pesticides , specialty polymers , electronic chemicals, surfactants , construction chemicals, industrial cleaners, flavours and fragrances , specialty coatings, printing inks, water-soluble polymers, food additives , paper chemicals , oil field chemicals, plastic adhesives, adhesives and sealants , cosmetic chemicals , water management chemicals , catalysts , and textile chemicals. Chemical companies rarely supply these products directly to the consumer.
Annually the American Chemistry Council tabulates the US production volume of the top 100 chemicals. In 2000, the aggregate production volume of the top 100 chemicals totaled 502 million tons, up from 397 million tons in 1990. Inorganic chemicals tend to be the largest volume but much smaller in dollar revenue due to their low prices. The top 11 of the 100 chemicals in 2000 were sulfuric acid (44 million tons), nitrogen (34), ethylene (28), oxygen (27), lime (22), ammonia (17), propylene (16), polyethylene (15), chlorine (13), phosphoric acid (13) and diammonium phosphates (12). [ citation needed ]
The largest chemical producers today are global companies with international operations and plants in numerous countries. Below is a list of the top 25 chemical companies by chemical sales in 2015. (Note: Chemical sales represent only a portion of total sales for some companies.)
Top chemical companies by chemical sales in 2015. [ 15 ]
From the perspective of chemical engineers, the chemical industry involves the use of chemical processes such as chemical reactions and refining methods to produce a wide variety of solid, liquid, and gaseous materials. Most of these products serve to manufacture other items, although a smaller number go directly to consumers. Solvents , pesticides , lye , washing soda , and portland cement provide a few examples of products used by consumers.
The industry includes manufacturers of inorganic and organic industrial chemicals, ceramic products, petrochemicals, agrochemicals, polymers and rubber (elastomers), oleochemicals (oils, fats, and waxes), explosives, and fragrances and flavors.
Related industries include petroleum , glass , paint , ink , sealant , adhesive , pharmaceuticals and food processing .
Chemical processes such as chemical reactions operate in chemical plants to form new substances in various types of reaction vessels. In many cases, the reactions take place in special corrosion-resistant equipment at elevated temperatures and pressures with the use of catalysts . The products of these reactions are separated using a variety of techniques, including distillation (especially fractional distillation ), precipitation , crystallization , adsorption , filtration , sublimation , and drying .
The processes and products are usually tested during and after manufacture by dedicated instruments and on-site quality control laboratories to ensure safe operation and to assure that the product will meet required specifications . More organizations within the industry are implementing chemical compliance software to maintain quality products and manufacturing standards. [ 16 ] The products are packaged and delivered by many methods, including pipelines, tank-cars, and tank-trucks (for both solids and liquids), cylinders, drums, bottles, and boxes. Chemical companies often have a research-and-development laboratory for developing and testing products and processes. These facilities may include pilot plants, and such research facilities may be located at a site separate from the production plant(s).
The scale of chemical manufacturing tends to be organized from largest in volume ( petrochemicals and commodity chemicals ), to specialty chemicals , and the smallest, fine chemicals .
The petrochemical and commodity chemical manufacturing units are, on the whole, single-product continuous processing plants. Not all petrochemical or commodity chemical materials are made in a single location, but groups of related materials are often made in one place to induce industrial symbiosis as well as material, energy and utility efficiency and other economies of scale .
Those chemicals made on the largest of scales are made in a few manufacturing locations around the world, for example in Texas and Louisiana along the Gulf Coast of the United States , on Teesside ( United Kingdom ), and in Rotterdam in the Netherlands . The large-scale manufacturing locations often have clusters of manufacturing units that share utilities and large-scale infrastructure such as power stations , port facilities , and road and rail terminals. To demonstrate the clustering and integration mentioned above, some 50% of the United Kingdom's petrochemical and commodity chemicals are produced by the Northeast of England Process Industry Cluster on Teesside .
Specialty and fine chemicals are mostly made in discrete batch processes. These manufacturers are often found in similar locations, but in many cases they are to be found in multi-sector business parks.
In the U.S. there are 170 major chemical companies. [ 17 ] They operate internationally, with more than 2,800 facilities outside the U.S. and 1,700 foreign subsidiaries or affiliates. The U.S. chemical output is $750 billion a year. The industry records large trade surpluses and employs more than a million people in the United States alone. The chemical industry is also the second largest consumer of energy in manufacturing and spends over $5 billion annually on pollution abatement.
In Europe, the chemical, plastics, and rubber sectors are among the largest industrial sectors. [ 18 ] Together they generate about 3.2 million jobs in more than 60,000 companies. Since 2000 the chemical sector alone has represented 2/3 of the entire manufacturing trade surplus of the EU.
In 2012, the chemical sector accounted for 12% of the EU manufacturing industry's added value. Europe remains the world's biggest chemical trading region with 43% of the world's exports and 37% of the world's imports, although the latest data shows that Asia is catching up with 34% of the exports and 37% of imports. [ 19 ] Even so, Europe still has a trading surplus with all regions of the world except Japan and China where in 2011 there was a chemical trade balance. Europe's trade surplus with the rest of the world today amounts to 41.7 billion Euros. [ 20 ]
Over the 20 years between 1991 and 2011, the European Chemical industry saw its sales increase from 295 billion Euros to 539 billion Euros, a picture of constant growth. Despite this, the European industry's share of the world chemical market has fallen from 36% to 20%. This has resulted from the huge increase in production and sales in emerging markets like India and China. [ 21 ] The data suggest that 95% of this impact is from China alone. In 2012 the data from the European Chemical Industry Council shows that five European countries account for 71% of the EU's chemicals sales. These are Germany, France, the United Kingdom, Italy and the Netherlands. [ 22 ]
The chemical industry has seen growth in China, India, Korea, the Middle East, South East Asia, Nigeria and Brazil. The growth is driven by changes in feedstock availability and price, labor and energy costs, differential rates of economic growth and environmental pressures.
Just as certain companies emerge as the industry's main producers, industrialized countries can be ranked on a global scale by the value of chemical production they export. Though the business of chemistry is worldwide in scope, the bulk of the world's $3.7 trillion chemical output is accounted for by only a handful of industrialized nations. The United States alone produced $689 billion, 18.6 percent of the total world chemical output, in 2008. [ 23 ] | https://en.wikipedia.org/wiki/History_of_the_chemical_industry
The chemical industry in the People's Republic of China was valued at around $1.75 trillion in 2014. [ 1 ] The country is currently the largest chemicals manufacturer in the world. [ 2 ]
The chemical industry is central to modern China's economy. It uses specialized methods to alter the structure, composition or synthesis of substances to produce new products such as steel, plastics, and ethanol. The chemical industry provides building materials for China's infrastructure, including subways, high-speed rail, and highways.
Prior to 1978, most products were made by state-owned enterprises, but the state-owned share of output has since decreased.
The Chinese chemical industry is also one of the world's largest producers of both controlled and non-controlled precursor chemicals used in the global illicit drug trade , particularly in the Golden Triangle , Mexico, Latin America and Europe, [ 3 ] with large volumes of these substances traded through the growing online research chemical (RC) industry via social media, B2B platforms and the dark web. [ 4 ] [ 5 ]
The modern chemical industry was born of the Industrial Revolution, which took place from about 1760 to sometime between 1820 and 1840. This revolution included the change from hand production methods to machines, new iron production processes and new chemical manufacturing. [ 6 ] Before that, China's chemical products were mainly produced in hand workshops.
According to tradition, Shennong tested hundreds of herbs to determine their medicinal value and wrote "The Divine Farmer's Herb-Root Classic". This book recorded the efficacy of 365 medicines derived from plants, animals, and minerals and gave them rarity ratings and grades. Shennong's work led the way to Chinese medicine. In the Ming dynasty, Li Shizhen wrote the "Compendium of Materia Medica", which contained more than 1,800 kinds of drugs and described the nature, flavor, form, type and therapeutic use of over 1,000 herbs. The book is considered the primary reference work for herbal preparation. [ 7 ] These works were significant to the development of traditional Chinese medicine, and they laid the foundation for modern Chinese medical chemistry.
Tu Youyou is a Chinese pharmaceutical chemist. She discovered qinghaosu ( artemisinin ) and applied it to the treatment of malaria. Qinghaosu has saved millions of lives in South China, South America, Southeast Asia, and Africa, and its discovery was an important medical breakthrough of the last century. Tu Youyou received the 2015 Nobel Prize in Physiology or Medicine and the Lasker Award in Clinical Medicine for her work. She is the first Chinese woman to receive a Nobel Prize in Physiology or Medicine. [ 8 ]
China's agricultural production efficiency rose in the 20th century because of the application of chemical pesticides and fertilizers. In 1909, Franklin Hiram King , a US professor of agriculture, made a tour of China. His book "Farmers of Forty Centuries" described China's farming and inspired many Chinese farmers to practice ecological farming and use fertilizers. [ 9 ] Beginning in 1978, the Chinese government created the Family Production Responsibility System and encouraged farmers to use fertilizer. [ 10 ]
Chemical fertilizer can increase output by 50% to 80%. [ 11 ] The chemical industry produces fertilizers containing the nutrients nitrogen, phosphorus and potassium, which can meet the demands of different crops and soil structures. China is currently the largest consumer and producer of nitrogen fertilizers. [ 12 ]
According to statistics, by 1984 there were about 9 million chemical substances in the world, of which about 43% were materials. Although the number of materials is large, they can be classified by chemical composition into three categories: metal materials, inorganic non-metal materials and composite materials.
Steel is an important metal material in China's chemical industry. In 2016, world annual steel production was 1,621 million tons, of which 804 million tons (49.6%) were produced in China, 105 million tons (6.5%) in Japan, 89 million tons (5.5%) in India, and 79 million tons (4.9%) in the US. [ 13 ]
China's production of steel increased from 100 million tons in 2000 to 250 million tons in 2004. This caused rising demand for the raw materials necessary for steel production, including pig iron, iron ore, scrap metal, lime and dolomite, coke and coal. The price of iron ore increased by over 70% from 2004 to 2005. Thus, in December 2005, China decided to limit steel production to no more than 400 million tons per year within five years, in order to slow the rise in raw material prices. [ 14 ]
In 2016, China's markets for ethyl alcohol and other basic organic chemicals and for plastic materials and resins were valued at $137 billion and $184 billion respectively, with growth rates of 9% and 10%.
China is the largest producer and exporter of plastic materials in the world. The main driver of this market is the expanded application of ethanol in China; demand for ethanol in China is now about 2.3 million tons. [ 15 ]
China's Chenguang Institute, a key operating division, has developed a number of advanced epoxy resins , organic silicones , polymer materials and specialty engineering plastics. It signed a JV agreement with DuPont 's high-performance polymer division to produce and sell premixed rubber and raw fluoro-rubber . The agreement included the establishment of an ultra-modern premixed rubber factory in Shanghai, which began operating in 2011. [ 16 ]
Composite materials are new structural materials, characterized by a combination of volumetric strength , volumetric stiffness and corrosion resistance superior to that of metallic materials. A composite is composed of a matrix material, such as synthetic resin, metal or ceramic, and a reinforcing material composed of inorganic or organic synthetic fibres. The variety of available matrices and reinforcing materials means that a selective fit can be made to produce composites with satisfactory performance, giving chemical materials a broader prospect.
Sinochem and the Shanghai Chemical Industry Institute have set up a laboratory for composite materials. The two sides will jointly develop technology, transfer the results into practice, and apply them in the industry of carbon fiber and its curing resins, in order to promote the technologies and products of high-performance composite materials and to facilitate their industrialization and marketization . At present, this laboratory has launched a project to research and develop a spray-free carbon fiber composite material. At first, this material will be applied to new-energy cars; it can reduce both the weight of the cars and the cost of applying composite materials while significantly improving production efficiency. [ 17 ]
China has one of the top three chemical companies in the world: Sinopec, which had $43.8 billion in chemical sales in 2015.
A list of China's top 20 chemical corporations by turnover in 2018 is shown below. [ 18 ]
Chinese companies plan to move into the specialties side of the market, and some have already become significant players there, such as Zhejiang NHU, a vitamin maker; Yantai Wanhua, an isocyanates maker; and Bairun, the leader in the Chinese flavors-and-fragrances market. [ 19 ]
The value of China's chemical market has increased over the past 30 years; in 2015, it represented about 30% of worldwide chemical demand. [ 20 ]
China's chemical demand growth has slowed from the double-digit rates of the past 10 years, but the country is still expected to account for 60% of global demand growth from 2011 to 2020. [ 19 ]
As of the end of November 2011, there were 24,125 enterprises above designated size in China's chemical industry, with a total output value of 6.0 trillion yuan, a year-on-year increase of 35.2%, accounting for 58.61% of the total output value of the whole industry. In the first 11 months of 2011, fixed-asset investment in the chemical industry was 861.721 billion yuan, a year-on-year increase of 26.9%, which was 5.5 percentage points higher than the industry average, accounting for 70.12%. In the first 10 months of 2011, the total profit of the chemical industry was 320.88 billion yuan, a year-on-year increase of 44.4%, accounting for 47.1% of total industry profits. The annual output value of the chemical industry was expected to be about 6.58 trillion yuan, a year-on-year increase of 32%, with total profit of 350 billion yuan, an increase of 35%. In 2011, the added value of the chemical industry increased by 14.8% year-on-year, and the growth rate slowed by 1% year-on-year. [ 21 ]
A list of the Chinese chemical industry's main products in 2011 is shown below. [ 21 ]
The Chinese government set policy goals to address unemployment and boost the economy in response to the growing population. The government's policies and goals have progressed since the economy was opened up in 1978, and can be divided into three periods:
1978–1990: China's market was opened to the world in 1978, and the government, recognizing the importance of the chemical industry, permitted foreign direct investment but controlled it heavily. Meanwhile, China's domestic chemical demand increased, so most companies decided to invest in production.
1990–2000: Multinationals were allowed to enter the Chinese market and to cooperate with Chinese firms in chemical production.
2000–2011: Foreign direct investment was no longer limited in this period, and multinationals boomed as China became a major exporter of chemicals. [ 20 ]
China's chemical industry has developed over the past 40 years from an economic backwater into the world's largest chemicals-manufacturing economy, consuming vast quantities of raw materials and energy. This change has helped lift hundreds of millions of Chinese out of poverty, but it has also polluted China's air and water. [ 22 ]
The Chinese government has made efforts to fight the pollution. Free plastic shopping bags were banned in 2008, because the production of plastic bags wastes resources and energy, and the bags themselves are non-recyclable and cause environmental pollution. [ 23 ]
At the government's recommendation, chemical companies in China are starting to research and develop green technologies, such as the use of alternative fuels to produce chemical products. Some are using carbon dioxide and other naturally available raw materials to produce industrial products, fuels and other substances. For example, the specialty chemicals company Elevance Renewable Sciences produces highly concentrated detergents using metathesis , a green technology that significantly lowers energy consumption and minimizes pollution. [ 15 ]
China's chemical industry is one of the world's largest producers of both controlled and non-controlled precursor chemicals used in the global illicit drug trade . These chemicals are used by drug producers in the Golden Triangle , Mexico, Latin America and Europe. [ 3 ]
Laos is the key route into the Golden Triangle for precursors from China, and finished products have been seized across the region and beyond. [ 24 ] [ 25 ] Numerous seizures of precursors from China have occurred in recent years, including one disclosed in 2021 in which Lao authorities seized 200 tons of precursors, including 72 tons of propionyl chloride, from China. The chemicals had been transported to Bokeo, Laos, a key gateway to Myanmar's drug factories, especially via the Golden Triangle Special Economic Zone. Commenting on the seizure, a Lao official told Voice of America: "You can only imagine the amount of drugs that volume of precursors can make." [ 26 ] [ 27 ] | https://en.wikipedia.org/wiki/History_of_the_chemical_industry_in_China
The chemical industry in the United Kingdom is one of the UK's main manufacturing industries . At one time, the UK's chemical industry was a world leader. The industry has also been environmentally damaging , and includes radioactive nuclear industries .
In 1855, Alexander Parkes developed the first plastic, a form of celluloid, in Birmingham. His assistant, Daniel Spill , developed it further as xylonite. The American John Wesley Hyatt later tried to claim the patent, after developing another process for celluloid, using camphor , in 1869. The subsequent British Xylonite Company, formed in 1877, later became BX Plastics . A division, Cascelloid, formed in Leicestershire in 1919, became Palitoy . Another division, Halex, made sports products.
Sir William Henry Perkin FRS discovered the first synthetic dye, mauveine , in 1856, produced from aniline , while trying to synthesise quinine at his home on Cable Street in east London . Perkin's work alone led the way for the British chemical industry.
Otto Witt, a Swiss chemist working in Brentford, made the first commercial azo dye in 1875, which he called Chrysoidine . In Carlisle in the late 1890s, Sir James Morton developed some of the first dyes that were resistant to sunlight.
Before World War I, the country was largely dependent on Germany for fine chemicals.
Sir Harry Melville at the University of Birmingham conducted much important polymer research, and Birmingham became a world leader in the field.
21% of the UK's chemical industry is in North West England , notably around Runcorn and Widnes . The chemical industry is 6.8% of UK manufacturing; around 85% of the UK chemical industry is in England.
It employs 500,000 people, including 350,000 indirectly.
It accounts for around 20% of the UK's research and development.
The Castner-Kellner works at Runcorn, of the United Alkali Company (formed in 1890) had made chlorine gas for the First World War trenches. Subsequently ICI's Special Products Department was at Weston Point in Runcorn, under Holbrook Gaskell , who had managed this production of chlorine gas.
The Sutton Oak Chemical Defence Research Establishment in Merseyside was established in 1915 on the former Magnum Steelworks, where Foster Neville Woodward was head of research from 1937. Research was conducted on nerve agents , such as DFP . Poison gas production took place at Rhydymwyn in Flintshire in north Wales. Porton Down has researched toxins since 1916. The Germans had researched nerve agents at Dyhernfurth , under Gerhard Schrader and the Austrian Richard Kuhn . The Americans conducted experiments at Edgewood Arsenal , and the Canadians at Suffield Experimental Station . The V-series of nerve agents, such as VE , VG and VX , were discovered by Ranajit Ghosh (1909 – February 1992) and James Frederick Newman (1915 – 30 June 2004) at ICI in 1952. VG, also known as Tetram, was discovered first. The Swedish chemist Lars-Erik Tammelin was also conducting work in this area.
The Aberporth Rocket Projectile Establishment began in 1941; this is now ParcAberporth , the only site in the UK licensed to fly UAVs , run by Qinetiq .
In 2015, the UK chemical industry exported £50bn of products. [ 6 ]
By comparison, the UK automotive industry exports £35bn and the UK aerospace industry £32bn. [ 7 ]
The industry employs about 30,000 in research and development. The industry invests £5bn in research. The UK automotive industry invests £2.7bn and the UK aerospace industry invests £2.1bn.
Centres of research include the National Formulation Centre at Sedgefield , the Advanced Propulsion Centre in Coventry, with the nearby UK Battery Industrialisation Centre , and the Centre for Process Innovation in the north east. Unilever Research & Development Port Sunlight Laboratory is in the north west. BP has the Sunbury Research Centre in south-west London.
Regulation of the UK chemical industry is largely under the European Chemicals Agency (ECHA) and the Registration, Evaluation, Authorisation and Restriction of Chemicals legislation (REACH).
Teesside and Cheshire are areas with an established chemical industry. Significant chemical plants in the UK include:
Significant chemical companies in the UK have been:
Relevant organisations related to the UK chemical industry are the Institution of Chemical Engineers (IChemE), the Chemical Industries Association , and the Society of Chemical Industry . The chemical industry in Europe is represented by the European Chemical Industry Council or CEFIC. | https://en.wikipedia.org/wiki/History_of_the_chemical_industry_in_the_United_Kingdom |
The compass is a magnetometer used for navigation and orientation that shows direction relative to the geographic cardinal points . The face of a compass displays the compass rose, which shows the four main directions: north (N), east (E), south (S) and west (W). The angle increases in the clockwise direction: north corresponds to 0°, so east is 90°, south is 180° and west is 270°.
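The degree convention above is simple modular arithmetic, and a short sketch can make it concrete. The following Python snippet is an illustration only (the function names are ours, not from any navigation library): it converts a cardinal point to its bearing and snaps an arbitrary bearing to the nearest of the four cardinal points.

```python
# Bearings increase clockwise from north: N = 0, E = 90, S = 180, W = 270.
CARDINALS = {"N": 0.0, "E": 90.0, "S": 180.0, "W": 270.0}

def bearing_of(point: str) -> float:
    """Return the bearing in degrees of one of the four cardinal points."""
    return CARDINALS[point.upper()]

def nearest_cardinal(bearing: float) -> str:
    """Snap a bearing in degrees to the nearest of the four cardinal points."""
    bearing %= 360.0                     # normalize to [0, 360)
    index = round(bearing / 90.0) % 4    # each point owns a 90-degree sector
    return "NESW"[index]

assert bearing_of("S") == 180.0
assert nearest_cardinal(350.0) == "N"    # 350 degrees lies closer to north
assert nearest_cardinal(100.0) == "E"
```

The same arithmetic extends naturally to the finer 8-, 16- or 32-point roses mentioned later in this article by shrinking the sector width.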
The history of the compass began more than 2,000 years ago during the Han dynasty (202 BC – 220 AD). The first compasses were made of lodestone , a naturally magnetized stone of iron, in Han dynasty China. [ 1 ] [ 2 ] It was called the "South Pointing Fish" and was used for land navigation by the mid-11th century, during the Song dynasty (960–1279 AD). Shen Kuo provided the first explicit description of a magnetized needle in 1088, and Zhu Yu mentioned its use in maritime navigation in the text Pingzhou Table Talks , dated 1111–1117. [ 3 ] [ 4 ] Later compasses were made of iron needles, magnetized by striking them with a lodestone. Magnetized needles and compasses were first described in medieval Europe by the English theologian Alexander Neckam (1157–1217 AD). The first literary description of a compass in Western Europe was recorded around 1190, and in the Islamic world in 1232. [ 5 ] Dry compasses began appearing around 1269 in Medieval Europe and 1300 in the Medieval Islamic world . [ 6 ] [ 7 ] [ 8 ] These were replaced in the early 20th century by the liquid-filled magnetic compass. [ 9 ]
Before the introduction of the compass, geographical position and direction at sea were primarily determined by the sighting of landmarks, supplemented with the observation of the position of celestial bodies. [ 10 ] Other techniques included sampling mud from the seafloor (China), [ 11 ] analyzing the flight path of birds, and observing wind, sea debris, and sea state (Polynesia and elsewhere). [ 12 ] Objects that have been understood as having been used for navigation by measuring the angles between celestial objects were discovered in the Indus Valley site of Lothal. [ 13 ] The Norse are believed to have used a type of sun compass to locate true north. On cloudy days, the Vikings may have used cordierite or some other birefringent crystal to determine the sun's direction and elevation from the polarization of daylight; their astronomical knowledge was sufficient to let them use this information to determine their proper heading. [ 14 ] The invention of the compass made it possible to determine a heading when the sky was overcast or foggy, and when landmarks were not in sight. This enabled mariners to navigate safely far from land, increasing sea trade, and contributing to the Age of Discovery . [ 15 ] [ 16 ]
The compass was invented in China during the Han dynasty between the 2nd century BC and 1st century AD where it was called the "south-governor" ( sīnán 司南 ) or "South Pointing Fish" ( 指南魚 ). [ 3 ] The magnetic compass was not, at first, used for navigation, but for geomancy and fortune-telling by the Chinese . The earliest Chinese magnetic compasses were possibly used to order and harmonize buildings by the geomantic principles of feng shui . These early compasses were made with lodestone , a form of the mineral magnetite that is a naturally occurring magnet and aligns itself with the Earth's magnetic field. [ 10 ] People in ancient China discovered that if a lodestone was suspended so it could turn freely, it would always point toward the magnetic poles. Early compasses were used to choose areas suitable for building houses, growing crops, and to search for rare gems. Compasses were later adapted for navigation during the Song dynasty in the 11th century. [ 1 ]
Based on Krotser and Coe's discovery of an Olmec hematite artifact in Mesoamerica , radiocarbon dated to 1400–1000 BC, astronomer John Carlson has hypothesized that the Olmec might have used the geomagnetic lodestone earlier than 1000 BC for geomancy , a method of divination , which if proven true, predates the Chinese use of magnetism for feng shui by a millennium. [ 17 ] Carlson speculates that the Olmecs used similar artifacts as a directional device for astronomical or geomantic purposes but does not suggest navigational usage. The artifact is part of a polished hematite bar with a groove at one end, possibly used for sighting. Carlson's claims have been disputed by other scientific researchers, who have suggested that the artifact is actually a constituent piece of a decorative ornament and not a purposely built compass. [ 18 ] Several other hematite or magnetite artifacts have been found at pre-Columbian archaeological sites in Mexico and Guatemala. [ 19 ] [ 20 ]
A number of early cultures used lodestones, suspended so they could turn, as magnetic compasses for navigation. Early mechanical compasses are referenced in written records of the Chinese , who began using them for navigation "some time before 1050, possibly as early as 850." [ 21 ] [ 10 ] At present, according to Kreutz, scholarly consensus is that the Chinese invention used in navigation pre-dates the first European mention of a compass by 150 years. [ 22 ] The first recorded appearance of the use of the compass in Europe (1190) [ 23 ] is earlier than in the Muslim world (1232), [ 24 ] [ 25 ] as a description of a magnetized needle and its use among sailors occurs in Alexander Neckam 's De naturis rerum (On the Natures of Things), written in 1190. [ 23 ] [ 26 ]
However, there are questions over diffusion. Some historians suggest that the Arabs introduced the compass from China to Europe. [ 27 ] [ 28 ] Some suggested the compass was transmitted from China to Europe and the Islamic world via the Indian Ocean, [ 29 ] or was brought by the crusaders to Europe from China. [ 30 ] However, some scholars have proposed an independent European invention of the compass. [ 31 ]
These are noteworthy Chinese literary references in evidence for its antiquity:
Thus, the use of a magnetic compass by the military for land navigation occurred sometime before 1044, but incontestable evidence for the use of the compass as a maritime navigational device did not appear until 1117.
The typical Chinese navigational compass was in the form of a magnetic needle floating in a bowl of water. [ 46 ] According to Needham , the Chinese in the Song dynasty and continuing Yuan dynasty did make use of a dry compass, although this type never became as widely used in China as the wet compass. [ 47 ] Evidence of this is found in the Shilin Guang Ji ("Guide Through the Forest of Affairs"), published in 1325 by Chen Yuanjing, although its compilation had taken place between 1100 and 1250. [ 47 ] The dry compass in China was a dry suspension compass, a wooden frame crafted in the shape of a turtle hung upside down by a board, with the lodestone sealed in by wax, and if rotated, the needle at the tail would always point in the northern cardinal direction. [ 47 ] Although the European compass-card in a box frame and dry pivot needle was adopted in China after its use was taken by Japanese pirates in the 16th century (who had, in turn, learned of it from Europeans), [ 48 ] the Chinese design of the suspended dry compass persisted in use well into the 18th century. [ 49 ] However, according to Kreutz there is only a single Chinese reference to a dry-mounted needle (built into a pivoted wooden tortoise) which is dated to between 1150 and 1250 and claims that there is no clear indication that Chinese mariners ever used anything but the floating needle in a bowl until the 16th century. [ 46 ]
The first recorded use of a 48-position mariner's compass in sea navigation was noted in The Customs of Cambodia by the Yuan dynasty diplomat Zhou Daguan , who described his 1296 voyage from Wenzhou to Angkor Thom in detail: when his ship set sail from Wenzhou, the mariner took a needle direction of the "ding Wei" position, equivalent to 22.5 degrees SW; after they arrived at Baria , the mariner took a "Kun Shen" needle, or 52.5 degrees SW. [ 50 ] Zheng He 's navigation map, also known as the " Mao Kun Map ", contains a large number of detailed "needle records" of Zheng He's expeditions . [ 51 ]
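The arithmetic behind such needle records is worth spelling out: a 48-position rose divides the circle into sectors of 360/48 = 7.5°, which is why both recorded headings are multiples of 7.5°. A minimal Python sketch follows, under our own reading that "X degrees SW" in the passage means X degrees west of due south (so an azimuth of 180 + X); only the two positions attested in the text are included, as the full Chinese naming table is beyond this sketch.

```python
# Sketch of the 48-position rose arithmetic. Assumption (ours): "X degrees SW"
# means X degrees west of due south, i.e. azimuth = 180 + X degrees.
STEP = 360 / 48                  # 7.5 degrees between adjacent positions

recorded_headings = {
    "ding Wei": 180 + 22.5,      # azimuth 202.5 degrees
    "Kun Shen": 180 + 52.5,      # azimuth 232.5 degrees
}

for name, azimuth in recorded_headings.items():
    assert azimuth % STEP == 0.0          # every position is a multiple of 7.5
    print(f"{name}: azimuth {azimuth} deg = position {int(azimuth / STEP)} of 48")
```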
Alexander Neckam reported the use of a magnetic compass for the region of the English Channel in the texts De utensilibus and De naturis rerum , [ 6 ] written between 1187 and 1202, after he returned to England from France [ 53 ] and prior to entering the Augustinian abbey at Cirencester. [ 54 ] In his 1863 edition of Neckam 's De naturis rerum , Thomas Wright provides a translation of the passage in which Neckam mentions sailors being guided by a compass' needle:
The sailors, moreover, as they sail over the sea, when in cloudy weather they can no longer profit by the light of the sun, or when the world is wrapped up in the darkness of the shades of night, and they are ignorant to what point of the compass their ship's course is directed, they touch the magnet with a needle, which (the needle) is whirled round in a circle until, when its motion ceases, its point looks direct to the north. [ 55 ]
In 1269 Petrus Peregrinus of Maricourt described a floating compass for astronomical purposes as well as a dry compass for seafaring, in his well-known Epistola de magnete . [ 6 ]
In the Mediterranean, the introduction of the compass, at first only known as a magnetized pointer floating in a bowl of water, [ 56 ] went hand in hand with improvements in dead reckoning methods, and the development of Portolan charts , leading to more navigation during winter months in the second half of the 13th century. [ 57 ] [ 10 ] While the practice from ancient times had been to curtail sea travel between October and April, due in part to the lack of dependable clear skies during the Mediterranean winter, the prolongation of the sailing season resulted in a gradual, but sustained increase in shipping movement; by around 1290 the sailing season could start in late January or February, and end in December. [ 58 ] The additional few months were of considerable economic importance. For instance, it enabled Venetian convoys to make two round trips a year to the Levant , instead of one. [ 59 ]
Between 1295 and 1302, Flavio Gioja converted the compass from a needle floating in water to the form used today: a round box with a compass card, attached to a magnetic element, that rotates through 360 degrees. [ 60 ]
At the same time, traffic between the Mediterranean and northern Europe also increased, with the first evidence of direct commercial voyages from the Mediterranean into the English Channel coming in the closing decades of the 13th century, and one factor may be that the compass made traversal of the Bay of Biscay safer and easier. [ 61 ] However, critics like Kreutz have suggested that it was later in 1410 that anyone really started steering by compass. [ 62 ]
The earliest reference to a compass in the Muslim world occurs in a Persian book of tales from 1232, Jawami ul-Hikayat , [ 24 ] where a compass is used for navigation during a trip in the Red Sea or the Persian Gulf . [ 8 ] The fish-shaped iron leaf described indicates that this early Chinese design had spread outside of China. [ 63 ] The earliest Arabic reference to a compass, in the form of a magnetic needle in a bowl of water, comes from a work by Baylak al-Qibjāqī, written in 1282 while in Cairo. [ 24 ] [ 64 ] Al-Qibjāqī described a needle-and-bowl compass used for navigation on a voyage he took from Syria to Alexandria in 1242. [ 24 ] Since the author describes having witnessed the use of a compass on a ship trip some forty years earlier, some scholars are inclined to antedate its first appearance in the Arab world accordingly. [ 24 ] Al-Qibjāqī also reports that sailors in the Indian Ocean used iron fish instead of needles. [ 65 ]
Late in the 13th century, the Yemeni Sultan and astronomer al-Malik al-Ashraf described the use of the compass as a " Qibla indicator" to find the direction to Mecca . [ 66 ] In a treatise about astrolabes and sundials , al-Ashraf includes several paragraphs on the construction of a compass bowl (ṭāsa). He then uses the compass to determine the north point, the meridian (khaṭṭ niṣf al-nahār), and the Qibla. This is the first mention of a compass in a medieval Islamic scientific text and its earliest known use as a Qibla indicator, although al-Ashraf did not claim to be the first to use it for this purpose. [ 6 ] [ 67 ]
In 1300, an Arabic treatise written by the Egyptian astronomer and muezzin Ibn Simʿūn describes a dry compass used for determining qibla. Like Peregrinus' compass, however, Ibn Simʿūn's compass did not feature a compass card. [ 6 ] In the 14th century, the Syrian astronomer and timekeeper Ibn al-Shatir (1304–1375) invented a timekeeping device incorporating both a universal sundial and a magnetic compass. He invented it for the purpose of finding the times of prayers . [ 68 ] Arab navigators also introduced the 32-point compass rose during this time. [ 69 ] In 1399, an Egyptian source reports two different kinds of magnetic compass. One instrument is a "fish" made of willow wood or pumpkin, into which a magnetic needle is inserted and sealed with tar or wax to prevent the penetration of water. The other instrument is a dry compass. [ 65 ]
In the 15th century, the description given by Ibn Majid while aligning the compass with the pole star indicates that he was aware of magnetic declination . An explicit value for the declination is given by ʿIzz al-Dīn al-Wafāʾī (fl. the 1450s in Cairo). [ 8 ]
Premodern Arabic sources refer to the compass using the term ṭāsa (lit. "bowl") for the floating compass, or ālat al-qiblah ("qibla instrument") for a device used for orienting towards Mecca. [ 8 ]
Friedrich Hirth suggested that Arab and Persian traders, who learned about the polarity of the magnetic needle from the Chinese, applied the compass for navigation before the Chinese did. [ 70 ] However, Needham described this theory as "erroneous" and "it originates because of a mistranslation" of the term chia-ling found in Zhu Yu 's book Pingchow Table Talks . [ 71 ]
The development of the magnetic compass in India is highly uncertain. The compass is mentioned in fourth-century AD Tamil nautical books; moreover, its early name of macchayantra (fish machine) suggests a Chinese origin. In its Indian form, the wet compass often consisted of a fish-shaped magnet floating in a bowl filled with oil. [ 72 ] [ 73 ]
There is evidence that the compass likely also spread from China to eastern Africa by way of trade along the Silk Road, whose terminus lay at East African centres of trade in Somalia and the Swahili city-state kingdoms. [ 74 ] There is evidence that Swahili maritime merchants and sailors acquired the compass at some point and used it for navigation. [ 75 ]
The dry mariner's compass consists of three elements: A freely pivoting needle on a pin enclosed in a little box with a glass cover and a wind rose , whereby "the wind rose or compass card is attached to a magnetized needle in such a manner that when placed on a pivot in a box fastened in line with the keel of the ship the card would turn as the ship changed direction, indicating always what course the ship was on". [ 7 ] Later, compasses were often fitted into a gimbal mounting to reduce grounding of the needle or card when used on the pitching and the rolling deck of a ship.
While pivoting needles in glass boxes had already been described by the French scholar Peter Peregrinus in 1269, [ 76 ] and by the Egyptian scholar Ibn Simʿūn in 1300, [ 6 ] traditionally Flavio Gioja (fl. 1302), an Italian pilot from Amalfi , has been credited with perfecting the sailor's compass by suspending its needle over a compass card, thus giving the compass its familiar appearance. [ 77 ] [ 10 ] Such a compass with the needle attached to a rotating card is also described in a commentary on Dante 's Divine Comedy from 1380, while an earlier source refers to a portable compass in a box (1318), [ 78 ] supporting the notion that the dry compass was known in Europe by then. [ 46 ]
A bearing compass is a magnetic compass mounted in such a way that it allows the taking of bearings of objects by aligning them with the lubber line of the compass. [ 79 ] A surveyor's compass is a specialized compass made to accurately measure the headings of landmarks and to measure horizontal angles for map making . Such instruments were already in common use by the early 18th century and are described in the 1728 Cyclopaedia . The bearing compass was steadily reduced in size and weight to increase portability, resulting in a model that could be carried and operated in one hand. In 1885, a patent was granted for a hand compass fitted with a viewing prism and lens that enabled the user to accurately sight the heading of geographical landmarks, thus creating the prismatic compass . [ 80 ] Another sighting method employed a reflective mirror. First patented in 1902, the Bézard compass consisted of a field compass with a mirror mounted above it. [ 81 ] [ 82 ] This arrangement enabled the user to align the compass with an objective while simultaneously viewing its bearing in the mirror. [ 81 ] [ 83 ]
In 1928, Gunnar Tillander, an unemployed Swedish instrument maker and an avid participant in the sport of orienteering , invented a new style of bearing compass. Dissatisfied with existing field compasses, which required a separate protractor to take bearings from a map, Tillander decided to incorporate both instruments into one, combining a compass with a protractor built into the base. His design featured a metal compass capsule containing a magnetic needle with orienting marks, mounted in a transparent protractor baseplate with a lubber line (later called a direction-of-travel indicator). By rotating the capsule to align the needle with the orienting marks, the course bearing could be read at the lubber line. Moreover, by aligning the baseplate with a course drawn on a map – ignoring the needle – the compass could also function as a protractor. Tillander took his design to fellow orienteers Björn, Alvin, and Alvar Kjellström, who were selling basic compasses, and the four men modified Tillander's design. [ 84 ] In December 1932, the Silva Company was formed by Tillander and the three Kjellström brothers, and the company began manufacturing and selling its Silva orienteering compass to Swedish orienteers, outdoorsmen, and army officers. [ 84 ] [ 85 ] [ 86 ] [ 87 ]
The liquid compass is a design in which the magnetized needle or card is damped by fluid to protect against excessive swing or wobble, improving readability while reducing wear. A rudimentary working model of a liquid compass was introduced by Sir Edmond Halley at a meeting of the Royal Society in 1690. [ 88 ] However, as early liquid compasses were fairly cumbersome, heavy, and subject to damage, their main application was aboard ship. Protected in a binnacle and normally gimbal -mounted, the liquid inside the compass housing effectively damped shock and vibration, while eliminating the excessive swing and grounding of the card caused by the pitch and roll of the vessel. The first liquid mariner's compass believed practicable for limited use was patented by the Englishman Francis Crow in 1813. [ 89 ] [ 90 ] Liquid-damped marine compasses for ships and small boats were occasionally used by the Royal Navy from the 1830s through 1860, but the standard Admiralty compass remained a dry-mount type. [ 91 ] In the latter year, the American physicist and inventor Edward Samuel Ritchie patented a greatly improved liquid marine compass that was adopted in revised form for general use by the United States Navy , and later purchased by the Royal Navy as well. [ 92 ]
Despite these advances, the liquid compass was not introduced generally into the Royal Navy until 1908. An early version developed by RN Captain Creek proved to be operational under heavy gunfire and seas but was felt to lack navigational precision compared with the design by Lord Kelvin. [ 93 ] [ 94 ] However, with ship and gun sizes continuously increasing, the advantages of the liquid compass over the Kelvin compass became unavoidably apparent to the Admiralty, and after widespread adoption by other navies, the liquid compass was generally adopted by the Royal Navy. [ 93 ]
Liquid compasses were next adapted for aircraft. In 1909, Captain F.O. Creagh-Osborne , Superintendent of Compasses at the Admiralty, introduced his Creagh-Osborne aircraft compass, which used a mixture of alcohol and distilled water to damp the compass card. [ 95 ] [ 96 ] After the success of this invention, Capt. Creagh-Osborne adapted his design to a much smaller pocket model [ 97 ] for individual use [ 98 ] by officers of artillery or infantry, receiving a patent in 1915. [ 99 ]
In December 1931, the newly founded Silva Company of Sweden introduced its first baseplate or bearing compass that used a liquid-filled capsule to damp the swing of the magnetized needle. [ 84 ] The liquid-damped Silva took only four seconds for its needle to settle in comparison to thirty seconds for the original version. [ 84 ]
In 1933 Tuomas Vohlonen , a surveyor by profession, applied for a patent for a unique method of filling and sealing a lightweight celluloid compass housing or capsule with a petroleum distillate to dampen the needle and protect it from shock and wear caused by excessive motion. [ 100 ] Introduced in a wrist-mount model in 1936 as the Suunto Oy Model M-311 , the new capsule design led directly to the lightweight liquid-filled field compasses of today. [ 100 ]
The first gyroscope for scientific use was made by the French physicist Léon Foucault (1819–1868) in 1852, who also named the device; this work followed the same line of research that led him to the eponymous pendulum, for which he was awarded the Copley Medal by the Royal Society. The gyrocompass was patented in 1885 by Marinus Gerardus van den Bos in the Netherlands, after continuous spinning was made possible by small electric motors, which were in turn a technological outcome of the discovery of magnetic induction. [ 10 ] Yet only in 1906 was the German inventor Hermann Anschütz-Kaempfe (1872–1931) able to build the first practical gyrocompass. It had two major advantages over magnetic compasses: it indicated true north and was unaffected by ferromagnetic materials, such as the steel hulls of ships. Thus, it was widely used in the warships of World War I and in modern aircraft. [ 101 ]
Three compasses meant for establishing the meridian were described by Peter Peregrinus in 1269 (referring to experiments made before 1248). [ 102 ] Late in the 13th century, al-Malik al-Ashraf of Yemen wrote a treatise on astrolabes, which included instructions and diagrams on using the compass to determine the meridian (khaṭṭ niṣf al-nahār) and the Qibla . [ 6 ] In 1300, a treatise written by the Egyptian astronomer and muezzin Ibn Simʿūn describes a dry compass for use as a "Qibla indicator" to find the direction to Mecca . Ibn Simʿūn's compass, however, did not feature a compass card or the familiar glass box. [ 6 ] In the 14th century, the Syrian astronomer and timekeeper Ibn al-Shatir (1304–1375) invented a timekeeping device incorporating both a universal sundial and a magnetic compass, which he invented to find the times of the salat prayers. [ 68 ]
Evidence for the orientation of buildings by means of a magnetic compass can be found in 12th-century Denmark : one fourth of its 570 Romanesque churches are rotated by 5–15 degrees clockwise from true east–west, corresponding to the predominant magnetic declination at the time of their construction. [ 103 ] Most of these churches were built in the 12th century, indicating a fairly common use of magnetic compasses in Europe by then. [ 104 ]
The use of a compass as a direction finder underground was pioneered in the Tuscan mining town of Massa, where floating magnetic needles were employed for tunnelling and for defining the claims of the various mining companies as early as the 13th century. [ 105 ] In the second half of the 15th century, the compass became standard equipment for Tyrolean miners. Shortly afterward, the first detailed treatise dealing with the underground use of compasses was published by the German miner Rülein von Calw (1463–1525). [ 106 ]
A sun compass uses the position of the Sun in the sky to determine the directions of the cardinal points, making allowance for the local latitude and longitude, time of day, equation of time , and so on. At fairly high latitudes, an analog-display watch can be used as a very approximate sun compass. A simple sundial can be used as a much better one. An automatic sun compass developed by Lt. Col. James Allason , a mechanized cavalry officer, was adopted by the British Army in India in 1938 for use in tanks and other armored vehicles where the magnetic field was subject to distortion, affecting the standard-issue prismatic compass. Cloudy skies prohibited its use in European theatres. A copy of the manual is preserved in the Imperial War Museum in London. [ 107 ] | https://en.wikipedia.org/wiki/History_of_the_compass |
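The watch method mentioned above reduces to simple arithmetic: the Sun moves about 15 degrees per hour and sits due south (in the northern hemisphere) at local solar noon. The sketch below is a minimal Python illustration of that approximation; the function name is ours, and the method ignores latitude, longitude, and the equation of time, so it is only as rough as the paragraph above suggests.

def approx_south_bearing(sun_bearing_deg: float, solar_hour: float) -> float:
    # The Sun moves about 15 degrees per hour and is due south at local
    # solar noon (northern hemisphere), so subtracting the hour-angle
    # offset from the Sun's observed bearing approximates true south.
    offset = 15.0 * (solar_hour - 12.0)
    return (sun_bearing_deg - offset) % 360.0

# At 15:00 local solar time the Sun sits roughly 45 degrees west of south;
# observing it at bearing 225 puts south at approximately bearing 180.
print(approx_south_bearing(225.0, 15.0))  # -> 180.0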
The existence of extraterrestrial life is a scientific idea that has been debated for centuries. Initially, the question was purely speculative; in modern times, a limited amount of scientific evidence provides some answers. The idea was first proposed in Ancient Greece , where it was supported by atomists and rejected by Aristotelians . The debate continued during the Middle Ages, when the discussion centered on whether the notion of extraterrestrial life was compatible with the doctrines of Christianity . The Copernican Revolution radically altered humankind's image of the architecture of the cosmos by removing Earth from the center of the universe, which made the concept of extraterrestrial life more plausible. There is still no conclusive evidence of extraterrestrial life, but experts in many different disciplines gather to study the idea under the scientific umbrella of astrobiology .
In the early history of astronomy , the things seen in the night sky were explained as the actions of mythological deities. However, it soon became evident that celestial objects move and behave in regular and predictable patterns, which helped in keeping track of time, tides, and seasons, all crucial for ancient agriculture . Most ancient civilizations had great knowledge of astronomy but used it only for religious and practical needs. Ancient Greek astronomy sought to go beyond that and explain the architecture of the cosmos. [ 1 ]
Thales of Miletus sought to explain the nature of the universe without relying on supernatural explanations, and reasoned that Earth was a flat disk floating on an ocean of water. The idea was not widely accepted even then, but it established the underlying notion that the universe is intrinsically understandable. Greek philosophers did not follow the scientific method but based their ideas on pure thought instead. However, their discussions laid down some principles that would eventually lead to it, such as the rejection of supernatural explanations and the requirement that ideas not be contradicted by observable facts. They also developed geometry , which helped with architecture and other practical tasks, but also with astronomical observations. [ 2 ]
The initial idea of a flat Earth covered by a celestial dome was soon discarded. Thales' student Anaximander proposed a full celestial sphere instead. He also noticed evidence of the curved surface of the world and proposed that the Earth was shaped like a cylinder. Most other Greeks, however, preferred the proposal of Pythagoras that Earth was a perfect sphere, as they associated circles and spheres with mathematical perfection. The model of the celestial sphere works for distant stars, which seem to be at fixed locations in the sky to the naked eye , but the Sun and the Moon move at different speeds, and the other classical planets follow complex paths and vary in their brightness. This was explained by adding other layers to the celestial sphere, as detailed in the Ptolemaic model . Aristarchus of Samos proposed instead that it is Earth that moves around the Sun, which makes it easier to explain the retrograde motion of the classical planets, but this was rejected by other Greeks. They pointed out that if Earth moved, stellar parallax would change the apparent positions of stars in the sky during the year. Although stellar parallax does exist, the stars are far more distant than the Greeks supposed, making the effect too small to notice with the naked eye. [ 3 ]
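A modern back-of-the-envelope check (not a calculation available to the Greeks, and using today's measured distance to the nearest star as an assumed input) shows just how small the effect is. In LaTeX notation, with the distance d in parsecs:

p = \frac{1''}{d/\mathrm{pc}}, \qquad d \approx 1.3\ \mathrm{pc}\ \text{(Proxima Centauri)} \;\Rightarrow\; p \approx \frac{1''}{1.3} \approx 0.77''

Naked-eye angular resolution is roughly 60'' (one arcminute), so even the largest stellar parallax is about two orders of magnitude too small to see without instruments.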
The Greeks also discussed the possible existence of other worlds, but did not consider the planets as such. In their view, the celestial sphere was a part of Earth, and other potential worlds would have celestial spheres of their own. There was consensus that the world was made of the four classical elements : earth, water, fire and air. From there, two opposite ideas emerged: atomists thought that all existence was composed of atoms, small and indivisible pieces of the four elements, while Aristotelians thought that the four elements were exclusive to Earth and that the rest of the universe was made of a fifth one, the Aether . The atomist view would allow the existence of other worlds, as the processes that created Earth might happen elsewhere as well. Although very few of their writings were preserved, it is known that the early atomists Leucippus and Democritus thought that atoms should create other worlds the same way Earth was created. [ 4 ] Epicurus said in his "Letter to Herodotus" that "There are infinite worlds both like and unlike this world of ours... we must believe that in all worlds there are living creatures and plants and other things we see in this world". [ 5 ]
Aristotle and Plato opposed the idea of a plurality of worlds. [ 6 ] Plato reasoned that there could be only a single heaven, and that if there were several worlds the universe would be composite, eventually falling into dissolution and decay. [ 7 ] Aristotle thought that the earth element would tend to fall to the center of the universe and fire to rise away from it; under that logic, the existence of other worlds would not be possible. He also thought that Aether moves in circles, and for that reason the universe could not be spatially infinite. [ 8 ] Aristotle also rejected the plurality of universes, or heavens, arguing that the universe has a Prime Mover that started it all. If there were more than one universe, then there would be more than one Prime Mover, an idea he considered impossible. This position may have been influenced by his theological views as well as his views on physics and cosmology. [ 9 ] He concluded that "The world must be unique... There cannot be several worlds". [ 10 ]
The Greek ideas and debates expanded across the ancient world, beyond Greece. Epicureanism spread across the Roman Empire , with proponents such as Lucretius and his book De rerum natura . [ 11 ] Alexander the Great made a series of military campaigns that expanded the Greek Macedonian Empire to the Middle East, founding the city of Alexandria in Egypt, which would house the Library of Alexandria , eventually destroyed. Baghdad became a hub of learning and trade during the Islamic Golden Age . Many Islamic scholars studied at the Library and cited or translated the works of the Greek authors, which thus did not become completely lost. They were also in contact with Hindu scholars from India, who were in turn influenced by Chinese works and discoveries. Thus, Baghdad created a synthesis of the combined works of Ancient Greece, India, China, and its own scholars. This knowledge spread across the Byzantine Empire and finally returned to Europe when many scholars escaped the fall of Constantinople . [ 12 ]
The views of the atomists fell under religious scrutiny when Christianity became a prominent religion. All Church Fathers who made mention of the idea of the plurality of worlds dismissed it as a heresy . The only exception was Origen , who did not believe in many worlds existing at the same time, but rather in worlds that may exist before and after Earth. He developed this idea to explain God's apparent lack of purpose and activities before creating the world. [ 13 ] Augustine of Hippo rejected this idea, proposing that time only manifests in the motion of the material, which means that there was no time "before" the creation because time itself started with it. [ 14 ] Thomas Aquinas discussed it in his Summa Theologica : according to John 1:10 "the world was made by Him", with "world" in the singular, which would mean only one. A single world would also mean order, in contrast with the plurality of worlds held by the atomists, who believed in chance rather than in an "ordaining wisdom" creating it all. He cited Aristotelian thought in his support; On the Heavens had been translated into Latin by Gerard of Cremona a few years before. He also considered that, as God was only one, he would create only one world to mirror his own perfection. [ 15 ] However, the ideas of Aquinas were condemned by the Condemnation of 1277 : its authors considered that God was being analyzed in too rational a way, and that such reasoning came close to suggesting that God could not do certain things, such as creating infinite worlds. In the following years, several scholars discussed the plurality of worlds and maintained that it was not a theological impossibility, even if they rejected it for other reasons. [ 16 ]
William Vorilong was likely the first author to discuss the death and resurrection of Christ in the context of the plurality of worlds. He reasoned that if there were people on other worlds, they would not be living in sin, because they would not descend from Adam and Eve, but they would still live by virtue of God. He assumed that the death of Christ would surely redeem the people of other worlds just as it did humans on Earth, and did not consider it fitting that God would repeatedly manifest on each different world. [ 17 ]
Nicolaus Copernicus published De revolutionibus orbium coelestium in 1543, kickstarting the Copernican Revolution . This book restored and updated the old idea of Aristarchus that Earth moves around the Sun, written with so much mathematical detail that it could contest the Ptolemaic model. By this time, scientists had noticed several inaccuracies in the Ptolemaic model and were open to revising it, but they largely kept using it because of the huge work involved in recalculating the tables. Copernicus thought that Earth moving around the Sun could provide a simpler explanation for the retrograde motion of the planets, and he calculated the distances of the planets to the Sun. However, he kept the idea of circular orbits and added several composite orbits to explain the errors this caused. Although he correctly displaced the center of the Solar System, this first model turned out to be as inaccurate and as complex as the Ptolemaic one, and it did not gain many supporters in its first decades. [ 18 ] [ 19 ]
A recurring problem for both models was the lack of quality data, as the telescope had not yet been invented and naked-eye observations are highly inaccurate. The Dane Tycho Brahe sought to gather such data by creating huge naked-eye observatories. On his deathbed, he asked his assistant Johannes Kepler to make sense of his observations, so that it would not seem he had lived in vain. Kepler initially kept the circular orbits and eventually found a system that would explain all the data except for a discrepancy of 8 arcminutes in the position of Mars. Because Kepler trusted the accuracy of Tycho's observations, he rejected his own provisional results. Instead, he abandoned circular orbits and tried other shapes. With orbits shaped as ellipses he could explain the recorded motion of all the planets, including Mars' retrograde motion, without using composite circles to do so. He compiled his final results as Kepler's laws of planetary motion . [ 20 ]
However, some scientists had concerns over the new model. Aristotle had once stated that Earth could not move because, if it did, birds, clouds, and falling objects would be left behind. Orbits had to be circular because the heavens had to be perfect and unchanging. And if Earth moved, the stars should show a stellar parallax. Those concerns were addressed by Galileo Galilei . First, he explained that an object in motion stays in motion unless a force stops it, a principle nowadays included in the first of Newton's laws of motion . The idea of heavenly perfection was already being challenged by Tycho's observations: Tycho had observed a supernova , which proved that the heavens do sometimes change. The newly invented telescope also revealed "imperfections" in celestial bodies: the Sun showed sunspots , and the Moon has many features such as craters and mountain ranges. If the heavens were not as perfect as originally considered, then the idea that orbits are not perfect circles was not so questionable. Galileo also discovered the Galilean moons of Jupiter, celestial bodies orbiting another planet, and the phases of Venus . The existence of the Galilean moons refuted the common argument that the Moon could not stay with a moving Earth. As for the stellar parallax, Galileo could not prove that the stars were more distant than estimated, but he found strong evidence suggesting it: a closer look at the Milky Way revealed that it is composed of countless stars. [ 21 ]
Although those discoveries proved that Earth was not located at the center of everything, they did not completely prove that it revolves around the Sun; that was fully confirmed only later, with the measurement of stellar aberration and detailed measurements of stellar parallax. However, the idea generated such controversy that Galileo was summoned by the Inquisition and forced to recant his findings. Galileo, who was 70 at the time and probably feared that his life was at stake, did as ordered. It is said that Galileo muttered " Eppur si muove " (Italian for "And yet it moves"), but most historians doubt it, given the consequences Galileo would have faced had he been heard. [ 22 ]
Despite the trial, by 1630 the model of Kepler and the clarifications of Galileo were widely accepted. However, although it was accepted that planets moved in ellipses, it was not clear why they did so. The reason was finally explained by Sir Isaac Newton in his book Philosophiæ Naturalis Principia Mathematica , which described the three laws of motion . [ a ] This book also introduced the law of universal gravitation , which explains all motions in the universe, and used mathematics to show that Kepler's laws of planetary motion are a natural consequence of the laws of gravitation and motion. With this, the geocentric model was completely discarded. [ 22 ]
The Copernican revolution, the period of almost 150 years between Copernicus and Newton, changed science forever. It changed the view of the universe and the place of Earth and humankind in it, shifting Earth from a central position to just one world among many. It also changed the way science works: earlier academics had been willing to give leeway to mistakes and errors of measurement, which the new generations tolerated far less. There was also a stronger emphasis on understanding not only how nature works, but also why it works that way and not another. Mere guesses like atomism, or aesthetic preferences like heavenly perfection, were no longer acceptable: every explanation and assumption now had to be proved before being accepted. [ 23 ]
Although the dispute was not specifically about extraterrestrial life, its outcome reignited that debate. Since the Aristotelians, who had conflicted with the atomists back in ancient Greece, had been proved wrong, many assumed that the atomists were right and that other worlds were just like Earth. However, the only relevant fact established at this time was that the stars and the classical planets are not mere lights but celestial objects analogous to Earth, and that life on them might be plausible, if still unknown. [ 24 ] The idea of extraterrestrial life, once a radical notion held by a small and specific group, became an accepted idea discussed in college classrooms. The change was also made possible by the shifts in religious and philosophical thinking that took place at the time. [ 25 ]
Besides these facts, there was much speculation. Galileo mistook the lunar maria for seas. Kepler claimed that the Moon has an atmosphere and intelligent inhabitants, even writing a science-fiction story about them. The Dominican philosopher Giordano Bruno accepted the existence of extraterrestrial life, which became one of the charges leveled against him by the Inquisition, leading to his execution. [ 24 ]
The study of astronomy continued after Newton, and later technological devices and mathematical models allowed the study of objects that were undreamt of at the time. Although no actual extraterrestrial life has been found, either in the Solar System or elsewhere, science currently has a far greater understanding of the context of such life or its absence. Biology studies the nature of life, and chemistry and biochemistry the way it works. Chemistry and biochemistry also help to understand abiogenesis , the process by which life can be generated from non-living matter, which is not yet completely understood. Physics in general and planetary science in particular help to understand the conditions at places other than Earth and how they can be more beneficial or harmful to life. All those sciences are collectively studied under the umbrella science of astrobiology . [ 26 ]
Most knowledge of astronomy is relevant in some way to the discussion of extraterrestrial life, but three main tenets stand out. First, the universe is incredibly vast and old. Second, the elements that make up life on Earth are plentiful. Third, the laws that rule matter are the same across the universe. As a result, it can be reasoned that there is nothing special about Earth, and that life on other worlds should be plausible. [ 27 ] | https://en.wikipedia.org/wiki/History_of_the_extraterrestrial_life_debate |
The history of the graphical user interface , understood as the use of graphic icons and a pointing device to control a computer , covers a five-decade span of incremental refinements, built on some constant core principles. Several vendors have created their own windowing systems based on independent code , but with basic elements in common that define the WIMP "window, icon, menu and pointing device" paradigm.
There have been important technological achievements and incremental enhancements to the general interaction over previous systems. There have been a few significant breakthroughs in terms of use, but the same organizational metaphors and interaction idioms are still in use. Desktop computers are often controlled by computer mice and/or keyboards, while laptops often have a pointing stick or touchpad , and smartphones and tablet computers have a touchscreen . The influence of game computers and joystick operation is omitted here.
Early dynamic information devices such as radar displays, where input devices were used for direct control of computer-created data, set the basis for later improvements of graphical interfaces. [ 2 ] Some early cathode-ray-tube (CRT) screens used a light pen , rather than a mouse, as the pointing device.
The concept of a multi-panel windowing system was introduced by the first real-time graphic display systems for computers: the SAGE Project and Ivan Sutherland 's Sketchpad . [ citation needed ]
In the 1960s, Douglas Engelbart 's Augmentation of Human Intellect project at the Augmentation Research Center at SRI International in Menlo Park, California developed the oN-Line System (NLS). [ 3 ] This computer incorporated a mouse-driven cursor and multiple windows used to work on hypertext . Engelbart had been inspired, in part, by the memex desk-based information machine suggested by Vannevar Bush in 1945.
Much of the early research was based on how young children learn. So, the design was based on the childlike characteristics of hand–eye coordination , rather than use of command languages , user-defined macro procedures, or automated transformation of data as later used by adult professionals.
Engelbart publicly demonstrated this work at the Association for Computing Machinery / Institute of Electrical and Electronics Engineers (ACM/IEEE)—Computer Society's Fall Joint Computer Conference in San Francisco on December 9, 1968, in what came to be called The Mother of All Demos . [ 4 ]
The development of computers having multiple overlapping and resizable windows on a "desktop" is commonly, and incorrectly, attributed to Xerox PARC and its Alto . The Xerox Alto's windowing system was inspired by the DNLS (Display NLS)'s overlapping multi-windowing system, which was operational by early 1973 and used at several ARPA locations. [ 6 ] In the DNLS, overlapping windows were referred to as "display areas" and could store multiple lines of strings.
In 1971, the screen could only be split into two display areas, vertically or horizontally; by early 1973, the full overlapping windowing system was implemented, and was capable of displaying on an Imlac PDS-1 . [ 7 ] [ 6 ] The Xerox Alto greatly improved upon this system by adding the capability to display bitmapped images, buttons, and other graphics in these windows, as opposed to the DNLS's overlapping display areas which could only display strings of text.
Engelbart's work directly led to the advances at Xerox PARC . Several people went from SRI to Xerox PARC in the early 1970s.
In 1973, Xerox PARC developed the Alto personal computer. It had a bitmapped screen, and was the first computer to demonstrate the desktop metaphor and graphical user interface (GUI). Several thousand units were built and were heavily used at PARC, as well as other XEROX offices, and at several universities for many years. The Alto greatly influenced the design of personal computers during the late 1970s and early 1980s, notably the Three Rivers PERQ , the Apple Lisa and Macintosh , and the first Sun workstations.
The modern WIMP GUI was first developed at Xerox PARC by Alan Kay , Larry Tesler , Dan Ingalls , David Smith , Clarence Ellis and a number of other researchers. This was introduced in the Smalltalk programming environment. It used windows , icons , and menus (including the first fixed drop-down menu) to support commands such as opening files, deleting files, moving files, etc. In 1974, work began at PARC on Gypsy, the first bitmap What-You-See-Is-What-You-Get ( WYSIWYG ) cut and paste editor. In 1975, Xerox engineers demonstrated a graphical user interface "including icons and the first use of pop-up menus". [ 8 ]
In 1981 Xerox introduced a pioneering product, Star , a workstation incorporating many of PARC's innovations. Although not commercially successful, Star greatly influenced future developments, for example at Apple , Microsoft and Sun Microsystems . [ 9 ]
Released by digital imaging company Quantel in 1981, the Paintbox was a color graphical workstation with support for mouse input, though oriented more toward graphics tablets ; the model was also notable as one of the first systems to implement pop-up menus . [ 10 ]
The Blit , a graphics terminal, was developed at Bell Labs in 1982.
Lisp machines , originally developed at MIT and later commercialized by Symbolics and other manufacturers, were early high-end single-user computer workstations with advanced graphical user interfaces, windowing, and a mouse as an input device. The first workstations from Symbolics came to market in 1981, with more advanced designs in the subsequent years.
Beginning in 1979, started by Steve Jobs and led by Jef Raskin , the Apple Lisa and Macintosh teams at Apple Computer (which included former members of the Xerox PARC group) continued to develop such ideas. The Lisa, released in 1983, featured a document-centric graphical interface atop an advanced hard-disk-based OS that featured such things as preemptive multitasking and graphically oriented inter-process communication . The comparatively simplified Macintosh, released in 1984 and designed to be lower in cost, was the first commercially successful product to use a multi-panel window interface. A desktop metaphor was used, in which files looked like pieces of paper, file directories looked like file folders, there was a set of desk accessories like a calculator, notepad, and alarm clock that the user could place around the screen as desired, and the user could delete files and folders by dragging them to a trash-can icon on the screen. The Macintosh, in contrast to the Lisa, used a program-centric rather than document-centric design. Apple revisited the document-centric design, in a limited manner, much later with OpenDoc .
There is still some controversy over the amount of influence that Xerox's PARC work, as opposed to previous academic research, had on the GUIs of the Apple Lisa and Macintosh; it is clear, however, that the influence was extensive, because the first versions of the Lisa GUI even lacked icons. [ 11 ] [ 12 ] These prototype GUIs were at least mouse-driven but completely ignored the WIMP ("window, icon, menu, pointing device") concept. Screenshots of the first GUIs of the Apple Lisa prototypes show the early designs. Apple engineers visited the PARC facilities (Apple secured the rights for the visit by compensating Xerox with a pre-IPO purchase of Apple stock), and a number of PARC employees subsequently moved to Apple to work on the Lisa and Macintosh GUI. However, the Apple work extended PARC's considerably, adding manipulatable icons and drag-and-drop manipulation of objects in the file system (see Macintosh Finder ), for example. A list of the improvements made by Apple, beyond the PARC interface, can be read at Folklore.org. [ 13 ] Jef Raskin warned that many of the reported facts in the history of the PARC and Macintosh development are inaccurate, distorted or even fabricated, due to historians' lack of use of direct primary sources. [ 14 ]
In 1984, Apple released a television commercial introducing the Apple Macintosh during the telecast of Super Bowl XVIII by CBS , [ 15 ] with allusions to George Orwell 's noted novel Nineteen Eighty-Four . The commercial was aimed at making people think about computers, presenting the user-friendly interface of a personal computer as a departure from previous business-oriented systems, [ 16 ] and it became a signature representation of Apple products. [ 17 ]
In 1986, the Apple II GS was launched with a 16-bit CPU and significantly improved graphics and audio. It shipped with a new operating system, Apple GS/OS , with a Finder -like GUI similar to the Macintosh series.
The Soviet-produced Agat PC, released in 1983, featured a graphical interface and a mouse. [ 18 ]
Founded in 1982, SGI introduced the IRIS 1000 Series [ 19 ] in 1983. [ 20 ] The first graphical terminals (IRIS 1000) shipped in late 1983, and the corresponding workstation model (IRIS 1400) was released in mid-1984. The machines used an early version of the MEX windowing system on top of the GL2 Release 1 operating environment. [ 21 ] Examples of the MEX user interface can be seen in a 1988 article in the journal Computer Graphics, [ 22 ] while earlier screenshots cannot be found. Among the first commercial GUI-based systems, these did not find widespread use due to their (discounted) academic list prices of $22,500 and $35,700 for the IRIS 1000 and IRIS 1400, respectively. [ 20 ] However, these systems were commercially successful enough to start SGI's business as one of the main graphical workstation vendors. In later revisions of its graphical workstations, SGI switched to the X Window System , which had been under development at MIT since 1984 and which became the standard for UNIX workstations.
VisiCorp 's Visi On was a GUI designed to run on DOS for IBM PCs. It was released in December 1983. Visi On had many features of a modern GUI, and included a few that did not become common until many years later. It was fully mouse-driven, used a bit-mapped display for both text and graphics, included on-line help, and allowed the user to open a number of programs at once, each in its own window, and switch between them to multitask. [ 23 ] Visi On did not, however, include a graphical file manager. Visi On also demanded a hard drive in order to implement its virtual memory system used for "fast switching", at a time when hard drives were very expensive.
Digital Research (DRI) created GEM as an add-on program for personal computers. GEM was developed to work with existing CP/M and MS-DOS compatible operating systems on business computers such as IBM PC compatibles . It was developed from DRI software, known as GSX, designed by a former PARC employee. Its similarity to the Macintosh desktop led to a copyright lawsuit from Apple Computer , and a settlement which involved some changes to GEM. This was to be the first of a series of " look and feel " lawsuits related to GUI design in the 1980s.
GEM received widespread use in the consumer market from 1985, when it was made the default user interface built into the Atari TOS operating system of the Atari ST line of personal computers. It was also bundled by other computer manufacturers and distributors, such as Amstrad . Later, it was distributed with the best-selling Digital Research version of DOS for IBM PC compatibles, DR-DOS 6.0. The GEM desktop faded from the market with the withdrawal of the Atari ST line in 1992 and with the rising popularity of Microsoft Windows 3.0 on the PC front around the same period. The Falcon030, released in 1993, was the last computer from Atari to use GEM.
Tandy's DeskMate appeared in the early 1980s on its TRS-80 machines and was ported to its Tandy 1000 range in 1984. Like most PC GUIs of the time, it depended on a disk operating system such as TRSDOS or MS-DOS . The application was popular at the time and included a number of programs like Draw, Text and Calendar, as well as attracting third-party applications such as Lotus 1-2-3 for DeskMate.
MSX-View was developed for MSX computers by ASCII Corporation and HAL Laboratory . MSX-View contains software such as Page Edit, Page View, Page Link, VShell, VTed, VPaint and VDraw. An external version of the MSX-View built into the Panasonic FS-A1GT was released as an add-on for the Panasonic FS-A1ST, supplied on disk instead of the 512 KB ROM disk.
The Amiga computer was launched by Commodore in 1985 with a GUI called Workbench . Workbench was based on an internal engine developed mostly by RJ Mical , called Intuition , which drove all the input events. The first versions used a blue/orange/white/black default palette, which was selected for high contrast on televisions and composite monitors . Workbench presented directories as drawers to fit in with the " workbench " theme. Intuition was the widget and graphics library that made the GUI work. It was driven by user events through the mouse, keyboard, and other input devices.
Due to a mistake made by the Commodore sales department, the first floppies of AmigaOS (released with the Amiga 1000) named the whole OS "Workbench". Since then, users and CBM itself used "Workbench" as the nickname for the whole AmigaOS (including Amiga DOS, Extras, etc.). This common usage ended with the release of version 2.0 of AmigaOS , which re-introduced proper names on the installation floppies for AmigaDOS , Workbench, Extras, etc.
Starting with Workbench 1.0, AmigaOS treated the Workbench as a backdrop: a borderless window sitting atop a blank screen. With the introduction of AmigaOS 2.0, however, the user could select through a menu item whether the main Workbench window appeared as a normally layered window, complete with a border and scrollbars.
Amiga users were able to boot their computer into a command-line interface (also known as the CLI or Amiga Shell), a keyboard-based environment without the Workbench GUI. From there, they could invoke the GUI with the CLI/Shell command "LoadWB", which loaded the Workbench.
One major difference from other OSes of the time (and for some time after) was the Amiga's fully multitasking operating system , a powerful built-in animation system using a hardware blitter and copper , and four channels of 26 kHz 8-bit sampled sound. This made the Amiga the first multimedia computer, years ahead of other systems.
Like most GUIs of the day, Amiga's Intuition followed Xerox's, and sometimes Apple's, lead, but a CLI was included which dramatically extended the functionality of the platform. The CLI/Shell of the Amiga is not just a simple text-based interface as in MS-DOS , but another graphical process driven by Intuition, using the same gadgets included in Amiga's graphics.library. The CLI/Shell interface integrates itself with the Workbench, sharing privileges with the GUI.
The Amiga Workbench evolved over the 1990s, even after Commodore's 1994 bankruptcy.
Acorn's 8-bit BBC Master Compact shipped with Acorn's first public GUI in 1986. [ 24 ] Little commercial software, beyond that included on the Welcome disk, was ever made available for the system, despite the claim by Acorn at the time that "the major software houses have worked with Acorn to make over 100 titles available on compilation discs at launch". [ 25 ] The most avid supporter of the Master Compact appeared to be Superior Software , which produced and specifically labelled its games as 'Master Compact' compatible.
RISC OS /ˌrɪskoʊˈɛs/ [ 26 ] is a series of graphical user interface -based computer operating systems (OSes) designed for ARM architecture systems. It takes its name from the RISC ( reduced instruction set computer ) architecture it supports. The OS was originally developed by Acorn Computers for use with its 1987 range of Archimedes personal computers using the Acorn RISC Machine (ARM) processors. It comprises a command-line interface and a desktop environment with a windowing system .
Originally branded as Arthur 1.20, the subsequent Arthur 2 release was shipped under the name RISC OS 2.
The WIMP interface incorporates three mouse buttons (named Select , Menu and Adjust ), context-sensitive menus, window stack control (i.e. send to back) and dynamic window focus (a window can have input focus at any position in the stack). The Icon bar ( Dock ) holds icons which represent mounted disc drives, RAM discs, network directories, running applications, system utilities, and docked files, directories or inactive applications. These icons and open windows have context-sensitive menus and support drag-and-drop behaviour. They represent the running application as a whole, irrespective of whether it has open windows.
The application has control of its context-sensitive menus; inapplicable menu choices can be 'greyed out' to make them unavailable. Menus have their own titles and may be moved around the desktop by the user. Any menu can lead to further sub-menus or open a new window for complicated choices.
The GUI is centered on the concept of files. The Filer displays the contents of a disc. Applications are run from the Filer view, and files can be dragged from applications to the Filer view to perform saves; dragging in the opposite direction performs a load. With the applications' co-operation, data can be copied or moved directly between applications by saving (dragging) into another application.
Application directories are used to store applications. The OS differentiates them from normal directories through the use of a pling (exclamation mark, also called shriek) prefix. Double-clicking on such a directory launches the application rather than opening the directory. The application's executable files and resources are contained within the directory, but normally they remain hidden from the user. Because applications are self-contained, this allows drag-and-drop installation and removal.
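As an illustration of the rule described above, here is a minimal sketch in Python of how a Filer-like file manager might special-case application directories; the helper functions are hypothetical stand-ins, not RISC OS APIs.

from pathlib import Path

def launch_app(app_dir: Path) -> None:
    # Hypothetical stand-in for running the application's hidden startup file.
    print(f"launching application {app_dir.name}")

def open_directory(directory: Path) -> None:
    # Hypothetical stand-in for opening a normal directory view.
    print(f"opening directory {directory.name}")

def open_item(path: Path) -> None:
    # A leading '!' (the "pling") marks an application directory, so a
    # double-click launches the application instead of opening the directory.
    if path.is_dir() and path.name.startswith("!"):
        launch_app(path)
    else:
        open_directory(path)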
Files are normally typed. RISC OS has some predefined types. Applications can supplement the set of known types. Double-clicking a file with a known type will launch the appropriate application to load the file.
The RISC OS Style Guide encourages a consistent look and feel across applications. This was introduced in RISC OS 3 and specifies application appearance and behaviour. Acorn's own main bundled applications were not updated to comply with the guide until RISCOS Ltd 's Select release in 2001. [ 27 ]
The outline fonts manager provides spatial anti-aliasing of fonts, the OS being the first operating system to include such a feature, [ 28 ] [ 29 ] [ 30 ] [ 31 ] having included it since before January 1989. [ 32 ] Since 1994, in RISC OS 3.5, it has been possible to use an outline anti-aliased font in the WindowManager for UI elements, rather than the bitmap system font from previous versions. [ 33 ]
Because most of the very early IBM PCs and compatibles lacked any common true graphical capability (they used the 80-column basic text mode compatible with the original MDA display adapter), a series of file managers arose, including Microsoft 's DOS Shell , which featured typical GUI elements such as menus, push buttons, lists with scrollbars and a mouse pointer. The term text-based user interface was later coined for this kind of interface. Many MS-DOS text-mode applications, like the default text editor for MS-DOS 5.0 (and related tools, like QBasic ), followed the same philosophy. The IBM DOS Shell included with IBM DOS 5.0 (circa 1992) supported both text display modes and actual graphics display modes, making it both a TUI and a GUI, depending on the chosen mode.
Advanced file managers for MS-DOS were able to redefine character shapes with EGA and better display adapters, giving some basic low-resolution icons and graphical interface elements, including an arrow (instead of a coloured cell block) for the mouse pointer. When the display adapter lacked the ability to change the characters' shapes, they defaulted to the CP437 character set found in the adapter's ROM . Some popular utility suites for MS-DOS, such as Norton Utilities and PC Tools , used these techniques as well.
DESQview was a text mode multitasking program introduced in July 1985. Running on top of MS-DOS , it allowed users to run multiple DOS programs concurrently in windows. It was the first program to bring multitasking and windowing capabilities to a DOS environment in which existing DOS programs could be used. DESQview was not a true GUI but offered certain components of one, such as resizable, overlapping windows and mouse pointing.
Before the MS-Windows age, and with the lack of a true common GUI under MS-DOS, most graphical applications which worked with EGA , VGA and better graphic cards had proprietary built-in GUIs. One of the best known such graphical applications was Deluxe Paint , a popular painting software with a typical WIMP interface.
The original Adobe Acrobat Reader executable file for MS-DOS was able to run on both the standard Windows 3.x GUI and the standard DOS command prompt. When it was launched from the command prompt, on a machine with a VGA graphics card, it provided its own GUI.
Windows 1.0 , a GUI for the MS-DOS operating system , was released in 1985. [ 34 ] The market's response was less than stellar. [ 35 ] Windows 2.0 followed, but it wasn't until the 1990 launch of Windows 3.0 , based on Common User Access , that its popularity truly exploded. The GUI has seen minor redesigns since, mainly the networking-enabled Windows 3.11 and its Win32s 32-bit patch. The 16-bit line of MS Windows was discontinued with the introduction of Windows 95 and the 32-bit Windows NT architecture in the 1990s.
The main window of a given application can occupy the full screen in maximized status. The user must then switch between maximized applications using the Alt+Tab keyboard shortcut; there is no mouse alternative except de-maximizing. When none of the running application windows is maximized, switching can be done by clicking on a partially visible window, as is common in other GUIs.
In 1988, Apple sued Microsoft for copyright infringement of the Lisa and Apple Macintosh GUI. The court case lasted 4 years before almost all of Apple's claims were denied on a contractual technicality. Subsequent appeals by Apple were also denied. Microsoft and Apple apparently entered a final, private settlement of the matter in 1997.
GEOS was launched in 1986, originally written for the 8-bit home computer Commodore 64 , and shortly after, the Apple II . The name was later used by the company as PC/Geos for IBM PC systems, then Geoworks Ensemble. It came with several application programs like a calendar and word processor. A cut-down version served as the basis for America Online 's MS-DOS client. Compared to the competing Windows 3.0 GUI, it could run reasonably well on simpler hardware, but its developer had a restrictive policy towards third-party developers that prevented it from becoming a serious competitor. Additionally, it was targeted at 8-bit machines, whilst the 16-bit computer age was dawning.
The standard windowing system in the Unix world is the X Window System (commonly X11 or X), first released in the mid-1980s. The W Window System (1983) was the precursor to X; X was developed at MIT as Project Athena . Its original purpose was to allow users of the newly emerging graphic terminals to access remote graphics workstations without regard to the workstation's operating system or the hardware. Due largely to the availability of the source code used to write X, it has become the standard layer for management of graphical and input/output devices and for the building of both local and remote graphical interfaces on virtually all Unix, Linux and other Unix-like operating systems, with the notable exceptions of macOS and Android .
X allows a graphical terminal user to make use of remote resources on the network as if they were all located locally to the user by running a single module of software called the X server. The software running on the remote machine is called the client application. X's network transparency protocols allow the display and input portions of any application to be separated from the remainder of the application and 'served up' to any of a large number of remote users. X is available today as free software .
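As a small illustration of this client–server split (assuming the third-party python-xlib package, which is not mentioned in the article), the same client code can drive a local or a remote display simply by changing the DISPLAY environment variable:

from Xlib import display  # third-party python-xlib package

# Connects to whichever X server the DISPLAY environment variable names;
# e.g. DISPLAY=remotehost:0 makes this client draw on a remote display,
# which is the network transparency described above.
d = display.Display()
screen = d.screen()
print(screen.width_in_pixels, screen.height_in_pixels)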
The PostScript -based NeWS (Network extensible Window System) was developed by Sun Microsystems in the mid-1980s. For several years SunOS included a window system combining NeWS and the X Window System . Although NeWS was considered technically elegant by some commentators, Sun eventually dropped the product. Unlike X, NeWS was always proprietary software .
The widespread adoption of the PC platform in homes and small businesses popularized computers among people with no formal training. This created a fast-growing market, opening an opportunity for the commercial exploitation of easy-to-use interfaces and making economically viable the incremental refinement of the existing GUIs for home systems.
The spread of high-color and true-color display adapters providing thousands and millions of colors , along with faster CPUs, accelerated graphics cards, cheaper RAM , storage devices orders of magnitude larger (from megabytes to gigabytes ) and larger telecom bandwidth at lower cost, helped to create an environment in which the common user was able to run complicated GUIs that began to favor aesthetics.
After Windows 3.11, Microsoft started development on a new consumer-oriented version of the operating system. Windows 95 was intended to integrate Microsoft's formerly separate MS-DOS and Windows products and included an enhanced version of DOS, often referred to as MS-DOS 7.0. It also featured a significant redesign of the GUI, dubbed "Cairo". While Cairo never really materialized, parts of it found their way into subsequent versions of the operating system, starting with Windows 95. Both Win95 and WinNT could run 32-bit applications and could exploit the abilities of the Intel 80386 CPU , such as preemptive multitasking and up to 4 GiB of linear address space. Windows 95 was touted as a 32-bit operating system, but it was actually based on a hybrid kernel (VWIN32.VXD) with the 16-bit user interface (USER.EXE) and graphics device interface (GDI.EXE) of Windows for Workgroups 3.11, whose 16-bit kernel components were complemented by a 32-bit subsystem (USER32.DLL and GDI32.DLL), allowing it to run native 16-bit as well as 32-bit applications. In the marketplace, Windows 95 was an unqualified success, promoting a general upgrade to 32-bit technology, and within a year or two of its release it had become the most successful operating system ever produced.
Accompanied by an extensive marketing campaign , [ 36 ] Windows 95 was a major success in the marketplace at launch and shortly became the most popular desktop operating system. [ 37 ]
Windows 95 saw the beginning of the browser wars , when the World Wide Web began receiving a great deal of attention in popular culture and mass media. Microsoft at first did not see potential in the Web, and Windows 95 was shipped with Microsoft's own online service called The Microsoft Network , which was dial-up only and was used primarily for its own content, not internet access. As versions of Netscape Navigator and Internet Explorer were released at a rapid pace over the following few years, Microsoft used its desktop dominance to push its browser and shape the ecology of the web mainly as a monoculture .
Windows 95 evolved through the years into Windows 98 and Windows ME . Windows ME was the last in the line of the Windows 3.x-based operating systems from Microsoft. Windows underwent a parallel 32-bit evolutionary path, where Windows NT 3.1 was released in 1993. Windows NT (for New Technology) [ 38 ] was a native 32-bit operating system with a new driver model, was Unicode-based, and provided true separation between applications. Windows NT also supported 16-bit applications in an NTVDM, but it did not support VxD-based drivers. Windows 95 was supposed to be released before 1993 as the predecessor to Windows NT, the idea being to promote the development of 32-bit applications with backward compatibility, leading the way for a more successful NT release. After multiple delays, Windows 95 was released without Unicode support and used the VxD driver model. Windows NT 3.1 evolved into Windows NT 3.5, 3.51 and then 4.0, which finally shared a similar interface with its Windows 9x desktop counterpart and included a Start button. The evolution continued with Windows 2000, Windows XP, Windows Vista, and then Windows 7. Windows XP and later versions were also made available in 64-bit editions. Windows server products branched off with the introduction of Windows Server 2003 (available in 32-bit and 64-bit IA64 or x64 editions), then Windows Server 2008 and Windows Server 2008 R2. Windows 2000 and XP shared the same basic GUI, although XP introduced Visual Styles. With Windows 98, the Active Desktop theme was introduced, allowing an HTML approach for the desktop, but this feature was coldly received by customers, who frequently disabled it. In the end, Windows Vista discontinued it definitively, but put a new Sidebar on the desktop.
The Macintosh's GUI has been revised multiple times since 1984, with major updates including System 7 and Mac OS 8 . It underwent its largest revision to date with the introduction of the " Aqua " interface in 2001's Mac OS X . It was a new operating system built primarily on technology from NeXTSTEP with UI elements of the original Mac OS grafted on. macOS uses a technology known as Quartz , for graphics rendering and drawing on-screen. Some interface features of macOS are inherited from NeXTSTEP (such as the Dock , the automatic wait cursor, or double-buffered windows giving a solid appearance and flicker-free window redraws), while others are inherited from the old Mac OS operating system (the single system-wide menu-bar). Mac OS X 10.3 introduced features to improve usability including Exposé , which is designed to make finding open windows easier.
With Mac OS X 10.4 released in April 2005, [ 39 ] new features were added, including Dashboard (a virtual alternate desktop for mini specific-purpose applications) and a search tool called Spotlight , which provides users with an option for searching through files instead of browsing through folders.
Mac OS X 10.7, released in July 2011, included support for full-screen apps, and OS X 10.11 (El Capitan), released in September 2015, added support for creating a full-screen split view by pressing the green button in the upper-left corner of a window or with the Control+Cmd+F keyboard shortcut.
In the early days of X Window development, Sun Microsystems and AT&T attempted to push for a GUI standard called OPEN LOOK in competition with Motif . OPEN LOOK was developed from scratch in conjunction with Xerox , while Motif was a collective effort. [ 40 ] Motif eventually gained prominence and became the basis for Hewlett-Packard 's Visual User Environment (VUE), which later became the Common Desktop Environment (CDE).
In the late 1990s, there was significant growth in the Unix world, especially among the free software community . New graphical desktop movements grew up around Linux and similar operating systems, based on the X Window System. A new emphasis on providing an integrated and uniform interface to the user brought about new desktop environments, such as KDE Plasma 5 , GNOME and Xfce which have supplanted CDE in popularity on both Unix and Unix-like operating systems. The Xfce, KDE and GNOME look and feel each tend to undergo more rapid change and less codification than the earlier OPEN LOOK and Motif environments.
Later releases added improvements over the original Workbench, like support for high-color Workbench screens, context menus, and embossed 2D icons with pseudo-3D aspect. Some Amiga users preferred alternative interfaces to standard Workbench, such as Directory Opus Magellan.
The use of improved, third-party GUI engines became common amongst users who preferred more attractive interfaces, such as Magic User Interface (MUI) and ReAction . These object-oriented graphics engines, driven by user interface classes and methods, were then standardized into the Amiga environment and changed Amiga Workbench into a complete and modern GUI, with new standard gadgets, animated buttons, true 24-bit-color icons, increased use of wallpapers for screens and windows, alpha channels, transparencies and shadows, as any modern GUI provides.
Modern derivatives of Workbench are Ambient for MorphOS , Scalos, Workbench for AmigaOS 4 and Wanderer for AROS .
A brief article on Ambient, together with descriptions of MUI icons, menus and gadgets, is available at aps.fr (archived September 7, 2005, at the Wayback Machine ), and images of Zune are at the main AROS site.
The use of object-oriented graphics engines dramatically changes the look and feel of a GUI to match actual style guides.
Originally collaboratively developed by Microsoft and IBM to replace DOS, OS/2 version 1.0 (released in 1987) had no GUI at all. Version 1.1 (released 1988) included Presentation Manager (PM), an implementation of IBM Common User Access , which looked a lot like the later Windows 3.1 UI. After the split with Microsoft, IBM developed the Workplace Shell (WPS) for version 2.0 (released in 1992), a quite radical, object-oriented approach to GUIs. Microsoft later imitated much of this look in Windows 95 [ citation needed ] .
The NeXTSTEP user interface was used in the NeXT line of computers. NeXTSTEP's first major version was released in 1989. It used Display PostScript for its graphical underpinning. The NeXTSTEP interface's most significant feature was the Dock , carried with some modification into Mac OS X , and it had other minor interface details that some found made it easier and more intuitive to use than previous GUIs. NeXTSTEP's GUI was the first to feature opaque dragging of windows, on a machine comparatively weak by today's standards, aided by high-performance graphics hardware.
BeOS was developed by a team led by former Apple executive Jean-Louis Gassée as an alternative to Mac OS, initially on custom AT&T Hobbit -based computers before switching to PowerPC hardware. BeOS was later ported to Intel hardware. It used an object-oriented kernel written by Be, and did not use the X Window System but a different GUI written from scratch. Much effort was spent by the developers to make it an efficient platform for multimedia applications. Be Inc. was acquired by PalmSource, Inc. (Palm Inc. at the time) in 2001. [ 41 ] The BeOS GUI still lives on in Haiku , an open-source software reimplementation of BeOS.
General Magic is the apparent parent of all modern touch-screen-based smartphone GUIs, including the iPhone's. In 2007, with the iPhone , [ 42 ] and later in 2010 with the introduction of the iPad , [ 43 ] Apple popularized the post-WIMP style of interaction for multi-touch screens, and those devices are considered milestones in the development of mobile devices . [ 44 ] [ 45 ]
Other portable devices such as MP3 players and cell phones have been a burgeoning area of deployment for GUIs in recent years. Since the mid-2000s, the vast majority of portable devices have advanced to high screen resolutions and large screen sizes (the Galaxy Note 4 's 2,560 × 1,440 pixel display is an example). Because of this, these devices have their own well-known user interfaces and operating systems, with large homebrew communities dedicated to creating their own visual elements, such as icons, menus, and wallpapers. Post-WIMP interfaces are often used in these mobile devices, where the traditional pointing devices required by the desktop metaphor are not practical.
As high-powered graphics hardware draws considerable power and generates significant heat, many of the 3D effects developed between 2000 and 2010 are not practical on this class of device. This has led to the development of simpler interfaces that make a design feature of two-dimensionality, as exhibited by the Metro (Modern) UI first used in Windows 8 and the 2012 Gmail redesign. [ citation needed ] [ dubious – discuss ]
In the first decade of the 21st century, the rapid development of GPUs led to a trend for the inclusion of 3D effects in window management. It is based on experimental research [ citation needed ] in user interface design trying to expand the expressive power of the existing toolkits in order to enhance the physical cues that allow for direct manipulation . New effects common to several projects are scale resizing and zooming, various window transformations and animations (wobbly windows, smooth minimization to the system tray, etc.), composition of images (used for window drop shadows and transparency) and enhancing the global organization of open windows ( zooming to virtual desktops , desktop cube , Exposé , etc.). The proof-of-concept BumpTop desktop combines a physical representation of documents with tools for document classification possible only in the simulated environment, like instant reordering and automated grouping of related documents.
These effects were popularized by the widespread use of 3D video cards (mainly due to gaming), which allow for complex visual processing with low CPU use, using the 3D acceleration in most modern graphics cards to render the application clients in a 3D scene. The application window is drawn off-screen in a pixel buffer, and the graphics card renders it into the 3D scene. [ 46 ]
This can have the advantage of moving some of the window rendering to the GPU on the graphics card, thus reducing the load on the main CPU , but the graphics card must provide the necessary facilities for this to be possible.
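A toy CPU-side model of what a compositor asks the GPU to do: each window is an off-screen RGBA buffer that is alpha-blended, back to front, into the final scene. This sketch (ours, using NumPy, with all windows assumed scene-sized) shows only the blending arithmetic, not the GPU texturing a real compositor uses.

import numpy as np

def composite(windows: list[np.ndarray], width: int, height: int) -> np.ndarray:
    # Blend each window's off-screen RGBA buffer (floats in 0..1) into the
    # scene in back-to-front stacking order: the core compositing step.
    scene = np.zeros((height, width, 3))
    for win in windows:
        rgb, alpha = win[..., :3], win[..., 3:4]
        scene = alpha * rgb + (1.0 - alpha) * scene
    return scene

# Two full-screen "windows": an opaque red one under a half-transparent blue one.
red = np.zeros((4, 4, 4)); red[..., 0] = 1.0; red[..., 3] = 1.0
blue = np.zeros((4, 4, 4)); blue[..., 2] = 1.0; blue[..., 3] = 0.5
print(composite([red, blue], 4, 4)[0, 0])  # -> [0.5 0.  0.5]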
Examples of 3D user-interface software include Xgl and Compiz from Novell , and AIGLX bundled with Red Hat / Fedora . Quartz Extreme for macOS and Windows 7 and Vista 's Aero interface use 3D rendering for shading and transparency effects as well as Exposé and Windows Flip and Flip 3D , respectively. Windows Vista uses Direct3D to accomplish this, whereas the other interfaces use OpenGL .
The notebook interface is widely used in data science and other areas of research. Notebooks allow users to mix text, calculations, and graphs in the same interface, something previously impossible with a command-line interface .
Virtual reality devices such as the Oculus Rift and Sony's PlayStation VR (formerly Project Morpheus) [ 47 ] aim to provide users with presence , a perception of full immersion into a virtual environment.
Saudi Arabian oil was first discovered in commercial quantities by the Americans and British at Dammam oil well No. 7 in 1938, in what is now Dhahran .
On January 15, 1902, Ibn Saud took Riyadh from the Rashid tribe. In 1913, his forces captured the province of al-Hasa from the Ottoman Turks. In 1922, he completed his conquest of the Nejd, and in 1925, he conquered the Hijaz . In 1932, the Kingdom of Saudi Arabia was proclaimed with Ibn Saud as king. [ 2 ] Without stability in the region, the search for oil would have been difficult, as evidenced by early oil exploration in neighbouring countries such as Yemen and Oman. [ 3 ]
Prior to 1938, three main factors triggered the search for oil in Arabia, as described below.
In 1922, Ibn Saud met a New Zealand mining engineer, Major Frank Holmes . During World War I, Holmes had been to Gallipoli and then Ethiopia , where he first heard rumours of the oil seeps of the Persian Gulf region. [ 2 ] He was convinced that much oil would be found throughout the region. After the war, Holmes helped to set up Eastern and General Syndicate Ltd in order, among other things, to seek oil concessions in the region.
In 1923, the king signed a concession with Holmes allowing him to search for oil in eastern Saudi Arabia. Eastern and General Syndicate brought in a Swiss geologist to evaluate the land, but he claimed that searching for oil in Arabia would be “a pure gamble”. [ 2 ] This discouraged the major banks and oil companies from investing in Arabian oil ventures.
In 1925, Holmes signed a concession with the sheikh of Bahrain , allowing him to search for oil there. He then proceeded to the United States to find an oil company that might be interested in taking on the concession. He found help from Gulf Oil . In 1927, Gulf Oil took control of the concessions that Holmes had made years earlier. But Gulf Oil was a partner in the Iraq Petroleum Company , which was jointly owned by Royal Dutch Shell, Anglo-Persian, the Compagnie Française des Pétroles (ancestor of French major TotalEnergies ), and "the Near East Development Company", representing the interests of the American companies. [ 4 ] The partners had signed up to the “ Red Line Agreement ”, which meant that Gulf Oil was precluded from taking up the Bahrain concession without the consent of the other partners, and they declined. [ 2 ] Despite a promising survey in Bahrain, Gulf Oil was forced to transfer its interest to another company, Standard Oil of California (SOCAL), which was not bound by the Red Line Agreement. [ 5 ]
Meanwhile, Ibn Saud had dispatched American mining engineer Karl Twitchell to examine eastern Arabia. Twitchell found encouraging signs of oil, including asphalt seeps in the vicinity of Qatif, but advised the king to await the outcome of the Bahrain No. 1 well before inviting bids for a concession for Al-Ahsa . [ 6 ] To the American engineers working in Bahrain, standing on the Jebel Dukhan and gazing across a twenty-mile (32 km) stretch of the Persian Gulf at the Arabian Peninsula in the clear light of early morning, the outline of the low Dhahran hills in the distance was an obvious oil prospect.
On 31 May 1932, the SOCAL subsidiary, the Bahrain Petroleum Company (BAPCO) struck oil in Bahrain . [ 2 ] The discovery brought fresh impetus to the search for oil on the Arabian peninsula.
Negotiations for an oil concession for al-Hasa province opened at Jeddah in March 1933. Twitchell attended with lawyer Lloyd Hamilton on behalf of SOCAL. The Iraq Petroleum Company, represented by Stephen Longrigg, competed in the bidding, but SOCAL was granted the concession on 23 May 1933. Under the agreement, SOCAL was given “exploration rights to some 930,000 square kilometers of land for 60 years”. [ 2 ] Soon after the agreement, geologists arrived in al-Hasa and the search for oil was underway.
SOCAL set up a subsidiary company, the California Arabian Standard Oil Company (CASOC) to develop the oil concession. SOCAL also joined forces with the Texas Oil Company when together they formed CALTEX in 1936 to take advantage of the latter's formidable marketing network in Africa and Asia.
When CASOC geologists surveyed the concession area, they identified a promising site and named it Dammam No. 7 , after a nearby village. Over the next three years, the drillers were unsuccessful in making a commercial strike, but chief geologist Max Steineke persevered. He urged the team to drill deeper, even when Dammam No. 7 was plagued by cave-ins, stuck drill bits and other problems, before the drillers finally struck oil on 3 March 1938. [ 7 ] This discovery would turn out to be the first of many, eventually revealing the largest source of crude oil in the world. [ 8 ] For the king, oil revenues became a crucial source of wealth, since he no longer had to rely on receipts from pilgrimages to Mecca . This discovery would alter Middle Eastern political relations forever.
Following the success of Dammam No. 7, CASOC sought to expand operations by identifying a second oil field to justify its investments. In early 1939, drilling commenced at Abu Hadriya, 160 kilometers northwest of Dhahran. Geologist Thomas Barger , later Aramco’s CEO (1961–1969), documented the campaign’s high stakes in personal correspondence, noting that failure would undermine confidence in regional exploration. [ 9 ]
The Abu Hadriya well reached depths exceeding 3,000 meters (10,000 feet) by early 1940—twice the depth of Dammam No. 7—but yielded no oil. [ 9 ] Despite arguments to continue drilling, CASOC’s board ordered operations to cease. This decision redirected efforts to Abqaiq , where drilling began in 1941.
In 1943, the name of the company in control in Saudi Arabia was changed to Arabian American Oil Company (ARAMCO). In addition, numerous changes were made to the original concession after the striking of oil. In 1939, the first modification gave the Arabian American Oil Company a greater area to search for oil and extended the concession until 1949, increasing the original deal by six years. In return, ARAMCO agreed to provide the Saudi Arabian government with large amounts of free kerosene and gasoline , and to pay higher payments than originally stipulated.
Beginning in 1950, the Saudi Arabian government began a pattern of trying to increase government shares of revenue from oil production. In 1950, a fifty-fifty profit-sharing agreement was signed, whereby a tax was levied by the government. This tax considerably increased government revenues. The government continued this trend well into the ‘80s. By 1982, ARAMCO’s concession area was reduced to 220,000 square kilometers, down from the original 930,000 square kilometers. By 1988, ARAMCO was officially bought out by Saudi Arabia and became known as Saudi Aramco .
Due to the quantity of the oil in Saudi Arabia, construction of pipelines became necessary to increase efficiency of production and transport. ARAMCO soon realized that “advantages of a pipeline to the Mediterranean Sea seemed obvious, saving about 3,200 kilometers of sea travel and the transit fees of the Suez Canal ”. [ 10 ] In 1945, the Trans-Arabian Pipeline Company (Tapline) was started and was completed in 1950. The pipeline greatly increased efficiency of oil transport, but also had its shortcomings. Issues concerning taxes and damages plagued it for years. It had to be shut down numerous times for repairs, and by 1983 was officially shut down. [ 10 ]
The Yom Kippur War was a conflict between Israel on one side and Egypt and Syria , with their backers, on the other. Because the United States supported Israel, the Arab countries imposed an oil boycott on Canada , Japan , the Netherlands , the United Kingdom , and the United States . [ 11 ] The boycott was later extended to Portugal , Rhodesia , and South Africa . It was one of the major causes of the 1973 energy crisis in the United States. [ 12 ] After the war, the price of oil increased drastically, allowing Saudi Arabia to gain much wealth and power. [ 13 ]
Before 1800 A.D., the iron and steel industry was located where raw materials, power supply and running water were easily available. After 1950, the iron and steel industry began to be located on large areas of flat land near sea ports. The history of the modern steel industry began in the late 1850s. Since then, steel has become a staple of the world's industrial economy. This article addresses only the business, economic and social dimensions of the industry, dating from the start of bulk steel production with Henry Bessemer 's development of the Bessemer converter in 1857. Previously, steel was very expensive to produce, and was used only in small, expensive items, such as knives, swords and armor.
Steel is an alloy composed of between 0.2 and 2.0 percent carbon, with the balance being iron. From prehistory through the creation of the blast furnace , iron was produced from iron ore as wrought iron , 99.82–100 percent Fe , and the process of making steel involved adding carbon to iron, usually in a serendipitous manner, in the forge, or via the cementation process . The introduction of the blast furnace inverted the problem. A blast furnace produces pig iron , an alloy of approximately 90 percent iron and 10 percent carbon. When the process of steel-making starts with pig iron instead of wrought iron , the challenge is to remove enough carbon to bring it down to the 0.2 to 2.0 percent range for steel.
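The scale of that challenge can be quantified with a rough mass balance, a sketch using only the round percentages quoted in this paragraph:

```python
# How much carbon must be removed from one tonne of pig iron
# (about 10% carbon, per the paragraph above) to reach steel's
# range of 0.2-2.0% carbon? A rough mass-balance sketch.

pig_iron_mass = 1000.0             # kg, one tonne of pig iron
carbon_in = 0.10 * pig_iron_mass   # ~100 kg of carbon

for target in (0.02, 0.002):       # 2.0% and 0.2% carbon targets
    # Solve (carbon_in - removed) / (pig_iron_mass - removed) = target
    removed = (carbon_in - target * pig_iron_mass) / (1 - target)
    print(f"target {target:.1%}: remove ~{removed:.0f} kg of carbon")
# -> roughly 82-98 kg of the original ~100 kg must be burned out
```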
Before about 1860, steel was an expensive product, made in small quantities and used mostly for swords, tools and cutlery; all large metal structures were made of wrought or cast iron . Steelmaking was centered in Sheffield and Middlesbrough , Britain, which supplied the European and American markets. The introduction of cheap steel was due to the Bessemer and the open-hearth processes, two technological advances made in England. In the Bessemer process , molten pig iron is converted to steel by blowing air through it after it has been removed from the furnace. The air blast burned the carbon and silicon out of the pig iron, releasing heat and causing the temperature of the molten metal to rise. Henry Bessemer demonstrated the process in 1856 and had a successful operation going by 1864. By 1870 Bessemer steel was widely used for ship plate. By the 1850s, the speed, weight, and quantity of railway traffic was limited by the strength of the wrought iron rails in use. The solution was to turn to steel rails, which the Bessemer process made competitive in price. Experience quickly proved steel had much greater strength and durability and could handle the increasingly heavy and faster engines and cars. [ 1 ]
After 1890 the Bessemer process was gradually supplanted by open-hearth steelmaking, and by the middle of the 20th century it was no longer in use. [ 2 ] The open-hearth process originated in the 1860s in Germany and France. The usual open-hearth process used pig iron, ore, and scrap, and became known as the Siemens-Martin process . The process allowed closer control over the composition of the steel; also, a substantial quantity of scrap could be included in the charge. The crucible process remained important for making high-quality alloy steel into the 20th century. [ 3 ] By 1900 the electric arc furnace was adapted to steelmaking, and by the 1920s the falling cost of electricity allowed it to largely supplant the crucible process for specialty steels. [ 4 ]
Britain led the world's Industrial Revolution with its early commitment to coal mining, steam power, textile mills, machinery, railways, and shipbuilding. Britain's demand for iron and steel, combined with ample capital and energetic entrepreneurs, made it the world leader in the first half of the 19th century. Steel played a vital role during the Industrial Revolution.
In 1875, Britain accounted for 47% of world production of pig iron , a third of which came from the Middlesbrough area, and almost 40% of world steel production. 40% of British output was exported to the U.S., which was rapidly building its rail and industrial infrastructure. Two decades later, in 1896, the British share of world production had plunged to 29% for pig iron and 22.5% for steel, and little was sent to the U.S. The U.S. was now the world leader, and Germany was catching up to Britain. Britain had lost its American market and was losing its role elsewhere; indeed, American products were now underselling British steel in Britain. [ 5 ]
The growth of pig iron output was dramatic. Britain went from 1.3 million tons in 1840 to 6.7 million in 1870 and 10.4 in 1913. The US started from a lower base, but grew faster; from 0.3 million tons in 1840, to 1.7 million in 1870, and 31.5 million in 1913. Germany went from 0.2 million tons in 1859 to 1.6 in 1871 and 19.3 in 1913. France, Belgium, Austria-Hungary, and Russia, combined, went from 2.2 million tons in 1870 to 14.1 million tons in 1913, on the eve of the First World War . During the war the demand for artillery shells and other supplies caused a spurt in output and a diversion to military uses.
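A quick calculation makes the differing growth rates behind these figures explicit, an illustrative sketch using only the tonnages quoted in the paragraph above:

```python
# Compound annual growth rates implied by the pig-iron figures above
# (millions of tons, start year -> 1913).
def cagr(start, end, years):
    return (end / start) ** (1 / years) - 1

series = {
    "Britain 1840-1913": (1.3, 10.4, 73),
    "US 1840-1913":      (0.3, 31.5, 73),
    "Germany 1859-1913": (0.2, 19.3, 54),
}
for country, (start, end, years) in series.items():
    print(f"{country}: {cagr(start, end, years):.1%} per year")
# Britain ~2.9%, the US ~6.6%, Germany ~8.8% -- the late starters grew fastest.
```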
Abé (1996) explores the record of iron and steel firms in Victorian England by analyzing Bolckow Vaughan & Company. It was wedded for too long to obsolescent technology and was a very late adopter of the open hearth furnace method. Abé concludes that the firm—and the British steel industry—suffered from a failure of entrepreneurship and planning. [ 6 ]
Blair (1997) explores the history of the British steel industry since the Second World War to evaluate the impact of government intervention in a market economy. Entrepreneurship was lacking in the 1940s; the government could not persuade the industry to upgrade its plants. For generations the industry had followed a patchwork growth pattern which proved inefficient in the face of world competition. In 1946 the first steel development plan was put into practice with the aim of increasing capacity; the Iron and Steel Act 1949 meant nationalization of the industry in the form of the Iron and Steel Corporation of Great Britain . However, the reforms were dismantled by Conservative Party governments in the 1950s. In 1967, under Labour Party control, the industry was renationalized. But by then twenty years of political manipulation had left companies such as the British Steel Corporation with serious problems: complacency with existing equipment, plants operating under capacity (low efficiency), poor-quality assets, outdated technology, government price controls, higher coal and oil costs, lack of funds for capital improvement, and increasing world market competition. By the 1970s the Labour government's main goal was to keep employment high in the declining industry. Since British Steel was a main employer in depressed regions, it kept many mills and facilities that were operating at a loss. In the 1980s, Conservative Prime Minister Margaret Thatcher re-privatized BSC as British Steel plc .
There were various iron-making ventures during the 19th century , and steel was made, but only on a very small scale.
The first commercial scale production of steel in Australia was by William Sandford Limited at the Eskbank Ironworks at Lithgow, New South Wales , in 1901. The plant became Australia's first integrated iron and steel works in 1907. It was later expanded by Charles Hoskins . The first steel rails rolled in Australia were rolled there in 1911. Between 1928 and 1932, the operations at Lithgow were transferred, under the management of Cecil Hoskins , to a new plant at Port Kembla , still the site of most of Australia's steel production today.
The Minister for Public Works, Arthur Hill Griffith , had consistently advocated for the greater industrialization of Newcastle , then, under William Holman , personally negotiated the establishment of a steelworks with G. D. Delprat of BHP . Griffith was also the architect of the Walsh Island establishment. [ 7 ] [ 8 ]
In 1915, BHP ventured into steel manufacturing with its Newcastle Steelworks , which was closed in 1999. [ 9 ] The 'long products' side of the steel business was spun off to form OneSteel in 2000. [ 10 ] BHP's decision to move from mining ore to open a steelworks at Newcastle was precipitated by the technical limitations in recovering value from mining the 'lower-lying sulphide ores'. [ 11 ] The discovery of Iron Knob and Iron Monarch near the western shore of the Spencer Gulf in South Australia combined with the development by the BHP metallurgist, Archibald Drummond Carmichael , of a technique for 'separating zinc sulphides from the accompanying earth and rock' led BHP 'to implement the startlingly simple and cheap process for liberating vast amounts of valuable metals out of sulphide ores , including huge heaps of tailings and slimes up to' 40 ft (12 m) high. [ 12 ]
The Ruhr Valley provided an excellent location for the German iron and steel industry because of the availability of raw materials, coal, transport, a skilled labor force, nearby markets, and an entrepreneurial spirit that led to the creation of many firms, often in close conjunction with coal mines. By 1850 the Ruhr had 50 iron works with 2,813 full-time employees. The first modern furnace was built in 1849. The unification of Germany in 1871 gave further impetus to rapid growth, as the German Empire started to catch up with Britain. From 1880 to World War I, the industry of the Ruhr area consisted of numerous enterprises, each working on a separate level of production. Mixed enterprises could unite all levels of production through vertical integration, thus lowering production costs. Technological progress brought new advantages as well. These developments set the stage for the creation of combined business concerns. [ 13 ]
The leading firm was Friedrich Krupp AG run by the Krupp family. [ 14 ] [ 15 ] Many diverse, large-scale family firms such as Krupp's reorganized in order to adapt to the changing conditions and meet the economic depression of the 1870s, which reduced the earnings in the German iron and steel industry. Krupp reformed his accounting system to better manage his growing empire, adding a specialized bureau of calculation as well as a bureau for the control of times and wages. The rival firm GHH quickly followed, [ 16 ] as did Thyssen AG , which had been founded by August Thyssen in 1867. Germany became Europe's leading steel-producing nation in the late 19th century, thanks in large part to the protection from American and British competition afforded by tariffs and cartels. [ 17 ]
By 1913 American and German exports dominated the world steel market, and Britain slipped to third place. [ 18 ] German steel production grew explosively from 1 million metric tons in 1885 to 10 million in 1905, peaking at 19 million in 1918. In the 1920s Germany produced about 15 million tons, but output plunged to 6 million in 1933. Under Nazi rule , steel output peaked at 22 million tons in 1940, then dipped to 18 million in 1944 under Allied bombing . [ 19 ] The merger of four major firms into the German Steel Trust (Vereinigte Stahlwerke) in 1926 was modeled on the U.S. Steel corporation in the U.S. The goal was to move beyond the limitations of the old cartel system by incorporating advances simultaneously inside a single corporation. The new company emphasized rationalization of management structures and modernization of the technology; it employed a multi-divisional structure and used return on investment as its measure of success. [ 20 ] It represented the modernization of the German steel industry because of its internal structure, management methods, use of technology, and emphasis on mass production. The chief difference was that consumer capitalism as an industrial strategy did not seem plausible to German steel industrialists. [ 21 ]
In iron and steel and other industries, German firms avoided cut-throat competition and instead relied on trade associations. Germany was a world leader because of its prevailing "corporatist mentality", its strong bureaucratic tradition, and the encouragement of the government. These associations regulated competition and allowed small firms to function in the shadow of much larger companies. [ 22 ]
With the need to rebuild the bombed-out infrastructure after the Second World War , the Marshall Plan (1948–51) enabled West Germany to rebuild and modernize its mills. It produced 3 million tons of steel in 1947, 12 million in 1950, 34 million in 1960 and 46 million in 1970. East Germany produced about a tenth as much. [ 23 ]
The French iron industry lagged behind Britain and Belgium in the early 19th century. [ 24 ] After 1850 it also lagged behind Germany and Luxembourg. The industry comprised too many small, inefficient firms. [ 25 ] Growth in the 20th century was not robust, due more to traditional social and economic attitudes than to inherent geographic, population, or resource factors. Despite a high national income level, the French steel industry remained a laggard. [ 26 ] The industry was based on large supplies of coal and iron ore and was dispersed across the country. The greatest output came in 1929, at 10.4 million metric tons. [ 27 ] The industry suffered sharply during the Great Depression and World War II . Prosperity returned by the mid-1950s, but profits came largely from strong domestic demand rather than competitive capacity. Late modernization delayed the development of powerful unions and collective bargaining. [ 28 ]
In Italy a shortage of coal led the steel industry to specialize in the use of hydro-electrical energy , exploiting ideas pioneered by Ernesto Stassano from 1898 ( Stassano furnace ). Despite periods of innovation (1907–14), growth (1915–18), and consolidation (1918–22), early expectations were only partly realized. Steel output in the 1920s and 1930s averaged about 2.1 million metric tons. Per capita consumption was much lower than the average of Western Europe. [ 29 ] Electrical processes were an important substitute, yet did not improve competitiveness or reduce prices. Instead, they reinforced the dualism of the sector and initiated a vicious circle that prevented market expansion. [ 30 ] Italy modernized its industry in the 1950s and 1960s, and it grew rapidly, becoming second only to West Germany in the 1970s. Strong labour unions kept employment levels high. Troubles multiplied after 1980, however, as foreign competition became stiffer. In 1980 the largest producer, Nuova Italsider (now Ilva ), lost 746 billion lira in its inefficient operations. [ 31 ] In the 1990s the Italian steel industry, then mostly state-owned, was largely privatised. [ 32 ] Today the country is the world's seventh-largest steel exporter. [ 33 ]
From 1875 to 1920 American steel production grew from 380,000 tons to 60 million tons annually, making the U.S. the world leader. The annual growth rates in steel 1870–1913 were 7.0% for the US; 1.0% for Britain; 6.0% for Germany; and 4.3% for France, Belgium, and Russia, the other major producers. [ 34 ] This explosive American growth rested on solid technological foundations and the continuous rapid expansion of urban infrastructures, office buildings, factories, railroads, bridges and other sectors that increasingly demanded steel. The use of steel in automobiles and household appliances came in the 20th century.
Some key elements in the growth of steel production included the easy availability of iron ore, and coal. Iron ore of fair quality was abundant in the eastern states, but the Lake Superior region contained massive deposits of exceedingly rich ore; the Marquette Iron Range was discovered in 1844; operations began in 1846. Other ranges were opened by 1910, including the Menominee, Gogebic , Vermilion, Cuyuna , and, greatest of all, (in 1892) the Mesabi range in Minnesota. This iron ore was shipped through the Great Lakes to ports such as Chicago , Detroit , Cleveland , Erie and Buffalo for shipment by rail to the steel mills. [ 35 ] Abundant coal was available in Pennsylvania , West Virginia , and Ohio . Manpower was short. Few native-born Americans in the United States wanted to work in the mills, but immigrants from Britain and Germany (and later from Eastern Europe ) arrived in great numbers. [ 36 ]
In 1869 iron was already a major industry, accounting for 6.6% of manufacturing employment and 7.8% of manufacturing output. By then the central figure was Andrew Carnegie , [ 37 ] who made Pittsburgh the center of the industry. [ 38 ] He sold his operations to US Steel in 1901, which became the world's largest steel corporation for decades.
In the 1880s, the transition from wrought iron puddling to mass-produced Bessemer steel greatly increased worker productivity. Highly skilled workers remained essential, but the average level of skill declined. Nevertheless, steelworkers earned much more than ironworkers despite the lower skill requirements. Workers in an integrated, synchronized mass-production environment wielded greater strategic power, for the greater cost of mistakes bolstered workers' status. The experience demonstrated that the new technology did not decrease worker bargaining leverage by creating an interchangeable, unskilled workforce. [ 39 ]
In Alabama , industrialization was generating a ravenous appetite for the state's coal and iron ore. Production was booming, and unions were attempting to organize unincarcerated miners. Convicts provided an ideal captive work force: "cheap, usually docile, unable to organize and available when unincarcerated laborers went on strike." [ 40 ] Convict leasing suited the industrial economy better than the Southern agrarian economy, since industrial jobs were often unappealing or dangerous, offering hard labor and low pay. The competition, expansion, and growth of mining and steel companies also created a high demand for labor, but union labor posed a threat to expanding companies. As unions bargained for higher wages and better conditions, often organizing strikes in order to achieve their goals, the growing companies would be forced to agree to union demands or face abrupt halts in production. The rate companies paid for convict leases, which paid the laborer nothing, was regulated by government and state officials who entered the labor contracts with companies. "The companies built their own prisons, fed and clothed the convicts, and supplied guards as they saw fit." ( Blackmon 2001) [ 40 ] Alabama's use of convict leasing was extensive; 51 of its 67 counties regularly leased convicts serving for misdemeanors at a rate of about $5–20 per month, equal to about $160–500 in 2015. [ 41 ] Although the influence of labor unions forced some states to move away from the profitable convict lease agreements and run traditional prisons, many companies continued to substitute convict labor into their operations in the twentieth century. "The biggest user of forced labor in Alabama at the turn of the century was Tennessee Coal, Iron & Railroad Co. , [of] U.S. Steel ." [ 40 ]
Andrew Carnegie, a Scottish immigrant, advanced the cheap and efficient mass production of steel rails for railroad lines by adopting the Bessemer process . After an early career in railroads, Carnegie foresaw the potential for steel to amass vast profits. He asked his cousin, George Lauder , to join him in America from Scotland. Lauder was a leading mechanical engineer who had studied under Lord Kelvin . Lauder devised several new systems for the Carnegie Steel Company , including the process for washing and coking dross from coal mines, which resulted in a significant increase in scale, profits, and enterprise value. [ 42 ]
Carnegie's first mill was the Edgar Thomson Works in Braddock, PA , just outside of Pittsburgh. In 1888, he bought the rival Homestead Steel Works , which included an extensive plant served by tributary coal and iron fields, a 425-mile (685 km) long railway, and a line of lake steamships . He would also add the Duquesne Works to his empire. These three mills on the Monongahela River would make Pittsburgh the steel capital of the world. In the late 1880s, the Carnegie Steel Company was the largest manufacturer of pig iron , steel rails, and coke in the world, with a capacity to produce approximately 2,000 tons of pig iron per day. A consolidation of Carnegie's assets and those of his associates occurred in 1892 with the launching of the Carnegie Steel Company .
Lauder would go on to lead the development of the use of steel in armor and armaments for the Carnegie Steel Company , spending significant time at the Krupp factory in Germany in 1886 before returning to build the massive armor plate mill at the Homestead Steel Works that would revolutionize naval warfare. [ 43 ]
By 1889, the U.S. output of steel exceeded that of Britain, and Andrew Carnegie owned a large part of it. By 1900, the profits of Carnegie Bros. & Company alone stood at $480,000,000 with $225,000,000 being Carnegie's share.
Carnegie, through Keystone, supplied the steel for and owned shares in the landmark Eads Bridge project across the Mississippi River in St. Louis, Missouri (completed 1874). This project was an important proof-of-concept for steel technology which marked the opening of a new steel market.
The Homestead Strike was a violent labor dispute in 1892 that involved an attack by strikers against private security guards. The governor called in the National Guard. The strike failed and the union collapsed. The dispute took place at Carnegie's Homestead Steel Works between the Amalgamated Association of Iron and Steel Workers and the Carnegie Steel Company. The final result was a major defeat for the union and a setback for efforts to unionize steelworkers. [ 44 ]
Carnegie sold all his steel holdings in 1901; they were merged into U.S. Steel , which remained non-union until the late 1930s.
By 1900 the US was the largest producer and also the lowest-cost producer, and demand for steel seemed inexhaustible. Output had tripled since 1890, but customers, not producers, mostly benefitted. Productivity-enhancing technology encouraged faster and faster rates of investment in new plants. However, during recessions, demand fell sharply, taking down output, prices, and profits. Charles M. Schwab of Carnegie Steel proposed a solution: consolidation. Financier J. P. Morgan arranged the buyout of Carnegie and most other major firms, and put Elbert Gary in charge. The massive Gary Works steel mill on Lake Michigan was for many years the largest steel producing facility in the world.
US Steel combined finishing firms (American Tin Plate (controlled by William Henry "Judge" Moore ), American Steel and Wire, and National Tube) with two major integrated companies, Carnegie Steel and Federal Steel. It was capitalized at $1.466 billion, and included 213 manufacturing mills, one thousand miles of railroad, and 41 mines. In 1901, it accounted for 66% of America's steel output, and almost 30% of the world's. During World War I, its annual production exceeded the combined output of all German and Austrian firms.
The Steel Strike of 1919 disrupted the entire industry for months, but the union lost and its membership sharply declined. [ 45 ] Rapid growth of cities made the 1920s boom years. President Harding and social reformers forced U.S. Steel to end the 12-hour day in 1923. [ 46 ]
U.S. Steel's earnings were recorded at $2.650 billion for 2016. [ 47 ]
Charles M. Schwab (1862–1939) and Eugene Grace (1876–1960) made Bethlehem Steel the second-largest American steel company by the 1920s. Schwab had been the operating head of Carnegie Steel and US Steel. In 1903 he purchased the small firm Bethlehem Steel , and in 1916 made Grace president. Innovation was the keynote at a time when U.S. Steel under Judge Elbert Henry Gary moved slowly. Bethlehem concentrated on government contracts, such as ships and naval armor, and on construction beams, especially for skyscrapers and bridges. [ 48 ] Its subsidiary Bethlehem Shipbuilding Corporation operated 15 shipyards in World War II. It produced 1,121 ships, more than any other builder during the war and nearly one-fifth of the U.S. Navy's fleet. Its peak employment was 180,000 workers, out of a company-wide wartime peak of 300,000. After 1945 Bethlehem doubled its steel capacity, a measure of the widespread optimism in the industry. However, the company ignored the new technologies then being developed in Europe and Japan. Seeking labor peace in order to avoid strikes, Bethlehem, like the other majors, agreed to large wage and benefits increases that kept its costs high. After Grace retired, the executives concentrated on short-term profits and postponed innovations, which led to long-term inefficiency. It went bankrupt in 2001. [ 49 ]
Cyrus Eaton (1883–1979) in 1925 purchased the small Trumbull Steel Company of Warren, Ohio , for $18 million. In the late 1920s he purchased undervalued steel and rubber companies. In 1930, Eaton consolidated his steel holdings into the Republic Steel , based in Cleveland; it became the third-largest steel producer in the U.S., after US Steel and Bethlehem Steel. [ 50 ]
The American Federation of Labor (AFL) tried and failed to organize the steelworkers in 1919. Although the strike gained widespread middle-class support because of its demand to end the 12-hour day, the strike failed and unionization was postponed until the late 1930s. The mills ended the 12-hour day in the early 1920s. [ 51 ]
The second surge of unionization came under the auspices of the militant Congress of Industrial Organizations in the late 1930s, when it set up the Steel Workers Organizing Committee . The SWOC focused almost exclusively on achieving a signed contract with "Little Steel" (the major producers except for US Steel). At the grassroots, however, women of the steel auxiliaries, workers on the picket line, and middle-class liberals from across Chicago sought to transform the strike into something larger than a showdown over union recognition. In Chicago, the Little Steel strike raised the possibility that steelworkers might embrace the ‘civic unionism’ that animated the left-led unions of the era. The effort failed, and the strike was lost; the powerful United Steelworkers of America union that eventually resulted suppressed grassroots opinions. [ 52 ]
Integration was the watchword as the various processes were brought together by large corporations, from mining the iron ore to shipping the finished product to wholesalers. The typical steelworks was a giant operation, including blast furnaces, Bessemer converters, open-hearth furnaces, rolling mills, coke ovens and foundries, as well as supporting transportation facilities. The largest ones operated in the region from Chicago to St. Louis to Baltimore , Philadelphia and Buffalo . Smaller operations appeared in Birmingham, Alabama , and in California . [ 53 ]
The industry grew slowly but other industries grew even faster, so that by 1967, as the downward spiral began, steel accounted for 4.4% of manufacturing employment and 4.9% of manufacturing output. After 1970 American steel producers could no longer compete effectively with low-wage producers elsewhere. Imports and local mini-mills undercut sales.
Per-capita steel consumption in the U.S. peaked in 1977, then fell by half before staging a modest recovery to levels well below the peak. [ 54 ]
Most mills were closed. Bethlehem went bankrupt in 2001. In 1984, Republic merged with Jones and Laughlin Steel Company ; the new firm went bankrupt in 2001. US Steel diversified into oil ( Marathon Oil was spun off in 2001). Finally US Steel reemerged in 2002 with plants in three American locations (plus one in Europe) that employed fewer than one-tenth the 168,000 workers of 1902. By 2001 steel accounted for only 0.8% of manufacturing employment and 0.8% of manufacturing output. [ 55 ]
The world steel industry peaked in 2007. That year, ThyssenKrupp spent $12 billion to build the two most modern mills in the world, in Alabama and Brazil. The worldwide great recession starting in 2008, however, with its heavy cutbacks in construction, sharply lowered demand and prices fell 40%. ThyssenKrupp lost $11 billion on its two new plants, which sold steel below the cost of production. Finally in 2013, ThyssenKrupp offered the plants for sale at under $4 billion. [ 56 ]
The President of the United States is authorized to declare each May "Steelmark Month" to recognize the contribution made by the steel industry to the United States. [ 57 ]
Yonekura shows the steel industry was central to the economic development of Japan. The nation's sudden transformation from feudal to modern society in the late nineteenth century, its heavy industrialization and imperialist war ventures in 1900–1945, and the post-World War II high economic growth all depended on iron and steel. The other great Japanese industries, such as shipbuilding , automobiles , and industrial machinery, are closely linked to steel. From 1850 to 1970, the industry increased its crude steel production from virtually nothing to 93.3 million tons (the third largest in the world). [ 58 ]
The government's activist Ministry of International Trade and Industry (MITI) played a major role in coordination. The transfer of technology from the West and the establishment of competitive firms involved far more than buying foreign hardware. MITI located steel mills and organized a domestic market; it sponsored Yawata Steel Company . Japanese engineers and entrepreneurs internally developed the necessary technological and organizational capabilities, planned the transfer and adoption of technology, and gauged demand and sources of raw materials and finances. [ 59 ]
The Bengal Iron Works was founded at Kulti , Bengal , in 1870 and began production in 1874. The Tata Iron and Steel Company (TISCO) was established by Dorabji Tata in 1907, as part of his father's conglomerate. By 1939 it operated the largest steel plant in the British Empire . The company launched a major modernization and expansion program in 1951. [ 60 ]
Prime Minister Jawaharlal Nehru , a believer in socialism, decided that the technological revolution in India needed maximization of steel production. He, therefore, formed a government owned company, Hindustan Steel Limited (HSL) and set up three steel plants in the 1950s. [ 61 ]
The Indian steel industry began expanding into Europe in the 21st century. In January 2007 India's Tata Steel made a successful $11.3 billion offer to buy European steel maker Corus Group . In 2006, Mittal Steel Company (based in London but with Indian management) merged with Arcelor after a takeover bid for $34.3 billion to become ArcelorMittal (based in Luxembourg City ), with 10% of the world's output. [ 62 ]
Communist party Chairman Mao Zedong disdained the cities and put his faith in the Chinese peasantry for a Great Leap Forward . Mao saw steel production as the key to overnight economic modernization, promising that within 15 years China's steel production would surpass that of Britain. In 1958 he decided that steel production would double within the year, using backyard steel furnaces run by inexperienced peasants. The plan was a fiasco, as the small amounts of steel produced were of very poor quality, and the diversion of resources out of agriculture produced a massive famine in 1959–61 that killed millions. [ 63 ]
With economic reforms brought in by Deng Xiaoping , who led China from 1978 to 1992, China began to develop a modern steel industry by building new steel plants and recycling scrap metal from the United States and Europe. As of 2013, China produced 779 million metric tons of steel each year, making it by far the largest steel-producing country in the world, compared to 165 million for the European Union, 110 million for Japan, 87 million for the United States and 81 million for India. [ 64 ] China's 2013 steel production was equivalent to an average of 3.14 cubic meters of steel per second. [ 65 ]
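That cubic-meters-per-second figure can be checked from the annual tonnage, assuming a typical steel density of about 7.85 tonnes per cubic meter (an assumed round value, not from the source):

```python
# Cross-check of the "3.14 cubic meters per second" claim for 2013.
annual_output_t = 779e6            # tonnes of crude steel in 2013
density_t_per_m3 = 7.85            # typical density of steel (assumed)
seconds_per_year = 365.25 * 24 * 3600

volume_per_second = annual_output_t / density_t_per_m3 / seconds_per_year
print(f"{volume_per_second:.2f} m^3 of steel per second")  # ~3.14
```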
The global steel industry has been going through major changes since 1970. China has emerged as a major producer and consumer, as has, to a lesser extent, India. Consolidation has been rapid in Europe . According to the 2019 International Energy Agency (IEA) report, the iron and steel industry directly contributed 2.6 Gt to global CO2 emissions and accounted for 7% of global energy demand. [ 1 ] Singapore is the world's main trading hub for iron, [ 2 ] with about 90% of the world's iron ore derivatives traded on its exchange. [ 3 ]
Global steel production grew enormously in the 20th century, from a mere 28 million tonnes at the beginning of the century to 781 million tonnes at the end. [ 4 ]
Production of crude steel has risen at an astounding rate, reaching 1.691 billion tonnes by 2017.
During the 20th century, the consumption of steel increased at an average annual rate of 3.3%. In 1900, the United States produced 37% of the world's steel; with post-war industrial development in Asia and centralised investment by China, by 2017 China alone accounted for 50%, with Europe (including the former Soviet Union) down to 24% and North America down to 6%.
For details of country-wise steel production, see steel production by country .
Amongst the other newer steel-producing countries, in 2017 South Korea produced 71 million tonnes, nearly double Germany's output, and Brazil 34 million tonnes; output in all three countries has changed little since 2011.
Indian steel production in 2017 was just over 100 million tonnes, up substantially from 70 million tonnes in 2011 and from only 1 million tonnes at the time of independence in 1947. By 1991, when the economy was opened up, steel production had grown to around 14 million tonnes. It then doubled over the next ten years, and is doubling again, perhaps over a slightly longer span.
World steel output flattened at around 1,300 million tonnes from 2007 to 2009 before rising again; the worldwide recession starting in 2008, with its heavy cutbacks in construction, sharply lowered demand, and prices fell 40%.
Showing the impact of that plateau, in 2007 ThyssenKrupp spent $12 billion to build the two most modern mills in the world, situated in Alabama and Brazil. The company lost $11 billion on the new plants, which sold steel below the cost of production. Finally, in 2013, the plants were offered for sale at under $4 billion.
Nowadays, the steel industry is on the verge of a major technological evolution to deal with the huge amounts of CO2 produced in the conventional steelmaking process. The use of blast furnaces and basic oxygen furnaces produces around 1.8 tonnes of CO2 per tonne of steel produced. [ 6 ] In order to reach the climate objectives stated in the Paris Climate Agreement, the European Green Deal , etc., the steel industry will have to implement carbon capture and sequestration or carbon capture and utilization technology, or change to less conventional steelmaking routes such as the electric arc furnace . Other alternatives are the use of biomass, plastic waste or hydrogen as the reducing agent in the blast furnace instead of coal.
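These figures can be roughly cross-checked against the output and emissions numbers quoted earlier in this article; a back-of-the-envelope sketch that assumes, counterfactually, that all steel used the blast-furnace route:

```python
# Rough cross-check of the CO2 figures quoted in this article.
steel_output_gt = 1.691      # global crude steel, 2017 (Gt, from above)
bf_bof_intensity = 1.8       # t CO2 per t steel for the BF-BOF route

upper_bound = steel_output_gt * bf_bof_intensity
print(f"if all steel were BF-BOF: ~{upper_bound:.1f} Gt CO2")  # ~3.0 Gt
print("reported direct contribution: 2.6 Gt")
# The reported figure is lower because some steel is made
# via the less carbon-intensive electric-arc/scrap route.
```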
A modern steel plant employs very few people per tonne, compared to the past. In South Korea, Posco employs 29,648 people to produce 28 million tonnes.
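The Posco figures put a number on that productivity claim (a one-line illustrative calculation):

```python
# Output per employee implied by the Posco figures above.
tonnes = 28_000_000
employees = 29_648
print(f"~{tonnes / employees:.0f} tonnes of steel per employee per year")  # ~944
```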
During the period 1974 to 1999, the steel industry drastically reduced employment all around the world. In the US, it fell from 521,000 to 153,000; in Japan, from 459,000 to 208,000; in Germany, from 232,000 to 78,000; in the UK, from 197,000 to 31,000; in Brazil, from 118,000 to 59,000; and in South Africa, from 100,000 to 54,000. South Korea already had a low figure: only 58,000 in 1999. In total, the steel industry reduced its employment around the world by more than 1,500,000 in 25 years.
A web browser is a software application for retrieving, presenting and traversing information resources on the World Wide Web . It also provides for the capture or input of information, which may be returned to the presenting system and then stored or processed as necessary. A particular page or piece of content is accessed by entering its address, known as a Uniform Resource Identifier or URI. The resource may be a web page , an image, a video, or another piece of content. [ 1 ] Hyperlinks present in resources enable users to navigate their browsers easily to related resources.
A web browser can also be defined as an application software or program designed to enable users to access, retrieve and view documents and other resources on the Internet .
Precursors to the web browser emerged in the form of hyperlinked applications during the mid and late 1980s, and following these, Tim Berners-Lee is credited with developing, in 1990, both the first web server , and the first web browser, called WorldWideWeb (no spaces) and later renamed Nexus. [ 2 ] Many others were soon developed, with Marc Andreessen 's 1993 Mosaic (later Netscape ), [ 3 ] being particularly easy to use and install, and often credited with sparking the internet boom of the 1990s. [ 4 ] Today, the major web browsers are Chrome , Safari , Firefox , Opera , and Edge . [ 5 ]
The explosion in popularity of the Web was triggered in September 1993 by NCSA Mosaic , a graphical browser which eventually ran on several popular office and home computers. [ 6 ] It was the first web browser aiming to bring multimedia content to non-technical users, and therefore included images and text on the same page, unlike previous browser designs. [ 7 ] Its founder, Marc Andreessen, also established the company that, in 1994, released Netscape Navigator , which resulted in one of the early browser wars , when it ended up in a competition for dominance (which it lost) with Microsoft 's Internet Explorer for Windows .
In 1984, expanding on ideas from futurist Ted Nelson , Neil Larson's commercial DOS MaxThink outline program [ 8 ] [ 9 ] added [ 10 ] [ 11 ] [ 12 ] angle-bracket hypertext jumps (adopted by later web browsers) to and from ASCII , batch, and other MaxThink files up to 32 levels deep. In 1986, [ 13 ] he released his DOS Houdini knowledge network program, [ 14 ] [ 15 ] which supported 2,500 topics cross-connected with 7,500 links in each file, along with hypertext links [ 11 ] among unlimited numbers of external ASCII, batch, and other Houdini files. These capabilities were included in his then-popular shareware DOS file browser programs HyperRez (memory resident) and PC Hypertext (which also added jumps to programs, editors, graphic files containing hot-spot jumps, and cross-linked thesaurus/glossary files). These programs introduced many to the browser concept, and 20 years later Google still lists 3,000,000 references to PC Hypertext. In 1989, Larson created both HyperBBS [ 16 ] [ 17 ] and HyperLan, [ 18 ] which both allow multiple users to create and edit topics and jumps for information and knowledge annealing, a concept the columnist John C. Dvorak says pre-dated the wiki by many years.
From 1987 [ 19 ] [ 20 ] on, Neil Larson also created TransText (a hypertext word processor) and many utilities for rapidly building large-scale knowledge systems. In 1989, his software helped produce, for one of the big eight accounting firms, a comprehensive knowledge system (an integrated litigation knowledge system) [ 21 ] integrating all accounting laws and regulations into a CD-ROM containing 50,000 files with 200,000 hypertext jumps. Additionally, the development history of Lynx (a very early text-based web browser) notes that the project's origin was based on the browser concepts of Neil Larson and MaxThink. [ 22 ] In 1989, he declined to join the Mosaic browser team, preferring knowledge/wisdom creation over distributing information, a problem he says is still not solved by today's internet.
Another early browser, Silversmith, was created by John Bottoms in 1986. [ 23 ] [ 24 ] The browser, based on SGML tags, [ 25 ] used a tag set from the Electronic Document Project of the AAP with minor modifications and was sold to a number of early adopters. [ 26 ] [ 27 ] [ 28 ] At the time, SGML was used exclusively for the formatting of printed documents. [ 29 ] The use of SGML for electronically displayed documents signaled a shift in electronic publishing and was met with considerable resistance. Silversmith included an integrated indexer, full-text searches, hypertext links between images, text and sound using SGML tags, and a return stack for use with hypertext links. It included capabilities still not available in today's browsers, such as the ability to restrict searches within document structures, searches on indexed documents using wildcards, and the ability to search on tag attribute values and attribute names.
Peter Scott and Earle Fogel expanded the earlier HyperRez (1988) concept in creating HyTelnet in 1990, which added jumps to telnet sites and offered users instant logon and access to the online catalogs of over 5,000 libraries around the world. The strength of HyTelnet was speed and simplicity in link creation and execution, at the expense of a centralized worldwide source for adding, indexing, and modifying telnet links. This problem was solved by the invention of the web server.
In April 1990, a draft patent application for a mass-market consumer device for browsing pages via links, "PageLink", was proposed by Craig Cockburn at Digital Equipment Corporation (DEC) whilst he was working in its Networking and Communications division in Reading, England. The application, for a keyboard-less touch-screen browser for consumers, also makes reference to "navigating and searching text" and "bookmarks", and was aimed at (quotes paraphrased) "replacing books", "storing a shopping list", having an "updated personalised newspaper updated round the clock" and "dynamically updated maps for use in a car", and suggests such a device could have a "profound effect on the advertising industry". The patent was shelved by Digital as too futuristic and, being largely hardware-based, faced obstacles to market that purely software-driven approaches lacked.
The first web browser, WorldWideWeb , was developed in 1990 by Tim Berners-Lee for the NeXT Computer (at the same time as the first web server for the same machine) [ 30 ] [ 31 ] and introduced to his colleagues at CERN in March 1991. Berners-Lee recruited Nicola Pellow , a math student intern working at CERN, to write the Line Mode Browser , a cross-platform web browser that displayed web pages on old terminals; it was released in May 1991. [ 32 ]
In 1992, Tony Johnson released the MidasWWW browser. Based on Motif/X, MidasWWW allowed viewing of PostScript files on the Web from Unix and VMS , and even handled compressed PostScript. [ 33 ] Another early popular Web browser was ViolaWWW , which was modeled after HyperCard . In the same year the Lynx browser was announced [ 22 ] – the only one of these early projects still being maintained and supported today. [ 34 ] Erwise was the first browser with a graphical user interface , developed as a student project at Helsinki University of Technology and released in April 1992, but discontinued in 1994. [ 35 ]
Thomas R. Bruce of the Legal Information Institute at Cornell Law School started to develop Cello in 1992. When released on 8 June 1993 it was one of the first graphical web browsers, and the first to run on Microsoft Windows ( Windows 3.1 , NT 3.5 ) and OS/2 platforms.
However, the explosion in popularity of the Web was triggered by NCSA Mosaic which was a graphical browser running originally on Unix and soon ported to the Amiga and VMS platforms, and later the Apple Macintosh and Microsoft Windows platforms. Version 1.0 was released in September 1993, [ 6 ] and was dubbed the killer application of the Internet. It was the first web browser to display images inline with the document's text. [ 7 ] Prior browsers would display an icon that, when clicked, would download and open the graphic file in a helper application . This was an intentional design decision on both parts, as the graphics support in early browsers was intended for displaying charts and graphs associated with technical papers while the user scrolled to read the text, while Mosaic was trying to bring multimedia content to non-technical users. Mosaic and browsers derived from it had a user option to automatically display images inline or to show an icon for opening in external programs. Marc Andreessen , who was the leader of the Mosaic team at NCSA, quit to form a company that would later be known as Netscape Communications Corporation . Netscape released its flagship Navigator product in October 1994, and it took off the next year.
IBM presented its own WebExplorer with OS/2 Warp in 1994 and version 1.0 was released 6 January 1995.
UdiWWW, released in 1995, was the first web browser able to handle all HTML 3 features, including the math tags. Following the release of version 1.2 in April 1996, Bernd Richter ceased development, stating "let Microsoft with the ActiveX Development Kit do the rest." [ 36 ] [ 37 ] [ 38 ]
Microsoft , which had thus far not marketed a browser, finally entered the fray with its Internet Explorer product (version 1.0 was released 16 August 1995), purchased from Spyglass, Inc. This began what is known as the " browser wars " in which Microsoft and Netscape competed for the Web browser market.
Early web users were free to choose among the handful of web browsers available, just as they would choose any other application— web standards would ensure their experience remained largely the same. The browser wars put the Web in the hands of millions of ordinary PC users, but showed how commercialization of the Web could stymie standards efforts. Both Microsoft and Netscape liberally incorporated proprietary extensions to HTML in their products, and tried to gain an edge by product differentiation, leading to a web by the late 1990s where only Microsoft or Netscape browsers were viable contenders. In a victory for a standardized web, Cascading Style Sheets , proposed by Håkon Wium Lie , were accepted over Netscape's JavaScript Style Sheets (JSSS) by W3C .
In 1996, Netscape's share of the browser market reached 86% (with Internet Explorer approaching 10%); but then Microsoft began integrating its browser with its operating system and bundling deals with OEMs . Within 4 years of its release IE had 75% of the browser market and by 1999 it had 99% of the market. [ 39 ] Although Microsoft has since faced antitrust litigation on these charges, the browser wars effectively ended once it was clear that Netscape's declining market share trend was irreversible. Prior to the release of Mac OS X , Internet Explorer for Mac and Netscape were also the primary browsers in use on the Macintosh platform.
Unable to continue commercially funding their product's development, Netscape responded by open sourcing its product, creating Mozilla . This helped the browser maintain its technical edge over Internet Explorer, but did not slow Netscape's declining market share. Netscape was purchased by America Online in late 1998.
At first, the Mozilla project struggled to attract developers, but by 2002, it had evolved into a relatively stable and powerful internet suite . Mozilla 1.0 was released to mark this milestone. Also in 2002, a spinoff project that would eventually become the popular Firefox was released.
Firefox, like its predecessor the Mozilla browser, has been downloadable for free from the start. Firefox's business model, unlike the business model of 1990s Netscape, primarily consists of doing deals with search engines such as Google to direct users towards them – see Web browser#Business models .
In 2003, Microsoft announced that Internet Explorer would no longer be made available as a separate product but would be part of the evolution of its Windows platform, and that no more releases for the Macintosh would be made.
AOL announced that it would retire support and development of the Netscape web browser in February 2008. [ 40 ]
In the second half of 2004, Internet Explorer reached a peak market share of more than 92%. [ 41 ] Since then, its market share has been slowly but steadily declining and is around 11.8% as of July 2013. In early 2005, Microsoft reversed its decision to release Internet Explorer as part of Windows, announcing that a standalone version of Internet Explorer was under development. Internet Explorer 7 was released for Windows XP , Windows Server 2003 , and Windows Vista in October 2006. Internet Explorer 8 was released on 19 March 2009, for Windows XP, Windows Server 2003, Windows Vista, Windows Server 2008 , and Windows 7 . [ 42 ] Internet Explorer 9, 10 and 11 were later released, and version 11 is included in Windows 10 , but Microsoft Edge became the default browser there.
Apple 's Safari , the default browser on Mac OS X from version 10.3 onwards, has grown to dominate browsing on Mac OS X. Browsers such as Firefox , Camino , Google Chrome , and OmniWeb are alternative browsers for Mac systems. OmniWeb and Google Chrome, like Safari, use the WebKit rendering engine (forked from KHTML ), which is packaged by Apple as a framework for use by third-party applications. In August 2007, Apple also ported Safari for use on the Windows XP and Vista operating systems.
Opera was first released in 1996. It was a popular choice in handheld devices, particularly mobile phones, but remains a niche player in the PC Web browser market. It was also available on Nintendo 's DS , DS Lite and Wii consoles. [ 43 ] The Opera Mini browser uses the Presto layout engine like all versions of Opera , but runs on most phones supporting Java MIDlets.
The Lynx browser remains popular for Unix shell users and with vision impaired users due to its entirely text-based nature. There are also several text-mode browsers with advanced features, such as w3m , Links (which can operate both in text and graphical mode), and the Links forks such as ELinks .
A number of web browsers have been derived and branched from source code of earlier versions and products.
This is a table of personal computer web browsers by year of release of major version. The increased growth of the Internet in the 1990s and 2000s means that current browsers with small market shares have more total users than the entire market early on. For example, 90% market share in 1997 would be roughly 60 million users, but by the start of 2007 9% market share would equate to over 90 million users. [ 44 ]
This table focuses on operating systems (OS) and browsers from 1990 to 2000. The year listed for a version is usually the year of the first official release; an end year marks the end of development, a change of project, or another relevant termination. Releases of operating systems and browsers from the early 1990s up to the 2001–02 time frame are the current focus.
Many early browsers can be made to run on later OS (and later browsers on early OS in some cases); however, most of these situations are avoided in the table. Terms are defined below .
| https://en.wikipedia.org/wiki/History_of_the_web_browser
The history of thermodynamics is a fundamental strand in the history of physics , the history of chemistry , and the history of science in general. Due to the relevance of thermodynamics in much of science and technology , its history is finely woven with the developments of classical mechanics , quantum mechanics , magnetism , and chemical kinetics , to more distant applied fields such as meteorology , information theory , and biology ( physiology ), and to technological developments such as the steam engine , internal combustion engine , cryogenics and electricity generation . The development of thermodynamics both drove and was driven by atomic theory . It also, albeit in a subtle manner, motivated new directions in probability and statistics ; see, for example, the timeline of thermodynamics .
The ancients viewed heat as that related to fire. In 3000 BC, the ancient Egyptians viewed heat as related to origin mythologies. [ 1 ] The ancient Indian philosophy including Vedic philosophy believed that five classical elements (or pancha mahā bhūta) are the basis of all cosmic creations. [ 2 ] In the Western philosophical tradition, after much debate about the primal element among earlier pre-Socratic philosophers , Empedocles proposed a four-element theory, in which all substances derive from earth , water , air , and fire . The Empedoclean element of fire is perhaps the principal ancestor of later concepts such as phlogiston and caloric . Around 500 BC, the Greek philosopher Heraclitus became famous as the "flux and fire" philosopher for his proverbial utterance: "All things are flowing." Heraclitus argued that the three principal elements in nature were fire, earth, and water.
The 5th century BC Greek philosopher Parmenides , in his only known work, a poem conventionally titled On Nature , uses verbal reasoning to postulate that a void, essentially what is now known as a vacuum , in nature could not occur. This view was supported by the arguments of Aristotle , but was criticized by Leucippus and Hero of Alexandria . From antiquity to the Middle Ages various arguments were put forward to prove or disprove the existence of a vacuum, and several attempts were made to construct a vacuum, but all proved unsuccessful.
Atomism is a central part of today's relationship between thermodynamics and statistical mechanics. Ancient thinkers such as Leucippus and Democritus , and later the Epicureans , by advancing atomism, laid the foundations for the later atomic theory [ citation needed ] . Until experimental proof of atoms was later provided in the 20th century, the atomic theory was driven largely by philosophical considerations and scientific intuition.
The European scientists Cornelius Drebbel , Robert Fludd , Galileo Galilei and Santorio Santorio in the 16th and 17th centuries were able to gauge the relative " coldness " or " hotness " of air, using a rudimentary air thermometer (or thermoscope ). This may have been influenced by an earlier device which could expand and contract the air constructed by Philo of Byzantium and Hero of Alexandria .
The idea that heat is a form of motion is perhaps an ancient one and is certainly discussed by the English philosopher and scientist Francis Bacon in 1620 in his Novum Organum . Bacon surmised: "Heat itself, its essence and quiddity is motion and nothing else." [ 3 ] "not ... of the whole, but of the small particles of the body." [ 4 ]
In 1637, in a letter to the Dutch scientist Christiaan Huygens , the French philosopher René Descartes wrote: [ 5 ]
Lifting 100 lb one foot twice over is the same as lifting 200 lb one foot, or 100 lb two feet.
In 1686, the German philosopher Gottfried Leibniz wrote essentially the same thing: The same force ["work" in modern terms] is necessary to raise body A of 1 pound (libra) to a height of 4 yards (ulnae), as is necessary to raise body B of 4 pounds to a height of 1 yard. [ 6 ]
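In modern terms (though not in the authors' notation), both remarks state that the work of lifting is the product of weight and height, so it is this product, rather than either factor alone, that measures the effect:

$$ W = F \cdot h, \qquad 2 \times (100\,\mathrm{lb} \times 1\,\mathrm{ft}) = 200\,\mathrm{lb} \times 1\,\mathrm{ft} = 100\,\mathrm{lb} \times 2\,\mathrm{ft} = 200\ \mathrm{ft{\cdot}lbf}. $$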
In Principles of Philosophy ( Principia Philosophiae ) from 1644, Descartes defined "quantity of motion" ( Latin : quantitas motus ) as the product of size and speed, [ 7 ] and claimed that the total quantity of motion in the universe is conserved. [ 7 ] [ 8 ]
If x is twice the size of y, and is moving half as fast, then there's the same amount of motion in each.
[God] created matter, along with its motion ... merely by letting things run their course, he preserves the same amount of motion ... as he put there in the beginning.
He claimed that merely by letting things run their course, God preserves the same amount of motion as He created, and that thus the total quantity of motion in the universe is conserved. [ 9 ]
Irish physicist and chemist Robert Boyle in 1656, in coordination with English scientist Robert Hooke , built an air pump. Using this pump, Boyle and Hooke noticed the pressure-volume correlation: PV=constant. In that time, air was assumed to be a system of motionless particles, and not interpreted as a system of moving molecules. The concept of thermal motion came two centuries later. Therefore, Boyle's publication in 1660 speaks about a mechanical concept: the air spring. [ 10 ] Later, after the invention of the thermometer, the property temperature could be quantified. This tool gave Gay-Lussac the opportunity to derive his law, which led shortly later to the ideal gas law .
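In modern notation, the pressure–volume relation that Boyle and Hooke observed, and the ideal gas law to which Gay-Lussac's law eventually led, read:

$$ PV = k \quad (\text{fixed temperature and amount of gas}), \qquad PV = nRT. $$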
Denis Papin , an associate of Boyle's, built in 1679 a bone digester, which is a closed vessel with a tightly fitting lid that confines steam until a high pressure is generated. Later designs implemented a steam release valve to keep the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1698, drawing on Papin's designs, engineer Thomas Savery patented his "fire engine"; in 1712 Thomas Newcomen greatly improved upon it by incorporating a piston. This made the engine suitable for mechanical work in addition to pumping to heights beyond 30 feet, and the Newcomen engine is thus often considered the first true steam engine.
The phenomenon of heat conduction is immediately grasped in everyday life. The fact that warm air rises and the importance of the phenomenon to meteorology were first realised by Edmond Halley in 1686. [ 11 ]
In 1701, Sir Isaac Newton published his law of cooling .
The theory of phlogiston arose in the 17th century, late in the period of alchemy. Its replacement by caloric theory in the 18th century is one of the historical markers of the transition from alchemy to chemistry. Phlogiston was a hypothetical substance that was presumed to be liberated from combustible substances during burning , and from metals during the process of rusting .
In 1702 Guillaume Amontons introduced the concept of absolute zero based on observations of gases .
An early scientific reflection on the microscopic and kinetic nature of matter and heat is found in a work by Mikhail Lomonosov , in which he wrote: "Movement should not be denied based on the fact it is not seen. ... leaves of trees move when rustled by a wind, despite it being unobservable from large distances. Just as in this case motion ... remains hidden in warm bodies due to the extremely small sizes of the moving particles."
During the same years, Daniel Bernoulli published his book Hydrodynamics (1738), in which he derived an equation for the pressure of a gas considering the collisions of its atoms with the walls of a container. He proved that this pressure is two thirds the average kinetic energy of the gas in a unit volume. [ citation needed ] Bernoulli's ideas, however, made little impact on the dominant caloric culture. Bernoulli made a connection with Gottfried Leibniz 's vis viva principle, an early formulation of the principle of conservation of energy , and the two theories became intimately entwined throughout their history.
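In modern notation, Bernoulli's result for the pressure of a gas of N molecules of mass m in a volume V reads

$$ P = \frac{1}{3}\,\frac{N}{V}\, m\,\overline{v^{2}} \;=\; \frac{2}{3}\,\frac{N}{V}\left(\frac{1}{2}\, m\,\overline{v^{2}}\right), $$

that is, two thirds of the average kinetic energy of the molecules per unit volume.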
In the caloric theory of heat, bodies were capable of holding a certain amount of the heat fluid, leading to the term heat capacity , named and first investigated by Scottish chemist Joseph Black in the 1750s. [ 12 ]
In the mid- to late 19th century, heat became understood as a manifestation of a system's internal energy . Today heat is seen as the transfer of disordered thermal energy. Nevertheless, at least in English, the term heat capacity survives. In some other languages, the term thermal capacity is preferred, and it is also sometimes used in English.
Prior to 1698 and the invention of the Savery engine , horses were used to power pulleys, attached to buckets, which lifted water out of flooded salt mines in England. In the years to follow, more variations of steam engines were built, such as the Newcomen engine , and later the Watt engine . In time, these early engines would eventually be utilized in place of horses. Thus, each engine began to be associated with a certain amount of "horse power" depending upon how many horses it had replaced. The main problem with these first engines was that they were slow and clumsy, converting less than 2% of the input fuel into useful work. In other words, large quantities of coal (or wood) had to be burned to yield only a small fraction of work output. Hence the need for a new science of engine dynamics was born.
In the mid- to late 18th century, heat was thought to be a measurement of an invisible fluid, known as the caloric . Like phlogiston, caloric was presumed to be the "substance" of heat that would flow from a hotter body to a cooler body, thus warming it. The utility and explanatory power of kinetic theory , however, soon started to displace the caloric theory. Nevertheless, William Thomson , for example, was still trying to explain James Joule 's observations within a caloric framework as late as 1850. The caloric theory was largely obsolete by the end of the 19th century.
Joseph Black and Antoine Lavoisier made important contributions in the precise measurement of heat changes using the calorimeter, a subject which became known as thermochemistry . The development of the steam engine focused attention on calorimetry and the amount of heat produced from different types of coal . The first quantitative research on the heat changes during chemical reactions was initiated by Lavoisier using an ice calorimeter following research by Joseph Black on the latent heat of water.
Carl Wilhelm Scheele distinguished heat transfer by thermal radiation (radiant heat) from that by convection and conduction in 1777.
In the 17th century, it came to be believed that all materials had an identical conductivity and that differences in sensation arose from their different heat capacities . Suggestions that this might not be the case came from the new science of electricity , in which it was easily apparent that some materials were good electrical conductors while others were effective insulators. Jan Ingen-Housz in 1785–89 made some of the earliest measurements, as did Benjamin Thompson during the same period.
In 1791, Pierre Prévost showed that all bodies radiate heat, no matter how hot or cold they are. In 1804, Sir John Leslie observed that a matte black surface radiates heat more effectively than a polished surface, suggesting the importance of black-body radiation .
In the 19th century, scientists abandoned the idea of a physical caloric. The first substantial experimental challenges to the caloric theory arose in a 1798 work by Benjamin Thompson (Count Rumford), in which he showed that boring cast iron cannons produced great amounts of heat, which he ascribed to friction . His work was among the first to undermine the caloric theory.
As a result of his experiments in 1798, Thompson suggested that heat was a form of motion, though no attempt was made to reconcile theoretical and experimental approaches, and it is unlikely that he was thinking of the vis viva principle.
Although early steam engines were crude and inefficient, they attracted the attention of the leading scientists of the time. One such scientist was Sadi Carnot , the "father of thermodynamics", who in 1824 published Reflections on the Motive Power of Fire , a discourse on heat, power, and engine efficiency. Most cite this book as the starting point for thermodynamics as a modern science. (The name "thermodynamics", however, did not arrive until 1854, when the British mathematician and physicist William Thomson (Lord Kelvin) coined the term thermo-dynamics in his paper On the Dynamical Theory of Heat .) [ 13 ]
Carnot defined "motive power" to be the expression of the useful effect that a motor is capable of producing. Herein, Carnot introduced us to the first modern day definition of " work ": weight lifted through a height . The desire to understand, via formulation, this useful effect in relation to "work" is at the core of all modern day thermodynamics.
Even though he was working with the caloric theory, Carnot in 1824 suggested that some of the caloric available for generating useful work is lost in any real process.
Though it had come to be suspected from Scheele's work, in 1831 Macedonio Melloni demonstrated that radiant heat could be reflected , refracted and polarised in the same way as light.
John Herapath independently formulated a kinetic theory in 1820, but mistakenly associated temperature with momentum rather than vis viva or kinetic energy . His work ultimately failed peer review , even from someone as well-disposed to the kinetic principle as Humphry Davy , and was neglected.
John James Waterston in 1843 provided a largely accurate account, again independently, but his work received the same reception, failing peer review.
Further progress in kinetic theory started only in the middle of the 19th century, with the works of Rudolf Clausius , James Clerk Maxwell , and Ludwig Boltzmann .
Quantitative studies by Joule from 1843 onwards provided soundly reproducible phenomena, and helped to place the subject of thermodynamics on a solid footing. In 1843, Joule experimentally found the mechanical equivalent of heat . In 1845, Joule reported his best-known experiment, involving the use of a falling weight to spin a paddle-wheel in a barrel of water, which allowed him to estimate a mechanical equivalent of heat of 819 ft·lbf/Btu (4.41 J/cal). This led to the theory of conservation of energy and explained why heat can do work.
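Joule's figure can be converted to modern units as follows (using 1 ft·lbf ≈ 1.356 J and 1 Btu ≈ 252 cal); the accepted modern value is about 4.186 J/cal:

$$ 819\ \frac{\mathrm{ft{\cdot}lbf}}{\mathrm{Btu}} \times \frac{1.356\ \mathrm{J}/\mathrm{ft{\cdot}lbf}}{252\ \mathrm{cal}/\mathrm{Btu}} \approx 4.41\ \frac{\mathrm{J}}{\mathrm{cal}}. $$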
The idea of absolute zero was generalised in 1848 by Lord Kelvin.
In March 1851, while grappling to come to terms with the work of Joule, Lord Kelvin started to speculate that there was an inevitable loss of useful heat in all processes. The idea was framed even more dramatically by Hermann von Helmholtz in 1854, giving birth to the spectre of the heat death of the universe .
In 1854, William John Macquorn Rankine started to make use of what he called thermodynamic function in calculations. This has subsequently been shown to be identical to the concept of entropy formulated by the famed mathematical physicist Rudolf Clausius . [ 14 ]
In 1865, Clausius coined the term " entropy " ( das Wärmegewicht , symbolized S ) to denote heat lost or turned into waste. [ 15 ] [ 16 ] (" Wärmegewicht " translates literally as "heat-weight"; the corresponding English term stems from the Greek τρέπω , "I turn".) Clausius used the concept to develop his classic statement of the second law of thermodynamics the same year. [ 17 ]
In his 1857 work On the nature of the motion called heat , Clausius for the first time clearly states that heat is the average kinetic energy of molecules.
Clausius' above statement interested the Scottish mathematician and physicist James Clerk Maxwell , who in 1859 derived the momentum distribution later named after him. The Austrian physicist Ludwig Boltzmann subsequently generalized this distribution for the case of gases in external fields. In association with Clausius, in 1871, Maxwell formulated a new branch of thermodynamics called statistical thermodynamics , which analyzes large numbers of particles at equilibrium , i.e., systems where no changes are occurring, such that only their average properties, such as temperature T , pressure P , and volume V , remain important.
Boltzmann is perhaps the most significant contributor to kinetic theory, as he introduced many of the fundamental concepts in the theory. Besides the Maxwell–Boltzmann distribution mentioned above, he also associated the kinetic energy of particles with their degrees of freedom . The Boltzmann equation for the distribution function of a gas in non-equilibrium states is still the most effective equation for studying transport phenomena in gases and metals. By introducing the concept of thermodynamic probability as the number of microstates corresponding to the current macrostate, he showed that its logarithm is proportional to entropy.
In 1875, the Austrian physicist Ludwig Boltzmann formulated a precise connection between entropy S and molecular motion:

S = k log W

with S being defined in terms of the number of possible states W that such motion could occupy, where k is the Boltzmann constant .
In 1876, chemical engineer Willard Gibbs published an obscure 300-page paper titled: On the Equilibrium of Heterogeneous Substances , wherein he formulated one grand equality, the Gibbs free energy equation, which suggested a measure of the amount of "useful work" attainable in reacting systems.
Gibbs also originated the concept we now know as enthalpy H , calling it "a heat function for constant pressure". [ 18 ] The modern word enthalpy would be coined many years later by Heike Kamerlingh Onnes , [ 19 ] who based it on the Greek word enthalpein meaning to warm .
James Clerk Maxwell 's 1862 insight that both light and radiant heat were forms of electromagnetic wave led to the start of the quantitative analysis of thermal radiation. In 1879, Jožef Stefan observed that the total radiant flux from a blackbody is proportional to the fourth power of its temperature and stated the Stefan–Boltzmann law . The law was derived theoretically by Ludwig Boltzmann in 1884.
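In modern form, the Stefan–Boltzmann law states that the total radiant flux j* from a blackbody at absolute temperature T is

$$ j^{*} = \sigma T^{4}, $$

where σ is the Stefan–Boltzmann constant.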
In 1900 Max Planck found an accurate formula for the spectrum of black-body radiation. Fitting new data required the introduction of a new constant, known as the Planck constant , the fundamental constant of modern physics. Looking at the radiation as coming from a cavity oscillator in thermal equilibrium, the formula suggested that energy in a cavity occurs only in multiples of frequency times the constant. That is, it is quantized. This avoided a divergence to which the theory would lead without the quantization.
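In its modern statement, Planck's law gives the spectral radiance of a blackbody at temperature T as

$$ B_{\nu}(T) = \frac{2 h \nu^{3}}{c^{2}}\,\frac{1}{e^{h\nu/kT} - 1}, $$

where h is the Planck constant, ν the frequency, c the speed of light, and k the Boltzmann constant; the quantization of energy in multiples of hν is what removes the divergence of the classical theory at high frequencies.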
In 1906, Walther Nernst stated the third law of thermodynamics .
Building on the foundations above, Lars Onsager , Erwin Schrödinger , Ilya Prigogine and others, brought these engine "concepts" into the thoroughfare of almost every modern-day branch of science.
The following list is a rough disciplinary outline of the major branches of thermodynamics and their time of inception:
Concepts of thermodynamics have also been applied in other fields, for example: | https://en.wikipedia.org/wiki/History_of_thermodynamics |
Early study of triangles can be traced to the 2nd millennium BC , in Egyptian mathematics ( Rhind Mathematical Papyrus ) and Babylonian mathematics . Trigonometry was also prevalent in Kushite mathematics. [ 1 ] Systematic study of trigonometric functions began in Hellenistic mathematics , reaching India as part of Hellenistic astronomy . [ 2 ] In Indian astronomy , the study of trigonometric functions flourished in the Gupta period , especially due to Aryabhata (sixth century AD), who discovered the sine function, cosine function, and versine function.
During the Middle Ages , the study of trigonometry continued in Islamic mathematics , by mathematicians such as Al-Khwarizmi and Abu al-Wafa . The knowledge of trigonometric functions passed to Arabia from the Indian Subcontinent. It became an independent discipline in the Islamic world , where all six trigonometric functions were known. Translations of Arabic and Greek texts led to trigonometry being adopted as a subject in the Latin West beginning in the Renaissance with Regiomontanus .
The development of modern trigonometry shifted during the western Age of Enlightenment , beginning with 17th-century mathematics ( Isaac Newton and James Stirling ) and reaching its modern form with Leonhard Euler (1748).
The term "trigonometry" was derived from Greek τρίγωνον trigōnon , "triangle" and μέτρον metron , "measure". [ 3 ]
The modern words "sine" and "cosine" are derived from the Latin word sinus via mistranslation from Arabic (see Sine and cosine § Etymology ). Particularly Fibonacci 's sinus rectus arcus proved influential in establishing the term. [ 4 ]
The word tangent comes from Latin tangens meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans "cutting" since the line cuts the circle (see the figure at Pythagorean identities ). [ 5 ]
The prefix " co -" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter 's Canon triangulorum (1620), which defines the cosinus as an abbreviation for the sinus complementi (sine of the complementary angle ) and proceeds to define the cotangens similarly. [ 6 ] [ 7 ]
The words "minute" and "second" are derived from the Latin phrases partes minutae primae and partes minutae secundae . [ 8 ] These roughly translate to "first small parts" and "second small parts".
The ancient Egyptians and Babylonians had known of theorems on the ratios of the sides of similar triangles for many centuries. However, as pre-Hellenic societies lacked the concept of an angle measure, they were limited to studying the sides of triangles instead. [ 9 ]
The Babylonian astronomers kept detailed records on the rising and setting of stars , the motion of the planets , and the solar and lunar eclipses , all of which required familiarity with angular distances measured on the celestial sphere . [ 10 ] Based on one interpretation of the Plimpton 322 cuneiform tablet (c. 1900 BC), some have even asserted that the ancient Babylonians had a table of secants, though this reading is problematic: without circles and angle measure, modern trigonometric notions do not apply to the tablet. [ 11 ] There is, however, much debate as to whether it is a table of Pythagorean triples , a solution of quadratic equations, or a trigonometric table . [ 12 ]
The Egyptians, on the other hand, used a primitive form of trigonometry for building pyramids in the 2nd millennium BC. [ 10 ] The Rhind Mathematical Papyrus , written by the Egyptian scribe Ahmes (c. 1680–1620 BC), contains the following problem related to trigonometry: [ 10 ]
"If a pyramid is 250 cubits high and the side of its base 360 cubits long, what is its seked ?"
Ahmes' solution to the problem is the ratio of half the side of the base of the pyramid to its height, or the run-to-rise ratio of its face. In other words, the quantity he found for the seked is the cotangent of the angle to the base of the pyramid and its face. [ 10 ]
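For the problem quoted above, Ahmes' method gives, with the seked customarily expressed in palms per cubit (7 palms to the cubit):

$$ \text{seked} = \frac{360/2}{250} \times 7 = \frac{180}{250} \times 7 = 5\tfrac{1}{25}\ \text{palms per cubit}, $$

in agreement with the answer recorded in the papyrus.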
Ancient Greek and Hellenistic mathematicians made use of the chord . Given a circle and an arc on the circle, the chord is the line that subtends the arc. A chord's perpendicular bisector passes through the center of the circle and bisects the angle. One half of the bisected chord is the sine of one half the bisected angle, that is, crd θ = 2 r sin( θ /2) for a circle of radius r , [ 13 ] and consequently the sine function is also known as the half-chord . Due to this relationship, a number of trigonometric identities and theorems that are known today were also known to Hellenistic mathematicians, but in their equivalent chord form. [ 14 ] [ 15 ]
Although there is no trigonometry in the works of Euclid and Archimedes , in the strict sense of the word, there are theorems presented in a geometric way (rather than a trigonometric way) that are equivalent to specific trigonometric laws or formulas. [ 9 ] For instance, propositions twelve and thirteen of book two of the Elements are the laws of cosines for obtuse and acute angles, respectively. Theorems on the lengths of chords are applications of the law of sines . And Archimedes' theorem on broken chords is equivalent to formulas for sines of sums and differences of angles. [ 9 ] To compensate for the lack of a table of chords , mathematicians of Aristarchus ' time would sometimes use the statement that, in modern notation, sin α /sin β < α / β < tan α /tan β whenever 0° < β < α < 90°, now known as Aristarchus's inequality . [ 16 ]
The first trigonometric table was apparently compiled by Hipparchus of Nicaea (180 – 125 BC), who is now consequently known as "the father of trigonometry." [ 17 ] Hipparchus was the first to tabulate the corresponding values of arc and chord for a series of angles. [ 4 ] [ 17 ]
Although it is not known when the systematic use of the 360° circle came into mathematics, it is known that the systematic introduction of the 360° circle came a little after Aristarchus of Samos composed On the Sizes and Distances of the Sun and Moon (c. 260 BC), since he measured an angle in terms of a fraction of a quadrant. [ 16 ] It seems that the systematic use of the 360° circle is largely due to Hipparchus and his table of chords . Hipparchus may have taken the idea of this division from Hypsicles who had earlier divided the day into 360 parts, a division of the day that may have been suggested by Babylonian astronomy. [ 18 ] In ancient astronomy, the zodiac had been divided into twelve "signs" or thirty-six "decans". A seasonal cycle of roughly 360 days could have corresponded to the signs and decans of the zodiac by dividing each sign into thirty parts and each decan into ten parts. [ 8 ] It is due to the Babylonian sexagesimal numeral system that each degree is divided into sixty minutes and each minute is divided into sixty seconds. [ 8 ]
Menelaus of Alexandria (c. 100 AD) wrote his Sphaerica in three books. In Book I, he established a basis for spherical triangles analogous to the Euclidean basis for plane triangles. [ 15 ] He established a theorem that is without Euclidean analogue, that two spherical triangles are congruent if corresponding angles are equal, but he did not distinguish between congruent and symmetric spherical triangles. [ 15 ] Another theorem that he establishes is that the sum of the angles of a spherical triangle is greater than 180°. [ 15 ] Book II of Sphaerica applies spherical geometry to astronomy. And Book III contains the "theorem of Menelaus". [ 15 ] He further gave his famous "rule of six quantities". [ 19 ]
Later, Claudius Ptolemy (c. 90 – c. 168 AD) expanded upon Hipparchus' Chords in a Circle in his Almagest , or the Mathematical Syntaxis . The Almagest is primarily a work on astronomy, and astronomy relies on trigonometry. Ptolemy's table of chords gives the lengths of chords of a circle of diameter 120 as a function of the number of degrees n in the corresponding arc of the circle, for n ranging from 1/2 to 180 by increments of 1/2. [ 20 ] The thirteen books of the Almagest are the most influential and significant trigonometric work of all antiquity. [ 21 ] A theorem that was central to Ptolemy's calculation of chords was what is still known today as Ptolemy's theorem , that the sum of the products of the opposite sides of a cyclic quadrilateral is equal to the product of the diagonals. A special case of Ptolemy's theorem appeared as proposition 93 in Euclid's Data . Ptolemy's theorem leads to the equivalent of the four sum-and-difference formulas for sine and cosine that are today known as Ptolemy's formulas, although Ptolemy himself used chords instead of sine and cosine. [ 21 ] Ptolemy further derived the equivalent of the half-angle formula sin²( x /2) = (1 − cos x )/2.
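In modern notation, Ptolemy's theorem for a cyclic quadrilateral ABCD, mentioned above, states

$$ \overline{AC}\cdot\overline{BD} \;=\; \overline{AB}\cdot\overline{CD} + \overline{AD}\cdot\overline{BC}, $$

and taking one side of the quadrilateral as a diameter yields the chord equivalents of the sum-and-difference formulas.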
Ptolemy used these results to create his trigonometric tables, but whether these tables were derived from Hipparchus' work cannot be determined. [ 21 ]
Neither the tables of Hipparchus nor those of Ptolemy have survived to the present day, although descriptions by other ancient authors leave little doubt that they once existed. [ 22 ]
Some of the early and very significant developments of trigonometry were in India . Influential works from the 4th–5th century AD, known as the Siddhantas (of which there were five, the most important of which is the Surya Siddhanta [ 23 ] ) first defined the sine as the modern relationship between half an angle and half a chord, while also defining the cosine, versine , and inverse sine . [ 24 ] Soon afterwards, another Indian mathematician and astronomer , Aryabhata (476–550 AD), collected and expanded upon the developments of the Siddhantas in an important work called the Aryabhatiya . [ 25 ] The Siddhantas and the Aryabhatiya contain the earliest surviving tables of sine values and versine (1 − cosine) values, in 3.75° intervals from 0° to 90°, to an accuracy of 4 decimal places. [ 26 ] They used the words jya for sine, kojya for cosine, utkrama-jya for versine, and otkram jya for inverse sine. The words jya and kojya eventually became sine and cosine respectively after a mistranslation described above.
In the 7th century, Bhaskara I produced a formula for calculating the sine of an acute angle without the use of a table. He also gave the following approximation formula for sin( x ), which had a relative error of less than 1.9%: sin x ≈ 16 x (π − x ) / (5π² − 4 x (π − x )), for 0 ≤ x ≤ π.
Later in the 7th century, Brahmagupta redeveloped the formula 1 − sin² x = cos² x = sin²(π/2 − x ) (also derived earlier, as mentioned above) and the Brahmagupta interpolation formula for computing sine values. [ 11 ]
Madhava (c. 1400) made early strides in the analysis of trigonometric functions and their infinite series expansions. He developed the concepts of the power series and Taylor series , and produced the power series expansions of sine, cosine, tangent, and arctangent. [ 27 ] [ 28 ] Using the Taylor series approximations of sine and cosine, he produced a sine table to 12 decimal places of accuracy and a cosine table to 9 decimal places of accuracy. He also gave the power series of π and the angle , radius , diameter , and circumference of a circle in terms of trigonometric functions. His works were expanded by his followers at the Kerala School up to the 16th century. [ 27 ] [ 28 ]
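In modern notation, the series attributed to Madhava include the expansions

$$ \sin x = x - \frac{x^{3}}{3!} + \frac{x^{5}}{5!} - \cdots, \qquad \arctan x = x - \frac{x^{3}}{3} + \frac{x^{5}}{5} - \cdots \quad (|x| \le 1), $$

the arctangent series giving, at x = 1, the series π/4 = 1 − 1/3 + 1/5 − ⋯ for π.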
The Indian text the Yuktibhāṣā contains proof for the expansion of the sine and cosine functions and the derivation and proof of the power series for inverse tangent , discovered by Madhava. The Yuktibhāṣā also contains rules for finding the sines and the cosines of the sum and difference of two angles.
In China , Aryabhata 's table of sines was translated into the Chinese mathematical book of the Kaiyuan Zhanjing , compiled in 718 AD during the Tang dynasty . [ 30 ] Although the Chinese excelled in other fields of mathematics such as solid geometry, binomial theorem , and complex algebraic formulas, early forms of trigonometry were not as widely appreciated as in the earlier Greek, Hellenistic, Indian and Islamic worlds. [ 31 ] Instead, the early Chinese used an empirical substitute known as chong cha , while practical use of plane trigonometry involving the sine, the tangent, and the secant was known. [ 30 ] However, this embryonic state of trigonometry in China slowly began to change and advance during the Song dynasty (960–1279), when Chinese mathematicians began to place greater emphasis on the need for spherical trigonometry in calendrical science and astronomical calculations. [ 30 ] The polymath Chinese scientist, mathematician and official Shen Kuo (1031–1095) used trigonometric functions to solve mathematical problems of chords and arcs. [ 30 ] Victor J. Katz writes that in Shen's formula "technique of intersecting circles", he created an approximation of the arc s of a circle given the diameter d , sagitta v , and length c of the chord subtending the arc, the length of which he approximated as s ≈ c + 2 v ²/ d . [ 32 ]
Sal Restivo writes that Shen's work in the lengths of arcs of circles provided the basis for spherical trigonometry developed in the 13th century by the mathematician and astronomer Guo Shoujing (1231–1316). [ 33 ] As the historians L. Gauchet and Joseph Needham state, Guo Shoujing used spherical trigonometry in his calculations to improve the calendar system and Chinese astronomy . [ 30 ] [ 34 ] Along with a later 17th-century Chinese illustration of Guo's mathematical proofs, Needham states that:
Guo used a quadrangular spherical pyramid, the basal quadrilateral of which consisted of one equatorial and one ecliptic arc, together with two meridian arcs , one of which passed through the summer solstice point...By such methods he was able to obtain the du lü (degrees of equator corresponding to degrees of ecliptic), the ji cha (values of chords for given ecliptic arcs), and the cha lü (difference between chords of arcs differing by 1 degree). [ 35 ]
Despite the achievements of Shen and Guo's work in trigonometry, another substantial work in Chinese trigonometry would not be published again until 1607, with the dual publication of Euclid's Elements by Chinese official and astronomer Xu Guangqi (1562–1633) and the Italian Jesuit Matteo Ricci (1552–1610). [ 36 ]
Previous works from India and Greece were later translated and expanded in the medieval Islamic world by Muslim mathematicians of mostly Persian and Arab descent , who enunciated a large number of theorems which freed the subject of trigonometry from dependence upon the complete quadrilateral , as was the case in Hellenistic mathematics due to the application of Menelaus' theorem . According to E. S. Kennedy, it was after this development in Islamic mathematics that "the first real trigonometry emerged, in the sense that only then did the object of study become the spherical or plane triangle , its sides and angles ." [ 37 ]
Methods dealing with spherical triangles were also known, particularly the method of Menelaus of Alexandria , who developed "Menelaus' theorem" to deal with spherical problems. [ 15 ] [ 38 ] However, E. S. Kennedy points out that while it was possible in pre-Islamic mathematics to compute the magnitudes of a spherical figure, in principle, by use of the table of chords and Menelaus' theorem, the application of the theorem to spherical problems was very difficult in practice. [ 39 ] In order to observe holy days on the Islamic calendar in which timings were determined by phases of the moon , astronomers initially used Menelaus' method to calculate the place of the moon and stars , though this method proved to be clumsy and difficult. It involved setting up two intersecting right triangles ; by applying Menelaus' theorem it was possible to solve one of the six sides, but only if the other five sides were known. To tell the time from the sun 's altitude , for instance, repeated applications of Menelaus' theorem were required. For medieval Islamic astronomers , there was an obvious challenge to find a simpler trigonometric method. [ 40 ]
In the early 9th century AD, Muhammad ibn Mūsā al-Khwārizmī produced accurate sine and cosine tables. He was also a pioneer in spherical trigonometry . In 830 AD, Habash al-Hasib al-Marwazi discovered the tangent and the cotangent and produced the first table of these trigonometric functions . [ 41 ] [ 42 ] Muhammad ibn Jābir al-Harrānī al-Battānī (Albatenius) (853–929 AD) discovered the secant and the cosecant, and produced the first table of cosecants for each degree from 1° to 90°. [ 43 ]
By the 10th century AD, in the work of Abū al-Wafā' al-Būzjānī , all six trigonometric functions were used. [ 44 ] Abu al-Wafa had sine tables in 0.25° increments, to 8 decimal places of accuracy, and accurate tables of tangent values. [ 44 ] He also developed the following trigonometric formula: sin(2 x ) = 2 sin( x ) cos( x ). [ 45 ]
In his original text, Abū al-Wafā' states: "If we want that, we multiply the given sine by the cosine minutes , and the result is half the sine of the double". [ 45 ] Abū al-Wafā also established the angle addition and difference identities presented with complete proofs: sin( α ± β ) = sin α cos β ± cos α sin β . [ 45 ]
For the second one, the text states: "We multiply the sine of each of the two arcs by the cosine of the other minutes . If we want the sine of the sum, we add the products, if we want the sine of the difference, we take their difference". [ 45 ]
He also discovered the law of sines for spherical trigonometry: sin A /sin a = sin B /sin b = sin C /sin c , where A , B , C are the angles of a spherical triangle and a , b , c the sides opposite them. [ 41 ]
Also in the late 10th and early 11th centuries AD, the Egyptian astronomer Ibn Yunus performed many careful trigonometric calculations and demonstrated the following trigonometric identity : cos a cos b = [cos( a − b ) + cos( a + b )]/2. [ 46 ]
Al-Jayyani (989–1079) of al-Andalus wrote The book of unknown arcs of a sphere , which is considered "the first treatise on spherical trigonometry ". [ 47 ] It "contains formulae for right-handed triangles , the general law of sines, and the solution of a spherical triangle by means of the polar triangle." This treatise later had a "strong influence on European mathematics", and his "definition of ratios as numbers" and "method of solving a spherical triangle when all sides are unknown" are likely to have influenced Regiomontanus . [ 47 ]
The method of triangulation was first developed by Muslim mathematicians, who applied it to practical uses such as surveying [ 48 ] and Islamic geography , as described by Abu Rayhan Biruni in the early 11th century. Biruni himself introduced triangulation techniques to measure the size of the Earth and the distances between various places. [ 49 ] In the late 11th century, Omar Khayyám (1048–1131) solved cubic equations using approximate numerical solutions found by interpolation in trigonometric tables. In the 13th century, Naṣīr al-Dīn al-Ṭūsī was the first to treat trigonometry as a mathematical discipline independent from astronomy, and he developed spherical trigonometry into its present form. [ 42 ] He listed the six distinct cases of a right-angled triangle in spherical trigonometry, and in his Book on the Complete Quadrilateral , he stated the law of sines for plane and spherical triangles, discovered the law of tangents for spherical triangles, and provided proofs for both these laws. [ 50 ] Nasir al-Din al-Tusi has been described as the creator of trigonometry as a mathematical discipline in its own right. [ 51 ] [ 52 ] [ 53 ]
The law of cosines , in geometric form, can be found as propositions II.12–13 in Euclid's Elements (c. 300 BC), [ 54 ] but was not used for the solution of triangles per se. Medieval Islamic mathematicians developed a method for finding the third side of an arbitrary triangle given two sides and the included angle based on the same concept but more similar to the modern formulation of the law of cosines. A sketch of the method can be found in Naṣīr al-Dīn al-Ṭūsī's Book on the Complete Quadrilateral (c. 1250), [ 55 ] and the same method is described in more detail in Jamshīd al-Kāshī 's Key of Arithmetic (1427). [ 56 ] Al-Kāshī also computed the sine of 1° accurate to 8 sexagesimal digits, and constructed the most accurate trigonometric tables to date, accurate to four sexagesimal places (equivalent to 8 decimal places) for each 1° of arc. [ citation needed ] Al-Kāshī presumably worked on Ulugh Beg 's even more comprehensive trigonometric tables, with five-place (sexagesimal) entries for each minute of arc. [ citation needed ]
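In modern form, the rule these mathematicians approached, finding the third side c of a triangle from the two sides a and b and the included angle C, is the law of cosines:

$$ c^{2} = a^{2} + b^{2} - 2ab\cos C. $$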
In 1342, Levi ben Gershon, known as Gersonides , wrote On Sines, Chords and Arcs , in particular proving the sine law for plane triangles and giving five-figure sine tables . [ 57 ]
A simplified trigonometric table, the " toleta de marteloio ", was used by sailors in the Mediterranean Sea during the 14th–15th centuries to calculate navigation courses. It is described by Ramon Llull of Majorca in 1295, and laid out in the 1436 atlas of Venetian captain Andrea Bianco .
Regiomontanus was perhaps the first mathematician in Europe to treat trigonometry as a distinct mathematical discipline, [ 58 ] in his De triangulis omnimodis written in 1464, as well as his later Tabulae directionum which included the tangent function, unnamed.
The Opus palatinum de triangulis of Georg Joachim Rheticus , a student of Copernicus , was probably the first in Europe to define trigonometric functions directly in terms of right triangles instead of circles, with tables for all six trigonometric functions; this work was finished by Rheticus' student Valentin Otho in 1596.
In the 17th century, Isaac Newton and James Stirling developed the general Newton–Stirling interpolation formula for trigonometric functions.
In the 18th century, Leonhard Euler 's Introductio in analysin infinitorum (1748) was mostly responsible for establishing the analytic treatment of trigonometric functions in Europe, deriving their infinite series and presenting " Euler's formula " e ix = cos x + i sin x . Euler used the near-modern abbreviations sin. , cos. , tang. , cot. , sec. , and cosec. Prior to this, Roger Cotes had computed the derivative of sine in his Harmonia Mensurarum (1722). [ 59 ] Also in the 18th century, Brook Taylor defined the general Taylor series and gave the series expansions and approximations for all six trigonometric functions. The works of James Gregory in the 17th century and Colin Maclaurin in the 18th century were also very influential in the development of trigonometric series. | https://en.wikipedia.org/wiki/History_of_trigonometry
A Virtual Learning Environment (VLE) is a system specifically designed to facilitate the management of educational courses by teachers for their students. It predominantly relies on computer hardware and software, enabling distance learning. In North America , this concept is commonly denoted as a "Learning Management System" (LMS).
The terminology for systems which integrate and manage computer-based learning has changed over the years. Common terms which are useful in understanding and searching for earlier materials include:
Develops the "Interactive Learning Network" ILN 1.5 and installs it at several academic institutions, including Cornell University, Yale Medical School, and the University of Pittsburgh. The ILN was the first e-learning system of its kind to run on top of a relational database, MySQL.
Later that year, in October 2000, the ArsDigita Community Education System (ACES) is deployed at the MIT Sloan School, where the system is called Sloanspace. [ 124 ] Over the next few years, the ArsDigita Community System and ACES grow into OpenACS and .LRN | https://en.wikipedia.org/wiki/History_of_virtual_learning_environments
The history of water filters can be traced to the earliest civilisations with written records. Water filters have been used throughout history to improve the safety and aesthetics of water intended to be used for drinking or bathing. In modern times, they are also widely used in industry and commerce. The history of water filtration is closely linked with the broader history of improvements in public health . [ 1 ]
Ancient Sanskrit and Egyptian writings document practices that were followed to keep water pure for drinking. The Sushruta Samhita (3rd or 4th century CE) specified various methods, including boiling and heating under the sun . The text also recommends filtering water through sand and coarse gravel. [ 1 ] Images in Egyptian tombs, dating from the 15th to 13th century BCE, depict the use of various water treatment devices. [ 2 ]
Hippocrates conducted his own experiments in water purification. [ 1 ] His theory of the four humors of the body led him to believe that the maintenance of good health required that the four humors be kept in balance. He recommended that feverish patients immerse themselves in a bath of cool water, which would help realign the temperature and harmony of the four humors. Hippocrates believed that water had to be clean and pure. Rainwater was the best water, but had to be boiled and strained before drinking to get rid of the "bad smell" and to avoid hoarseness of the voice. [ 3 ] [ 4 ] He designed a crude water filter to “purify” the water he used for his patients. Later known as the “Hippocratic sleeve,” this filter was a cloth bag through which water could be poured after being boiled. [ 1 ] [ 3 ]
Various methods for masking bad water were used: Diophanes of Nicaea of the first century BC advised putting macerated laurel into rainwater; Paxamus proposed that bruised coral or pounded barley, in a bag, be immersed in bad-tasting water; and the eighth-century Arabian alchemist Geber described various stills for purifying water that used wick siphons to transfer water from one vessel to another. [ 2 ]
The Classic Maya at Palenque made household water filters using locally abundant limestone carved into a porous cylinder, made so as to work in a manner strikingly similar to Modern ceramic water filters . [ 5 ] [ 6 ]
Sir Francis Bacon, in his famous compilation A Natural History in Ten Centuries (1627; Baker & Taras, 1981), discussed desalination and began the first scientific experimentation into water filtration. He believed that if seawater was allowed to percolate through sand, it would be purified of salt, the sand particles obstructing the passage of the salt in the water. Although his hypothesis was proven incorrect, it marked the beginning of a new interest in the field.
An experiment in sand filtration was illustrated by the Italian physician Lucas Antonius Portius, who wrote about the multiple sand filtration method in his work Soldier's Vade Mecum, illustrating the experiment with three pairs of sand filters.
The fathers of microscopy , Antonie van Leeuwenhoek and Robert Hooke , used the newly invented microscope to observe for the first time the small material particles suspended in water, laying the groundwork for the future understanding of waterborne pathogens . [ 7 ]
The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland , John Gibb, installed an experimental filter, selling his unwanted surplus to the public. [ 8 ] [ 9 ] This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. [ 10 ] [ 11 ] This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades.
The practice of water treatment soon became mainstream, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak . Snow was sceptical of the then-dominant miasma theory that stated that diseases were caused by noxious "bad airs". Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho , [ 12 ] with the use of a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. His data convinced the local council to disable the water pump, which promptly ended the outbreak.
The Metropolis Water Act 1852 introduced the regulation of the water supply companies in London , including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. [ 13 ] This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe . The Metropolitan Commission of Sewers was formed at the same time; water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock . Automatic pressure filters, where the water is forced under pressure through the filtration system, were innovated in 1899 in England. [ 9 ]
Limited drinking water standards were first implemented in the US in 1914, but it would not be until the 1940s that federal drinking water standards were widely applied. In 1972, the Clean Water Act passed through Congress and became law, requiring industrial plants to proactively improve their waste procedures in order to limit the effect of contaminants on freshwater sources. In 1974, the Safe Drinking Water Act was adopted by all 50 U.S. states for the regulation of public water systems within their jurisdictions. [ 14 ]
By the early 1900s, water treatment experimentation had turned from the prevention of waterborne diseases to the creation of softer, less-mineralized water. Water softeners, which use sodium ions to replace water-hardening minerals in water, were first introduced into the water treatment market in 1903.
The theory of ion exchange involves replacing undesirable or potentially harmful ions with more desirable or harmless ones. This is implemented in domestic water treatment system as water softeners. These not only remove calcium ions, but also lead and other heavy metals from water. [ 15 ] | https://en.wikipedia.org/wiki/History_of_water_filters |
Web syndication technologies were preceded by metadata standards such as the Meta Content Framework (MCF) and the Resource Description Framework (RDF), as well as by ' push ' specifications such as Channel Definition Format (CDF). Early web syndication standards included Information and Content Exchange (ICE) and RSS . More recent specifications include Atom and GData .
Web syndication specifications were preceded by several formats in push and metadata technologies, few of which achieved widespread popularity, as many, such as Backweb and Pointcast , were intended to work only with a single service. [ 1 ]
Between 1995 and 1997, Ramanathan V. Guha and others at Apple Computer's Advanced Technology Group developed the Meta Content Framework (MCF). [ 2 ] MCF was a specification for structuring metadata information about web sites and other data, implemented in HotSauce , a 3D flythrough visualizer for the web. When the research project was discontinued in 1997, Guha left Apple for Netscape .
Guha and the XML co-creator Tim Bray extended MCF into an XML application [ 3 ] that Netscape submitted to the World Wide Web Consortium (W3C) as a proposed web standard in June 1997. [ 4 ] This submission contributed towards the emergence of the Resource Description Framework (RDF). [ 5 ] [ 6 ] [ 7 ] [ 8 ]
In March 1997, Microsoft submitted a detailed specification for the 'push' technology Channel Definition Format (CDF) to the W3C. [ 9 ] This format was designed for the Active Channel feature of Internet Explorer 4.0 . CDF never became popular, perhaps because of the extensive resources it required at a time when people were mostly on dial-up. Backweb and Pointcast were geared towards news, much like a personal application programming interface (API) feed. Backweb later morphed into providing software updates, a precursor to the push update features used by various companies now.
In September 1997, Netscape previewed a new, competing technology named "Aurora," based on RDF, [ 10 ] a metadata model whose first public working draft would be posted the next month [ 2 ] by a W3C working group that included representatives of many companies, including R.V. Guha of Netscape. [ 5 ]
In December 1997, Dave Winer designed his own XML format for use on his Scripting News weblog. [ 11 ]
The first standard created specifically for web syndication was Information and Content Exchange (ICE), [ 12 ] which was proposed by Firefly Networks and Vignette in January 1998. [ 13 ] The ICE Authoring Group included Microsoft , Adobe , Sun , CNET , National Semiconductor , Tribune Media Services , Ziff Davis and Reuters , amongst others, [ 14 ] and was limited to thirteen companies. The ICE advisory council included nearly a hundred members. [ 12 ]
ICE was submitted to the World Wide Web Consortium standards body on 26 October 1998, [ 15 ] and showcased in a press event the day after. [ 16 ] The standard failed to benefit from the open-source implementation that W3C XML specifications often received. [ 17 ]
RDF Site Summary, the first web syndication format to be called "RSS", was offered by Netscape in March 1999 for use on the My Netscape portal. This version became known as RSS 0.9. [ 18 ]
In July 1999, responding to comments and suggestions, Dan Libby produced a prototype tentatively named RSS 0.91 [ 19 ] (RSS standing for Rich Site Summary at that time), that simplified the format and incorporated parts of Winer's scripting news format. This they considered an interim measure, with Libby suggesting an RSS 1.0-like format through the so-called Futures Document. [ 20 ]
In April 2001, in the midst of AOL's acquisition and subsequent restructuring of Netscape properties, a re-design of the My Netscape portal removed RSS/XML support. The RSS 0.91 DTD was removed during this re-design, but in response to feedback, Dan Libby was able to restore the DTD, but not the RSS validator previously in place. In response to comments within the RSS community at the time, Lars Marius Garshol , to whom authorship of the original 0.9 DTD is sometimes attributed, commented, "What I don't understand is all this fuss over Netscape removing the DTD. A well-designed RSS tool, whether it validates or not, would not use the DTD at Netscape's site in any case. There are several mechanisms which can be used to control the dereferencing of references from XML documents to their DTDs. These should be used. If not the result will be as described in the article." [ 21 ]
Effectively, this left the format without an owner, just as it was becoming widely used.
A working group and mailing list , RSS-DEV , was set up by various users and XML notables to continue its development. At the same time, Winer unilaterally posted a modified version of the RSS 0.91 specification to the Userland website, since it was already in use in their products. He claimed the RSS 0.91 specification was the property of his company, UserLand Software . [ 22 ]
Since neither side had any official claim on the name or the format, arguments raged whenever either side claimed RSS as its own, creating what became known as the RSS fork.
The RSS-DEV group went on to produce RSS 1.0 in December 2000. [ 23 ] Like RSS 0.9 (but not 0.91) this was based on the RDF specifications, but was more modular, with many of the terms coming from standard metadata vocabularies such as Dublin Core .
Nineteen days later, Winer released RSS 0.92 on his own, [ 24 ] a minor and supposedly compatible set of changes to RSS 0.91 based on the same proposal. In April 2001, he published a draft of RSS 0.93 which was almost identical to 0.92. [ 25 ] A draft RSS 0.94 surfaced in August, reverting the changes made in 0.93 and adding a type attribute to the description element.
In September 2002, Winer released a final successor to RSS 0.92, known as RSS 2.0 and emphasizing "Really Simple Syndication" as the meaning of the three-letter abbreviation. The RSS 2.0 spec removed the type attribute added in RSS 0.94 and allowed people to add extension elements using XML namespaces . Several versions of RSS 2.0 were released, but the version number of the document model was not changed.
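A minimal sketch of the extension mechanism (standard-library Python; the Dublin Core dc:creator element is chosen purely as an illustration): RSS 2.0 core elements stay unprefixed, while extension elements are addressed by their namespace URI.

```python
import xml.etree.ElementTree as ET

FEED = """<?xml version="1.0"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>Example feed</title>
    <link>http://example.org/</link>
    <description>RSS 2.0 with a namespaced extension element.</description>
    <item>
      <title>First post</title>
      <dc:creator>A. Author</dc:creator>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
item = root.find("channel/item")
# Extension elements are looked up by their full namespace URI,
# so unknown extensions never collide with the core vocabulary.
creator = item.findtext("{http://purl.org/dc/elements/1.1/}creator")
print(creator)  # -> A. Author
```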
In November 2002, The New York Times began offering its readers the ability to subscribe to RSS news feeds related to various topics. In January 2003, Winer called the New York Times' adoption of RSS the "tipping point" that drove the RSS format to become a de facto standard .
In July 2003, Winer and Userland Software assigned ownership of the RSS 2.0 specification to his then workplace, Harvard's Berkman Center for the Internet & Society . [ 26 ]
In 2003, the primary method of web content syndication was the RSS family of formats. Developers who wished to overcome the limitations of these formats were unable to make changes directly to RSS 2.0 because the specification was copyrighted by Harvard University and "frozen," stating that "no significant changes can be made and it is intended that future work be done under a different name". [1]
In June 2003, Sam Ruby set up a wiki to discuss what makes "a well-formed log entry." [ 27 ] This posting acted as a rallying point. [2] People quickly started using the wiki to discuss a new syndication format to address the shortcomings of RSS. It also became clear that the new format could also form the basis of a more robust replacement for blog editing protocols such as Blogger API and LiveJournal XML-RPC Client/Server Protocol.
The project aimed to develop a web syndication format that was: 100% vendor neutral, implemented by everybody, freely extensible by anybody, and cleanly and thoroughly specified. [3]
In short order, a project road map was built. The effort quickly attracted more than 150 supporters, including Dave Sifry of Technorati , Mena Trott of Six Apart , Brad Fitzpatrick of LiveJournal, Jason Shellen of Blogger, Jeremy Zawodny of Yahoo! , Timothy Appnel of the O'Reilly Network , Glenn Otis Brown of Creative Commons and Lawrence Lessig . Other notables supporting Atom included Mark Pilgrim , Tim Bray , Aaron Swartz , Joi Ito , and Jack Park . [4] Dave Winer, the key figure behind RSS 2.0, also gave tentative support to the Atom endeavor (which at the time was called Echo). [5]
After this point, discussion became chaotic, due to the lack of a decision-making process. The project also lacked a name, tentatively using "Pie," "Echo," and "Necho" before settling on Atom . After releasing a project snapshot known as Atom 0.2 in early July 2003, discussion was shifted off the wiki.
The discussion then moved to a newly established mailing list. The next and final snapshot during this phase was Atom 0.3 , released in December 2003. This version gained widespread adoption in syndication tools, and in particular it was added to several Google -related services, such as Blogger, Google News , and Gmail . Google's Data APIs (GData) were later based on Atom 1.0 and RSS 2.0.
In 2004, discussions began about moving the Atom project to a standards body such as the W3C or the Internet Engineering Task Force (IETF). The group eventually chose the IETF, and the Atompub working group was formally set up in June 2004, finally giving the project a charter and process. The Atompub working group was co-chaired by Tim Bray (the co-editor of the XML specification) and Paul Hoffman. Initial development focused on the syndication format.
The final draft of Atom 1.0 was published in July 2005 and was accepted by the IETF as a "proposed standard" in August 2005. Work then continued on the further development of the publishing protocol and various extensions to the syndication format.
The Atom Syndication Format was issued as a proposed "internet official protocol standard" in IETF RFC 4287 in December 2005 with the help of the co-editors Mark Nottingham and Robert Sayre .
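A minimal well-formed Atom 1.0 document, sketched from RFC 4287's core requirements (feed and entry each carry id, title, and updated; the identifiers and dates below are illustrative), parsed with standard-library Python:

```python
import xml.etree.ElementTree as ET

ATOM = """<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <id>urn:uuid:60a76c80-d399-11d9-b93C-0003939e0af6</id>
  <updated>2005-12-01T00:00:00Z</updated>
  <author><name>A. Author</name></author>
  <entry>
    <title>Atom 1.0 ships</title>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2005-12-01T00:00:00Z</updated>
  </entry>
</feed>"""

# Unlike RSS 2.0, every Atom element lives in an explicit namespace,
# so consumers must qualify their queries with the Atom URI.
NS = {"atom": "http://www.w3.org/2005/Atom"}
feed = ET.fromstring(ATOM)
for entry in feed.findall("atom:entry", NS):
    print(entry.findtext("atom:title", namespaces=NS))  # -> Atom 1.0 ships
```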
In January 2005, Sean B. Palmer , Christopher Schmidt, and Cody Woodard produced a preliminary draft of RSS 1.1. [ 28 ] It was intended as a bugfix for 1.0, removing little-used features, simplifying the syntax and improving the specification based on the more recent RDF specifications. As of July 2005, RSS 1.1 had amounted to little more than an academic exercise.
In April 2005, Apple released Safari 2.0 with RSS feed capabilities built in. Safari could read RSS feeds, bookmark them, and search within them. Safari's RSS button was a blue rounded rectangle with "RSS" written inside in white, and the favicon displayed defaulted to a newspaper icon.
In November 2005, Microsoft proposed its Simple Sharing Extensions to RSS. [ 29 ]
In December 2005, Microsoft announced in blogs that Internet Explorer 7 [ 30 ] and Microsoft Outlook 12 (Outlook 2007) [ 31 ] would adopt the feed icon first used in Mozilla Firefox , effectively making the orange square with white radio waves the industry standard for both RSS and related formats such as Atom. In February 2006, Opera Software announced that it too would add the orange square in its Opera 9 release. [ 32 ] [ 33 ]
In January 2006, Rogers Cadenhead relaunched the RSS Advisory Board in order to move the RSS format forward. [ 34 ]
In January 2007, as part of a revitalization of Netscape by AOL, the FQDN for my.netscape.com was redirected to a holding page in preparation for an impending relaunch, and as a result some feed readers using RSS 0.91 stopped working. [ 35 ] The DTD was subsequently restored once again.
In 2013 the Candidate Recommendation for HTML5 included explicit provision for syndication by introducing the 'article' element. [ 36 ] | https://en.wikipedia.org/wiki/History_of_web_syndication_technology |
Women have been present in science since its earliest history and have made substantial contributions. Historians with an interest in gender and science have researched the scientific endeavors and accomplishments of women, the barriers they have faced, and the strategies implemented to have their work peer-reviewed and accepted in major scientific journals and other publications. The historical, critical, and sociological study of these issues has become an academic discipline in its own right.
The involvement of women in medicine occurred in several early Western civilizations , and the study of natural philosophy in ancient Greece was open to women. Women contributed to the proto-science of alchemy in the first or second centuries CE. During the Middle Ages, religious convents were an important place of education for women , and some of these communities provided opportunities for women to contribute to scholarly research. The 11th century saw the emergence of the first universities ; women were, for the most part, excluded from university education. [ 1 ] Outside academia, botany was the science that benefitted most from the contributions of women in early modern times. [ 2 ] The attitude toward educating women in medical fields appears to have been more liberal in Italy than elsewhere. The first known woman to earn a university chair in a scientific field of studies was eighteenth-century Italian scientist Laura Bassi .
Gender roles were largely deterministic in the eighteenth century, yet women made substantial advances in science . During the nineteenth century, women were excluded from most formal scientific education, but they began to be admitted into learned societies during this period. In the later nineteenth century, the rise of the women's college provided jobs for women scientists and opportunities for education. Marie Curie paved the way for scientists to study radioactive decay and discovered the elements radium and polonium . [ 3 ] Working as a physicist and chemist , she conducted pioneering research on radioactive decay; she was the first woman to receive a Nobel Prize (in Physics) and the first person to receive a second Nobel Prize (in Chemistry). Sixty women were awarded the Nobel Prize between 1901 and 2022; twenty-four of these awards were in physics, chemistry, or physiology or medicine. [ 4 ]
In the 1970s and 1980s, many books and articles about women scientists were appearing; virtually all of the published sources ignored women of color and women outside of Europe and North America . [ 5 ] The formation of the Kovalevskaia Fund in 1985 and the Organization for Women in Science for the Developing World in 1993 gave more visibility to previously marginalized women scientists, but even today there is a dearth of information about current and historical women in science in developing countries. According to academic Ann Hibner Koblitz : [ 6 ]
Most work on women scientists has focused on the personalities and scientific subcultures of Western Europe and North America, and historians of women in science have implicitly or explicitly assumed that the observations made for those regions will hold true for the rest of the world.
Koblitz has said that these generalizations about women in science often do not hold up cross-culturally: [ 7 ]
A scientific or technical field that might be considered 'unwomanly' in one country in a given period may enjoy the participation of many women in a different historical period or in another country. An example is engineering, which in many countries is considered the exclusive domain of men, especially in usually prestigious subfields such as electrical or mechanical engineering. There are exceptions to this, however. In the former Soviet Union all subspecialties of engineering had high percentages of women, and at the Universidad Nacional de Ingeniería of Nicaragua, women made up 70% of engineering students in 1990.
The involvement of women in the field of medicine has been recorded in several early civilizations. An ancient Egyptian physician, Peseshet ( c. 2600–2500 B.C.E. ), described in an inscription as "lady overseer of the female physicians", [ 8 ] [ 9 ] is the earliest known female physician named in the history of science . [ 10 ] Agamede was cited by Homer as a healer in ancient Greece before the Trojan War (c. 1194–1184 BCE). [ 11 ] [ 12 ] [ 13 ] According to one late antique legend, Agnodice was the first female physician to practice legally in fourth century BCE Athens . [ 14 ]
The study of natural philosophy in ancient Greece was open to women. Recorded examples include Aglaonike , who predicted eclipses ; and Theano , mathematician and physician, who was a pupil (possibly also wife) of Pythagoras , and one of a school in Crotone founded by Pythagoras, which included many other women. [ 15 ] A passage in Pollux , discussing those who invented the process of coining money, mentions Pheidon and Demodike from Cyme , wife of the Phrygian king Midas and daughter of King Agamemnon of Cyme. [ 16 ] A daughter of a certain Agamemnon , king of Aeolian Cyme , married a Phrygian king called Midas. [ 17 ] This link may have facilitated the Greeks' "borrowing" of their alphabet from the Phrygians , because the Phrygian letter shapes are closest to the inscriptions from Aeolis. [ 17 ]
During the period of the Babylonian civilization, around 1200 BCE, two perfume-makers named Tapputi-Belatekallim and -ninu (the first half of her name is unknown) were able to obtain essences from plants by using extraction and distillation procedures. [ 18 ] During the Egyptian dynasties , women were involved in applied chemistry, such as the making of beer and the preparation of medicinal compounds. [ 19 ] Women are also recorded as having made major contributions to alchemy . [ 19 ] Many of these women lived in Alexandria around the 1st or 2nd centuries CE, where the gnostic tradition led to female contributions being valued. The most famous of the women alchemists, Mary the Jewess , is credited with inventing several chemical instruments, including the double boiler ( bain-marie ), and with improving or creating the distillation equipment of the time, such as the kerotakis (a reflux apparatus) and the tribikos (a three-armed still). [ 19 ] [ 20 ]
Hypatia of Alexandria (c. 350–415 CE), daughter of Theon of Alexandria , was a philosopher, mathematician, and astronomer. [ 21 ] [ 22 ] She is the earliest female mathematician about whom detailed information has survived. [ 22 ] Hypatia is credited with writing several important commentaries on geometry , algebra and astronomy . [ 15 ] [ 23 ] Hypatia was the head of a philosophical school and taught many students. [ 24 ] In 415 CE, she became entangled in a political dispute between Cyril , the bishop of Alexandria, and Orestes , the Roman governor, which resulted in a mob of Cyril's supporters stripping her, dismembering her, and burning the pieces of her body. [ 24 ]
The early parts of the European Middle Ages , also known as the Dark Ages , were marked by the decline of the Roman Empire . The Latin West was left with great difficulties that affected the continent's intellectual production dramatically. Although nature was still seen as a system that could be comprehended in the light of reason, there was little innovative scientific inquiry. [ 25 ] The Arabic world deserves credit for preserving scientific advancements. Arabic scholars produced original scholarly work and generated copies of manuscripts from Classical periods . [ 26 ] During this period, Christianity underwent a period of resurgence, and Western civilization was bolstered as a result. This phenomenon was, in part, due to monasteries and nunneries that nurtured the skills of reading and writing, and the monks and nuns who collected and copied important writings produced by scholars of the past. [ 26 ] [ citation needed ]
As mentioned before, convents were an important place of education for women during this period, for the monasteries and nunneries encouraged the skills of reading and writing, and some of these communities provided opportunities for women to contribute to scholarly research. [ 26 ] An example is the German abbess Hildegard of Bingen (1098–1179 A.D.), a famous philosopher and botanist, whose prolific writings include treatments of various scientific subjects, including medicine, botany and natural history (c. 1151–58). [ 27 ] Another famous German abbess, Hroswitha of Gandersheim (935–1000 A.D.), [ 26 ] also encouraged women's intellectual pursuits. However, as nunneries grew in number and power, the all-male clerical hierarchy reacted with hostility, and the resulting backlash against women's advancement led many religious orders to close themselves to women and disband their nunneries, largely excluding women from the opportunity to learn to read and write. With that, the world of science became closed off to women, limiting their influence in science. [ 26 ]
Entering the 11th century, the first universities emerged. Women were, for the most part, excluded from university education. [ 1 ] However, there were some exceptions. The Italian University of Bologna allowed women to attend lectures from its inception, in 1088. [ 28 ]
The attitude to educating women in medical fields in Italy appears to have been more liberal than in other places. The physician, Trotula di Ruggiero , is supposed to have held a chair at the Medical School of Salerno in the 11th century, where she taught many noble Italian women, a group sometimes referred to as the " ladies of Salerno ". [ 20 ] Several influential texts on women's medicine, dealing with obstetrics and gynecology , among other topics, are also often attributed to Trotula.
Dorotea Bucca was another distinguished Italian physician. She held a chair of philosophy and medicine at the University of Bologna for over forty years from 1390. [ 28 ] [ 29 ] [ self-published source? ] [ 30 ] [ 31 ] Other Italian women whose contributions in medicine have been recorded include Abella , Jacobina Félicie , Alessandra Giliani , Rebecca de Guarna , Margarita , Mercuriade (14th century), Constance Calenda , Clarice di Durisio (15th century), Constanza , Maria Incarnata and Thomasia de Mattio . [ 29 ] [ 32 ]
Despite the success of some women, cultural biases affecting their education and participation in science were prominent in the Middle Ages. For example, Saint Thomas Aquinas , a Christian scholar, wrote, referring to women, "She is mentally incapable of holding a position of authority." [ 1 ]
Margaret Cavendish , a seventeenth-century aristocrat, took part in some of the most important scientific debates of that time. She was, however, not inducted into the English Royal Society , although she was once allowed to attend a meeting. She wrote a number of works on scientific matters, including Observations upon Experimental Philosophy (1666) and Grounds of Natural Philosophy . In these works she was especially critical of the growing belief that humans, through science, were the masters of nature. The 1666 work attempted to heighten female interest in science. The observations provided a critique of the experimental science of Bacon and criticized microscopes as imperfect machines. [ 33 ]
Isabella Cortese , an Italian alchemist, is best known for her book I secreti della signora Isabella Cortese (The Secrets of Isabella Cortese). Cortese manipulated nature to create numerous medicinal, alchemical and cosmetic "secrets", or experiments. [ 34 ] Her book belongs to the larger genre of books of secrets that became extremely popular among the elite during the 16th century. Despite the low percentage of literate women in Cortese's era, the majority of the alchemical and cosmetic "secrets" in such books were geared towards women, covering topics including, but not limited to, pregnancy, fertility, and childbirth. [ 34 ]
Sophia Brahe , sister of Tycho Brahe, was a Danish horticulturalist. Brahe was trained by her older brother in chemistry and horticulture but taught herself astronomy by studying books in German. Sophia visited her brother at Uranienborg on numerous occasions and assisted with the observations behind his De nova stella . Her observations led to the discovery of the supernova SN 1572 , which helped refute the geocentric model of the universe. [ 35 ]
Tycho wrote the poem Urania Titani about his sister Sophia and her husband Erik: Urania represented Sophia, and Titan represented Erik. Tycho used the poem to show his appreciation for his sister and all of her work.
In Germany, the tradition of female participation in craft production enabled some women to become involved in observational science, especially astronomy . Between 1650 and 1710, women were 14% of German astronomers. [ 36 ] The most famous female astronomer in Germany was Maria Winkelmann . She was educated by her father and uncle and received training in astronomy from a nearby self-taught astronomer. Her chance to be a practising astronomer came when she married Gottfried Kirch , Prussia's foremost astronomer. She became his assistant at the astronomical observatory operated in Berlin by the Academy of Science . She made original contributions, including the discovery of a comet. When her husband died, Winkelmann applied for a position as assistant astronomer at the Berlin Academy – for which she had experience. As a woman – with no university degree – she was denied the post. Members of the Berlin Academy feared that they would establish a bad example by hiring a woman. "Mouths would gape", they said. [ 37 ]
Winkelmann's problems with the Berlin Academy reflect the obstacles women faced in being accepted in scientific work, which was considered to be chiefly for men. No woman was invited to either the Royal Society of London or the French Academy of Sciences until the twentieth century. Most people in the seventeenth century viewed a life devoted to any kind of scholarship as being at odds with the domestic duties women were expected to perform.
A founder of modern botany and zoology , the German Maria Sibylla Merian (1647–1717), spent her life investigating nature. When she was thirteen, Sibylla began growing caterpillars and studying their metamorphosis into butterflies. She kept a "Study Book" which recorded her investigations into natural philosophy. In her first publication, The New Book of Flowers , she used imagery to catalog the lives of plants and insects. After her husband died, and after a brief stint living in the Labadist community at Wieuwerd , she and her daughter journeyed to Paramaribo for two years to observe insects, birds, reptiles, and amphibians. [ 38 ] She returned to Amsterdam and published The Metamorphosis of the Insects of Suriname , which "revealed to Europeans for the first time the astonishing diversity of the rain forest." [ 39 ] [ 40 ] A botanist and entomologist, she was known for her artistic illustrations of plants and insects; uncommon for that era, she traveled to South America and Surinam, where, assisted by her daughters, she illustrated the plant and animal life of those regions. [ 41 ]
Overall, the Scientific Revolution did little to change people's ideas about the nature of women – more specifically – their capacity to contribute to science just as men do. According to Jackson Spielvogel , 'Male scientists used the new science to spread the view that women were by nature inferior and subordinate to men and suited to play a domestic role as nurturing mothers. The widespread distribution of books ensured the continuation of these ideas'. [ 42 ]
Although women excelled in many scientific areas during the eighteenth century, they were discouraged from learning about plant reproduction. Carl Linnaeus ' system of plant classification based on sexual characteristics drew attention to botanical licentiousness, and people feared that women would learn immoral lessons from nature's example. Women were often depicted as both innately emotional and incapable of objective reasoning, or as natural mothers reproducing a natural, moral society. [ 43 ]
The eighteenth century was characterized by three divergent views towards women: that women were mentally and socially inferior to men, that they were equal but different, and that women were potentially equal in both mental ability and contribution to society. [ 44 ] While individuals such as Jean-Jacques Rousseau believed women's roles were confined to motherhood and service to their male partners, the Enlightenment was a period in which women experienced expanded roles in the sciences. [ 45 ]
The rise of salon culture in Europe brought philosophers and their conversation to an intimate setting where men and women met to discuss contemporary political, social, and scientific topics. [ 46 ] While Jean-Jacques Rousseau attacked women-dominated salons as producing 'effeminate men' that stifled serious discourse, salons were characterized in this era by the mixing of the sexes. [ 47 ]
Lady Mary Wortley Montagu defied convention by introducing smallpox inoculation through variolation to Western medicine after witnessing it during her travels in the Ottoman Empire . [ 48 ] [ 49 ] In 1718 Wortley Montagu had her son inoculated, [ 49 ] and when a smallpox epidemic struck England in 1721, she had her daughter inoculated. [ 50 ] This was the first such operation done in Britain. [ 49 ] She persuaded Caroline of Ansbach to test the treatment on prisoners. [ 50 ] Princess Caroline subsequently inoculated her two daughters in 1722. [ 49 ] Under a pseudonym, Wortley Montagu published an article describing and advocating in favor of inoculation in September 1722. [ 51 ]
After publicly defending forty-nine theses [ 52 ] in the Palazzo Pubblico, Laura Bassi was awarded a doctorate of philosophy in 1732 at the University of Bologna . [ 53 ] Bassi thus became the second woman in the world to earn a philosophy doctorate, after Elena Cornaro Piscopia in 1678, 54 years prior. She subsequently defended twelve additional theses at the Archiginnasio , the main building of the University of Bologna, which allowed her to petition for a teaching position at the university. [ 53 ] In 1732 the university granted Bassi a professorship in philosophy, making her a member of the Academy of the Sciences and the first woman to earn a professorship in physics at a university in Europe. [ 53 ] But the university held that women were to lead a private life, and from 1746 to 1777 she gave only one formal dissertation per year, ranging in topic from the problem of gravity to electricity . [ 52 ] Because she could not lecture publicly at the university regularly, she began conducting private lessons and experiments from home in 1749. [ 52 ] Owing to her increasing responsibilities and public appearances on behalf of the university, Bassi was able to petition for regular pay increases, which she used to pay for her advanced equipment. Bassi earned the highest salary paid by the University of Bologna, 1,200 lire. [ 54 ] In 1776, at the age of 65, she was appointed to the chair in experimental physics by the Bologna Institute of Sciences, with her husband as a teaching assistant. [ 52 ]
According to Britannica, Maria Gaetana Agnesi is "considered to be the first woman in the Western world to have achieved a reputation in mathematics." [ 55 ] She is credited as the first woman to write a mathematics handbook, the Instituzioni analitiche ad uso della gioventù italiana (Analytical Institutions for the Use of Italian Youth). Published in 1748, it "was regarded as the best introduction extant to the works of Euler ." [ 56 ] [ 57 ] The goal of this work was, according to Agnesi herself, to give a systematic illustration of the different results and theorems of infinitesimal calculus . [ 58 ] In 1750 she became the second woman to be granted a professorship at a European university, appointed to the University of Bologna, though she never taught there. [ 56 ] [ 59 ]
The German Dorothea Erxleben was instructed in medicine by her father from an early age [ 60 ] and Bassi's university professorship inspired Erxleben to fight for her right to practise medicine . In 1742 she published a tract arguing that women should be allowed to attend university. [ 61 ] After being admitted to study by a dispensation of Frederick the Great , [ 60 ] Erxleben received her M.D. from the University of Halle in 1754. [ 61 ] She went on to analyse the obstacles preventing women from studying, among them housekeeping and children. [ 60 ] She became the first female medical doctor in Germany . [ 62 ]
In 1741–42 Charlotta Frölich became the first woman to be published by the Royal Swedish Academy of Sciences with three books in agricultural science. In 1748 Eva Ekeblad became the first woman inducted into that academy. [ 63 ] In 1746 Ekeblad had written to the academy about her discoveries of how to make flour and alcohol out of potatoes . [ 64 ] [ 65 ] Potatoes had been introduced into Sweden in 1658 but had been cultivated only in the greenhouses of the aristocracy. Ekeblad's work turned potatoes into a staple food in Sweden, and increased the supply of wheat , rye and barley available for making bread, since potatoes could be used instead to make alcohol. This greatly improved the country's eating habits and reduced the frequency of famines. [ 65 ] Ekeblad also discovered a method of bleaching cotton textile and yarn with soap in 1751, [ 64 ] and of replacing the dangerous ingredients in cosmetics of the time by using potato flour in 1752. [ 65 ]
Émilie du Châtelet , a close friend of Voltaire , was the first scientist to appreciate the significance of kinetic energy , as opposed to momentum . She repeated and described the importance of an experiment originally devised by Willem 's Gravesande showing that the impact of falling objects is proportional not to their velocity but to the square of their velocity. This understanding is considered to have made a profound contribution to Newtonian mechanics . [ 66 ] In 1749 she completed the French translation of Newton's Philosophiae Naturalis Principia Mathematica (the Principia ), including her derivation of the notion of conservation of energy from its principles of mechanics. Published ten years after her death, her translation and commentary of the Principia contributed to the completion of the scientific revolution in France and to its acceptance in Europe. [ 67 ]
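In modern notation (not du Châtelet's own), the distinction she championed can be stated compactly:

```latex
% Momentum versus kinetic energy, in modern notation (not du Chatelet's
% own; the historical "vis viva" was m v^2, a constant factor apart).
% In 's Gravesande's experiment, a ball dropped from height h strikes
% the clay at v = \sqrt{2gh}, so the dent depth, proportional to v^2,
% is proportional to h -- tracking the energy, not the momentum.
\begin{align}
  p   &= m v               && \text{(momentum)} \\
  E_k &= \tfrac{1}{2} m v^2 && \text{(kinetic energy)}
\end{align}
```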
Marie-Anne Pierrette Paulze and her husband Antoine Lavoisier rebuilt the field of chemistry , which had its roots in alchemy and at the time was a convoluted science dominated by Georg Stahl 's theory of phlogiston . Paulze accompanied Lavoisier in his lab, making entries into lab notebooks and sketching diagrams of his experimental designs. The training she had received allowed her to accurately and precisely draw experimental apparatuses, which ultimately helped many of Lavoisier's contemporaries to understand his methods and results. Paulze translated various works about phlogiston into French. One of her most important translations was that of Richard Kirwan 's Essay on Phlogiston and the Constitution of Acids , which she both translated and critiqued, adding footnotes as she went along and pointing out errors in the chemistry made throughout the paper. [ 68 ] Paulze was instrumental in the 1789 publication of Lavoisier's Elementary Treatise on Chemistry , which presented a unified view of chemistry as a field. This work proved pivotal in the progression of chemistry, as it presented the idea of conservation of mass as well as a list of elements and a new system for chemical nomenclature . She also kept strict records of the procedures followed, lending validity to the findings Lavoisier published.
The astronomer Caroline Herschel was born in Hanover but moved to England, where she acted as an assistant to her brother, William Herschel . Throughout her writings, she repeatedly made it clear that she desired to earn an independent wage and be able to support herself. When the crown began paying her for her assistance to her brother in 1787, she became the first woman to receive a salary for services to science, at a time when even men rarely received wages for scientific enterprises. [ 69 ] During 1786–97 she discovered eight comets , the first on 1 August 1786. She had unquestioned priority as discoverer of five of the comets [ 69 ] [ 70 ] and rediscovered Comet Encke in 1795. [ 71 ] Five of her comets were published in Philosophical Transactions ; a packet of paper bearing the superscription "This is what I call the Bills and Receipts of my Comets" contains some data connected with the discovery of each of these objects. William was summoned to Windsor Castle to demonstrate Caroline's comet to the royal family . [ 72 ] Caroline Herschel is often credited as the first woman to discover a comet; however, Maria Kirch discovered a comet in the early 1700s but is often overlooked because, at the time, the discovery was attributed to her husband, Gottfried Kirch . [ 73 ]
Science remained a largely amateur profession during the early part of the nineteenth century. Botany was considered a popular and fashionable activity, and one particularly suitable to women. In the later eighteenth and early nineteenth centuries, it was one of the most accessible areas of science for women in both England and North America. [ 74 ] [ 75 ] [ 76 ]
However, as the nineteenth century progressed, botany and other sciences became increasingly professionalized, and women were increasingly excluded. Women's contributions were limited by their exclusion from most formal scientific education, but began to be recognized through their occasional admittance into learned societies during this period. [ 76 ] [ 74 ]
Scottish scientist Mary Fairfax Somerville carried out experiments in magnetism , presenting a paper entitled 'The Magnetic Properties of the Violet Rays of the Solar Spectrum' to the Royal Society in 1826, the second woman to do so. She also wrote several mathematical , astronomical , physical and geographical texts, and was a strong advocate for women's education . In 1835, she and Caroline Herschel were the first two women elected as Honorary Members of the Royal Astronomical Society . [ 77 ]
English mathematician Ada, Lady Lovelace , a pupil of Somerville, corresponded with Charles Babbage about applications for his analytical engine . In her notes (1842–43) appended to her translation of Luigi Menabrea 's article on the engine, she foresaw wide applications for it as a general-purpose computer, including composing music. She has been credited as writing the first computer program, though this has been disputed. [ 78 ]
In Germany, institutes for "higher" education of women ( Höhere Mädchenschule , in some regions called Lyzeum ) were founded at the beginning of the century. [ 79 ] The Deaconess Institute at Kaiserswerth was established in 1836 to instruct women in nursing . Elizabeth Fry visited the institute in 1840 and was inspired to found the London Institute of Nursing, and Florence Nightingale studied there in 1851. [ 80 ]
In the US, Maria Mitchell made her name by discovering a comet in 1847, but also contributed calculations to the Nautical Almanac produced by the United States Naval Observatory . She became the first woman member of the American Academy of Arts and Sciences in 1848 and of the American Association for the Advancement of Science in 1850.
Other notable female scientists during this period include: [ 15 ]
The latter part of the 19th century saw a rise in educational opportunities for women. Schools aiming to provide education for girls similar to that afforded to boys were founded in the UK, including the North London Collegiate School (1850), Cheltenham Ladies' College (1853) and the Girls' Public Day School Trust schools (from 1872). The first UK women's university college, Girton , was founded in 1869, and others soon followed: Newnham (1871) and Somerville (1879).
The Crimean War (1854–1856) contributed to establishing nursing as a profession, making Florence Nightingale a household name. A public subscription allowed Nightingale to establish a school of nursing in London in 1860, and schools following her principles were established throughout the UK. [ 80 ] Nightingale was also a pioneer in public health and a statistician.
James Barry became the first British woman to gain a medical qualification in 1812, passing as a man. Elizabeth Garrett Anderson was the first openly female Briton to qualify medically, in 1865. With Sophia Jex-Blake , American Elizabeth Blackwell and others, Garrett Anderson founded the first UK medical school to train women, the London School of Medicine for Women , in 1874.
Annie Scott Dill Maunder was a pioneer in astronomical photography , especially of sunspots . A mathematics graduate of Girton College , Cambridge, she was first hired (in 1890) as an assistant to Edward Walter Maunder , discoverer of the Maunder Minimum and head of the solar department at Greenwich Observatory . They worked together to observe sunspots and to refine the techniques of solar photography. They married in 1895. Annie's mathematical skills made it possible to analyse the years of sunspot data that Maunder had been collecting at Greenwich. She also designed a small, portable wide-angle camera with a 1.5-inch-diameter (38 mm) lens. In 1898, the Maunders traveled to India, where Annie took the first photographs of the Sun's corona during a solar eclipse. By analysing the Cambridge records for both sunspots and geomagnetic storms , they were able to show that specific regions of the Sun's surface were the source of geomagnetic storms and that the Sun did not radiate its energy uniformly into space, as William Thomson, 1st Baron Kelvin had declared. [ 81 ]
In Prussia, women could attend university from 1894 and were allowed to receive a PhD. In 1908 all remaining restrictions on women's university study were lifted.
Alphonse Rebière published a book in 1897, in France, entitled Les Femmes dans la science (Women in Science) which listed the contributions and publications of women in science. [ 82 ]
Other notable female scientists during this period include: [ 15 ] [ 83 ]
In the second half of the 19th century, a large proportion of the most successful women in the STEM fields were Russians. Although many women received advanced training in medicine in the 1870s, [ 84 ] in other fields women were barred and had to go to western Europe – mainly Switzerland – in order to pursue scientific studies. In her book about these "women of the [eighteen] sixties" (шестидесятницы), as they were called, Ann Hibner Koblitz writes: [ 85 ] : 11
To a large extent, women's higher education in continental Europe was pioneered by this first generation of Russian women. They were the first students in Zürich, Heidelberg, Leipzig, and elsewhere. Theirs were the first doctorates in medicine, chemistry, mathematics, and biology.
Among the successful scientists were Nadezhda Suslova (1843–1918), the first woman in the world to obtain a medical doctorate fully equivalent to men's degrees; Maria Bokova-Sechenova (1839–1929), a pioneer of women's medical education who received two doctoral degrees, one in medicine in Zürich and one in physiology in Vienna; Iulia Lermontova (1846–1919), the first woman in the world to receive a doctoral degree in chemistry; the marine biologist Sofia Pereiaslavtseva (1849–1903), director of the Sevastopol Biological Station and winner of the Kessler Prize of the Russian Society of Natural Scientists; and the mathematician Sofia Kovalevskaia (1850–1891), the first woman in 19th century Europe to receive a doctorate in mathematics and the first to become a university professor in any field. [ 85 ]
In the later nineteenth century the rise of the women's college provided jobs for women scientists, and opportunities for education.
Women's colleges produced a disproportionate number of women who went on for PhDs in science.
Many coeducational colleges and universities also opened or started to admit women during this period; such institutions enrolled just over 3,000 women in 1875, and by 1900 the number had grown to almost 20,000. [ 83 ]
An example is Elizabeth Blackwell , who became the first certified female doctor in the US when she graduated from Geneva Medical College in 1849. [ 86 ] With her sister, Emily Blackwell , and Marie Zakrzewska , Blackwell founded the New York Infirmary for Women and Children in 1857 and the first women's medical college in 1868, providing both training and clinical experience for women doctors. She also published several books on medical education for women.
In 1876, Elizabeth Bragg became the first woman to graduate with a civil engineering degree in the United States, from the University of California, Berkeley . [ 87 ]
Marie Skłodowska-Curie , the first woman to win a Nobel prize, in 1903 (physics), went on to become a double Nobel prize winner in 1911, both awards for her work on radiation . She was the first person to win two Nobel prizes, a feat accomplished by only three others since then. She was also the first woman to teach at the Sorbonne in Paris. [ 88 ]
Alice Perry is understood to be the first woman to graduate with a degree in civil engineering in the then United Kingdom of Great Britain and Ireland , in 1906 at Queen's College, Galway, Ireland . [ 89 ]
Lise Meitner played a major role in the discovery of nuclear fission. As head of the physics section at the Kaiser Wilhelm Institute in Berlin, she collaborated closely with the head of chemistry, Otto Hahn , on atomic physics until forced to flee Berlin in 1938. In 1939, in collaboration with her nephew Otto Frisch , Meitner derived the theoretical explanation for an experiment performed by Hahn and Fritz Strassman in Berlin, thereby demonstrating the occurrence of nuclear fission . The possibility that Fermi's bombardment of uranium with neutrons in 1934 had instead produced fission by breaking up the nucleus into lighter elements had first been raised in print in 1934 by the chemist Ida Noddack (co-discoverer of the element rhenium ), but the suggestion was ignored at the time, as no group made a concerted effort to find any of these light radioactive fission products.
Maria Montessori was the first woman in Southern Europe to qualify as a physician. [ 90 ] She developed an interest in the diseases of children and believed in the necessity of educating children regarded as ineducable. For such children she argued for developing teacher training along Froebelian lines, and she formulated the principle that would also inform her general educational program : first the education of the senses, then the education of the intellect. Montessori introduced a teaching program that enabled such children to read and write. She sought to teach skills not by having children repeatedly try them, but by developing exercises that prepared them. [ 91 ]
Emmy Noether revolutionized abstract algebra, filled in gaps in relativity, and was responsible for a critical theorem about conserved quantities in physics. The Erlangen program had attempted to identify invariants under a group of transformations. On 16 July 1918, before a scientific organization in Göttingen , Felix Klein read a paper written by Emmy Noether , because she was not allowed to present it herself. In particular, in what is referred to in physics as Noether's theorem , this paper identified the conditions under which the Poincaré group of transformations (now called a gauge group ) for general relativity defines conservation laws . [ 92 ] Noether's papers made the requirements for the conservation laws precise. Among mathematicians, Noether is best known for her fundamental contributions to abstract algebra, where the adjective noetherian is nowadays commonly applied to many sorts of objects.
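A schematic modern statement of the theorem's simplest, mechanical case (not Noether's own notation or full generality):

```latex
% Noether's first theorem, point-particle form: if the Lagrangian
% L(q, \dot q, t) is invariant under the infinitesimal transformation
% q \mapsto q + \epsilon\,\delta q, then the quantity
\begin{equation}
  Q = \frac{\partial L}{\partial \dot q}\,\delta q
\end{equation}
% is conserved along solutions of the equations of motion, \dot Q = 0.
% Time-translation symmetry yields energy conservation; spatial
% translation yields momentum conservation.
```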
Mary Cartwright was a British mathematician who was the first to analyze a dynamical system with chaos. [ 93 ] Inge Lehmann , a Danish seismologist , first suggested in 1936 that inside the Earth's molten core there may be a solid inner core . [ 94 ] Women such as Margaret Fountaine continued to contribute detailed observations and illustrations in botany, entomology, and related observational fields. Joan Beauchamp Procter , an outstanding herpetologist , was the first woman Curator of Reptiles for the Zoological Society of London at London Zoo .
Florence Sabin was an American medical scientist. Sabin was the first woman faculty member at Johns Hopkins in 1902, and the first woman full-time professor there in 1917. [ 95 ] Her scientific and research experience is notable. Sabin published over 100 scientific papers and multiple books. [ 95 ]
Women moved into science in significant numbers by 1900, helped by the women's colleges and by opportunities at some of the new universities. Margaret Rossiter 's books Women Scientists in America: Struggles and Strategies to 1940 and Women Scientists in America: Before Affirmative Action 1940–1972 provide an overview of this period, stressing the opportunities women found in separate women's work in science. [ 96 ] [ 97 ]
In 1892, Ellen Swallow Richards called for the "christening of a new science" – " oekology " (ecology) in a Boston lecture. This new science included the study of "consumer nutrition" and environmental education. This interdisciplinary branch of science was later specialized into what is currently known as ecology, while the consumer nutrition focus split off and was eventually relabeled as home economics , [ 98 ] [ 99 ] which provided another avenue for women to study science. Richards helped to form the American Home Economics Association , which published a journal, the Journal of Home Economics , and hosted conferences. Home economics departments were formed at many colleges, especially at land grant institutions. In her work at MIT, Ellen Richards also introduced the first biology course in its history as well as the focus area of sanitary engineering.
Women also found opportunities in botany and embryology . In psychology , women earned doctorates but were encouraged to specialize in educational and child psychology and to take jobs in clinical settings, such as hospitals and social welfare agencies.
In 1901, Annie Jump Cannon first noticed that it was a star's temperature that was the principal distinguishing feature among different spectra. [ dubious – discuss ] This led to re-ordering of the ABC types by temperature instead of hydrogen absorption-line strength. Due to Cannon's work, most of the then-existing classes of stars were thrown out as redundant. Afterward, astronomy was left with the seven primary classes recognized today, in order: O, B, A, F, G, K, M, [ 100 ] a scheme that has since been extended.
Henrietta Swan Leavitt first published her study of variable stars in 1908; her discovery that brighter Cepheid variables pulsate with longer periods became known as the "period-luminosity relationship". [ 102 ] Our picture of the universe was changed forever, largely because of Leavitt's discovery.
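In schematic modern form (the specific calibration constants came after Leavitt's work; she established the linear trend), the relation says that a Cepheid's mean absolute magnitude varies linearly with the logarithm of its pulsation period:

```latex
% Period--luminosity ("Leavitt") law, schematic modern form:
% M is the mean absolute magnitude, P the pulsation period in days,
% and a < 0 and b are empirically calibrated constants.
\begin{equation}
  M = a \, \log_{10} P + b
\end{equation}
```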
The accomplishments of Edwin Hubble , renowned American astronomer, were made possible by Leavitt's groundbreaking research and Leavitt's Law. "If Henrietta Leavitt had provided the key to determine the size of the cosmos, then it was Edwin Powell Hubble who inserted it in the lock and provided the observations that allowed it to be turned", wrote David H. and Matthew D.H. Clark in their book Measuring the Cosmos . [ 103 ]
Hubble often said that Leavitt deserved the Nobel for her work. [ 104 ] Gösta Mittag-Leffler of the Swedish Academy of Sciences had begun paperwork on her nomination in 1924, only to learn that she had died of cancer three years earlier [ 105 ] (the Nobel prize cannot be awarded posthumously).
In 1925, Harvard graduate student Cecilia Payne-Gaposchkin demonstrated for the first time from existing evidence on the spectra of stars that stars were made up almost exclusively of hydrogen and helium , one of the most fundamental theories in stellar astrophysics . [ 100 ] [ 102 ]
Canadian-born Maud Menten worked in the US and Germany. Her most famous work was on enzyme kinetics together with Leonor Michaelis , based on earlier findings of Victor Henri . This resulted in the Michaelis–Menten equation. Menten also invented the azo-dye coupling reaction for alkaline phosphatase , which is still used in histochemistry. She characterised bacterial toxins from B. paratyphosus , Streptococcus scarlatina and Salmonella ssp. , and conducted the first electrophoretic separation of proteins in 1944. She worked on the properties of hemoglobin , regulation of blood sugar level, and kidney function.
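The rate law that bears their names relates reaction velocity to substrate concentration:

```latex
% Michaelis--Menten equation: v is the reaction rate, [S] the substrate
% concentration, V_max the maximal rate at saturating substrate, and
% K_m the substrate concentration at which v = V_max / 2.
\begin{equation}
  v = \frac{V_{\max}\,[S]}{K_{m} + [S]}
\end{equation}
```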
World War II brought some new opportunities. The Office of Scientific Research and Development , under Vannevar Bush , began in 1941 to keep a registry of men and women trained in the sciences. Because there was a shortage of workers, some women were able to work in jobs they might not otherwise have accessed. Many women worked on the Manhattan Project or on scientific projects for the United States military services. Women who worked on the Manhattan Project included Leona Woods Marshall, Katharine Way , and Chien-Shiung Wu . It was Wu's earlier research that confirmed Enrico Fermi's hypothesis that the fission product xenon-135 was impeding the operation of the B reactor; the adjustments made in response quickly let the project resume its course. [ 106 ] [ 107 ]
Wu would later carry out the first experimental corroboration relevant to Albert Einstein's EPR paradox , and prove the first violation of parity and charge-conjugation symmetry , thereby laying the conceptual basis for the future Standard Model of particle physics and the rapid development of the new field. [ 108 ]
Women in other disciplines looked for ways to apply their expertise to the war effort. Three nutritionists, Lydia J. Roberts , Hazel K. Stiebeling , and Helen S. Mitchell , developed the Recommended Dietary Allowance in 1941 to help military and civilian groups make plans for group feeding situations. The RDAs proved especially necessary once foods began to be rationed . Rachel Carson worked for the United States Bureau of Fisheries , writing brochures to encourage Americans to consume a wider variety of fish and seafood. She also contributed to research to assist the Navy in developing techniques and equipment for submarine detection.
Women in psychology formed the National Council of Women Psychologists , which organized projects related to the war effort. The NCWP elected Florence Laura Goodenough president. In the social sciences, several women contributed to the Japanese Evacuation and Resettlement Study , based at the University of California . This study was led by sociologist Dorothy Swaine Thomas , who directed the project and synthesized information from her informants, mostly graduate students in anthropology. These included Tamie Tsuchiyama , the only Japanese-American woman to contribute to the study, and Rosalie Hankey Wax .
In the United States Navy , female scientists conducted a wide range of research. Mary Sears , a planktonologist , researched military oceanographic techniques as head of the Hydrographic Office's Oceanographic Unit. Florence van Straten , a chemist, worked as an aerological engineer; she studied the effects of weather on military combat. Grace Hopper , a mathematician, became one of the first computer programmers for the Mark I computer. Mina Spiegel Rees , also a mathematician, was the chief technical aide for the Applied Mathematics Panel of the National Defense Research Committee .
Gerty Cori was a biochemist who discovered the mechanism by which glycogen, a derivative of glucose, is transformed in the muscles to form lactic acid, and is later reformed as a way to store energy. For this discovery she and her colleagues were awarded the Nobel prize in 1947, making her the third woman and the first American woman to win a Nobel Prize in science. She was the first woman ever to be awarded the Nobel Prize in Physiology or Medicine. Cori is among several scientists whose works are commemorated by a U.S. postage stamp. [ 109 ]
Nina Byers notes that before 1976, fundamental contributions of women to physics were rarely acknowledged. Women worked unpaid or in positions lacking the status they deserved. That imbalance is gradually being redressed. [ citation needed ]
In the early 1980s, Margaret Rossiter presented two concepts for understanding the statistics behind women in science as well as the disadvantages women continued to suffer. She coined the terms "hierarchical segregation" and "territorial segregation." The former term describes the phenomenon in which the further one goes up the chain of command in the field, the smaller the presence of women. The latter describes the phenomenon in which women "cluster in scientific disciplines." [ 110 ] : 33–34
A recent book titled Athena Unbound provides a life-course analysis (based on interviews and surveys) of women in science from early childhood interest, through university, graduate school and the academic workplace. The thesis of this book is that "Women face a special series of gender related barriers to entry and success in scientific careers that persist, despite recent advances". [ 111 ]
The L'Oréal-UNESCO Awards for Women in Science were set up in 1998, with prizes alternating each year between the material sciences and the life sciences. One award is given for each geographical region of Africa and the Middle East, Asia-Pacific, Europe, Latin America and the Caribbean, and North America. By 2017, these awards had recognised almost 100 laureates from 30 countries. Two of the laureates have gone on to win the Nobel Prize: Ada Yonath (2008) and Elizabeth Blackburn (2009). Fifteen promising young researchers also receive an International Rising Talent fellowship each year within this programme.
South-African born physicist and radiobiologist Tikvah Alper (1909–95), working in the UK, developed many fundamental insights into biological mechanisms, including the (negative) discovery that the infective agent in scrapie could not be a virus or other eukaryotic structure.
French virologist Françoise Barré-Sinoussi performed some of the fundamental work in the identification of the human immunodeficiency virus (HIV) as the cause of AIDS, for which she shared the 2008 Nobel Prize in Physiology or Medicine.
In July 1967, Jocelyn Bell Burnell discovered evidence for the first known radio pulsar , which resulted in the 1974 Nobel Prize in Physics for her supervisor . She was president of the Institute of Physics from October 2008 until October 2010.
Astrophysicist Margaret Burbidge was a member of the B²FH group responsible for originating the theory of stellar nucleosynthesis, which explains how elements are formed in stars. She has held a number of prestigious posts, including the directorship of the Royal Greenwich Observatory .
Mary Cartwright was a mathematician and student of G. H. Hardy . Her work on nonlinear differential equations was influential in the field of dynamical systems .
Rosalind Franklin was a crystallographer, whose work helped to elucidate the fine structures of coal, graphite , DNA and viruses. In 1953, the work she did on DNA allowed Watson and Crick to conceive their model of the structure of DNA. Her photograph of DNA gave Watson and Crick a basis for their DNA research, and they were awarded the Nobel Prize without giving due credit to Franklin, who had died of cancer in 1958.
Jane Goodall is a British primatologist considered to be the world's foremost expert on chimpanzees and is best known for her over 55-year study of social and family interactions of wild chimpanzees. She is the founder of the Jane Goodall Institute and the Roots & Shoots programme.
Dorothy Hodgkin analyzed the molecular structure of complex chemicals by studying diffraction patterns caused by passing X-rays through crystals. She won the 1964 Nobel prize for chemistry for discovering the structure of vitamin B12 , becoming the third woman to win the prize for chemistry. [ 112 ]
Irène Joliot-Curie , daughter of Marie Curie, won the 1935 Nobel Prize for chemistry with her husband Frédéric Joliot for their work in radioactive isotopes leading to nuclear fission . This made the Curies the family with the most Nobel laureates to date.
Palaeoanthropologist Mary Leakey discovered the first skull of a fossil ape on Rusinga Island , as well as a noted robust australopithecine .
Italian neurologist Rita Levi-Montalcini , together with Stanley Cohen , received the 1986 Nobel Prize in Physiology or Medicine for the discovery of nerve growth factor (NGF). Her work opened the way to a deeper understanding of conditions such as tumors, delayed wound healing, and malformations. [ 113 ] While making advancements in medicine and science, Levi-Montalcini was also politically active throughout her life. [ 114 ] She was appointed a Senator for Life in the Italian Senate in 2001 and, at the time of her death in 2012 at age 103, was the oldest Nobel laureate ever to have lived.
Zoologist Anne McLaren conducted studies in genetics which led to advances in in vitro fertilization . She became the first female officer of the Royal Society in the society's 331-year history.
Christiane Nüsslein-Volhard received the Nobel Prize in Physiology or Medicine in 1995 for research on the genetic control of embryonic development. She also started the Christiane Nüsslein-Volhard Foundation (Christiane Nüsslein-Volhard Stiftung), to aid promising young female German scientists with children.
Bertha Swirles was a theoretical physicist who made a number of contributions to early quantum theory . She co-authored the well-known textbook Methods of Mathematical Physics with her husband Sir Harold Jeffreys .
Kay McNulty , Betty Jennings , Betty Snyder , Marlyn Wescoff , Fran Bilas and Ruth Lichterman were six of the original programmers for the ENIAC , the first general purpose electronic computer. [ 115 ]
Linda B. Buck is a neurobiologist who was awarded the 2004 Nobel Prize in Physiology or Medicine along with Richard Axel for their work on olfactory receptors .
Rachel Carson was a marine biologist from the United States, credited with founding the environmental movement. [ 116 ] In 1962 the biologist and activist published Silent Spring , a work on the dangers of pesticides. The book prompted widespread questioning of the use of harmful pesticides and other chemicals in agriculture [ 116 ] and provoked a campaign aimed at discrediting Carson. Nevertheless, the federal government called for a review of DDT, which concluded with DDT being banned. [ 117 ] Carson died of cancer in 1964, at the age of 57. [ 117 ]
Eugenie Clark , popularly known as The Shark Lady, was an American ichthyologist known for her research on poisonous fish of the tropical seas and on the behavior of sharks. [ 118 ]
Ann Druyan is an American writer, lecturer and producer specializing in cosmology and popular science . Druyan has credited her knowledge of science to the 20 years she spent studying with her late husband, Carl Sagan , rather than formal academic training. [ citation needed ] She was responsible for the selection of music on the Voyager Golden Record for the Voyager 1 and Voyager 2 exploratory missions. Druyan also sponsored the Cosmos 1 spacecraft.
Gertrude B. Elion was an American biochemist and pharmacologist, awarded the Nobel Prize in Physiology or Medicine in 1988 for her work on the differences in biochemistry between normal human cells and pathogens.
Sandra Moore Faber , with Robert Jackson , discovered the Faber–Jackson relation between luminosity and stellar velocity dispersion in elliptical galaxies . She also headed the team which discovered the Great Attractor , a large concentration of mass which is pulling a number of nearby galaxies in its direction.
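For context, the relation named above is an empirical power law. In a standard approximate form (added here for illustration, not taken from this article's sources), it reads

L \propto \sigma^{4}

where L is the elliptical galaxy's luminosity and \sigma its central stellar velocity dispersion; the exponent is only approximately 4 and varies with the sample studied.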
Zoologist Dian Fossey worked with gorillas in Africa from 1967 until her murder in 1985.
Astronomer Andrea Ghez received a MacArthur "genius grant" in 2008 for her work in surmounting the limitations of earthbound telescopes. [ 119 ]
Maria Goeppert Mayer was the second female Nobel Prize winner in Physics, for proposing the nuclear shell model of the atomic nucleus. Earlier in her career, she had worked in unofficial or volunteer positions at the university where her husband was a professor. Goeppert Mayer is one of several scientists whose works are commemorated by a U.S. postage stamp. [ 120 ]
Sulamith Low Goldhaber and her husband Gerson Goldhaber formed a research team on the K meson and other high-energy particles in the 1950s.
Carol Greider and Australian-born Elizabeth Blackburn , along with Jack W. Szostak, received the 2009 Nobel Prize in Physiology or Medicine for the discovery of how chromosomes are protected by telomeres and the enzyme telomerase.
Rear Admiral Grace Murray Hopper developed the first computer compiler, released in 1952, while working for the Eckert–Mauchly Computer Corporation .
Deborah S. Jin 's team at JILA , in Boulder, Colorado , in 2003 produced the first fermionic condensate , a new state of matter .
Stephanie Kwolek , a researcher at DuPont, invented poly-paraphenylene terephthalamide – better known as Kevlar .
Lynn Margulis was a biologist best known for her work on endosymbiotic theory , which is now generally accepted as the explanation for how certain organelles were formed.
Barbara McClintock 's studies of maize genetics demonstrated genetic transposition in the 1940s and 1950s. Before then, McClintock obtained her PhD from Cornell University in 1927. Her discovery of transposition provided a greater understanding of mobile loci within chromosomes and showed that the genome can be fluid. [ 121 ] She dedicated her life to her research, and she was awarded the Nobel Prize in Physiology or Medicine in 1983. McClintock was the first American woman to receive a Nobel Prize that was not shared by anyone else. [ 121 ] McClintock is one of several scientists whose works are commemorated by a U.S. postage stamp. [ 122 ]
Nita Ahuja is a surgeon-scientist known for her work on the CpG island methylator phenotype (CIMP) in cancer. She has served as Chief of Surgical Oncology at Johns Hopkins Hospital, the first woman ever to head that department.
Carolyn Porco is a planetary scientist best known for her work on the Voyager program and the Cassini–Huygens mission to Saturn . She is also known for her popularization of science, in particular space exploration.
Physicist Helen Quinn , with Roberto Peccei , postulated Peccei–Quinn symmetry . One consequence is a particle known as the axion , a candidate for the dark matter that pervades the universe. Quinn was the first woman to receive the Dirac Medal from the International Centre for Theoretical Physics (ICTP) and the first to receive the Oskar Klein Medal .
Lisa Randall is a theoretical physicist and cosmologist, best known for her work on the Randall–Sundrum model . She was the first tenured female physics professor at Princeton University .
Sally Ride was an astrophysicist and the first American woman, and then-youngest American, to travel to outer space. Ride wrote or co-wrote several books on space aimed at children, with the goal of encouraging them to study science. [ 123 ] [ 124 ] Ride participated in the Gravity Probe B (GP-B) project, which provided more evidence that the predictions of Albert Einstein 's general theory of relativity are correct. [ 125 ]
Through her observations of galaxy rotation curves, astronomer Vera Rubin discovered the galaxy rotation problem , now taken to be one of the key pieces of evidence for the existence of dark matter . She was the first woman allowed to observe at the Palomar Observatory .
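The reasoning behind this evidence can be sketched with a standard textbook relation (added for illustration, not from this article's sources). For a star on a circular orbit of radius r around an enclosed mass M(r), Newtonian dynamics gives

v(r) = \sqrt{\frac{G\,M(r)}{r}}

so visible matter alone would predict a Keplerian decline, v \propto r^{-1/2}, beyond the luminous disk. The roughly flat rotation curves Rubin measured instead imply M(r) growing in proportion to r, that is, substantial unseen mass.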
Sara Seager is a Canadian-American astronomer who is currently a professor at the Massachusetts Institute of Technology and known for her work on extrasolar planets.
Astronomer Jill Tarter is best known for her work on the search for extraterrestrial intelligence. Tarter was named one of the 100 most influential people in the world by Time Magazine in 2004. [ 126 ] She is the former director of the Center for SETI Research. [ 127 ]
Rosalyn Yalow was the co-winner of the 1977 Nobel Prize in Physiology or Medicine (together with Roger Guillemin and Andrew Schally) for development of the radioimmunoassay (RIA) technique.
Latin America
Maria Nieves Garcia-Casal was the first woman scientist and nutritionist from Latin America to lead the Latin American Society of Nutrition.
Angela Restrepo Moreno is a microbiologist from Colombia. She first became interested in tiny organisms when she had the opportunity to view them through a microscope that belonged to her grandfather. [ 128 ] While Restrepo's research is varied, her main focus is fungi and the diseases they cause. [ 128 ] This led her to study paracoccidioidomycosis, a fungal disease first identified in Brazil and diagnosed only in Latin America. [ 128 ] Research groups she established pursue two lines of inquiry: the relationship between humans, fungi, and the environment, and how the cells within fungi work. [ 128 ]
Along with her research, Restrepo co-founded the Corporation for Biological Research (CIB), a non-profit devoted to scientific research. [ 128 ] She was awarded the SCOPUS Prize in 2007 for her numerous publications. [ 128 ] She currently resides in Colombia and continues her research.
Susana López Charretón was born in Mexico City, Mexico, in 1957. She is a virologist whose work has focused on the rotavirus . [ 129 ] When she began studying rotavirus, it had been discovered only four years earlier. [ 129 ] Her main task was to study how the virus enters cells and how it multiplies. [ 129 ] Thanks to her work, and that of several others, scientists were able to learn more details about the virus. [ 129 ] Her research now focuses on the virus's ability to recognize the cells it infects. [ 129 ] Along with her husband, López Charretón was awarded the Carlos J. Finlay Prize for Microbiology in 2001. [ 129 ] She also received the L'Oréal-UNESCO For Women in Science award in 2012, [ 129 ] among several other awards for her research.
Liliana Quintanar Vera is a Mexican chemist. A researcher in the Department of Chemistry at the Center for Research and Advanced Studies (Cinvestav), Quintanar currently focuses on neurodegenerative diseases like Parkinson's, Alzheimer's, and prion disease, as well as degenerative diseases like diabetes and cataracts. [ 130 ] In this research she has concentrated on how copper interacts with the proteins involved in these neurodegenerative diseases. [ 131 ]
Quintanar's awards include the Mexican Academy of Sciences Research Prize for Science in 2017, the Marcos Moshinsky Chair award in 2016, a Fulbright Scholarship in 2014, and the L'Oréal-UNESCO For Women in Science Award in 2007. [ 130 ]
The Nobel Prize and Prize in Economic Sciences have been awarded to women 61 times between 1901 and 2022. One woman, Marie Sklodowska-Curie, has been honored twice, with the 1903 Nobel Prize in Physics and the 1911 Nobel Prize in Chemistry. This means that 60 women in total have been awarded the Nobel Prize between 1901 and 2022. 25 women have been awarded the Nobel Prize in physics, chemistry, physiology or medicine. [ 4 ]
Statistics are used to indicate disadvantages faced by women in science, and also to track positive changes of employment opportunities and incomes for women in science. [ 110 ] : 33
Women appear to do less well than men (in terms of degree, rank, and salary) in fields that have traditionally been dominated by women, such as nursing . In 1991, women earned 91% of the PhDs in nursing, and men held 4% of full professorships in nursing. [ citation needed ] In the field of psychology , where women earn the majority of PhDs, women do not fill the majority of high-rank positions in that field. [ 133 ] [ citation needed ]
Women's lower salaries in the scientific community are also reflected in statistics. According to data from 1993, the median salaries of female scientists and engineers with doctoral degrees were 20% lower than those of men. [ 110 ] : 35 [ needs update ] Part of this gap reflects the lower participation of women in high-ranking scientific positions and the female majority in low-paid fields and positions. However, even when men and women work in the same scientific field, women are typically paid 15–17% less than men. [ citation needed ] In addition to the gender gap , there are also salary differences between ethnicities: African-American women with more years of experience earn 3.4% less than European-American women with similar skills, while Asian-American women engineers out-earn both groups. [ 134 ] [ needs update ]
Women are also under-represented in the sciences compared to their share of the overall working population. African-American women make up 11% of the workforce, but only 3% of them are employed as scientists and engineers. Hispanics make up 8% of the total workers in the US, and 3% of that number are scientists and engineers. Native American participation cannot be statistically measured. [ citation needed ]
Women tend to earn less than men in almost all industries, including government and academia. [ citation needed ] Women are also less likely to be hired into the highest-paid positions. [ citation needed ] The differences in salaries, ranks, and overall success between the genders are often attributed to women's shorter professional experience. The rate of women's professional achievement is increasing: by 1996, salaries for women in professional fields had risen from 85% to 95% of those of men with similar skills and jobs, and young women between the ages of 27 and 33 earned 98% as much as their male peers. [ needs update ] In the total workforce of the United States, women earn 74% as much as their male counterparts (up from 59% in the 1970s). [ 110 ] : 33–37 [ needs update ]
Claudia Goldin of Harvard concludes in A Grand Gender Convergence: Its Last Chapter : "The gender gap in pay would be considerably reduced and might vanish altogether if firms did not have an incentive to disproportionately reward individuals who labored long hours and worked particular hours." [ 135 ]
Research on women's participation in the "hard" sciences, such as physics and computer science , speaks of the "leaky pipeline" model, in which the proportion of women "on track" to potentially becoming top scientists falls off at every step of the way, from getting interested in science and maths in elementary school, through the doctorate, postdoctoral work, and career steps. The leaky pipeline also applies in other fields. In biology , for instance, women in the United States have been earning master's degrees in the same numbers as men for two decades, yet fewer women get PhDs , and the number of women principal investigators has not risen. [ 136 ]
One explanation for this "leaky pipeline" lies in factors outside academia that occur in women's lives at the same time as they pursue continued education and a career. The most significant of these is family formation: as women continue their academic careers, many are also stepping into new roles as wives and mothers, which traditionally demand a large time commitment and presence outside work. These commitments fit poorly with the demands of attaining tenure, which is why women entering the family-formation period of their lives are 35% less likely than their male counterparts to pursue tenure-track positions after receiving their PhDs. [ 137 ]
In the UK, women occupied over half the places in science-related higher education courses (science, medicine, maths, computer science and engineering) in 2004–05. [ 138 ] However, gender differences varied from subject to subject: women substantially outnumbered men in biology and medicine , especially nursing, while men predominated in maths, physical sciences, computer science and engineering.
In the US, women with science or engineering doctoral degrees were predominantly employed in the education sector in 2001, with substantially fewer employed in business or industry than men. [ 139 ] According to salary figures reported in 1991, women earned between 83.6 percent and 87.5 percent of a man's salary. [ needs update ] An even greater disparity is the ongoing trend that women scientists with more experience are not as well compensated as their male counterparts: the salary of a male engineer continues to grow as he gains experience, whereas a female engineer's salary tends to plateau. [ 140 ]
In the United States and many European countries, women who succeed in science tend to be graduates of single-sex schools. [ 110 ] : Chapter 3 [ needs update ] Women earn 54% of all bachelor's degrees in the United States, and 50% of those are in science. 9% of US physicists are women. [ 110 ] : Chapter 2 [ needs update ]
In 2013, women accounted for 53% of the world's graduates at the bachelor's and master's level and 43% of successful PhD candidates but just 28% of researchers. Women graduates are consistently highly represented in the life sciences, often at over 50%. However, their representation in the other fields is inconsistent. In North America and much of Europe, few women graduate in physics, mathematics and computer science but, in other regions, the proportion of women may be close to parity in physics or mathematics. In engineering and computer sciences, women consistently trail men, a situation that is particularly acute in many high-income countries. [ 141 ]
As of 2015, each step up the ladder of the scientific research system saw a drop in female participation until, at the highest echelons of scientific research and decision-making, there were very few women left. In 2015, the EU Commissioner for Research, Science and Innovation Carlos Moedas called attention to this phenomenon, adding that the majority of entrepreneurs in science and engineering tended to be men. In 2013, the German government coalition agreement introduced a 30% quota for women on company boards of directors. [ 141 ]
In 2010, women made up 14% of university chancellors and vice-chancellors at Brazilian public universities and 17% of those in South Africa in 2011. [ 142 ] [ 143 ] As of 2015, in Argentina, women made up 16% of directors and vice-directors of national research centres and, in Mexico, 10% of directors of scientific research institutes at the National Autonomous University of Mexico. [ 144 ] [ 145 ] In the US, the corresponding figure is slightly higher, at 23%. In the EU, fewer than 16% of tertiary institutions were headed by a woman in 2010, and just 10% of universities. In 2011, at the main tertiary institution for the English-speaking Caribbean, the University of the West Indies, women represented 51% of lecturers but only 32% of senior lecturers and 26% of full professors . A 2018 review of the Royal Society of Britain by historians Aileen Fyfe and Camilla Mørk Røstvik produced similarly low numbers. [ 146 ] Women account for more than 25% of science academy members in only a handful of countries, including Cuba, Panama and South Africa; as of 2015, the figure for Indonesia was 17%. [ 141 ] [ 147 ] [ 148 ]
In life sciences, women researchers have achieved parity (45–55% of researchers) in many countries. In some, the balance even now tips in their favour. Six out of ten researchers are women in both medical and agricultural sciences in Belarus and New Zealand, for instance. More than two-thirds of researchers in medical sciences are women in El Salvador, Estonia, Kazakhstan, Latvia, the Philippines, Tajikistan, Ukraine and Venezuela. [ 141 ]
There has been a steady increase in female graduates in agricultural sciences since the turn of the century. In sub-Saharan Africa, for instance, numbers of female graduates in agricultural science have been increasing steadily, with eight countries reporting a share of women graduates of 40% or more: Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone, South Africa, Swaziland and Zimbabwe. The reasons for this surge are unclear, although one explanation may lie in the growing emphasis on national food security and the food industry. Another possible explanation is that women are highly represented in biotechnology. For example, in South Africa, women were underrepresented in engineering (16%) in 2004 and in 'natural scientific professions' (16%) in 2006 but made up 52% of employees working in biotechnology-related companies. [ 141 ]
Women play an increasing role in environmental sciences and conservation biology; indeed, they played a foremost role in the development of these disciplines. Rachel Carson's Silent Spring proved an important impetus to the conservation movement and the later banning of chemical pesticides. Women were also central to conservation biology, including Dian Fossey, author of the famous Gorillas in the Mist , and Jane Goodall, who studied primates in East Africa. Today women make up an increasing proportion of roles in the active conservation sector. A recent survey of those working in the Wildlife Trusts in the UK, the leading conservation organisation in England, found that there are nearly as many women as men in practical conservation roles. [ 149 ]
Women are consistently underrepresented in engineering and related fields. In Israel, for instance, where 28% of senior academic staff are women, the proportions are far lower in engineering (14%), physical sciences (11%), and mathematics and computer sciences (10%), while women form the majority in education (52%) and paramedical occupations (63%). In Japan and the Republic of Korea, women represent just 5% and 10% of engineers, respectively. [ 141 ]
Women pursuing STEM careers often face gender disparities in the workplace, especially in science and engineering. It has become more common for women to pursue undergraduate degrees in science, yet women continue to be shortchanged in salary and in access to higher-ranking positions; men, for example, are more likely than women to be selected for a given employment position. [ 150 ]
In Europe and North America, the number of female graduates in engineering, physics, mathematics and computer science is generally low. Women make up just 19% of engineers in Canada, Germany and the US and 22% in Finland, for example. However, 50% of engineering graduates are women in Cyprus, 38% in Denmark and 36% in the Russian Federation, for instance. [ 141 ]
In many cases, engineering has lost ground to other sciences, including agriculture. The case of New Zealand is fairly typical. Here, women jumped from representing 39% to 70% of agricultural graduates between 2000 and 2012, continued to dominate health (80–78%) but ceded ground in science (43–39%) and engineering (33–27%). [ 141 ]
In a number of developing countries, there is a sizable proportion of women engineers. At least three out of ten engineers are women, for instance, in Costa Rica, Vietnam and the United Arab Emirates (31%), Algeria (32%), Mozambique (34%), Tunisia (41%) and Brunei Darussalam (42%). In Malaysia (50%) and Oman (53%), women are on a par with men. Of the 13 sub-Saharan countries reporting data, seven have observed substantial increases (more than 5%) in women engineers since 2000, namely: Benin, Burundi, Eritrea, Ethiopia, Madagascar, Mozambique and Namibia. [ 141 ]
Of the seven Arab countries reporting data, four observe a steady percentage or an increase in female engineers (Morocco, Oman, Palestine and Saudi Arabia). In the United Arab Emirates, the government has made it a priority to develop a knowledge economy, having recognized the need for a strong human resource base in science, technology and engineering. With just 1% of the labour force being Emirati, it is also concerned about the low percentage of Emirati citizens employed in key industries. As a result, it has introduced policies promoting the training and employment of Emirati citizens, as well as a greater participation of Emirati women in the labour force. Emirati female engineering students have said that they are attracted to a career in engineering for reasons of financial independence, the high social status associated with this field, the opportunity to engage in creative and challenging projects and the wide range of career opportunities. [ 141 ]
An analysis of computer science shows a steady decrease in female graduates since 2000 that is particularly marked in high-income countries. Between 2000 and 2012, the share of women graduates in computer science slipped in Australia, New Zealand, the Republic of Korea and US. In Latin America and the Caribbean, the share of women graduates in computer science dropped by between 2 and 13 percentage points over this period for all countries reporting data. [ 141 ]
There are exceptions. In Denmark, the proportion of female graduates in computer science increased from 15% to 24% between 2000 and 2012 and Germany saw an increase from 10% to 17%. These are still very low levels. Figures are higher in many emerging economies. In Turkey, for instance, the proportion of women graduating in computer science rose from a relatively high 29% to 33% between 2000 and 2012. [ 141 ]
The Malaysian information technology (IT) sector is made up equally of women and men, with large numbers of women employed as university professors and in the private sector. This is a product of two historical trends: the predominance of women in the Malaysian electronics industry, the precursor to the IT industry, and the national push to achieve a 'pan-Malayan' culture beyond the three ethnic groups of Indian, Chinese and Malay. Government support for the education of all three groups is available on a quota basis and, since few Malay men are interested in IT, this leaves more room for women. Additionally, families tend to be supportive of their daughters' entry into this prestigious and highly remunerated industry, in the interests of upward social mobility. Malaysia's push to develop an endogenous research culture should deepen this trend. [ 141 ]
In India, the substantial increase in women undergraduates in engineering may be indicative of a change in the 'masculine' perception of engineering in the country. It is also a product of interest on the part of parents, since their daughters will be assured of employment as the field expands, as well as an advantageous marriage. Other factors include the 'friendly' image of engineering in India and the easy access to engineering education resulting from the increase in the number of women's engineering colleges over the last two decades. [ 141 ]
While women have made huge strides in the STEM fields, they remain underrepresented. One of the areas where women are most underrepresented in science is space flight: out of the 556 people who have traveled to space, only 65 have been women, roughly 11% of all astronauts. [ 151 ]
In the 1960s, the American space program was taking off. However, women were not allowed to be considered for the space program because at the time astronauts were required to be military pilots – a profession that women were not allowed to be a part of. There were other "practical" reasons as well. According to General Don Flickinger of the United States Air Force, there was difficulty "designing and fitting a space suit to accommodate their particular biological needs and functions." [ 152 ]
During the early 1960s, the first American astronauts, nicknamed the Mercury Seven , were in training. At the same time, William Randolph Lovelace II was interested to see whether women could manage the same training the Mercury 7 were undergoing. Lovelace recruited thirteen female pilots, called the " Mercury 13 ", and put them through the same tests that the male astronauts took. The women actually performed better on these tests than the men of the Mercury 7 did. However, this did not convince NASA officials to allow women in space. [ 151 ] In response, congressional hearings were held to investigate discrimination against women in the program. One of the women who testified at the hearings was Jerrie Cobb , the first woman to pass Lovelace's tests. [ 153 ] During her testimony, Cobb said: [ 151 ]
I find it a little ridiculous when I read in a newspaper that there is a place called Chimp College in New Mexico where they are training chimpanzees for space flight , one a female named Glenda. I think it would be at least as important to let the women undergo this training for space flight.
NASA officials also had representatives present, notably astronauts John Glenn and Scott Carpenter , who testified that women were not suited for the space program. Ultimately, no action came of the hearings, and NASA did not put a woman in space until 1983. [ 153 ]
Even though the United States did not allow women in space during the 60s or 70s, other countries did. Valentina Tereshkova , a cosmonaut from the Soviet Union, was the first woman to fly in space. Although she had no piloting experience, she flew on the Vostok 6 in 1963. Before going to space, Tereshkova had been a textile worker. She successfully orbited the Earth 48 times, yet the next woman to go to space did not fly until almost twenty years later. [ 154 ]
Sally Ride was the third woman to go to space and the first American woman in space. In 1978, Ride and five other women were accepted into the first class of astronauts that allowed women. In 1983, Ride became the first American woman in space when she flew on the Challenger for the STS-7 mission. [ 154 ]
NASA has been more inclusive in recent years. The number of women in NASA's astronaut classes has steadily risen since the first class that allowed women in 1978. The most recent class was 45% women, and the class before was 50%. In 2019, the first all-female spacewalk was completed at the International Space Station . [ 155 ]
The global figures mask wide disparities from one region to another. In Southeast Europe, for instance, women researchers have obtained parity and, at 44%, are on the verge of doing so in Central Asia and Latin America and the Caribbean. In the European Union, on the other hand, just one in three (33%) researchers is a woman, compared to 37% in the Arab world. Women are also better represented in sub-Saharan Africa (30%) than in South Asia (17%). [ 141 ]
There are also wide intraregional disparities. Women make up 52% of researchers in the Philippines and Thailand, for instance, and are close to parity in Malaysia and Vietnam, yet only one in three researchers is a woman in Indonesia and Singapore. In Japan and the Republic of Korea, two countries characterized by high researcher densities and technological sophistication, as few as 15% and 18% of researchers respectively are women. These are the lowest ratios among members of the Organisation for Economic Co-operation and Development . The Republic of Korea also has the widest gap among OECD members in remuneration between men and women researchers (39%). There is also a yawning gap in Japan (29%). [ 141 ]
Latin America has some of the world's highest rates of women studying scientific fields; it also shares with the Caribbean one of the highest proportions of female researchers: 44%. Of the 12 countries reporting data for the years 2010–2013, seven have achieved gender parity, or even dominate research: Bolivia (63%), Venezuela (56%), Argentina (53%), Paraguay (52%), Uruguay (49%), Brazil (48%) and Guatemala (45%). Costa Rica is on the cusp (43%). Chile has the lowest score among countries for which there are recent data (31%). The Caribbean paints a similar picture, with Cuba having achieved gender parity (47%) and Trinidad and Tobago on 44%. Recent data on women's participation in industrial research are available for those countries with the most developed national innovation systems, with the exception of Brazil and Cuba: Uruguay (47%), Argentina (29%), Colombia and Chile (26%). [ 141 ]
As in most other regions, the great majority of health graduates are women (60–85%). Women are also strongly represented in science. More than 40% of science graduates are women in each of Argentina, Colombia, Ecuador, El Salvador, Mexico, Panama and Uruguay. The Caribbean paints a similar picture, with women graduates in science being on a par with men or dominating this field in Barbados, Cuba, Dominican Republic and Trinidad and Tobago. [ 141 ]
In engineering, women make up over 30% of the graduate population in several Latin American countries, including Argentina, Colombia, Costa Rica, Honduras, Panama and Uruguay, and in one Caribbean country, the Dominican Republic. There has been a decrease in the number of women engineering graduates in Argentina, Chile and Honduras. [ 141 ]
The participation of women in science has consistently dropped since the turn of the century. This trend has been observed in all sectors of the larger economies: Argentina, Brazil, Chile and Colombia. Mexico is a notable exception, having recorded a slight increase. Some of the decrease may be attributed to women transferring to agricultural sciences in these countries. Another negative trend is the drop in female doctoral students and in the labour force. Of those countries reporting data, the majority signal a significant drop of 10–20 percentage points in the transition from master's to doctoral graduates. [ 141 ]
A study at UNICAMP (2019–2023) reveals female underrepresentation in publications, particularly in STEM fields and first/last authorship positions. [ 156 ] While UNICAMP's 42% female participation is comparable to USP's historical average (38.28%), [ 157 ] both fall below the Brazilian average (49%), [ 158 ] contrasting with higher female representation in science in some Latin American countries (UNESCO). Despite gender equity policies, female participation at UNICAMP declined after 2021, potentially due to the pandemic, and Field-Weighted Citation Impact (FWCI) was lower in areas like Social Sciences and Life Sciences, highlighting the need for stronger gender equality policies in science.
Most countries in Eastern Europe, West and Central Asia have attained gender parity in research ( Armenia , Azerbaijan , Georgia , Kazakhstan , Mongolia and Ukraine ) or are on the brink of doing so ( Kyrgyzstan and Uzbekistan ). This trend is reflected in tertiary education, with some exceptions in engineering and computer science. Although Belarus and the Russian Federation have seen a drop over the past decade, women still represented 41% of researchers in 2013. In the former Soviet states, women are also very present in the business enterprise sector: Bosnia and Herzegovina (59%), Azerbaijan (57%), Kazakhstan (50%), Mongolia (48%), Latvia (48%), Serbia (46%), Croatia and Bulgaria (43%), Ukraine and Uzbekistan (40%), Romania and Montenegro (38%), Belarus (37%), Russian Federation (37%). [ 141 ]
One in three researchers is a woman in Turkey (36%) and Tajikistan (34%). Participation rates are lower in Iran (26%) and Israel (21%), although Israeli women represent 28% of senior academic staff. At university, Israeli women dominate medical sciences (63%) but only a minority study engineering (14%), physical sciences (11%), mathematics and computer science (10%). There has been an interesting evolution in Iran. Whereas the share of female PhD graduates in health remained stable at 38–39% between 2007 and 2012, it rose in all three other broad fields. Most spectacular was the leap in female PhD graduates in agricultural sciences from 4% to 33% but there was also a marked progression in science (from 28% to 39%) and engineering (from 8% to 16%). [ 141 ]
With the exception of Greece, all the countries of Southeast Europe were once part of the Soviet bloc. Some 49% of researchers in these countries are women (compared to 37% in Greece in 2011). This high proportion is considered a legacy of the consistent investment in education by the Socialist governments in place until the early 1990s, including that of the former Yugoslavia. Moreover, the participation of female researchers is holding steady or increasing in much of the region, with representation broadly even across the four sectors of government, business, higher education and non-profit. In most countries, women tend to be on a par with men among tertiary graduates in science. Between 70% and 85% of graduates are women in health, less than 40% in agriculture and between 20% and 30% in engineering. Albania has seen a considerable increase in the share of its women graduates in engineering and agriculture. [ 141 ]
Women make up 33% of researchers overall in the European Union (EU), slightly more than their representation in science (32%). Women constitute 40% of researchers in higher education, 40% in government and 19% in the private sector. The number of female researchers has been increasing faster than that of male researchers over the last decade (5.1% annually over 2002–2009, compared with 3.3% for men), as has women's participation among scientists and engineers (up 5.4% annually between 2002 and 2010, compared with 3.1% for men). [ 141 ]
Despite these gains, women's academic careers in Europe remain characterized by strong vertical and horizontal segregation. In 2010, although female students (55%) and graduates (59%) outnumbered their male counterparts, men outnumbered women at the PhD level and beyond (albeit by a small margin). Further along in the research career, women represented 44% of grade C academic staff, 37% of grade B academic staff and 20% of grade A academic staff. These trends are intensified in science, where women make up 31% of the student population at the tertiary level, 38% of PhD students and 35% of PhD graduates. At the faculty level, they make up 32% of academic grade C personnel, 23% of grade B and 11% of grade A. The proportion of women among full professors is lowest in engineering and technology, at 7.9%. With respect to representation in science decision-making, in 2010 15.5% of higher education institutions were headed by women and 10% of universities had a female rector. [ 141 ]
Membership on science boards also remains predominantly male, with women making up 36% of board members. The EU has engaged in a major effort to integrate female researchers and gender research into its research and innovation strategy since the mid-2000s. Increases in women's representation across scientific fields indicate that this effort has met with some success; however, the continued under-representation of women at the top level of faculties, management and science decision-making indicates that more work needs to be done. The EU is addressing this through a gender equality strategy and a cross-cutting mandate in Horizon 2020 , its research and innovation funding programme for 2014–2020. [ 141 ]
In 2013, women made up the majority of PhD graduates in fields related to health in Australia (63%), New Zealand (58%) and the United States of America (73%). The same can be said of agriculture, in New Zealand's case (73%). Women have also achieved parity in agriculture in Australia (50%) and the United States (44%). Just one in five women graduate in engineering in the latter two countries, a situation that has not changed over the past decade. In New Zealand, women jumped from constituting 39% to 70% of agricultural graduates (all levels) between 2000 and 2012 but ceded ground in science (43–39%), engineering (33–27%) and health (80–78%). As for Canada, it has not reported sex-disaggregated data for women graduates in science and engineering in recent years. Moreover, none of the four countries mentioned here have reported recent data on the share of female researchers. [ 141 ]
South Asia is the region where women make up the smallest proportion of researchers: 17%. This is 13 percentage points below sub-Saharan Africa. Of those countries in South Asia reporting data for 2009–2013, Nepal has the lowest representation of all (in head counts), at 8% (2010), a substantial drop from 15% in 2002. In 2013, only 14% of researchers (in full-time equivalents) were women in the region's most populous country, India, down slightly from 15% in 2009. The percentage of female researchers is highest in Sri Lanka (39%), followed by Pakistan: 24% in 2009, 31% in 2013. There are no recent data available for Afghanistan or Bangladesh. [ 141 ]
Women are most present in the private non-profit sector – they make up 60% of employees in Sri Lanka – followed by the academic sector: 30% of Pakistani and 42% of Sri Lankan female researchers. Women tend to be less present in the government sector and least likely to be employed in the business sector, accounting for 23% of employees in Sri Lanka, 11% in India and just 5% in Nepal. Women have achieved parity in science in both Sri Lanka and Bangladesh but are less likely to undertake research in engineering. They represent 17% of the research pool in Bangladesh and 29% in Sri Lanka. Many Sri Lankan women have followed the global trend of opting for a career in agricultural sciences (54%) and they have also achieved parity in health and welfare. In Bangladesh, just over 30% choose agricultural sciences and health, which goes against the global trend. Although Bangladesh still has progress to make, the share of women in each scientific field has increased steadily over the past decade. [ 141 ]
Southeast Asia presents a different picture entirely, with women basically on a par with men in some countries: they make up 52% of researchers in the Philippines and Thailand, for example. Other countries are close to parity, such as Malaysia and Vietnam, whereas Indonesia and Singapore are still around the 30% mark. Cambodia trails its neighbours at 20%. Female researchers in the region are spread fairly equally across the sectors of participation, with the exception of the private sector, where they make up 30% or less of researchers in most countries.
The proportion of women tertiary graduates reflects these trends, with high percentages of women in science in Brunei Darussalam, Malaysia, Myanmar and the Philippines (around 60%) and a low of 10% in Cambodia. Women make up the majority of graduates in health sciences, from 60% in Laos to 81% in Myanmar – Vietnam being an exception at 42%. Women graduates are on a par with men in agriculture but less present in engineering: Vietnam (31%), the Philippines (30%) and Malaysia (39%); here, the exception is Myanmar, at 65%. In the Republic of Korea, women make up about 40% of graduates in science and agriculture and 71% of graduates in health sciences, yet account for only 18% of researchers overall. This represents a loss on the investment made in educating girls and women up through tertiary education, a result of traditional views of women's role in society and in the home. Kim and Moon (2011) remark on the tendency of Korean women to withdraw from the labour force to take care of children and assume family responsibilities, calling it a 'domestic brain drain'. [ 141 ]
Women remain very much a minority in Japanese science (15% in 2013), although the situation has improved slightly (13% in 2008) since the government fixed a target in 2006 of raising the ratio of female researchers to 25%. Calculated on the basis of the current number of doctoral students, the government hopes to obtain a 20% share of women in science, 15% in engineering and 30% in agriculture and health by the end of the current Basic Plan for Science and Technology in 2016. In 2013, Japanese female researchers were most common in the public sector in health and agriculture, where they represented 29% of academics and 20% of government researchers. In the business sector, just 8% of researchers were women (in head counts), compared to 25% in the academic sector. In other public research institutions, women accounted for 16% of researchers. One of the main thrusts of Abenomics , Japan's current growth strategy, is to enhance the socio-economic role of women. Consequently, the selection criteria for most large university grants now take into account the proportion of women among teaching staff and researchers. [ 141 ]
The low ratio of women researchers in Japan and the Republic of Korea, which both have some of the highest researcher densities in the world, brings the regional average for the share of women among researchers down to 22.5%. [ 141 ]
At 37%, the share of female researchers in the Arab States compares well with other regions. The countries with the highest proportion of female researchers are Bahrain and Sudan, at around 40%. Jordan, Libya, Oman, Palestine and Qatar have shares in the low twenties. The country with the lowest participation of female researchers is Saudi Arabia, at just 1.4%, even though women make up the majority of its tertiary graduates; the figure covers only the King Abdulaziz City for Science and Technology, however. Female researchers in the region are primarily employed in government research institutes, with some countries also seeing a high participation of women in private non-profit organizations and universities. [ 159 ] With the exception of Sudan (40%) and Palestine (35%), fewer than one in four researchers in the business enterprise sector is a woman; in half of the countries reporting data, there are barely any women at all employed in this sector. [ 141 ]
Despite these variable numbers, the percentage of female tertiary-level graduates in science and engineering is very high across the region, which indicates a substantial drop-off between graduation and employment in research. Women make up half or more of science graduates in all countries but Sudan, and over 45% of agriculture graduates in eight of the 15 countries reporting data, namely Algeria, Egypt, Jordan, Lebanon, Sudan, Syria, Tunisia and the United Arab Emirates. In engineering, women make up over 70% of graduates in Oman, with rates of 25–38% in the majority of the other countries, which is high in comparison to other regions. [ 141 ]
The participation of women is somewhat lower in health than in other regions, possibly on account of cultural norms restricting interactions between males and females. Iraq and Oman have the lowest percentages (mid-30s), whereas Iran, Jordan, Kuwait, Palestine and Saudi Arabia are at gender parity in this field. The United Arab Emirates and Bahrain have the highest rates of all: 83% and 84%. [ 141 ]
Once Arab women scientists and engineers graduate, they may come up against barriers to finding gainful employment. These include a misalignment between university programmes and labour market demand (a phenomenon which also affects men), a lack of awareness about what a career in their chosen field entails, family bias against working in mixed-gender environments and a lack of female role models. [ 141 ] [ 160 ]
One of the countries with the smallest female labour force is developing technical and vocational education for girls as part of a wider scheme to reduce dependence on foreign labour. By 2017, the Technical and Vocational Training Corporation of Saudi Arabia is to have constructed 50 technical colleges, 50 girls' higher technical institutes and 180 industrial secondary institutes. The plan is to create training placements for about 500,000 students, half of them girls. Boys and girls will be trained in vocational professions that include information technology, medical equipment handling, plumbing, electricity and mechanics. [ 141 ]
Just under one in three (30%) researchers in sub-Saharan Africa is a woman. Much of sub-Saharan Africa is seeing solid gains in the share of women among tertiary graduates in scientific fields. In two of the top four countries for women's representation in science, women graduates are part of very small cohorts, however: they make up 54% of Lesotho 's 47 tertiary graduates in science and 60% of those in Namibia 's graduating class of 149. South Africa and Zimbabwe , which have larger graduate populations in science, have achieved parity, with 49% and 47% respectively. The next grouping clusters seven countries poised at around 35–40% ( Angola , Burundi , Eritrea , Liberia , Madagascar , Mozambique and Rwanda ). The rest are grouped around 30% or below ( Benin , Ethiopia , Ghana , Swaziland and Uganda ). Burkina Faso ranks lowest, with women making up 18% of its science graduates. [ 141 ]
Female representation in engineering is fairly high in sub-Saharan Africa in comparison with other regions. In Mozambique and South Africa, for instance, women make up 34% and 28% of engineering graduates, respectively. Numbers of female graduates in agricultural science have been increasing steadily across the continent, with eight countries reporting a share of women graduates of 40% or more (Lesotho, Madagascar, Mozambique, Namibia, Sierra Leone , South Africa, Swaziland and Zimbabwe). In health, this rate ranges from 26% and 27% in Benin and Eritrea, respectively, to 94% in Namibia. [ 141 ]
Of note is that women account for a relatively high proportion of researchers employed in the business enterprise sector in South Africa (35%), Kenya (34%), Botswana and Namibia (33%) and Zambia (31%). Female participation in industrial research is lower in Uganda (21%), Ethiopia (15%) and Mali (12%). [ 141 ]
From the twentieth century to the present day, more and more women have been acknowledged for their work in science. However, women often find themselves at odds with the expectations held of them in relation to their scientific work. [ 161 ] [ 162 ] For example, in 1968 James Watson questioned scientist Rosalind Franklin's place in the field, claiming that "the best place for a feminist was in another person's lab". [ 110 ] : 76–77 Women were, and still are, often critiqued on their overall presentation. [ 163 ] In Franklin's case, she was seen as lacking femininity because she did not wear lipstick or revealing clothing. [ 110 ] : 76–77
Since most of a woman's colleagues in science are men who do not see her as a true social peer, she will also find herself left out of opportunities to discuss possible research outside of the laboratory. In her book Has Feminism Changed Science? , Londa Schiebinger notes that men would discuss their research outside of the lab, but that such conversation was preceded by culturally "masculine" small-talk topics that, whether intentionally or not, excluded women raised with their culture's feminine gender role. [ 110 ] : 81–91 Excluding many women from these after-hours work discussions produced a more separate work environment for men and women in science, as women would then converse mainly with other women about their current findings and theories. Ultimately, the women's work was devalued because no male scientist was involved in the overall research and analysis.
According to Oxford University Press, the inequality toward women is "endorsed within cultures and entrenched within institutions [that] hold power to reproduce that inequality". [ 164 ] Various gendered barriers in social networks prevent women from working in male-dominated fields and top management jobs. Social networks are based on cultural beliefs such as schemas and stereotypes. [ 164 ] According to social psychology studies, top management jobs are more likely to carry incumbent schemas that favor "an achievement-oriented aggressiveness and emotional toughness that is distinctly male in character". [ 164 ] Gender stereotypes of feminine style assume women to be conforming and submissive to male culture, creating a perception that women are unqualified for top management jobs. In attempting to demonstrate competence and power, women can still be seen as unlikeable and untrustworthy, even if they excel at traditionally "masculine" tasks. [ 164 ] In addition, women's achievements are likely to be dismissed or discredited, [ 164 ] and such "untrustworthy, dislikable" women may well have been denied advancement out of men's fear of a woman overtaking their management positions. Social networks and gender stereotypes thus produce many of the injustices women experience in the workplace, as well as the obstacles they encounter when trying to advance in male-dominated and top management jobs. Women in professions like science, technology, and other related industries are likely to encounter these gendered barriers in their careers. [ 164 ]
While there has been a push to encourage more women to participate in science, there is less outreach to lesbian, bi, or gender nonconforming women, and gender nonconforming people more broadly. [ 165 ] Owing to the lack of data and statistics on LGBTQ involvement in STEM fields, it is unknown to what exact degree lesbian and bisexual women and gender nonconforming people (transgender, nonbinary/agender, or those who eschew the gender system altogether) are even more repressed and underrepresented than their straight peers, but a general lack of out lesbian and bi women in STEM has been noted. [ 165 ] [ 166 ] Suggested reasons for the under-representation of same-sex attracted women and gender nonconforming people in STEM fields include a lack of role models in K–12 education, [ 165 ] [ 166 ] [ 167 ] the desire of some transgender girls and women to adopt traditional heteronormative gender roles (gender being a cultural performance and a socially determined, subjective internal experience), [ 168 ] [ 169 ] employment discrimination, and the possibility of sexual harassment in the workplace. Historically, women who accepted STEM research positions with the government or the military remained in the closet due to the lack of federal protections, or because homosexual or gender nonconforming expression was criminalized in their country. A notable example is Sally Ride , a physicist, the first (and youngest) American woman in space, and a lesbian. [ 170 ] [ 171 ] Ride did not reveal her sexuality during her lifetime; it was purposefully disclosed in her obituary after her death in 2012. [ 171 ] Familiar with the anti-homosexual policies of the "male-dominated" NASA at the time of her space travel, she chose to keep her sexuality to herself. [ 171 ] She also founded her own company, Sally Ride Science, to encourage young girls to enter STEM fields; her legacy continues as the company still works to increase the participation of girls and women in STEM. [ 172 ]
In a nationwide study of LGBTQA employees in STEM fields in the United States, same-sex attracted and gender nonconforming women in engineering, earth sciences, and mathematics reported being less likely to be out in the workplace. [ 173 ] In general, LGBTQA people in this survey reported that the more female or feminine gender role-identified people worked in their labs, the more accepting and safe the work environment was. [ 173 ] In another study of over 30,000 LGBT employees in STEM-related federal agencies in the United States, queer women in these agencies reported feeling isolated in the workplace and having to work harder than their gender conforming male colleagues. This isolation and overachievement remained constant as they earned supervisory positions and worked their way up the ladder. [ 174 ] Gender nonconforming people in physics, particularly those identified as trans women in physics programs and labs, felt the most isolated and perceived the most hostility. [ 175 ]
Organizations such as Lesbians Who Tech , Out to Innovate , Out in Science, Technology, Engineering and Mathematics (OSTEM), Pride in STEM , and House of STEM provide networking and mentoring opportunities for lesbian girls and women and LGBT people interested in or currently working in STEM fields. These organizations also advocate for the rights of lesbian and bi women and gender nonconformists in STEM in education and the workplace. [ 176 ]
Margaret Rossiter , an American historian of science, offered three concepts to explain the patterns behind these statistics and how they have disadvantaged women in science. The first concept is hierarchical segregation: [ 177 ] the well-known phenomenon that the higher the level of power and prestige, the smaller the proportion of women participating. These hierarchical differences mean that fewer women participate at the higher levels of both academia and industry. Based on data collected in 1982, women earned 54 percent of all bachelor's degrees in the United States, with 50 percent of these in science. The source also indicated that this number increased almost every year. [ 178 ] As of 2020, women were earning 57.3 percent of all bachelor's degrees, with 38.6 percent of these in a STEM field. [ 179 ]
The second concept in Rossiter's explanation of women in science is territorial segregation . [ 110 ] : 34–35 The term refers to the way female employment is often clustered in specific industries or in specific categories within industries: women stayed at home or took employment in "feminine" fields while men left the home to work. Although nearly half of the civilian workforce is female, women still hold the majority of low-paid jobs or jobs that society considers feminine. Statistics show that 60 percent of white professional women are nurses, daycare workers, or schoolteachers. [ 180 ]
Researchers have collected data on many differences between women and men in science. Rossiter found that in 1966, thirty-eight percent of female scientists held master's degrees compared to twenty-six percent of male scientists, but large proportions of female scientists worked in environmental and nonprofit organizations. [ 181 ] During the late 1960s and 1970s, equal-rights legislation made the number of female scientists rise dramatically. [ 182 ] The proportion of science degrees awarded to women rose from seven percent in 1970 to twenty-four percent in 1985, and whereas only 385 women received bachelor's degrees in engineering in 1975, 11,000 did in 1985. Elizabeth Finkel claims that even as the number of women participating in scientific fields increases, their opportunities remain limited. [ 183 ] Another researcher, Harriet Zuckerman , claims that when a woman and a man have similar abilities for a job, the probability of the woman getting the job is lower. [ 184 ] Finkel agrees, saying, "In general, while women and men seem to be completing doctorates with similar credentials and experience, the opposition and rewards they find are not comparable. Women tend to be treated with less salary and status; many policy makers notice this phenomenon and try to rectify the unfair situation for women participating in scientific fields." [ 181 ]
Despite women's tendency to perform better than men academically, factors such as stereotyping, lack of information, and family influence have been found to affect women's involvement in science. Stereotyping has an effect because people associate characteristics such as nurturing, kindness, and warmth, or characteristics such as strength and power, with a particular gender; these associations lead people to assume that certain jobs are more suitable for one gender. [ 185 ] Lack of information is something that many institutions have worked over the years to improve, through programs such as the IFAC project [ 186 ] (Information for a choice: empowering women through learning for scientific and technological career paths), which investigated the low participation of women in science and technology fields from high school to university level. The idea is that if women are fully informed of their career choices and employability, they will be more inclined to pursue STEM field jobs. Not all efforts have been successful: the "Science: it's a girl thing" campaign, which has since been removed, received backlash for suggesting that women must partake in "girly" or "feminine" activities. [ 186 ] Women also struggle with a lack of role models of women in science. [ 186 ] Family influence depends on education level, economic status, and belief system. [ 187 ] The education level of a student's parents matters because people with higher education often place a different value on education than those without. Parents can also influence children to follow in their footsteps and pursue a similar occupation; among women in particular, a mother's line of work has been found to correlate with her daughter's. [ 188 ] Economic status can influence what kind of higher education a student receives, depending on whether they are work bound or college bound: a work-bound student may choose a shorter career path to begin earning money quickly or for lack of time. The belief system of a household can also have a large impact, as some countries still impose regulations on women's occupation, clothing, and curfew that limit career choices. Parental influence is also relevant because parents tend to want their children to fulfill what they themselves could not have as children. [ 187 ] Studies show that women are at a disadvantage because they must not only overcome societal norms but also outperform men to receive the same recognition. [ 189 ]
That sexism is alive and well in science is known. ... Even in the life sciences , where men and women start careers in fairly equal numbers, the number of women drops off rapidly at professorial level. On average, fewer than one in five science professors are female. Science punishes career breaks, and women who take time off to have children are immediately disadvantaged. "The flashpoint is when you’re about 35 and trying to get tenure. That can be when you’re trying to have kids, and it can play a major role in why you see so much attrition at that stage," said Jennifer Rohn , a cell biologist at University College London . A grant may give a woman a year’s grace if she has a baby, but it takes longer to get back into research projects than that. [ 190 ]
A number of organizations have been set up to combat the stereotyping that may encourage girls away from careers in these areas. In the UK The WISE Campaign (Women into Science, Engineering and Construction) and the UKRC (The UK Resource Centre for Women in SET) are collaborating to ensure industry, academia and education are all aware of the importance of challenging the traditional approaches to careers advice and recruitment that mean some of the best brains in the country are lost to science. The UKRC and other women's networks provide female role models, resources and support for activities that promote science to girls and women. The Women's Engineering Society , a professional association in the UK, has been supporting women in engineering and science since 1919. In computing, the British Computer Society group BCSWomen is active in encouraging girls to consider computing careers, and in supporting women in the computing workforce.
In the United States, the Association for Women in Science is one of the most prominent organizations for professional women in science. In 2011, the Scientista Foundation was created to empower pre-professional college and graduate women in science, technology, engineering and mathematics (STEM) to stay on the career track. There are also several organizations focused on increasing mentorship from a younger age. One of the best known groups is Science Club for Girls , which pairs undergraduate mentors with high school and middle school mentees. This model of pairing undergraduate college mentors with younger students is quite popular. In addition, many young women are creating programs to boost participation in STEM at a younger level, either through conferences or competitions.
In efforts to make women scientists more visible to the general public, the Grolier Club in New York hosted a "landmark exhibition" in 2013 titled "Extraordinary Women in Science & Medicine: Four Centuries of Achievement", showcasing the lives and works of 32 women scientists. [ 191 ] The National Institute for Occupational Safety and Health (NIOSH) developed a video series highlighting the stories of female researchers at NIOSH. [ 192 ] Each of the women featured in the videos shares her journey into science, technology, engineering, or math (STEM) and offers encouragement to aspiring scientists. [ 192 ] NIOSH also partners with external organizations to introduce individuals to scientific disciplines and funds several science-based training programs across the country. [ 193 ] [ 194 ]
Creative Resilience: Art by Women in Science is a multi–media exhibition and accompanying publication, produced in 2021 by the Gender Section of the United Nations Educational, Scientific and Cultural Organization ( UNESCO ). The project aims to give visibility to women, both professionals and university students, working in science, technology, engineering and mathematics ( STEM ). With short biographical information and graphic reproductions of their artworks dealing with the Covid-19 pandemic and accessible online, the project provides a platform for women scientists to express their experiences, insights, and creative responses to the pandemic. [ 195 ]
In 2013, journalist Christie Aschwanden noted that a type of media coverage of women scientists that "treats its subject's sex as her most defining detail" was still prevalent. She proposed a checklist, the " Finkbeiner test ", [ 196 ] to help avoid this approach. [ 197 ] It was cited in the coverage of a much-criticized 2013 New York Times obituary of rocket scientist Yvonne Brill that began with the words: "She made a mean beef stroganoff". [ 198 ] Women are often poorly portrayed in film . [ 199 ] The misrepresentation of women scientists in film, television and books can influence children to engage in gender stereotyping. This was seen in a 2007 meta-analysis conducted by Jocelyn Steinke and colleagues from Western Michigan University where, after engaging elementary school students in a Draw-a-Scientist Test , out of 4,000 participants only 28 girls drew female scientists. [ 200 ]
A study conducted at Lund University in 2010 and 2011 analysed the genders of invited contributors to News & Views in Nature and Perspectives in Science . It found that 3.8% of the Earth and environmental science contributions to News & Views were written by women even while the field was estimated to be 16–20% female in the United States. Nature responded by suggesting that, worldwide, a significantly lower number of Earth scientists were women, but nevertheless committed to address any disparity. [ 201 ]
In 2012, a journal article published in Proceedings of the National Academy of Sciences (PNAS) reported a gender bias among science faculty. [ 202 ] Faculty were asked to review a resume from a hypothetical student and report how likely they would be to hire or mentor that student, as well as what they would offer as a starting salary. Two resumes were distributed randomly to the faculty, differing only in the name at the top (John or Jennifer). The male student was rated as significantly more competent, more likely to be hired, and more likely to be mentored, and the median starting salary offered to the male student was more than $3,000 higher than that offered to the female student. Both male and female faculty exhibited this gender bias. This study suggests bias may partly explain the persistent deficit in the number of women at the highest levels of scientific fields. Another study reported that men are favored in some domains, such as biology tenure rates, but that the majority of domains were gender-fair; the authors interpreted this to suggest that the under-representation of women in the professorial ranks was not solely caused by sexist hiring, promotion, and remuneration. [ 203 ] In April 2015, Williams and Ceci published a set of five national experiments showing that hypothetical female applicants were favored by faculty for assistant professorships over identically qualified men by a ratio of 2 to 1. [ 204 ]
In 2014, a controversy over the depiction of pinup women on Rosetta project scientist Matt Taylor's shirt during a press conference raised questions of sexism within the European Space Agency. [ 205 ] The shirt, which featured cartoon women with firearms, led to an outpouring of criticism and an apology after which Taylor "broke down in tears." [ 170 ]
In 2015, stereotypes about women in science were directed at Fiona Ingleby, research fellow in evolution, behavior, and environment at the University of Sussex , and Megan Head, postdoctoral researcher at the Australian National University , when they submitted a paper analyzing the progression of PhD graduates to postdoctoral positions in the life sciences to the journal PLOS ONE . [ 206 ] The authors received an email on 27 March informing them that their paper had been rejected due to its poor quality. [ 206 ] The email contained comments from an anonymous reviewer, including the suggestion that male authors be added in order to improve the quality of the science and to ensure that incorrect interpretations of the data were not included. [ 206 ] Ingleby posted excerpts from the email on Twitter on 29 April, bringing the incident to the attention of the public and media. [ 206 ] The editor was dismissed from the journal and the reviewer was removed from the list of potential reviewers. A spokesman from PLOS apologized to the authors and said they would be given the opportunity to have the paper reviewed again. [ 206 ]
On 9 June 2015, Nobel prize winning biochemist Tim Hunt spoke at the World Conference of Science Journalists in Seoul . Prior to applauding the work of women scientists, he described emotional tension, saying "you fall in love with them, they fall in love with you, and when you criticise them they cry." [ 207 ] Initially, his remarks were widely condemned and he was forced to resign from his position at University College London . However, multiple conference attendees gave accounts, including a partial transcript and a partial recording, maintaining that his comments were understood to be satirical before being taken out of context by the media. [ 208 ]
In 2016, an article published in JAMA Dermatology reported a significant and dramatic downward trend in the number of NIH-funded women investigators in the field of dermatology, and that the gender gap between male and female NIH-funded dermatology investigators was widening. The article concluded that this disparity was likely due to a lack of institutional support for women investigators. [ 209 ]
In January 2005, Harvard University President Lawrence Summers sparked controversy at a National Bureau of Economic Research (NBER) Conference on Diversifying the Science & Engineering Workforce. Summers offered his explanation for the shortage of women in senior posts in science and engineering, suggesting that the lower numbers of women in high-level science positions may in part be due to innate differences in abilities or preferences between men and women. Referring to the field of behavioral genetics, he noted the generally greater variability among men (compared to women) on tests of cognitive abilities, [ 210 ] [ 211 ] [ 212 ] leading to proportionally more men than women at both the lower and upper tails of the test score distributions. In his discussion of this, Summers said that "even small differences in the standard deviation [between genders] will translate into very large differences in the available pool substantially out [from the mean]". [ 213 ] Summers concluded his discussion by saying: [ 213 ]
So my best guess, to provoke you, of what's behind all of this is that the largest phenomenon, by far, is the general clash between people's legitimate family desires and employers' current desire for high power and high intensity, that in the special case of science and engineering, there are issues of intrinsic aptitude, and particularly of the variability of aptitude, and that those considerations are reinforced by what are in fact lesser factors involving socialization and continuing discrimination.
Despite his protégée, Sheryl Sandberg , defending Summers' actions, and despite Summers offering his own apology repeatedly, the Harvard Graduate School of Arts and Sciences passed a motion of "lack of confidence" in the leadership of Summers, who had allowed tenure offers to women to plummet after taking office in 2001. [ 213 ] The year before he became president, Harvard extended 13 of its 36 tenure offers to women; by 2004 those numbers had dropped to 4 of 32, with several departments lacking even a single tenured female professor. [ 214 ] This controversy is speculated to have contributed significantly to Summers' resignation from his position at Harvard the following year. | https://en.wikipedia.org/wiki/History_of_women_in_science
HitPredict is a database of high-confidence protein-protein interactions . [ 1 ] | https://en.wikipedia.org/wiki/HitPredict
In high-throughput screening (HTS), one of the major goals is to select compounds (including small molecules , siRNAs , shRNAs , genes , etc.) with a desired size of inhibition or activation effect. A compound with a desired size of effect in an HTS screen is called a hit, and the process of selecting hits is called hit selection .
HTS experiments have the ability to screen tens of thousands (or even millions) of compounds rapidly. Hence, it is a challenge to glean chemical/biochemical significance from mounds of data in the process of hit selection. To address this challenge, appropriate analytic methods have been adopted for hit selection. There are two main strategies of selecting hits with large effects. [ 1 ] One is to use certain metric(s) to rank and/or classify the compounds by their effects and then to select the largest number of potent compounds that is practical for validation assays . [ 2 ] [ 3 ] The other strategy is to test whether a compound has effects strong enough to reach a pre-set level. In this strategy, false-negative rates (FNRs) and/or false-positive rates (FPRs) must be controlled. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
There are two major types of HTS experiments, one without replicates (usually in primary screens) and one with replicates (usually in confirmatory screens). The analytic methods for hit selection differ between these two types. For example, the z-score method is suitable for screens without replicates whereas the t-statistic is suitable for screens with replicates. The calculation of SSMD for screens without replicates also differs from that for screens with replicates. [ 1 ]
There are many metrics used for hit selection in primary screens without replicates.
The easily interpretable ones are fold change, mean difference, percent inhibition, and percent activity. However, the drawback common to all of these metrics is that they do not capture data variability effectively. To address this issue, researchers then turned to the z-score method or SSMD , which can capture data variability in negative references. [ 12 ] [ 13 ]
The z-score method is based on the assumption that the measured values (usually fluorescent intensity in log scale) of all investigated compounds in a plate have a normal distribution. SSMD also works the best under the normality assumption. However, true hits with large effects should behave very different from the majority of the compounds and thus are outliers. Strong assay artifacts may also behave as outliers. Thus, outliers are not uncommon in HTS experiments. The regular versions of z-score and SSMD are sensitive to outliers and can be problematic. Consequently, robust methods such as the z*-score method, SSMD *, B-score method, and quantile-based method have been proposed and adopted for hit selection in primary screens without replicates. [ 14 ] [ 15 ]
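To make these robust alternatives concrete, here is a minimal sketch (in Python, with illustrative variable names and an assumed ±3 cutoff, none of which come from the article itself) of the plain z-score and the outlier-resistant z*-score for a primary screen plate without replicates:

```python
import numpy as np

def z_scores(values):
    """Plain z-score: assumes all compounds share the plate's normal spread."""
    values = np.asarray(values, dtype=float)
    return (values - values.mean()) / values.std(ddof=1)

def z_star_scores(values):
    """Robust z*-score: median and MAD replace mean and standard deviation,
    so a handful of strong hits (outliers) cannot inflate the spread estimate."""
    values = np.asarray(values, dtype=float)
    med = np.median(values)
    mad = np.median(np.abs(values - med)) * 1.4826  # scale MAD to sigma under normality
    return (values - med) / mad

# Toy plate: log-intensities with two strong hits among inactive compounds
rng = np.random.default_rng(0)
plate = np.concatenate([rng.normal(10.0, 0.5, 382), [14.0, 6.2]])
hits = np.flatnonzero(np.abs(z_star_scores(plate)) > 3)  # illustrative cutoff
print(f"{hits.size} candidate hits at |z*| > 3")
```

The MAD-based spread estimate is what keeps a few strong hits from masking themselves by inflating the plate's apparent variability.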
In a primary screen without replicates, every compound is measured only once, so we cannot directly estimate the data variability for each compound. Instead, we indirectly estimate data variability by making the strong assumption that every compound has the same variability as a negative reference in a plate in the screen. The z-score, z*-score and B-score rely on this strong assumption, as do the SSMD and SSMD* for cases without replicates.
In a screen with replicates, we can directly estimate data variability for each compound, and thus we can use more powerful methods, such as the SSMD for cases with replicates and the t-statistic , which do not rely on the strong assumption that the z-score and z*-score require. One issue with the use of the t-statistic and associated p-values is that they are affected by both sample size and effect size. [ 16 ] They come from testing for no mean difference and thus are not designed to measure the size of small molecule or siRNA effects. For hit selection, the major interest is the size of effect in a tested small molecule or siRNA . SSMD directly assesses the size of effects [ 17 ] and has been shown to be better than other commonly used effect sizes. [ 18 ] The population value of SSMD is comparable across experiments, so the same cutoff for the population value of SSMD can be used to measure the size of siRNA effects. [ 19 ]
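As a concrete contrast between the two statistics, the sketch below computes the method-of-moments SSMD estimate and the Welch t-statistic for a compound measured in triplicate against a negative control; the function names and data are illustrative assumptions:

```python
import numpy as np

def ssmd_mm(compound, negative_ref):
    """Method-of-moments SSMD for two independent groups:
    difference of means over the standard deviation of the difference."""
    x, y = np.asarray(compound, float), np.asarray(negative_ref, float)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) + y.var(ddof=1))

def t_statistic(compound, negative_ref):
    """Welch t-statistic: same numerator, but the denominator shrinks with
    sample size, so t grows with replicates even at a fixed effect size."""
    x, y = np.asarray(compound, float), np.asarray(negative_ref, float)
    return (x.mean() - y.mean()) / np.sqrt(x.var(ddof=1) / x.size + y.var(ddof=1) / y.size)

# Illustrative triplicate measurements (log intensities)
siRNA = [8.1, 7.9, 8.3]
control = [10.0, 10.2, 9.8]
print(f"SSMD = {ssmd_mm(siRNA, control):.2f}, t = {t_statistic(siRNA, control):.2f}")
```

With the same mean difference, adding replicates shrinks the t-statistic's denominator but leaves the SSMD estimate centered on the same population effect size, which is why SSMD is the one that measures effect magnitude.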
SSMD can overcome the drawback of average fold change not being able to capture data variability. On the other hand, because SSMD is the ratio of mean to standard deviation, we may get a large SSMD value when the standard deviation is very small, even if the mean is small. In some cases, a mean value that is too small may have no biological impact, so compounds with large SSMD values (or differentiations) but very small mean values may not be of interest. The concept of the dual-flashlight plot has been proposed to address this issue. In a dual-flashlight plot , we plot the SSMD versus average log fold-change (or average percent inhibition/activation) on the y- and x-axes, respectively, for all compounds investigated in an experiment. [ 19 ] With the dual-flashlight plot, we can see how the genes or compounds are distributed into each category of effect size, while also seeing the average fold-change for each compound. [ 19 ] [ 20 ]
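A dual-flashlight plot can be drawn directly from the two per-compound summaries. The sketch below simulates SSMD and average log fold-change values rather than computing them from real screen data, and the dashed thresholds are illustrative assumptions, not standard cutoffs:

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated per-compound summaries; in practice these come from the
# SSMD and fold-change calculations described above.
rng = np.random.default_rng(1)
log_fc = rng.normal(0.0, 0.4, 500)
ssmd = 3.0 * log_fc + rng.normal(0.0, 0.5, 500)

plt.scatter(log_fc, ssmd, s=8, alpha=0.5)
plt.axhline(3.0, ls="--", c="gray")   # illustrative SSMD threshold
plt.axvline(1.0, ls="--", c="gray")   # illustrative fold-change threshold
plt.xlabel("average log fold-change")
plt.ylabel("SSMD")
plt.title("Dual-flashlight plot (simulated data)")
plt.show()
```
| https://en.wikipedia.org/wiki/Hit_selection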
Hit to lead ( H2L ), also known as lead generation , is a stage in early drug discovery in which small molecule hits from a high throughput screen (HTS) are evaluated and undergo limited optimization to identify promising lead compounds . [ 1 ] [ 2 ] These lead compounds undergo more extensive optimization in a subsequent step of drug discovery called lead optimization (LO). [ 3 ] [ 4 ] The drug discovery process generally follows a path that includes a hit to lead stage:
The hit to lead stage starts with confirmation and evaluation of the initial screening hits and is followed by synthesis of analogs (hit expansion). Typically the initial screening hits display binding affinities for their biological target in the micromolar (10 −6 molar concentration ) range. Through limited H2L optimization, the affinities of the hits are often improved by several orders of magnitude, into the nanomolar (10 −9 M) range. The hits also undergo limited optimization to improve metabolic half-life , so that the compounds can be tested in animal models of disease, and to improve selectivity against binding to other biological targets that may result in undesirable side effects.
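To put these orders of magnitude in concrete terms, the snippet below converts hypothetical dissociation constants into a fold improvement and pKd values; the numbers and the function name are illustrative assumptions, not data from any particular campaign:

```python
import math

def fold_improvement(kd_hit_molar, kd_lead_molar):
    """Fold gain in binding affinity when optimization lowers Kd."""
    return kd_hit_molar / kd_lead_molar

kd_hit, kd_lead = 5e-6, 20e-9   # hypothetical: 5 uM screening hit -> 20 nM lead
print(f"{fold_improvement(kd_hit, kd_lead):.0f}-fold improvement")      # 250-fold
print(f"pKd: {-math.log10(kd_hit):.1f} -> {-math.log10(kd_lead):.1f}")  # 5.3 -> 7.7
```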
On average, only one in every 5,000 compounds that enters drug discovery to the stage of preclinical development becomes an approved drug. [ 5 ]
After hits are identified from a high throughput screen, the hits are confirmed and evaluated using the following methods:
Following hit confirmation, several compound clusters will be chosen according to their characteristics in the previously defined tests. An ideal compound cluster will contain members that possess:
The project team will usually select between three and six compound series to be further explored. The next step will allow the testing of analogous compounds to determine a quantitative structure-activity relationship (QSAR). Analogs can be quickly selected from an internal library or purchased from commercially available sources ("SAR by catalog" or "SAR by purchase"). Medicinal chemists will also start synthesizing related compounds using different methods such as combinatorial chemistry , high-throughput chemistry, or more classical organic chemistry synthesis.
The objective of this drug discovery phase is to synthesize lead compounds : new analogs with improved potency, reduced off-target activities, and physicochemical/metabolic properties suggestive of reasonable in vivo pharmacokinetics . [ 7 ] [ 8 ] This optimization is accomplished through chemical modification of the hit structure, with modifications chosen by employing knowledge of the structure–activity relationship (SAR) as well as structure-based design if structural information about the target is available.
Lead optimization is concerned with experimental testing and confirmation of the compound based on animal efficacy models and ADMET ( in vitro and in situ ) tools that may be followed by target identification and target validation.
For educational purposes the European Federation for Medicinal Chemistry and Chemical Biology (EFMC) shared a series of webinars including 'Best Practices for Hit Finding' as well as 'Hit Generation Case Studies'. [ 9 ] | https://en.wikipedia.org/wiki/Hit_to_lead |
The Hitachi SR8000 is a high-performance supercomputer manufactured by Hitachi c. 2001. It comprises 4 to 512 nodes, each containing multiple Hitachi RISC microprocessors. [ 1 ] Cooperating microprocessors are assigned to the same address space for synchronization within each node. [ 2 ]
In 2002, Yasumasa Kanada calculated the decimal expansion of pi to 1.24 trillion digits using this model. [ 3 ] | https://en.wikipedia.org/wiki/Hitachi_SR8000
Hitchens's razor is an epistemological razor that serves as a general rule for rejecting certain knowledge claims. It states:
What can be asserted without evidence can also be dismissed without evidence. [ 1 ] [ 2 ] [ 3 ] [ a ]
The razor is credited to author and journalist Christopher Hitchens , although its provenance can be traced to the Latin Quod gratis asseritur, gratis negatur ("What is asserted gratuitously is denied gratuitously"). [ 4 ] It implies that the burden of proof regarding the truthfulness of a claim lies with the one who makes the claim; if this burden is not met, then the claim is unfounded, and its opponents need not argue further in order to dismiss it. Hitchens used this phrase specifically in the context of refuting religious belief. [ 3 ] : 258
The dictum appears in Hitchens's 2007 book God Is Not Great: How Religion Poisons Everything . [ 3 ] : 150, 258 The term "Hitchens's razor" itself first appeared (as " Hitchens ' razor") in an online forum in October 2007, and was used by atheist blogger Rixaeton in December 2010, and popularised by, among others, evolutionary biologist and atheist activist Jerry Coyne after Hitchens died in December 2011. [ 5 ] [ 6 ] [ 7 ]
Some pages earlier in God Is Not Great , Hitchens also invoked Occam's razor . William Ockham devised a principle of economy , popularly known as Ockham's razor , which relied for its effect on disposing of unnecessary assumptions and accepting the first sufficient explanation or cause: "Do not multiply entities beyond necessity." This principle extends itself: "Everything which is explained through positing something different from the act of understanding can be explained without positing such a distinct thing." [ 3 ] : 119
In 2007, Michael Kinsley observed in The New York Times that Hitchens was rather fond of applying Occam's razor to religious claims, [ 8 ] [ b ] and according to The Wall Street Journal 's Jillian Melchior in 2017, the phrase "What can be asserted without evidence can be dismissed without evidence" was "Christopher Hitchens's variation of Occam's razor". [ 9 ] [ c ] Hitchens's razor has been presented alongside the Sagan standard ("Extraordinary claims require extraordinary evidence") as an example of evidentialism within the New Atheism movement. [ 10 ]
Academic philosopher Michael V. Antony argued that despite the use of Hitchens's razor to reject religious belief and to support atheism, applying the razor to atheism itself would seem to imply that atheism is epistemically unjustified. According to Antony, the New Atheists (to whom Hitchens also belonged) invoke a number of special arguments purporting to show that atheism can in fact be asserted without evidence. [ 10 ]
Philosopher C. Stephen Evans outlined some common Christian theological responses to the argument made by Hitchens, Richard Dawkins , and the other New Atheists that if religious belief is not based on evidence, it is not reasonable, and can thus be dismissed without evidence. Characterising the New Atheists as evidentialists , Evans counted himself amongst the Reformed epistemologists together with Alvin Plantinga , who argued for a version of foundationalism , namely that "belief in God can be reasonable even if the believer has no arguments or propositional evidence on which the belief is based". The idea is that all beliefs are based on other beliefs, and some "foundational" or "basic beliefs" just need to be assumed to be true in order to start somewhere, and it is fine to pick God as one of those basic beliefs. [ 11 ]
Additional criticism includes that, more often than not, the razor is invoked against a position that has at least prima facie evidence supporting it and hence should not be dismissed outright. While the nature of the evidence may be disputed, its existence cannot be ignored. Furthermore, Hitchens's razor can prematurely shut down a potentially fruitful discussion if both parties choose to invoke it against each other. Based on these considerations, among others, theologian Randal Rauser sees Hitchens's razor as having a "deeply corrosive impact on reasoned discourse", lamenting that "those who invoke it are far less likely to consider the respective merits of evidence on both sides of an issue". [ 12 ]
And remember, miracles are supposed to occur at the behest of a being who is omnipotent as well as omniscient and omnipresent. One might hope for more magnificent performances than ever seem to occur.
The "evidence" for faith, then, seems to leave faith looking even weaker than it would if it stood, alone and unsupported, all by itself.
What can be asserted without evidence can also be dismissed without evidence .
This is even more true when the 'evidence' eventually offered is so shoddy and self-interested. [ 3 ] : 258
Hitchens is attracted repeatedly to the principle of Occam's razor: That simple explanations are more likely to be correct than complicated ones. (e.g., Earth makes a circle around the Sun; the Sun doesn't do a complex roller coaster ride around Earth.)
You might think that Occam's razor would favor religion; the biblical creation story certainly seems simpler than evolution. But Hitchens argues effectively again, and again, that attaching the religious myth to what we know from science to be true adds nothing but needless complication. [ 8 ]
Mr. Coffman cited Christopher Hitchens's variation of Occam's razor :
What can be asserted without evidence can be dismissed without [evidence] [ 9 ] | https://en.wikipedia.org/wiki/Hitchens's_razor |
In mathematics , the Hitchin integrable system is an integrable system depending on the choice of a complex reductive group and a compact Riemann surface , introduced by Nigel Hitchin in 1987. It lies on the crossroads of algebraic geometry , the theory of Lie algebras and integrable system theory. It also plays an important role in the geometric Langlands correspondence over the field of complex numbers through conformal field theory .
A genus zero analogue of the Hitchin system, the Garnier system , was discovered by René Garnier somewhat earlier as a certain limit of the Schlesinger equations , and Garnier solved his system by defining spectral curves. (The Garnier system is the classical limit of the Gaudin model . In turn, the Schlesinger equations are the classical limit of the Knizhnik–Zamolodchikov equations ).
Almost all integrable systems of classical mechanics can be obtained as particular cases of the Hitchin system or their common generalization defined by Bottacin and Markman in 1994.
Using the language of algebraic geometry, the phase space of the system is a partial compactification of the cotangent bundle to the moduli space of stable G -bundles for some reductive group G , on some compact algebraic curve . This space is endowed with a canonical symplectic form . Suppose for simplicity that $G = \mathrm{GL}(n, \mathbb{C})$ , the general linear group ; then the Hamiltonians can be described as follows: the tangent space to the moduli space of G -bundles at the bundle F is $H^1(C, \operatorname{End}(F))$, which by Serre duality is dual to $H^0(C, \operatorname{End}(F) \otimes K)$, where $K$ is the canonical bundle , so a pair $(F, \Phi)$ with $\Phi \in H^0(C, \operatorname{End}(F) \otimes K)$, called a Hitchin pair or Higgs bundle , defines a point in the cotangent bundle. Taking $\operatorname{tr}(\Phi^i)$ for $i = 1, \dots, n$, one obtains elements in $H^0(C, K^{\otimes i})$, which is a vector space that does not depend on $(F, \Phi)$. Taking any basis of these vector spaces, we obtain functions H i , which are Hitchin's Hamiltonians. The construction for a general reductive group is similar and uses invariant polynomials on the Lie algebra of G .
For trivial reasons these functions are algebraically independent, and some calculations show that their number is exactly half of the dimension of the phase space. The nontrivial part is a proof of Poisson commutativity of these functions. They therefore define an integrable system in the symplectic or Arnol'd–Liouville sense.
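As a worked check of this dimension count (a sketch using standard Riemann–Roch facts rather than anything stated in the article itself), take $G = \mathrm{GL}(2, \mathbb{C})$ on a smooth curve of genus $g \geq 2$:

```latex
% Moduli of stable rank-2 bundles on a genus-g curve:
%   dim M = n^2(g-1) + 1 = 4g - 3,  so  dim T*M = 8g - 6.
% The Hamiltonians come from tr(Phi) and tr(Phi^2):
\[
  \operatorname{tr}\Phi \in H^0(C, K), \qquad
  \operatorname{tr}(\Phi^2) \in H^0(C, K^{\otimes 2}),
\]
% with dimensions, by Riemann-Roch,
\[
  h^0(K) = g, \qquad h^0(K^{\otimes 2}) = 3g - 3,
\]
% giving g + (3g - 3) = 4g - 3 independent functions:
% exactly half of dim T*M = 8g - 6, as integrability requires.
```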
The Hitchin fibration is the map from the moduli space of Hitchin pairs to characteristic polynomials , a higher genus analogue of the map Garnier used to define the spectral curves. Ngô ( 2006 , 2010 ) used Hitchin fibrations over finite fields in his proof of the fundamental lemma .
To be more precise, the version of the Hitchin fibration used by Ngô has as its source the moduli stack of Hitchin pairs, instead of the moduli space. Let $\mathfrak{g}$ be the Lie algebra of the reductive algebraic group $G$. We have the adjoint action of $G$ on $\mathfrak{g}$. We can then take the stack quotient $\mathfrak{g}/G$ and the GIT quotient $\mathfrak{g}/\!/G$, and there is a natural morphism $\chi : \mathfrak{g}/G \to \mathfrak{g}/\!/G$. There is also the natural scaling action of the multiplicative group $\mathbb{G}_m$ on $\mathfrak{g}$, which descends to the stack and GIT quotients. Furthermore, the morphism $\chi$ is equivariant with respect to the $\mathbb{G}_m$-actions. Therefore, given any line bundle $L$ on our curve $C$, we can twist the morphism $\chi$ by the $\mathbb{G}_m$- torsor , and obtain a morphism $\chi_L : (\mathfrak{g}/G)_L \to (\mathfrak{g}/\!/G)_L$ of stacks over $C$. Finally, the moduli stack of $L$-twisted Higgs bundles is recovered as the section stack $\mathrm{Higgs} = \mathrm{Sect}(C, (\mathfrak{g}/G)_L)$; the corresponding Hitchin base is recovered as $A(C, L) := \mathrm{Sect}(C, (\mathfrak{g}/\!/G)_L)$, which is represented by a vector space; and the Hitchin morphism at the stack level $h : \mathrm{Higgs} \to A(C, L)$ is simply the morphism induced by the morphism $\chi_L$ above. Note that this definition is not relevant to semistability. To obtain the Hitchin fibration mentioned above, we need to take $L$ to be the canonical bundle, restrict to the semistable part of $\mathrm{Higgs}$, and then take the induced morphism on the moduli space. To be even more precise, the version of $\mathrm{Higgs}$ used by Ngô often has the restriction that $\deg(L) \geq 2g$, so that it cannot be the canonical bundle. This condition is added to guarantee that the topology of the Hitchin morphism is, in a precise sense , determined by its restriction to the smooth part; see ( Chaudouard & Laumon 2016 ) for the vector bundle case. | https://en.wikipedia.org/wiki/Hitchin_system
Hitomi ( Japanese : ひとみ ) , also known as ASTRO-H and New X-ray Telescope ( NeXT ), was an X-ray astronomy satellite commissioned by the Japan Aerospace Exploration Agency ( JAXA ) for studying extremely energetic processes in the Universe . The space observatory was designed to extend the research conducted by the Advanced Satellite for Cosmology and Astrophysics (ASCA) by investigating the hard X-ray band above 10 keV . The satellite was originally called New X-ray Telescope; [ 5 ] at the time of launch it was called ASTRO-H. [ 6 ] After it was placed in orbit and its solar panels deployed, it was renamed Hitomi . [ 7 ] The spacecraft was launched on 17 February 2016 and contact was lost on 26 March 2016, due to multiple incidents with the attitude control system leading to an uncontrolled spin rate and breakup of structurally weak elements. [ 8 ]
The new name refers to the pupil of an eye , and to a legend of a painting of four dragons. [ 6 ] The word Hitomi generally means " eye ", and specifically the pupil , or entrance window of the eye – the aperture. There is also an ancient legend that inspired the name: "One day, many years ago, a painter was drawing four white dragons on a street. He finished drawing the dragons, but without Hitomi. People who looked at the painting said, 'Why don't you paint Hitomi? It is not complete.' The painter hesitated, but people pressured him. The painter then drew Hitomi on two of the four dragons. Immediately, these dragons came to life and flew up into the sky. The two dragons without Hitomi remained still." The inspiration of this story is that Hitomi is regarded as the "one last, but most important part", and so the name expresses the wish that ASTRO-H be the essential mission to solve mysteries of the universe in X-rays. Hitomi refers to the aperture of the eye, the part where incoming light is absorbed. From this, Hitomi reminds us of a black hole. We will observe Hitomi in the Universe using the Hitomi satellite. [ 9 ]
Hitomi 's objectives were to explore the large-scale structure and evolution of the universe, including the distribution of dark matter within galaxy clusters [ 10 ] and how galaxy clusters evolve over time; [ 6 ] to study how matter behaves in strong gravitational fields [ 10 ] (such as matter inspiraling into black holes); [ 6 ] to explore the physical conditions in regions where cosmic rays are accelerated; [ 10 ] and to observe supernovae. [ 6 ] In order to achieve this, it was designed to be capable of: [ 10 ]
It was the sixth of a series of JAXA X-ray satellites, [ 10 ] which started in 1979, [ 7 ] and it was designed to observe sources that are an order of magnitude fainter than its predecessor, Suzaku . [ 6 ] Its planned mission length was three years. [ 7 ] At the time of launch, two other large X-ray satellites were carrying out observations in orbit: the Chandra X-ray Observatory and XMM-Newton , both of which were launched in 1999. [ 6 ]
The probe carried four instruments and six detectors to observe photons with energies ranging from soft X-rays to gamma rays , with high energy resolution. [ 10 ] [ 7 ] Hitomi was built by an international collaboration led by JAXA with over 70 contributing institutions in Japan, the United States, Canada, and Europe, [ 10 ] and over 160 scientists. [ 11 ] With a mass of 2,700 kg (6,000 lb), [ 10 ] [ 7 ] Hitomi was the heaviest Japanese X-ray mission at launch. [ 1 ] The satellite was about 14 m (46 ft) in length. [ 7 ]
Two soft X-ray telescopes (SXT-S, SXT-I), with focal lengths of 5.6 m (18 ft), focus light onto a soft X-ray Spectrometer (SXS), provided by NASA , with an energy range of 0.4–12 keV for high-resolution X-ray spectroscopy , [ 10 ] and a soft X-ray imager (SXI), with an energy range of 0.3–12 keV. [ 10 ]
Two hard X-ray telescopes (HXT), with focal lengths of 12 m (39 ft), [ 10 ] [ 12 ] focus light onto two hard X-ray imagers (HXI), [ 10 ] with an energy range of 5–80 keV, [ 12 ] which are mounted on a plate placed at the end of the 6 m (20 ft) extendable optical bench (EOB) that is deployed once the satellite is in orbit. [ 10 ] The Canadian Space Agency (CSA) provided the Canadian ASTRO-H Metrology System (CAMS), [ 13 ] [ 14 ] a laser alignment system used to measure distortions in the extendable optical bench.
Two soft gamma-ray detectors (SGD), each containing three units, were mounted on two sides of the satellite, using non-focusing detectors to observe soft gamma-ray emission at energies from 60 to 600 keV. [ 1 ] [ 10 ]
The Netherlands Institute for Space Research (SRON) in collaboration with the University of Geneva provided the filter-wheel and calibration source for the spectrometer . [ 15 ] [ 16 ]
The launch of the satellite was planned for 2013 as of 2008, [ 17 ] later revised to 2015 as of 2013. [ 11 ] As of early February 2016, it was planned for 12 February, but was delayed due to poor weather forecasts. [ 18 ]
Hitomi launched on 17 February 2016 at 08:45 UTC [ 6 ] [ 7 ] into a low Earth orbit of approximately 575 km (357 mi). [ 10 ] The circular orbit had an orbital period of around 96 minutes, and an orbital inclination of 31.01°. [ 10 ] It was launched from the Tanegashima Space Center on board an H-IIA launch vehicle. [ 10 ] [ 6 ] 14 minutes after launch, the satellite separated from the launch vehicle. The solar arrays later deployed according to plan, and it began its on-orbit checkout. [ 6 ]
Measurements by Hitomi have allowed scientists to track the motion of X-ray-emitting gas at the heart of the Perseus cluster of galaxies for the first time. Using the Soft X-ray Spectrometer, astronomers have mapped the motion of X-ray-emitting gas in a cluster of galaxies and shown it moves at cosmically modest speeds. The total range of gas velocities directed toward or away from Earth within the area observed by Hitomi was found to be about 365,000 miles an hour (590,000 kilometers per hour). The observed velocity range indicates that turbulence is responsible for only about 4 percent of the total gas pressure. [ 19 ]
On 27 March 2016, JAXA reported that communication with Hitomi had "failed from the start of its operation" on 26 March 2016 at 07:40 UTC. [ 20 ] On the same day, the U.S. Joint Space Operations Center (JSpOC) announced on Twitter that it had observed a breakup of the satellite into 5 pieces at 08:20 UTC on 26 March 2016, [ 21 ] and its orbit also suddenly changed on the same day. [ 22 ] Later analysis by the JSpOC found that the fragmentation likely took place around 01:42 UTC, but that there was no evidence the spacecraft had been struck by debris. [ 3 ] Between 26 and 28 March 2016, JAXA reported receiving three brief signals from Hitomi ; while the signals were offset by 200 kHz from what was expected from Hitomi , their direction of origin and time of reception suggested they were legitimate. [ 23 ] Later analysis, however, determined that the signals were not from Hitomi but from an unknown radio source not registered with the International Telecommunication Union . [ 23 ] [ 24 ]
JAXA stated they were working to recover communication and control over the spacecraft, [ 20 ] but that "the recovery will require months, not days". [ 25 ] Initially suggested possibilities for the communication loss were that a helium gas leak, battery explosion, or stuck-open thruster had caused the satellite to start rotating, rather than a catastrophic failure. [ 22 ] [ 26 ] [ 27 ] JAXA announced on 1 April 2016 that Hitomi had lost attitude control at around 19:10 UTC on 25 March 2016. After analysing engineering data from just before the communication loss, however, no problems were noted with either the helium tank or the batteries. [ 28 ]
The same day, JSpOC released orbital data for ten detected pieces of debris, five more than originally reported, including one piece that was large enough to initially be confused with the main body of the spacecraft. [ 29 ] [ 30 ] Amateur trackers observed what was believed to be Hitomi tumbling in orbit, with reports of the main spacecraft body (Object A) rotating once every 1.3 or 2.6 seconds, and the next largest piece (Object L) rotating every 10 seconds. [ 30 ]
JAXA ceased efforts to recover the satellite on 28 April 2016, switching focus to anomaly investigation. [ 24 ] [ 31 ] It was determined that the chain of events that led to the spacecraft's loss began with its inertial reference unit (IRU) reporting a rotation of 21.7° per hour at 19:10 UTC on 25 March 2016, though the vehicle was actually stable. The attitude control system attempted to use Hitomi 's reaction wheels to counteract the non-existent spin, which caused the spacecraft to rotate in the opposite direction. Because the IRU continued to report faulty data, the reaction wheels began to accumulate excessive momentum, tripping the spacecraft's computer into taking the vehicle into "safe hold" mode. Attitude control then tried to use its thrusters to stabilise the spacecraft; the Sun sensor was unable to lock on to the Sun's position, and continued thruster firings caused Hitomi to rotate even faster due to an incorrect software setting. Because of this excessive rotation rate, early on 26 March 2016 several parts of the spacecraft broke away, likely including both solar arrays and the extended optical bench. [ 8 ] [ 23 ]
Reports of a Hitomi replacement mission first surfaced on 21 June 2016. [ 32 ] According to an article from Kyodo News , JAXA was considering a launch of "Hitomi 2" in the early 2020s aboard Japan's new H3 launch vehicle . [ 32 ] The spacecraft would be a near-copy of Hitomi . [ 32 ] However, a 27 June 2016 article from The Nikkei stated that some within the Ministry of Education, Culture, Sports, Science and Technology believed it was too early to grant funding for a Hitomi replacement. [ 33 ] The article also noted that NASA had expressed support for a replacement mission led by Japan.
On 14 July 2016, JAXA published a press release regarding the ongoing study of a successor. [ 34 ] According to the press release, the spacecraft would be a remanufacture, but with countermeasures reflecting Hitomi 's loss, and would be launched in 2020 on an H-IIA launch vehicle. The scientific mission of the "ASTRO-H Successor" would be based around the SXS instrument. [ 34 ] The Minister of Education, Culture, Sports, Science and Technology, Hiroshi Hase , stated during a press conference on 15 July 2016 that funding for Hitomi 's successor would be allocated in the fiscal year 2017 budget request, [ 35 ] and that he intended to accept the successor mission on the condition that the investigation of Hitomi 's destruction was completed and measures to prevent recurrence were implemented accordingly. [ 36 ] The X-Ray Imaging and Spectroscopy Mission (XRISM) was approved by JAXA and NASA in April 2017, and successfully launched in September 2023. [ 37 ] | https://en.wikipedia.org/wiki/Hitomi_(satellite)
In endurance sports such as road cycling and long-distance running , hitting the wall or the bonk is a condition of sudden fatigue and loss of energy which is caused by the depletion of glycogen stores in the liver and muscles . Milder instances can be remedied by brief rest and the ingestion of food or drinks containing carbohydrates . Otherwise, it can be remedied by attaining second wind by either resting for approximately 10 minutes or by slowing down considerably and increasing speed slowly over a period of 10 minutes. Ten minutes is approximately the time that it takes for free fatty acids to sufficiently produce ATP in response to increased demand. [ 1 ]
During a marathon, for instance, runners typically hit the wall around kilometer 30 (mile 20). [ 2 ] The condition can usually be avoided by ensuring that glycogen levels are high when the exercise begins, maintaining glucose levels during exercise by eating or drinking carbohydrate-rich substances, or by reducing exercise intensity.
Skeletal muscle relies predominantly on glycogenolysis for the first few minutes as it transitions from rest to activity, as well as throughout high-intensity aerobic activity and all anaerobic activity. [ 3 ] The lack of glycogen causes a low ATP reservoir within the exercising muscle cells. Until second wind is achieved (increased ATP production primarily from free fatty acids ), the symptoms of a low ATP reservoir in exercising muscle due to depleted glycogen include: muscle fatigue , muscle cramping , muscle pain ( myalgia ), inappropriate rapid heart rate response to exercise ( tachycardia ), breathlessness ( dyspnea ) or rapid breathing ( tachypnea ), exaggerated cardiorespiratory response to exercise (tachycardia & dyspnea/tachypnea). [ 3 ] The heart tries to compensate for the energy shortage by increasing heart rate to maximize delivery of oxygen and blood borne fuels to the muscle cells for oxidative phosphorylation . [ 3 ]
Without muscle glycogen, it is important to get into second wind without going too fast too soon and without trying to push through the pain. Going too fast too soon encourages protein metabolism over fat metabolism, and the muscle pain in this circumstance is a result of muscle damage due to a severely low ATP reservoir. [ 4 ] [ 5 ]
Protein metabolism occurs through amino acid degradation, which converts amino acids into pyruvate ; the breakdown of protein to maintain the amino acid pool; the myokinase (adenylate kinase) reaction; and the purine nucleotide cycle . [ 6 ] Amino acids are vital to the purine nucleotide cycle as they are precursors for purines, nucleotides, and nucleosides; branched-chain amino acids are also converted into glutamate and aspartate for use in the cycle ( see Aspartate and glutamate synthesis ). Severe breakdown of muscle leads to rhabdomyolysis and myoglobinuria . Excessive use of the myokinase reaction and purine nucleotide cycle leads to myogenic hyperuricemia . [ 7 ]
In muscle glycogenoses (muscle GSDs), an inborn error of carbohydrate metabolism impairs either the formation or utilization of muscle glycogen. As such, those with muscle glycogenoses do not need to do prolonged exercise to experience hitting the wall. Instead, signs of exercise intolerance , such as an inappropriate rapid heart rate response to exercise, are experienced from the beginning of activity. [ 4 ] [ 5 ]
The term bonk for fatigue is presumably derived from the original meaning "to hit", and dates back at least half a century. Its earliest citation in the Oxford English Dictionary is a 1952 article in the Daily Mail . [ 8 ]
The term is used colloquially as a noun ("hitting the bonk") and as a verb ("to bonk halfway through the race"). The condition is also known to long-distance ( marathon ) runners, who usually refer to it as "hitting the wall". The British may refer to it as "hunger knock," while "hunger bonk" was used by South African cyclists in the 1960s.
It can also be referred to as "blowing up" [ 9 ] or a "weak attack".
In German , hitting the wall is known as " der Mann mit dem Hammer " ("the man with the hammer"); the phenomenon is thus likened to a man with the hammer coming after the athlete, catching up, and eventually hitting the athlete, causing a sudden drop in performance. [ 10 ]
In French , marathoners in particular use "frapper le mur (du marathon)", literally hitting the (marathon) wall, just like in English. One may also hear "avoir un coup de barre" (getting smacked by a bar), which means experiencing sudden, incredible fatigue. This expression is used in a wider set of contexts.
Athletes engaged in exercise over a long period of time produce energy via two mechanisms, both facilitated by oxygen: fat metabolism and glycogen metabolism.
How much energy comes from either source depends on the intensity of the exercise. During intense exercise that approaches one's VO 2 max , most of the energy comes from glycogen.
A typical untrained individual on an average diet is able to store about 380 grams of glycogen, or 1500 kcal , in the body, though much of that amount is spread throughout the muscular system and may not be available for any specific type of exercise. [ 11 ] Intense cycling or running can easily consume 600–800 or more kcal per hour. Unless glycogen stores are replenished during exercise, glycogen stores in such an individual will be depleted after less than 2 hours of continuous cycling [ 12 ] or 15 miles (24 km) of running. Training and carbohydrate loading can raise these reserves as high as 880 g (3600 kcal), correspondingly raising the potential for uninterrupted exercise.
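The arithmetic behind these figures can be made explicit. The sketch below uses the article's rough numbers at approximately 4 kcal per gram of glycogen; the function name and the steady 800 kcal/h burn rate are illustrative assumptions:

```python
def hours_to_depletion(glycogen_g, burn_kcal_per_h, kcal_per_g=4.0):
    """Rough time until glycogen stores run out at a steady burn rate,
    ignoring fat metabolism and mid-exercise refueling."""
    return glycogen_g * kcal_per_g / burn_kcal_per_h

for grams in (380, 880):   # typical untrained vs. carbohydrate-loaded stores
    print(f"{grams} g: {hours_to_depletion(grams, 800):.1f} h at 800 kcal/h")
    # 380 g -> 1.9 h (matches "less than 2 hours"); 880 g -> 4.4 h
```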
In one study of five male subjects, "reduction in preexercise muscle glycogen from 59.1 to 17.1 μmol × g −1 (n = 3) was associated with a 14% reduction in maximum power output but no change in maximum O 2 intake; at any given power output O 2 intake, heart rate, and ventilation (VE) were significantly higher, CO 2 output (V CO 2 ) was similar, and the respiratory exchange ratio was lower during glycogen depletion compared with control." [ 13 ] Five is an extremely small sample size , so this study may not be representative of the general population.
There are several approaches to prevent glycogen depletion: | https://en.wikipedia.org/wiki/Hitting_the_wall |
The Hittite Plague or Hand of Nergal was an epidemic , possibly of tularemia , which occurred in the mid-to-late 14th century BC .
The Hittite Empire stretched from Turkey to Syria. [ 1 ] The plague was likely an outbreak of Francisella tularensis which occurred along the Arwad-Euphrates trading route in the 14th century BC. Much of the ancient Near East suffered from outbreaks; however, Egypt and Assyria initiated a quarantine along their border, and they did not experience the epidemic. [ 2 ]
Tularemia is a bacterial infection which is still a threat today. [ 1 ] Also referred to as "rabbit fever", it is a zoonotic disease which can easily pass from animals to humans. It is most commonly spread by biting arthropods that hop between species, such as ticks. [ 3 ] The symptoms of an infection range from skin lesions to respiratory failure. Without treatment the mortality rate is 15% of those infected. [ 1 ] According to former microbiologist Siro Trevisanato, "Tularemia is rare in many countries today, but remains a problem in some countries including Bulgaria." [ 1 ]
According to author Philip Norrie ( How Disease Affected the End of the Bronze Age ), there are three diseases most likely to have caused a post- Bronze Age societal collapse: smallpox , bubonic plague , and tularemia. The tularemia plague which struck the Hittites could have been spread by insects or infected dirt or plants, through open wounds, or by eating infected animals. [ 3 ]
Hittite texts from the mid-14th century BC refer to the plague causing disabilities and death. [ 1 ] Hittite King Muršili II wrote prayers seeking relief from the epidemic, which had lasted two decades and killed many of his subjects. The two kings who preceded him, Šuppiluliuma I and Šuppiluliuma's immediate heir, Arnuwanda II , had also succumbed to tularemia. [ 4 ] Muršili had ascended to the throne because he was the last surviving son of Šuppiluliuma. [ 5 ]
Muršili believed that the plague had been transmitted to the Hittites by Egyptian prisoners who had been paraded through the capital city, Hattusa . There is some evidence suggesting that the Egyptians suffered from tularemia in the years preceding 1322 BC. [ 4 ] The Hittites apparently also suspected zoonotic transmission, because they banned the use of donkeys in caravans. [ 1 ] Another theory of the plague's origin suggests that it originated with rams that the Hittites had taken as spoils of war, along with other animals, after the Hittites raided Simyra . Soon after the animals were brought into Hittite villages, the tularemia outbreak began. [ 1 ]
The plague is mentioned in Amarna letter EA 35 , a letter written in Akkadian from the ruler of Alashiya (Cyprus) to the Pharaoh of Egypt during the Amarna Period . [ 6 ] It dates from between 1350 and 1325 BC. In it, the plague is specifically named as the Hand of Nergal. While Muršili II seems to have believed in an Egyptian origin of the disease, letter EA 35 seems to support a Cypriot origin. If Muršili's account is to be believed, it is possible that the plague spread to Egypt from Cyprus, or that Egyptian soldiers became infected in the Levant. While a majority of scholars see EA 35 as evidence of the Hittite plague being present in Cyprus, it is unclear to which Egyptian Pharaoh the letter was sent, and a significant minority believe that the two events may be two successive outbreaks of the same plague, or possibly two different plagues entirely. While it is universally accepted that letter EA 35 predates Anatolian references to the disease, the time between the events is unclear and could be anywhere from two to twenty-five years. It is also generally agreed that the plague on Cyprus was probably tularemia, lending credence to a connection between the events. [ 7 ]
The disease was intentionally brought to western Anatolia in what has been described as the "first known record of biological warfare ". [ 2 ] Shortly after the Hittites experienced the outbreak of disease, the Arzawans from western Anatolia believed the Hittites were weakened and attacked them. The Arzawans reported that rams suddenly appeared (in 1320 and 1318 BC), and they brought the animals into their villages. It is thought that the Hittites had sent rams diseased with tularemia to infect their enemies. The Arzawans became so weakened by the plague that they failed in their attempt to conquer the Hittites. [ 1 ] [ 3 ] | https://en.wikipedia.org/wiki/Hittite_plague
The Hiyama coupling is a palladium -catalyzed cross-coupling reaction of organosilanes with organic halides used in organic chemistry to form carbon–carbon bonds (C-C bonds). This reaction was discovered in 1988 by Tamejiro Hiyama and Yasuo Hatanaka as a method to form carbon-carbon bonds synthetically with chemo - and regioselectivity . [ 1 ] The Hiyama coupling has been applied to the synthesis of various natural products . [ 2 ]
The Hiyama coupling was developed to address the issues associated with other organometallic reagents. The reactivity of organosilicon was not actually first reported by Hiyama: Kumada had earlier reported a coupling reaction using organofluorosilicates, [ 3 ] shown below. Hiyama then discovered that organosilanes become reactive when activated by a fluoride source. [ 4 ] [ 5 ] This reactivity, when combined with a palladium salt, creates a carbon-carbon bond with an electrophilic carbon, such as that of an organic halide. Widely used organometallic reagents such as organomagnesium ( Grignard reagents ) and organocopper reagents are very reactive and are known to have low chemoselectivity, enough to destroy functional groups on both coupling partners; organosilicon compounds, by contrast, are inert. Other organometallic reagents based on metals such as zinc , tin , and boron reduce the reactivity issue but have other problems associated with each reagent: organozinc reagents are moisture sensitive, organotin compounds are toxic, and organoboron reagents are not readily available, are expensive, and are often unstable. Organosilanes are readily available compounds that, upon activation (much like organotin or organoboron compounds) by fluoride or a base, can react with organohalides to form C-C bonds in a chemo- and regioselective manner. The reaction first reported coupled easily made (and activated) organosilicon nucleophiles with organohalides ( electrophiles ) in the presence of a palladium catalyst. [ 1 ] Since this discovery, various groups have worked to expand the scope of this reaction and to "fix" the issues with the original coupling, such as the need for fluoride activation of the organosilane.
The organosilane is activated with fluoride (as some sort of salt such as TBAF or TASF ) or a base to form a pentavalent silicon center which is labile enough to allow for the breaking of a C-Si bond during the transmetalation step. [ 6 ] The general scheme to form this key intermediate is shown below. This step occurs in situ or at the same time as the catalytic cycle in the reaction.
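To make the activation step concrete, a generic scheme can be written out (the R groups and the trimethylsilyl reagent here are illustrative placeholders, not the specific substrates of the original report): R−Si(CH 3 ) 3 + F − → [R−Si(CH 3 ) 3 F] − , giving the labile pentavalent silicate; the activated species then delivers R to the palladium center during transmetalation, so that overall R−Si(CH 3 ) 3 + R′−X → R−R′ + (CH 3 ) 3 SiF + X − under palladium catalysis.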
The mechanism for the Hiyama coupling follows a catalytic cycle, including an A) oxidative addition step, in which the organic halide adds to the palladium oxidizing the metal from palladium(0) to palladium(II); a B) transmetalation step, in which the C-Si bond is broken and the second carbon fragment is bound to the palladium center; and finally C) a reductive elimination step, in which the C-C bond is formed and the palladium returns to its zero-valent state to start the cycle over again. [ 7 ] The catalytic cycle is shown below.
The Hiyama coupling can be applied to the formation of C sp 2 -C sp 2 (e.g. aryl –aryl) bonds as well as C sp 2 -C sp 3 (e.g. aryl– alkyl ) bonds. Good synthetic yields are obtained with couplings of aryl halides , vinyl halides , and allylic halides ; organoiodides afford the best yields.
The scope of this reaction was expanded to include closure of medium-sized rings by Scott E. Denmark . [ 8 ]
The coupling of alkyl halides with organohalosilanes as alternative organosilanes has also been performed. Organochlorosilanes allow couplings with aryl chlorides, which are abundant and generally more economical than aryl iodides. [ 9 ] A nickel catalyst allows for access to new reactivity of organotrifluorosilanes as reported by GC Fu et al. [ 10 ] Secondary alkyl halides are coupled with aryl silanes [ 11 ] with good yields using this reaction.
The Hiyama coupling is limited by the need for fluoride to activate the organosilicon reagent. Addition of fluoride cleaves any silicon protecting groups (e.g. silyl ethers [ 12 ] ), which are frequently employed in organic synthesis. The fluoride ion is also basic, so base-sensitive protecting groups, acidic protons, and functional groups may be affected by the addition of this activator. Most of the active research concerning this reaction involves circumventing this problem. To overcome this issue, many groups have looked to the use of other basic additives for activation, or to the use of a different organosilane reagent altogether, leading to the multiple variations of the original Hiyama coupling.
One modification of the Hiyama coupling utilizes a silacyclobutane ring and a hydrated fluoride source, as shown below. [ 13 ] This mimics the use of an alkoxysilane/organosilanol rather than an alkylsilane. The mechanism of this reaction, using a fluoride source, allowed for the design of later reactions that avoid the fluoride source entirely.
Many modifications to the Hiyama coupling have been developed that avoid the use of a fluoride activator/base. Using organochlorosilanes, Hiyama found a coupling scheme utilizing NaOH as the basic activator. [ 14 ] Modifications using alkoxysilanes have been reported with the use of milder bases like NaOH [ 15 ] and even water. [ 16 ] Study of these mechanisms has led to the development of the Hiyama–Denmark coupling , which utilizes organosilanols as coupling partners.
Another class of fluoride-free Hiyama couplings includes the use of a Lewis acid additive, which allows for bases such as K 3 PO 4 [ 17 ] to be utilized, or for the reaction to proceed without a basic additive. [ 18 ] [ 19 ] The addition of a copper co-catalyst has also been reported to allow for the use of a milder activating agent, [ 17 ] and has even been shown to achieve turnover of both the palladium(II) and copper(I) species in the catalytic cycle, rather than requiring addition of a stoichiometric Lewis acid (e.g. silver(I), [ 18 ] copper(I) [ 19 ] ).
The Hiyama–Denmark coupling is the modification of the Hiyama coupling that does not require a fluoride additive, utilizing organosilanols and organic halides as coupling partners. The general reaction scheme is shown below, showcasing the utilization of a Brønsted base as the activating agent as opposed to fluoride; phosphine ligands are also used on the metal center. [ 2 ]
A specific example of this reaction is shown with reagents. If fluoride had been used, as in the original Hiyama protocol, the tert -butyldimethylsilyl (TBS) ether would have likely been destroyed. [ 20 ]
Examination of this reaction's mechanism suggests that the formation of the silanolate is all that is needed to activate addition of the organosilane to the palladium center. The presence of a pentavalent silicon is not needed, and kinetic analysis has shown that this reaction has first-order dependence on silanolate concentration. [ 2 ] This is because the key bond formed during the transmetalation step is the Pd-O bond, which then allows for transfer of the carbon fragment onto the palladium center. Based on this observation, the rate-limiting step in this catalytic cycle appears to be the Pd-O bond formation, with increased silanolate concentrations increasing the rate of the reaction. | https://en.wikipedia.org/wiki/Hiyama_coupling |
Hjernevask ("Brainwash") is a Norwegian documentary miniseries about science that aired on NRK1 in 2010. The series, consisting of seven episodes, was created for NRK and presented by the comedian and sociologist Harald Eia .
The series contrasted cultural determinist models of human behavior (also referred to as the Standard social science model ) with nature-nurture interactionist perspectives. In support of the cultural determinist perspective it interviewed mainly Norwegian humanities scholars, in particular literary theorist Jørgen Lorentzen at the Centre for Gender Research . Experts interviewed for the series in support of a nature-nurture interactionist perspective included Simon Baron-Cohen , Steven Pinker , Simon LeVay , David Buss , Glenn Wilson , Robert Plomin and Anne Campbell . This ignited a wide public discussion on the subject of the nature versus nurture debate, and especially focused on the views expressed by Lorentzen. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The entire series has since been released online.
Eia and coproducer Ole Martin Ihle have named Steven Pinker 's book The Blank Slate as an inspiration for the documentary series. [ 5 ] The series was a huge success, and Eia was awarded the Fritt Ord Award for "having precipitated one of the most heated debates on research in recent times". [ 6 ]
The producers have made the series available online. [ 7 ] Episodes linked in the external links have English subtitles available.
Hjernevask led to extensive public debate, largely focused on the views expressed in the program by literary scholar Jørgen Lorentzen at the Centre for Gender Research . Lorentzen's description of the research of psychologists Simon Baron-Cohen , Anne Campbell and Richard Lippa as "weak science" was strongly criticized by many commentators; biologist Trond Amundsen pointed out that Lorentzen's work was cited less than 30 times in academic literature and responded that "the characteristic 'weak science' would be rude and uncollegial if Lorentzen was a leading international expert and the three researchers were in fact not so meritorious. But all three are meritorious international researchers [...] against this background, the statement is just embarrassing." [ 9 ]
Lorentzen accused his critics including " Ottar Brox , Øyvind Østerud [ no ] , Stig Frøland , Kristian Gundersen [ no ] , Tor K. Larsen and others" of being "cowardly hyenas" who have shown "neither insight into nor interest in gender research." [ 10 ] Lorentzen also accused series creator Eia of being motivated by a midlife crisis . [ 11 ] Eia pointed out that Lorentzen has a very limited scholarly impact with few international publications and a very low number of citations, and said that he wouldn't have interviewed Lorentzen for the series if he had known at the time that Lorentzen was "such a low-level researcher." [ 12 ] Jon Hustad accused Lorentzen of being "blinded by ideology." [ 13 ] In response to claims by Lorentzen that NRK had portrayed him unfairly and misrepresented his comments, NRK made all the raw footage available. [ 14 ] [ 15 ] Lorentzen complained to the Norwegian Press Complaints Commission (PFU). In June 2010 PFU concluded that NRK had not violated press ethics or portrayed Lorentzen in an unfair manner; [ 16 ] the chairman of PFU described Hjernevask as a "solid work" of investigative journalism. [ 17 ] Eia received the 2010 Fritt Ord Honorary Award for the series. [ 18 ] [ 19 ]
Several years after Hjernevask aired, it was reported that it has been used in Eastern Europe to promote false claims that all gender studies research in Norway has been closed down in its aftermath; Harald Eia commented that he did not make Hjernevask for a Hungarian audience, and that he wanted to showcase the arrogance he felt Lorentzen displayed towards other research fields for a Norwegian audience. [ 20 ] | https://en.wikipedia.org/wiki/Hjernevask |
The Hjulström curve , named after Filip Hjulström (1902–1982), is a graph used by hydrologists and geologists to determine whether a river will erode , transport , or deposit sediment. It was originally published in 1935 in his doctoral thesis "Studies of the morphological activity of rivers as illustrated by the river Fyris". [ 1 ] The graph takes sediment particle size and water velocity into account. [ 2 ]
The upper curve shows the critical erosion velocity in cm/s as a function of particle size in mm, while the lower curve shows the deposition velocity as a function of particle size. Note that the axes are logarithmic .
The plot shows several key concepts about the relationships between erosion, transportation, and deposition. For particle sizes where friction is the dominating force preventing erosion, the curves follow each other closely and the required velocity increases with particle size. However, for cohesive sediment, mostly clay but also silt , the erosion velocity increases with decreasing grain size , as the cohesive forces are relatively more important when the particles get smaller. [ 3 ] The critical velocity for deposition, on the other hand, depends on the settling velocity , and that decreases with decreasing grain size. The Hjulström curve shows that sand particles of a size around 0.1 mm require the lowest stream velocity to erode.
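Since the curve is read off a log-log chart rather than computed from a closed-form formula, applying it programmatically amounts to interpolating digitized curve points. A minimal Python sketch follows; the numeric points below are rough illustrative placeholders with approximately the right shape, not Hjulström's measured data.

import bisect
import math

# (grain diameter in mm, velocity in cm/s); illustrative placeholders only
EROSION = [(0.001, 300.0), (0.01, 60.0), (0.1, 25.0), (1.0, 40.0), (10.0, 100.0), (100.0, 300.0)]
DEPOSITION = [(0.01, 0.1), (0.1, 0.7), (1.0, 6.0), (10.0, 60.0), (100.0, 300.0)]

def interp(curve, d_mm):
    # piecewise-linear interpolation in log-log space, matching the log axes
    xs = [math.log10(d) for d, _ in curve]
    ys = [math.log10(v) for _, v in curve]
    x = math.log10(d_mm)
    i = min(max(bisect.bisect_left(xs, x), 1), len(xs) - 1)
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return 10 ** (ys[i - 1] + t * (ys[i] - ys[i - 1]))

def classify(d_mm, v_cm_s):
    if v_cm_s >= interp(EROSION, d_mm):
        return "erosion"
    if v_cm_s <= interp(DEPOSITION, d_mm):
        return "deposition"
    return "transport"

print(classify(0.1, 30.0))  # fine sand near the erosion minimum: "erosion"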
The curve was expanded by Åke Sundborg in 1956. He significantly improved the level of detail in the cohesive part of the diagram, and added lines for different modes of transportation. [ 4 ] The result is called the Sundborg diagram , or the Hjulström-Sundborg Diagram , in the academic literature.
This curve dates back to early 20th-century research on river geomorphology and is now mainly of historical value, although its simplicity is still attractive. Among the drawbacks of the curve are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity deceleration and erosion is caused by flow acceleration . The dimensionless Shields diagram , in combination with the Shields formula , is now unanimously accepted for initiation of sediment motion in rivers. Much work was done on river sediment transport formulae in the second half of the 20th century, and that work should be used in preference to Hjulström's curve. [ 4 ]
This hydrology article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hjulström_curve |
The Ho-Am Prize in Science was established in 1990 by Kun-Hee Lee , the Chairman of Samsung , to honour the late Chairman, Lee Byung-chul , the founder of the company. [ 1 ] The Ho-Am Prize in Science (previously the Ho-Am Prize in Science & Technology) is one of six prizes awarded annually, covering the five categories of Science, Engineering, Medicine, Arts, and Community Service, plus a Special Prize, which are named after the late Chairman's sobriquet ( art-name or pen name), Ho-Am.
The Ho-Am Prize in Science is presented each year, together with the other prizes, to individuals of Korean heritage who have furthered the welfare of humanity through distinguished accomplishments in the field of Science.
Source: Ho-Am Foundation | https://en.wikipedia.org/wiki/Ho-Am_Prize_in_Science |
Ho-Young Kim is a mechanical engineer and an academic. He is a Professor and chair in the Department of Mechanical Engineering at Seoul National University .
Kim's research interests encompass fluid mechanics , biofluid dynamics , microfluidics , soft matter , and their applications in bio-inspired soft mechanics, biomimetic soft robotics , nanofluidics , and renewable energy . Among numerous awards, he is the recipient of SNU President's Research Excellence Award, Gasan Award for Research Excellence and the Namheon Award for Research Excellence from the Korean Society of Mechanical Engineers.
Kim is a Fellow of the American Physical Society . [ 1 ] He has served as an Associate Editor for Droplet . [ 2 ]
Kim obtained his B.S. in Mechanical Engineering from Seoul National University in 1994. In 1996, he pursued an S.M. (Master of Science) in Mechanical Engineering at the Massachusetts Institute of Technology (MIT) in Cambridge and earned his Ph.D. in Mechanical Engineering from MIT in 1999. [ 3 ]
Kim began his career as a Senior Research Scientist at the Korea Institute of Science and Technology from 1999 to 2004, a position that fulfilled his military service obligation. During this period, he held positions as a Visiting Scholar at the Laboratory for Manufacturing and Productivity at the Massachusetts Institute of Technology (MIT) in 2001 and a Visiting Scientist at the University of Cambridge in 2002. In 2004, he worked as a Postdoctoral Fellow in the Division of Engineering and Applied Sciences at Harvard University . [ 4 ] He joined Seoul National University as an assistant professor in the same year and has held the position of Professor in the Department of Mechanical Engineering there since 2014. [ 5 ]
Kim has held numerous professional appointments, including Track Chair for the World Congress on Biomechanics 2022 and co-chair for the International Conference on Nature Inspired Surface Engineering 2020, and organizer for the IUTAM Symposium on Capillarity and Elastocapillarity in Biology 2024. [ 6 ]
Kim's research has focused on biofluid mechanics, capillarity , bubbles, nanofabrication , and soft matter , and has integrated experimental and theoretical approaches.
Motivated by the ability of water striders to jump off the water surface without sinking, Kim studied how super-water-repellent solids can be disengaged from water. He showed that a tiny superhydrophobic sphere can bounce off the water surface when it impacts the water at speeds within a narrow range. [ 7 ] By studying the force and energy required to lift a solid object clear of the water surface, he found that a drastic degree of energy saving (up to 99%) is achieved when lifting a superhydrophobic object as compared with an object with moderate wettability. [ 8 ] He also obtained the load supported by small floating objects as a function of the contact angle, [ 9 ] and the sinking speeds of small but heavy solids into either inviscid [ 10 ] or viscous liquids. [ 11 ] These hydrodynamic studies eventually allowed him to capture the essential physics behind the water jumping of water striders [ 12 ] and to build a robotic water strider. [ 13 ] He has extended his interests to the jumps of terrestrial insects, and solved the motion of a simple jumper (an elastic hoop) to predict its maximum jump height accurately. [ 14 ] In addition to the locomotion of semi-aquatic arthropods, he studied thrust generation by the flapping appendages of swimming robots and animals. He found a kinematic condition of a compliant, beating fin for maximizing the thrust of a robotic fish. [ 15 ] He also found that the flapping paddles, tails, and fins of ducks, standing dolphins, and starting fish generate thrust by forming a vortical structure different from the conventional starting-stopping vortex paradigm, which allowed him to construct a scaling law to predict the thrust of a flapping plate in the absence of a free-stream velocity. [ 16 ] He also obtained a universal scaling law for the lift of hovering insects through simple scaling arguments on the strength of the leading-edge vortex and the momentum induced by the vortical structure. [ 17 ] In addition, his collaborative work used a fluttering flag to devise a novel scheme to generate electric power based on triboelectrification. [ 18 ]
Building on the pioneering theory of elastocapillarity, [ 19 ] Kim continued to investigate the bending of thin elastic objects by interfacial forces as they touch the liquid-fluid interface. He formulated the elastic deformation of elastic sheets under the line force of surface tension and the loading due to hydrostatic and Laplace pressures, and solved the free-boundary problem in which the location of the meniscus is part of the solution. The problems that he investigated include a two-dimensional paintbrush, [ 20 ] a bubble-actuated paddle, [ 21 ] and a floating flexible leg. [ 22 ] He investigated the clustering behavior of micropillars and lamellae as a liquid film evaporates and pulls the solid structures together through surface tension effects. [ 23 ] He has also expanded this research to hygroscopic poroelastic structures, like paper, that deform with impregnation of water. [ 24 ] [ 25 ]
The development of micro- and nanofabrication technology has enabled the formation of microscopically rough surfaces with tailored topography. Such surface textures magnify either the wettability or the water-repellency of smooth surfaces to degrees that were previously impossible. He investigated the dynamics of liquid drops deposited on superhydrophilic textured surfaces, finding that the spreading dynamics are qualitatively different from those on smooth surfaces, and obtained the various scaling laws that govern the hemiwicking dynamics. [ 26 ] [ 27 ] [ 28 ] Noting that writing with ink involves a similar process of superwetting of rough surfaces (paper) from a moving source (pen), he mathematically analyzed the process of writing. [ 29 ] He also showed the effectiveness of superhydrophilic surfaces in collecting water from humid air via dewing, [ 30 ] and modeled the shape of large drops on superhydrophobic surfaces. [ 31 ]
Using the micro- and nanofabrication technology, he also generated surfaces with super- wettability-contrast, such that superhydrophobic areas are surrounded by superhydrophilic area or vice versa. Liquid drops impacting on the micro-wetting patterned surfaces exhibit novel and even aesthetically pleasing dynamic behaviors, leading to the formation of various deposit morphologies such as radiating liquid spokes [ 32 ] and liquid rings. [ 33 ]
Focusing on the mechanism of ultrasonic cleaning, Kim showed through a high-speed visualization technique and verified theoretically that it is the pressure gradient locally generated by rapid bubble oscillations that removes particles on solid surfaces. [ 34 ] Based on the understanding of the role of ultrasonic bubbles in cleaning and damaging of solid surfaces, he devised a scheme of ultrasonic cleaning that can preserve fragile nanostructures on semiconductor chips while removing contaminant particles. [ 35 ] He identified the physical origins of micropattern damage caused by violently oscillating cavitation bubbles. [ 36 ] In addition to ultrasonic cavitation bubbles, he studied dynamic behavior of relatively slow thermal bubbles, which have implications on microbubble-based MEMS devices as well as boiling heat transfer. Both fluid-dynamic and thermal measurements were carried out for a bubble that forms, grows and departs from a continuously powered microline heater, a tool to investigate the microbubble behavior with high temporal and spatial resolutions. [ 37 ] [ 38 ] He also demonstrated that bubbles consecutively formed by a continuously powered microheater and deflecting an adjacent cantilever beam can be used as an actuator in liquid environments. [ 21 ]
Kim developed a surface modification technology that forms nanoscale roughness and lowers the surface energy on large areas at a low cost using plasma assisted chemical vapor deposition (PACVD) technique with a group of materials scientists. The collaboration led to a variety of functional surfaces including the surfaces with strong and robust superhydrophobicity, [ 39 ] [ 40 ] [ 41 ] with long-lasting superhydrophilicity, [ 42 ] with tunable absorbability, [ 43 ] and an array of tilted pillars resembling the footpad of a gecko lizard. [ 44 ] He extended this technique to superhydrophobize cylindrical porous tubes in order to improve the efficiency of a desalination process called membrane distillation. [ 45 ]
To overcome the inherent drawbacks of most nanofabrication technologies that modify or pattern two-dimensional surfaces, either planar (conventional technology) or curved (the aforementioned plasma-based technology), he developed a technology to build three-dimensional nanoscale objects by direct deposition of nanofibers. He showed that a nanoscale polymer solution electrojet can coil to form free-standing hollow pottery as the jet is focused onto a sharp electrode tip. [ 46 ] He also fabricated free-standing walls using electrojets, which can be a fundamental technology to enable nanoscale three-dimensional printing. [ 47 ]
Kim has worked on mechanical analysis, optimal design, and low-cost fabrication of soft-matter-based machines, which can shape-morph and locomote just as soft natural organisms. He has been particularly interested in hygroscopically responsive materials, which can swell by absorbing water. He reported a self-locomotive ratcheted actuator powered by environmental humidity, called hygrobot. [ 48 ] He suggested a way to understand such system from the perspective of thermodynamic cycle analysis. [ 49 ] Mechanical study of hygroscopic swelling of porous materials led to the birth of a new scientific branch of poroelastocapillarity, for which he wrote an authoritative review. [ 50 ] He is also working on stimuli-responsive granular materials [ 51 ] and growing soft systems structurally embedded with physical intelligence. [ 52 ] | https://en.wikipedia.org/wiki/Ho-Young_Kim |
Holmium(III) oxide , or holmium oxide is a chemical compound of the rare-earth element holmium and oxygen with the formula Ho 2 O 3 . Together with dysprosium(III) oxide (Dy 2 O 3 ), holmium oxide is one of the most powerfully paramagnetic substances known. The oxide, also called holmia , occurs as a component of the related erbium oxide mineral called erbia . Typically, the oxides of the trivalent lanthanides coexist in nature, and separation of these components requires specialized methods. Holmium oxide is used in making specialty colored glasses . Glass containing holmium oxide and holmium oxide solutions have a series of sharp optical absorption peaks in the visible spectral range . They are therefore traditionally used as a convenient calibration standard for optical spectrophotometers .
Holmium oxide has some fairly dramatic color changes depending on the lighting conditions. In daylight, it is a tannish yellow color. Under trichromatic light, it is a fiery orange red, almost indistinguishable from the way erbium oxide looks under this same lighting. This is related to the sharp emission bands of the phosphors. [ 2 ] Holmium oxide has a wide band gap of 5.3 eV [ 1 ] and thus should appear colorless. The yellow color originates from abundant lattice defects (such as oxygen vacancies) and is related to internal transitions at the Ho 3+ ions. [ 2 ]
Holmium oxide has a cubic , yet rather complex bixbyite structure, with many atoms per unit cell and a large lattice constant of 1.06 nm. This structure is characteristic of oxides of heavy rare-earth elements, such as Tb 2 O 3 , Dy 2 O 3 , Er 2 O 3 , Tm 2 O 3 , Yb 2 O 3 and Lu 2 O 3 . The thermal expansion coefficient of Ho 2 O 3 is also relatively large at 7.4 ×10 −6 /°C. [ 3 ]
Treating holmium oxide with hydrogen chloride or with ammonium chloride affords the corresponding holmium chloride : [ 4 ]
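For concreteness, the balanced equations (standard stoichiometry; the cited preparations may differ in conditions) are: Ho 2 O 3 + 6 HCl → 2 HoCl 3 + 3 H 2 O and, with ammonium chloride on heating, Ho 2 O 3 + 6 NH 4 Cl → 2 HoCl 3 + 6 NH 3 + 3 H 2 O.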
Holmium(III) oxide can also react with hydrogen sulfide to form holmium(III) sulfide at high temperatures. [ 5 ]
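The corresponding balanced equation is Ho 2 O 3 + 3 H 2 S → Ho 2 S 3 + 3 H 2 O.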
Holmium ( Holmia , Latin name for Stockholm ) was discovered by Marc Delafontaine and Jacques-Louis Soret in 1878 who noticed the aberrant spectrographic absorption bands of the then-unknown element (they called it "Element X"). [ 6 ] [ 7 ] Later in 1878, Per Teodor Cleve independently discovered the element while he was working on erbia earth ( erbium oxide ). [ 8 ] [ 9 ]
Using the method developed by Carl Gustaf Mosander , Cleve first removed all of the known contaminants from erbia. The result of that effort was two new materials, one brown and one green. He named the brown substance holmia (after the Latin name for Cleve's home town, Stockholm) and the green one thulia. Holmia was later found to be the holmium oxide and thulia was thulium oxide . [ 10 ]
Holmium readily oxidizes in air; therefore presence of holmium in nature is synonymous with that of holmia. Holmium oxide occurs in trace amounts in the minerals gadolinite , monazite , and in other rare-earth minerals .
A typical extraction process of holmium oxide can be simplified as follows: the mineral mixtures are crushed and ground. Monazite, because of its magnetic properties, can be separated by repeated electromagnetic separation. After separation, it is treated with hot concentrated sulfuric acid to produce water-soluble sulfates of several rare-earth elements. The acidic filtrates are partially neutralized with sodium hydroxide to pH 3–4. Thorium precipitates out of solution as hydroxide and is removed. After that, the solution is treated with ammonium oxalate to convert the rare earths into their insoluble oxalates . The oxalates are converted to oxides by annealing. The oxides are dissolved in nitric acid , which excludes one of the main components, cerium , whose oxide is insoluble in HNO 3 .
The most efficient separation routine for holmium oxide from the rare-earths is ion exchange . In this process, rare-earth ions are adsorbed onto suitable ion-exchange resin by exchange with hydrogen, ammonium or cupric ions present in the resin. The rare earth ions are then selectively washed out by suitable complexing agent, such as ammonium citrate or nitrilotriacetate. [ 4 ]
Holmium oxide is one of the colorants used for cubic zirconia and glass , providing yellow or red coloring. [ 11 ] Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid ) have sharp optical absorption peaks in the spectral range 200-900 nm. They are therefore used as a calibration standard for optical spectrophotometers [ 12 ] [ 13 ] and are available commercially. [ 14 ] Like most other oxides of rare-earth elements, holmium oxide is used as a specialty catalyst , phosphor and laser material. The holmium laser operates at a wavelength of about 2.08 micrometres, in either pulsed or continuous regime. This laser is eye-safe and is used in medicine, lidars , wind velocity measurements and atmosphere monitoring. [ 15 ]
Holmium(III) oxide is, compared to many other compounds, not very dangerous, although repeated overexposure can cause granuloma and hemoglobinemia . It has low oral, dermal and inhalation toxicities and is non-irritating. The acute oral median lethal dose (LD 50 ) is greater than 1 g per kilogram of body weight. [ 16 ] | https://en.wikipedia.org/wiki/Ho2O3 |
Holmium titanate is an inorganic compound with the chemical formula Ho 2 Ti 2 O 7 .
Holmium titanate is a spin ice material [ 3 ] like dysprosium titanate and holmium stannate . [ 4 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Ho2Ti2O7 |
Holmium(III) fluoride is an inorganic compound with a chemical formula of HoF 3 .
Holmium(III) fluoride can be produced by reacting holmium oxide and ammonium fluoride , then crystallising it from the ammonium salt formed in solution: [ 3 ]
It can also be prepared by directly reacting holmium with fluorine : [ 4 ]
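The direct fluorination is described by the balanced equation 2 Ho + 3 F 2 → 2 HoF 3 .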
Holmium(III) fluoride is a yellowish powder that is insoluble in water. [ 5 ] It has an orthorhombic crystal system (corresponding to β-YF 3 [ 6 ] ) with the space group Pnma (space group no. 62). [ 7 ] However, there is also a trigonal low-temperature form of the lanthanum(III) fluoride type. [ 3 ] | https://en.wikipedia.org/wiki/HoF3 |
Hoarding or caching in animal behavior is the storage of food in locations hidden from the sight of both conspecifics (animals of the same or closely related species) and members of other species. [ 1 ] Most commonly, the function of hoarding or caching is to store food in times of surplus for times when food is less plentiful. However, there is evidence that a certain amount of caching or hoarding is actually undertaken with the aim of ripening the food so stored, and this practice is thus referred to as ‘ripening caching’. [ 2 ] The term hoarding is most typically used for rodents , whereas caching is more commonly used in reference to birds , but the behaviors in both animal groups are quite similar.
Hoarding is done either on a long-term basis—cached on a seasonal cycle, with food to be consumed months down the line—or on a short-term basis, in which case the food will be consumed over a period of one or several days.
Some common animals that cache their food are rodents such as hamsters and squirrels , and many different bird species, such as rooks and woodpeckers . The western scrub jay is noted for its particular skill at caching. There are two types of caching behavior: larder hoarding, where a species creates a few large caches which it often defends, and scatter hoarding, where a species will create multiple caches, often with each individual food item stored in a unique place. Both types of caching have their advantages.
Caching behavior is typically a way to save excess edible food for later consumption—either soon to be eaten food, such as when a jaguar hangs partially eaten prey from a tree to be eaten within a few days, or long term, where the food is hidden and retrieved many months later. Caching is a common adaptation to seasonal changes in food availability. In regions where winters are harsh, food availability typically becomes low, and caching food during the times of high food availability in the warmer months provides a significant survival advantage. For species that hoard perishable food, weather can significantly affect the accumulation, use and rotting of the stored food. [ 4 ] This phenomenon is referenced in the fable The Ant and the Grasshopper .
However, in ripening caching behavior, animals collect and cache food which is immediately inedible but will become "ripe" and edible after a short while. For instance, tayras (a Central American weasel) have been observed to harvest whole green plantains , hide them, and then come back to eat them after they have ripened. [ 2 ] Leafcutter ants harvest pieces of inedible leaves and then cache them in underground chambers to ripen with a fungus which is the main food for the colony. Contrary to popular belief, there is no scientific evidence that crocodilians such as the American Alligator cache large prey underwater to consume later. [ 5 ]
Scatter hoarding is the formation of a large number of small hoards. This behavior is present in both birds (especially the Canada jay ) and small mammals, mainly squirrels and other rodents, such as the eastern gray squirrel , fox squirrel , and wood mouse . Specifically, those who do not migrate to warmer climates or hibernate for winter are most likely to scatter hoard. [ 6 ] [ 7 ] [ 8 ] This behavior plays an important part in seed dispersal , as those seeds that are left uneaten will have a chance to germinate , thus enabling plants to spread their populations effectively. While it is clear why some animals scatter their food caches, there is still the question of why they would store the food outside of their bodies in the first place. The reason for this is that scatter hoarders must remain active during the caching period in order to hide the most food in the most places possible. Storing the food inside their body would reduce their mobility and be counterproductive to this objective. [ 9 ]
Cache spacing is the primary technique that scatter hoarders use to protect food from pilferers. By spreading the food supply around geographically, hoarders discourage competitors who happen upon a cache from conducting area-restricted searching for more of the supply. Despite cache spacing, hoarders are still unable to eliminate the threat of pilferage . [ 10 ] However, having multiple cache sites is costly because it requires a good spatial memory. Scatter-hoarders generally have larger hippocampi [ 11 ] than animals that do not participate in scatter-hoarding behavior. Additionally, studies have shown that hippocampal volume in scatter-hoarders varies seasonally [ 12 ] and based on the harshness of the climate that the animal lives in. [ 13 ]
In larder hoarding , the hoard is large and is found in a single place termed a larder , which usually also serves as the nest where the animal lives. Hamsters are famous larder hoarders. Indeed, the German verb "hamstern" (to hoard) is derived from the noun "Hamster" which refers to the rodent; similar verbs are found in various related languages ( Dutch hamsteren , and Swedish hamstra ). Other languages also draw a clear connection between hamsters and hoarding: the Polish chomikować derives from chomik (hamster), and in Hebrew the word for hamster, oger (אוגר), comes from the verb le'egor (לאגור), to hoard. A disadvantage of larder hoarding is that if a cache is raided, this is far more problematic for the animal than if it were a scatter hoarder. While the hoard is much easier to remember the location of, these larger hoards must also be more staunchly defended.
Most species are particularly wary of onlooking individuals during caching and ensure that the cache locations are secret . [ 14 ] [ 15 ] [ 16 ] Not all caches are concealed however, for example shrikes store prey items on thorns on branches in the open. [ 17 ]
Although a small handful of species share food stores, food hoarding is a solo endeavor for most species, including almost all rodents and birds. For example, a number of jays live in large family groups, but they don't demonstrate sharing of cached food. Rather, they hoard their food supply selfishly, caching and retrieving the supply in secret. [ 18 ]
There are only two species in which kin selection has resulted in a shared food store, i.e. beavers ( Castor canadensis ) and acorn woodpeckers ( Melanerpes formicivorous ); the former live in family groups and construct winter larders of submerged branches, while the latter are unusual in that they construct a conspicuous communal larder. [ 19 ]
Wolves , foxes , and coyotes identify their food caches by scent-marking them, [ 20 ] [ 1 ] usually after they have been emptied. [ 3 ]
Pilferage occurs when one animal takes food from another animal's larder.
Some species experience high levels of cache pilferage, up to 30% of the supply per day. Models of scatter hoarding [ 21 ] [ 22 ] [ 23 ] suggested that the value of cached food is equal to the hoarder's ability to retrieve it. [ 24 ]
It has been observed that members of certain species, such as rodents and chickadees , act as both hoarder and pilferer. In other words, pilfering can be reciprocal and, thus, tolerable. [ 24 ]
Animals recache the food that they have pilfered from other animals' caches.
For example, 75% of mildly radioactive (thus traceable ) Jeffrey pine seeds cached by yellow pine chipmunks were found in two cache sites, 29% of the seeds were found in three sites, 9.4% were found in four sites and 1.3% were found in five sites over a 3-month period. [ 25 ] These results, and those from other studies, demonstrate the dynamic nature of the food supplies of scatter-hoarding animals.
Group-foraging common ravens , ( Corvus corax ), scatter hoard their food and also raid the caches made by others. Cachers withdraw from conspecifics when hiding their food and most often place their caches behind structures, obstructing the view of potential observers. Raiders watch inconspicuously and keep at a distance to cachers close to their cache sites. In response to the presence of potential raiders or because of their initial movements towards caches, the cachers frequently interrupt caching, change cache sites, or recover their food items. These behaviors suggest that ravens are capable of withholding information about their intentions, which may qualify as tactical deception . [ 26 ]
Similarly, Eurasian jays ( Garrulus glandarius ) when being watched by another jay, prefer to cache food behind an opaque barrier rather than a transparent barrier, suggesting they may opt to cache in out-of-view locations to reduce the likelihood of other jays pilfering their caches. [ 27 ] | https://en.wikipedia.org/wiki/Hoarding_(animal_behavior) |
Hobart Hurd Willard (June 3, 1881 – May 7, 1974) was an analytical chemist and inorganic chemist who spent most of his career at the University of Michigan . He was known for his teaching skill and his authorship of widely used textbooks. His research interests were wide-ranging and involved the characterization of perchloric acid and periodic acid salts. [ 1 ] [ 2 ]
Willard was born on June 3, 1881, in Erie, Pennsylvania . His family relocated to Union City, Michigan , in 1883 and he spent the rest of his early life there. His father and later his high school teachers encouraged his interest in chemistry, which he pursued as an undergraduate at the University of Michigan . He received his A.B. in 1903 and his M.A. in 1905. Meanwhile, he was briefly hired as an instructor of chemistry, but at the encouragement of coworkers he decided to pursue his Ph.D. at Harvard University . He received his Ph.D. in 1909 under the supervision of Theodore William Richards . [ 2 ]
After finishing his Ph.D., Willard returned to Michigan to rejoin the faculty; he became a full professor in 1922 and retired from the university, assuming professor emeritus status, in 1951. He was designated the Henry Russel Lecturer in 1948, noted as the university's highest distinction. [ 1 ] [ 3 ] He was known for his strong teaching skills and continued teaching at a variety of institutions after his retirement. [ 2 ]
During his tenure at Michigan, Willard wrote several widely used and positively reviewed chemistry textbooks and laboratory course manuals, [ 4 ] [ 5 ] often with former students as coauthors. [ 2 ] He also performed consulting work for local industry throughout his career, serving as the Director of the Chemistry and Metallurgy Laboratories for Detroit's Bureau of Aircraft Production in 1917-18 and as a long-term consultant for the Parker Rust-Proof Company . [ 2 ]
Willard served as a director of the American Chemical Society from 1934 to 1940 and received the ACS' Fisher Award in Analytical Chemistry in 1951. He was the inaugural recipient of the Anachem Award given by the Association of Analytical Chemists in 1953. [ 2 ]
Willard's research interests focused on analytical chemistry and quantitative analysis of inorganic substances. With student G. Frederick Smith , he was particularly productive in studying perchloric acid and periodic acid salts. [ 1 ] [ 3 ] In addition, he is credited with important work in determining precise atomic weights of chemical elements such as lithium , silver , and antimony , and with development of metal alloy techniques. [ 3 ]
Willard and his wife Margaret had two daughters, Ann and Nancy. Willard was a photography enthusiast and hobbyist who occasionally sold his work. He died in Ann Arbor, Michigan , on May 7, 1974. [ 2 ] Michigan holds a named professorship in his honor; the current Willard Professor of Chemistry is Robert T. Kennedy . [ 6 ] | https://en.wikipedia.org/wiki/Hobart_Hurd_Willard |
The Hobbesian trap (or Schelling's dilemma ) is a theory that explains why preemptive strikes occur between two groups, out of bilateral fear of an imminent attack. Without outside influences, this situation will lead to a fear spiral ( catch-22 , vicious circle , Nash equilibrium ) in which fear will lead to an arms race which in turn will lead to increasing fear. The Hobbesian trap can be explained in terms of game theory . Although cooperation would be the better outcome for both sides, mutual distrust leads to the adoption of strategies that have negative outcomes for both individual players and all players combined. [ 1 ] The theory has been used to explain outbreaks of conflicts and violence , spanning from individuals to states. [ 2 ]
An early example of Hobbesian trap reasoning is Thucydides 's analysis of the Peloponnesian War in Ancient Greece . Thucydides argued that fear and distrust towards the other side led to an escalation of violence. [ 3 ] The theory is most commonly associated with English philosopher Thomas Hobbes . Nobel Prize winner Thomas Schelling also saw fear as a motive for conflict. Applying game theory to the Cold War and nuclear strategy, Schelling's view was that in situations where two parties are in conflict but share a common interest, the two sides will often reach a tacit agreement rather than resort to open conflict. [ 4 ]
Psychology professor Steven Pinker is a proponent of the theory of the Hobbesian trap and has applied the theory to many conflicts and outbreaks of violence between people, groups, tribes, societies and states. [ 2 ] [ 5 ] Issues of gun control have been described as a Hobbesian trap. [ 6 ] A common example is the dilemma that both the armed burglar and the armed homeowner face when they meet each other. Neither side may want to shoot, but both are afraid of the other party shooting first so they may be inclined to fire preemptively, although the favorable outcome for both parties would be that nobody be shot. [ 7 ] [ 8 ]
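The armed burglar/homeowner example can be made concrete with a small expected-payoff calculation. The Python sketch below uses assumed payoff values chosen only to illustrate the structure of the trap; the numbers, and the one-third threshold they imply, are illustrative assumptions rather than figures from the literature.

# payoffs[(my_action, their_action)] = my payoff; values are assumptions
ACTIONS = ("hold_fire", "shoot")
PAYOFFS = {
    ("hold_fire", "hold_fire"): 0,    # nobody is shot: best joint outcome
    ("hold_fire", "shoot"):     -10,  # I am shot
    ("shoot",     "hold_fire"): -1,   # I shoot first; legal/moral cost
    ("shoot",     "shoot"):     -8,   # exchange of fire
}

def best_response(p_other_shoots):
    # my expected-payoff-maximizing action, given my fear of the other side
    def expected(action):
        return ((1 - p_other_shoots) * PAYOFFS[(action, "hold_fire")]
                + p_other_shoots * PAYOFFS[(action, "shoot")])
    return max(ACTIONS, key=expected)

for fear in (0.0, 0.2, 0.5, 0.9):
    print(f"fear={fear:.1f} -> {best_response(fear)}")

With these payoffs, once the perceived probability that the other side shoots exceeds one third, shooting preemptively becomes the best response even though mutual restraint is the best joint outcome: the fear spiral in miniature.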
A similar example between two states is the Cuban Missile Crisis . Fear and mutual distrust between the actors increased the likelihood of a preemptive strike. [ 7 ] Hobbesian traps in nuclear weapons' case can be defused if both sides can threaten second strike , which is the capacity to retaliate with nuclear force after the first attack. This is the basis of mutual assured destruction . [ 9 ]
The Dark Forest , a science fiction novel by Liu Cixin , incorporates a Hobbesian trap into its narrative. The dark forest hypothesis , both diegetically and non-diegetically to the novel, is a form of the Hobbesian trap that has been used to answer the Fermi Paradox by arguing that any two advanced space-faring civilizations will inevitably seek to destroy each other rather than risk being destroyed by the other, like two scared armed men prowling through a dark forest, ready to shoot at anything that so much as snaps a twig. [ 10 ]
The Hobbesian trap can be avoided by influences that increase the trust between the two parties. [ 1 ] In Hobbes' case, the Hobbesian trap would be present in the state of nature where, in the absence of law and law enforcement , the credible threat of violence from others may justify preemptive attacks. For Hobbes, we avoid this problem by naming a ruler who pledges to punish violence with violence. [ 11 ] [ 12 ] In the Cuban Missile Crisis, for example, Kennedy and Khrushchev realized that they were caught in a Hobbesian trap, which helped them to make concessions that reduced distrust and fear. [ 7 ] | https://en.wikipedia.org/wiki/Hobbesian_trap |
Hobbs meter is a generic trademark for devices used in aviation to measure the time that an aircraft is in use. The meters typically display hours and tenths of an hour, but there are several ways in which the meter may be activated:
The Hobbs meter is named after John Weston Hobbs (1889–1968), who in 1938 founded the company named after him in Springfield, Illinois , which manufactured the first electrically wound clocks for vehicle use. World War II created the demand for aviation hour meters which led to the development of the original Hobbs meter. The company was eventually renamed Honeywell Hobbs after being acquired by Honeywell International , who in 2009 announced plans to move manufacturing to Mexico. [ 1 ]
In 2022, Honeywell obsoleted all of their hour meters including the Hobbs meter line. [ 2 ]
For general aviation , "Hobbs time" is usually recorded in the pilot's log book , and many fixed-base operators that rent airplanes charge an hourly rate based on Hobbs time. Tachometer time or " tach time " is recorded in the engine's log books and is used, for example, to determine when the oil should be changed and the time between overhauls . Tach time differs from Hobbs time in that it is linked to engine revolutions per minute (RPM). Tach time records the time at a specific RPM. It is most accurate at cruise RPM, and least accurate while taxiing or stationary with the engine running. At these times, the clock runs slower. Depending on the type of flight, tach time can be 10–20% less than Hobbs time. Many organizations, such as flying clubs , charge by tach time so as to differentiate themselves from fixed-base operators as 10–20% less time recorded makes it 10–20% cheaper to fly (if the hourly rate is the same). In the case where flying clubs use tach time, many will charge a "dry rate", requiring the renter to pay for fuel on top of the hourly tach time rate. [ citation needed ] | https://en.wikipedia.org/wiki/Hobbs_meter |
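The 10–20% gap between the two measures is simple proportional arithmetic, as the sketch below shows. The flight profile, the reference RPM, and the per-phase RPMs are assumptions for illustration; actual recording rates vary by installation.

CRUISE_RPM = 2400  # assumed RPM at which the tach meter records 1:1

# (phase, duration in hours, typical RPM); illustrative profile
flight = [
    ("taxi/run-up", 0.3, 1000),
    ("climb", 0.2, 2500),
    ("cruise", 1.2, 2400),
    ("descent/landing", 0.3, 1800),
]

hobbs = sum(dur for _, dur, _ in flight)  # clock time with engine running
tach = sum(dur * rpm / CRUISE_RPM for _, dur, rpm in flight)

print(f"Hobbs: {hobbs:.1f} h, tach: {tach:.2f} h "
      f"({100 * (1 - tach / hobbs):.0f}% less)")  # about 12% less here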
In mathematics , and in particular the necklace splitting problem , the Hobby–Rice theorem is a result that is useful in establishing the existence of certain solutions. It was proved in 1965 by Charles R. Hobby and John R. Rice ; [ 1 ] a simplified proof was given in 1976 by A. Pinkus. [ 2 ]
Define a partition of the interval [0,1] as a division of the interval into n + 1 {\displaystyle n+1} subintervals by an increasing sequence of n {\displaystyle n} numbers: {\displaystyle 0=z_{0}<z_{1}<\cdots <z_{n}<z_{n+1}=1.}
Define a signed partition as a partition in which each subinterval i {\displaystyle i} has an associated sign δ i {\displaystyle \delta _{i}} : {\displaystyle \delta _{i}\in \{+1,-1\}.}
The Hobby–Rice theorem says that for every n continuously integrable functions: {\displaystyle g_{1},\ldots ,g_{n}\colon [0,1]\longrightarrow \mathbb {R} ,}
there exists a signed partition of [0,1] such that: {\displaystyle \sum _{i=1}^{n+1}\delta _{i}\int _{z_{i-1}}^{z_{i}}g_{j}(z)\,dz=0\qquad (1\leq j\leq n)}
(in other words: for each of the n functions, its integral over the positive subintervals equals its integral over the negative subintervals).
The theorem was used by Noga Alon in the context of necklace splitting [ 3 ] in 1987.
Suppose the interval [0,1] is a cake . There are n partners and each of the n functions is a value-density function of one partner. We want to divide the cake into two parts such that all partners agree that the parts have the same value. This fair-division challenge is sometimes referred to as the consensus-halving problem. [ 4 ] The Hobby–Rice theorem implies that this can be done with n cuts.
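For the simplest case n = 1, a single cut t with equal integrals over [0, t] and [t, 1] can be found numerically by bisection, because the difference of the two integrals is continuous in t and changes sign over [0,1]. A minimal Python sketch follows; the density g(x) = 1 + x² is an arbitrary positive example, not from the theorem's proof.

def G(t):
    # antiderivative of the example density g(x) = 1 + x^2, with G(0) = 0
    return t + t ** 3 / 3.0

def half_difference(t):
    # integral over [0, t] minus integral over [t, 1] equals 2*G(t) - G(1)
    return 2.0 * G(t) - G(1.0)

lo, hi = 0.0, 1.0
for _ in range(60):  # bisection: half_difference is continuous and increasing
    mid = 0.5 * (lo + hi)
    if half_difference(mid) < 0.0:
        lo = mid
    else:
        hi = mid
print(f"cut at t = {0.5 * (lo + hi):.6f}")  # about 0.5961 for this density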
This mathematical analysis –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hobby–Rice_theorem |
In algebra , the Hochster–Roberts theorem , introduced by Melvin Hochster and Joel L. Roberts in 1974, [ 1 ] states that rings of invariants of linearly reductive groups acting on regular rings are Cohen–Macaulay .
In other words, [ 2 ] if V is a rational representation of a linearly reductive group G over a field k , then there exist algebraically independent invariant homogeneous polynomials f 1 , ⋯ , f d {\displaystyle f_{1},\cdots ,f_{d}} such that k [ V ] G {\displaystyle k[V]^{G}} is a free finite graded module over k [ f 1 , ⋯ , f d ] {\displaystyle k[f_{1},\cdots ,f_{d}]} .
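A standard small illustration (not from the cited papers): let the two-element group G = {±1} act on V = k 2 , with k a field of characteristic 0, by v ↦ −v. Then k[V] G = k[x 2 , xy, y 2 ], and taking f 1 = x 2 and f 2 = y 2 , the invariant ring is the free graded k[f 1 , f 2 ]-module with basis {1, xy}, hence Cohen–Macaulay as the theorem predicts.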
In 1987, Jean-François Boutot proved [ 3 ] that if a variety over a field of characteristic 0 has rational singularities then so does its quotient by the action of a reductive group; this implies the Hochster–Roberts theorem in characteristic 0 as rational singularities are Cohen–Macaulay.
In characteristic p >0, there are examples of groups that are reductive (or even finite) acting on polynomial rings whose rings of invariants are not Cohen–Macaulay.
This algebra -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hochster–Roberts_theorem |
In combinatorics , the hockey-stick identity , [ 1 ] Christmas stocking identity , [ 2 ] boomerang identity , Fermat's identity or Chu's Theorem , [ 3 ] states that if n ≥ r ≥ 0 {\displaystyle n\geq r\geq 0} are integers, then {\displaystyle \sum _{i=r}^{n}{i \choose r}={n+1 \choose r+1}.}
The name stems from the graphical representation of the identity on Pascal's triangle : when the addends represented in the summation and the sum itself are highlighted, the shape revealed is vaguely reminiscent of those objects (see hockey stick , Christmas stocking ).
Using sigma notation , the identity states {\displaystyle \sum _{i=r}^{n}{i \choose r}={n+1 \choose r+1}\qquad {\text{for }}n,r\in \mathbb {N} ,\ n\geq r,}
or equivalently, as the mirror-image obtained by the substitution j → i − r {\displaystyle j\to i-r} : {\displaystyle \sum _{j=0}^{n-r}{j+r \choose j}={n+1 \choose r+1},} which, by using the identity ( n k ) = ( n n − k ) {\displaystyle {n \choose k}={n \choose n-k}} , can also be written {\displaystyle \sum _{j=0}^{n-r}{j+r \choose j}={n+1 \choose n-r}.}
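Both forms are easy to check numerically for small parameters; a quick sanity check (not a proof) in Python:

from math import comb

for n in range(12):
    for r in range(n + 1):
        assert sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)
        assert sum(comb(j + r, j) for j in range(n - r + 1)) == comb(n + 1, n - r)
print("hockey-stick identity holds for all 0 <= r <= n < 12")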
The inductive and algebraic proofs both make use of Pascal's identity : {\displaystyle {n \choose k}={n-1 \choose k-1}+{n-1 \choose k}.}
This identity can be proven by mathematical induction on n {\displaystyle n} .
Base case Let n = r {\displaystyle n=r} ; then {\displaystyle \sum _{i=r}^{r}{i \choose r}={r \choose r}=1={r+1 \choose r+1},} so the identity holds.
Inductive step Suppose, for some k ∈ N , k ⩾ r {\displaystyle k\in \mathbb {N} ,k\geqslant r} , {\displaystyle \sum _{i=r}^{k}{i \choose r}={k+1 \choose r+1}.}
Then {\displaystyle \sum _{i=r}^{k+1}{i \choose r}=\left(\sum _{i=r}^{k}{i \choose r}\right)+{k+1 \choose r}={k+1 \choose r+1}+{k+1 \choose r}={k+2 \choose r+1},} where the final step applies Pascal's identity, completing the induction.
We use a telescoping argument to simplify the computation of the sum. By Pascal's identity, {\displaystyle {i \choose r}={i+1 \choose r+1}-{i \choose r+1},} so {\displaystyle \sum _{i=r}^{n}{i \choose r}=\sum _{i=r}^{n}\left[{i+1 \choose r+1}-{i \choose r+1}\right]={n+1 \choose r+1}-{r \choose r+1}={n+1 \choose r+1},} since {\displaystyle {r \choose r+1}=0.}
Imagine that we are distributing n {\displaystyle n} indistinguishable candies to k {\displaystyle k} distinguishable children. By a direct application of the stars and bars method , there are {\displaystyle {n+k-1 \choose k-1}}
ways to do this. Alternatively, we can first give 0 ⩽ i ⩽ n {\displaystyle 0\leqslant i\leqslant n} candies to the oldest child so that we are essentially giving n − i {\displaystyle n-i} candies to k − 1 {\displaystyle k-1} kids and again, with stars and bars and double counting , we have {\displaystyle {n+k-1 \choose k-1}=\sum _{i=0}^{n}{n-i+k-2 \choose k-2},}
which simplifies to the desired result by taking n ′ = n + k − 2 {\displaystyle n'=n+k-2} and r = k − 2 {\displaystyle r=k-2} , and noticing that n ′ − n = k − 2 = r {\displaystyle n'-n=k-2=r} : {\displaystyle \sum _{i=0}^{n}{n'-i \choose r}=\sum _{j=r}^{n'}{j \choose r}={n'+1 \choose r+1}={n+k-1 \choose k-1}.}
We can form a committee of size k + 1 {\displaystyle k+1} from a group of n + 1 {\displaystyle n+1} people in {\displaystyle {n+1 \choose k+1}}
ways. Now we hand out the numbers 1 , 2 , 3 , … , n − k + 1 {\displaystyle 1,2,3,\dots ,n-k+1} to n − k + 1 {\displaystyle n-k+1} of the n + 1 {\displaystyle n+1} people. We can then divide our committee-forming process into n − k + 1 {\displaystyle n-k+1} exhaustive and disjoint cases based on the committee member with the lowest number, x {\displaystyle x} . Note that there are only k {\displaystyle k} people without numbers, meaning we must choose at least one person with a number in order to form a committee of k + 1 {\displaystyle k+1} people. In general, in case x {\displaystyle x} , person x {\displaystyle x} is on the committee and persons 1 , 2 , 3 , … , x − 1 {\displaystyle 1,2,3,\dots ,x-1} are not on the committee. The rest of the committee can then be chosen in {\displaystyle {n+1-x \choose k}}
ways. Now we can sum the values of these n − k + 1 {\displaystyle n-k+1} disjoint cases, and using double counting , we obtain {\displaystyle \sum _{x=1}^{n-k+1}{n+1-x \choose k}={n+1 \choose k+1}.}
Let X = 1 + x {\displaystyle X=1+x} . Then, by the partial sum formula for geometric series , we find that {\displaystyle \sum _{k=0}^{n-r}X^{r+k}=X^{r}\cdot {\frac {X^{n-r+1}-1}{X-1}}={\frac {X^{n+1}-X^{r}}{x}}.}
Further, by the binomial theorem , we also find that
{\displaystyle X^{r+k}=(1+x)^{r+k}=\sum _{i=0}^{r+k}{\binom {r+k}{i}}x^{i}.}
Note that this means the coefficient of x r {\displaystyle x^{r}} in X r + k {\displaystyle X^{r+k}} is given by ( r + k r ) {\displaystyle {\binom {r+k}{r}}} .
Thus, the coefficient of x r {\displaystyle x^{r}} in the left hand side of our first equation can be obtained by summing over the coefficients of x r {\displaystyle x^{r}} from each term, which gives
{\displaystyle \sum _{k=0}^{n-r}{\binom {r+k}{r}}.}
Similarly, we find that the coefficient of x r {\displaystyle x^{r}} on the right hand side is given by the coefficient of x r + 1 {\displaystyle x^{r+1}} in X n + 1 − X r {\displaystyle X^{n+1}-X^{r}} , which is
{\displaystyle {\binom {n+1}{r+1}}-{\binom {r}{r+1}}={\binom {n+1}{r+1}}.}
Therefore, we can compare the coefficients of x r {\displaystyle x^{r}} on each side of the equation to find that
{\displaystyle \sum _{k=0}^{n-r}{\binom {r+k}{r}}={\binom {n+1}{r+1}}.} | https://en.wikipedia.org/wiki/Hockey-stick_identity |
In mathematics , the Hodge star operator or Hodge star is a linear map defined on the exterior algebra of a finite-dimensional oriented vector space endowed with a nondegenerate symmetric bilinear form . Applying the operator to an element of the algebra produces the Hodge dual of the element. This map was introduced by W. V. D. Hodge .
For example, in an oriented 3-dimensional Euclidean space, an oriented plane can be represented by the exterior product of two basis vectors, and its Hodge dual is the normal vector given by their cross product ; conversely, any vector is dual to the oriented plane perpendicular to it, endowed with a suitable bivector. Generalizing this to an n -dimensional vector space, the Hodge star is a one-to-one mapping of k -vectors to ( n – k ) -vectors; the dimensions of these spaces are the binomial coefficients ( n k ) = ( n n − k ) {\displaystyle {\tbinom {n}{k}}={\tbinom {n}{n-k}}} .
The naturalness of the star operator means it can play a role in differential geometry when applied to the cotangent bundle of a pseudo-Riemannian manifold , and hence to differential k -forms . This allows the definition of the codifferential as the Hodge adjoint of the exterior derivative , leading to the Laplace–de Rham operator . This generalizes the case of 3-dimensional Euclidean space, in which divergence of a vector field may be realized as the codifferential opposite to the gradient operator, and the Laplace operator on a function is the divergence of its gradient. An important application is the Hodge decomposition of differential forms on a closed Riemannian manifold.
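The three-dimensional Euclidean case can be made concrete in components. In the Python sketch below, bivectors are stored by their coefficients in the basis (e 2 ∧e 3 , e 3 ∧e 1 , e 1 ∧e 2 ), a convention chosen here so that the star leaves component arrays unchanged; this is an illustration of the cross-product correspondence, not a general implementation.

import numpy as np

def wedge(u, v):
    # bivector u ^ v, stored by its components in (e2^e3, e3^e1, e1^e2)
    return np.array([
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    ])

def star(components):
    # For n = 3 with the Euclidean metric, star(e1) = e2^e3, star(e2) = e3^e1,
    # star(e3) = e1^e2 (and back), so in these paired bases the star keeps the
    # component array unchanged while swapping its interpretation.
    return components.copy()

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.0, 1.0, 3.0])
print(star(wedge(u, v)))  # the 1-vector dual to u ^ v
print(np.cross(u, v))     # identical: [ 6. -3.  1.]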
Let V be an n -dimensional oriented vector space with a nondegenerate symmetric bilinear form ⟨ ⋅ , ⋅ ⟩ {\displaystyle \langle \cdot ,\cdot \rangle } , referred to here as a scalar product. (In more general contexts such as pseudo-Riemannian manifolds and Minkowski space , the bilinear form may not be positive-definite.) This induces a scalar product on k -vectors α , β ∈ ⋀ k V {\textstyle \alpha ,\beta \in \bigwedge ^{\!k}V} , for 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} , by defining it on simple k -vectors α = α 1 ∧ ⋯ ∧ α k {\displaystyle \alpha =\alpha _{1}\wedge \cdots \wedge \alpha _{k}} and β = β 1 ∧ ⋯ ∧ β k {\displaystyle \beta =\beta _{1}\wedge \cdots \wedge \beta _{k}} to equal the Gram determinant [ 1 ] : 14 {\displaystyle \langle \alpha ,\beta \rangle =\det \left(\left\langle \alpha _{i},\beta _{j}\right\rangle _{i,j=1}^{k}\right)}
extended to ⋀ k V {\textstyle \bigwedge ^{\!k}V} through linearity.
The unit n -vector ω ∈ ⋀ n V {\displaystyle \omega \in {\textstyle \bigwedge }^{\!n}V} is defined in terms of an oriented orthonormal basis { e 1 , … , e n } {\displaystyle \{e_{1},\ldots ,e_{n}\}} of V as:
ω = e 1 ∧ ⋯ ∧ e n . {\displaystyle \omega =e_{1}\wedge \cdots \wedge e_{n}.}
(Note: In the general pseudo-Riemannian case, orthonormality means ⟨ e i , e j ⟩ ∈ { δ i j , − δ i j } {\displaystyle \langle e_{i},e_{j}\rangle \in \{\delta _{ij},-\delta _{ij}\}} for all pairs of basis vectors.)
The Hodge star operator is a linear operator on the exterior algebra of V , mapping k -vectors to ( n – k )-vectors, for 0 ≤ k ≤ n {\displaystyle 0\leq k\leq n} . It has the following property, which defines it completely: [ 1 ] : 15
α ∧ ( ⋆ β ) = ⟨ α , β ⟩ ω {\displaystyle \alpha \wedge ({\star }\beta )=\langle \alpha ,\beta \rangle \,\omega } for all k -vectors α , β ∈ ⋀ k V {\textstyle \alpha ,\beta \in \bigwedge ^{\!k}V} .
Dually, in the space ⋀ n V ∗ {\displaystyle {\textstyle \bigwedge }^{\!n}V^{*}} of n -forms (alternating n -multilinear functions on V n {\displaystyle V^{n}} ), the dual to ω {\displaystyle \omega } is the volume form det {\displaystyle \det } , the function whose value on v 1 ∧ ⋯ ∧ v n {\displaystyle v_{1}\wedge \cdots \wedge v_{n}} is the determinant of the n × n {\displaystyle n\times n} matrix assembled from the column vectors of v j {\displaystyle v_{j}} in e i {\displaystyle e_{i}} -coordinates. Applying det {\displaystyle \det } to the above equation, we obtain the dual definition:
det ( α ∧ ⋆ β ) = ⟨ α , β ⟩ . {\displaystyle \det(\alpha \wedge {\star }\beta )=\langle \alpha ,\beta \rangle .}
Equivalently, taking α = α 1 ∧ ⋯ ∧ α k {\displaystyle \alpha =\alpha _{1}\wedge \cdots \wedge \alpha _{k}} , β = β 1 ∧ ⋯ ∧ β k {\displaystyle \beta =\beta _{1}\wedge \cdots \wedge \beta _{k}} , and ⋆ β = β 1 ⋆ ∧ ⋯ ∧ β n − k ⋆ {\displaystyle {\star }\beta =\beta _{1}^{\star }\wedge \cdots \wedge \beta _{n-k}^{\star }} :
det ( α 1 ∧ ⋯ ∧ α k ∧ β 1 ⋆ ∧ ⋯ ∧ β n − k ⋆ ) = ⟨ α , β ⟩ . {\displaystyle \det \left(\alpha _{1}\wedge \cdots \wedge \alpha _{k}\wedge \beta _{1}^{\star }\wedge \cdots \wedge \beta _{n-k}^{\star }\right)=\langle \alpha ,\beta \rangle .}
This means that, writing an orthonormal basis of k -vectors as e I = e i 1 ∧ ⋯ ∧ e i k {\displaystyle e_{I}\ =\ e_{i_{1}}\wedge \cdots \wedge e_{i_{k}}} over all subsets I = { i 1 < ⋯ < i k } {\displaystyle I=\{i_{1}<\cdots <i_{k}\}} of [ n ] = { 1 , … , n } {\displaystyle [n]=\{1,\ldots ,n\}} , the Hodge dual is the ( n – k )-vector corresponding to the complementary set I ¯ = [ n ] ∖ I = { i ¯ 1 < ⋯ < i ¯ n − k } {\displaystyle {\bar {I}}=[n]\smallsetminus I=\left\{{\bar {i}}_{1}<\cdots <{\bar {i}}_{n-k}\right\}} :
⋆ e I = s t e I ¯ , {\displaystyle {\star }e_{I}=st\,e_{\bar {I}},}
where s ∈ { 1 , − 1 } {\displaystyle s\in \{1,-1\}} is the sign of the permutation i 1 ⋯ i k i ¯ 1 ⋯ i ¯ n − k {\displaystyle i_{1}\cdots i_{k}{\bar {i}}_{1}\cdots {\bar {i}}_{n-k}} and t ∈ { 1 , − 1 } {\displaystyle t\in \{1,-1\}} is the product ⟨ e i 1 , e i 1 ⟩ ⋯ ⟨ e i k , e i k ⟩ {\displaystyle \langle e_{i_{1}},e_{i_{1}}\rangle \cdots \langle e_{i_{k}},e_{i_{k}}\rangle } . In the Riemannian case, t = 1 {\displaystyle t=1} .
Since the Hodge star takes an orthonormal basis to an orthonormal basis, it is an isometry on the exterior algebra ⋀ V {\textstyle \bigwedge V} .
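The basis formula above is easy to implement. The following Python sketch (illustrative; the function names are ad hoc, not from any library) returns the coefficient s t {\displaystyle st} and the complementary index tuple for the Hodge dual of an orthonormal basis k -vector:

def perm_sign(p):
    # (-1)^(number of inversions) of the index tuple p.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def hodge_dual_basis(I, n, metric_signs=None):
    # Hodge dual of e_I for an increasing index tuple I within {1, ..., n}:
    # returns (s*t, complement), with s the sign of the permutation (I, complement)
    # and t the product of <e_i, e_i> over i in I.
    if metric_signs is None:
        metric_signs = (1,) * n  # Euclidean case: all <e_i, e_i> = +1
    I_bar = tuple(i for i in range(1, n + 1) if i not in I)
    s = perm_sign(I + I_bar)
    t = 1
    for i in I:
        t *= metric_signs[i - 1]
    return s * t, I_bar

# Euclidean R^3: *(e1^e2) = e3, *(e1^e3) = -e2, *(e2^e3) = e1.
assert hodge_dual_basis((1, 2), 3) == (1, (3,))
assert hodge_dual_basis((1, 3), 3) == (-1, (2,))
assert hodge_dual_basis((2, 3), 3) == (1, (1,))

Since the output is again, up to sign, a single basis vector, the isometry property is visible directly on basis elements.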
The Hodge star is motivated by the correspondence between a subspace W of V and its orthogonal subspace (with respect to the scalar product), where each space is endowed with an orientation and a numerical scaling factor. Specifically, a non-zero decomposable k -vector w 1 ∧ ⋯ ∧ w k ∈ ⋀ k V {\displaystyle w_{1}\wedge \cdots \wedge w_{k}\in \textstyle \bigwedge ^{\!k}V} corresponds by the Plücker embedding to the subspace W {\displaystyle W} with oriented basis w 1 , … , w k {\displaystyle w_{1},\ldots ,w_{k}} , endowed with a scaling factor equal to the k -dimensional volume of the parallelepiped spanned by this basis (equal to the Gramian , the determinant of the matrix of scalar products ⟨ w i , w j ⟩ {\displaystyle \langle w_{i},w_{j}\rangle } ). The Hodge star acting on a decomposable vector can be written as a decomposable ( n − k )-vector:
⋆ ( w 1 ∧ ⋯ ∧ w k ) = u 1 ∧ ⋯ ∧ u n − k , {\displaystyle {\star }(w_{1}\wedge \cdots \wedge w_{k})=u_{1}\wedge \cdots \wedge u_{n-k},}
where u 1 , … , u n − k {\displaystyle u_{1},\ldots ,u_{n-k}} form an oriented basis of the orthogonal space U = W ⊥ {\displaystyle U=W^{\perp }\!} . Furthermore, the ( n − k )-volume of the u i {\displaystyle u_{i}} -parallelepiped must equal the k -volume of the w i {\displaystyle w_{i}} -parallelepiped, and w 1 , … , w k , u 1 , … , u n − k {\displaystyle w_{1},\ldots ,w_{k},u_{1},\ldots ,u_{n-k}} must form an oriented basis of V {\displaystyle V} .
A general k -vector is a linear combination of decomposable k -vectors, and the definition of Hodge star is extended to general k -vectors by defining it as being linear.
In two dimensions with the normalized Euclidean metric and orientation given by the ordering ( x , y ) , the Hodge star on k -forms is given by ⋆ 1 = d x ∧ d y ⋆ d x = d y ⋆ d y = − d x ⋆ ( d x ∧ d y ) = 1. {\displaystyle {\begin{aligned}{\star }\,1&=dx\wedge dy\\{\star }\,dx&=dy\\{\star }\,dy&=-dx\\{\star }(dx\wedge dy)&=1.\end{aligned}}}
A common example of the Hodge star operator is the case n = 3 , when it can be taken as the correspondence between vectors and bivectors. Specifically, for Euclidean R 3 with the basis d x , d y , d z {\displaystyle dx,dy,dz} of one-forms often used in vector calculus , one finds that ⋆ d x = d y ∧ d z ⋆ d y = d z ∧ d x ⋆ d z = d x ∧ d y . {\displaystyle {\begin{aligned}{\star }\,dx&=dy\wedge dz\\{\star }\,dy&=dz\wedge dx\\{\star }\,dz&=dx\wedge dy.\end{aligned}}}
The Hodge star relates the exterior and cross product in three dimensions: [ 2 ] ⋆ ( u ∧ v ) = u × v ⋆ ( u × v ) = u ∧ v . {\displaystyle {\star }(\mathbf {u} \wedge \mathbf {v} )=\mathbf {u} \times \mathbf {v} \qquad {\star }(\mathbf {u} \times \mathbf {v} )=\mathbf {u} \wedge \mathbf {v} .} Applied to three dimensions, the Hodge star provides an isomorphism between axial vectors and bivectors , so each axial vector a is associated with a bivector A and vice versa, that is: [ 2 ] A = ⋆ a , a = ⋆ A {\displaystyle \mathbf {A} ={\star }\mathbf {a} ,\ \ \mathbf {a} ={\star }\mathbf {A} } .
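This correspondence can be restated numerically; the following sketch (illustrative, with numpy's cross as the reference implementation) reads the components of ⋆ ( u ∧ v ) {\displaystyle {\star }(\mathbf {u} \wedge \mathbf {v} )} off the basis relations above and compares them with the cross product:

import numpy as np

def star_wedge(u, v):
    # Components of *(u ^ v), using *(dy^dz) = dx, *(dz^dx) = dy, *(dx^dy) = dz.
    return np.array([u[1] * v[2] - u[2] * v[1],
                     u[2] * v[0] - u[0] * v[2],
                     u[0] * v[1] - u[1] * v[0]])

rng = np.random.default_rng(0)
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.allclose(star_wedge(u, v), np.cross(u, v))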
The Hodge star can also be interpreted as a form of the geometric correspondence between an axis of rotation and an infinitesimal rotation (see also: 3D rotation group#Lie algebra ) around the axis, with speed equal to the length of the axis of rotation. A scalar product on a vector space V {\displaystyle V} gives an isomorphism V ≅ V ∗ {\displaystyle V\cong V^{*}\!} identifying V {\displaystyle V} with its dual space , and the vector space L ( V , V ) {\displaystyle L(V,V)} is naturally isomorphic to the tensor product V ∗ ⊗ V ≅ V ⊗ V {\displaystyle V^{*}\!\!\otimes V\cong V\otimes V} . Thus for V = R 3 {\displaystyle V=\mathbb {R} ^{3}} , the star mapping ⋆ : V → ⋀ 2 V ⊂ V ⊗ V {\textstyle \textstyle {\star }:V\to \bigwedge ^{\!2}\!V\subset V\otimes V} takes each vector v {\displaystyle \mathbf {v} } to a bivector ⋆ v ∈ V ⊗ V {\displaystyle {\star }\mathbf {v} \in V\otimes V} , which corresponds to a linear operator L v : V → V {\displaystyle L_{\mathbf {v} }:V\to V} . Specifically, L v {\displaystyle L_{\mathbf {v} }} is a skew-symmetric operator, which corresponds to an infinitesimal rotation: that is, the macroscopic rotations around the axis v {\displaystyle \mathbb {v} } are given by the matrix exponential exp ( t L v ) {\displaystyle \exp(tL_{\mathbf {v} })} . With respect to the basis d x , d y , d z {\displaystyle dx,dy,dz} of R 3 {\displaystyle \mathbb {R} ^{3}} , the tensor d x ⊗ d y {\displaystyle dx\otimes dy} corresponds to a coordinate matrix with 1 in the d x {\displaystyle dx} row and d y {\displaystyle dy} column, etc., and the wedge d x ∧ d y = d x ⊗ d y − d y ⊗ d x {\displaystyle dx\wedge dy\,=\,dx\otimes dy-dy\otimes dx} is the skew-symmetric matrix [ 0 1 0 − 1 0 0 0 0 0 ] {\displaystyle \scriptscriptstyle \left[{\begin{array}{rrr}\,0\!\!&\!\!1&\!\!\!\!0\!\!\!\!\!\!\\[-.5em]\,\!-1\!\!&\!\!0\!\!&\!\!\!\!0\!\!\!\!\!\!\\[-.5em]\,0\!\!&\!\!0\!\!&\!\!\!\!0\!\!\!\!\!\!\end{array}}\!\!\!\right]} , etc. That is, we may interpret the star operator as: v = a d x + b d y + c d z ⟶ ⋆ v ≅ L v = [ 0 c − b − c 0 a b − a 0 ] . {\displaystyle \mathbf {v} =a\,dx+b\,dy+c\,dz\quad \longrightarrow \quad {\star }{\mathbf {v} }\ \cong \ L_{\mathbf {v} }\ =\left[{\begin{array}{rrr}0&c&-b\\-c&0&a\\b&-a&0\end{array}}\right].} Under this correspondence, cross product of vectors corresponds to the commutator Lie bracket of linear operators: L u × v = L v L u − L u L v = − [ L u , L v ] {\displaystyle L_{\mathbf {u} \times \mathbf {v} }=L_{\mathbf {v} }L_{\mathbf {u} }-L_{\mathbf {u} }L_{\mathbf {v} }=-\left[L_{\mathbf {u} },L_{\mathbf {v} }\right]} .
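This interpretation is also easy to check numerically. An illustrative numpy sketch (the helper name L is just for this example) builds L v {\displaystyle L_{\mathbf {v} }} from the display above and verifies the commutator identity:

import numpy as np

def L(v):
    # Skew-symmetric matrix corresponding to *v, as in the display above.
    a, b, c = v
    return np.array([[0.0, c, -b],
                     [-c, 0.0, a],
                     [b, -a, 0.0]])

rng = np.random.default_rng(1)
u, v = rng.standard_normal(3), rng.standard_normal(3)

# L_{u x v} = L_v L_u - L_u L_v = -[L_u, L_v]:
assert np.allclose(L(np.cross(u, v)), L(v) @ L(u) - L(u) @ L(v))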
In case n = 4 {\displaystyle n=4} , the Hodge star acts as an endomorphism of the second exterior power (i.e. it maps 2-forms to 2-forms, since 4 − 2 = 2 ). If the signature of the metric tensor is all positive, i.e. on a Riemannian manifold , then the Hodge star is an involution . If the signature is mixed, i.e., pseudo-Riemannian , then applying the operator twice will return the argument up to a sign – see § Duality below. This particular endomorphism property of 2-forms in four dimensions makes self-dual and anti-self-dual two-forms natural geometric objects to study. That is, one can describe the space of 2-forms in four dimensions with a basis that "diagonalizes" the Hodge star operator with eigenvalues ± 1 {\displaystyle \pm 1} (or ± i {\displaystyle \pm i} , depending on the signature).
For concreteness, we discuss the Hodge star operator in Minkowski spacetime where n = 4 {\displaystyle n=4} with metric signature (− + + +) and coordinates ( t , x , y , z ) {\displaystyle (t,x,y,z)} . The volume form is oriented as ε 0123 = 1 {\displaystyle \varepsilon _{0123}=1} . For one-forms , ⋆ d t = − d x ∧ d y ∧ d z , ⋆ d x = − d t ∧ d y ∧ d z , ⋆ d y = − d t ∧ d z ∧ d x , ⋆ d z = − d t ∧ d x ∧ d y , {\displaystyle {\begin{aligned}{\star }dt&=-dx\wedge dy\wedge dz\,,\\{\star }dx&=-dt\wedge dy\wedge dz\,,\\{\star }dy&=-dt\wedge dz\wedge dx\,,\\{\star }dz&=-dt\wedge dx\wedge dy\,,\end{aligned}}} while for 2-forms , ⋆ ( d t ∧ d x ) = − d y ∧ d z , ⋆ ( d t ∧ d y ) = − d z ∧ d x , ⋆ ( d t ∧ d z ) = − d x ∧ d y , ⋆ ( d x ∧ d y ) = d t ∧ d z , ⋆ ( d z ∧ d x ) = d t ∧ d y , ⋆ ( d y ∧ d z ) = d t ∧ d x . {\displaystyle {\begin{aligned}{\star }(dt\wedge dx)&=-dy\wedge dz\,,\\{\star }(dt\wedge dy)&=-dz\wedge dx\,,\\{\star }(dt\wedge dz)&=-dx\wedge dy\,,\\{\star }(dx\wedge dy)&=dt\wedge dz\,,\\{\star }(dz\wedge dx)&=dt\wedge dy\,,\\{\star }(dy\wedge dz)&=dt\wedge dx\,.\end{aligned}}}
These are summarized in the index notation as ⋆ ( d x μ ) = η μ λ ε λ ν ρ σ 1 3 ! d x ν ∧ d x ρ ∧ d x σ , ⋆ ( d x μ ∧ d x ν ) = η μ κ η ν λ ε κ λ ρ σ 1 2 ! d x ρ ∧ d x σ . {\displaystyle {\begin{aligned}{\star }(dx^{\mu })&=\eta ^{\mu \lambda }\varepsilon _{\lambda \nu \rho \sigma }{\frac {1}{3!}}dx^{\nu }\wedge dx^{\rho }\wedge dx^{\sigma }\,,\\{\star }(dx^{\mu }\wedge dx^{\nu })&=\eta ^{\mu \kappa }\eta ^{\nu \lambda }\varepsilon _{\kappa \lambda \rho \sigma }{\frac {1}{2!}}dx^{\rho }\wedge dx^{\sigma }\,.\end{aligned}}}
The Hodge dual of three- and four-forms can be easily deduced from the fact that, in the Lorentzian signature, ⋆ 2 = 1 {\displaystyle {\star }^{2}=1} for odd-rank forms and ⋆ 2 = − 1 {\displaystyle {\star }^{2}=-1} for even-rank forms. An easy rule to remember for these Hodge operations is that, given a form α {\displaystyle \alpha } , its Hodge dual ⋆ α {\displaystyle {\star }\alpha } may be obtained by writing the components not involved in α {\displaystyle \alpha } in an order such that α ∧ ( ⋆ α ) = d t ∧ d x ∧ d y ∧ d z {\displaystyle \alpha \wedge ({\star }\alpha )=dt\wedge dx\wedge dy\wedge dz} . An extra minus sign will enter only if α {\displaystyle \alpha } contains d t {\displaystyle dt} . (For (+ − − −) , one puts in a minus sign only if α {\displaystyle \alpha } involves an odd number of the space-associated forms d x {\displaystyle dx} , d y {\displaystyle dy} and d z {\displaystyle dz} .)
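The sign tables above can be reproduced mechanically from the orthonormal-basis formula for the Hodge dual. A small self-contained Python sketch (illustrative; indices 0, 1, 2, 3 stand for t , x , y , z , and the complement is returned in increasing index order):

def perm_sign(p):
    # (-1)^(number of inversions) of the index tuple p.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def mink_star(I, signs=(-1, 1, 1, 1)):
    # Hodge dual of the basis form dx^I in signature (- + + +):
    # returns (coefficient, complementary index tuple).
    n = len(signs)
    I_bar = tuple(i for i in range(n) if i not in I)
    t = 1
    for i in I:
        t *= signs[i]
    return perm_sign(I + I_bar) * t, I_bar

# Matches the tables above: *dt = -dx^dy^dz, *dx = -dt^dy^dz,
# *(dt^dx) = -dy^dz, *(dx^dy) = dt^dz.
assert mink_star((0,)) == (-1, (1, 2, 3))
assert mink_star((1,)) == (-1, (0, 2, 3))
assert mink_star((0, 1)) == (-1, (2, 3))
assert mink_star((1, 2)) == (1, (0, 3))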
Note that the combinations ( d x μ ∧ d x ν ) ± := 1 2 ( d x μ ∧ d x ν ∓ i ⋆ ( d x μ ∧ d x ν ) ) {\displaystyle (dx^{\mu }\wedge dx^{\nu })^{\pm }:={\frac {1}{2}}{\big (}dx^{\mu }\wedge dx^{\nu }\mp i{\star }(dx^{\mu }\wedge dx^{\nu }){\big )}} take ± i {\displaystyle \pm i} as the eigenvalue for Hodge star operator, i.e., ⋆ ( d x μ ∧ d x ν ) ± = ± i ( d x μ ∧ d x ν ) ± , {\displaystyle {\star }(dx^{\mu }\wedge dx^{\nu })^{\pm }=\pm i(dx^{\mu }\wedge dx^{\nu })^{\pm },} and hence deserve the name self-dual and anti-self-dual two-forms. Understanding the geometry, or kinematics, of Minkowski spacetime in self-dual and anti-self-dual sectors turns out to be insightful in both mathematical and physical perspectives, making contacts to the use of the two-spinor language in modern physics such as spinor-helicity formalism or twistor theory .
The Hodge star is conformally invariant on n -forms on a 2 n -dimensional vector space V {\displaystyle V} , i.e. if g {\displaystyle g} is a metric on V {\displaystyle V} and λ > 0 {\displaystyle \lambda >0} , then the induced Hodge stars ⋆ g , ⋆ λ g : Λ n V → Λ n V {\displaystyle {\star }_{g},{\star }_{\lambda g}:\Lambda ^{n}V\to \Lambda ^{n}V} are the same.
The combination of the ⋆ {\displaystyle {\star }} operator and the exterior derivative d generates the classical operators grad , curl , and div on vector fields in three-dimensional Euclidean space. This works out as follows: d takes a 0-form (a function) to a 1-form, a 1-form to a 2-form, and a 2-form to a 3-form (and takes a 3-form to zero). For a 0-form f = f ( x , y , z ) {\displaystyle f=f(x,y,z)} , the first case written out in components gives: d f = ∂ f ∂ x d x + ∂ f ∂ y d y + ∂ f ∂ z d z . {\displaystyle df={\frac {\partial f}{\partial x}}\,dx+{\frac {\partial f}{\partial y}}\,dy+{\frac {\partial f}{\partial z}}\,dz.}
The scalar product identifies 1-forms with vector fields as d x ↦ ( 1 , 0 , 0 ) {\displaystyle dx\mapsto (1,0,0)} , etc., so that d f {\displaystyle df} becomes grad f = ( ∂ f ∂ x , ∂ f ∂ y , ∂ f ∂ z ) {\textstyle \operatorname {grad} f=\left({\frac {\partial f}{\partial x}},{\frac {\partial f}{\partial y}},{\frac {\partial f}{\partial z}}\right)} .
In the second case, a vector field F = ( A , B , C ) {\displaystyle \mathbf {F} =(A,B,C)} corresponds to the 1-form φ = A d x + B d y + C d z {\displaystyle \varphi =A\,dx+B\,dy+C\,dz} , which has exterior derivative: d φ = ( ∂ C ∂ y − ∂ B ∂ z ) d y ∧ d z + ( ∂ C ∂ x − ∂ A ∂ z ) d x ∧ d z + ( ∂ B ∂ x − ∂ A ∂ y ) d x ∧ d y . {\displaystyle d\varphi =\left({\frac {\partial C}{\partial y}}-{\frac {\partial B}{\partial z}}\right)dy\wedge dz+\left({\frac {\partial C}{\partial x}}-{\frac {\partial A}{\partial z}}\right)dx\wedge dz+\left({\partial B \over \partial x}-{\frac {\partial A}{\partial y}}\right)dx\wedge dy.}
Applying the Hodge star gives the 1-form: ⋆ d φ = ( ∂ C ∂ y − ∂ B ∂ z ) d x − ( ∂ C ∂ x − ∂ A ∂ z ) d y + ( ∂ B ∂ x − ∂ A ∂ y ) d z , {\displaystyle {\star }d\varphi =\left({\partial C \over \partial y}-{\partial B \over \partial z}\right)\,dx-\left({\partial C \over \partial x}-{\partial A \over \partial z}\right)\,dy+\left({\partial B \over \partial x}-{\partial A \over \partial y}\right)\,dz,} which becomes the vector field curl F = ( ∂ C ∂ y − ∂ B ∂ z , − ∂ C ∂ x + ∂ A ∂ z , ∂ B ∂ x − ∂ A ∂ y ) {\textstyle \operatorname {curl} \mathbf {F} =\left({\frac {\partial C}{\partial y}}-{\frac {\partial B}{\partial z}},\,-{\frac {\partial C}{\partial x}}+{\frac {\partial A}{\partial z}},\,{\frac {\partial B}{\partial x}}-{\frac {\partial A}{\partial y}}\right)} .
In the third case, F = ( A , B , C ) {\displaystyle \mathbf {F} =(A,B,C)} again corresponds to φ = A d x + B d y + C d z {\displaystyle \varphi =A\,dx+B\,dy+C\,dz} . Applying Hodge star, exterior derivative, and Hodge star again: ⋆ φ = A d y ∧ d z − B d x ∧ d z + C d x ∧ d y , d ⋆ φ = ( ∂ A ∂ x + ∂ B ∂ y + ∂ C ∂ z ) d x ∧ d y ∧ d z , ⋆ d ⋆ φ = ∂ A ∂ x + ∂ B ∂ y + ∂ C ∂ z = div F . {\displaystyle {\begin{aligned}{\star }\varphi &=A\,dy\wedge dz-B\,dx\wedge dz+C\,dx\wedge dy,\\d{\star \varphi }&=\left({\frac {\partial A}{\partial x}}+{\frac {\partial B}{\partial y}}+{\frac {\partial C}{\partial z}}\right)dx\wedge dy\wedge dz,\\{\star }d{\star }\varphi &={\frac {\partial A}{\partial x}}+{\frac {\partial B}{\partial y}}+{\frac {\partial C}{\partial z}}=\operatorname {div} \mathbf {F} .\end{aligned}}}
One advantage of this expression is that the identity d 2 = 0 , which is true in all cases, has as special cases two other identities: (1) curl grad f = 0 , and (2) div curl F = 0 . In particular, Maxwell's equations take on a particularly simple and elegant form, when expressed in terms of the exterior derivative and the Hodge star. The expression ⋆ d ⋆ {\displaystyle {\star }d{\star }} (multiplied by an appropriate power of −1) is called the codifferential ; it is defined in full generality, for any dimension, further in the article below.
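Both special cases can be verified symbolically. A minimal sympy sketch (illustrative), using the Euclidean identifications described above:

import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.Function('f')(x, y, z)
F = sp.Matrix([sp.Function(name)(x, y, z) for name in ('A', 'B', 'C')])

def grad(g):
    return sp.Matrix([g.diff(x), g.diff(y), g.diff(z)])

def curl(V):
    return sp.Matrix([V[2].diff(y) - V[1].diff(z),
                      V[0].diff(z) - V[2].diff(x),
                      V[1].diff(x) - V[0].diff(y)])

def div(V):
    return V[0].diff(x) + V[1].diff(y) + V[2].diff(z)

# The identity d^2 = 0 in its two vector-calculus guises:
assert curl(grad(f)).applyfunc(sp.simplify) == sp.zeros(3, 1)
assert sp.simplify(div(curl(F))) == 0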
One can also obtain the Laplacian Δ f = div grad f in terms of the above operations: Δ f = ⋆ d ⋆ d f = ∂ 2 f ∂ x 2 + ∂ 2 f ∂ y 2 + ∂ 2 f ∂ z 2 . {\displaystyle \Delta f={\star }d{\star }df={\frac {\partial ^{2}f}{\partial x^{2}}}+{\frac {\partial ^{2}f}{\partial y^{2}}}+{\frac {\partial ^{2}f}{\partial z^{2}}}.}
The Laplacian can also be seen as a special case of the more general Laplace–de Rham operator Δ = d δ + δ d {\displaystyle \Delta =d\delta +\delta d} , where in three dimensions δ = ( − 1 ) k ⋆ d ⋆ {\displaystyle \delta =(-1)^{k}{\star }d{\star }} is the codifferential for k {\displaystyle k} -forms. Any function f {\displaystyle f} is a 0-form with δ f = 0 {\displaystyle \delta f=0} , so this reduces to the ordinary Laplacian. For the 1-form φ {\displaystyle \varphi } above, the codifferential is δ = − ⋆ d ⋆ {\displaystyle \delta =-{\star }d{\star }} , and after some straightforward calculations one obtains the Laplacian acting on φ {\displaystyle \varphi } .
Applying the Hodge star twice leaves a k -vector unchanged up to a sign: for η ∈ ⋀ k V {\displaystyle \eta \in {\textstyle \bigwedge }^{k}V} in an n -dimensional space V , one has
⋆ ⋆ η = ( − 1 ) k ( n − k ) s η , {\displaystyle {\star }{\star }\eta =(-1)^{k(n-k)}s\,\eta ,}
where s is the parity of the signature of the scalar product on V , that is, the sign of the determinant of the matrix of the scalar product with respect to any basis. For example, if n = 4 and the signature of the scalar product is either (+ − − −) or (− + + +) then s = −1 . For Riemannian manifolds (including Euclidean spaces), we always have s = 1 .
The above identity implies that the inverse of ⋆ {\displaystyle {\star }} can be given as
⋆ − 1 = ( − 1 ) k ( n − k ) s ⋆ . {\displaystyle {\star }^{-1}=(-1)^{k(n-k)}s\,{\star }.}
If n is odd then k ( n − k ) is even for any k , whereas if n is even then k ( n − k ) has the parity of k . Therefore:
⋆ − 1 = { s ⋆ n  odd ( − 1 ) k s ⋆ n  even {\displaystyle {\star }^{-1}={\begin{cases}s\,{\star }&n{\text{ odd}}\\(-1)^{k}s\,{\star }&n{\text{ even}}\end{cases}}}
where k is the degree of the element operated on.
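These sign rules can be checked exhaustively on orthonormal basis elements. A self-contained Python sketch (illustrative) verifies ⋆ ⋆ = ( − 1 ) k ( n − k ) s {\displaystyle {\star }{\star }=(-1)^{k(n-k)}s} for Euclidean signatures in n = 3 , 4 {\displaystyle n=3,4} and for the Lorentzian signature (− + + +) :

from itertools import combinations

def perm_sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def star(I, signs):
    # Hodge dual of the orthonormal basis k-vector e_I: (coefficient, complement).
    n = len(signs)
    I_bar = tuple(i for i in range(n) if i not in I)
    t = 1
    for i in I:
        t *= signs[i]
    return perm_sign(I + I_bar) * t, I_bar

for signs in [(1, 1, 1), (1, 1, 1, 1), (-1, 1, 1, 1)]:
    n = len(signs)
    s = 1 if signs.count(-1) % 2 == 0 else -1  # parity of the signature
    for k in range(n + 1):
        for I in combinations(range(n), k):
            c1, J = star(I, signs)
            c2, K = star(J, signs)
            assert K == I and c1 * c2 == (-1) ** (k * (n - k)) * s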
For an n -dimensional oriented pseudo-Riemannian manifold M , we apply the construction above to each cotangent space T p ∗ M {\displaystyle {\text{T}}_{p}^{*}M} and its exterior powers ⋀ k T p ∗ M {\textstyle \bigwedge ^{k}{\text{T}}_{p}^{*}M} , and hence to the differential k -forms ζ ∈ Ω k ( M ) = Γ ( ⋀ k T ∗ M ) {\textstyle \zeta \in \Omega ^{k}(M)=\Gamma \left(\bigwedge ^{k}{\text{T}}^{*}\!M\right)} , the global sections of the bundle ⋀ k T ∗ M → M {\textstyle \bigwedge ^{k}\mathrm {T} ^{*}\!M\to M} . The Riemannian metric induces a scalar product on ⋀ k T p ∗ M {\textstyle \bigwedge ^{k}{\text{T}}_{p}^{*}M} at each point p ∈ M {\displaystyle p\in M} . We define the Hodge dual ⋆ ζ {\displaystyle {\star }\zeta } of a k -form ζ {\displaystyle \zeta } as the unique ( n – k )-form satisfying η ∧ ⋆ ζ = ⟨ η , ζ ⟩ ω {\displaystyle \eta \wedge {\star }\zeta \ =\ \langle \eta ,\zeta \rangle \,\omega } for every k -form η {\displaystyle \eta } , where ⟨ η , ζ ⟩ {\displaystyle \langle \eta ,\zeta \rangle } is a real-valued function on M {\displaystyle M} , and the volume form ω {\displaystyle \omega } is induced by the pseudo-Riemannian metric. Integrating this equation over M {\displaystyle M} , the right side becomes the L 2 {\displaystyle L^{2}} ( square-integrable ) scalar product on k -forms , and we obtain: ∫ M η ∧ ⋆ ζ = ∫ M ⟨ η , ζ ⟩ ω . {\displaystyle \int _{M}\eta \wedge {\star }\zeta \ =\ \int _{M}\langle \eta ,\zeta \rangle \ \omega .}
More generally, if M {\displaystyle M} is non-orientable, one can define the Hodge star of a k -form as an ( n – k )- pseudo differential form ; that is, a differential form with values in the canonical line bundle .
We compute in terms of tensor index notation with respect to a (not necessarily orthonormal) basis { ∂ ∂ x 1 , … , ∂ ∂ x n } {\textstyle \left\{{\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right\}} in a tangent space V = T p M {\displaystyle V=T_{p}M} and its dual basis { d x 1 , … , d x n } {\displaystyle \{dx_{1},\ldots ,dx_{n}\}} in V ∗ = T p ∗ M {\displaystyle V^{*}=T_{p}^{*}M} , having the metric matrix ( g i j ) = ( ⟨ ∂ ∂ x i , ∂ ∂ x j ⟩ ) {\textstyle (g_{ij})=\left(\left\langle {\frac {\partial }{\partial x_{i}}},{\frac {\partial }{\partial x_{j}}}\right\rangle \right)} and its inverse matrix ( g i j ) = ( ⟨ d x i , d x j ⟩ ) {\displaystyle (g^{ij})=(\langle dx^{i},dx^{j}\rangle )} . The Hodge dual of a decomposable k -form is: ⋆ ( d x i 1 ∧ ⋯ ∧ d x i k ) = | det [ g i j ] | ( n − k ) ! g i 1 j 1 ⋯ g i k j k ε j 1 … j n d x j k + 1 ∧ ⋯ ∧ d x j n . {\displaystyle {\star }\left(dx^{i_{1}}\wedge \dots \wedge dx^{i_{k}}\right)\ =\ {\frac {\sqrt {\left|\det[g_{ij}]\right|}}{(n-k)!}}g^{i_{1}j_{1}}\cdots g^{i_{k}j_{k}}\varepsilon _{j_{1}\dots j_{n}}dx^{j_{k+1}}\wedge \dots \wedge dx^{j_{n}}.}
Here ε j 1 … j n {\displaystyle \varepsilon _{j_{1}\dots j_{n}}} is the Levi-Civita symbol with ε 1 … n = 1 {\displaystyle \varepsilon _{1\dots n}=1} , and we implicitly take the sum over all values of the repeated indices j 1 , … , j n {\displaystyle j_{1},\ldots ,j_{n}} . The factorial ( n − k ) ! {\displaystyle (n-k)!} accounts for double counting, and is not present if the summation indices are restricted so that j k + 1 < ⋯ < j n {\displaystyle j_{k+1}<\dots <j_{n}} . The absolute value of the determinant is necessary since it may be negative, as for tangent spaces to Lorentzian manifolds .
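As a worked illustration of this formula (an added example, not from the source): in polar coordinates ( r , θ ) on the Euclidean plane, the metric is g = diag ( 1 , r 2 ) {\displaystyle g=\operatorname {diag} (1,r^{2})} , so | det [ g i j ] | = r {\displaystyle {\sqrt {\left|\det[g_{ij}]\right|}}=r} , g r r = 1 {\displaystyle g^{rr}=1} and g θ θ = 1 / r 2 {\displaystyle g^{\theta \theta }=1/r^{2}} . The formula then gives ⋆ d r = r d θ {\displaystyle {\star }dr=r\,d\theta } , ⋆ d θ = − ( 1 / r ) d r {\displaystyle {\star }d\theta =-(1/r)\,dr} , and ⋆ 1 = r d r ∧ d θ {\displaystyle {\star }1=r\,dr\wedge d\theta } , the familiar area form.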
An arbitrary differential form can be written as follows: α = 1 k ! α i 1 , … , i k d x i 1 ∧ ⋯ ∧ d x i k = ∑ i 1 < ⋯ < i k α i 1 , … , i k d x i 1 ∧ ⋯ ∧ d x i k . {\displaystyle \alpha \ =\ {\frac {1}{k!}}\alpha _{i_{1},\dots ,i_{k}}dx^{i_{1}}\wedge \dots \wedge dx^{i_{k}}\ =\ \sum _{i_{1}<\dots <i_{k}}\alpha _{i_{1},\dots ,i_{k}}dx^{i_{1}}\wedge \dots \wedge dx^{i_{k}}.}
The factorial k ! {\displaystyle k!} is again included to account for double counting when we allow indices in arbitrary order, rather than only increasing ones. We would like to define the dual of the component α i 1 , … , i k {\displaystyle \alpha _{i_{1},\dots ,i_{k}}} so that the Hodge dual of the form is given by ⋆ α = 1 ( n − k ) ! ( ⋆ α ) i k + 1 , … , i n d x i k + 1 ∧ ⋯ ∧ d x i n . {\displaystyle {\star }\alpha ={\frac {1}{(n-k)!}}({\star }\alpha )_{i_{k+1},\dots ,i_{n}}dx^{i_{k+1}}\wedge \dots \wedge dx^{i_{n}}.}
Using the above expression for the Hodge dual of d x i 1 ∧ ⋯ ∧ d x i k {\displaystyle dx^{i_{1}}\wedge \dots \wedge dx^{i_{k}}} , we find: [ 3 ] ( ⋆ α ) j k + 1 , … , j n = | det [ g a b ] | k ! α i 1 , … , i k g i 1 j 1 ⋯ g i k j k ε j 1 , … , j n . {\displaystyle ({\star }\alpha )_{j_{k+1},\dots ,j_{n}}={\frac {\sqrt {\left|\det[g_{ab}]\right|}}{k!}}\alpha _{i_{1},\dots ,i_{k}}\,g^{i_{1}j_{1}}\cdots g^{i_{k}j_{k}}\,\varepsilon _{j_{1},\dots ,j_{n}}\,.}
Although one can apply this expression to any tensor α {\displaystyle \alpha } , the result is antisymmetric, since contraction with the completely anti-symmetric Levi-Civita symbol cancels all but the totally antisymmetric part of the tensor. It is thus equivalent to antisymmetrization followed by applying the Hodge star.
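The component formula can be implemented directly. A numpy sketch (illustrative; hodge_components and levi_civita are ad-hoc helper names, not a library API):

import itertools
import math
import numpy as np

def perm_sign(p):
    # (-1)^(number of inversions) of the index tuple p.
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def levi_civita(n):
    # Dense Levi-Civita symbol with eps[0, 1, ..., n-1] = 1.
    eps = np.zeros((n,) * n)
    for p in itertools.permutations(range(n)):
        eps[p] = perm_sign(p)
    return eps

def hodge_components(alpha, g):
    # Components of the Hodge dual of a k-form, given as a fully antisymmetric
    # array with k axes of size n, following the component formula above.
    n, k = g.shape[0], alpha.ndim
    g_inv = np.linalg.inv(g)
    raised = alpha
    for _ in range(k):  # raise each index with g^{ij}
        raised = np.tensordot(raised, g_inv, axes=([0], [0]))
    out = np.tensordot(raised, levi_civita(n),
                       axes=(list(range(k)), list(range(k))))
    return np.sqrt(abs(np.linalg.det(g))) / math.factorial(k) * out

# A dx + B dy + C dz in Euclidean R^3 maps to A dy^dz + B dz^dx + C dx^dy:
beta = hodge_components(np.array([1.0, 2.0, 3.0]), np.eye(3))
assert np.isclose(beta[1, 2], 1.0)  # coefficient of dy^dz
assert np.isclose(beta[2, 0], 2.0)  # coefficient of dz^dx
assert np.isclose(beta[0, 1], 3.0)  # coefficient of dx^dy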
The unit volume form ω = ⋆ 1 ∈ ⋀ n V ∗ {\textstyle \omega ={\star }1\in \bigwedge ^{n}V^{*}} is given by: ω = | det [ g i j ] | d x 1 ∧ ⋯ ∧ d x n . {\displaystyle \omega ={\sqrt {\left|\det[g_{ij}]\right|}}\;dx^{1}\wedge \cdots \wedge dx^{n}.}
The most important application of the Hodge star on manifolds is to define the codifferential δ {\displaystyle \delta } on k {\displaystyle k} -forms. Let δ = ( − 1 ) n ( k + 1 ) + 1 s ⋆ d ⋆ = ( − 1 ) k ⋆ − 1 d ⋆ {\displaystyle \delta =(-1)^{n(k+1)+1}s\ {\star }d{\star }=(-1)^{k}\,{\star }^{-1}d{\star }} where d {\displaystyle d} is the exterior derivative or differential, and s = 1 {\displaystyle s=1} for Riemannian manifolds. Then d : Ω k ( M ) → Ω k + 1 ( M ) {\displaystyle d:\Omega ^{k}(M)\to \Omega ^{k+1}(M)} while δ : Ω k ( M ) → Ω k − 1 ( M ) . {\displaystyle \delta :\Omega ^{k}(M)\to \Omega ^{k-1}(M).}
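For example (a routine special case, spelled out here for concreteness): in Euclidean R 3 {\displaystyle \mathbb {R} ^{3}} one has s = 1 {\displaystyle s=1} , and for a 1-form ( n = 3 {\displaystyle n=3} , k = 1 {\displaystyle k=1} ) the exponent n ( k + 1 ) + 1 = 7 {\displaystyle n(k+1)+1=7} is odd, so δ = − ⋆ d ⋆ {\displaystyle \delta =-{\star }d{\star }} ; applied to φ = A d x + B d y + C d z {\displaystyle \varphi =A\,dx+B\,dy+C\,dz} this gives δ φ = − div F {\displaystyle \delta \varphi =-\operatorname {div} \mathbf {F} } , consistent with the computation of div {\displaystyle \operatorname {div} } above.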
The codifferential is not an antiderivation on the exterior algebra, in contrast to the exterior derivative.
The codifferential is the adjoint of the exterior derivative with respect to the square-integrable scalar product: ⟨ ⟨ η , δ ζ ⟩ ⟩ = ⟨ ⟨ d η , ζ ⟩ ⟩ , {\displaystyle \langle \!\langle \eta ,\delta \zeta \rangle \!\rangle \ =\ \langle \!\langle d\eta ,\zeta \rangle \!\rangle ,} where ζ {\displaystyle \zeta } is a k {\displaystyle k} -form and η {\displaystyle \eta } a ( k − 1 ) {\displaystyle (k\!-\!1)} -form. This property is useful as it can be used to define the codifferential even when the manifold is non-orientable (and the Hodge star operator not defined). The identity can be proved from Stokes' theorem for smooth forms: 0 = ∫ M d ( η ∧ ⋆ ζ ) = ∫ M ( d η ∧ ⋆ ζ + ( − 1 ) k − 1 η ∧ ⋆ ⋆ − 1 d ⋆ ζ ) = ⟨ ⟨ d η , ζ ⟩ ⟩ − ⟨ ⟨ η , δ ζ ⟩ ⟩ , {\displaystyle 0\ =\ \int _{M}d(\eta \wedge {\star }\zeta )\ =\ \int _{M}\left(d\eta \wedge {\star }\zeta +(-1)^{k-1}\eta \wedge {\star }\,{\star }^{-1}d\,{\star }\zeta \right)\ =\ \langle \!\langle d\eta ,\zeta \rangle \!\rangle -\langle \!\langle \eta ,\delta \zeta \rangle \!\rangle ,} provided M {\displaystyle M} has empty boundary, or η {\displaystyle \eta } or ⋆ ζ {\displaystyle {\star }\zeta } has zero boundary values. (The proper definition of the above requires specifying a topological vector space that is closed and complete on the space of smooth forms. The Sobolev space is conventionally used; it allows the convergent sequence of forms ζ i → ζ {\displaystyle \zeta _{i}\to \zeta } (as i → ∞ {\displaystyle i\to \infty } ) to be interchanged with the combined differential and integral operations, so that ⟨ ⟨ η , δ ζ i ⟩ ⟩ → ⟨ ⟨ η , δ ζ ⟩ ⟩ {\displaystyle \langle \!\langle \eta ,\delta \zeta _{i}\rangle \!\rangle \to \langle \!\langle \eta ,\delta \zeta \rangle \!\rangle } and likewise for sequences converging to η {\displaystyle \eta } .)
Since the differential satisfies d 2 = 0 {\displaystyle d^{2}=0} , the codifferential has the corresponding property δ 2 = ( − 1 ) n s 2 ⋆ d ⋆ ⋆ d ⋆ = ( − 1 ) n k + k + 1 s 3 ⋆ d 2 ⋆ = 0. {\displaystyle \delta ^{2}=(-1)^{n}s^{2}{\star }d{\star }{\star }d{\star }=(-1)^{nk+k+1}s^{3}{\star }d^{2}{\star }=0.}
The Laplace–de Rham operator is given by Δ = ( δ + d ) 2 = δ d + d δ {\displaystyle \Delta =(\delta +d)^{2}=\delta d+d\delta } and lies at the heart of Hodge theory . It is symmetric: ⟨ ⟨ Δ ζ , η ⟩ ⟩ = ⟨ ⟨ ζ , Δ η ⟩ ⟩ {\displaystyle \langle \!\langle \Delta \zeta ,\eta \rangle \!\rangle =\langle \!\langle \zeta ,\Delta \eta \rangle \!\rangle } and non-negative: ⟨ ⟨ Δ η , η ⟩ ⟩ ≥ 0. {\displaystyle \langle \!\langle \Delta \eta ,\eta \rangle \!\rangle \geq 0.}
The Hodge star sends harmonic forms to harmonic forms. As a consequence of Hodge theory , the de Rham cohomology is naturally isomorphic to the space of harmonic k -forms, and so the Hodge star induces an isomorphism of cohomology groups ⋆ : H Δ k ( M ) → H Δ n − k ( M ) , {\displaystyle {\star }:H_{\Delta }^{k}(M)\to H_{\Delta }^{n-k}(M),} which in turn gives canonical identifications via Poincaré duality of H k ( M ) with its dual space .
In coordinates, with notation as above, the codifferential of the form α {\displaystyle \alpha } may be written as δ α = − 1 k ! g m l ( ∂ ∂ x l α m , i 1 , … , i k − 1 − Γ m l j α j , i 1 , … , i k − 1 ) d x i 1 ∧ ⋯ ∧ d x i k − 1 , {\displaystyle \delta \alpha =\ -{\frac {1}{k!}}g^{ml}\left({\frac {\partial }{\partial x_{l}}}\alpha _{m,i_{1},\dots ,i_{k-1}}-\Gamma _{ml}^{j}\alpha _{j,i_{1},\dots ,i_{k-1}}\right)dx^{i_{1}}\wedge \dots \wedge dx^{i_{k-1}},} where here Γ m l j {\displaystyle \Gamma _{ml}^{j}} denotes the Christoffel symbols of { ∂ ∂ x 1 , … , ∂ ∂ x n } {\textstyle \left\{{\frac {\partial }{\partial x_{1}}},\ldots ,{\frac {\partial }{\partial x_{n}}}\right\}} .
In analogy to the Poincaré lemma for the exterior derivative , one can define its version for the codifferential, which reads: [ 4 ]
If δ ω = 0 {\displaystyle \delta \omega =0} for ω ∈ Λ k ( U ) {\displaystyle \omega \in \Lambda ^{k}(U)} , where U {\displaystyle U} is a star-shaped region, then there is α ∈ Λ k + 1 ( U ) {\displaystyle \alpha \in \Lambda ^{k+1}(U)} such that ω = δ α {\displaystyle \omega =\delta \alpha } .
A practical way of finding α {\displaystyle \alpha } is to use the cohomotopy operator h {\displaystyle h} , which is a local inverse of δ {\displaystyle \delta } . One first defines the homotopy operator [ 4 ]
H β = ∫ 0 1 K ⌟ β | F ( t , x ) t k − 1 d t , {\displaystyle H\beta =\int _{0}^{1}{\mathcal {K}}\lrcorner \beta |_{F(t,x)}\,t^{k-1}\,dt,}
where F ( t , x ) = x 0 + t ( x − x 0 ) {\displaystyle F(t,x)=x_{0}+t(x-x_{0})} is the linear homotopy between its center x 0 ∈ U {\displaystyle x_{0}\in U} and a point x ∈ U {\displaystyle x\in U} , and the (Euler) vector K = ∑ i = 1 n ( x − x 0 ) i ∂ x i {\displaystyle {\mathcal {K}}=\sum _{i=1}^{n}(x-x_{0})^{i}\partial _{x^{i}}} for n = dim ( U ) {\displaystyle n=\dim(U)} is inserted into the form β ∈ Λ ∗ ( U ) {\displaystyle \beta \in \Lambda ^{*}(U)} . We can then define the cohomotopy operator as [ 4 ]
h := η ⋆ − 1 H ⋆ , {\displaystyle h:=\eta \,{\star }^{-1}H{\star },}
where η β = ( − 1 ) k β {\displaystyle \eta \beta =(-1)^{k}\beta } for β ∈ Λ k ( U ) {\displaystyle \beta \in \Lambda ^{k}(U)} .
The cohomotopy operator fulfills the (co)homotopy invariance formula [ 4 ]
δ h + h δ = I − S x 0 , {\displaystyle \delta h+h\delta =I-S_{x_{0}},}
where S x 0 = ⋆ − 1 s x 0 ∗ ⋆ {\displaystyle S_{x_{0}}={\star }^{-1}s_{x_{0}}^{*}{\star }} , and s x 0 ∗ {\displaystyle s_{x_{0}}^{*}} is the pullback along the constant map s x 0 : x → x 0 {\displaystyle s_{x_{0}}:x\rightarrow x_{0}} .
Therefore, if we want to solve the equation δ ω = 0 {\displaystyle \delta \omega =0} , then applying the cohomotopy invariance formula we get
ω = S x 0 ω + δ h ω , {\displaystyle \omega =S_{x_{0}}\omega +\delta h\omega ,} so that α = h ω {\displaystyle \alpha =h\omega } provides a solution of ω = δ α {\displaystyle \omega =\delta \alpha } whenever S x 0 ω {\displaystyle S_{x_{0}}\omega } vanishes.
The cohomotopy operator fulfills the following properties: [ 4 ] h 2 = 0 , δ h δ = δ , h δ h = h {\displaystyle h^{2}=0,\quad \delta h\delta =\delta ,\quad h\delta h=h} . These make it possible to define [ 4 ] anticoexact forms on U {\displaystyle U} by Y ( U ) = { ω ∈ Λ ( U ) | ω = h δ ω } {\displaystyle {\mathcal {Y}}(U)=\{\omega \in \Lambda (U)|\omega =h\delta \omega \}} , which together with coexact forms C ( U ) = { ω ∈ Λ ( U ) | ω = δ h ω } {\displaystyle {\mathcal {C}}(U)=\{\omega \in \Lambda (U)|\omega =\delta h\omega \}} form a direct sum decomposition [ 4 ]
Λ ( U ) = C ( U ) ⊕ Y ( U ) . {\displaystyle \Lambda (U)={\mathcal {C}}(U)\oplus {\mathcal {Y}}(U).}
This direct sum is another way of saying that the cohomotopy invariance formula is a decomposition of unity, and that the projection operators onto the summands fulfill the idempotence formulas: [ 4 ] ( h δ ) 2 = h δ , ( δ h ) 2 = δ h {\displaystyle (h\delta )^{2}=h\delta ,\quad (\delta h)^{2}=\delta h} .
These results are an extension of similar results for the exterior derivative. [ 5 ] | https://en.wikipedia.org/wiki/Hodge_star_operator |
In mathematics , Hodge–Arakelov theory of elliptic curves is an analogue of classical and p-adic Hodge theory for elliptic curves carried out in the framework of Arakelov theory . It was introduced by Mochizuki ( 1999 ). It bears the name of two mathematicians, Suren Arakelov and W. V. D. Hodge .
The main comparison in Mochizuki's theory remains unpublished as of 2019.
Mochizuki's main comparison theorem in Hodge–Arakelov theory states (roughly) that the space of polynomial functions of degree less than d on the universal extension of a smooth elliptic curve in characteristic 0 is naturally isomorphic (via restriction) to the d 2 -dimensional space of functions on the d - torsion points .
It is called a 'comparison theorem' as it is an analogue for Arakelov theory of comparison theorems in cohomology relating de Rham cohomology to singular cohomology of complex varieties or étale cohomology of p -adic varieties.
In Mochizuki ( 1999 ) and Mochizuki ( 2002a ), he pointed out that the arithmetic Kodaira–Spencer map and the Gauss–Manin connection may give some important hints for Vojta's conjecture , the ABC conjecture , and related problems; in 2012, he published his Inter-universal Teichmüller theory , in which he did not use Hodge–Arakelov theory but instead used the theory of frobenioids , anabelioids , and mono-anabelian geometry . | https://en.wikipedia.org/wiki/Hodge–Arakelov_theory |