An optical telegraph is a line of stations, typically towers, for the purpose of conveying textual information by means of visual signals (a form of optical communication). There are two main types of such systems: the semaphore telegraph, which uses pivoted indicator arms and conveys information according to the direction the indicators point, and the shutter telegraph, which uses panels that can be rotated to block or pass the light from the sky behind them to convey information.
The most widely used system was the Chappe telegraph, which was invented in France in 1792 by Claude Chappe. It was popular in the late eighteenth to early nineteenth centuries. Chappe used the term télégraphe to describe the mechanism he had invented – that is the origin of the English word "telegraph". Lines of relay towers with a semaphore rig at the top were built within line of sight of each other, at separations of 5–20 miles (8–32 km). Operators at each tower would watch the neighboring tower through a telescope, and when the semaphore arms began to move spelling out a message, they would pass the message on to the next tower.
This system was much faster than post riders for conveying a message over long distances, and also had cheaper long-term operating costs, once constructed. Half a century later, semaphore lines were replaced by the electrical telegraph, which was cheaper, faster, and more private. The line-of-sight distance between relay stations was limited by geography and weather, and prevented the optical telegraph from crossing wide expanses of water, unless a convenient island could be used for a relay station. A modern derivative of the semaphore system is flag semaphore, signalling with hand-held flags.
== Etymology and terminology ==
The word semaphore was coined in 1801 by the French inventor of the semaphore line itself, Claude Chappe. He composed it from the Greek elements σῆμα (sêma, "sign") and φορός (phorós, "carrying"), or φορά (phorá, "a carrying"), from φέρειν (phérein, "to bear"). Chappe also coined the word tachygraph, meaning "fast writer". However, the French Army preferred to call Chappe's semaphore system the telegraph, meaning "far writer", a word coined by French statesman André François Miot de Mélito.
The word semaphoric was first printed in English in 1808: "The newly constructed Semaphoric telegraphs (...) have been blown up", in a news report in the Naval Chronicle. The first use of the word semaphore in reference to English use was in 1816: "The improved Semaphore has been erected on the top of the Admiralty", referring to the installation of a simpler telegraph invented by Sir Home Popham. Semaphore telegraphs are also called "Chappe telegraphs" or "Napoleonic semaphores".
== Early designs ==
Optical telegraphy dates from ancient times, in the form of hydraulic telegraphs, torches (as used by ancient cultures since the discovery of fire) and smoke signals. Modern designs of semaphores developed via several paths, often simultaneously.
Possibly the earliest was by the British polymath Robert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to the Royal Society in a 1684 submission in which he outlined many practical details. The system (which was motivated by military concerns, following the Battle of Vienna in 1683) was never put into practice.
One of the first experiments in optical signalling was carried out by the Anglo-Irish landowner and inventor Sir Richard Lovell Edgeworth in 1767. He placed a bet with his friend, the horse-racing gambler Lord March, that he could transmit knowledge of the outcome of the race in just one hour. Using a network of signalling sections erected on high ground, the signal would be observed from one station to the next by means of a telescope. The signal itself consisted of a large pointer that could be placed into eight possible positions in 45-degree increments. A series of two such signals gave a total of 64 code elements, and a third signal took it up to 512. He returned to his idea in 1795, after hearing of Chappe's system.
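The arithmetic behind these figures follows directly from reading the eight-position pointers in sequence; a minimal illustration in Python:

```python
# Each pointer can rest in one of 8 positions (45-degree steps), so a
# sequence of n pointer readings distinguishes 8**n code elements.
for n in (1, 2, 3):
    print(f"{n} signal(s): {8 ** n} code elements")
# 1 signal(s): 8
# 2 signal(s): 64
# 3 signal(s): 512
```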
While Edgeworth was developing his design, William Playfair, a Scottish political economist traveling in Europe in 1794, surreptitiously obtained the design and alphabet of the French system from a fleeing royalist. Playfair, who had numerous connections to British officials, provided a model of the system to the Duke of York, commander of British forces, then based in Flanders, and, according to the Encyclopædia Britannica, "hence the alphabet and plan of the machine came to England."
== Prevalence ==
=== France ===
Credit for the first successful optical telegraph goes to the French engineer Claude Chappe and his brothers in 1792, who succeeded in covering France with a network of 556 stations stretching a total distance of 4,800 kilometres (3,000 mi). Le système Chappe was used for military and national communications until the 1850s.
==== Development in France ====
During 1790–1795, at the height of the French Revolution, France needed a swift and reliable military communications system to thwart the war efforts of its enemies. France was surrounded by the forces of Britain, the Netherlands, Prussia, Austria, and Spain, the cities of Marseille and Lyon were in revolt, and the British Fleet held Toulon. The only advantage France held was the lack of cooperation between the allied forces due to their inadequate lines of communication.
In mid-1790, the Chappe brothers set about devising a system of communication that would allow the central government to receive intelligence and to transmit orders in the shortest possible time. Chappe considered many possible methods including audio and smoke. He even considered using electricity, but could not find insulation for the conductors that would withstand the high-voltage electrostatic sources available at the time.
Chappe settled on an optical system and the first public demonstration occurred on 2 March 1791 between Brûlon and Parcé, a distance of 16 kilometres (9.9 mi). The system consisted of a modified pendulum clock at each end with dials marked with ten numerals. The hands of the clocks almost certainly moved much faster than a normal clock. The hands of both clocks were set in motion at the same time with a synchronisation signal. Further signals indicated the time at which the dial should be read. The numbers sent were then looked up in a codebook. In their preliminary experiments over a shorter distance, the Chappes had banged a pan for synchronisation. In the demonstration, they used black and white panels observed with a telescope. The message to be sent was chosen by town officials at Brûlon and sent by René Chappe to Claude Chappe at Parcé who had no pre-knowledge of the message. The message read "si vous réussissez, vous serez bientôt couverts de gloire" (If you succeed, you will soon bask in glory). It was only later that Chappe realised that he could dispense with the clocks and the synchronisation system itself could be used to pass messages.
The Chappes carried out experiments during the next two years, and on two occasions their apparatus at Place de l'Étoile, Paris was destroyed by mobs who thought they were communicating with royalist forces. Their cause was assisted by Ignace Chappe being elected to the Legislative Assembly. In the summer of 1792 Claude was appointed Ingénieur-Télégraphiste and charged with establishing a line of stations between Paris and Lille, a distance of 230 kilometres (about 143 miles). It was used to carry dispatches for the war between France and Austria. In 1794, it brought news of a French capture of Condé-sur-l'Escaut from the Austrians less than an hour after it occurred. The first symbol of a message to Lille would pass through 15 stations in only nine minutes. The speed of the line varied with the weather, but the line to Lille typically transferred 36 symbols, a complete message, in about 32 minutes. Another line of 50 stations was completed in 1798, covering 488 km between Paris and Strasbourg. From 1803 on, the French also used the 3-arm Depillon semaphore at coastal locations to provide warning of British incursions.
English military engineer William Congreve observed that at the Battle of Vervik of 1793 French commanders directed their forces by using the sails of a prominent local windmill as an improvised signal station. Two of the four sails of the mill had been removed to resemble the arm of the new telegraph.
==== Chappe system technical operation ====
The Chappe brothers determined by experiment that it was easier to see the angle of a rod than to see the presence or absence of a panel. Their semaphore was composed of two black movable wooden arms, connected by a cross bar; the positions of all three of these components together indicated an alphabetic letter. With counterweights (named forks) on the arms, the Chappe system was controlled by only two handles and was mechanically simple and reasonably robust. Each of the two 2-metre-long arms could display seven positions, and the 4.6-metre-long cross bar connecting the two arms could display four different angles, for a total of 196 symbols (7×7×4). Night operation with lamps on the arms was unsuccessful. To speed up transmission and to provide some semblance of security, a code book was developed for use with semaphore lines. The Chappes' corporation used a code that took 92 of the basic symbols two at a time to yield 8,464 coded words and phrases.
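A minimal sketch of the arithmetic, assuming the arm and cross-bar positions combine independently and that a codebook entry is selected by an ordered pair of symbols:

```python
# Symbol space: each 2 m arm shows 7 positions, the cross bar 4 angles.
symbols = 7 * 7 * 4
print(symbols)               # 196 distinct symbols

# The corporation's code took 92 of the basic symbols two at a time,
# so an ordered pair of signals selects one codebook entry.
codebook_entries = 92 * 92
print(codebook_entries)      # 8464 coded words and phrases
```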
The revised Chappe system of 1795 provided not only a set of codes but also an operational protocol intended to maximize line throughput. Symbols were transmitted in cycles of "2 steps and 3 movements."
Step 1, movement 1 (setup): The operator turned the indicator arms to align with the cross bar, forming a non-symbol, and then turned the cross bar into position for the next symbol.
Step 1, movement 2 (transmission): The operator positioned the indicator arms for the current symbol and waited for the downline station to copy it.
Step 2, movement 3 (completion): The operator turned the cross bar to a vertical or horizontal position, indicating the end of a cycle.
In this manner, each symbol could propagate down the line as quickly as operators could successfully copy it, with acknowledgement and flow control built into the protocol. A symbol sent from Paris took 2 minutes to reach Lille through 22 stations and 9 minutes to reach Lyon through 50 stations. A rate of 2–3 symbols per minute was typical, with the higher figure being prone to errors. This corresponds to only 0.4–0.6 wpm, but with messages limited to those contained in the code book, this could be dramatically increased. An additional benefit is that, if the code is kept secret, the content of transmitted messages can be concealed from both onlookers and system operators, even if they are aware that a message is being transmitted. This has remained an important feature of encrypted communications even as the technology for transmitting data has evolved.
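A back-of-the-envelope check of these figures, assuming each station added roughly the same relay delay and that "wpm" counts a symbol as one character of a standard five-character word:

```python
# Per-station relay delay implied by the end-to-end times quoted above.
for line, minutes, stations in [("Paris-Lille", 2, 22), ("Paris-Lyon", 9, 50)]:
    print(f"{line}: ~{minutes * 60 / stations:.1f} s per station")
# Paris-Lille: ~5.5 s per station
# Paris-Lyon: ~10.8 s per station

# 2-3 symbols per minute, at the usual 5 characters per "word":
for spm in (2, 3):
    print(f"{spm} symbols/min ~ {spm / 5:.1f} wpm")
```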
==== History in France ====
After Chappe's initial line (between Paris and Lille), the Paris to Strasbourg line, with 50 stations, followed soon after (1798). Napoleon Bonaparte made full use of the telegraph by obtaining speedy information on enemy movements. In 1801 he had Abraham Chappe build an extra-large station to transmit across the English Channel in preparation for an invasion of Britain. A pair of such stations were built on a test line over a comparable distance. The line to Calais was extended to Boulogne in anticipation and a new design of station was briefly in operation at Boulogne, but the invasion never happened. In 1812, Napoleon took up another design of Abraham Chappe for a mobile telegraph that could be taken with him on campaign. This was still in use in 1853 during the Crimean War.
The invention of the telegraph was followed by an enthusiasm concerning its potential to support direct democracy. For instance, based on Rousseau's argument that direct democracy was improbable in large constituencies, the French Intellectual Alexandre-Théophile Vandermonde commented:
Something has been said about the telegraph which appears perfectly right to me and gives the right measure of its importance. Such an invention might be enough to render democracy possible on its largest scale. Many respectable men, among them Jean-Jacques Rousseau, have thought that democracy was impossible within large constituencies.… The invention of the telegraph is a novelty that Rousseau did not expect. It enables long-distance communication at the same pace and with the same clarity as conversation in a living room. This solution may by itself address the objections to large [direct] democratic republics. It may even be done in the absence of representative constitutions.
The operational costs of the telegraph in the year 1799/1800 were 434,000 francs ($1.2 million in 2015 in silver costs). In December 1800, Napoleon cut the budget of the telegraph system by 150,000 francs ($400,000 in 2015) leading to the Paris-Lyons line being temporarily closed. Chappe sought commercial uses of the system to make up the deficit, including use by industry, the financial sector, and newspapers. Only one proposal was immediately approved—the transmission of results from the state-run lottery. No non-government uses were approved. The lottery had been abused for years by fraudsters who knew the results, selling tickets in provincial towns after the announcement in Paris, but before the news had reached those towns.
In 1819 Norwich Duff, a young British Naval officer, visiting Clermont-en-Argonne, walked up to the telegraph station there and engaged the signalman in conversation. Here is his note of the man's information:
The pay is twenty five sous per day and he [the signalman] is obliged to be there from day light till dark, at present from half past three till half past eight; there are only two of them and for every minute a signal is left without being answered they pay five sous: this is a part of the branch which communicates with Strasburg and a message arrives there from Paris in six minutes it is here in four.
The network was reserved for government use, but an early case of wire fraud occurred in 1834 when two bankers, François and Joseph Blanc, bribed the operators at a station near Tours on the line between Paris and Bordeaux to pass Paris stock exchange information to an accomplice in Bordeaux. By ordinary channels, the information took three days to travel the 300-mile distance, giving the schemers plenty of time to play the market. An accomplice at Paris would know whether the market was going up or down days before the news arrived in Bordeaux via the newspapers, after which the Bordeaux market was sure to follow. The message could not be inserted into the telegraph directly because it would have been detected. Instead, pre-arranged deliberate errors were introduced into existing messages, visible to an observer at Bordeaux. Tours was chosen because it was a division station, where messages were purged of errors by an inspector who was privy to the secret code, a code unknown to the ordinary operators. The scheme would not work if the errors were inserted prior to Tours. The operators were told whether the market was going up or down by the colour of packages (either white or grey paper wrapping) sent by mail coach, or, according to another anecdote, by whether the wife of the Tours operator received a package of socks (down) or gloves (up), thus avoiding any evidence of misdeed being put in writing. The scheme operated for two years until it was discovered in 1836.
The French optical system remained in use for many years after other countries had switched to the electrical telegraph. Partly, this was due to inertia; France had the most extensive optical system and hence the most difficult to replace. But there were also arguments put forward for the superiority of the optical system. One of these was that the optical system is not so vulnerable to saboteurs as an electrical system with many miles of unguarded wire. Samuel Morse failed to sell the electrical telegraph to the French government. Eventually the advantages of the electrical telegraph of improved privacy, and all-weather and nighttime operation won out. A decision was made in 1846 to replace the optical telegraph with the Foy–Breguet electrical telegraph after a successful trial on the Rouen line. This system had a display which mimicked the look of the Chappe telegraph indicators to make it familiar to telegraph operators. Jules Guyot issued a dire warning of the consequences of what he considered to be a serious mistake. It took almost a decade before the optical telegraph was completely decommissioned. One of the last messages sent over the French semaphore was the report of the fall of Sebastopol in 1855.
=== Sweden ===
Sweden was the second country in the world, after France, to introduce an optical telegraph network. Its network became the second most extensive after France. The central station of the network was at the Katarina Church in Stockholm. The system was faster than the French system, partly due to the Swedish control panel and partly to the ease of transcribing the octal code (the French system was recorded as pictograms). The system was used primarily for reporting the arrival of ships, but was also useful in wartime for observing enemy movements and attacks.
The last stationary semaphore link in regular service was in Sweden, connecting an island with a mainland telegraph line. It went out of service in 1880.
==== Development in Sweden ====
Inspired by news of the Chappe telegraph, the Swedish inventor Abraham Niclas Edelcrantz experimented with the optical telegraph in Sweden. He constructed a three-station experimental line in 1794 running from the royal castle in Stockholm, via Traneberg, to the grounds of Drottningholm Castle, a distance of 12 kilometres (7.5 mi). The first demonstration was on 1 November, when Edelcrantz sent a poem dedicated to the king, Gustav IV Adolf, on his sixteenth birthday. On 7 November the king brought Edelcrantz into his Council of Advisers with a view to building a telegraph throughout Sweden, Denmark, and Finland.
==== Edelcrantz system technical operation ====
After some initial experiments with Chappe-style indicator arms, Edelcrantz settled on a design with ten iron shutters. Nine of these represented a 3-digit octal number and the tenth, when closed, meant the code number should be preceded by "A". This gave 1,024 codepoints which were decoded to letters, words or phrases via a codebook. The telegraph had a sophisticated control panel which allowed the next symbol to be prepared while waiting for the previous symbol to be repeated on the next station down the line. The control panel was connected by strings to the shutters. When ready to transmit, all the shutters were set at the same time with the press of a footpedal.
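A minimal sketch of the shutter arithmetic, assuming the nine shutters are grouped three per octal digit with the tenth as the "A" prefix; the grouping and bit order here are illustrative, not Edelcrantz's physical layout:

```python
def shutters_for(code: str, a_prefix: bool = False) -> list[int]:
    """Map a 3-digit octal code, e.g. "636", to ten shutter states
    (1 = closed). Three shutters encode each octal digit; the tenth,
    when closed, means the code number is preceded by "A"."""
    states = []
    for digit in code:
        d = int(digit, 8)                         # 0-7, i.e. three bits
        states += [(d >> b) & 1 for b in (2, 1, 0)]
    return states + [1 if a_prefix else 0]

day = shutters_for("636", a_prefix=True)   # one of 2**10 = 1024 codepoints
night = [1 - s for s in day]               # night codes were the complements
print(day)
print(night)
```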
The shutters were painted matte black to avoid reflection from sunlight and the frame and arms supporting the shutters were painted white or red for best contrast. Around 1809 Edelcrantz introduced an updated design. The frame around the shutters was dispensed with leaving a simpler, more visible, structure of just the arms with the indicator panels on the end of them. The "A" shutter was reduced to the same size as the other shutters and offset to one side to indicate which side was the most significant digit (whether the codepoint is read left-to-right or right-to-left is different for the two adjacent stations depending on which side they are on). This was previously indicated with a stationary indicator fixed to the side of the frame, but without a frame this was no longer possible.
The distance that a station could transmit depended on the size of the shutters and the power of the telescope used to observe them. The smallest object visible to the human eye is one that subtends an angle of 40 seconds of arc, but Edelcrantz used a figure of 4 minutes of arc to account for atmospheric disturbances and imperfections of the telescope. On that basis, and with a 32× telescope, Edelcrantz specified shutter sizes ranging from 9 inches (22 cm) for a distance of half a Swedish mile (5.3 km) to 54 inches (137 cm) for 3 Swedish miles (32 km). These figures were for the original design with square shutters. The open design of 1809 had long oblong shutters, which Edelcrantz thought were more visible. Distances much further than these would require impractically high towers to overcome the curvature of the Earth, as well as larger shutters. Edelcrantz kept the distance between stations under 2 Swedish miles (21 km) except where large bodies of water made it unavoidable.
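These figures are mutually consistent: through a 32× telescope, a shutter need subtend only 4/32 arcminutes (7.5 arcseconds) to the naked eye, and the specified sizes include a margin above that minimum. A quick check, using the small-angle approximation:

```python
import math

# The shutter must appear at least 4 arcminutes wide through a 32x
# telescope, i.e. 4/32 arcminutes = 7.5 arcseconds to the naked eye.
angle = math.radians(4 / 60) / 32                 # in radians

for distance_m, specified_cm in [(5_300, 22), (32_000, 137)]:
    minimum_cm = distance_m * angle * 100         # chord ~ arc for tiny angles
    print(f"{distance_m / 1000:4.1f} km: minimum ~{minimum_cm:.0f} cm "
          f"(specified {specified_cm} cm)")
# 5.3 km: minimum ~19 cm (specified 22 cm)
# 32.0 km: minimum ~116 cm (specified 137 cm)
```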
The Swedish telegraph was capable of being used at night with lamps. On smaller stations lamps were placed behind the shutters so that they became visible when the shutter was opened. For larger stations, this was impractical. Instead, a separate tin box matrix with glass windows was installed below the daytime shutters. The lamps inside the tin box could be uncovered by pulling strings in the same way the daytime shutters were operated. Windows on both sides of the box allowed the lamps to be seen by both the upstream and downstream adjacent stations. The codepoints used at night were the complements of the codepoints used during the day. This made the pattern of lamps in open shutters at night the same as the pattern of closed shutters in daytime.
==== First network: 1795–1809 ====
The first operational line, Stockholm to Vaxholm, went into service in January 1795. By 1797 there were also lines from Stockholm to Fredriksborg, and Grisslehamn via Signilsskär to Eckerö in Åland. A short line near Gothenburg to Marstrand on the west coast was installed in 1799. During the War of the Second Coalition, Britain tried to enforce a blockade against France. Concerned at the effect on their own trade, Sweden joined the Second League of Armed Neutrality in 1800. Britain was expected to respond with an attack on one of the Nordic countries in the league. To help guard against such an attack, the king ordered a telegraph link joining the systems of Sweden and Denmark. This was the first international telegraph connection in the world. Edelcrantz made this link between Helsingborg in Sweden and Helsingør in Denmark, across the Öresund, the narrow strait separating the two countries. A new line along the coast from Kullaberg to Malmö, incorporating the Helsingborg link, was planned in support and to provide signalling points to the Swedish fleet. Nelson's attack on the Danish fleet at Copenhagen in 1801 was reported over this link, but after Sweden failed to come to Denmark's aid it was not used again and only one station on the supporting line was ever built.
In 1808 the Royal Telegraph Institution was created and Edelcrantz was made director. The Telegraph Institution was put under the jurisdiction of the military, initially as part of the Royal Engineering Corps. A new code was introduced to replace the 1796 codebook with 5,120 possible codepoints with many new messages. The new codes included punishments for delinquent operators. These included an order to the operator to stand on one of the telegraph arms (code 001-721), and a message asking an adjacent station to confirm that they could see him do it (code 001-723). By 1809, the network had 50 stations over 200 km of line employing 172 people. In comparison, the French system in 1823 had 650 km of line and employed over three thousand people.
In 1808, the Finnish War broke out when Russia seized Finland, then part of Sweden. Åland was attacked by Russia and the telegraph stations destroyed. The Russians were expelled in a revolt, but attacked again in 1809. The station at Signilsskär found itself behind enemy lines, but continued to signal the position of Russian troops to the retreating Swedes. After Sweden ceded Finland in the Treaty of Fredrikshamn, the east coast telegraph stations were considered superfluous and put into storage. In 1810, the plans for a south coast line were revived but were scrapped in 1811 due to financial considerations. Also in 1811, a new line from Stockholm via Arholma to Söderarm lighthouse was proposed, but also never materialised. For a while, the telegraph network in Sweden was almost non-existent, with only four telegraphists employed by 1810.
==== Rebuilding the network ====
The post of Telegraph Inspector was created as early as 1811, but the telegraph in Sweden remained dormant until 1827 when new proposals were put forward. In 1834, the Telegraph Institution was moved to the Topographical Corps. The Corps head, Carl Fredrik Akrell, conducted comparisons of the Swedish shutter telegraph with more recent systems from other countries. Of particular interest was the semaphore system of Charles Pasley in England which had been on trial in Karlskrona. Tests were performed between Karlskrona and Drottningskär, and, in 1835, nighttime tests between Stockholm and Fredriksborg. Akrell concluded that the shutter telegraph was faster and easier to use, and was again adopted for fixed stations. However, Pasley's semaphore was cheaper and easier to construct, so was adopted for mobile stations. By 1836 the Swedish telegraph network had been fully restored.
The network continued to expand. In 1837, the line to Vaxholm was extended to Furusund. In 1838 the Stockholm-Dalarö-Sandhamn line was extended to Landsort. The last addition came in 1854 when the Furusund line was extended to Arholma and Söderarm. The conversion to electrical telegraphy was slower and more difficult than in other countries. The many stretches of open water that had to be crossed in the Swedish archipelagos were a major obstacle. Akrell also raised concerns similar to those in France about potential sabotage and vandalism of electrical lines. Akrell first proposed an experimental electrical telegraph line in 1852. For many years the network consisted of a mix of optical and electrical lines. The last optical stations were not taken out of service until 1881, the last in operation in Europe. In some places, the heliograph replaced the optical telegraph rather than the electrical telegraph.
=== United Kingdom ===
In Ireland, Richard Lovell Edgeworth returned to his earlier work in 1794, and proposed a telegraph there to warn against an anticipated French invasion; however, the proposal was not implemented. Lord George Murray, stimulated by reports of the Chappe semaphore, proposed a system of visual telegraphy to the British Admiralty in 1795. He employed rectangular framework towers with six five-foot-high octagonal shutters on horizontal axes that flipped between horizontal and vertical positions to signal. The Rev. Mr Gamble also proposed two distinct five-element systems in 1795: one using five shutters, and one using five ten-foot poles. The British Admiralty accepted Murray's system in September 1795, and the first system was the 15-site chain from London to Deal. Messages passed from London to Deal in about sixty seconds, and sixty-five sites were in use by 1808.
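Six shutters that each independently block or pass light give 2^6 = 64 distinct patterns; a quick enumeration (a sketch only, since Murray's actual assignment of patterns to letters is not covered here):

```python
from itertools import product

# Each of Murray's six shutters is either open (0) or closed (1),
# so reading all six together yields 2**6 = 64 possible patterns,
# enough for the alphabet, digits, and a few control signals.
patterns = list(product((0, 1), repeat=6))
print(len(patterns))   # 64
print(patterns[0])     # (0, 0, 0, 0, 0, 0) -- the all-open pattern
```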
Chains of Murray's shutter telegraph stations were built along the following routes: London–Deal and Sheerness, London–Great Yarmouth, and London–Portsmouth and Plymouth. The line to Plymouth was not completed until 4 July 1806, and so could not be used to relay the news of Trafalgar. The shutter stations were temporary wooden huts, and at the conclusion of the Napoleonic wars they were no longer necessary, and were closed down by the Admiralty in March 1816.
Following the Battle of Trafalgar, the news was transmitted to London by frigate to Falmouth, from where the captain took the dispatches to London by coach along what became known as the Trafalgar Way; the journey took 38 hours. This delay prompted the Admiralty to investigate further.
A replacement telegraph system was sought, and of the many ideas and devices put forward the Admiralty chose the simpler semaphore system invented by Sir Home Popham. A Popham semaphore was a single fixed vertical 30-foot pole, with two movable 8-foot arms attached to the pole by horizontal pivots at their ends, one arm at the top of the pole and the other at its middle. The signals of the Popham semaphore were found to be much more visible than those of the Murray shutter telegraph. Popham's 2-arm semaphore was modelled on the 3-arm Depillon French semaphore. An experimental semaphore line between the Admiralty and Chatham was installed in July 1816, and its success helped to confirm the choice.
Subsequently, the Admiralty decided to establish a permanent link to Portsmouth and built a chain of semaphore stations. Work started in December 1820 with Popham's equipment replaced by another two-arm system invented by Charles Pasley. Each of the arms of Pasley's system could take on one of eight positions, and it thus had more codepoints than Popham's. In good conditions messages were sent from London to Portsmouth in less than eight minutes. The line was operational from 1822 until 1847, when the railway and electric telegraph provided a better means of communication. The semaphore line did not use the same locations as the shutter chain, but followed almost the same route with 15 stations: Admiralty (London), Chelsea Royal Hospital, Putney Heath, Coombe Warren, Coopers Hill, Chatley Heath, Pewley Hill, Bannicle Hill, Haste Hill (Haslemere), Holder Hill (Midhurst), Beacon Hill, Compton Down, Camp Down, Lumps Fort (Southsea), and Portsmouth Dockyard. The semaphore tower at Chatley Heath, which replaced the Netley Heath station of the shutter telegraph, is currently being restored by the Landmark Trust as self-catering holiday accommodation. There will be public access on certain days when the restoration is complete.
The Board of the Port of Liverpool obtained a local act of Parliament, the Liverpool Improvement Act 1825 (6 Geo. 4. c. clxxxvii), to construct a chain of Popham optical semaphore stations from Liverpool to Holyhead in 1825. The system was designed and part-owned by Barnard L. Watson, a reserve marine officer, and came into service in 1827. The line is possibly the only example of an optical telegraph built entirely for commercial purposes. It was used so that observers at Holyhead could report incoming ships to the Port of Liverpool, and trading in the cargo being carried could begin before the ship docked. The line was kept in operation until 1860, when a railway line and associated electrical telegraph made it redundant. Many of the prominences on which the towers were built are known as Telegraph Hill to this day.
==== British empire ====
===== Ireland =====
In Ireland, R.L. Edgeworth returned to develop an optical telegraph based on a triangular pointer measuring up to 16 feet in height. After several years promoting his system, he obtained Admiralty approval and engaged in its construction during 1803–1804. The completed system ran from Dublin to Galway and was to act as a rapid warning system in case of French invasion of the west coast of Ireland. Despite its success in operation, the receding threat of French invasion saw the system disestablished in 1804.
===== Canada =====
In Canada, Prince Edward, Duke of Kent established the first semaphore line in North America. In operation by 1800, it ran between the city of Halifax and the town of Annapolis in Nova Scotia, and across the Bay of Fundy to Saint John and Fredericton in New Brunswick. In addition to providing information on approaching ships, the Duke used the system to relay military commands, especially as they related to troop discipline. The Duke had envisioned the line reaching as far as the British garrison at Quebec City, but the many hills and coastal fog meant the towers needed to be placed relatively close together to ensure visibility. The labour needed to build and continually man so many stations taxed the already stretched-thin British military and there is doubt the New Brunswick line was ever in operation. With the exception of the towers around Halifax harbour, the system was abandoned shortly after the Duke's departure in August 1800.
===== Malta =====
The British military authorities began to consider installing a semaphore line in Malta in the early 1840s. Initially, it was planned that semaphore stations be established on the bell towers and domes of the island's churches, but the religious authorities rejected the proposal. Due to this, in 1848 new semaphore towers were constructed at Għargħur and Għaxaq on the main island, and another was built at Ta' Kenuna on Gozo. Further stations were established at the Governor's Palace, Selmun Palace and the Giordan Lighthouse. Each station was staffed by the Royal Engineers.
===== India =====
In India, semaphore towers were introduced in 1810. A series of towers was built between Fort William, Kolkata, and Chunar Fort near Varanasi. The towers in the plains were 75–80 ft (23–24 m) tall and those in the hills were 40–50 ft (12–15 m) tall, and they were built at intervals of about 13 km (8.1 mi).
===== Van Diemen's Land =====
In southern Van Diemen's Land (Tasmania), a signalling system to announce the arrival of ships was suggested by Governor-in-Chief Lachlan Macquarie when he made his first visit in 1811. Initially a simple flag system between Mt Nelson and Hobart in 1818, it developed by 1829 into a system with two revolving arms, but this was quite crude and the arms were difficult to operate. In 1833 Charles O'Hara Booth took over command of the Port Arthur penal settlement; an "enthusiast in the art of signalling", he saw the value of better communications with the headquarters in Hobart. During his command the semaphore system was extended to include 19 stations on the various mountains and islands between Port Arthur and Hobart. Until 1837 three single rotating-arm semaphores were used. Subsequently the network was upgraded to use signal posts with six arms, arranged in pairs at the top, middle and bottom. This enabled the semaphore to send 999 signal codes. Captain George King of the Port Office and Booth together contributed to the code book for the system.
King drew up shipping-related codes and Booth added government, military and penal station matters. In 1877 Port Arthur was closed and the semaphore was operated for shipping signals only; it was finally replaced with a simple flagstaff after the introduction of the telephone in 1880.
In the north of the state there was a requirement to report on shipping arrivals as they entered the Tamar Estuary, some 55 kilometres from Launceston, the main port at the time. The Tamar Valley semaphore system was based on a design by Peter Archer Mulgrave. This design used two arms, one with a cross piece at the end. The arms were rotated by ropes, and later chains. The barred arm positions indicated numbers 1 to 6, clockwise from the bottom left, and the unbarred arm 7, 8, 9, STOP and REPEAT. A message was sent as a sequence of numbers making up a code which, as in other systems, was decoded via a code book. On 1 October 1835 it was announced in the Launceston Advertiser that "...the signal stations are now complete from Launceston to George Town, and communication may be made, as well as received, from the Windmill Hill to George Town, in a very few minutes, on a clear day". The system comprised six stations: Launceston Port Office, Windmill Hill, Mt Direction, Mt George, George Town Port Office and Low Head lighthouse. The Tamar Valley semaphore telegraph operated for twenty-two and a half years, closing on 31 March 1858 after the introduction of the electric telegraph.
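A sketch of the arm-reading rules described above (the position numbering is an illustrative convention, not Mulgrave's own notation):

```python
# Barred arm (the one with the cross piece): positions 1-6, read
# clockwise from the bottom left.
BARRED = {pos: pos for pos in range(1, 7)}
# Unbarred arm: five positions for 7, 8, 9, STOP and REPEAT.
UNBARRED = dict(zip(range(1, 6), [7, 8, 9, "STOP", "REPEAT"]))

def read_arm(barred: bool, position: int):
    return (BARRED if barred else UNBARRED)[position]

# Numbers were sent one digit at a time and looked up in a code book,
# e.g. barred-2, unbarred-1 (i.e. 7), barred-4 spells code 274.
code = [read_arm(True, 2), read_arm(False, 1), read_arm(True, 4)]
print("".join(str(d) for d in code))   # 274
```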
In the 1990s the Tamar Valley Signal Station Committee Inc. was formed to restore the system. The works were carried out over several years and the semaphore telegraph was declared complete once more on Sunday 30 September 2001.
=== Iberia ===
==== Spain ====
In Spain, the engineer Agustín de Betancourt developed his own system, which was adopted by that state; in 1798 he received a Royal Appointment, and the first stretch of line connecting Madrid and Aranjuez was in operation as of August 1800. Spain was spanned by an extensive semaphore telegraph network in the 1840s and 1850s. The three main semaphore lines radiated from Madrid. The first ran north to Irun on the Atlantic coast at the French border. The second ran east to the Mediterranean, then north along the coast through Barcelona to the French border. The third ran south to Cadiz on the Atlantic coast. These lines served many other Spanish cities, including Aranjuez, Badajoz, Burgos, Castellon, Ciudad Real, Córdoba, Cuenca, Gerona, Pamplona, San Sebastian, Seville, Tarancon, Tarragona, Toledo, Valladolid, Valencia, Vitoria and Zaragoza.
The rugged topography of the Iberian Peninsula, which had facilitated the design of semaphore lines conveying information from hilltop to hilltop, made it difficult to implement wire telegraph lines when that technology was introduced in the mid-19th century. The Madrid-Cadiz line was the first to be dismantled, in 1855, but other segments of the optical system continued to function until the end of the Carlist Wars in 1876.
==== Portugal ====
In Portugal, the British forces fighting Napoleon soon found that the Portuguese Army already had a very capable terrestrial semaphore system, working since 1808, which gave the Duke of Wellington a decisive advantage in intelligence. The innovative Portuguese telegraphs, designed by the mathematician Francisco António Ciera, were of three types: three shutters, three balls, and a single pointer/movable arm. Ciera also wrote the code book "Táboas Telegráphicas", which was the same for all three telegraph types. From early 1810 the network was operated by the "Corpo Telegráfico", the first Portuguese military signal corps.
=== Other regions ===
Once it had proved its success in France, the optical telegraph was imitated in many other countries, especially after it was used by Napoleon to coordinate his empire and army. In most of these countries, the postal authorities operated the semaphore lines. Many national services adopted signalling systems different from the Chappe system. For example, the UK and Sweden adopted systems of shuttered panels (in contradiction to the Chappe brothers' contention that angled rods are more visible). In some cases, new systems were adopted because they were thought to be improvements. But many countries pursued their own, often inferior, designs for reasons of national pride or not wanting to copy from rivals and enemies.
In 1801, the Danish post office installed a semaphore line across the Great Belt strait, Storebæltstelegrafen, between islands Funen and Zealand with stations at Nyborg on Funen, on the small island Sprogø in the middle of the strait, and at Korsør on Zealand. It was in use until 1865.
In the Kingdom of Prussia, Frederick William III ordered the construction of an experimental line in 1819, but due to opposition from the defence minister Karl von Hake, nothing happened until 1830 when a short three-station line between Berlin and Potsdam was built. The design was based on the Swedish telegraph with the number of shutters increased to twelve. Postrat Carl Pistor proposed instead a semaphore system based on Watson's design in England. An operational line of this design running Berlin-Magdeburg-Dortmund-Köln-Bonn-Koblenz was completed in 1833. The line employed about 200 people, comparable to Sweden, but no network ever developed and no more official lines were built. The line was decommissioned in 1849 in favour of an electrical line.
Although there were no more government sponsored official lines, there was some private enterprise. Johann Ludwig Schmidt opened a commercial line from Hamburg to Cuxhaven in 1837. In 1847, Schmidt opened a second line from Bremen to Bremerhaven. These lines were used for reporting the arrival of commercial ships. The two lines were later linked with three additional stations to create possibly the only private telegraph network in the optical telegraph era. The telegraph inspector for this network was Friedrich Clemens Gerke, who would later move to the Hamburg-Cuxhaven electrical telegraph line and develop what became the International Morse Code. The Hamburg line went out of use in 1850, and the Bremen line in 1852.
In Russia, Tsar Nicholas I inaugurated a line between Moscow and Warsaw, 1,200 kilometres (750 mi) long, in 1833; it needed 220 stations staffed by 1,320 operators. The stations were noted to be unused and decaying in 1859, so the line was probably abandoned long before then.
In the United States, the first optical telegraph was built by Jonathan Grout in 1804 but ceased operation in 1807. This 104-kilometre (65 mi) line connecting Martha's Vineyard with Boston transmitted shipping news. An optical telegraph system linking Philadelphia and the mouth of the Delaware Bay was in place by 1809 and had a similar purpose; a second line, to New York City, was operational by 1834, when its Philadelphia terminus was moved to the tower of the Merchants Exchange. One of the principal hills in San Francisco, California is also named "Telegraph Hill", after the semaphore telegraph which was established there in 1849 to signal the arrival of ships into San Francisco Bay.
== As first data networks ==
The optical telegraphs put in place at the turn of the 18th/19th centuries were the first examples of data networks. Chappe and Edelcrantz independently invented many features that are now commonplace in modern networks, but were then revolutionary and essential to the smooth running of the systems. These features included control characters, routing, error control, flow control, message priority and symbol rate control. Edelcrantz documented the meaning and usage of all his control codes from the start in 1794. The details of the early Chappe system are not known precisely; the first operating instructions to survive date to 1809 and the French system is not as fully explained as the Swedish.
Some of the features of these systems are considered advanced in modern practice and have been recently reinvented. An example is the error-control codepoint 707 in the Edelcrantz code, which was used to request the repeat of a specified recent symbol. The 707 was followed by two symbols identifying the row and column, in the current page of the logbook, of the symbol to be repeated. This is an example of a selective repeat and is more efficient than the simple go-back-n strategy used on many modern networks. It was a later addition; both Edelcrantz (codepoint 272) and Chappe (codepoint 2H6) initially used only a simple "erase last character" for error control, taken directly from Hooke's 1684 proposal.
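A minimal sketch of the framing (the list representation is an illustration, not the historical signalling order):

```python
def selective_repeat_request(row: int, column: int) -> list[int]:
    """Codepoint 707 asks the upstream station to resend exactly one
    symbol, identified by its row and column in the current logbook
    page -- a selective repeat of a single symbol."""
    return [707, row, column]

# Go-back-n would instead discard the erroneous symbol and everything
# after it, forcing retransmission of the whole tail of the message.
print(selective_repeat_request(row=3, column=5))   # [707, 3, 5]
```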
Routing in the French system was almost permanently fixed; only Paris and the station at the remote end of a line were allowed to initiate a message. The early Swedish system was more flexible, having the ability to set up message connections between arbitrary stations. Similar to modern networks, the initialisation request contained the identification of the requesting and target station. The request was acknowledged by the target station by sending the complement of the code received. This protocol is unique with no modern equivalent. This facility was removed from the codebook in the revision of 1808. After this, only Stockholm would normally initiate messages with other stations waiting to be polled.
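A sketch of the acknowledgement rule, assuming a codepoint is the 10-bit shutter pattern described earlier, so its complement is obtained by flipping every bit:

```python
def ack(codepoint: int, bits: int = 10) -> int:
    """Acknowledge a connection request by returning the bitwise
    complement of the codepoint that was received."""
    return codepoint ^ ((1 << bits) - 1)

request = 0o636                      # a hypothetical initialisation codepoint
print(oct(ack(request)))             # 0o1141
assert ack(ack(request)) == request  # complementing twice restores the code
```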
The Prussian system required the Coblenz station (at the end of the line) to send a "no news" message (or a real message if there was one pending) back to Berlin on the hour, every hour. Intermediate stations could only pass messages by replacing the "no news" message with their traffic. On arrival in Berlin, the "no news" message was returned to Coblenz with the same procedure. This can be considered an early example of a token passing system. This arrangement required accurate clock synchronisation at all the stations. A synchronisation signal was sent out from Berlin for this purpose every three days.
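A schematic of the hourly circuit as a token-passing loop (station list abbreviated; a sketch of the principle, not the Prussian operating instructions):

```python
STATIONS = ["Coblenz", "Köln", "Magdeburg", "Berlin"]   # abbreviated line

def hourly_circuit(pending: dict[str, str]) -> str:
    """Send the hourly token from Coblenz toward Berlin. The token is
    'no news' unless a station along the way has traffic, in which
    case its message replaces the token for the rest of the trip."""
    message = pending.get("Coblenz", "no news")
    for station in STATIONS[1:-1]:                      # intermediate stations
        if message == "no news" and station in pending:
            message = pending[station]
    return message

print(hourly_circuit({}))                               # no news
print(hourly_circuit({"Magdeburg": "bridge closed"}))   # bridge closed
```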
Another feature that would be considered advanced in a modern electronic system is the dynamic changing of transmission rates. Edelcrantz had codepoints for faster (770) and slower (077). Chappe also had this feature.
== In popular culture ==
By the mid-19th century, the optical telegraph was well known enough to be referenced in popular works without special explanation. The Chappe telegraph appeared in contemporary fiction and comic strips. In "Mister Pencil" (1831), a comic strip by Rodolphe Töpffer, a dog fallen on a Chappe telegraph's arm—and its master attempting to help get it down—provoke an international crisis by inadvertently transmitting disturbing messages. In Lucien Leuwen (1834), Stendhal pictures a power struggle between Lucien Leuwen and the prefect M. de Séranville with the telegraph's director M. Lamorte. In Chapter 60 ("The Telegraph") of Alexandre Dumas' The Count of Monte Cristo (1844), the title character describes with fascination the semaphore line's moving arms: "I had at times seen rise at the end of a road, on a hillock and in the bright light of the sun, these black folding arms looking like the legs of an immense beetle." He later bribes a semaphore operator to relay a false message in order to manipulate the French financial market. Dumas also describes in detail the functioning of a Chappe telegraph line. In Hector Malot's novel Romain Kalbris (1869), one of the characters, a girl named Dielette, describes her home in Paris as "...next to a church near which there was a clock tower. On top of the tower there were two large black arms, moving all day this way and that. [I was told later] that this was Saint-Eustache church and that these large black arms were a telegraph."
In the 21st century, the optical telegraph concept is mainly kept alive in popular culture through fiction such as the novel Pavane and Terry Pratchett's "Clacks" in his Discworld novels, most notably the 2004 novel Going Postal.
== See also ==
History of telecommunication
Telegraph code, for more information on many of the codes used
Optical communication
Polybius square
Railway signalling
San Jose Semaphore
Semaphore Flag Signaling System
Signal lamp
Telegraph Hill, for a list of telegraph hills
Wigwag, a flag signaling system that also used telescopes and towers
== References ==
Bibliography
Burns, R. W. (2004). Communications: An International History of the Formative Years. Institution of Electrical Engineers. ISBN 978-0-86341-327-8.
Crowley, David; Heyer, Paul, eds. (2003). "Chapter 17: The Optical Telegraph". Communication in History: Technology, Culture and Society (4th ed.). Boston: Allyn and Bacon. pp. 123–125.
Edelcrantz, Abraham Niclas (1796). Afhandling om Telegrapher ("A Treatise on Telegraphs"), as translated in ch. 4 of Holzmann & Pehrson.
Greene, David (2003). Light and Dark: An Exploration in Science, Nature, Art and Technology. Boca Raton, FL: Taylor and Francis. ISBN 978-1-4200-3403-5.
Holzmann, Gerard J.; Pehrson, Björn (1995). The Early History of Data Networks. John Wiley & Sons. ISBN 0818667826.
Huurdeman, Anton A. (2003). The Worldwide History of Telecommunications. John Wiley & Sons. ISBN 0471205052.
== Further reading ==
Standage, Tom (1998). The Victorian Internet. Walker & Company. ISBN 0-8027-1342-4.
Wilson, Geoffrey (1976). The Old Telegraphs. Phillimore & Co. ISBN 0-900592-79-6.
Large, Frank. Faster Than The Wind: The Liverpool to Holyhead Telegraph. Avid Publications. ISBN 0-9521020-9-9.
Burns, R.W. (2004). "Semaphore Signaling, Chapter 2". Communications: an international history of the formative years. Institution of Electrical Engineers. ISBN 978-0-86341-327-8.
== External links ==
Chappe's semaphore (an illustrated history of optical telegraphy)
Webpage including a map of England's telegraph chains
Diagrams and maps of Murray's U.K. semaphore stations
Chart of Murray's shutter-semaphore code
Photo and diagrams of Popham's U.K. semaphore stations
Map of visual telegraph (semaphore) and electrical telegraph lines in Italy, 1860 (in Italian)
Details on the history of the Blanc brothers' fraudulent use of the semaphore line
Live recreation of the Spanish optical telegraph code (in Spanish)
The Electric Telegraph Company (ETC) was a British telegraph company founded in 1846 by William Fothergill Cooke and John Ricardo. It was the world's first public telegraph company. The equipment used was the Cooke and Wheatstone telegraph, an electrical telegraph developed a few years earlier in collaboration with Charles Wheatstone. The system had been taken up by several railway companies for signalling purposes, but in forming the company Cooke intended to open up the technology to the public at large.
The ETC had a monopoly of electrical telegraphy until the formation of the Magnetic Telegraph Company (commonly called the Magnetic), which used a different system that did not infringe the ETC's patents. The Magnetic became the chief rival of the ETC and the two of them dominated the market even after further companies entered the field.
The ETC was heavily involved in laying submarine telegraph cables, including lines to the Netherlands, Ireland, the Channel Islands, and the Isle of Man. It operated the world's first specialised cable-laying ship, the Monarch. A private line was laid for Queen Victoria on the Isle of Wight. The company was nationalised in 1870 along with other British telegraph companies, and its assets were taken over by the General Post Office.
== Formation ==
The Electric Telegraph Company was the world's first public telegraph company, founded in the United Kingdom by Sir William Fothergill Cooke and John Lewis Ricardo, MP for Stoke-on-Trent, with Cromwell F. Varley as chief engineer. It was incorporated by the Electric Telegraph Company's Act 1846 (9 & 10 Vict. c. xlvi). Its headquarters was in Founders Court, Lothbury, behind the Bank of England. This was the first company formed for the specific purpose of providing a telegraph service to the public. Besides Cooke and Ricardo, the original shareholders were railway engineer George Parker Bidder with the largest holding, Benjamin Hawes, Thomas Boulton, and three other members of the Ricardo family: Samson, Albert, and Frederick.
Up to this point telegraph lines had been laid mostly in conjunction with railway companies, and Cooke had been a leading figure in convincing them of its benefits. However, these systems were all for the exclusive use of the railway company concerned, mostly for signalling purposes, until 1843 when Cooke extended the Great Western Railway's telegraph on to Slough at his own expense, at which point he acquired the right to open it to the public. Railway telegraphy continued to be an important part of the company's business with expenditure on the railways peaking in 1847–48. This focus on the railways was reflected in the directors and major shareholders being dominated by people associated with railway construction. Additional railway people who had become involved by 1849 included Samuel Morton Peto, Thomas Brassey, Robert Stephenson (of Rocket fame, and chairman of the company in 1857–58), Joseph Paxton, and Richard Till, a director of several railway companies.
The collaboration between Cooke and Charles Wheatstone in developing the Cooke and Wheatstone telegraph was not a happy one, degenerating into a bitter dispute over who had invented the telegraph. As a result, the company was formed without Wheatstone (although he claimed he had been offered the post of scientific adviser). At creation the company purchased all the patents Cooke and Wheatstone had obtained to date in building the Cooke and Wheatstone telegraph. It also obtained the important patent for the electric relay from Edward Davy for £600. The relay allowed telegraph signals weakened over a long distance to be renewed and retransmitted onward.
== Early years ==
The company was not immediately hugely profitable, and shares were more or less worthless. In 1846 it won a concession from Belgium for telegraph lines covering the whole country. The company installed a line from Brussels to Antwerp but the traffic was light (mainly stock exchange business) and the company decided to return its concession to the Belgian Government in 1850. In 1848, after a dispute with the Great Western over an engine the ETC was alleged to have damaged, the telegraph line from Paddington to Slough was removed, although the railway company continued to use the telegraph at the Box Tunnel.
The setback with the Great Western did not slow the growth of the telegraph along railway lines, and these continued to be the main source of revenue. By 1848 the company had telegraph lines along half of the railway lines then open, some 1,800 miles, and continued to make deals with more railway companies after that. These included in 1851 a new contract with Great Western which was extending its line to Exeter and Plymouth and by 1852 the ETC had installed a line that ran from London, past Slough, as far as Bristol. These contracts usually gave the company exclusive rights to install telegraph lines. This gave the company a significant advantage over competitors when other companies entered the market.
Other areas of growth were in the supply of news to newspapers, and contracts with stock exchanges. However, general use by the public was retarded by the high cost of sending a message. By 1855 this situation was changing. The ETC now had over 5,200 miles of line and sent nearly three-quarters of a million messages that year. The growth, together with competitors coming on to the market, drove down prices. ETC's maximum charge for an inland telegram (over 100 miles) fell from ten shillings in 1851 to four shillings in 1855.
By 1859, growth required the company to relocate its London central office to Great Bell Alley, Moorgate, while retaining the Founders Court site as a public office. The Moorgate office was arranged over three floors, and a large number of men and boys were recruited on an accelerating rate of pay. The company also employed a significant number of women, from a higher social class, as telegraphists operating the Wheatstone needle instruments. They were paid less, and they had to leave if they married. A notable early employee was Maria Craig, who became a supervisor. The portion of Great Bell Alley east of Moorgate Street was later renamed Telegraph Street in recognition of the importance of the company at 11–14 Telegraph Street. The site is now occupied by The Telegraph pub.
== Government reserved powers ==
In the act of Parliament establishing the company, the Electric Telegraph Company's Act 1846, the government reserved the right to take over the resources of the ETC in times of national emergency. It did this in 1848 in response to Chartist agitation. Chartism was a working-class movement for democratic reform. One of the main aims was to achieve the vote for all men over twenty-one. In April 1848, the Chartists organised a large demonstration at Kennington Common and presented a petition signed by millions. The government, fearing an insurrection, used its control of the ETC telegraph to disrupt Chartist communication.
== Competitors ==
The first competitor to emerge was the British Electric Telegraph Company (BETC), formed in 1849 by Henry Highton and his brother Edward. The ETC had a policy of suppressing competitors by buying up rival patents. This it had done to Highton when he patented a gold-leaf telegraph instrument. However, Highton now proposed a telegraph with a different system. Even worse for the ETC, in 1850 Parliament passed an Act giving it the right to force the railways to allow the BETC to construct a telegraph for government use between Liverpool and London. The ETC tried to oppose the government Bill but without success.
A more serious rival came in 1851 with the formation of the English and Irish Magnetic Telegraph Company (later renamed the British and Irish Magnetic Telegraph Company and usually just called the Magnetic). The Magnetic also used a non-infringing system, generating the telegraph pulses electromagnetically by the operator's own motion of working the equipment handles. The Magnetic got around the ETC's dominance of rail wayleaves by using buried cables along highways, a problem that had hindered the BETC and eventually led to its takeover by the Magnetic. Further, it had an exclusive agreement with the Submarine Telegraph Company who had laid the first cable to France and was busily laying more cables to other continental countries. The Magnetic also beat the ETC in getting the first cable to Ireland in 1853. For a while then, the Magnetic had shut the ETC out of international business. The ETC was keen to correct this situation and started laying its own submarine cables.
Other companies came on to the market, but ETC remained by far the largest of them with the Magnetic second. The ETC and the Magnetic so dominated the market that they were virtually a duopoly until nationalisation.
== Submarine cables ==
The Electric Telegraph Company merged with the International Telegraph Company (ITC) in 1854 to become the Electric and International Telegraph Company. The International Telegraph Company had been formed in 1853 to establish a telegraph connection to the Netherlands, between Orfordness and Scheveningen, using submarine telegraph cables. The concession to lay the cables had originally been granted to the ETC, but the Dutch government objected to the ETC laying landlines on its territory, so a separate company, the ITC, was set up to do this. In practice, the ITC was run by ETC staff. It planned to lay four separate cable cores as a diversity scheme against damage from anchors and fishing gear; all four were combined into a single cable a short distance offshore. The work was begun in 1853 with the ship Monarch, specially purchased and fitted out for the purpose, and completed in 1854. The cable proved to need a great deal of maintenance and was replaced in 1858 by a single, heavier cable made by Glass, Elliot & Co and laid by William Cory.
=== Monarch ===
The Monarch was the first ship to be permanently fitted out as a cable ship and operated on a full-time basis by a cable company, although the fitting out for the Netherlands cables was considered temporary. She was a paddle steamer built in 1830 at Thornton-on-Tees with a 130 hp engine. She was the first of a series of cable ships named Monarch.
The cable laying equipment of Monarch was a major step forward compared to the unspecialised ships previously used for cable laying, with sheaves to run the cable out of the hold and a powerful dedicated brake to control the cable as it ran out. However, Monarch did not store the cable in water-filled tanks, as was done on later cable ships, so the ship could not be kept in trim by replacing the cable with water as it was payed out. Instead, coils of cable had to be run out alternately from the fore hold and the main hold.
Besides the cables to the Netherlands, Monarch laid several cables around Britain in its first year. One of these was a cable across the Solent to the Isle of Wight. The purpose of this cable was to provide a connection to Osborne House, the summer residence of Queen Victoria.
A number of improvements were made to Monarch over the years and its gear became the prototype for future cable ships. A cable picking-up machine, designed by the company engineer Frederick Charles Webb, was soon fitted, with a drum that could be driven either by steam engine or by manual winching. In 1857, draw-off gear was fitted to save the crew from having to hold the cable taut by hand, and water-cooled brakes were fitted in 1863.
The ship was frequently chartered to other companies, such as the Submarine Telegraph Company and the Magnetic, for cable work. The first charter was to R.S. Newall and Company to recover an abandoned cable in the Irish Sea. Newall had made this cable for the Magnetic, and a failed attempt to lay it from Portpatrick in Scotland to Donaghadee in Ireland had been made in 1852. Newall temporarily installed its own picking-up machine, as Webb's had not yet been fitted.
After nationalisation in 1870, Monarch irreparably broke down on her first cable mission for the General Post Office (GPO). She was then relegated to service as a coal hulk.
=== Ireland ===
The chief competitor to the company, the Magnetic, had succeeded in providing the first connection to Ireland in 1853 on the Portpatrick–Donaghadee route. The ETC was keen to establish its own connection. In September 1854 Monarch attempted to lay a lightweight cable from Holyhead in Wales to Howth in Ireland. This attempt failed, as had previous attempts with lightweight cable on both routes. In June 1855 Monarch tried again, this time with a heavier cable made by Newall. This attempt was successful, the cable being of a similar design to the one Newall had made for the Magnetic's successful cable.
Another cable was laid to Ireland in 1862, this time from Wexford in Ireland to Abermawr in Wales. The cable was made by Glass, Elliot & Co and laid by Berwick.
=== Channel Islands ===
A subsidiary company, the Channel Islands Telegraph Company, was formed in 1857 to provide a telegraph service to the Channel Islands of Jersey, Guernsey, and Alderney. The main cable was made by Newall and laid by Elba between Weymouth and Alderney in August 1858. The cable required numerous repairs due to the rocky coast of Alderney and the tidal race off Portland Bill. The main section was finally abandoned as a maintenance liability shortly after September 1860.
=== Isle of Man ===
A subsidiary company, the Isle of Man Electric Telegraph Company, was formed in 1859 to provide a telegraph service to the Isle of Man. The cable was made by Glass, Elliot & Co and laid by Resolute from Whitehaven.
== Nationalisation ==
The company was nationalised by the British government in 1870 under the Telegraph Act 1868 along with most other British telegraph companies. The Telegraph Act 1870 extended the 1868 Act to include the Isle of Man Electric Telegraph Company and the Jersey and Guernsey Telegraph Company, but excluded the Submarine Telegraph Company and other companies which exclusively operated international cables.
The Electric Telegraph Company formed the largest component of the resulting state monopoly run by the GPO. In 1969 Post Office Telecommunications was made a distinct department of the Post Office, and in 1981 it was separated entirely from the Post Office as British Telecom. In 1984, British Telecom was privatised and from 1991 traded as BT.
== Equipment ==
The primary systems initially used by the company were the two-needle and one-needle Cooke and Wheatstone telegraphs. Needle telegraphs continued to be used throughout the company's existence, but printing telegraphs were also in use by the 1850s. From 1867, the ETC started to use the Wheatstone automatic duplex system. This device sent messages at an extremely fast rate from text that had been prerecorded on punched paper tape. Its advantage was that it could make maximum use of a telegraph line, a great economic advantage on busy long-distance lines where traffic capacity was otherwise limited by the speed of the operator. Without it, increasing traffic would have required installing expensive additional lines and employing additional operators.
In 1854 the ETC installed a pneumatic tube system between its London central office and the London Stock Exchange using underground pipes. This system was later extended to other major company offices in London. Systems were also installed in Liverpool (1864), Birmingham (1865), and Manchester (1865).
== Historical documents ==
Records of the Electric Telegraph Company (33 volumes), 1846–1872, the International Telegraph Company (5 volumes), 1852–1858 and the Electric and International Telegraph Company (62 volumes), [1852]–1905 are held by BT Archives.
== See also ==
Time signal § United Kingdom (the ETC was the first to distribute telegraph time signals)
== References ==
== Bibliography ==
Ash, Stewart, "The development of submarine cables", ch. 1 in, Burnett, Douglas R.; Beckman, Robert; Davenport, Tara M., Submarine Cables: The Handbook of Law and Policy, Martinus Nijhoff Publishers, 2014 ISBN 9789004260320.
Beauchamp, Ken, History of Telegraphy, Institution of Engineering and Technology, 2001 ISBN 0852967926.
Bright, Charles Tilston, Submarine Telegraphs, London: Crosby Lockwood, 1898 OCLC 776529627.
Bright, Edward Brailsford; Bright, Charles, The Life Story of the Late Sir Charles Tilston Bright, Civil Engineer, Cambridge University Press, 2012 ISBN 1108052886 (first published 1898).
Chase, Malcolm, Chartism: A New History, Manchester University Press, 2007 ISBN 9780719060878.
Haigh, Kenneth Richardson, Cableships and Submarine Cables, Adlard Coles, 1968 OCLC 497380538.
Hills, Jill, The Struggle for Control of Global Communication, University of Illinois Press, 2002 ISBN 0252027574.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
McDonough, John; Egolf, Karen, The Advertising Age Encyclopedia of Advertising.
Pitt, Douglas C., The Telecommunications Function of the British Post Office, Saxon House, 1980 ISBN 9780566002731.
Roberts, Steven, Distant Writing, distantwriting.co.uk,
ch. 4, "The Electric Telegraph Company", archived 1 July 2016,
ch. 5, "Competitors and allies", archived 1 July 2016.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859.
Walley, Wayne, "British Telecom", pp. 218–220 in, Welch, Dick; Frémond, Olivier (eds), The Case-by-case Approach to Privatization, World Bank Publications, 1998 ISBN 9780821341964.
== External links ==
BT Archives official site Archived 2011-02-19 at the Wayback Machine
The BT Family Tree Archived 2011-09-06 at the Wayback Machine
A duplexer is an electronic device that allows bi-directional (duplex) communication over a single path. In radar and radio communications systems, it isolates the receiver from the transmitter while permitting them to share a common antenna. Most radio repeater systems include a duplexer. Duplexers can be based on frequency (often a waveguide filter), polarization (such as an orthomode transducer), or timing (as is typical in radar).
== Types ==
=== Transmit-receive switch ===
In radar, a transmit/receive (TR) switch alternately connects the transmitter and receiver to a shared antenna. In the simplest arrangement, the switch is a gas-discharge tube across the input terminals of the receiver. When the transmitter is active, the resulting high voltage causes the tube to conduct, shorting together the receiver terminals to protect the receiver. Its complement, the anti-transmit/receive (ATR) switch, is a similar discharge tube that decouples the transmitter from the antenna when the transmitter is not operating, to prevent it from wasting received energy.
=== Circulator ===
=== Hybrid ===
A hybrid, such as a magic T, may be used as a duplexer by terminating the fourth port in a matched load.
This arrangement suffers from the disadvantage that half of the transmitter power is lost in the matched load, while thermal noise in the load is delivered to the receiver.
=== Orthomode transducer ===
=== Frequency domain ===
In radio communications (as opposed to radar), the transmitted and received signals can occupy different frequency bands, and so may be separated by frequency-selective filters. These are effectively a higher-performance version of a diplexer, with a narrow split between the two frequencies in question (typically around 2–5% for a commercial two-way radio system).
With a duplexer the high- and low-frequency signals are traveling in opposite directions at the shared port of the duplexer.
Modern duplexers often use closely spaced frequency bands, so the frequency separation between the two ports can be much smaller. For example, the separation between the uplink and downlink bands in the GSM frequency bands may be about one percent (915 MHz to 925 MHz). Significant attenuation (isolation) is needed to prevent the transmitter's output from overloading the receiver's input, so such duplexers employ multi-pole filters. Duplexers are commonly made for use on the 30–50 MHz ("low band"), 136–174 MHz ("high band") and 380–520 MHz ("UHF") bands, plus the 790–862 MHz ("800"), 896–960 MHz ("900") and 1215–1300 MHz ("1200") bands.
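The isolation such a filter branch provides can be estimated from its frequency response. The following is a minimal sketch, assuming illustrative band edges and filter order rather than any real product's specification, of the rejection a receiver-branch bandpass filter might offer at a nearby transmit frequency:

```python
# A minimal sketch (illustrative values, not a validated RF design) of the
# isolation a receiver-branch bandpass filter gives at a nearby transmit
# frequency. Frequencies are in MHz to keep the analog design well-scaled.
import numpy as np
from scipy import signal

f_tx, f_rx = 915.0, 925.0   # assumed uplink / downlink frequencies, MHz

# Analog Butterworth bandpass passing an assumed 923-927 MHz receive band.
b, a = signal.iirfilter(6, [2 * np.pi * 923, 2 * np.pi * 927],
                        btype='bandpass', ftype='butter', analog=True)

# Evaluate the branch response at the receive and transmit frequencies.
_, h = signal.freqs(b, a, worN=2 * np.pi * np.array([f_rx, f_tx]))
gain_db = 20 * np.log10(np.abs(h))
print(f"insertion at RX ({f_rx} MHz): {gain_db[0]:6.1f} dB")
print(f"rejection at TX ({f_tx} MHz): {gain_db[1]:6.1f} dB")
```

Real duplexer cavities are characterised by measurement rather than by a textbook prototype, but the exercise shows why a one-percent split demands steep, multi-pole filters.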
There are two predominant types of duplexer in use: notch duplexers, which exhibit sharp notches at the unwanted frequencies and pass only a narrow band of wanted frequencies, and bandpass duplexers, which have wide pass bands and high out-of-band attenuation.
On shared-antenna sites, the bandpass duplexer variety is greatly preferred because this virtually eliminates interference between transmitters and receivers by removing out-of-band transmit emissions and considerably improving the selectivity of receivers. Most professionally engineered sites ban the use of notch duplexers and insist on bandpass duplexers for this reason.
Note 1: A duplexer must be designed for operation in the frequency band used by the receiver and transmitter, and must be capable of handling the output power of the transmitter.
Note 2: A duplexer must provide adequate rejection of transmitter noise occurring at the receive frequency, and must be designed to operate at, or less than, the frequency separation between the transmitter and receiver.
Note 3: A duplexer must provide sufficient isolation to prevent receiver desensitization.
Source: from Federal Standard 1037C
== History ==
The first duplexers were invented for use on the electric telegraph and were known as a duplex rather than a duplexer. They were an early form of the hybrid coil. The telegraph companies were keen to have such a device, since the ability to carry simultaneous traffic in both directions had the potential to save the cost of thousands of miles of telegraph wire. The first of these devices was designed in 1853 by Julius Wilhelm Gintl of the Austrian State Telegraph. Gintl's design was not very successful. Further attempts were made by Carl Frischen of Hanover, with an artificial line to balance the real line, and by Siemens & Halske, which bought and modified Frischen's design. The first truly successful duplex was designed by Joseph Barker Stearns of Boston in 1872. This was further developed into the quadruplex telegraph by Thomas Edison, a device estimated to have saved Western Union $500,000 per year in construction of new telegraph lines.
The first duplexers for radar, sometimes referred to as Transmit/Receive Switches, were invented by Robert Morris Page and Leo C. Young of the United States Naval Research Laboratory in July 1936.
== References ==
A telegraph sounder is an antique electromechanical device used as a receiver on electrical telegraph lines during the 19th century. It was invented by Alfred Vail after 1850 to replace the previous receiving device, the cumbersome Morse register, and was the first practical application of the electromagnet. When a telegraph message comes in, it produces an audible "clicking" sound representing the short and long keypresses – "dots" and "dashes" – which are used to represent text characters in Morse code. A telegraph operator would translate the sounds into the characters of the telegraph message.
Telegraph networks, used from the 1850s to the 1970s to transmit text messages long distances, transmitted information by pulses of current of two different lengths, called "dots" and "dashes" which spelled out text messages in Morse code. A telegraph operator at the sending end of the line would create the message by tapping on a switch called a telegraph key, which rapidly connects and breaks the circuit to a battery, sending pulses of current down the line.
The telegraph sounder was used at the receiving end of the line to make the Morse code message audible. Its simple mechanism was similar to a relay. It consisted of an electromagnet attached to the telegraph line, with an iron armature near the magnet's pole balanced on a pivot, held up by a counterweight. When current flowed through the electromagnet's winding, it created a magnetic field which attracted the armature, pulling it down to the electromagnet, resulting in a "click" sound. When the current ended, the counterweight pulled the armature back up to its resting position, resulting in a "clack" sound. Thus, as the telegraph key at the sending end makes and breaks the contact, the sounder echoes the up and down state of the key.
It was important that a sounder make a sound both when the circuit was broken and when it was restored. This was necessary for the operator to distinguish clearly between the long and short keypresses – the "dashes" and "dots" – that make up the characters in Morse code.
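Because the information is carried entirely by the intervals between the click (circuit closed) and the clack (circuit opened), the keying of a message can be modelled as a sequence of closed/open durations. The sketch below is a minimal illustration using standard Morse timing units; the MORSE table is abridged and the unit length is an arbitrary assumption:

```python
# A minimal sketch of sounder keying: each message becomes alternating
# ('closed', ms) / ('open', ms) intervals. Standard Morse timing: dot = 1
# unit, dash = 3, symbol gap = 1, letter gap = 3, word gap = 7. The code
# table here is abridged to the letters used in the example.
MORSE = {'S': '...', 'O': '---', 'E': '.'}

def key_intervals(text, unit_ms=60):
    """Return (state, duration_ms) pairs: 'closed' -> click, 'open' -> clack."""
    out = []
    for word in text.upper().split():
        for letter in word:
            for symbol in MORSE[letter]:
                out.append(('closed', unit_ms if symbol == '.' else 3 * unit_ms))
                out.append(('open', unit_ms))        # gap between symbols
            out[-1] = ('open', 3 * unit_ms)          # widen to letter gap
        out[-1] = ('open', 7 * unit_ms)              # widen to word gap
    return out

print(key_intervals("SOS"))
```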
== References ==
== External links ==
Morse Telegraph Club, Inc. (The Morse Telegraph Club is an international non-profit organization dedicated to the perpetuation of the knowledge and traditions of telegraphy.)
Telegraph Sounders - A photo gallery of telegraph sounders from the 19th and 20th centuries
In electronic design automation, a design rule is a geometric constraint imposed on circuit board, semiconductor device, and integrated circuit (IC) designers to ensure their designs function properly, reliably, and can be produced with acceptable yield. Design rules for production are developed by process engineers based on the capability of their processes to realize design intent. Electronic design automation is used extensively to ensure that designers do not violate design rules; a process called design rule checking (DRC). DRC is a major step during physical verification signoff on the design, which also involves LVS (layout versus schematic) checks, XOR checks, ERC (electrical rule check), and antenna checks. The importance of design rules and DRC is greatest for ICs, which have micro- or nano-scale geometries; for advanced processes, some fabs also insist upon the use of more restricted rules to improve yield.
== Design rules ==
Design rules are a series of parameters provided by semiconductor manufacturers that enable the designer to verify the correctness of a mask set. Design rules are specific to a particular semiconductor manufacturing process. A design rule set specifies certain geometric and connectivity restrictions to ensure sufficient margins to account for variability in semiconductor manufacturing processes, so as to ensure that most of the parts work correctly.
The most basic design rules are single-layer rules. A width rule specifies the minimum width of any shape in the design. A spacing rule specifies the minimum distance between two adjacent objects. These rules exist for each layer of the semiconductor manufacturing process, with the lowest layers having the smallest rules (typically 100 nm as of 2007) and the highest metal layers having larger rules (perhaps 400 nm as of 2007).
A two layer rule specifies a relationship that must exist between two layers. For example, an enclosure rule might specify that an object of one type, such as a contact or via, must be covered, with some additional margin, by a metal layer. A typical value as of 2007 might be about 10 nm.
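As a minimal illustration of what such checks compute, the following sketch, which is not any vendor's rule format, applies width, spacing, and enclosure rules to axis-aligned rectangles, with the 2007-era values quoted above used as assumed limits:

```python
# A minimal sketch of single-layer width and spacing checks plus a
# two-layer enclosure check, on axis-aligned rectangles (x0, y0, x1, y1)
# in nm. Limits are illustrative, not any foundry's actual deck.
from math import hypot

def width_ok(rect, min_width):
    x0, y0, x1, y1 = rect
    return min(x1 - x0, y1 - y0) >= min_width

def spacing_ok(a, b, min_space):
    # Euclidean gap between two rectangles (0 if they touch or overlap).
    dx = max(a[0] - b[2], b[0] - a[2], 0.0)
    dy = max(a[1] - b[3], b[1] - a[3], 0.0)
    return hypot(dx, dy) >= min_space

def enclosure_ok(inner, outer, margin):
    # The outer shape must cover the inner one with extra margin all round.
    return (inner[0] - outer[0] >= margin and inner[1] - outer[1] >= margin
            and outer[2] - inner[2] >= margin and outer[3] - inner[3] >= margin)

wire = (0, 0, 1000, 100)   # a 100 nm-wide metal wire
via = (450, 20, 510, 80)   # a via that must be enclosed by the wire
print(width_ok(wire, 100))                          # True: width rule met
print(spacing_ok(wire, (0, 500, 1000, 600), 400))   # True: 400 nm apart
print(enclosure_ok(via, wire, 10))                  # True: >= 10 nm margin
```

Production DRC engines work on arbitrary polygons across billions of shapes, but each rule reduces to geometric predicates of this kind.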
There are many other rule types not illustrated here. A minimum area rule is just what the name implies. Antenna rules are complex rules that check ratios of areas of every layer of a net for configurations that can result in problems when intermediate layers are etched. Many other such rules exist and are explained in detail in the documentation provided by the semiconductor manufacturer.
Academic design rules are often specified in terms of a scalable parameter, λ, so that all geometric tolerances in a design may be defined as integer multiples of λ. This simplifies the migration of existing chip layouts to newer processes. Industrial rules are more highly optimized, and only approximate uniform scaling. Design rule sets have become increasingly more complex with each subsequent generation of semiconductor process.
== Software ==
The main objective of design rule checking (DRC) is to achieve a high overall yield and reliability for the design. If design rules are violated the design may not be functional. To meet this goal of improving die yields, DRC has evolved from simple measurement and Boolean checks, to more involved rules that modify existing features, insert new features, and check the entire design for process limitations such as layer density. A completed layout consists not only of the geometric representation of the design, but also data that provides support for the manufacture of the design. While design rule checks do not validate that the design will operate correctly, they are constructed to verify that the structure meets the process constraints for a given design type and process technology.
DRC software usually takes as input a layout in the GDSII standard format and a list of rules specific to the semiconductor process chosen for fabrication. From these it produces a report of design rule violations that the designer may or may not choose to correct. Carefully "stretching" or waiving certain design rules is often used to increase performance and component density at the expense of yield.
DRC products define rules in a language that describes the operations to be performed. For example, Mentor Graphics uses the Standard Verification Rule Format (SVRF) language in its DRC rule files, while Magma Design Automation uses a Tcl-based language. A set of rules for a particular process is referred to as a run-set, rule deck, or just a deck.
DRC is a very computationally intensive task. Usually DRC checks will be run on each sub-section of the ASIC to minimize the number of errors that are detected at the top level. If run on a single CPU, customers may have to wait up to a week for the result of a design rule check on a modern design. Most design companies require DRC to run in less than a day to achieve reasonable cycle times, since the DRC will likely be run several times prior to design completion. With today's processing power, a full-chip DRC may run in as little as an hour, depending on the chip's complexity and size.
Some categories of design rules (checked by DRC) in IC design include the following (a sketch of a density check follows the list):
Active to active spacing
Well to well spacing
Minimum channel length of the transistor
Minimum metal width
Metal to metal spacing
Metal fill density (for processes using CMP)
Poly density
ESD and I/O rules
Antenna effect
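As an example of the density category above, the following minimal sketch, with an assumed 25–75% density window rather than any real foundry's limit, computes the fraction of a check window covered by metal rectangles:

```python
# A minimal sketch of a metal-fill density check: the covered fraction of
# a window must sit inside a foundry band. The rectangles are assumed
# non-overlapping, and the 25-75% band is an illustrative assumption.
def density(rects, window):
    wx0, wy0, wx1, wy1 = window
    covered = 0.0
    for x0, y0, x1, y1 in rects:
        # Clip each rectangle to the window before summing its area.
        cx0, cy0 = max(x0, wx0), max(y0, wy0)
        cx1, cy1 = min(x1, wx1), min(y1, wy1)
        if cx1 > cx0 and cy1 > cy0:
            covered += (cx1 - cx0) * (cy1 - cy0)
    return covered / ((wx1 - wx0) * (wy1 - wy0))

window = (0, 0, 100, 100)
metal = [(0, 0, 100, 30), (0, 60, 50, 100)]
d = density(metal, window)
print(f"density = {d:.0%}, ok = {0.25 <= d <= 0.75}")
```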
=== Commercial ===
Major products in the DRC area of EDA include:
Altium Designer
Advanced Design System Desktop DRC by PathWave Design (Keysight Technologies, previously Agilent's EEsof EDA division)
Calibre by Mentor Graphics
Diva, DRACULA, Assura, PVS and Pegasus by Cadence Design Systems
Hercules and IC Validator by Synopsys
Guardian by Silvaco
HyperLynx DRC Free/Gold by Mentor Graphics
PowerDRC (now SmartDRC) by Silvaco
Quartz by Magma Design Automation
=== Free software ===
Electric VLSI Design System
KLayout
Magic
Alliance, a free VLSI/CAD system
Opencircuitdesign software:
Microwind, an educational layout CAD system
Open-source 130 nm CMOS PDK by Google and the SkyWater Technology foundry
== References ==
Electronic Design Automation For Integrated Circuits Handbook, by Lavagno, Martin, and Scheffer, ISBN 0-8493-3096-3. A survey of the field, from which part of the above summary was derived, with permission.
In integrated circuit design, integrated circuit (IC) layout, also known as IC mask layout or mask design, is the representation of an integrated circuit in terms of planar geometric shapes which correspond to the patterns of metal, oxide, or semiconductor layers that make up the components of the integrated circuit. Originally the overall process was called tapeout, because early ICs used graphical black crepe tape on mylar media for photo imaging (the term is erroneously believed to reference magnetic data, but the photographic process greatly predated magnetic media).
When using a standard process—where the interaction of the many chemical, thermal, and photographic variables is known and carefully controlled—the behaviour of the final integrated circuit depends largely on the positions and interconnections of the geometric shapes. Using a computer-aided layout tool, the layout engineer—or layout technician—places and connects all of the components that make up the chip such that they meet certain criteria—typically: performance, size, density, and manufacturability. This practice is often subdivided between two primary layout disciplines: analog and digital.
The generated layout must pass a series of checks in a process known as physical verification. The most common checks in this verification process are
Design rule checking (DRC),
Layout versus schematic (LVS),
parasitic extraction,
antenna rule checking, and
electrical rule checking (ERC).
When all verification is complete, layout post processing is applied where the data is also translated into an industry-standard format, typically GDSII, and sent to a semiconductor foundry. The milestone completion of the layout process of sending this data to the foundry is now colloquially called "tapeout". The foundry converts the data into mask data and uses it to generate the photomasks used in a photolithographic process of semiconductor device fabrication.
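As an illustration of this final data-preparation step, the following minimal sketch uses the open-source gdstk library to build a trivial layout and write it out in GDSII; the cell name and layer numbers are arbitrary examples, not a real tapeout:

```python
# A minimal sketch of producing a GDSII file of the kind handed to a
# foundry, using the open-source gdstk library. Units: 1 um database
# unit with 1 nm precision; shapes and layers are illustrative only.
import gdstk

lib = gdstk.Library(name="example", unit=1e-6, precision=1e-9)
top = lib.new_cell("TOP")
top.add(gdstk.rectangle((0, 0), (10, 2), layer=1))      # a metal wire
top.add(gdstk.rectangle((4, 0.5), (5, 1.5), layer=2))   # a via above it
lib.write_gds("example.gds")
```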
In the earlier, simpler days of IC design, layout was done by hand using opaque tapes and films, an evolution derived from the early days of printed circuit board (PCB) design, and the origin of the term tape-out.
Modern IC layout is done with the aid of IC layout editor software, mostly automatically using EDA tools, including place and route tools or schematic-driven layout tools.
Typically this involves a library of standard cells.
The manual operation of choosing and positioning the geometric shapes is informally known as "polygon pushing".
== See also ==
Interconnects (integrated circuits)
Physical design (electronics)
Printed circuit board
Integrated circuit design
Floorplan (microelectronics)
== References ==
== Further reading ==
Clein, D. (2000). CMOS IC Layout. Newnes. ISBN 0-7506-7194-7
Hastings, A. (2005). The Art of Analog Layout. Prentice Hall. ISBN 0-13-146410-8
Lienig, J.; Scheible, J. (2020). Fundamentals of Layout Design for Electronic Circuits. Springer. doi:10.1007/978-3-030-39284-0. ISBN 978-3-030-39284-0. S2CID 215840278.
Saint, Ch. and J. (2002). IC Layout Basics. McGraw-Hill. ISBN 0-07-138625-4
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (sometimes abbreviated IEEE TCAD or IEEE Transactions on CAD) is a monthly peer-reviewed scientific journal covering the design, analysis, and use of computer-aided design of integrated circuits and systems. It is published by the IEEE Circuits and Systems Society and the IEEE Council on Electronic Design Automation (Institute of Electrical and Electronics Engineers). The journal was established in 1982 and the editor-in-chief is Rajesh K. Gupta (University of California at San Diego). According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.9.
== Past editors-in-chief ==
Rajesh K. Gupta (2018–2022)
Vijaykrishnan Narayanan (2014–2018)
Sachin Sapatnekar (2010–2014)
== See also ==
Electronic design automation
== References ==
== External links ==
Official website
Materials science is an interdisciplinary field concerned with researching and discovering materials. Materials engineering is an engineering field concerned with finding uses for materials in other fields and industries.
The intellectual origins of materials science stem from the Age of Enlightenment, when researchers began to use analytical thinking from chemistry, physics, and engineering to understand ancient, phenomenological observations in metallurgy and mineralogy. Materials science still incorporates elements of physics, chemistry, and engineering. As such, the field was long considered by academic institutions as a sub-field of these related fields. Beginning in the 1940s, materials science began to be more widely recognized as a specific and distinct field of science and engineering, and major technical universities around the world created dedicated schools for its study.
Materials scientists emphasize understanding how the history of a material (its processing) influences its structure, and thus the material's properties and performance. The understanding of processing–structure–properties relationships is called the materials paradigm. This paradigm is used to advance understanding in a variety of research areas, including nanotechnology, biomaterials, and metallurgy.
Materials science is also an important part of forensic engineering and failure analysis – investigating materials, products, structures or components, which fail or do not function as intended, causing personal injury or damage to property. Such investigations are key to understanding, for example, the causes of various aviation accidents and incidents.
== History ==
The material of choice of a given era is often a defining point. Phases such as Stone Age, Bronze Age, Iron Age, and Steel Age are historic, if arbitrary examples. Originally deriving from the manufacture of ceramics and its putative derivative metallurgy, materials science is one of the oldest forms of engineering and applied science. Modern materials science evolved directly from metallurgy, which itself evolved from the use of fire. A major breakthrough in the understanding of materials occurred in the late 19th century, when the American scientist Josiah Willard Gibbs demonstrated that the thermodynamic properties related to atomic structure in various phases are related to the physical properties of a material. Important elements of modern materials science were products of the Space Race; the understanding and engineering of the metallic alloys, and silica and carbon materials, used in building space vehicles enabling the exploration of space. Materials science has driven, and been driven by, the development of revolutionary technologies such as rubbers, plastics, semiconductors, and biomaterials.
Before the 1960s (and in some cases decades after), many eventual materials science departments were metallurgy or ceramics engineering departments, reflecting the 19th and early 20th-century emphasis on metals and ceramics. The growth of materials science in the United States was catalyzed in part by the Advanced Research Projects Agency, which funded a series of university-hosted laboratories in the early 1960s "to expand the national program of basic research and training in the materials sciences." In comparison with mechanical engineering, the nascent materials science field focused on addressing materials from the macro level and on the approach that materials are designed on the basis of knowledge of behavior at the microscopic level. Due to the expanded knowledge of the link between atomic and molecular processes and the overall properties of materials, the design of materials came to be based on specific desired properties. The materials science field has since broadened to include every class of materials, including ceramics, polymers, semiconductors, magnetic materials, biomaterials, and nanomaterials, generally classified into three distinct groups: ceramics, metals, and polymers. The most prominent change in materials science in recent decades is the active use of computer simulations to find new materials, predict properties, and understand phenomena.
== Fundamentals ==
A material is defined as a substance (most often a solid, but other condensed phases can be included) that is intended to be used for certain applications. There are a myriad of materials around us. New and advanced materials being developed include nanomaterials, biomaterials, and energy materials, to name a few.
The basis of materials science is studying the interplay between the structure of materials, the processing methods used to make them, and the resulting material properties. The complex combination of these produces the performance of a material in a specific application. Many features across many length scales impact material performance, from the constituent chemical elements, to the microstructure, to macroscopic features from processing. Together with the laws of thermodynamics and kinetics, materials scientists aim to understand and improve materials.
=== Structure ===
Structure is one of the most important components of the field of materials science. The very definition of the field holds that it is concerned with the investigation of "the relationships that exist between the structures and properties of materials". Materials science examines the structure of materials from the atomic scale, all the way up to the macro scale. Characterization is the way materials scientists examine the structure of a material. This involves methods such as diffraction with X-rays, electrons or neutrons, and various forms of spectroscopy and chemical analysis such as Raman spectroscopy, energy-dispersive spectroscopy, chromatography, thermal analysis, electron microscope analysis, etc.
Structure is studied at the following levels.
==== Atomic structure ====
Atomic structure deals with the atoms of the materials, and how they are arranged to give rise to molecules, crystals, etc. Much of the electrical, magnetic and chemical properties of materials arise from this level of structure. The length scales involved are in angstroms (Å). The chemical bonding and atomic arrangement (crystallography) are fundamental to studying the properties and behavior of any material.
===== Bonding =====
To obtain a full understanding of the material structure and how it relates to its properties, the materials scientist must study how the different atoms, ions and molecules are arranged and bonded to each other. This involves the study and use of quantum chemistry or quantum physics. Solid-state physics, solid-state chemistry and physical chemistry are also involved in the study of bonding and structure.
===== Crystallography =====
Crystallography is the science that examines the arrangement of atoms in crystalline solids, and it is a useful tool for materials scientists. One of the fundamental concepts regarding the crystal structure of a material is the unit cell, which is the smallest unit of a crystal lattice (space lattice) that repeats to make up the macroscopic crystal structure. Most common structural materials include parallelepiped and hexagonal lattice types. In single crystals, the effects of the crystalline arrangement of atoms are often easy to see macroscopically, because the natural shapes of crystals reflect the atomic structure. Further, physical properties are often controlled by crystalline defects, so the understanding of crystal structures is an important prerequisite for understanding crystallographic defects. Examples of crystal defects include point defects such as vacancies and self-interstitials, linear defects such as edge and screw dislocations, and planar and three-dimensional defects. Mostly, materials do not occur as a single crystal, but in polycrystalline form, as an aggregate of small crystals or grains with different orientations. Because of this, the powder diffraction method, which uses diffraction patterns of polycrystalline samples with a large number of crystals, plays an important role in structural determination. Most materials have a crystalline structure, but some important materials do not exhibit a regular crystal structure. Polymers display varying degrees of crystallinity, and many are completely non-crystalline. Glass, some ceramics, and many natural materials are amorphous, not possessing any long-range order in their atomic arrangements. The study of polymers combines elements of chemical and statistical thermodynamics to give thermodynamic and mechanical descriptions of physical properties.
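As a worked example of how unit-cell geometry fixes a bulk property, consider the atomic packing factor (APF) of the face-centred cubic lattice, which has four atoms per cell and atoms touching along the face diagonal, so 4r = a√2:

```latex
% Atomic packing factor of the FCC unit cell: four atoms of radius r in
% a cube of edge a, with atoms touching along the face diagonal.
\[
a = 2\sqrt{2}\,r, \qquad
\mathrm{APF} = \frac{4 \cdot \tfrac{4}{3}\pi r^{3}}{a^{3}}
             = \frac{\tfrac{16}{3}\pi r^{3}}{16\sqrt{2}\,r^{3}}
             = \frac{\pi}{3\sqrt{2}} \approx 0.74 .
\]
```

The same construction gives about 0.68 for body-centred cubic and 0.52 for simple cubic, one reason close-packed metals behave differently from more open structures.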
==== Nanostructure ====
Materials, which atoms and molecules form constituents in the nanoscale (i.e., they form nanostructures) are called nanomaterials. Nanomaterials are the subject of intense research in the materials science community due to the unique properties that they exhibit.
Nanostructure deals with objects and structures that are in the 1 – 100 nm range. In many materials, atoms or molecules agglomerate to form objects at the nanoscale. This causes many interesting electrical, magnetic, optical, and mechanical properties.
In describing nanostructures, it is necessary to differentiate between the number of dimensions on the nanoscale.
Nanotextured surfaces have one dimension on the nanoscale, i.e., only the thickness of the surface of an object is between 0.1 and 100 nm.
Nanotubes have two dimensions on the nanoscale, i.e., the diameter of the tube is between 0.1 and 100 nm; its length could be much greater.
Finally, spherical nanoparticles have three dimensions on the nanoscale, i.e., the particle is between 0.1 and 100 nm in each spatial dimension. The terms nanoparticles and ultrafine particles (UFP) are often used synonymously, although UFP can reach into the micrometre range. The term 'nanostructure' is often used when referring to magnetic technology. Nanoscale structure in biology is often called ultrastructure.
==== Microstructure ====
Microstructure is defined as the structure of a prepared surface or thin foil of material as revealed by a microscope above 25× magnification. It deals with objects from 100 nm to a few cm. The microstructure of a material (which can be broadly classified into metallic, polymeric, ceramic and composite) can strongly influence physical properties such as strength, toughness, ductility, hardness, corrosion resistance, high/low temperature behavior, wear resistance, and so on. Most of the traditional materials (such as metals and ceramics) are microstructured.
The manufacture of a perfect crystal of a material is physically impossible. For example, any crystalline material will contain defects such as precipitates, grain boundaries (Hall–Petch relationship), vacancies, interstitial atoms or substitutional atoms. The microstructure of materials reveals these larger defects and advances in simulation have allowed an increased understanding of how defects can be used to enhance material properties.
==== Macrostructure ====
Macrostructure is the appearance of a material on the scale of millimeters to meters; it is the structure of the material as seen with the naked eye.
=== Properties ===
Materials exhibit myriad properties, including the following.
Mechanical properties, see Strength of materials
Chemical properties, see Chemistry
Electrical properties, see Electricity
Thermal properties, see Thermodynamics
Optical properties, see Optics and Photonics
Magnetic properties, see Magnetism
The properties of a material determine its usability and hence its engineering application.
=== Processing ===
Synthesis and processing involves the creation of a material with the desired micro- or nanostructure. A material cannot be used in industry if no economically viable production method has been developed for it. Therefore, developing processing methods that are reasonably effective and cost-efficient is vital to the field of materials science. Different materials require different processing or synthesis methods. For example, the processing of metals has historically defined eras such as the Bronze Age and Iron Age and is studied under the branch of materials science named physical metallurgy. Chemical and physical methods are also used to synthesize other materials such as polymers, ceramics, semiconductors, and thin films. As of the early 21st century, new methods are being developed to synthesize nanomaterials such as graphene.
=== Thermodynamics ===
Thermodynamics is concerned with heat and temperature and their relation to energy and work. It defines macroscopic variables, such as internal energy, entropy, and pressure, that partly describe a body of matter or radiation. It states that the behavior of those variables is subject to general constraints common to all materials. These general constraints are expressed in the four laws of thermodynamics. Thermodynamics describes the bulk behavior of the body, not the microscopic behaviors of the very large numbers of its microscopic constituents, such as molecules. The behavior of these microscopic particles is described by, and the laws of thermodynamics are derived from, statistical mechanics.
The study of thermodynamics is fundamental to materials science. It forms the foundation to treat general phenomena in materials science and engineering, including chemical reactions, magnetism, polarizability, and elasticity. It explains fundamental tools such as phase diagrams and concepts such as phase equilibrium.
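For example, the phase diagrams mentioned above follow from the fact that, at constant temperature and pressure, a system minimises its Gibbs free energy, and two phases α and β coexist where their molar Gibbs energies (chemical potentials) are equal:

```latex
% Gibbs free energy and the condition for two-phase coexistence at
% constant temperature T and pressure p.
\[
G = H - TS, \qquad
\mu^{\alpha}(T, p) = \mu^{\beta}(T, p)
\quad \text{at phase equilibrium.}
\]
```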
=== Kinetics ===
Chemical kinetics is the study of the rates at which systems that are out of equilibrium change under the influence of various forces. When applied to materials science, it deals with how a material changes with time (moves from non-equilibrium to equilibrium state) due to application of a certain field. It details the rate of various processes evolving in materials including shape, size, composition and structure. Diffusion is important in the study of kinetics as this is the most common mechanism by which materials undergo change. Kinetics is essential in processing of materials because, among other things, it details how the microstructure changes with application of heat.
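Two relations carry much of this: Fick's first law, relating the diffusive flux J to the concentration gradient, and the Arrhenius temperature dependence of the diffusivity D, with activation energy Q:

```latex
% Fick's first law and the Arrhenius form of the diffusion coefficient;
% D_0 is a pre-exponential factor and R the gas constant.
\[
J = -D\,\frac{\partial c}{\partial x}, \qquad
D = D_{0}\, e^{-Q/RT}.
\]
```

The exponential factor is why modest changes in processing temperature can change diffusion-controlled microstructural evolution by orders of magnitude.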
== Research ==
Materials science is a highly active area of research. Together with materials science departments, physics, chemistry, and many engineering departments are involved in materials research. Materials research covers a broad range of topics; the following non-exhaustive list highlights a few important research areas.
=== Nanomaterials ===
Nanomaterials describe, in principle, materials of which a single unit is sized (in at least one dimension) between 1 and 1000 nanometres (1 nm = 10⁻⁹ m), but usually between 1 nm and 100 nm. Nanomaterials research takes a materials science based approach to nanotechnology, using advances in materials metrology and synthesis, which have been developed in support of microfabrication research. Materials with structure at the nanoscale often have unique optical, electronic, or mechanical properties. The field of nanomaterials is loosely organized, like the traditional field of chemistry, into organic (carbon-based) nanomaterials, such as fullerenes, and inorganic nanomaterials based on other elements, such as silicon. Examples of nanomaterials include fullerenes, carbon nanotubes, nanocrystals, etc.
=== Biomaterials ===
A biomaterial is any matter, surface, or construct that interacts with biological systems. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering, and materials science.
Biomaterials can be derived either from nature or synthesized in a laboratory using a variety of chemical approaches using metallic components, polymers, bioceramics, or composite materials. They are often intended or adapted for medical applications, such as biomedical devices which perform, augment, or replace a natural function. Such functions may be benign, like being used for a heart valve, or may be bioactive with a more interactive functionality such as hydroxylapatite-coated hip implants. Biomaterials are also used every day in dental applications, surgery, and drug delivery. For example, a construct with impregnated pharmaceutical products can be placed into the body, which permits the prolonged release of a drug over an extended period of time. A biomaterial may also be an autograft, allograft or xenograft used as an organ transplant material.
=== Electronic, optical, and magnetic ===
Semiconductors, metals, and ceramics are used today to form highly complex systems, such as integrated electronic circuits, optoelectronic devices, and magnetic and optical mass storage media. These materials form the basis of our modern computing world, and hence research into these materials is of vital importance.
Semiconductors are a traditional example of these types of materials. They are materials that have properties that are intermediate between conductors and insulators. Their electrical conductivities are very sensitive to the concentration of impurities, which allows the use of doping to achieve desirable electronic properties. Hence, semiconductors form the basis of the traditional computer.
This field also includes new areas of research such as superconducting materials, spintronics, metamaterials, etc. The study of these materials involves knowledge of materials science and solid-state physics or condensed matter physics.
=== Computational materials science ===
With continuing increases in computing power, simulating the behavior of materials has become possible. This enables materials scientists to understand behavior and mechanisms, design new materials, and explain properties formerly poorly understood. Efforts surrounding integrated computational materials engineering are now focusing on combining computational methods with experiments to drastically reduce the time and effort to optimize materials properties for a given application. This involves simulating materials at all length scales, using methods such as density functional theory, molecular dynamics, Monte Carlo, dislocation dynamics, phase field, finite element, and many more.
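As a minimal, illustrative sketch of one of the methods named above, the following Metropolis Monte Carlo sweep of a two-dimensional Ising model, a toy for magnetic ordering, uses arbitrary coupling and temperature values not tied to any real material:

```python
# A minimal sketch of Metropolis Monte Carlo on a 2-D Ising model.
# J, T and the lattice size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, J, T = 32, 1.0, 2.0                 # lattice size, coupling, temperature
spins = rng.choice([-1, 1], size=(N, N))

def sweep(spins):
    for _ in range(N * N):
        i, j = rng.integers(N, size=2)
        # Energy change of flipping spin (i, j), periodic boundaries.
        nn = (spins[(i + 1) % N, j] + spins[(i - 1) % N, j] +
              spins[i, (j + 1) % N] + spins[i, (j - 1) % N])
        dE = 2 * J * spins[i, j] * nn
        # Metropolis rule: always accept downhill moves, uphill with
        # probability exp(-dE/T).
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

for _ in range(100):
    sweep(spins)
print("magnetisation per site:", spins.mean())
```

Production studies use far more sophisticated Hamiltonians and sampling, but the accept/reject structure is the same.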
== Industry ==
Radical materials advances can drive the creation of new products or even new industries, but stable industries also employ materials scientists to make incremental improvements and troubleshoot issues with currently used materials. Industrial applications of materials science include materials design, cost-benefit tradeoffs in industrial production of materials, processing methods (casting, rolling, welding, ion implantation, crystal growth, thin-film deposition, sintering, glassblowing, etc.), and analytic methods (characterization methods such as electron microscopy, X-ray diffraction, calorimetry, nuclear microscopy (HEFIB), Rutherford backscattering, neutron diffraction, small-angle X-ray scattering (SAXS), etc.).
Besides material characterization, the material scientist or engineer also deals with extracting materials and converting them into useful forms. Thus ingot casting, foundry methods, blast furnace extraction, and electrolytic extraction are all part of the required knowledge of a materials engineer. Often the presence, absence, or variation of minute quantities of secondary elements and compounds in a bulk material will greatly affect the final properties of the materials produced. For example, steels are classified based on 1/10 and 1/100 weight percentages of the carbon and other alloying elements they contain. Thus, the extracting and purifying methods used to extract iron in a blast furnace can affect the quality of steel that is produced.
Solid materials are generally grouped into three basic classifications: ceramics, metals, and polymers. This broad classification is based on the empirical makeup and atomic structure of the solid materials, and most solids fall into one of these broad categories. An item that is often made from each of these materials types is the beverage container. The material types used for beverage containers accordingly provide different advantages and disadvantages, depending on the material used. Ceramic (glass) containers are optically transparent, impervious to the passage of carbon dioxide, relatively inexpensive, and are easily recycled, but are also heavy and fracture easily. Metal (aluminum alloy) is relatively strong, is a good barrier to the diffusion of carbon dioxide, and is easily recycled. However, the cans are opaque, expensive to produce, and are easily dented and punctured. Polymers (polyethylene plastic) are relatively strong, can be optically transparent, are inexpensive and lightweight, and can be recyclable, but are not as impervious to the passage of carbon dioxide as aluminum and glass.
=== Ceramics and glasses ===
Another application of materials science is the study of ceramics and glasses, typically the most brittle materials with industrial relevance. Many ceramics and glasses exhibit covalent or ionic-covalent bonding with SiO2 (silica) as a fundamental building block. Ceramics – not to be confused with raw, unfired clay – are usually seen in crystalline form. The vast majority of commercial glasses contain a metal oxide fused with silica. At the high temperatures used to prepare glass, the material is a viscous liquid which solidifies into a disordered state upon cooling. Windowpanes and eyeglasses are important examples. Fibers of glass are also used for long-range telecommunication and optical transmission. Scratch resistant Corning Gorilla Glass is a well-known example of the application of materials science to drastically improve the properties of common components.
Engineering ceramics are known for their stiffness and stability under high temperatures, compression and electrical stress. Alumina, silicon carbide, and tungsten carbide are made from a fine powder of their constituents in a process of sintering with a binder. Hot pressing provides higher density material. Chemical vapor deposition can place a film of a ceramic on another material. Cermets are ceramic particles containing some metals. The wear resistance of tools is derived from cemented carbides with the metal phase of cobalt and nickel typically added to modify properties.
Ceramics can be significantly strengthened for engineering applications using the principle of crack deflection. This process involves the strategic addition of second-phase particles within a ceramic matrix, optimizing their shape, size, and distribution to direct and control crack propagation. This approach enhances fracture toughness, paving the way for the creation of advanced, high-performance ceramics in various industries.
=== Composites ===
Another application of materials science in industry is making composite materials. These are structured materials composed of two or more macroscopic phases.
Applications range from structural elements such as steel-reinforced concrete, to the thermal insulating tiles, which play a key and integral role in NASA's Space Shuttle thermal protection system, which is used to protect the surface of the shuttle from the heat of re-entry into the Earth's atmosphere. One example is reinforced Carbon-Carbon (RCC), the light gray material, which withstands re-entry temperatures up to 1,510 °C (2,750 °F) and protects the Space Shuttle's wing leading edges and nose cap. RCC is a laminated composite material made from graphite rayon cloth and impregnated with a phenolic resin. After curing at high temperature in an autoclave, the laminate is pyrolized to convert the resin to carbon, impregnated with furfuryl alcohol in a vacuum chamber, and cured-pyrolized to convert the furfuryl alcohol to carbon. To provide oxidation resistance for reusability, the outer layers of the RCC are converted to silicon carbide.
Other examples can be seen in the "plastic" casings of television sets, cell-phones and so on. These plastic casings are usually a composite material made up of a thermoplastic matrix such as acrylonitrile butadiene styrene (ABS) in which calcium carbonate chalk, talc, glass fibers or carbon fibers have been added for added strength, bulk, or electrostatic dispersion. These additions may be termed reinforcing fibers, or dispersants, depending on their purpose.
=== Polymers ===
Polymers are chemical compounds made up of a large number of identical components linked together like chains. Polymers are the raw materials (the resins) used to make what are commonly called plastics and rubber. Plastics and rubber are the final product, created after one or more polymers or additives have been added to a resin during processing, which is then shaped into a final form. Plastics in former and in current widespread use include polyethylene, polypropylene, polyvinyl chloride (PVC), polystyrene, nylons, polyesters, acrylics, polyurethanes, and polycarbonates. Rubbers include natural rubber, styrene-butadiene rubber, chloroprene, and butadiene rubber. Plastics are generally classified as commodity, specialty and engineering plastics.
Polyvinyl chloride (PVC) is widely used, inexpensive, and annual production quantities are large. It lends itself to a vast array of applications, from artificial leather to electrical insulation and cabling, packaging, and containers. Its fabrication and processing are simple and well-established. The versatility of PVC is due to the wide range of plasticisers and other additives that it accepts. The term "additives" in polymer science refers to the chemicals and compounds added to the polymer base to modify its material properties.
Polycarbonate would be normally considered an engineering plastic (other examples include PEEK, ABS). Such plastics are valued for their superior strengths and other special material properties. They are usually not used for disposable applications, unlike commodity plastics.
Specialty plastics are materials with unique characteristics, such as ultra-high strength, electrical conductivity, electro-fluorescence, high thermal stability, etc.
The dividing lines between the various types of plastics are based not on material but rather on their properties and applications. For example, polyethylene (PE) is a cheap, low-friction polymer commonly used to make disposable bags for shopping and trash, and is considered a commodity plastic, whereas medium-density polyethylene (MDPE) is used for underground gas and water pipes, and another variety called ultra-high-molecular-weight polyethylene (UHMWPE) is an engineering plastic which is used extensively as the glide rails for industrial equipment and the low-friction socket in implanted hip joints.
=== Metal alloys ===
The alloys of iron (steel, stainless steel, cast iron, tool steel, alloy steels) make up the largest proportion of metals today both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels. An iron–carbon alloy is only considered steel if the carbon level is between 0.01% and 2.00% by weight. For steels, the hardness and tensile strength are related to the amount of carbon present, with increasing carbon levels also leading to lower ductility and toughness. Heat treatment processes such as quenching and tempering can significantly change these properties, however. (Certain other metal alloys are instead valued for dimensional stability, their size and density remaining nearly unchanged across a range of temperatures.) Cast iron is defined as an iron–carbon alloy with more than 2.00%, but less than 6.67%, carbon. Stainless steel is defined as a regular steel alloy with greater than 10% by weight alloying content of chromium. Nickel and molybdenum are typically also added to stainless steels.
Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys have been known for a long time (since the Bronze Age), while the alloys of the other three metals have been relatively recently developed. Due to the chemical reactivity of these metals, the electrolytic extraction processes required were only developed relatively recently. The alloys of aluminium, titanium and magnesium are also known and valued for their high strength to weight ratios and, in the case of magnesium, their ability to provide electromagnetic shielding. These materials are ideal for situations where high strength to weight ratios are more important than bulk cost, such as in the aerospace industry and certain automotive engineering applications.
=== Semiconductors ===
A semiconductor is a material that has a resistivity between a conductor and insulator. Modern day electronics run on semiconductors, and the industry had an estimated US$530 billion market in 2021. Its electronic properties can be greatly altered through intentionally introducing impurities in a process referred to as doping. Semiconductor materials are used to build diodes, transistors, light-emitting diodes (LEDs), and analog and digital electric circuits, among their many uses. Semiconductor devices have replaced thermionic devices like vacuum tubes in most applications. Semiconductor devices are manufactured both as single discrete devices and as integrated circuits (ICs), which consist of a number—from a few to millions—of devices manufactured and interconnected on a single semiconductor substrate.
Of all the semiconductors in use today, silicon makes up the largest portion both by quantity and commercial value. Monocrystalline silicon is used to produce wafers used in the semiconductor and electronics industry. Gallium arsenide (GaAs) is the second most popular semiconductor used. Due to its higher electron mobility and saturation velocity compared to silicon, it is a material of choice for high-speed electronics applications. These superior properties are compelling reasons to use GaAs circuitry in mobile phones, satellite communications, microwave point-to-point links and higher frequency radar systems. Other semiconductor materials include germanium, silicon carbide, and gallium nitride and have various applications.
== Relation with other fields ==
Materials science evolved from the 1950s onward out of the recognition that to create, discover and design new materials, one had to approach the problem in a unified manner. Thus, materials science and engineering emerged in many ways: renaming and/or combining existing metallurgy and ceramics engineering departments; splitting from existing solid state physics research (itself growing into condensed matter physics); pulling in relatively new polymer engineering and polymer science; recombining from the previous, as well as chemistry, chemical engineering, mechanical engineering, and electrical engineering; and more.
The field of materials science and engineering is important both from a scientific perspective and from an applications perspective. Materials are of the utmost importance for engineers and other applied practitioners, because the choice of appropriate materials is crucial when designing systems. As a result, materials science is an increasingly important part of an engineer's education.
Materials physics is the use of physics to describe the physical properties of materials. It is a synthesis of physical sciences such as chemistry, solid mechanics, solid state physics, and materials science. Materials physics is considered a subset of condensed matter physics and applies fundamental condensed matter concepts to complex multiphase media, including materials of technological interest. Current fields that materials physicists work in include electronic, optical, and magnetic materials, novel materials and structures, quantum phenomena in materials, nonequilibrium physics, and soft condensed matter physics. New experimental and computational tools are constantly improving how materials systems are modeled and studied, and are themselves active areas in which materials physicists work.
The field is inherently interdisciplinary, and materials scientists and engineers must be aware of, and make use of, the methods of the physicist, chemist and engineer. Fields such as the life sciences and archaeology can in turn inspire the development of new materials and processes through bioinspired and paleoinspired approaches, so close relationships with these fields remain. Conversely, many physicists, chemists and engineers find themselves working in materials science due to the significant overlaps between the fields.
== Subdisciplines ==
The main branches of materials science stem from the four main classes of materials: ceramics, metals, polymers and composites.
Ceramic engineering
Metallurgy
Polymer science and engineering
Composite engineering
There are additionally broadly applicable, materials-independent endeavors.
Materials characterization (spectroscopy, microscopy, diffraction)
Computational materials science
Materials informatics and selection
There are also broader focuses, cutting across classes of materials, on specific phenomena and techniques.
Crystallography
Surface science
Tribology
Microelectronics
== Related or interdisciplinary fields ==
Condensed matter physics, solid-state physics and solid-state chemistry
Nanotechnology
Mineralogy
Supramolecular chemistry
Biomaterials science
== Professional societies ==
American Ceramic Society
ASM International
Association for Iron and Steel Technology
Materials Research Society
The Minerals, Metals & Materials Society
== Further reading ==
Timeline of Materials Science at The Minerals, Metals & Materials Society (TMS) – accessed March 2007
Burns, G.; Glazer, A.M. (1990). Space Groups for Scientists and Engineers (2nd ed.). Boston: Academic Press, Inc. ISBN 978-0-12-145761-7.
Cullity, B.D. (1978). Elements of X-Ray Diffraction (2nd ed.). Reading, Massachusetts: Addison-Wesley Publishing Company. ISBN 978-0-534-55396-8.
Giacovazzo, C; Monaco HL; Viterbo D; Scordari F; Gilli G; Zanotti G; Catti M (1992). Fundamentals of Crystallography. Oxford: Oxford University Press. ISBN 978-0-19-855578-0.
Green, D.J.; Hannink, R.; Swain, M.V. (1989). Transformation Toughening of Ceramics. Boca Raton: CRC Press. ISBN 978-0-8493-6594-2.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 1: Neutron Scattering. Oxford: Clarendon Press. ISBN 978-0-19-852015-3.
Lovesey, S. W. (1984). Theory of Neutron Scattering from Condensed Matter; Volume 2: Condensed Matter. Oxford: Clarendon Press. ISBN 978-0-19-852017-7.
O'Keeffe, M.; Hyde, B.G. (1996). "Crystal Structures; I. Patterns and Symmetry". Zeitschrift für Kristallographie – Crystalline Materials. 212 (12). Washington, DC: Mineralogical Society of America, Monograph Series: 899. Bibcode:1997ZK....212..899K. doi:10.1524/zkri.1997.212.12.899. ISBN 978-0-939950-40-9.
Squires, G.L. (1996). Introduction to the Theory of Thermal Neutron Scattering (2nd ed.). Mineola, New York: Dover Publications Inc. ISBN 978-0-486-69447-4.
Young, R.A., ed. (1993). The Rietveld Method. Oxford: Oxford University Press & International Union of Crystallography. ISBN 978-0-19-855577-3.
== External links ==
MS&T conference organized by the main materials societies
MIT OpenCourseWare for MSE | Wikipedia/Materials_science_&_engineering |
Agriculture encompasses crop and livestock production, aquaculture, and forestry for food and non-food products. Agriculture was a key factor in the rise of sedentary human civilization, whereby farming of domesticated species created food surpluses that enabled people to live in cities. While humans started gathering grains at least 105,000 years ago, nascent farmers only began planting them around 11,500 years ago. Sheep, goats, pigs, and cattle were domesticated around 10,000 years ago. Plants were independently cultivated in at least 11 regions of the world. In the 20th century, industrial agriculture based on large-scale monocultures came to dominate agricultural output.
As of 2021, small farms produce about one-third of the world's food, but large farms are prevalent. The largest 1% of farms in the world are greater than 50 hectares (120 acres) and operate more than 70% of the world's farmland. Nearly 40% of agricultural land is found on farms larger than 1,000 hectares (2,500 acres). However, five of every six farms in the world consist of fewer than 2 hectares (4.9 acres), and take up only around 12% of all agricultural land. Farms and farming greatly influence rural economies and shape rural society, affecting both the direct agricultural workforce and the broader businesses that support farms and farming populations.
The major agricultural products can be broadly grouped into foods, fibers, fuels, and raw materials (such as rubber). Food classes include cereals (grains), vegetables, fruits, cooking oils, meat, milk, eggs, and fungi. Global agricultural production amounts to approximately 11 billion tonnes of food, 32 million tonnes of natural fibers and 4 billion m³ of wood. However, around 14% of the world's food is lost from production before reaching the retail level.
Modern agronomy, plant breeding, agrochemicals such as pesticides and fertilizers, and technological developments have sharply increased crop yields, but also contributed to ecological and environmental damage. Selective breeding and modern practices in animal husbandry have similarly increased the output of meat, but have raised concerns about animal welfare and environmental damage. Environmental issues include contributions to climate change, depletion of aquifers, deforestation, antibiotic resistance, and other agricultural pollution. Agriculture is both a cause of and sensitive to environmental degradation, such as biodiversity loss, desertification, soil degradation, and climate change, all of which can cause decreases in crop yield. Genetically modified organisms are widely used, although some countries ban them.
== Etymology and scope ==
The word agriculture is a late Middle English adaptation of Latin agricultūra, from ager 'field' and cultūra 'cultivation' or 'growing'. While agriculture usually refers to human activities, certain species of ant, termite and beetle have been cultivating crops for up to 60 million years. Agriculture is defined with varying scopes, in its broadest sense using natural resources to "produce commodities which maintain life, including food, fiber, forest products, horticultural crops, and their related services". Thus defined, it includes arable farming, horticulture, animal husbandry and forestry, but horticulture and forestry are in practice often excluded.
It may also be broadly decomposed into plant agriculture, which concerns the cultivation of useful plants, and animal agriculture, the production of agricultural animals.
== History ==
=== Origins ===
The development of agriculture enabled the human population to grow many times larger than could be sustained by hunting and gathering. Agriculture began independently in different parts of the globe, and included a diverse range of taxa, in at least 11 separate centers of origin. Wild grains were collected and eaten from at least 105,000 years ago. In the Paleolithic Levant, 23,000 years ago, cultivation of cereals such as emmer, barley, and oats has been observed near the Sea of Galilee. Rice was domesticated in China between 11,500 and 6,200 BC with the earliest known cultivation from 5,700 BC, followed by mung, soy and azuki beans. Sheep were domesticated in Mesopotamia between 13,000 and 11,000 years ago. Cattle were domesticated from the wild aurochs in the areas of modern Turkey and Pakistan some 10,500 years ago. Pig production emerged in Eurasia, including Europe, East Asia and Southwest Asia, where wild boar were first domesticated about 10,500 years ago. In the Andes of South America, the potato was domesticated between 10,000 and 7,000 years ago, along with beans, coca, llamas, alpacas, and guinea pigs. Sugarcane and some root vegetables were domesticated in New Guinea around 9,000 years ago. Sorghum was domesticated in the Sahel region of Africa by 7,000 years ago. Cotton was domesticated in Peru by 5,600 years ago, and was independently domesticated in Eurasia. In Mesoamerica, wild teosinte was bred into maize (corn) from 10,000 to 6,000 years ago. The horse was domesticated in the Eurasian Steppes around 3500 BC.
Scholars have offered multiple hypotheses to explain the historical origins of agriculture. Studies of the transition from hunter-gatherer to agricultural societies indicate an initial period of intensification and increasing sedentism; examples are the Natufian culture in the Levant, and the Early Chinese Neolithic in China. Then, wild stands that had previously been harvested started to be planted, and gradually came to be domesticated.
=== Civilizations ===
In Eurasia, the Sumerians started to live in villages from about 8,000 BC, relying on the Tigris and Euphrates rivers and a canal system for irrigation. Ploughs appear in pictographs around 3,000 BC; seed-ploughs around 2,300 BC. Farmers grew wheat, barley, vegetables such as lentils and onions, and fruits including dates, grapes, and figs. Ancient Egyptian agriculture relied on the Nile River and its seasonal flooding. Farming started in the predynastic period at the end of the Paleolithic, after 10,000 BC. Staple food crops were grains such as wheat and barley, alongside industrial crops such as flax and papyrus. In India, wheat, barley and jujube were domesticated by 9,000 BC, soon followed by sheep and goats. Cattle, sheep and goats were domesticated in Mehrgarh culture by 8,000–6,000 BC. Cotton was cultivated by the 5th–4th millennium BC. Archeological evidence indicates an animal-drawn plough from 2,500 BC in the Indus Valley civilization.
In China, from the 5th century BC, there was a nationwide granary system and widespread silk farming. Water-powered grain mills were in use by the 1st century BC, followed by irrigation. By the late 2nd century, heavy ploughs had been developed with iron ploughshares and mouldboards. These spread westwards across Eurasia. Asian rice was domesticated 8,200–13,500 years ago – depending on the molecular clock estimate that is used – in the Pearl River valley of southern China, with a single genetic origin from the wild rice Oryza rufipogon. In Greece and Rome, the major cereals were wheat, emmer, and barley, alongside vegetables including peas, beans, and olives. Sheep and goats were kept mainly for dairy products.
In the Americas, crops domesticated in Mesoamerica (apart from teosinte) include squash, beans, and cacao. Cocoa was domesticated by the Mayo Chinchipe of the upper Amazon around 3,000 BC.
The turkey was probably domesticated in Mexico or the American Southwest. The Aztecs developed irrigation systems, formed terraced hillsides, fertilized their soil, and developed chinampas or artificial islands. The Mayas used extensive canal and raised field systems to farm swampland from 400 BC. In South America agriculture may have begun about 9000 BC with the domestication of squash (Cucurbita) and other plants. Coca was domesticated in the Andes, as were the peanut, tomato, tobacco, and pineapple. Cotton was domesticated in Peru by 3,600 BC. Animals including llamas, alpacas, and guinea pigs were domesticated there. In North America, the indigenous people of the East domesticated crops such as sunflower, tobacco, squash and Chenopodium. Wild foods including wild rice and maple sugar were harvested. The domesticated strawberry is a hybrid of a Chilean and a North American species, developed by breeding in Europe and North America. The indigenous people of the Southwest and the Pacific Northwest practiced forest gardening and fire-stick farming. The natives controlled fire on a regional scale to create a low-intensity fire ecology that sustained a low-density agriculture in loose rotation; a sort of "wild" permaculture. A system of companion planting called the Three Sisters was developed in North America. The three crops were winter squash, maize, and climbing beans.
Indigenous Australians, long supposed to have been nomadic hunter-gatherers, practiced systematic burning, possibly to enhance natural productivity in fire-stick farming. Scholars have pointed out that hunter-gatherers need a productive environment to support gathering without cultivation. Because the forests of New Guinea have few food plants, early humans may have used "selective burning" to increase the productivity of the wild karuka fruit trees to support the hunter-gatherer way of life.
The Gunditjmara and other groups developed eel farming and fish trapping systems from some 5,000 years ago. There is evidence of 'intensification' across the whole continent over that period. In two regions of Australia, the central west coast and eastern central, early farmers cultivated yams, native millet, and bush onions, possibly in permanent settlements.
=== Revolution ===
In the Middle Ages, compared to the Roman period, agriculture in Western Europe became more focused on self-sufficiency. The agricultural population under feudalism was typically organized into manors consisting of several hundred or more acres of land presided over by a lord of the manor with a Roman Catholic church and priest.
Thanks to exchange with Al-Andalus, where the Arab Agricultural Revolution was underway, European agriculture was transformed, with improved techniques and the diffusion of crop plants, including the introduction of sugar, rice, cotton and fruit trees (such as the orange).
After 1492, the Columbian exchange brought New World crops such as maize, potatoes, tomatoes, sweet potatoes, and manioc to Europe, and Old World crops such as wheat, barley, rice, and turnips, and livestock (including horses, cattle, sheep and goats) to the Americas.
Irrigation, crop rotation, and fertilizers advanced from the 17th century with the British Agricultural Revolution, allowing global population to rise significantly. Since 1900, agriculture in developed nations, and to a lesser extent in the developing world, has seen large rises in productivity as mechanization replaced human labor, assisted by synthetic fertilizers, pesticides, and selective breeding. The Haber-Bosch method allowed the industrial-scale synthesis of ammonia, and thus of ammonium nitrate fertilizer, greatly increasing crop yields and sustaining a further increase in global population.
Modern agriculture has raised or encountered ecological, political, and economic issues including water pollution, biofuels, genetically modified organisms, tariffs and farm subsidies, leading to alternative approaches such as the organic movement. Unsustainable farming practices in North America led to the Dust Bowl of the 1930s.
== Types ==
Pastoralism involves managing domesticated animals. In nomadic pastoralism, herds of livestock are moved from place to place in search of pasture, fodder, and water. This type of farming is practiced in arid and semi-arid regions of the Sahara, Central Asia and some parts of India.
In shifting cultivation, a small area of forest is cleared by cutting and burning the trees. The cleared land is used for growing crops for a few years until the soil becomes too infertile, and the area is abandoned. Another patch of land is selected and the process is repeated. This type of farming is practiced mainly in areas with abundant rainfall where the forest regenerates quickly. This practice is used in Northeast India, Southeast Asia, and the Amazon Basin.
Subsistence farming is practiced to satisfy family or local needs alone, with little left over for transport elsewhere. It is intensively practiced in Monsoon Asia and South-East Asia. An estimated 2.5 billion subsistence farmers worked in 2018, cultivating about 60% of the earth's arable land.
Intensive farming is cultivation to maximize productivity, with a low fallow ratio and a high use of inputs (water, fertilizer, pesticide and automation). It is practiced mainly in developed countries.
== Contemporary agriculture ==
=== Status ===
From the twentieth century onwards, intensive agriculture increased crop productivity. It substituted synthetic fertilizers and pesticides for labor, but caused increased water pollution, and often involved farm subsidies. Soil degradation and diseases such as stem rust are major concerns globally; approximately 40% of the world's agricultural land is seriously degraded. In recent years there has been a backlash against the environmental effects of conventional agriculture, resulting in the organic, regenerative, and sustainable agriculture movements. One of the major forces behind this movement has been the European Union, which first certified organic food in 1991 and began reform of its Common Agricultural Policy (CAP) in 2005 to phase out commodity-linked farm subsidies, also known as decoupling. The growth of organic farming has renewed research in alternative technologies such as integrated pest management, selective breeding, and controlled-environment agriculture. There are concerns about the lower yield associated with organic farming and its impact on global food security. Recent mainstream technological developments include genetically modified food.
By 2015, the agricultural output of China was the largest in the world, followed by the European Union, India and the United States. Economists measure the total factor productivity of agriculture, according to which agriculture in the United States is roughly 1.7 times more productive than it was in 1948.
Agriculture employed 873 million people in 2021, or 27% of the global workforce, compared with 1,027 million (or 40%) in 2000. The share of agriculture in global GDP remained stable at around 4% between 2000 and 2023.
Despite increases in agricultural production and productivity, between 702 and 828 million people were affected by hunger in 2021. Food insecurity and malnutrition can be the result of conflict, climate extremes and variability, and economic swings. They can also be caused by a country's structural characteristics, such as income status and natural resource endowments, as well as its political economy.
Pesticide use in agriculture went up 62% between 2000 and 2021, with the Americas accounting for half the use in 2021.
The International Fund for Agricultural Development posits that an increase in smallholder agriculture may be part of the solution to concerns about food prices and overall food security, given the favorable experience of Vietnam.
=== Workforce ===
Agriculture provides about one-quarter of all global employment, more than half in sub-Saharan Africa and almost 60 percent in low-income countries. As countries develop, other jobs have historically pulled workers away from agriculture, and labor-saving innovations increase agricultural productivity by reducing labor requirements per unit of output. Over time, a combination of labor supply and labor demand trends has driven down the share of population employed in agriculture.
During the 16th century in Europe, between 55 and 75% of the population was engaged in agriculture; by the 19th century, this had dropped to between 35 and 65%. In the same countries today, the figure is less than 10%.
At the start of the 21st century, some one billion people, or over one-third of the available workforce, were employed in agriculture. Agriculture accounts for approximately 70% of the global employment of children, and in many countries employs the largest percentage of women of any industry. The service sector overtook the agricultural sector as the largest global employer in 2007.
In many developed countries, immigrants help fill labor shortages in high-value agriculture activities that are difficult to mechanize. Foreign farm workers from mostly Eastern Europe, North Africa and South Asia constituted around one-third of the salaried agricultural workforce in Spain, Italy, Greece and Portugal in 2013. In the United States of America, more than half of all hired farmworkers (roughly 450,000 workers) were immigrants in 2019, although the number of new immigrants arriving in the country to work in agriculture has fallen by 75 percent in recent years and rising wages indicate this has led to a major labor shortage on U.S. farms.
==== Women in agriculture ====
Around the world, women make up a large share of the population employed in agriculture. This share is growing in all developing regions except East and Southeast Asia where women already make up about 50 percent of the agricultural workforce. Women make up 47 percent of the agricultural workforce in sub-Saharan Africa, a rate that has not changed significantly in the past few decades. However, the Food and Agriculture Organization of the United Nations (FAO) posits that the roles and responsibilities of women in agriculture may be changing – for example, from subsistence farming to wage employment, and from contributing household members to primary producers in the context of male-out-migration.
In general, women account for a greater share of agricultural employment at lower levels of economic development, as inadequate education, limited access to basic infrastructure and markets, high unpaid work burden and poor rural employment opportunities outside agriculture severely limit women's opportunities for off-farm work.
Women who work in agricultural production tend to do so under highly unfavorable conditions. They tend to be concentrated in the poorest countries, where alternative livelihoods are not available, and they maintain the intensity of their work in conditions of climate-induced weather shocks and in situations of conflict. Women are less likely to participate as entrepreneurs and independent farmers and are engaged in the production of less lucrative crops.
The gender gap in land productivity between female- and male-managed farms of the same size is 24 percent. On average, women earn 18.4 percent less than men in wage employment in agriculture; this means that women receive about 82 cents for every dollar earned by men. Progress has also been slow in closing gaps in women's access to irrigation and in ownership of livestock.
Women in agriculture still have significantly less access than men to inputs, including improved seeds, fertilizers and mechanized equipment. On a positive note, the gender gap in access to mobile internet in low- and middle-income countries fell from 25 percent to 16 percent between 2017 and 2021, and the gender gap in access to bank accounts narrowed from 9 to 6 percentage points. Women are as likely as men to adopt new technologies when the necessary enabling factors are put in place and they have equal access to complementary resources.
=== Safety ===
Agriculture, specifically farming, remains a hazardous industry, and farmers worldwide remain at high risk of work-related injuries, lung disease, noise-induced hearing loss, skin diseases, as well as certain cancers related to chemical use and prolonged sun exposure. On industrialized farms, injuries frequently involve the use of agricultural machinery, and a common cause of fatal agricultural injuries in developed countries is tractor rollovers. Pesticides and other chemicals used in farming can be hazardous to worker health, and workers exposed to pesticides may experience illness or have children with birth defects. As an industry in which families commonly share in work and live on the farm itself, entire families can be at risk for injuries, illness, and death. Ages 0–6 may be an especially vulnerable population in agriculture; common causes of fatal injuries among young farm workers include drowning, machinery and motor accidents, including with all-terrain vehicles.
The International Labor Organization considers agriculture "one of the most hazardous of all economic sectors". It estimates that the annual work-related death toll among agricultural employees is at least 170,000, twice the average rate of other jobs. In addition, incidences of death, injury and illness related to agricultural activities often go unreported. The organization has developed the Safety and Health in Agriculture Convention, 2001, which covers the range of risks in the agriculture occupation, the prevention of these risks and the role that individuals and organizations engaged in agriculture should play.
In the United States, agriculture has been identified by the National Institute for Occupational Safety and Health as a priority industry sector in the National Occupational Research Agenda to identify and provide intervention strategies for occupational health and safety issues.
In the European Union, the European Agency for Safety and Health at Work has issued guidelines on implementing health and safety directives in agriculture, livestock farming, horticulture, and forestry. The Agricultural Safety and Health Council of America (ASHCA) also holds a yearly summit to discuss safety.
== Production ==
Overall production varies by country.
=== Crop cultivation systems ===
Cropping systems vary among farms depending on the available resources and constraints; geography and climate of the farm; government policy; economic, social and political pressures; and the philosophy and culture of the farmer.
Shifting cultivation (or slash and burn) is a system in which forests are burnt, releasing nutrients to support cultivation of annual and then perennial crops for a period of several years. Then the plot is left fallow to regrow forest, and the farmer moves to a new plot, returning after many more years (10–20 years). This fallow period is shortened if population density grows, requiring the input of nutrients (fertilizer or manure) and some manual pest control. Annual cultivation is the next phase of intensity, in which there is no fallow period. This requires even greater nutrient and pest control inputs.
Further industrialization led to the use of monocultures, when one cultivar is planted on a large acreage. Because of the low biodiversity, nutrient use is uniform and pests tend to build up, necessitating the greater use of pesticides and fertilizers. Multiple cropping, in which several crops are grown sequentially in one year, and intercropping, when several crops are grown at the same time, are other kinds of annual cropping systems known as polycultures.
In subtropical and arid environments, the timing and extent of agriculture may be limited by rainfall, either not allowing multiple annual crops in a year, or requiring irrigation. In all of these environments perennial crops are grown (coffee, chocolate) and systems are practiced such as agroforestry. In temperate environments, where ecosystems were predominantly grassland or prairie, highly productive annual farming is the dominant agricultural system.
Important categories of food crops include cereals, legumes, forage, fruits and vegetables. Natural fibers include cotton, wool, hemp, silk and flax. Specific crops are cultivated in distinct growing regions throughout the world.
=== Livestock production systems ===
Animal husbandry is the breeding and raising of animals for meat, milk, eggs, or wool, and for work and transport. Working animals, including horses, mules, oxen, water buffalo, camels, llamas, alpacas, donkeys, and dogs, have for centuries been used to help cultivate fields, harvest crops, wrangle other animals, and transport farm products to buyers.
Livestock production systems can be defined based on feed source, as grassland-based, mixed, and landless. As of 2010, 30% of Earth's ice- and water-free area was used for producing livestock, with the sector employing approximately 1.3 billion people. Between the 1960s and the 2000s, there was a significant increase in livestock production, both by numbers and by carcass weight, especially among beef, pigs and chickens, the last of which saw production increase by almost a factor of 10. Non-meat animals, such as milk cows and egg-producing chickens, also showed significant production increases. Global cattle, sheep and goat populations are expected to continue to increase sharply through 2050. Aquaculture or fish farming, the production of fish for human consumption in confined operations, is one of the fastest growing sectors of food production, growing at an average of 9% a year between 1975 and 2007.
During the second half of the 20th century, producers using selective breeding focused on creating livestock breeds and crossbreeds that increased production, while mostly disregarding the need to preserve genetic diversity. This trend has led to a significant decrease in genetic diversity and resources among livestock breeds, leading to a corresponding decrease in disease resistance and local adaptations previously found among traditional breeds.
Grassland-based livestock production relies upon plant material such as shrubland, rangeland, and pastures for feeding ruminant animals. Outside nutrient inputs may be used; however, manure is returned directly to the grassland as a major nutrient source. This system is particularly important in areas where crop production is not feasible because of climate or soil, and it supports an estimated 30–40 million pastoralists. Mixed production systems use grassland, fodder crops and grain feed crops as feed for ruminant and monogastric (one stomach; mainly chickens and pigs) livestock. Manure is typically recycled in mixed systems as a fertilizer for crops.
Landless systems rely upon feed from outside the farm, representing the de-linking of crop and livestock production found more prevalently in Organization for Economic Co-operation and Development member countries. Synthetic fertilizers are more heavily relied upon for crop production and manure use becomes a challenge as well as a source for pollution. Industrialized countries use these operations to produce much of the global supplies of poultry and pork. Scientists estimate that 75% of the growth in livestock production between 2003 and 2030 will be in confined animal feeding operations, sometimes called factory farming. Much of this growth is happening in developing countries in Asia, with much smaller amounts of growth in Africa. Some of the practices used in commercial livestock production, including the usage of growth hormones, are controversial.
=== Production practices ===
Tillage is the practice of breaking up the soil with tools such as the plow or harrow to prepare for planting, for nutrient incorporation, or for pest control. Tillage varies in intensity from conventional to no-till. It can improve productivity by warming the soil, incorporating fertilizer and controlling weeds, but also renders soil more prone to erosion, triggers the decomposition of organic matter releasing CO2, and reduces the abundance and diversity of soil organisms.
Pest control includes the management of weeds, insects, mites, and diseases. Chemical (pesticides), biological (biocontrol), mechanical (tillage), and cultural practices are used. Cultural practices include crop rotation, culling, cover crops, intercropping, composting, avoidance, and resistance. Integrated pest management attempts to use all of these methods to keep pest populations below the number which would cause economic loss, and recommends pesticides as a last resort.
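A minimal sketch of that escalating-intervention logic is given below; the pest counts and threshold multipliers are invented for illustration, since real IPM programs derive economic thresholds from crop value and pest biology:

    def ipm_action(pests_per_plant, economic_threshold):
        # Escalate control effort only as pest pressure rises;
        # pesticide is recommended strictly as a last resort.
        if pests_per_plant < economic_threshold:
            return "monitor only"
        if pests_per_plant < 2 * economic_threshold:
            return "cultural controls: rotation, cover crops, tillage"
        if pests_per_plant < 4 * economic_threshold:
            return "biological control: conserve or release natural enemies"
        return "targeted pesticide application (last resort)"

    print(ipm_action(3, economic_threshold=5))   # monitor only
    print(ipm_action(12, economic_threshold=5))  # biological control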
Nutrient management includes both the source of nutrient inputs for crop and livestock production, and the method of use of manure produced by livestock. Nutrient inputs can be chemical inorganic fertilizers, manure, green manure, compost and minerals. Crop nutrient use may also be managed using cultural techniques such as crop rotation or a fallow period. Manure is used either by holding livestock where the feed crop is growing, such as in managed intensive rotational grazing, or by spreading either dry or liquid formulations of manure on cropland or pastures.
Water management is needed where rainfall is insufficient or variable, which occurs to some degree in most regions of the world. Some farmers use irrigation to supplement rainfall. In other areas such as the Great Plains in the U.S. and Canada, farmers use a fallow year to conserve soil moisture for the following year. Recent technological innovations in precision agriculture allow for water status monitoring and automate water usage, leading to more efficient management. Agriculture represents 70% of freshwater use worldwide. However, water withdrawal ratios for agriculture vary significantly by income level. In least developed countries and landlocked developing countries, water withdrawal ratios for agriculture are as high as 90 percent of total water withdrawals and about 60 percent in Small Island Developing States.
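A minimal sketch of the threshold logic behind such sensor-driven irrigation follows; all parameter values are illustrative assumptions, as real controllers rely on crop- and soil-specific calibration:

    def irrigation_minutes(soil_moisture_pct, target_pct=30.0,
                           root_zone_mm=300.0, mm_per_minute=0.5):
        # Open the valve just long enough to close the moisture deficit.
        deficit_pct = max(0.0, target_pct - soil_moisture_pct)
        water_needed_mm = root_zone_mm * deficit_pct / 100.0
        return water_needed_mm / mm_per_minute

    print(irrigation_minutes(22.0))  # 24 mm deficit -> 48.0 minutes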
According to a 2014 report by the International Food Policy Research Institute, agricultural technologies will have the greatest impact on food production if adopted in combination with each other. Using a model that assessed how eleven technologies could affect agricultural productivity, food security and trade by 2050, the institute found that the number of people at risk from hunger could be reduced by as much as 40% and food prices could be reduced by almost half.
Payment for ecosystem services is a method of providing additional incentives to encourage farmers to conserve some aspects of the environment. Measures might include paying for reforestation upstream of a city, to improve the supply of fresh water.
=== Agricultural automation ===
Different definitions exist for agricultural automation and for the variety of tools and technologies that are used to automate production. One view is that agricultural automation refers to autonomous navigation by robots without human intervention. Alternatively, it is defined as the accomplishment of production tasks through mobile, autonomous, decision-making, mechatronic devices. However, FAO finds that these definitions do not capture all the aspects and forms of automation, such as robotic milking machines that are static, most motorized machinery that automates the performing of agricultural operations, and digital tools (e.g., sensors) that automate only diagnosis. FAO defines agricultural automation as the use of machinery and equipment in agricultural operations to improve their diagnosis, decision-making or performing, reducing the drudgery of agricultural work or improving the timeliness, and potentially the precision, of agricultural operations.
The technological evolution in agriculture has involved a progressive move from manual tools to animal traction, to motorized mechanization, to digital equipment and finally, to robotics with artificial intelligence (AI). Motorized mechanization using engine power automates the performance of agricultural operations such as ploughing and milking. With digital automation technologies, it also becomes possible to automate diagnosis and decision-making of agricultural operations. For example, autonomous crop robots can harvest and seed crops, while drones can gather information to help automate input application. Precision agriculture often employs such automation technologies. Motorized machines are increasingly complemented, or even superseded, by new digital equipment that automates diagnosis and decision-making. A conventional tractor, for example, can be converted into an automated vehicle allowing it to sow a field autonomously.
Motorized mechanization has increased significantly across the world in recent years, although reliable global data with broad country coverage exist only for tractors and only up to 2009. Sub-Saharan Africa is the only region where the adoption of motorized mechanization has stalled over the past decades.
Automation technologies are increasingly used for managing livestock, though evidence on adoption is lacking. Global automatic milking system sales have increased over recent years, but adoption is likely mostly in Northern Europe, and likely almost absent in low- and middle-income countries. Automated feeding machines for both cows and poultry also exist, but data and evidence regarding their adoption trends and drivers is likewise scarce.
Measuring the overall employment impacts of agricultural automation is difficult because it requires large amounts of data tracking all the transformations and the associated reallocation of workers both upstream and downstream. While automation technologies reduce labor needs for the newly automated tasks, they also generate new labor demand for other tasks, such as equipment maintenance and operation. Agricultural automation can also stimulate employment by allowing producers to expand production and by creating other agrifood systems jobs. This is especially true when it happens in a context of rising scarcity of rural labor, as is the case in high-income countries and many middle-income countries. On the other hand, if forcibly promoted, for example through government subsidies in contexts of abundant rural labor, it can lead to labor displacement and falling or stagnant wages, particularly affecting poor and low-skilled workers.
=== Effects of climate change on yields ===
Climate change and agriculture are interrelated on a global scale. Climate change affects agriculture through changes in average temperatures, rainfall, and weather extremes (like storms and heat waves); changes in pests and diseases; changes in atmospheric carbon dioxide and ground-level ozone concentrations; changes in the nutritional quality of some foods; and changes in sea level. Global warming is already affecting agriculture, with effects unevenly distributed across the world.
In a 2022 report, the Intergovernmental Panel on Climate Change describes how human-induced warming has slowed growth of agricultural productivity over the past 50 years in mid and low latitudes. Methane emissions have negatively impacted crop yields by increasing temperatures and surface ozone concentrations. Warming is also negatively affecting crop and grassland quality and harvest stability. Ocean warming has decreased sustainable yields of some wild fish populations while ocean acidification and warming have already affected farmed aquatic species. Climate change will probably increase the risk of food insecurity for some vulnerable groups, such as the poor.
== Crop alteration and biotechnology ==
=== Plant breeding ===
Crop alteration has been practiced by humankind for thousands of years, since the beginning of civilization. Altering crops through breeding practices changes the genetic make-up of a plant to develop crops with more beneficial characteristics for humans, for example, larger fruits or seeds, drought-tolerance, or resistance to pests. Significant advances in plant breeding ensued after the work of geneticist Gregor Mendel. His work on dominant and recessive alleles, although initially largely ignored for almost 50 years, gave plant breeders a better understanding of genetics and breeding techniques. Crop breeding includes techniques such as plant selection with desirable traits, self-pollination and cross-pollination, and molecular techniques that genetically modify the organism.
Domestication of plants has, over the centuries, increased yield, improved disease resistance and drought tolerance, eased harvest and improved the taste and nutritional value of crop plants. Careful selection and breeding have had enormous effects on the characteristics of crop plants. Plant selection and breeding in the 1920s and 1930s improved pasture (grasses and clover) in New Zealand. Extensive X-ray and ultraviolet induced mutagenesis efforts (i.e. primitive genetic engineering) during the 1950s produced the modern commercial varieties of grains such as wheat, corn (maize) and barley.
The Green Revolution popularized the use of conventional hybridization to sharply increase yield by creating "high-yielding varieties". For example, average yields of corn (maize) in the US have increased from around 2.5 tons per hectare (t/ha) (40 bushels per acre) in 1900 to about 9.4 t/ha (150 bushels per acre) in 2001. Similarly, worldwide average wheat yields have increased from less than 1 t/ha in 1900 to more than 2.5 t/ha in 1990. South American average wheat yields are around 2 t/ha, African under 1 t/ha, and Egypt and Arabia up to 3.5 to 4 t/ha with irrigation. In contrast, the average wheat yield in countries such as France is over 8 t/ha. Variations in yields are due mainly to variation in climate, genetics, and the level of intensive farming techniques (use of fertilizers, chemical pest control, and growth control to avoid lodging).
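The paired units in those corn figures can be checked directly. Assuming the standard 56 lb (25.4 kg) bushel of shelled corn and 0.4047 ha per acre, the quoted bushel-per-acre values convert to the quoted tonnes per hectare:

    KG_PER_BUSHEL_CORN = 25.401   # 56 lb standard bushel of shelled corn
    HA_PER_ACRE = 0.404686

    def bu_per_acre_to_t_per_ha(bu_per_acre):
        # kg per acre divided by hectares per acre, then kg -> tonnes
        return bu_per_acre * KG_PER_BUSHEL_CORN / HA_PER_ACRE / 1000.0

    print(round(bu_per_acre_to_t_per_ha(40), 1))   # 2.5 t/ha (1900 figure)
    print(round(bu_per_acre_to_t_per_ha(150), 1))  # 9.4 t/ha (2001 figure)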
Investments into innovation for agriculture are long term. This is because it takes time for research to become commercialized and for technology to be adapted to meet multiple regions’ needs, as well as meet national guidelines before being adopted and planted in a farmer's fields. For instance, it took at least 60 years from the introduction of hybrid corn technology before its adoption became widespread.
Agricultural innovation developed for the specific agroecological conditions of one region is not easily transferred and used in another region with different agroecological conditions. Instead, the innovation would have to be adapted to the specific conditions of that other region and respect its biodiversity and environmental requirements and guidelines. Some such adaptations can be seen through the steadily increasing number of plant varieties protected under the plant variety protection instrument administered by the International Union for the Protection of New Varieties of Plants (UPOV).
=== Genetic engineering ===
Genetically modified organisms (GMO) are organisms whose genetic material has been altered by genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering has expanded the genes available to breeders to use in creating desired germlines for new crops. Increased durability, nutritional content, insect and virus resistance and herbicide tolerance are a few of the attributes bred into crops through genetic engineering. For some, GMO crops cause food safety and food labeling concerns. Numerous countries have placed restrictions on the production, import or use of GMO foods and crops. The Biosafety Protocol, an international treaty, regulates the trade of GMOs. There is ongoing discussion regarding the labeling of foods made from GMOs, and while the EU currently requires all GMO foods to be labeled, the US does not.
Herbicide-resistant seeds have a gene implanted into their genome that allows the plants to tolerate exposure to herbicides, including glyphosate. These seeds allow the farmer to grow a crop that can be sprayed with herbicides to control weeds without harming the resistant crop. Herbicide-tolerant crops are used by farmers worldwide. With the increasing use of herbicide-tolerant crops, comes an increase in the use of glyphosate-based herbicide sprays. In some areas glyphosate resistant weeds have developed, causing farmers to switch to other herbicides. Some studies also link widespread glyphosate usage to iron deficiencies in some crops, which is both a crop production and a nutritional quality concern, with potential economic and health implications.
Other GMO crops used by growers include insect-resistant crops, which have a gene from the soil bacterium Bacillus thuringiensis (Bt), which produces a toxin specific to insects. These crops resist damage by insects. Some believe that similar or better pest-resistance traits can be acquired through traditional breeding practices, and resistance to various pests can be gained through hybridization or cross-pollination with wild species. In some cases, wild species are the primary source of resistance traits; some tomato cultivars that have gained resistance to at least 19 diseases did so through crossing with wild populations of tomatoes.
== Environmental impact ==
=== Effects and costs ===
Agriculture is both a cause of and sensitive to environmental degradation, such as biodiversity loss, desertification, soil degradation and climate change, which cause decreases in crop yield. Agriculture is one of the most important drivers of environmental pressures, particularly habitat change, climate change, water use and toxic emissions. Agriculture is the main source of toxins released into the environment, including insecticides, especially those used on cotton. The 2011 UNEP Green Economy report stated that agricultural operations produced some 13 percent of anthropogenic global greenhouse gas emissions. This includes gases from the use of inorganic fertilizers, agro-chemical pesticides, and herbicides, as well as fossil fuel-energy inputs.
Agriculture imposes multiple external costs upon society through effects such as pesticide damage to nature (especially herbicides and insecticides), nutrient runoff, excessive water usage, and loss of natural environment. A 2000 assessment of agriculture in the UK determined total external costs for 1996 of £2,343 million, or £208 per hectare. A 2005 analysis of these costs in the US concluded that cropland imposes approximately $5 to $16 billion ($30 to $96 per hectare), while livestock production imposes $714 million. Both studies, which focused solely on the fiscal impacts, concluded that more should be done to internalize external costs. Neither included subsidies in their analysis, but they noted that subsidies also influence the cost of agriculture to society.
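The two UK figures are mutually consistent: dividing the total cost by the per-hectare cost recovers the approximate area covered by the assessment,

    \frac{\pounds 2{,}343\ \text{million}}{\pounds 208/\text{ha}} \approx 11.3\ \text{million ha}.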
Agriculture seeks to increase yield and to reduce costs, often employing measures that cut biodiversity to very low levels. Yield increases with inputs such as fertilizers and removal of pathogens, predators, and competitors (such as weeds). Costs decrease with increasing scale of farm units, such as making fields larger; this means removing hedges, ditches and other areas of habitat. Pesticides kill insects, plants and fungi. Effective yields fall with on-farm losses, which may be caused by poor production practices during harvesting, handling, and storage.
The environmental effects of climate change mean that research on pests and diseases that do not usually afflict a region is essential. In 2021, farmers discovered stem rust on wheat in the Champagne area of France, a disease that for the previous 20 to 30 years had occurred only in Morocco. Because of climate change, insects that once died off over the winter now survive and multiply.
=== Livestock issues ===
A senior UN official, Henning Steinfeld, said that "Livestock are one of the most significant contributors to today's most serious environmental problems". Livestock production occupies 70% of all land used for agriculture, or 30% of the land surface of the planet. It is one of the largest sources of greenhouse gases, responsible for 18% of the world's greenhouse gas emissions as measured in CO2 equivalents. By comparison, all transportation emits 13.5% of the CO2. (This comparison later turned out to be an apples-and-oranges analogy.) Livestock produces 65% of human-related nitrous oxide (which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is 23 times as warming as CO2). It also generates 64% of anthropogenic ammonia emissions. Livestock expansion is cited as a key factor driving deforestation; in the Amazon basin 70% of previously forested area is now occupied by pastures and the remainder used for feed crops. Through deforestation and land degradation, livestock is also driving reductions in biodiversity. A well documented phenomenon is woody plant encroachment, caused by overgrazing in rangelands. Furthermore, the United Nations Environment Programme (UNEP) states that "methane emissions from global livestock are projected to increase by 60 per cent by 2030 under current practices and consumption patterns."
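A minimal sketch of how such CO2-equivalent totals are computed from the global warming potential factors quoted above; the gas quantities in the example are hypothetical:

    GWP = {"CO2": 1.0, "CH4": 23.0, "N2O": 296.0}  # factors quoted above

    def co2_equivalent_tonnes(tonnes_by_gas):
        # Weight each gas by its global warming potential relative to CO2.
        return sum(GWP[gas] * tonnes for gas, tonnes in tonnes_by_gas.items())

    # Hypothetical emissions: 1 t of methane plus 0.1 t of nitrous oxide
    print(co2_equivalent_tonnes({"CH4": 1.0, "N2O": 0.1}))  # 52.6 t CO2e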
=== Land and water issues ===
Land transformation, the use of land to yield goods and services, is the most substantial way humans alter the Earth's ecosystems, and is the driving force causing biodiversity loss. Estimates of the amount of land transformed by humans vary from 39 to 50%. It is estimated that 24% of land globally experiences land degradation, a long-term decline in ecosystem function and productivity, with cropland being disproportionately affected. Land management is the driving factor behind degradation; 1.5 billion people rely upon the degrading land. Degradation can be through deforestation, desertification, soil erosion, mineral depletion, acidification, or salinization. In 2021, the global agricultural land area was 4.79 billion hectares (ha), down 2 percent, or 0.09 billion ha, compared with 2000. Between 2000 and 2021, roughly two-thirds of agricultural land was used for permanent meadows and pastures (3.21 billion ha in 2021), which declined by 5 percent (0.17 billion ha). One-third of the total agricultural land was cropland (1.58 billion ha in 2021), which increased by 6 percent (0.09 billion ha).
Eutrophication, excessive nutrient enrichment in aquatic ecosystems resulting in algal blooms and anoxia, leads to fish kills, loss of biodiversity, and renders water unfit for drinking and other industrial uses. Excessive fertilization and manure application to cropland, as well as high livestock stocking densities cause nutrient (mainly nitrogen and phosphorus) runoff and leaching from agricultural land. These nutrients are major nonpoint pollutants contributing to eutrophication of aquatic ecosystems and pollution of groundwater, with harmful effects on human populations. Fertilizers also reduce terrestrial biodiversity by increasing competition for light, favoring those species that are able to benefit from the added nutrients.
Agriculture simultaneously faces growing freshwater demand and precipitation anomalies (droughts, floods, and extreme rainfall and weather events) on rainfed fields and grazing lands. Agriculture accounts for 70 percent of withdrawals of freshwater resources, and an estimated 41 percent of current global irrigation water use occurs at the expense of environmental flow requirements. It has long been known that aquifers in areas as diverse as northern China, the Upper Ganges and the western US are being depleted, and new research extends these problems to aquifers in Iran, Mexico and Saudi Arabia. Increasing pressure is being placed on water resources by industry and urban areas, meaning that water scarcity is increasing and agriculture is facing the challenge of producing more food for the world's growing population with reduced water resources. While industrial withdrawals have declined in the past few decades and municipal withdrawals have increased only marginally since 2010, agricultural withdrawals have continued to grow at an ever faster pace. Agricultural water usage can also cause major environmental problems, including the destruction of natural wetlands, the spread of water-borne diseases, and land degradation through salinization and waterlogging, when irrigation is performed incorrectly.
=== Pesticides ===
Pesticide use has increased since 1950 to 2.5 million short tons annually worldwide, yet crop loss from pests has remained relatively constant. The World Health Organization estimated in 1992 that three million pesticide poisonings occur annually, causing 220,000 deaths. Pesticides select for pesticide resistance in the pest population, leading to a condition termed the "pesticide treadmill" in which pest resistance warrants the development of a new pesticide.
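The selection dynamic behind the pesticide treadmill can be sketched with a toy two-type model. The survival rates below are assumptions chosen only to show how quickly a rare resistant fraction can come to dominate under repeated spraying:

    def next_resistant_fraction(r, surv_resistant=0.90, surv_susceptible=0.05):
        # One sprayed generation: survivors of each type set the next mix.
        resistant = r * surv_resistant
        susceptible = (1.0 - r) * surv_susceptible
        return resistant / (resistant + susceptible)

    r = 0.001  # assumed initial resistant fraction
    for generation in range(1, 6):
        r = next_resistant_fraction(r)
        print(f"generation {generation}: resistant fraction = {r:.3f}")
    # climbs from 0.1% to over 99% within about five sprayed generations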
An alternative argument is that the way to "save the environment" and prevent famine is by using pesticides and intensive high yield farming, a view exemplified by a quote heading the Center for Global Food Issues website: 'Growing more per acre leaves more land for nature'. However, critics argue that a trade-off between the environment and a need for food is not inevitable, and that pesticides can replace good agronomic practices such as crop rotation. The Push–pull agricultural pest management technique involves intercropping, using plant aromas to repel pests from crops (push) and to lure them to a place from which they can then be removed (pull).
=== Contribution to climate change ===
Agriculture contributes to climate change through greenhouse gas emissions and by the conversion of non-agricultural land such as forests into agricultural land. The agriculture, forestry and land use sector contributes between 13% and 21% of global greenhouse gas emissions. Emissions of nitrous oxide and methane make up over half of total greenhouse gas emissions from agriculture. Animal husbandry is a major source of greenhouse gas emissions.
Approximately 57% of global GHG emissions from the production of food are from the production of animal-based food while plant-based foods contribute 29% and the remaining 14% is for other utilizations. Farmland management and land-use change represented major shares of total emissions (38% and 29%, respectively), whereas rice and beef were the largest contributing plant- and animal-based commodities (12% and 25%, respectively). South and Southeast Asia and South America were the largest emitters of production-based GHGs.
=== Effects of climate change on agriculture ===
Climate change puts a significant share of crops in danger already at 1.5 degrees of warming. While in North America, Europe and Central Asia the share of endangered crops is relatively small at this level of warming, in the Middle East and North Africa region, for example, close to 50% of cropland is in danger. With further temperature rise the risk increases in all regions, in some more than in others. Globally, the cropland area in a safe climatic zone decreases for all the major crop groups as warming exceeds 1.5 degrees.
=== Sustainability ===
Current farming methods have resulted in over-stretched water resources, high levels of erosion and reduced soil fertility. There is not enough water to continue farming using current practices; therefore how water, land, and ecosystem resources are used to boost crop yields must be reconsidered. A solution would be to give value to ecosystems, recognizing environmental and livelihood tradeoffs, and balancing the rights of a variety of users and interests. Inequities that result when such measures are adopted would need to be addressed, such as the reallocation of water from poor to rich, the clearing of land to make way for more productive farmland, or the preservation of a wetland system that limits fishing rights.
Technological advancements help provide farmers with tools and resources to make farming more sustainable. Technology permits innovations like conservation tillage, a farming process which helps prevent land loss to erosion, reduces water pollution, and enhances carbon sequestration.
Agricultural automation can help address some of the challenges associated with climate change and thus facilitate adaptation efforts. For example, the application of digital automation technologies (e.g. in precision agriculture) can improve resource-use efficiency in conditions which are increasingly constrained for agricultural producers. Moreover, when applied to sensing and early warning, they can help address the uncertainty and unpredictability of weather conditions associated with accelerating climate change.
Other potential sustainable practices include conservation agriculture, agroforestry, improved grazing, avoided grassland conversion, and biochar. Current mono-crop farming practices in the United States preclude widespread adoption of sustainable practices, such as 2–3 crop rotations that incorporate grass or hay with annual crops, unless negative emission goals such as soil carbon sequestration become policy.
The food demand of Earth's projected population, with current climate change predictions, could be satisfied by improvement of agricultural methods, expansion of agricultural areas, and a sustainability-oriented consumer mindset.
=== Energy dependence ===
Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of energy-intensive mechanization, fertilizers and pesticides. The vast majority of this energy input comes from fossil fuel sources. Between the 1960s and the 1980s, the Green Revolution transformed agriculture around the globe, with world grain production increasing significantly (between 70% and 390% for wheat and 60% to 150% for rice, depending on geographic area) as world population doubled. Heavy reliance on petrochemicals has raised concerns that oil shortages could increase costs and reduce agricultural output.
Industrialized agriculture depends on fossil fuels in two fundamental ways: direct consumption on the farm and manufacture of inputs used on the farm. Direct consumption includes the use of lubricants and fuels to operate farm vehicles and machinery.
Indirect consumption includes the manufacture of fertilizers, pesticides, and farm machinery. In particular, the production of nitrogen fertilizer can account for over half of agricultural energy usage. Together, direct and indirect consumption by US farms accounts for about 2% of the nation's energy use. Direct and indirect energy consumption by U.S. farms peaked in 1979, and has since gradually declined. Food systems encompass not just agriculture but off-farm processing, packaging, transporting, marketing, consumption, and disposal of food and food-related items. Agriculture accounts for less than one-fifth of food system energy use in the US.
=== Plastic pollution ===
Plastic products are used extensively in agriculture, including to increase crop yields and improve the efficiency of water and agrichemical use. "Agriplastic" products include films to cover greenhouses and tunnels, mulch to cover soil (e.g. to suppress weeds, conserve water, increase soil temperature and aid fertilizer application), shade cloth, pesticide containers, seedling trays, protective mesh and irrigation tubing. The polymers most commonly used in these products are low-density polyethylene (LDPE), linear low-density polyethylene (LLDPE), polypropylene (PP) and polyvinyl chloride (PVC).
The total amount of plastics used in agriculture is difficult to quantify. A 2012 study reported that almost 6.5 million tonnes per year were consumed globally while a later study estimated that global demand in 2015 was between 7.3 million and 9 million tonnes. Widespread use of plastic mulch and lack of systematic collection and management have led to the generation of large amounts of mulch residue. Weathering and degradation eventually cause the mulch to fragment. These fragments and larger pieces of plastic accumulate in soil. Mulch residue has been measured at levels of 50 to 260 kg per hectare in topsoil in areas where mulch use dates back more than 10 years, which confirms that mulching is a major source of both microplastic and macroplastic soil contamination.
Agricultural plastics, especially plastic films, are not easy to recycle because of high contamination levels (up to 40–50% by weight contamination by pesticides, fertilizers, soil and debris, moist vegetation, silage juice water, and UV stabilizers) and collection difficulties. Therefore, they are often buried or abandoned in fields and watercourses or burned. These disposal practices lead to soil degradation and can result in contamination of soils and leakage of microplastics into the marine environment as a result of precipitation run-off and tidal washing. In addition, additives in residual plastic film (such as UV and thermal stabilizers) may have deleterious effects on crop growth, soil structure, nutrient transport and salt levels. There is a risk that plastic mulch will deteriorate soil quality, deplete soil organic matter stocks, increase soil water repellence and emit greenhouse gases. Microplastics released through fragmentation of agricultural plastics can absorb and concentrate contaminants capable of being passed up the trophic chain.
== Disciplines ==
=== Agricultural economics ===
Agricultural economics is economics as it relates to the "production, distribution and consumption of [agricultural] goods and services". Combining agricultural production with general theories of marketing and business as a discipline of study began in the late 1800s, and grew significantly through the 20th century. Although the study of agricultural economics is relatively recent, major trends in agriculture have significantly affected national and international economies throughout history, ranging from tenant farmers and sharecropping in the post-American Civil War Southern United States to the European feudal system of manorialism. In the United States, and elsewhere, food costs attributed to food processing, distribution, and agricultural marketing, sometimes referred to as the value chain, have risen while the costs attributed to farming have declined. This is related to the greater efficiency of farming, combined with the increased level of value addition (e.g. more highly processed products) provided by the supply chain. Market concentration has increased in the sector as well, and although the total effect of the increased market concentration is likely increased efficiency, the changes redistribute economic surplus from producers (farmers) and consumers, and may have negative implications for rural communities.
National government policies, such as taxation, subsidies, tariffs and others, can significantly change the economic marketplace for agricultural products. Since at least the 1960s, a combination of trade restrictions, exchange rate policies and subsidies have affected farmers in both the developing and the developed world. In the 1980s, non-subsidized farmers in developing countries experienced adverse effects from national policies that created artificially low global prices for farm products. Between the mid-1980s and the early 2000s, several international agreements limited agricultural tariffs, subsidies and other trade restrictions.
However, as of 2009, there was still a significant amount of policy-driven distortion in global agricultural product prices. The three agricultural products with the most trade distortion were sugar, milk and rice, mainly due to taxation. Among the oilseeds, sesame had the most taxation, but overall, feed grains and oilseeds had much lower levels of taxation than livestock products. Since the 1980s, policy-driven distortions have decreased more among livestock products than crops during the worldwide reforms in agricultural policy. Despite this progress, certain crops, such as cotton, still see subsidies in developed countries artificially deflating global prices, causing hardship in developing countries with non-subsidized farmers. Unprocessed commodities such as corn, soybeans, and cattle are generally graded to indicate quality, affecting the price the producer receives. Commodities are generally reported by production quantities, such as volume, number or weight.
=== Agricultural science ===
Agricultural science is a broad multidisciplinary field of biology that encompasses the parts of exact, natural, economic and social sciences used in the practice and understanding of agriculture. It covers topics such as agronomy, plant breeding and genetics, plant pathology, crop modeling, soil science, entomology, production techniques and improvement, study of pests and their management, and study of adverse environmental effects such as soil degradation, waste management, and bioremediation.
The scientific study of agriculture began in the 18th century, when Johann Friedrich Mayer conducted experiments on the use of gypsum (hydrated calcium sulphate) as a fertilizer. Research became more systematic when in 1843, John Lawes and Henry Gilbert began a set of long-term agronomy field experiments at Rothamsted Research Station in England; some of them, such as the Park Grass Experiment, are still running. In America, the Hatch Act of 1887 provided funding for what it was the first to call "agricultural science", driven by farmers' interest in fertilizers. In agricultural entomology, the USDA began to research biological control in 1881; it instituted its first large program in 1905, searching Europe and Japan for natural enemies of the spongy moth and brown-tail moth, establishing parasitoids (such as solitary wasps) and predators of both pests in the US.
== Policy ==
Agricultural policy is the set of government decisions and actions relating to domestic agriculture and imports of foreign agricultural products. Governments usually implement agricultural policies with the goal of achieving a specific outcome in the domestic agricultural product markets. Some overarching themes include risk management and adjustment (including policies related to climate change, food safety and natural disasters), economic stability (including policies related to taxes), natural resources and environmental sustainability (especially water policy), research and development, and market access for domestic commodities (including relations with global organizations and agreements with other countries). Agricultural policy can also touch on food quality, ensuring that the food supply is of a consistent and known quality, food security, ensuring that the food supply meets the population's needs, and conservation. Policy programs can range from financial programs, such as subsidies, to encouraging producers to enroll in voluntary quality assurance programs.
A 2021 report finds that globally, support to agricultural producers accounts for almost US$540 billion a year. This amounts to 15 percent of total agricultural production value and is heavily biased towards measures that lead to inefficiency, are unequally distributed, and are harmful to the environment and human health.
There are many influences on the creation of agricultural policy, including consumers, agribusiness, trade lobbies and other groups. Agribusiness interests hold a large amount of influence over policy making, in the form of lobbying and campaign contributions. Political action groups, including those interested in environmental issues and labor unions, also provide influence, as do lobbying organizations representing individual agricultural commodities. The Food and Agriculture Organization of the United Nations (FAO) leads international efforts to defeat hunger and provides a forum for the negotiation of global agricultural regulations and agreements. Samuel Jutzi, director of FAO's animal production and health division, states that lobbying by large corporations has stopped reforms that would improve human health and the environment. For example, proposals in 2010 for a voluntary code of conduct for the livestock industry that would have provided incentives for improving standards for health, and environmental regulations, such as the number of animals an area of land can support without long-term damage, were successfully defeated due to large food company pressure.
== See also ==
== References ==
== Cited sources ==
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 IGO (license statement/permission). Text taken from Drowning in Plastics – Marine Litter and Plastic Waste Vital Graphics, United Nations Environment Programme.
This article incorporates text from a free content work (license statement/permission). Text taken from In Brief: The State of Food and Agriculture 2019. Moving forward on food loss and waste reduction, FAO, FAO.
This article incorporates text from a free content work (license statement/permission). Text taken from In Brief to The State of Food Security and Nutrition in the World 2022. Repurposing food and agricultural policies to make healthy diets more affordable, FAO.
This article incorporates text from a free content work (license statement/permission). Text taken from In Brief: The State of Food and Agriculture 2018. Migration, agriculture and rural development, FAO, FAO.
This article incorporates text from a free content work (license statement/permission). Text taken from In Brief to The State of Food and Agriculture 2022. Leveraging automation in agriculture for transforming agrifood systems, FAO, FAO.
This article incorporates text from a free content work (license statement/permission). Text taken from Enabling inclusive agricultural automation, FAO, FAO.
This article incorporates text from a free content work. Licensed under CC BY-SA 3.0 (license statement/permission). Text taken from The status of women in agrifood systems – Overview, FAO, FAO.
This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO, FAO.
This article incorporates text from a free content work. Licensed under CC BY 4.0 (license statement/permission). Text taken from World Intellectual Property Report 2024 - The importance of local capabilities in AgTech specialization, WIPO, WIPO.
== External links ==
Food and Agriculture Organization
United States Department of Agriculture
Agriculture material from the World Bank Group
Agriculture collected news and commentary at The New York Times
Agriculture collected news and commentary at The Guardian | Wikipedia/Agricultural_systems |
A Bachelor of Arts and Science(s) (BASc), sometimes styled Bachelor of Science and Arts (BScA or BSA), is an undergraduate bachelor's degree conferred by a small number of universities in countries including the United States, Canada, the United Kingdom, New Zealand, Australia, and France. There is no single way in which a Bachelor of Arts and Science programme is structured, but such programmes generally involve students taking interdisciplinary courses from both the liberal arts and the sciences, and/or require a student to complete the general requirements for a bachelor's degree for two different academic majors (or academic minors): one that usually leads to a BA degree and one that usually leads to a BSc degree. The degrees are generally designed to be completed in three to four years, depending on the institution.
In English-speaking universities, the BASc is not an example of a double degree, as universities only confer a single degree. However, Sciences Po, the only French-speaking university to offer the programme, grants a dual Bachelor's degree upon completion.
== References == | Wikipedia/Bachelor_of_Arts_and_Science |
The Bachelor of Applied Arts and Sciences, often abbreviated as BAAS or BAASc, is an undergraduate degree.
== Usage ==
In the United States, the Bachelor of Applied Arts and Sciences (B.A.A.S.) degree is considered a completion degree. The degree can be awarded to students who have both technical education and traditional college/university education. Some universities also give credit for work-related training and certification completed by the student. Applied Arts and Sciences degree programs typically require a student to complete an academic core of 40-60 semester credit hours consisting of English, History, Political Science, Philosophy and Sociology, and sciences such as Mathematics, Biology, Chemistry and Physics. Technical coursework can count for 30-60 credit hours, and in some cases, work experience and certifications are evaluated and up to 30 credit hours may be awarded towards a degree. Upper level academic credit hours make up 30-45 hours of coursework, depending on the program. Some programs include the declaration of a particular major or specialization. Others include several concentrations. A total of 120 semester hours is the typical total credit requirement for most Bachelor's degree programs in the United States.
BAAS degrees are fully accredited degrees when offered by accredited educational institutions and meet the same qualifications as more traditional Bachelor's degrees, allowing admission into graduate schools and law schools for further study in pursuit of Master's degrees or Doctoral degrees.
== See also ==
Bachelor of Applied Science
Bachelor of Applied Arts
Bachelor of Arts
Bachelor of Science
Bachelor's degree
== References == | Wikipedia/Bachelor_of_Applied_Arts_and_Sciences |
Integrated Engineering is a degree program (along with similar concept programs such as Interdisciplinary and Multidisciplinary Engineering) combining aspects of traditional engineering studies and the liberal arts, meant to prepare graduates for multi-disciplinary and project-based workplaces. Integrated engineers acquire background in core disciplines such as materials, solid mechanics, fluid mechanics, and systems involving chemical, electro-mechanical, biological and environmental components. In the United States, an alliance of Integrated-type programs has been formed, called the Alliance for Integrated Engineering (A4IE).
== Academia and Accreditation ==
=== Institutions ===
Currently, the following academic institutions are known to offer Integrated Engineering programs:
Canada
University of British Columbia
University of Western Ontario
UK
The New Model Institute for Technology and Engineering (NMITE)
University of Bath
University of Cardiff
University of Liverpool
University of Nottingham
Anglia Ruskin University
University Centre Peterborough
United States
Arizona State University
Florida International University
Lafayette College
Lehigh University
Southern Utah University
Minnesota State University, Mankato (Iron Range Engineering)
Texas A&M University
University of Alabama at Birmingham
University of Texas at El Paso (E-Lead Program)
University of San Diego
Wake Forest University
Washington and Lee University
Germany
Baden-Wuerttemberg Cooperative State University (DHBW)
South Westphalia University of Applied Sciences
Estonia
Tallinn University of Technology
Korea
Chung-Ang University
Trinidad and Tobago
University of Trinidad and Tobago
Thailand
Chiang Mai University
=== Canada ===
Integrated Engineering originated at the University of Western Ontario in Ontario, Canada and in 2000 the Applied Science Faculty of the University of British Columbia also began a degree program for Integrated Engineering.
In Canada, the program has been fully accredited by the Canadian Engineering Accreditation Board and engineers are able to obtain a Professional Engineer (P.Eng) Certificate.
=== United Kingdom ===
In 1988, the Engineering Council UK identified the need for routes to qualification for Chartered (Professional) Engineers that:
meet the identified needs of industry,
increase access to engineering education by more students,
provide a balanced curriculum combining the subjects that engineers use most often and directed towards the needs of the majority of engineers.
This is the fundamental definition for Integrated Engineering.
The qualities looked for by industry when recruiting graduates were identified as:
flexibility and broad education,
ability to understand non engineering functions,
ability to solve problems,
knowledge of the principles of engineering and ability to apply them in practical situations,
information skills,
experience of project work, especially cross linked projects,
ability to work as a member of a team,
presentation and communication skills.
Engineering Council UK, 1988, An Integrated Engineering Degree Programme. Engineering Council UK, 1988, Admissions to Universities - Action to increase the supply of engineers.
Following open competition for additional funding provided by the UK Department for Technology and Industry, and industrial supporters including British Petroleum, six universities were selected from thirty three applicants. Four "Pilot Programmes" were launched at Cardiff University, Nottingham Trent University, Portsmouth University and Sheffield Hallam University.
In 1989, The Nottingham Trent University (UK) admitted students to the first of the Engineering Council's new Integrated Engineering Degree Programme courses. The course was accredited, at the CEng and European Engineer level, by the Institutions of Mechanical Engineers, Electrical Engineers and Manufacturing Engineers.
Generic engineering programmes are common. Integrated Engineering is distinct through emphasizing the development of personal competencies, especially the ability of students to work within groups. It is design led, and integration of all the subjects of study is a defining characteristic, achieved partly through the medium of project based learning.
Following the successful experience at The Nottingham Trent University, Integrated Engineering programmes were established in 1993, at selected universities in Bulgaria and Hungary, with the aid of European Union funding granted under the Tempus Programme.
In University of Liverpool, the Integrated Engineering Program is accredited by the Institution of Mechanical Engineers and the Institution of Electrical Engineers, and can lead to Chartered Engineer status.
In Anglia Ruskin University, the Integrated Engineering Program is accredited by the Institution of Engineering and Technology, and can lead to Incorporated Engineer status.
=== United States ===
In the U.S. there are several Integrated engineering education programs.
Southern Utah University requires its students to pass the Fundamentals of Engineering exam (FE) before they graduate; and received ABET accreditation in 2004 that extended retroactively through October 2003. The graduates are also able to obtain a Professional Engineer (P.E.) license.
Minnesota State University, Mankato has developed a collaborative Integrated engineering program to provide engineering education at MNSCU Community Colleges in the Northern Higher Education District in former Iron Range communities. This partnership allows students to stay near home, while earning a bachelor of science in integrated engineering while focusing on local engineering needs of manufacturers and businesses. As part of the program students are also required to sit for the FE examination prior to graduation and are eligible to sit for the P.E. exam license as the program is also ABET accredited.
=== Germany ===
In Germany the Baden-Wuerttemberg Cooperative State University (DHBW) introduced a flexible M.Eng. program in 2015 to fit the industrial demand for generally educated engineers for Integrated Industry, known as Industry 4.0 in Germany. The graduate school program "Integrated Engineering" is administered at the Center for Advanced Studies in Heilbronn and requires at least two years of professional experience as an engineer for admission.
=== Korea ===
In Korea the Department of Integrative Engineering at Chung-Ang University aims to develop human resources that will contribute to building a knowledge infrastructure by effectively responding to rapid educational and social changes.
The department will focus on developing fundamental and application technologies by realizing future-oriented converging technologies and, through a global network, on strengthening convergence-related competitiveness at the university and national level. To accomplish these goals, based on imaginative education using an innovative system, the department will develop "integrative engineering" graduates equipped with independent research abilities.
=== Trinidad and Tobago ===
The Bachelor of Applied Science (B.A.Sc.) and Master of Engineering (M.Eng) programs in Utilities Engineering were validated in December 2008 at the University of Trinidad and Tobago. These programs are geared towards the Electrical and Mechanical engineering disciplines that exist within the broad area of Integrated Engineering.
Prior to the introduction of the programs most of the engineers in the utilities sector were specialized in one branch of engineering mainly Electrical or Mechanical. The sector required an engineer who was multi-skilled and versed in both disciplines. The Utilities Engineer therefore performs a wide range of maintenance and operational duties in the following industries:
Process Industry,
Electric Utilities (generation, distribution and efficient utilization),
Transportation Industry,
Processing and Manufacturing Industry,
Water and sanitation industry,
Mining and Smelting Industry,
Renewable and Green Energy Industry.
== See also ==
University of Western Ontario
University of British Columbia
Engineering Undergraduate Society of the University of British Columbia
Southern Utah University
Chung-Ang University
== References ==
== External links ==
Southern Utah University: Integrated Engineering and Pre-Engineering
University of British Columbia: Integrated Engineering
University of Western Ontario: Integrated Engineering
University of Liverpool: Integrated Engineering
University of Windsor: Integrated Engineering
University of Nottingham: Integrated Engineering
Anglia Ruskin University
University Centre Peterborough
Chung-Ang University, Seoul, Korea: Integrative Engineering
University of Trinidad and Tobago: Utilities Engineering
University of San Diego: Integrated Engineering | Wikipedia/Integrated_engineering |
The Telegraph Plateau is a region of the North Atlantic that was supposedly relatively flat and shallow compared to the rest of the ocean away from shore. The term is archaic and no longer used by hydrographers. It was so named because it seemed to be an ideal route for a transatlantic telegraph cable, and was actually used for the first such cable in 1858. The Victorian hydrographers surveying the route failed to notice the Mid-Atlantic Ridge in the middle of the route.
== Discovery and naming ==
The feature was discovered by Matthew Fontaine Maury while producing a bathymetric chart of the ocean in 1853, compiled from sounding data from multiple ships' logs. Maury so named it because he thought it would be an ideal route for a transatlantic telegraph cable, which at the time, was no more than a vague aspiration. His hydrographers confirmed his assessment of the viability of the route using accurate sounders invented by John Mercer Brooke. Brooke's sounder was designed to release the lead immediately it contacted the bottom so that a sample of the sea bed could be recovered without the risk of the line breaking while it was being hauled back up due to the weight of the lead. Maury had discarded many of the historic readings because he thought they were inaccurate due to the inability of the sounder to tell when the lead had contacted the bottom in deep ocean. This resulted in more line being run out before the reading was taken. Maury's solution to this problem was to determine the law of descent of the line for given sizes of line and weights of lead. Sounders were provided with tables of expected rates of line runout, so that once the rate was much less than that expected, they knew that the bottom had been reached.
The feature occupied the shortest route between the British Isles and the Americas. The region begins at around 51° north and runs from near the south of Ireland to Newfoundland in Canada north of the Great Banks for a distance of 1,400 miles (2,300 km). Its average depth was measured at 1,400 fathoms (2,600 m), and greatest depth 2,500 fathoms (4,600 m). It was described as a table-land or steppe (in comparison to the Southern Andean Steppe), and sometimes called the Atlantic steppe. To the south of the plateau, the hydrography was determined to be very uneven, with depths of between 4 and 6 miles (6,400 and 9,700 m) recorded, although six miles is a little beyond any depth marked on a modern chart.
Maury's chart was instrumental in Cyrus Field's decision to land the transatlantic cable in Newfoundland. It showed that the region to the south of Telegraph Plateau was much too rugged to take a cable directly to the United States. The route had appeal, not only because it was flat, not too deep, and was the shortest route, but also because it had moderate currents which would help the cable to sink straight down and a soft seabed (made up of microscopic shells) for the cable to rest on.
The Victorian hydrographers failed to detect the presence of the Mid-Atlantic Ridge due to the widely spaced soundings taken along the proposed cable route. The ridge is particularly narrow at this point and the hydrography on either side is relatively flat. The term Telegraph Plateau is no longer used by modern hydrographers.
== Geology ==
Telegraph Plateau consists of oceanic crust plus a section of the Reykjanes Ridge (the northern branch of the Mid-Atlantic Ridge) where the plateau crosses from the Eurasian plate to the North American plate. The crossing is in the Charlie-Gibbs fracture zone between the Minia Seamount Mountains to the north and the volcanic Faraday Hills to the south. Telegraph Plateau was once thought to be an ancient shield resembling Greenland with a different strike, and the fold belts around it suggested Caledonian folding. These features are now known to be much more recent and are a result of plate tectonics.
== References ==
== Bibliography ==
Barnes, Clifford A.; Broadus, James M.; Ericson, David Barnard; Fleming, Richard Howell; LaMourie, Matthew J.; Namias, Jerome, "Atlantic Ocean", Encyclopædia Britannica (online), Encyclopædia Britannica, Inc., retrieved 26 May 2020.
Kevin P. Furlong, Steven D. Sheaffer, Rocco Malservisi, "Thermal–rheological controls of deformation within ocean transforms", pp. 65–84 in, The Nature and Tectonic Significance of Fault Zone Weakening, Geological Society of London, 2001 ISBN 1862390908.
John Mullaly, The Laying of the Cable, or the Ocean Telegraph, D. Appleton, 1858, LCCN 08-3682.
Helen M. Rozwadowski, Fathoming the Ocean: The Discovery and Exploration of the Deep Sea, Harvard University Press, 2009 ISBN 0674042948.
Jacob Ward, "Oceanscapes and spacescapes in North Atlantic communications: Laying cables", ch. 10 in, Jon Agar, Jacob Ward (eds), Robert E. Holdsworth, R. A. Strachan, J. Magloughlin, R. J. Knipe (eds), Histories of Technology, the Environment and Modern Britain, UCL Press, retrieved 8 May 2020.
N. Zhirov, Atlantis: Atlantology: Basic Problems, University Press of the Pacific, 2001 (1970 reprint) ISBN 0898755913. | Wikipedia/Telegraph_Plateau |
Enderby's Wharf is a wharf and industrial site on the south bank of the Thames in Greenwich, London, associated with Telcon and other companies. It has a history of more than 150 years of production of submarine communication cables and associated equipment, and is one of the most important sites in the history of submarine communications.
== Location ==
The wharf lies on the Greenwich Peninsula, a little to the north of the historic centre of Greenwich. It is between the Thames and the Blackwall Tunnel approach road, across the river from Cubitt Town. It covers an area of some 16 acres (65,000 m2) and has a frontage of around 600 feet (180 m).
== History ==
The wharf was first developed commercially by the whaling company of Samuel Enderby & Sons. The site was first acquired by Samuel Enderby II, with Morden College assisting in the acquisition of the naval ammunition wharf. It was Samuel Enderby III who initially developed the site along with brothers Charles and George, who acquired the site for a ropeworks. Enderbys also built Enderby House in the early 1830s, which stands today as a listed building among modern housing.
In 1857 submarine cable manufacturers Glass, Elliot & Co and W.T.Henley took over the site; Henleys subsequently moved to North Woolwich. As well as jointly making the short-lived first transatlantic telegraph cable, Glass, Elliot supplied many early telegraph cables including Corsica–Sardinia, Lowestoft–Zandvoort, Malta–Alexandria and Sicily–Algeria. In the 1860s Glass, Elliot and the Gutta Percha Company were absorbed into the Telegraph Construction and Maintenance Company (Telcon), which manufactured a second transatlantic telegraph cable at Enderby's Wharf. This was successfully laid by the SS Great Eastern. The company went on to manufacture many more transatlantic cables, and others to Australia, New Zealand, India, Hong Kong etc.
In 1935 the site came into the ownership of the newly formed Submarine Cables Ltd. Some of the cross-channel, D-Day Pluto pipeline was made at the wharf in World War II. After ownership by BICC and AEI, in 1970 the company passed to STC. Manufacture of submarine cable at the site ended in 1975 (transferring to Southampton), and work concentrated on manufacture of optical repeaters and amplifiers. It subsequently passed to Northern Telecom and then to Alcatel of France in 1994. In 2006 Alcatel merged with US company Lucent to create Alcatel-Lucent, and the following year their division based at Enderby Wharf was renamed Alcatel-Lucent Submarine Networks, which became Alcatel Submarine Networks after Alcatel-Lucent was acquired by Nokia in 2016.
Around 2010, a large part of the site was sold to Barratt Developments for a housing estate, called Enderby Wharf. Enderby House, the original office building, was within the Barratt site but stood disused for several years before being developed to become a bar and restaurant, which opened in April 2021.
== Proposed cruise ship terminal ==
In 2010 a proposal was made to turn 3 acres (12,000 m2) of the river frontage of the site not in use by Alcatel into a terminal for huge cruise liners, and housing. The proposal (known as 'Enderby Wharf') received planning approval from Greenwich Council in 2011, subject to approval by the Greater London Authority (GLA). Mayor Boris Johnson gave his approval to a revised application for a larger terminal in August 2015.
It was expected that up to 55 large cruise ships would dock there every year. Each would need to run its diesel engines continuously to power onboard facilities, generating large polluting emissions near residential areas and schools. While London has strict regulations on air quality and emissions, they do not apply to the Thames, which is in the jurisdiction of the Port of London Authority (PLA) rather than the GLA. At the London elections in 2016 the Conservative and Labour mayoral candidates joined their Green and Liberal Democrat rivals to support the residents' campaign against the terminal. In 2018 Greenwich council changed its opinion, and called for Morgan Stanley, current owner of Enderby Wharf, to implement a less polluting solution for the cruise terminal. Residents of the area proposed it should be "zero emissions", supporting ships able to use onshore electrical power without the need to run their engines while docked. Some cruise ships already support the use of shore power, while others are being adapted to do so.
In 2019, Morgan Stanley sold the site to Criterion Capital for further housing development.
== References == | Wikipedia/Telegraph_Construction_and_Maintenance_Company |
The New York, Newfoundland & London Telegraph Company was a company in a series of conglomerations of several companies that eventually laid the first Trans-Atlantic cable.
In 1854 British engineer Charles Bright met an American, Cyrus Field, with whom Frederic Gisborne had shared his dream of completing a submarine cable connection between North America and Europe. The New York, Newfoundland and London Telegraph Company was founded in 1852, and in 1854 Charles Bright and John Watkins Brett became additional signatories along with Cyrus Field. This was to make sure that Britain had a representative on the company's board and so enable British support for a trans-Atlantic cable. In 1855 Charles Bright finished a survey of the Irish coast and came to the conclusion that Valentia Island, on the west coast of Ireland, was the best possible location, being also the closest point to North America. Armed with this location and the information that Cyrus Field gathered from a US Navy oceanographic seabed survey that had taken place the year before, the project got underway.
== External links ==
'Historical Notes'
'1854 Charter of the New York, Newfoundland and London Telegraph Company' | Wikipedia/New_York,_Newfoundland_and_London_Telegraph_Company |
The telegrapher's equations (or telegraph equations) are a set of two coupled, linear partial differential equations that model voltage and current along a linear electrical transmission line. The equations are important because they allow transmission lines to be analyzed using circuit theory. The equations and their solutions are applicable from 0 Hz (i.e. direct current) to frequencies at which the transmission line structure can support higher order non-TEM modes.: 282–286 The equations can be expressed in both the time domain and the frequency domain. In the time domain the independent variables are distance and time. In the frequency domain the independent variables are distance x and either frequency ω or complex frequency s. The frequency domain variables can be taken as the Laplace transform or Fourier transform of the time domain variables or they can be taken to be phasors, in which case the frequency domain equations can be reduced to ordinary differential equations of distance. An advantage of the frequency domain approach is that differential operators in the time domain become algebraic operations in the frequency domain.
The equations come from Oliver Heaviside who developed the transmission line model starting with an August 1876 paper, On the Extra Current. The model demonstrates that the electromagnetic waves can be reflected on the wire, and that wave patterns can form along the line. Originally developed to describe telegraph wires, the theory can also be applied to radio frequency conductors, audio frequency (such as telephone lines), low frequency (such as power lines), and pulses of direct current.
== Distributed components ==
The telegrapher's equations, like all other equations describing electrical phenomena, result from Maxwell's equations. In a more practical approach, one assumes that the conductors are composed of an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line:
The distributed resistance R of the conductors is represented by a series resistor (expressed in ohms per unit length). In practical conductors, at higher frequencies, R increases approximately proportionally to the square root of frequency due to the skin effect.
The distributed inductance L (due to the magnetic field around the wires, self-inductance, etc.) is represented by a series inductor (henries per unit length).
The capacitance C between the two conductors is represented by a shunt capacitor (farads per unit length).
The conductance G of the dielectric material separating the two conductors is represented by a shunt resistor between the signal wire and the return wire (siemens per unit length). This resistor in the model has a resistance of $R_{\text{shunt}} = 1/G$ ohms. G accounts for both bulk conductivity of the dielectric and dielectric loss. If the dielectric is an ideal vacuum, then G ≡ 0.
The model consists of an infinite series of the infinitesimal elements shown in the figure, and the values of the components are specified per unit length, so the picture of the component can be misleading. An alternative notation is to use R′, L′, C′, and G′ to emphasize that the values are derivatives with respect to length, and that the units of measure combine correctly. These quantities can also be known as the primary line constants to distinguish them from the secondary line constants derived from them, these being the characteristic impedance, the propagation constant, attenuation constant and phase constant. All these constants are constant with respect to time, voltage and current. They may be non-constant functions of frequency.
=== Role of different components ===
The role of the different components can be visualized based on the animation at right.
Inductance L
The inductance couples current to energy stored in the magnetic field. It makes it look like the current has inertia – i.e. with a large inductance, it is difficult to increase or decrease the current flow at any given point. Large inductance L makes the wave move more slowly, just as waves travel more slowly down a heavy rope than a light string. Large inductance also increases the line's surge impedance (more voltage needed to push the same AC current through the line).
Capacitance C
The capacitance couples voltage to the energy stored in the electric field. It controls how much the bunched-up electrons within each conductor repel, attract, or divert the electrons in the other conductor. By deflecting some of these bunched up electrons, the speed of the wave and its strength (voltage) are both reduced. With a larger capacitance, C, there is less repulsion, because the other line (which always has the opposite charge) partly cancels out these repulsive forces within each conductor. Larger capacitance equals weaker restoring forces, making the wave move slightly slower, and also gives the transmission line a lower surge impedance (less voltage needed to push the same AC current through the line).
Resistance R
Resistance corresponds to resistance interior to the two lines, combined. That resistance R couples current to ohmic losses that drop a little of the voltage along the line as heat deposited into the conductor, leaving the current unchanged. Generally, the line resistance is very low, compared to inductive reactance ωL at radio frequencies, and for simplicity is treated as if it were zero, with any voltage dissipation or wire heating accounted for as corrections to the "lossless line" calculation, or just ignored.
Conductance G
Conductance between the lines represents how well current can "leak" from one line to the other. Conductance couples voltage to dielectric loss deposited as heat into whatever serves as insulation between the two conductors. G reduces propagating current by shunting it between the conductors. Generally, wire insulation (including air) is quite good, and the conductance is almost nothing compared to the capacitive susceptance ωC, and for simplicity is treated as if it were zero.
All four parameters L, C, R, and G depend on the material used to build the cable or feedline. All four change with frequency: R, and G tend to increase for higher frequencies, and L and C tend to drop as the frequency goes up.
The figure at right shows a lossless transmission line, where both R and G are zero, which is the simplest and by far most common form of the telegrapher's equations used, but slightly unrealistic (especially regarding R).
=== Values of primary parameters for telephone cable ===
Representative parameter data for 24 gauge telephone polyethylene insulated cable (PIC) at 70 °F (294 K)
This data is from Reeve (1995), p. 558. The variation of R and L is mainly due to skin effect and proximity effect. The constancy of the capacitance is a consequence of intentional design. The variation of G can be inferred from a statement by Terman (1943), p. 112:
"The power factor ... tends to be independent of frequency, since the fraction of energy lost during each cycle ... is substantially independent of the number of cycles per second over wide frequency ranges."
A function of the form

$$G(f)=G_{1}\cdot\left(\frac{f}{f_{1}}\right)^{g_{e}}$$

with $g_{e}$ close to 1.0 would fit Terman's statement. Chen (2004), p. 26 gives an equation of similar form, where G(·) is conductivity as a function of frequency, and $G_{1}$, $f_{1}$, and $g_{e}$ are all real constants.
Usually the resistive losses (R) grow proportionately to $f^{1/2}$ and dielectric losses grow proportionately to $f^{g_{e}}$ with $g_{e}\approx 1$, so at a high enough frequency, dielectric losses will exceed resistive losses. In practice, before that point is reached, a transmission line with a better dielectric is used. In long distance rigid coaxial cable, to get very low dielectric losses, the solid dielectric may be replaced by air with plastic spacers at intervals to keep the center conductor on axis.
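For a feel for where that crossover happens, take the stated proportionalities at face value with $g_{e}=1$ and suppose, purely as an assumed example, that at a reference frequency $f_{1}$ the resistive attenuation is 100 times the dielectric attenuation. The two contributions are then equal when

$$\frac{\alpha_{R}(f)}{\alpha_{G}(f)}=\frac{\alpha_{R}(f_{1})}{\alpha_{G}(f_{1})}\left(\frac{f}{f_{1}}\right)^{1/2-1}=100\left(\frac{f}{f_{1}}\right)^{-1/2}=1\quad\Longrightarrow\quad f=100^{2}\,f_{1}=10^{4}\,f_{1},$$

so dielectric loss overtakes resistive loss only four decades above the reference frequency, which is why the effect matters mainly at very high frequencies.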
== The equation ==
=== Time domain ===
The telegrapher's equations in the time domain are:
$$\begin{aligned}\frac{\partial}{\partial x}V(x,t)&=-L\,\frac{\partial}{\partial t}I(x,t)-R\,I(x,t)\\ \frac{\partial}{\partial x}I(x,t)&=-C\,\frac{\partial}{\partial t}V(x,t)-G\,V(x,t)\end{aligned}$$
They can be combined to get two partial differential equations, each with only one dependent variable, either V or I:
$$\begin{aligned}\frac{\partial^{2}}{\partial x^{2}}V(x,t)-LC\,\frac{\partial^{2}}{\partial t^{2}}V(x,t)&=\left(RC+GL\right)\frac{\partial}{\partial t}V(x,t)+GR\,V(x,t)\\ \frac{\partial^{2}}{\partial x^{2}}I(x,t)-LC\,\frac{\partial^{2}}{\partial t^{2}}I(x,t)&=\left(RC+GL\right)\frac{\partial}{\partial t}I(x,t)+GR\,I(x,t)\end{aligned}$$
Except for the dependent variable (V or I) the formulas are identical.
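To make the time-domain pair concrete, here is a minimal finite-difference (leapfrog) sketch that marches the two coupled equations forward in time on a staggered grid. It is an illustration only, not a method described in this article; the line constants, grid resolution, Gaussian source, and the short-circuit far end are all assumptions chosen for the example.

import numpy as np

# Assumed per-unit-length line constants (illustrative values only)
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12   # ohm/m, H/m, S/m, F/m
dx = 0.01                                  # spatial step in metres (~10 m line below)
v = 1.0 / np.sqrt(L * C)                   # propagation speed on the line
dt = 0.9 * dx / v                          # time step inside the CFL stability limit

nx, nt = 1000, 2000
V = np.zeros(nx)          # voltage samples at integer grid points
I = np.zeros(nx - 1)      # current samples midway between voltage points

for n in range(nt):
    # dV/dx = -L dI/dt - R I  =>  advance I using the spatial gradient of V
    I += -(dt / L) * ((V[1:] - V[:-1]) / dx + R * I)
    # dI/dx = -C dV/dt - G V  =>  advance interior V using the gradient of I
    V[1:-1] += -(dt / C) * ((I[1:] - I[:-1]) / dx + G * V[1:-1])
    # drive the near end with a Gaussian pulse; far end stays at 0 V (a short)
    V[0] = np.exp(-((n * dt - 2e-9) / 5e-10) ** 2)

After the pulse leaves the source it travels at v (here about 2×10⁸ m/s) and, because R and G are small but nonzero, decays slightly as it goes, which is exactly the behavior the lossy time-domain pair predicts.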
=== Frequency domain ===
The telegrapher's equations in the frequency domain are developed in similar forms in the following references: Kraus, Hayt, Marshall,: 59–378 Sadiku,: 497–505 Harrington, Karakash, and Metzger.
$$\begin{aligned}\frac{d}{dx}\mathbf{V}_{\omega}(x)&=-\left(j\omega L_{\omega}+R_{\omega}\right)\mathbf{I}_{\omega}(x),\\ \frac{d}{dx}\mathbf{I}_{\omega}(x)&=-\left(j\omega C_{\omega}+G_{\omega}\right)\mathbf{V}_{\omega}(x).\end{aligned}$$
Here, $\mathbf{I}_{\omega}(x)$ and $\mathbf{V}_{\omega}(x)$ are phasors, with the subscript ω indicating the possible frequency-dependence of the parameters.
The first equation means that $\mathbf{V}_{\omega}(x)$, the propagating voltage at point x, is decreased by the voltage loss produced by $\mathbf{I}_{\omega}(x)$, the current at that point passing through the series impedance $R+j\omega L$. The second equation means that $\mathbf{I}_{\omega}(x)$, the propagating current at point x, is decreased by the current loss produced by $\mathbf{V}_{\omega}(x)$, the voltage at that point appearing across the shunt admittance $G+j\omega C$.
These equations may be combined to produce two uncoupled second-order ordinary differential equations

$$\begin{aligned}\frac{d^{2}}{dx^{2}}\mathbf{V}_{\omega}(x)&=\gamma^{2}\,\mathbf{V}_{\omega}(x),\\ \frac{d^{2}}{dx^{2}}\mathbf{I}_{\omega}(x)&=\gamma^{2}\,\mathbf{I}_{\omega}(x),\end{aligned}$$
with

$$\gamma\equiv\alpha+j\beta\equiv\sqrt{\left(R_{\omega}+j\omega L_{\omega}\right)\left(G_{\omega}+j\omega C_{\omega}\right)},$$
where α is called the attenuation constant and β is called the phase constant.: 385
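As a quick numeric sketch of these definitions (all parameter values are assumptions made for the example, not data from this article), the secondary constants follow directly from the primary ones:

import numpy as np

# Assumed primary line constants at the chosen frequency (illustrative only)
R, L, G, C = 0.1, 250e-9, 1e-6, 100e-12    # ohm/m, H/m, S/m, F/m
w = 2 * np.pi * 1e6                         # angular frequency for f = 1 MHz

Zs = R + 1j * w * L        # series impedance per unit length
Yp = G + 1j * w * C        # shunt admittance per unit length
gamma = np.sqrt(Zs * Yp)   # propagation constant: alpha + j*beta
Zc = np.sqrt(Zs / Yp)      # characteristic impedance

print(f"alpha = {gamma.real:.3e} Np/m, beta = {gamma.imag:.3e} rad/m")
print(f"Zc = {Zc.real:.2f} {Zc.imag:+.2f}j ohm")

With these numbers the loss term R is small compared with ωL, so α comes out small, β is close to ω√(LC), and Zc is close to the lossless value √(L/C) = 50 Ω.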
Working in the frequency domain has the benefit of dealing with both steady state and transient problems in a similar fashion. In the case of the latter the frequency ω becomes a continuous variable; a solution can be obtained by first solving the above (homogeneous) second-order ODEs and then applying the Fourier inversion theorem. An example of solving steady-state problems is given below.
==== Homogeneous solutions ====
Each of the preceding differential equations has two homogeneous solutions in an infinite transmission line.
For the voltage equation

$$\begin{aligned}\mathbf{V}_{\omega,F}(x)&=\mathbf{V}_{\omega,F}(a)\,e^{+\gamma(a-x)};\\ \mathbf{V}_{\omega,R}(x)&=\mathbf{V}_{\omega,R}(b)\,e^{-\gamma(b-x)};\end{aligned}$$

$$\mathbf{V}_{\omega}(x)=\mathbf{V}_{\omega,F}(x)+\mathbf{V}_{\omega,R}(x)\,.$$
For the current equation

$$\begin{aligned}\mathbf{I}_{\omega,F}(x)&=\mathbf{I}_{\omega,F}(a)\,e^{+\gamma(a-x)};\\ \mathbf{I}_{\omega,R}(x)&=\mathbf{I}_{\omega,R}(b)\,e^{-\gamma(b-x)};\end{aligned}$$

$$\mathbf{I}_{\omega}(x)=\mathbf{I}_{\omega,F}(x)-\mathbf{I}_{\omega,R}(x)\,.$$
The negative sign in the previous equation indicates that the current in the reverse wave is traveling in the opposite direction.
Note:

$$\begin{aligned}\mathbf{V}_{\omega,F}(x)&=\mathbf{Z}_{c}\,\mathbf{I}_{\omega,F}(x),\\ \mathbf{V}_{\omega,R}(x)&=\mathbf{Z}_{c}\,\mathbf{I}_{\omega,R}(x),\end{aligned}$$

$$\mathbf{Z}_{c}=\sqrt{\frac{R_{\omega}+j\omega L_{\omega}}{G_{\omega}+j\omega C_{\omega}}}\,,$$

where $\mathbf{Z}_{c}$ is the characteristic impedance of the transmission line.
==== Finite length ====
Johnson gives the following solution,: 739–741

$$\frac{\mathbf{V}_{\mathsf{L}}}{\mathbf{V}_{\mathsf{S}}}=\left[\left(\frac{\mathbf{H}^{-1}+\mathbf{H}}{2}\right)\left(1+\frac{\mathbf{Z}_{\mathsf{S}}}{\mathbf{Z}_{\mathsf{L}}}\right)+\left(\frac{\mathbf{H}^{-1}-\mathbf{H}}{2}\right)\left(\frac{\mathbf{Z}_{\mathsf{S}}}{\mathbf{Z}_{\mathsf{C}}}+\frac{\mathbf{Z}_{\mathsf{C}}}{\mathbf{Z}_{\mathsf{L}}}\right)\right]^{-1}=\frac{\mathbf{Z}_{\mathsf{L}}\mathbf{Z}_{\mathsf{C}}}{\mathbf{Z}_{\mathsf{C}}\left(\mathbf{Z}_{\mathsf{L}}+\mathbf{Z}_{\mathsf{S}}\right)\cosh\left(\gamma x\right)+\left(\mathbf{Z}_{\mathsf{L}}\mathbf{Z}_{\mathsf{S}}+\mathbf{Z}_{\mathsf{C}}^{2}\right)\sinh\left(\gamma x\right)}$$

where $\mathbf{H}\equiv e^{-\gamma x}$ and x is the length of the transmission line.
In the special case where all the impedances are equal, $\mathbf{Z}_{\mathsf{L}}=\mathbf{Z}_{\mathsf{S}}=\mathbf{Z}_{\mathsf{C}}$, the solution reduces to

$$\frac{\mathbf{V}_{\mathsf{L}}}{\mathbf{V}_{\mathsf{S}}}=\frac{1}{2}e^{-\gamma x}\,.$$
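This reduction can be verified in one step from the hyperbolic form above: setting all three impedances equal makes the numerator $\mathbf{Z}_{\mathsf{C}}^{2}$ and the denominator $2\mathbf{Z}_{\mathsf{C}}^{2}\left(\cosh(\gamma x)+\sinh(\gamma x)\right)$, and since $\cosh(\gamma x)+\sinh(\gamma x)=e^{\gamma x}$,

$$\frac{\mathbf{V}_{\mathsf{L}}}{\mathbf{V}_{\mathsf{S}}}=\frac{\mathbf{Z}_{\mathsf{C}}^{2}}{2\mathbf{Z}_{\mathsf{C}}^{2}\,e^{\gamma x}}=\frac{1}{2}e^{-\gamma x}.$$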
== Lossless transmission ==
When ωL ≫ R and ωC ≫ G, wire resistance and insulation conductance can be neglected, and the transmission line is considered as an ideal lossless structure. In this case, the model depends only on the L and C elements. The telegrapher's equations then describe the relationship between the voltage V and the current I along the transmission line, each of which is a function of position x and time t:
$$\begin{aligned}V&=V(x,t)\\ I&=I(x,t)\end{aligned}$$
The equations themselves consist of a pair of coupled, first-order, partial differential equations. The first equation shows that the induced voltage is related to the time rate-of-change of the current through the cable inductance, while the second shows, similarly, that the current drawn by the cable capacitance is related to the time rate-of-change of the voltage.
$$\frac{\partial V}{\partial x}=-L\,\frac{\partial I}{\partial t}$$

$$\frac{\partial I}{\partial x}=-C\,\frac{\partial V}{\partial t}$$
These equations may be combined to form two wave equations, one for voltage V, the other for current I:
$$\begin{aligned}\frac{\partial^{2}V}{\partial t^{2}}-\tilde{v}^{2}\,\frac{\partial^{2}V}{\partial x^{2}}&=0\\ \frac{\partial^{2}I}{\partial t^{2}}-\tilde{v}^{2}\,\frac{\partial^{2}I}{\partial x^{2}}&=0\end{aligned}$$
where
{\displaystyle {\tilde {v}}\equiv {\frac {1}{\sqrt {LC}}}}
is the propagation speed of waves traveling through the transmission line. For transmission lines made of parallel perfect conductors with vacuum between them, this speed is equal to the speed of light.
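For a concrete sense of scale, a minimal sketch evaluating ṽ = 1/√(LC): the per-metre values below are assumptions, roughly those of a 50 Ω coaxial cable, chosen only for illustration.

```python
import numpy as np

# Assumed per-metre constants, typical of 50-ohm coax (illustrative only).
L = 250e-9     # series inductance, H/m
C = 100e-12    # shunt capacitance, F/m

v = 1.0 / np.sqrt(L * C)    # propagation speed from v = 1/sqrt(LC)
print(f"v = {v:.2e} m/s")   # 2.0e8 m/s, about two-thirds the speed of light
```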
=== Lossless sinusoidal steady-state ===
In the case of sinusoidal steady-state (i.e., when a pure sinusoidal voltage is applied and transients have ceased), the angular frequency {\displaystyle \omega } is fixed and the voltage and current take the form of single-tone sine waves:
{\displaystyle {\begin{aligned}V(x,t)&={\mathcal {Re}}\left\{V(x)e^{j\omega t}\right\},\\[1ex]I(x,t)&={\mathcal {Re}}\left\{I(x)e^{j\omega t}\right\}.\end{aligned}}}
In this case, the telegrapher's equations reduce to
{\displaystyle {\begin{aligned}{\frac {dV}{dx}}&=-j\omega LI=-L{\frac {dI}{dt}},\\[1ex]{\frac {dI}{dx}}&=-j\omega CV=-C{\frac {dV}{dt}}.\end{aligned}}}
Likewise, the wave equations reduce to one-dimensional Helmholtz equations
{\displaystyle {\begin{aligned}&{\frac {d^{2}V}{dx^{2}}}+k^{2}V=0,\\[1ex]&{\frac {d^{2}I}{dx^{2}}}+k^{2}I=0,\end{aligned}}}
where k is the wave number:
{\displaystyle k:=\omega {\sqrt {LC\ }}={\frac {\omega }{\tilde {v}}}.}
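A quick numeric example, reusing the same assumed constants as above (values are illustrative only): at 100 MHz this gives k ≈ π rad/m, i.e. a wavelength 2π/k of about 2 m on the line.

```python
import numpy as np

L, C = 250e-9, 100e-12    # same assumed per-metre constants as before
w = 2 * np.pi * 100e6     # 100 MHz, chosen arbitrarily

k = w * np.sqrt(L * C)    # wave number k = omega sqrt(LC) = omega / v
print(k, 2 * np.pi / k)   # about 3.14 rad/m, so roughly a 2 m wavelength
```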
In the lossless case, it is possible to show that
{\displaystyle V(x)=V_{1}\,e^{-jkx}+V_{2}\,e^{+jkx},}
and
{\displaystyle I(x)={\frac {V_{1}}{Z_{\mathsf {o}}}}\,e^{-jkx}-{\frac {V_{2}}{Z_{\mathsf {o}}}}\,e^{+jkx}\ ,}
where in this special case, {\displaystyle \ k\ } is a real quantity that may depend on frequency, and {\displaystyle \ Z_{\mathsf {o}}\ } is the characteristic impedance of the transmission line, which, for a lossless line, is given by
{\displaystyle Z_{\mathsf {o}}={\sqrt {{\frac {L}{C}}\ }}\ ,}
and {\displaystyle \ V_{1}\ } and {\displaystyle \ V_{2}\ } are arbitrary constants of integration, which are determined by the two boundary conditions (one for each end of the transmission line).
This impedance does not change along the length of the line since L and C are constant at any point on the line, provided that the cross-sectional geometry of the line remains constant.
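To make the boundary-condition remark concrete, the sketch below solves the resulting 2-by-2 linear system for V1 and V2, given an assumed source (EMF VS behind impedance ZS at x = 0) and an assumed load ZL at x = l. All element values are invented for illustration.

```python
import numpy as np

# Assumed terminations and line constants (illustrative only).
L, C = 250e-9, 100e-12
Zo = np.sqrt(L / C)                  # 50 ohms for these values
w = 2 * np.pi * 100e6
k = w * np.sqrt(L * C)
l, ZS, ZL, VS = 3.0, 75.0, 100.0, 1.0

# Boundary conditions:
#   at x = 0:  VS = V(0) + ZS * I(0)
#   at x = l:  V(l) = ZL * I(l)
A = np.array([
    [1 + ZS / Zo, 1 - ZS / Zo],
    [np.exp(-1j * k * l) * (1 - ZL / Zo), np.exp(1j * k * l) * (1 + ZL / Zo)],
])
V1, V2 = np.linalg.solve(A, np.array([VS, 0.0]))
print(V1, V2)
```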
=== Loss-free case, general solution ===
In the loss-free case ({\displaystyle R=G=0}), the general solution of the wave equation for the voltage is the sum of a forward traveling wave and a backward traveling wave:
{\displaystyle V(x,t)=f_{1}(x-{\tilde {v}}t)+f_{2}(x+{\tilde {v}}t)}
where {\displaystyle f_{1}} and {\displaystyle f_{2}} can be any two analytic functions, and
{\displaystyle {\tilde {v}}\equiv {\frac {1}{\sqrt {LC}}}}
is the waveform's propagation speed (also known as phase velocity).
Here, {\displaystyle f_{1}} represents the amplitude profile of a wave traveling from left to right – in a positive {\displaystyle x} direction – whilst {\displaystyle f_{2}} represents the amplitude profile of a wave traveling from right to left. It can be seen that the instantaneous voltage at any point {\displaystyle x} on the line is the sum of the voltages due to both waves.
Using the current {\displaystyle I} and voltage {\displaystyle V} relations given by the telegrapher's equations, we can write
{\displaystyle I(x,t)={\frac {1}{Z_{\mathsf {o}}}}{\Bigl [}f_{1}(x-{\tilde {v}}t)-f_{2}(x+{\tilde {v}}t){\Bigr ]}\,.}
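A minimal sketch evaluating these two relations for assumed Gaussian pulse shapes f1 and f2: the right-going and left-going profiles separate as t advances, and I follows from V through Zo. The impedance, speed, and pulse shapes are all invented for illustration.

```python
import numpy as np

Zo, v = 50.0, 2.0e8    # assumed characteristic impedance and speed

f1 = lambda u: np.exp(-((u / 0.5) ** 2))          # right-going Gaussian pulse
f2 = lambda u: 0.5 * np.exp(-((u / 0.5) ** 2))    # half-amplitude left-going pulse

x = np.linspace(-10.0, 10.0, 2001)
for t in (0.0, 10e-9, 20e-9):
    V = f1(x - v * t) + f2(x + v * t)             # superposed voltages
    I = (f1(x - v * t) - f2(x + v * t)) / Zo      # telegrapher current relation
    print(f"t = {t:.0e} s: max |V| = {np.abs(V).max():.2f}")
```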
== Lossy transmission line ==
When the loss elements {\displaystyle R} and {\displaystyle G} are too substantial to ignore, the differential equations describing the elementary segment of line are
{\displaystyle {\begin{aligned}{\frac {\partial }{\partial x}}V(x,t)&=-L{\frac {\partial }{\partial t}}I(x,t)-R\,I(x,t)\,,\\[6pt]{\frac {\partial }{\partial x}}I(x,t)&=-C{\frac {\partial }{\partial t}}V(x,t)-G\,V(x,t)\,.\end{aligned}}}
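These coupled first-order equations can also be integrated numerically. Below is a rough leapfrog (FDTD-style) sketch on a staggered grid, voltages at integer points and currents at half-points; the line constants, grid, and source pulse are all assumed values, and this is only a demonstration, not a production solver.

```python
import numpy as np

# Leapfrog integration of the lossy-line equations above.
# Every numeric value here is an illustrative assumption, not from the text.
R, L, G, C = 0.5, 250e-9, 1e-4, 100e-12   # per-metre line constants
nx, dx = 400, 0.05                        # 20 m of line in 5 cm cells
dt = 0.5 * dx * np.sqrt(L * C)            # half the CFL stability limit

V = np.zeros(nx)        # voltages at integer grid points
I = np.zeros(nx - 1)    # currents at staggered half-points

for n in range(600):
    V[0] = np.exp(-(((n * dt) - 8e-9) / 2e-9) ** 2)   # injected Gaussian source
    # dI/dt = -(1/L) (dV/dx + R I)
    I -= (dt / L) * ((V[1:] - V[:-1]) / dx + R * I)
    # dV/dt = -(1/C) (dI/dx + G V); x = 0 is driven, far end held at zero
    V[1:-1] -= (dt / C) * ((I[1:] - I[:-1]) / dx + G * V[1:-1])

print(f"pulse peak is near x = {np.argmax(V) * dx:.1f} m")
```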
By differentiating both equations with respect to x, and some algebra, we obtain a pair of damped, dispersive hyperbolic partial differential equations each involving only one unknown:
{\displaystyle {\begin{aligned}{\frac {\partial ^{2}}{\partial x^{2}}}V&=LC{\frac {\partial ^{2}}{\partial t^{2}}}V+\left(RC+GL\right){\frac {\partial }{\partial t}}V+GRV,\\[6pt]{\frac {\partial ^{2}}{\partial x^{2}}}I&=LC{\frac {\partial ^{2}}{\partial t^{2}}}I+\left(RC+GL\right){\frac {\partial }{\partial t}}I+GRI.\end{aligned}}}
These equations resemble the homogeneous wave equation with extra terms in V and I and their first derivatives. These extra terms cause the signal to decay and spread out with time and distance. If the transmission line is only slightly lossy ({\displaystyle R\ll \omega L} and {\displaystyle G\ll \omega C}), signal strength will decay over distance as {\displaystyle e^{-\alpha x}} where
{\displaystyle \alpha \approx {\frac {R}{2Z_{0}}}+{\frac {GZ_{0}}{2}}~.}
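For a sense of magnitude, a short sketch evaluating this attenuation constant; the values R = 0.1 Ω/m, G = 1 μS/m, and Z0 = 50 Ω are invented for illustration.

```python
import numpy as np

# Slightly lossy line: alpha ~ R/(2 Z0) + G Z0 / 2 (assumed values).
R, G, Z0 = 0.1, 1e-6, 50.0            # ohm/m, S/m, ohm
alpha = R / (2 * Z0) + G * Z0 / 2     # nepers per metre
for x in (10, 100, 1000):
    print(x, np.exp(-alpha * x))      # surviving fraction of the amplitude
```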
== Solutions of the telegrapher's equations as circuit components ==
The solutions of the telegrapher's equations can be inserted directly into a circuit as components. The circuit in the figure implements the solutions of the telegrapher's equations.
The solution of the telegrapher's equations can be expressed as an ABCD two-port network with the following defining equations (pp. 5–14, 44):
{\displaystyle {\begin{aligned}V_{1}&=V_{2}\cosh(\gamma x)+I_{2}Z_{\mathsf {o}}\sinh(\gamma x)\,,\\[1ex]I_{1}&={\frac {V_{2}}{Z_{\mathsf {o}}}}\sinh(\gamma x)+I_{2}\cosh(\gamma x)\,.\end{aligned}}}
where
{\displaystyle Z_{\mathsf {o}}\equiv {\sqrt {\frac {R_{\omega }+j\omega L_{\omega }}{G_{\omega }+j\omega C_{\omega }}}},}
and
{\displaystyle \gamma \equiv {\sqrt {\left(R_{\omega }+j\omega L_{\omega }\right)\left(G_{\omega }+j\omega C_{\omega }\right)}},}
just as in the preceding sections. The line parameters Rω, Lω, Gω, and Cω are subscripted by ω to emphasize that they could be functions of frequency.
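As a hedged sketch of how these defining equations behave, the following fragment builds the ABCD matrix of a uniform segment and checks the cascade property: two 50 m segments multiply to the matrix of one 100 m segment. The line constants and frequency are assumptions chosen only for illustration.

```python
import numpy as np

def abcd(R, L, G, C, w, x):
    """ABCD matrix of a uniform segment of length x, per the equations above."""
    gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
    Zo = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    g = gamma * x
    return np.array([[np.cosh(g), Zo * np.sinh(g)],
                     [np.sinh(g) / Zo, np.cosh(g)]])

params = (0.1, 250e-9, 1e-6, 100e-12)   # assumed R, L, G, C at this frequency
w = 2 * np.pi * 10e6

half = abcd(*params, w, 50.0)
full = abcd(*params, w, 100.0)
print(np.allclose(half @ half, full))   # two 50 m sections act as one 100 m line
```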
The ABCD type two-port gives {\displaystyle V_{1}} and {\displaystyle I_{1}} as functions of {\displaystyle V_{2}} and {\displaystyle I_{2}}. The voltage and current relations are symmetrical: both of the equations shown above, when solved for {\displaystyle V_{2}} and {\displaystyle I_{2}} as functions of {\displaystyle V_{1}} and {\displaystyle I_{1}}, yield exactly the same relations, merely with subscripts "1" and "2" reversed and the {\displaystyle \sinh } terms' signs made negative (the "1"→"2" direction is reversed to "1"←"2", hence the sign change).
Every two-wire or balanced transmission line has an implicit (or in some cases explicit) third wire which is called the shield, sheath, common, earth, or ground. So every two-wire balanced transmission line has two modes which are nominally called the differential mode and common mode. The circuit shown in the bottom diagram can only model the differential mode.
In the top circuit, the voltage doublers, the difference amplifiers, and impedances Zo(s) account for the interaction of the transmission line with the external circuit. This circuit is a useful equivalent for an unbalanced transmission line like a coaxial cable.
These are not unique: Other equivalent circuits are possible.
== See also ==
Complex power
Law of squares, Lord Kelvin's preliminary work on this subject
LC circuit
Reflections of signals on conducting lines
RLC circuit
Smith chart
== References == | Wikipedia/Telegrapher's_equations |
The British and Irish Magnetic Telegraph Company (also called the Magnetic Telegraph Company or the Magnetic) was a provider of telegraph services and infrastructure. It was founded in 1850 by John Brett. The Magnetic became the principal competitor to the largest telegraph company in the United Kingdom, Electric Telegraph Company (the Electric), and became the leading company in Ireland. The two companies dominated the market until the telegraph was nationalised in 1870.
The Magnetic's telegraph system differed from other telegraph companies. They favoured underground cables rather than wires suspended on poles. This system was problematic because of the limitations of insulation materials available at the time, but the Magnetic was constrained by the wayleaves owned by other companies on better routes. They were also unique in not using batteries, which were required on other systems. Instead, the operator generated the necessary power electromagnetically: the coded message was sent by the operator moving handles which moved coils past a permanent magnet, thus generating telegraph pulses.
The Magnetic laid the first submarine telegraph cable to Ireland and developed an extensive telegraph network there. They had a close connection with the Submarine Telegraph Company and for a while had a monopoly on underwater, and hence international, communication. They also closely cooperated with the London District Telegraph Company, who provided a cheap telegram service in London. The Magnetic was amongst the first to employ women as telegraph operators.
== Company history ==
The English and Irish Magnetic Telegraph Company (which was also known as the Magnetic) was established by John Brett in 1850. John Pender also had an interest and Charles Tilston Bright was the chief engineer. The company's initial objective was to connect Britain with Ireland following the success of the Submarine Telegraph Company in connecting England with France with the first ocean cable to be put in service. The British and Irish Magnetic Telegraph Company was formed in 1857 in Liverpool through a merger of the English and Irish Magnetic Telegraph Company and the British Telegraph Company (originally known as the British Electric Telegraph Company).
The main competitor of the Magnetic was the Electric Telegraph Company, later, after a merger, the Electric and International Telegraph Company (the Electric for short) founded by William Fothergill Cooke. By the end of the 1850s, the Electric and Magnetic companies were virtually a cartel in Britain. In 1859, the Magnetic moved its headquarters from Liverpool to Threadneedle Street in London, in recognition that they were no longer a regional company. They shared these premises with the Submarine Telegraph Company.
The company had a close relationship with the Submarine Telegraph Company who laid the first cable to France and many subsequent submarine telegraph cables to Europe. From about 1857, the Magnetic had an agreement with them that all their submarine cables were to be used only with the landlines of the Magnetic. The Magnetic also had control of the first cable to Ireland. This control of international traffic gave them a significant advantage in the domestic market.
Another company with a close relationship was the London District Telegraph Company (the District), formed in 1859. The District provided a cheap telegram service within London only. They shared headquarters and directors with the Magnetic. The Magnetic installed their lines and trained their staff in return for the District passing on traffic for the Magnetic outside London.
The Magnetic founded its own press agency. It promoted the agency by offering lower rates to customers who used it than to customers who wanted connections to rival agencies. In 1870, the Magnetic, along with several other telegraph companies including the Electric, was nationalised under the Telegraph Act 1868, and the company was wound up.
== Telegraph system ==
The telegraph system of the Magnetic was somewhat different from other companies. This was largely because the Electric held the patents for the Cooke and Wheatstone telegraph. The name of the company refers to the fact that their telegraph system did not require batteries. Power for the transmissions was generated electromagnetically. The system, invented by William Thomas Henley and George Foster in 1848, was a needle telegraph and came in double-needle or single-needle versions. The machine was worked by the operator pushing pedal keys. An armature connected to the key moved two coils through the magnetic field of a permanent magnet. This generated a pulse of current which caused a deflection of the corresponding needle at both ends of the line. The needles were magnetised and so arranged that they were held in position by the permanent magnet after deflection. The operator was able to apply a current in the reverse direction so that there were two positions that the needle could be held in. The code consisted of various combinations of successive needle deflections to the left or right.
In later years, the Magnetic used other telegraph systems. After the takeover of the British Telegraph Company, the Magnetic acquired the rights to the needle telegraph instrument of that company's founder, Henry Highton. This instrument was the cheapest of any of the instruments produced at the time, but like all needle telegraphs, was slower than audible systems due to the operator having to continually look up at the instrument while transcribing the message. Some companies moved to needle instruments with endstops making two different sounds when the needle struck them (an innovation of Cooke and Wheatstone in 1845) to solve this problem. The Magnetic instead used an 1854 invention of Charles Tilston Bright on its busier lines. This was the acoustic telegraph (not to be confused with the acoustic telegraphy method of multiplexing) known as Bright's bells. In this system, two bells placed either side of the operator are rung with a hammer made to strike the bell by a solenoid driven by a relay. They are so arranged that the right and left bells are struck according to whether a positive or negative pulse of current is received on the telegraph line. Such bells make a much louder sound than the clicking of a needle.
The Magnetic found a method of overcoming the problem of dispersion on long submarine telegraph cables. The phenomenon, poorly understood at the time, was called retardation because different parts of a telegraph pulse travel at different speeds on the cable. Part of the pulse appears to be 'retarded', arriving later than the rest at the destination. This 'smearing out' of the pulse interferes with neighbouring pulses, making the transmission unintelligible unless messages are sent at a much slower speed. The Magnetic found that if they generated pulses of opposite polarity to the main pulse and slightly delayed from it, the retarded signal was sufficiently cancelled to make the line usable at normal operator speeds. This system was developed theoretically by William Thomson and demonstrated to work by Fleeming Jenkin.
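A toy numerical illustration of the idea (not Thomson's actual analysis): if the cable's 'retardation' is crudely modelled as convolution with an exponential tail, then following each pulse with a smaller, delayed pulse of opposite polarity cancels the tail. The kernel, time constant, and delay below are invented for the demonstration.

```python
import numpy as np

# Toy model only: cable "retardation" is mimicked by convolving the sent
# pulses with an exponential tail; all values are arbitrary assumptions.
dt, tau, delay = 0.1, 2.0, 0.5
t = np.arange(0.0, 20.0, dt)
kernel = np.exp(-t / tau)                         # crude dispersion kernel

single = np.zeros_like(t); single[0] = 1.0        # one positive pulse
doublet = single.copy()
doublet[int(delay / dt)] = -np.exp(-delay / tau)  # delayed opposite pulse

y1 = np.convolve(single, kernel)[: len(t)]
y2 = np.convolve(doublet, kernel)[: len(t)]
print(y1[50], y2[50])   # a long tail survives the single pulse, not the doublet
```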
The Magnetic played a part in solving the dispersion problem on the transatlantic telegraph cable of the Atlantic Telegraph Company. Magnetic were strongly connected with this project; Bright promoted it and shares were sold largely to Magnetic shareholders, including Pender. Dispersion on the 1858 Atlantic cable had been so severe that it was almost unusable: it was destroyed by misguided attempts to solve the problem using high voltage. For the 1866 cable, it was planned to use the Magnetic's opposite polarity pulse method, but doubts were expressed over whether it would work over such a great distance. Magnetic connected together various of their British underground cables to provide a total line length of over 2,000 miles (3,200 km) for proof of principle testing. Dispersion was not eliminated from submarine cables until loading coils started to be used on them from 1906 onwards.
== Telegraph network ==
=== First connection to Ireland ===
The company's first objective, in 1852, was to provide the first telegraph service between Great Britain and Ireland by means of a submarine cable between Portpatrick in Scotland and Donaghadee in Ireland. The cable core was gutta-percha insulated copper wire made by the Gutta Percha Company. This was armoured with iron wires by R. S. Newall and Company at their works in Sunderland. Before this could be achieved, two other companies attempted to be the first to make the connection across the Irish Sea.
Despite having the contract to lay the Magnetic company's cable, Newall also secretly constructed another cable at their Gateshead works with the intention of being first to get a telegraph connection to Ireland. This Newall cable was only lightly armoured with an open 'bird-cage' structure of the iron wires, there was no cushioning layer between the core and the armour, and the insulation was not properly tested before laying because of the great hurry to get the job done before Magnetic was ready. This cable was laid from Holyhead in Wales to Howth, near Dublin with William Henry Woodhouse as engineer, and thence to Dublin via underground cable along the railway line. Laying of the submarine cable was completed on 1 June 1852 by the City of Dublin Steam Packet Company's chartered paddle steamer Britannia of 1825, usually used as a cattle ship, and with assistance from the Admiralty with HMS Prospero. However, the cable failed a few days later and was never put into service.
In July of the same year, the Electric Telegraph Company of Ireland tried using an insulated cable inside a hemp rope on the Portpatrick to Donaghadee route. This construction proved problematic because it floated (the Submarine Telegraph Company's Dover to Calais cable in 1850 was also lightweight, having no protection at all other than the insulation, but they had taken the precaution of adding periodic lead weights to sink the cable). It was laid from the schooner Reliance, assisted by tugs. The strong sea currents in the Irish Sea, which is much deeper than the English Channel, dragged the cable into a large bow, and there was consequently insufficient length to land it. The attempt was abandoned.
For their cable, Magnetic were more careful in testing the insulation of batches of cable than Newall. Coils of cable were hung over the side of the dock and left to soak before testing. They used a new type of battery for insulation testing that was capable of being used at sea. Previously, the test batteries had been lined wooden cases with liquid electrolyte (Daniell cells). The new 'sand battery' comprised a moulded gutta-percha case filled with sand saturated with electrolyte, making it virtually unspillable. 144 cells were used in series (around 150 V). Several suspect portions of insulation were removed and repaired by opening up the iron wire armouring with Spanish windlasses. Newall attempted to lay the Sunderland-made cable, again using the chartered steamer Britannia, in the autumn of 1852. The cable was too taut as she sailed from Portpatrick, resulting in the test instruments being dragged into the sea. Several delays caused by broken iron wires as the cable was laid resulted in the ship drifting off course and running out of cable, and this attempt too was abandoned.
Magnetic were successful with a new cable in 1853 over the same route, with Newall this time using the chartered Newcastle collier William Hutt. This was a six-core cable and heavier than the 1852 cable, weighing seven tons per mile. At over 180 fathoms (330 m) down, it was the deepest cable laid to that date. Repairs to the cable in 1861 required 128 splices. Tests on pieces of retrieved cable found that the copper wire used was very impure, containing less than 50% copper, despite the Gutta Percha Company specifying 85%.
=== Land network ===
The Magnetic's network was centred on northern England, Scotland, and Ireland, with its headquarters in Liverpool. Like most other telegraph companies, it ran its major telegraph trunk lines along railways in its home area. One of their first lines was ten unarmoured wires buried in the space between two railway tracks of the Lancashire and Yorkshire Railway. The Magnetic developed an extensive underground cable network from 1851 onwards. This was in contrast to other companies who used wires suspended between telegraph poles, or in built-up areas, from rooftop to rooftop. Partly, the Magnetic buried cables for better protection from the elements. However, a more pressing reason was that many railway companies had exclusive agreements with the Electric, which shut out the Magnetic. Further, the British Telegraph Company had exclusive rights for overhead lines on public roads, and the United Kingdom Telegraph Company had exclusive rights along canals. The Magnetic had a particular problem in reaching London. Their solution was to run buried cables along major roads. Ten wires were installed in this way along the route London–Birmingham–Manchester–Glasgow–Carlisle.
Wires on poles do not need to be electrically insulated (although they may have a protective coating). This is not so with underground lines. These must be insulated from the ground and from each other. The insulation must also be waterproof. Good insulating materials were not available in the early days of telegraphy, but after William Montgomerie sent samples of gutta-percha to Europe in 1843, the Gutta Percha Company started making gutta-percha insulated electrical cable from 1848 onwards. Gutta-percha is a natural rubber that is thermoplastic, so is good for continuous processes like cable making. Synthetic thermoplastic insulating material was not available until the invention of polyethylene in the 1930s, and it was not used for submarine cables until the 1940s. On cooling, gutta-percha is hard, durable, and waterproof, making it suitable for underground (and later submarine) cables. This was the cable chosen by the Magnetic for its underground lines.
In Ireland too, the Magnetic developed an extensive network of underground cables. In 1851, in anticipation of the submarine cable connection being laid to Donaghadee, the Magnetic laid an underground cable to Dublin. Once the submarine link was in place, Dublin could be connected to London via Manchester and Liverpool. In the west of Ireland, by 1855 they had laid cables that stretched down the entire length of the island on the route Portrush–Sligo–Galway–Limerick–Tralee–Cape Clear. The relationship of the Magnetic with Irish railway companies was the exact opposite of that in Britain. The Magnetic obtained exclusive agreements with many railways, including in 1858 with the Midland Great Western Railway. In Ireland, it was the Electric's turn to be forced on to the roads and canals.
In 1856, the Magnetic discovered that the insulation of cables laid in dry soil was deteriorating. This was due to the essential oils in the gutta-percha evaporating, leaving just a porous, woody residue. Bright tried to overcome this by reinjecting the oils, but with limited success. This problem was the main driver for acquiring the unprofitable British Telegraph Company—so that the Magnetic inherited their overhead cable rights. From this point, the Magnetic avoided laying new underground cables except where it was essential to do so.
== Atlantic cable ==
Brett started the fundraising for the Atlantic Telegraph Company's project to build the transatlantic telegraph cable at the Magnetic's Liverpool headquarters in November 1856. Brett was one of the founders of this company and the Magnetic's shareholders were inclined to invest because they expected that the transatlantic traffic would mean more business for the Magnetic's Irish lines. This was because the landing point for the cable was in Ireland and traffic would therefore have to pass through the Magnetic's lines.
== Social issues ==
The Magnetic was an early advocate of employing women as telegraph operators. They were paid according to the speed with which they could send messages, up to the maximum of ten shillings per week when 10 wpm was achieved. It was a popular job with unmarried women who otherwise had few good options.
== Notes ==
== References ==
== Bibliography ==
Ash, Stewart, "The development of submarine cables", ch. 1 in, Burnett, Douglas R.; Beckman, Robert; Davenport, Tara M., Submarine Cables: The Handbook of Law and Policy, Martinus Nijhoff Publishers, 2014 ISBN 9789004260320.
Barty-King, Hugh, Girdle Round the Earth: The Story of Cable and Wireless and Its Predecessors to Mark the Group's Jubilee, 1929–1979, London: Heinemann, 1979 OCLC 6809756, ISBN 0434049026.
Bowers, Brian, Sir Charles Wheatstone FRS: 1802-1875, Institution of Electrical Engineers, 2001 ISBN 0852961030.
Beauchamp, Ken, History of Telegraphy, Institution of Engineering and Technology, 2001 ISBN 0852967926.
Bright, Charles Tilston, Submarine Telegraphs, London: Crosby Lockwood, 1898 OCLC 776529627.
Bright, Edward Brailsford; Bright, Charles, The Life Story of the Late Sir Charles Tilston Bright, Civil Engineer, Cambridge University Press, 2012 ISBN 1108052886 (first published 1898).
Cookson, Gillian, A Victorian Scientist and Engineer: Fleeming Jenkin and the Birth of Electrical Engineering, Ashgate, 2000 ISBN 0754600793.
Hagen, John B., Radio-Frequency Electronics, Cambridge University Press, 2009 ISBN 052188974X.
Haigh, Kenneth Richardson, Cableships and Submarine Cables, Adlard Coles, 1968 OCLC 497380538.
Hills, Jill, The Struggle for Control of Global Communication, University of Illinois Press, 2002 ISBN 0252027574.
Hunt, Bruce J., The Maxwellians, Cornell University Press, 2005 ISBN 0801482348.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David and Charles, 1973 OCLC 655205099.
Mercer, David, The Telephone: The Life Story of a Technology, Greenwood Publishing Group, 2006 ISBN 031333207X.
Morse, Samuel, "Examination of the Telegraphic Apparatus and the Processes in Telegraphy", in, Blake, William Phipps (ed), Reports of the United States Commissioners to the Paris Universal Exposition, 1867, vol. 4, US Government Printing Office, 1870 OCLC 752259860.
Newell, E.L., "Loading coils for ocean cables", Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 76, iss. 4, pp. 478–482, September 1957.
Roberts, Steven, Distant Writing, distantwriting.co.uk, ch. 5, "Competitors and allies", archived 1 July 2016.
Shaffner, Taliaferro Preston, The Telegraph Manual, Pudney & Russell, 1859.
Shaffner, Taliaferro Preston, "Magneto-electric battery", Shaffner's Telegraph Companion, vol. 2, pp. 162–167, 1855 OCLC 191123856. See also, Catalogue of the Special Loan Collection of Scientific Apparatus at the South Kensington Museum, p. 301, 1876.
Smith, Willoughby, The Rise and Extension of Submarine Telegraphy, London: J.S. Virtue & Co., 1891 OCLC 1079820592.
Wheen, Andrew, Dot-Dash to Dot.Com: How Modern Telecommunications Evolved from the Telegraph to the Internet, Springer, 2011 ISBN 1441967605.
"The progress of the telegraph: part VII", Nature, vol. 12, pp. 110–113, 10 June 1875.
== External links ==
Henley's magneto electric double needle telegraph, 1848–1852 at the Science Museum, London. | Wikipedia/English_and_Irish_Magnetic_Telegraph_Company |
The Russian–American Telegraph, also known as the Western Union Telegraph Expedition and the Collins Overland Telegraph, was an attempt by the Western Union Telegraph Company from 1865 to 1867 to lay a telegraph line from San Francisco, California, to Moscow, Russia.
The route of the $3,000,000 undertaking (equivalent to $61.6 million today) was intended to run from California via Oregon, Washington Territory, the Colony of British Columbia and Russian America, under the Bering Sea, and across the breadth of the Eurasian continent to Moscow, where lines would communicate with the rest of Europe. It was proposed as a much longer alternative to the challenge of long, deep underwater cables in the Atlantic, having only to cross the comparatively narrow Bering Strait underwater between North America and Siberia.
Laying the cable across Siberia proved more difficult than expected. Meanwhile, Cyrus West Field's transatlantic cable was successfully completed, leading to the abandonment in 1867 of the trans-Russian effort. A Government of Canada historic plaque adds these specifics:
"In 1867 ... construction ceased at Fort Stager at the confluence of the Kispyap and Skeena rivers. The section from New Westminster to the Cariboo was bought by the Canadian Government in 1880."
In spite of the project's economic failure, many regard aspects of the effort a success on the weight of various benefits the exploration brought to the regions that were traversed. To date, no entities have attempted a communications cable across the Bering Sea, with all extant submarine communications cables that travel westbound from North America following more southerly routes across much longer stretches of the North Pacific Ocean, connecting to Asia in Japan and then on to the Asian mainland.
== Rival plans ==
By 1861 the Western Union Telegraph Company had linked the eastern United States by electric telegraph all the way to San Francisco. The challenge then remained to connect North America with the rest of the world.
Working to meet that challenge were two telegraph pioneers: Cyrus West Field, seeking to lay an undersea telegraph cable west to east across the Atlantic from North America, and Perry Collins, proposing a link in the opposite direction, overland from the west coast of North America across the Bering Strait and Siberia to Moscow.
Field's Atlantic Telegraph Company laid the first transatlantic cable across the Atlantic Ocean in 1858. However, it had broken three weeks afterwards and attempts to repair it had been unsuccessful.
Meanwhile, entrepreneur Perry Collins visited Russia and took note that it was making good progress extending its telegraph lines eastwards from Moscow over Siberia.
Upon his return to the States, Collins approached Hiram Sibley, head of the Western Union Telegraph Company, with the idea of an overland telegraph line that would run through the Northwestern states, the colony of British Columbia and Russian Alaska. Together, they worked on promoting the idea and obtained considerable support in the US, London and Russia.
== Preparations ==
On July 1, 1864, the American president Abraham Lincoln granted Western Union a right of way from San Francisco to the British Columbia border and assigned them the steamship Saginaw from the US Navy.
The George S. Wright and the Nightingale, a former slave ship, were also put into service, as well as a fleet of riverboats and schooners.
To supervise the construction, Collins chose Colonel Charles Bulkley, who had been the Superintendent of Military Telegraphs. Being an ex-military man, Bulkley divided the work crews into "working divisions" and an "Engineer Corps."
Edward Conway was made the head of the project's American route and British Columbia sections. Franklin Pope was assigned to Conway and given the responsibility for the exploring of British Columbia. The task of exploring Russian America went to the Smithsonian naturalist Robert Kennicott. In Siberia, the construction and exploration was under the charge of Russian nobleman Serge Abasa. Assigned to him were Collins Macrae, George Kennan, and J. A. Mahood.
Exploration and construction teams were divided into groups: one was in British Columbia, another worked around the Yukon River and Norton Sound with headquarters at St. Michael, Alaska, a third explored the area along the Amur River in Siberia, and a fourth group of about forty men was sent to Port Clarence to build the line that was to cross the Bering Strait to Siberia.
The Colony of British Columbia gave the project its full and enthusiastic support, allowing the materials for the line to be brought in free of duties and tolls. Chosen as the British Columbia terminus, New Westminster gloated over its triumph over its rival, Victoria, and it was predicted in the British Columbian newspaper that "New Westminster, traduced and dreaded by its jealous neighbor, will now be at the centre of all these great systems." The right of way for the telegraph line followed the shoreline west from the US border, then traversed the high ground of what is now White Rock and South Surrey to the Nicomekl River. From Mud Bay the telegraph line followed the Kennedy Trail northwest across Surrey and North Delta to the Fraser River.
At Brownsville, a cable was laid across the river to New Westminster. The surveying in British Columbia had started before the line reached New Westminster on March 21, 1865. Edward Conway had walked to Hope and was dismayed by the difficulty of the terrain. In response to Conway's concerns, the Colony of British Columbia agreed to build a road from New Westminster to Yale where it would meet the newly completed Cariboo Road. The telegraph company's only responsibility would be to string wires along it.
== Route through Russian America ==
Work began in Russian America in 1865, but initially little progress was made. Contributing to this lack of success was the climate, the terrain, supply shortages and the late arrival of the construction teams. Nevertheless, the entire route through Russian America was surveyed by the fall of 1866. Rather than waiting until spring, as was the usual practice, construction began and continued through that winter.
Many of the Western Union workers were unaccustomed to severe northern winters, and working in frigid conditions made erecting the line a difficult experience. Fires had to be lit to thaw out the frozen ground before holes could be dug to place the telegraph poles. For transportation and to haul the supplies, the only option the work crews had was to use teams of sled dogs.
When the Atlantic cable was successfully completed and the first transatlantic message to England was sent in July 1866, the men in the Russian American division were not aware of it until a full year later.
By then telegraph stations had been built, thousands of poles were cut and distributed along the route, and over 45 mi (72 km) of line had been completed in Russian America. Despite the fact that so much progress had been made, in July 1867 the work was officially ceased.
== Route through British Columbia ==
When that section of the line reached New Westminster, British Columbia, in the spring of 1865, the first message it carried was of the April 15 assassination of Abraham Lincoln.
In May 1865 construction began from New Westminster to Yale and then along the Cariboo Road and the Fraser River to Quesnel. Winter brought a halt to construction, but resumed in the spring with 150 men working northwest from Quesnel.
In 1866, the work progressed rapidly in that section, fifteen log telegraph cabins had been built and line had been strung 400 mi (640 km) from Quesnel, reaching the Kispiox and Bulkley Rivers. The company's sternwheeler, Mumford, traveled 110 mi (180 km) up the Skeena River from the Pacific Coast three times that season, successfully delivering 150 mi (240 km) of material for the telegraph line and 12,000 rations for its workers.
The line passed Fort Fraser and had reached the Skeena River (creating the settlement of Hazelton) when it was learned that Cyrus West Field had successfully laid the transatlantic cable on July 27.
In British Columbia, construction of the overland line was halted on February 27, 1867, as the whole project was now deemed obsolete.
Nevertheless, left behind in British Columbia was a usable telegraph system from New Westminster to Quesnel, which later would be run to the Cariboo Gold Rush town of Barkerville, and a trail that had been beaten through what had largely been uncharted wilderness.
In addition, the expedition left behind a vast store of supplies that were put to good use by some of the First Nations inhabitants. Near Hazelton, Colonel Bulkley had been impressed by the bridge the Hagwilgets had built across the Bulkley River, but was reluctant to let his work party cross it until it had been reinforced with cable.
After the project was abandoned, the Hagwilgets at Hazelton built a second bridge from cable that the company had left behind. Both bridges were considered marvels of engineering and were credited as being "one of the romances of bridge building."
== Legacy ==
In the long run, the telegraph expedition, while an abject economic failure, provided a further means by which America was able to expand its Manifest Destiny beyond its national boundaries and may have precipitated the US purchase of Alaska. The expedition was responsible for the first examination of the flora, fauna and geology of Russian America, and the members of the telegraph project were able to play a crucial role in the purchase of Alaska by providing valuable data on the territory.
The Colony of British Columbia meanwhile could further explore, colonize and communicate with its northern landscapes beyond what had been done by the Hudson's Bay Company.
Many of the towns in Northwestern British Columbia can trace their initial European settlement back to the Collins Overland Telegraph. Some examples of these are Hazelton, Burns Lake, Telkwa and Telegraph Creek.
The expedition also laid a foundation for the construction of the Yukon Telegraph line which was built from Ashcroft to Telegraph Creek and beyond to Dawson City, Yukon in 1901.
Portions of the telegraph route became part of the Ashcroft trail used by gold seekers during the Klondike gold rush. Of all the trails used by the stampeders the Ashcroft was among the harshest. Of the over fifteen hundred men and three thousand horses that left Ashcroft, British Columbia, in the spring of 1898, only six men and no horses reached the goldfields.
Walter R. Hamilton was among those who completed the route. In his book The Yukon Story he describes the state of the trail thirty years after it was abandoned: "All evidence of the right-of-way and poles were gone, but in a few instances we found pieces of old telegraph wire imbedded several inches in the spruce, jack-pine and poplar trees that had long-since grown up and over the wires that touched them. I found one of the old green glass insulators still attached to a galvanized wire. I kept it as a souvenir but lost it later with a camera and some clothing when a scow was nearly overturned on Lake Laberge."
== Places named for the expedition or its members ==
Mount Pope in British Columbia was named for Franklin Pope, who was the assistant engineer and chief of explorations, responsible for surveying the 1,500-mile section from New Westminster to the Yukon River.
Kennecott, Alaska and the Kennicott Glacier are named for the expedition's naturalist, Robert Kennicott. Although Kennicott died on the expedition, on May 13, 1866, his work was publicized by W. H. Dall, another naturalist hired by Robert Kennicott. This publication and the publicity about Kennicott's death at the age of thirty-one helped Secretary of State William H. Seward convince Congress to purchase Alaska from Russia in 1867.
The Bulkley River, Bulkley Valley, Bulkley Mountains (now named the Bulkley Ranges) and the settlement of Bulkley House in British Columbia are named after Colonel Charles Bulkley. The name of the Bulkley-Nechako Regional District, a regional government in that area, is derived from the geographic names.
Burns Lake was named after Michael Byrnes, a scout for the Collins Overland Telegraph scheme, who explored the route from Fort Fraser to Skeena Forks (Hazelton, BC).
Decker Lake was named after Stephen Decker, construction foreman in British Columbia.
The Telegraph Range in the Ootsa Lake area is one of several landforms whose names are associated with the project.
== Books and memoirs written about the expedition ==
Several major works are available documenting the expedition. The scientific travelogue by Smithsonian scientist W. H. Dall is perhaps the most referenced, while an English travelogue by Frederick Whymper provides additional information. Among the personal accounts by members of the expedition is a diary of Franklin Pope.
George Kennan and Richard Bush both wrote of the difficulties they encountered during the expedition. Kennan would later become notable for influencing American opinion of the Russian Empire: originally very much in favour of Russian settlement of the far East, he changed his mind on visiting the exile camps in the 1880s. His experiences on the expedition were recounted in Tent Life in Siberia: Adventures Among the Koryaks and Other Tribes in Kamchatka and Northern Asia. Richard Bush, aiming to emulate Kennan's success, wrote "Reindeer, Dogs and Snowshoes".
All documents and books relating to the expedition are of historical value, not only from a travel and discovery perspective but also from a cultural studies standpoint. The ethnocentric descriptions of aboriginal peoples in the places now known as British Columbia, Yukon Territory and Alaska, as well as the general region of Eastern Siberia, typify those attitudes of the time. Telegraph records provide evidence for native land claims such as those of the Gitxsan Nation of northern British Columbia. Dall's records have helped locate Smithsonian exhibits returned to their original native domiciles.
== Notes ==
== Further reading ==
Dwyer, John (2001). To Wire the World: Perry M. Collins and the North Pacific Telegraph Expedition. Westport, Conn.: Praeger. ISBN 0-275-96755-7.
Neering, Rosemary (2000). Continental Dash: The Russian-American Telegraph. Ganges, BC: Horsdal & Schubart. ISBN 0-920663-07-9.
Stuck, Hudson (1917). Voyages on the Yukon.
Dall, William H (1898). The Yukon Territory: The Narrative of W.H. Dall, Leader of the Expedition to Alaska in 1866–1868. ISBN 9780665165184.
Kennan, George (1870; reprint 1986). Tent Life in Siberia: Adventures Among the Koryaks and Other Tribes in Kamchatka and Northern Asia. Available at Project Gutenberg. ISBN 0-87905-254-6.
== External links ==
Media related to Collins Overland telegraph at Wikimedia Commons
Finding Aid to Western Union Telegraph Expedition Collection, 1865–1867 | Wikipedia/Western_Union_Telegraph_Expedition |
The printing telegraph was invented by Royal Earl House in 1846. House's telegraph could transmit around 40 instantly readable words per minute, but was difficult to manufacture in bulk. The printer could copy and print out up to 2,000 words per hour. This invention was first put in operation and exhibited at the Mechanics Institute in New York.
== Printing telegraph advancements ==
House's Type Printing Telegraph of 1849 was Royal Earl House's second and much improved type-printing instrument and was widely used on lines on America's east coast from 1850. David Hughes's telegraph devices, which also had piano-style keyboards, were very popular in France, where there were likely many more piano and harpsichord players than telegraphers.
Early stock ticker machines are also examples of printing telegraphs.
== Operation ==
Input into the device was through two 28-key piano-style keyboards. Each piano key represented a letter of the alphabet and, when pressed, electrically transmitted that letter as code to a receiving printing telegraph. The requested letter would then be printed by the recipient telegraph.
A "shift" key allowed an alternative character to be assigned to each key, for example a digit or punctuation mark instead of a letter. A 56-character typewheel at the sending end was synchronised to run at the same speed and to maintain the same angular position as a similar wheel at the receiving end. When the key corresponding to a particular character was pressed at the home station, it actuated the typewheel at the distant station just as the same character moved into the printing position, in a way similar to the daisy wheel printer. It was thus an example of a synchronous data transmission system.
== Advantages ==
The benefit of the printing telegraph is that it allows the operator to use a piano-style keyboard to directly input the text of the message. The receiver would then receive the instantly readable text of the message on a paper strip. This is in contrast to telegraphs that used Morse code dots and dashes, which needed to be converted into readable text.
"The Western Union Telegraph Company is now putting in a new patent telegraph printing machine on the Chicago line and hereafter dispatches transmitted over this line will be printed as they are received at the office in this city. The machine is furnished with keys similar to a piano, each key representing a letter in the alphabet, and by a peculiar mechanical arrangement each letter is printed as it is received at the office. Thus all mistakes arising from blind chirography will be thoroughly appreciated by our citizens. The machine will be put into operation this afternoon."
== Disadvantages ==
Printing telegraphs were quite temperamental and suffered frequent breakdowns. Transmission speed was also much slower than the Morse system. The complexity of the original House device meant that production was limited. An improved version was designed by George Phelps. The Globotype was invented by David McCallum as a response to problems with the printing telegraph.
== Key layouts ==
Various layouts were produced to improve the efficiency of the keyboard system and accommodate several other alphabets.
== See also ==
Teleprinter
== References == | Wikipedia/Printing_telegraph |
The Submarine Telegraph Company was a British company which laid and operated submarine telegraph cables.
Jacob and John Watkins Brett formed the English Channel Submarine Telegraph Company to lay the first cable across the English Channel. An unarmoured cable with gutta-percha insulation was laid in 1850. The recently introduced gutta-percha was the first thermoplastic material available to cable makers and was resistant to seawater. This first unarmoured cable was a failure and was soon broken either by a French fishing boat or by abrasion on the rocks off the French coast.
The Bretts formed a new company, the Submarine Telegraph Company, and laid a new cable in 1851. This cable had multiple conductors and iron wire armouring. Telegraph communication with France was established for the first time in October of that year. This was the first undersea telegraph cable to be put in service anywhere in the world.
The company continued to lay, and operate, more cables between England and the Continent until they were nationalised in 1890. Through a series of mergers they ultimately became part of Cable & Wireless (CW). The Times commemorated the 50th anniversary of the cable in 1900; CW and the Science Museum, London did the same on the 100th anniversary in 1950.
== History ==
In 1847, the Bretts obtained a concession from the French government to lay and operate a submarine telegraph cable across the Channel. The concession lapsed without anything being achieved. A proof of principle was conducted in 1849 by Charles Vincent Walker of the South Eastern Railway using gutta-percha insulated cable. Gutta-percha, recently introduced by William Montgomerie for making medical equipment, was a natural rubber that was found to be ideal for insulating ocean cables. Walker laid two miles (3.2 km) of the cable from the ship Princess Clementine off the coast of Folkestone. With the other end connected to the railway telegraph lines, he successfully sent telegraph messages from the ship to London. At the conclusion of the experiment, South Eastern Railway reused the cable in a wet railway tunnel.
In the same year, the Bretts had the Channel concession renewed for ten years, but only on condition that communication was established by September 1850. The English Channel Submarine Telegraph Company was formed to carry out this task. The Gutta Percha Company was contracted to manufacture the cable. A paddle tug, Goliath, was chartered for cable laying. Goliath transported the cable from the manufacturing plant in Greenwich to Dover in short lengths which were then spliced together onto a single drum.
Winding the cable onto the drum took some time. The individual lengths were retested in water at Dover quayside and repaired as necessary before joining on the drum. Unattended cable suffered from the attentions of souvenir hunters who cut off pieces, or stripped the insulation to confirm to themselves that there was copper inside. It was difficult to wind the cable evenly on the drum because the joints caused bulges and because the manufacturing process did not produce perfectly regular cable. Cotton packing and wooden slats were used to smooth out the unevenness, slowing the process even further.
Goliath laid the cable between Dover and Cap Gris Nez in France on 28 August 1850. Unlike later submarine cables, this one had no armouring to protect it. The single copper wire was protected only by the layer of gutta-percha insulation around it. This made it very light, and it was necessary to attach periodic lead weights to make it sink. Messages sent across the cable were unintelligible due to dispersion of the signal, a phenomenon which was not understood at the time, and would be an even greater problem to the first transatlantic telegraph cable. Dispersion was a problem not fully solved on submarine cables until loading started to be used at the beginning of the 20th century. Both ends of the communication assumed that the messages did not make sense because the other end was in the midst of drunken celebrations of their success. It was decided to try again in the morning. During the night the cable failed. Initial reports stated that the cable was damaged where it passed over rocks near Cap Gris Nez, but later French fishermen were blamed. The cable was never put back into service. While it is certainly true that French fishing boats recovered lengths of the cable hauled up in their nets, and in some cases cut the cable to free their gear, it remains unclear if this was the initial cause of the failure. A story circulated much later (from 1865) that the fisherman who initially cut the cable thought it was a new species of seaweed with gold in its centre. Although this story is still found in modern sources, it is likely apocryphal.
=== First working undersea cable ===
The Bretts managed to renew their concession with a new date for establishing communication of October 1851. The company was reformed as the Submarine Telegraph Company in order to raise new capital. The largest investor was railway engineer Thomas Russell Crampton, who was put in charge of ordering the new cable. Crampton specified a much improved cable. The core of the new cable, again made by the Gutta Percha Company, was to have four conductors, substantially increasing the potential traffic, and insulated with gutta-percha as before. However, the four separate insulated conductors were not laid into a single cable by the Gutta Percha Company. This task was given to a wire-rope making company, Wilkins and Wetherly, who armoured the cable with an outer layer of helically laid iron wires. Production was halted for a time due to a dispute with R.S. Newall and Company of Gateshead. Newall had a patent for manufacturing wire rope with a soft core to make it more flexible, and claimed that this submarine cable breached that patent. The issue was resolved by allowing Newall to take over production of the cable at Wilkins and Wetherly's Wapping premises.
The completed cable was 25 nautical miles (46 km; 29 mi) long, far longer and heavier than anything the rope makers had previously manufactured, and there was some difficulty getting the cable out of the Wapping premises. There was no easy access and the adjacent business refused permission to cross their property, thinking that electrical apparatus would invalidate their fire insurance. However, a neighbouring business granted access, but the cable still had to be manually hauled to a wharf on the River Thames. This was a difficult task which had to frequently be halted to tie back protruding broken iron wires. At the River Thames, the cable was loaded on to the Blazer, a hulk loaned to the Submarine Telegraph Company by the government.
The cable was laid between South Foreland and Sangatte by Blazer under tow from two tugs on 25 September 1851. The cable ran out a mile (1.6 km) before reaching Sangatte. As a temporary measure, a length of unarmoured cable used for the underground link from Sangatte to Calais was spliced on to enable the ocean cable to be landed. The telegraph station on the English side was in a private house in Dover. At first, they could not contact France, but soon discovered that the problem was not with the submarine cable. Rather a joint had been omitted in the underground cable between South Foreland and Dover. Telegraph communication between Britain and France was established for the first time on 15 October.
In October, the steam tug Red Rover was tasked with replacing the temporary cable with a new section of armoured cable. Red Rover's first attempt was abandoned after running into bad weather. Trying again, it was discovered that there was no one on board who knew how to find Sangatte. They arrived a day late and missed their rendezvous with HMS Widgeon which was tasked with making the splice at sea. The cable was finally landed and the splice made aboard Widgeon on 19 October.
The line was finally open to the public on 19 November 1851. The occasion was marked by setting off an electrical fuse over the telegraph from Dover to fire a cannon in Calais. In reply, Calais fired a cannon in Dover Castle. The opening had again missed the French government deadline, but the concession was nevertheless renewed on 23 October for ten years from that date. The cable remained in service with the Submarine Telegraph Company for the lifetime of the company. This was the first undersea submarine cable put into service. Werner von Siemens had used gutta-percha-insulated cable to cross the Rhine in 1847 and Kiel Harbour in 1848, but this was the first working undersea cable to link two countries.
=== Manufacturing problems ===
Early submarine cables had numerous quality problems. The insulation was not applied evenly leading to variations in the cable diameter and shape. The conductor was not held on the centreline of the insulation, in places coming close to the surface making it easy for the conductor to become exposed. The insulation was full of air pockets due to the gutta-percha being applied in one thick coat instead of several thinner coats. All these issues with the insulation caused inconsistencies in the electrical properties of the cable.
Quality of the conductor was also inconsistent. The diameter of the copper was variable, again leading to inconsistent electrical properties. There was little experience with annealing long lengths of copper. This resulted in inconsistent mechanical properties with brittle portions in the wire.
An even bigger problem was caused by the joints. The copper wire was supplied in short, inconsistent, lengths. Initially on the 1850 cable, joints were attempted by brazing a scarf joint with hard solder. However, the heat from the blowpipe softened the gutta-percha which became plastic and dripped off the cable. An alternative method was therefore used. Two inches of insulation was stripped from each end, the exposed wires twisted together and soft soldered. Sheets of gutta-percha heated to a plastic state were then wrapped around the joint and clamped in a mould. This resulted in a cigar-shaped bulge around the joint which was undesirable for cable laying.
=== Nationalisation ===
The Submarine Telegraph Company went on to lay many more cables between Britain and the continent. In 1870 the inland telegraphs in Britain were nationalised, and in 1890 the cables and other assets of the Submarine Telegraph Company were taken over by the General Post Office.
== List of cables laid ==
* Until 1863, all cable cores were made by the Gutta Percha Company as they had a monopoly on gutta-percha cable. In 1863, they merged with cable manufacturer Glass, Elliot & Company to form the Telegraph Construction and Maintenance Company.
== References ==
== Bibliography ==
Glover, Bill; Burns, Bill, "The Submarine Telegraph Company", History of the Atlantic Cable & Undersea Communications, accessed and archived 5 August 2020.
Haigh, Kenneth Richardson, Cableships and Submarine Cables, Adlard Coles, 1968 OCLC 497380538.
Huurdeman, Anton A., The Worldwide History of Telecommunications, Wiley, 2003 ISBN 0471205052.
Kieve, Jeffrey L., The Electric Telegraph: A Social and Economic History, David & Charles, 1973 OCLC 655205099.
Newell, E. L., "Loading coils for ocean cables", Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics, vol. 76, iss. 4, pp. 478–482, September 1957.
Smith, Willoughby, The Rise and Extension of Submarine Telegraphy, J.S. Virtue & Company, 1891 OCLC 1079820592. | Wikipedia/Submarine_Telegraph_Company |
A telecom network protocol analyzer is a protocol analyzer used to analyze switching and telecommunications signaling protocols between different nodes in PSTN or Mobile telephone networks, such as 2G or 3G GSM networks, CDMA networks, WiMAX and so on.
In a mobile telecommunication network it can analyze the traffic between MSC and BSC, BSC and BTS, MSC and HLR, MSC and VLR, VLR and HLR, and so on.
Protocol analyzers are mainly used for performance measurement and troubleshooting. These devices connect to the network to calculate key performance indicators to monitor the network and speed-up troubleshooting activities.
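As a toy illustration of the KPI idea, the following Python sketch counts decoded signalling events captured from a link and derives a success rate; the message names are invented placeholders, not the message types of any real protocol stack:

```python
from collections import Counter

# Toy KPI calculation: count decoded signalling events captured on a link
# and derive a success-rate indicator. "SETUP"/"CONNECT" are placeholder
# event names for this example only.

events = ["SETUP", "CONNECT", "SETUP", "RELEASE", "SETUP", "CONNECT"]

counts = Counter(events)
attempts, successes = counts["SETUP"], counts["CONNECT"]
print(f"call success rate: {successes / attempts:.0%}")  # -> 67%
```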
== External links ==
GL Communications GSM Protocol Analyzer Overview
Tektronix Protocol Analyzer Overview
Utel Systems - Network Monitoring | Wikipedia/Telecom_network_protocol_analyzer |
The transformer ratio arm bridge or TRA bridge is a type of bridge circuit for measuring electronic components, using a.c. It can be designed to work in terms of either impedance or admittance. It can be used on resistors, capacitors and inductors, measuring minor as well as major terms, e.g. series resistance in capacitors. It is probably the most accurate type of bridge available, being capable of the precision needed, for example, when checking secondary component standards against national standards.
Like all bridges, the TRA bridge involves comparing an unknown component against a standard. Like all a.c. bridges, it requires a signal source and a null detector. The accuracy of this class of bridge depends on the ratio of the turns on one or more transformers. A notable advantage is that normal stray capacitance across the transformer, including lead capacitance, may affect the sensitivity of the bridge but does not affect its measuring accuracy.
== History ==
The invention of the TRA bridge is credited to Alan Blumlein in his UK patent 323037 (published 1929), and this class of bridge is sometimes known as a Blumlein bridge, although links to earlier types of bridge can be seen. Blumlein's first patent was for a capacitance-measuring bridge: Fig. 1 is redrawn from one of the diagrams in the patent.
Subsequently the ratio arm principle was applied more generally, to other classes of electronic components and at frequencies up to r.f., and with many variations in how the unknown component was connected to the transformer or transformers.
Blumlein himself was responsible for several further related patents. He made his first bridge while employed by the British company Standard Telephones and Cables, which did not manufacture test instruments. TRA bridges have since been made by many specialist manufacturers, including Boonton, ESI (formerly Brown Engineering and BECO), General Radio, Marconi Instruments, H. W. Sullivan (now part of Megger) and Wayne Kerr.
== Principle ==
One possible configuration using two transformers is shown in Fig. 2. (The two transformers allow both the signal source and the null detector to be isolated from the measured component.) The unknown Z_x and the standard Z_s are both driven by T1, feeding currents to the primary of T2. Because of the winding sense of the two halves of the T2 primary, these currents are in antiphase.
If Z_x and Z_s have the same value and are fed from the same tap on T1, the antiphase currents cancel out perfectly and the null detector will show balance. When Z_x and Z_s are unequal, balance can be approached by connecting Z_s to a different tap on the T1 secondary. An exact balance may be achieved by using two or more standards connected to suitable taps.
Fig. 2 shows Z_x and Z_s as single components. Fig. 3 shows separate standards for conductance G and susceptance B, allowing minor as well as major terms of the unknown admittance Y_x to be resolved. The standards are shown as variable components connected to fixed taps on the T1 secondary, but bridges can equally be made with fixed standards connected to variable taps.
The unknown component too may be connected to a tap part-way along the T1 secondary. Also the numbers of turns on the two arms of the T1 secondary are not necessarily equal, and likewise those on the T2 primary. Combinations of these various options offer great flexibility of construction, allowing measurements over a wide range of values while using only a small number of standards – essentially one per significant figure of the resistance or conductance value and one per significant figure of the reactance or susceptance value.
In Fig. 3, at balance

    Y_x = Y_s (e_s n_s) / (e_x n_x)
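As a worked illustration of this balance condition (reading e_s and e_x as the T1 tap voltages feeding the standard and the unknown, and n_s and n_x as the corresponding T2 primary turns), here is a minimal Python sketch; all numerical values are invented for the example and do not correspond to any particular instrument:

```python
# Minimal sketch of the TRA bridge balance condition. The tap voltages (e)
# and turns counts (n) below are hypothetical example values.

def unknown_admittance(y_s: complex, e_s: float, n_s: int,
                       e_x: float, n_x: int) -> complex:
    """Admittance of the unknown, inferred from the standard at balance."""
    return y_s * (e_s * n_s) / (e_x * n_x)

# A 10 uS conductance standard fed from a tap at one tenth of the voltage
# applied to the unknown, with equal T2 turns, balances a 1 uS unknown
# (i.e. a 1 Mohm resistor):
print(unknown_admittance(y_s=10e-6, e_s=0.1, e_x=1.0, n_s=100, n_x=100))
# -> 1e-06
```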
The bridge may be balanced (nulled) by manual switching of the standards, but "autobalance" bridges, in which the switching is wholly or partially automated, are also made.
== Detailed example ==
The operation of a universal TRA bridge is best explained on the basis of an actual product, the Wayne Kerr B221 bridge, dating from the 1950s. It used valve (vacuum tube) technology. The following description is simplified.
The bridge is based on two transformers (Fig. 4): T1 is described as the voltage transformer, and is driven by the signal source in the usual way. T2, the "current transformer", compares the two arms of the circuit – one for the unknown Z_x and one for the various standards – and drives the null detector, which takes the form of a phase-sensitive detector with adjustable sensitivity, feeding two magic eyes. (Later versions of the instrument, with transistorised circuitry, used a moving-coil meter as the display for the null detector.)
Taps at 1, 10, 100 and 1000 turns are shown on the T1 secondary and on T2 primary P2a. Four-way selector switches are shown, but the tap selections are actually combined on a single switch to give seven measuring ranges. Full-scale limits at full accuracy (specified as ±0.1%) are 100 MΩ, 11.1 pF and 10 kH for the least sensitive range, and 100 Ω, 11.1 μF and 10 mH for the most sensitive range. Each range can be extended in the direction of higher resistance, higher inductance or lower capacitance at reduced accuracy. The voltage applied by T1 to Z_x is about 30 V r.m.s. on the least sensitive range, and 30 mV on the two most sensitive.
The most significant figures of the major and minor components of Z_x are obtained by switching the resistance standard Rs1 and the capacitance standard Cs1 to one of taps 0 to 10 on the secondary of T1. The second significant figures are obtained by switching Rs2 and Cs2 in the same way. Continuous ("vernier") fine adjustment to give third and fourth significant figures is provided by Rs3 and Cs3. Rs3 and Cs3 are shown connected to tap 10 on T1, but in practice these two standards may be connected to any convenient tap, as appropriate to their values.
Primary P2b on T2 provides 100-turn taps of both polarities. Switching the capacitance standards between the positive and negative taps selects between capacitance measurements and inductance measurements. Similarly the polarity of the resistance standard can be reversed, so that measurements can be made in all four quadrants.
Besides the main balance controls described above, the front panel of the instrument has zero adjustments for both resistance and capacitance. The inductive elements of the wire-wound resistance standards are compensated by trimming capacitors. All these and other trimming components are omitted in Fig. 4.
This bridge measures conductance and susceptance in parallel. The susceptance reading is displayed as capacitance, and inductance must be calculated as a reciprocal using

    L_x = 1 / (ω² C_x)
To simplify the arithmetic, the bridge operates at 1592 Hz so that ω² is 10⁸ s⁻². The readings can be converted to resistance and capacitance in series. On the most sensitive ranges, readings must be adjusted to take account of lead resistance and inductance.
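The convenience of this choice of frequency is easy to check numerically. A minimal Python sketch (the example reading is invented) converts a capacitance reading into the equivalent inductance:

```python
import math

# Operating frequency chosen so that omega^2 is almost exactly 1e8 s^-2,
# making the reciprocal conversion a simple decimal shift.
F_BRIDGE = 1592.0  # Hz
OMEGA_SQ = (2 * math.pi * F_BRIDGE) ** 2  # ~1.0006e8 s^-2

def inductance_from_capacitance_reading(c_reading_farads: float) -> float:
    """Convert the bridge's capacitance reading to the equivalent inductance."""
    return 1.0 / (OMEGA_SQ * c_reading_farads)

# Example: a reading of 1000 pF corresponds to almost exactly 10 H.
print(inductance_from_capacitance_reading(1000e-12))  # ~9.994 H
```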
The external link allows two-, three- or four-terminal measurements to be made. Besides conventional component measurements, the bridge can also be used to measure attenuator performance, transformer turns ratio and the effectiveness of transformer screening. Subject to conditions, in-situ (in-circuit) measurement of a component is possible. With additional external components, capacitors with a polarising voltage or inductors with a standing direct current can be measured.
An optional low-impedance adaptor extends the measuring range downwards by another four orders of magnitude, giving full-scale readings down to 10 mΩ, 5 F and 1 μH at ±1% basic accuracy.
== See also ==
Capacitance meter
LCR meter
== References ==
== Further reading ==
Henry P. Hall, A History of Impedance Measurements, based on a draft for an unpublished book. | Wikipedia/Transformer_ratio_arm_bridge |
In electrical engineering, a function generator is usually a piece of electronic test equipment or software used to generate different types of electrical waveforms over a wide range of frequencies. Some of the most common waveforms produced by the function generator are the sine wave, square wave, triangular wave and sawtooth shapes. These waveforms can be either repetitive or single-shot (which requires an internal or external trigger source). Another feature included on many function generators is the ability to add a DC offset. Integrated circuits used to generate waveforms may also be described as function generator ICs.
Although function generators cover both audio and radio frequencies, they are usually not suitable for applications that need low distortion or stable frequency signals. When those traits are required, other signal generators would be more appropriate.
Some function generators can be phase-locked to an external signal source (which may be a frequency reference) or another function generator.
Function generators are used in the development, test and repair of electronic equipment. For example, they may be used as a signal source to test amplifiers or to introduce an error signal into a control loop. Function generators are primarily used for working with analog circuits; the related pulse generators are primarily used for working with digital circuits.
== Electronic instruments ==
=== Principles of operation ===
Simple function generators usually generate a triangular waveform whose frequency can be controlled smoothly as well as in steps. This triangular wave is used as the basis for all of the other outputs. The triangular wave is generated by repeatedly charging and discharging a capacitor from a constant current source, producing a linearly ascending and descending voltage ramp. As the output voltage reaches the upper or lower limit, the charging or discharging is reversed by a comparator, producing the linear triangle wave. By varying the current and the size of the capacitor, different frequencies may be obtained. Sawtooth waves can be produced by charging the capacitor slowly with a low current but discharging it quickly through a diode placed across the current source; the polarity of the diode determines the polarity of the resulting sawtooth, i.e. slow rise and fast fall, or fast rise and slow fall.
A 50% duty cycle square wave is easily obtained by noting whether the capacitor is being charged or discharged, which is reflected in the output of the current-switching comparator. Other duty cycles (theoretically from 0% to 100%) can be obtained by comparing the sawtooth or triangle signal against an adjustable threshold with a further comparator. Most function generators also contain a non-linear diode shaping circuit that can convert the triangle wave into a reasonably accurate sine wave by rounding off the corners of the triangle wave, in a process similar to clipping in audio systems.
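The charge/discharge mechanism lends itself to a short simulation. The following Python sketch is illustrative only; the component values are invented, and the diode shaping network is approximated by a soft-clipping function:

```python
import numpy as np

# Minimal simulation of the triangle-wave core described above: a capacitor
# charged and discharged by a constant current, with a comparator reversing
# the current at the upper and lower thresholds. Component values are
# illustrative, not taken from any real instrument.

def triangle_core(i_charge=1e-3, c=100e-9, v_hi=1.0, v_lo=-1.0,
                  dt=1e-7, n_steps=20_000):
    v, direction = 0.0, +1          # capacitor voltage and current polarity
    tri, sq = [], []
    for _ in range(n_steps):
        v += direction * i_charge * dt / c   # dV = I*dt/C (linear ramp)
        if v >= v_hi:
            direction = -1                   # comparator flips the current
        elif v <= v_lo:
            direction = +1
        tri.append(v)
        sq.append(direction)                 # 50% duty square wave for free
    return np.array(tri), np.array(sq)

tri, sq = triangle_core()
# A crude "sine" is made by soft-clipping the triangle, standing in for the
# diode shaping network:
sine_approx = np.tanh(1.6 * tri)
```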
A walking ring counter, also called a Johnson counter, combined with a (linear) resistor-only shaping circuit, is an alternative way to produce an approximation of a sine wave, and is perhaps the simplest numerically controlled oscillator. Two such walking ring counters are perhaps the simplest way to generate the paired tones used in dual-tone multi-frequency signaling and the continuous-phase frequency-shift keying of early modem tones.
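The stepped-sine idea can be sketched in a few lines of Python. This is a generic illustration assuming a five-stage counter and a resistive summing network; the weights below are derived so that the staircase tracks a sampled sinusoid, and are not taken from any specific product:

```python
import math

# Illustrative walking-ring (Johnson) counter sine approximation. An N-stage
# counter cycles through 2N states; summing the stage outputs through
# resistors with the weights below yields a 2N-step staircase that follows
# a sine wave (here shown with its DC offset removed).

N = 5  # stages -> 2N = 10 steps per output cycle

# Weight for stage j: w[j] = cos(pi*j/N) - cos(pi*(j+1)/N)
weights = [math.cos(math.pi * j / N) - math.cos(math.pi * (j + 1) / N)
           for j in range(N)]

state = [0] * N
samples = []
for _ in range(2 * N):                       # one full output cycle
    state = [1 - state[-1]] + state[:-1]     # shift; inverted last stage feeds first
    samples.append(sum(w * s for w, s in zip(weights, state)) - 1.0)

print([round(v, 3) for v in samples])
# -> [-0.809, -0.309, 0.309, 0.809, 1.0, 0.809, 0.309, -0.309, -0.809, -1.0]
```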
A typical function generator can provide frequencies up to 20 MHz. RF generators for higher frequencies are not function generators in the strict sense since they typically produce pure or modulated sine signals only.
Function generators, like most signal generators, may also contain an attenuator, various means of modulating the output waveform, and often the ability to automatically and repetitively "sweep" the frequency of the output waveform (by means of a voltage-controlled oscillator) between two operator-determined limits. This capability makes it very easy to evaluate the frequency response of a given electronic circuit.
Some function generators can also generate white or pink noise.
More advanced function generators are called arbitrary waveform generators (AWG). They use direct digital synthesis (DDS) techniques to generate any waveform that can be described by a table of amplitudes and time steps.
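The core of a DDS-based generator is a phase accumulator that indexes a waveform table on every sample clock; an arbitrary waveform generator simply loads a different table. A minimal Python sketch follows, with illustrative bit widths, clock rate and output frequency:

```python
import math

# Minimal direct digital synthesis (DDS) sketch: a fixed-point phase
# accumulator indexes a waveform table once per sample clock. The table
# holds a sine here, but any tabulated waveform works the same way.

ACC_BITS = 32
TABLE_BITS = 10
TABLE = [math.sin(2 * math.pi * i / (1 << TABLE_BITS))
         for i in range(1 << TABLE_BITS)]

def dds(f_out: float, f_clock: float, n_samples: int):
    """Yield n_samples of the tabulated waveform at f_out, clocked at f_clock."""
    step = round(f_out / f_clock * (1 << ACC_BITS))  # tuning word
    acc = 0
    for _ in range(n_samples):
        yield TABLE[acc >> (ACC_BITS - TABLE_BITS)]  # top bits index the table
        acc = (acc + step) & ((1 << ACC_BITS) - 1)   # phase wraps on overflow

samples = list(dds(f_out=1000.0, f_clock=48_000.0, n_samples=48))  # one 1 kHz cycle
```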
=== Specifications ===
Typical specifications for a general-purpose function generator are:
Produces sine, square, triangular, sawtooth (ramp), and pulse output. Arbitrary waveform generators can produce waves of any shape.
It can generate a wide range of frequencies. For example, the Tektronix FG 502 (ca 1974) covers 0.1 Hz to 11 MHz.
Frequency stability of 0.1 percent per hour for analog generators or 500 ppm for a digital generator.
Maximum sinewave distortion of about 1% (accuracy of diode shaping network) for analog generators. Arbitrary waveform generators may have distortion less than -55 dB below 50 kHz and less than -40 dB above 50 kHz.
Some function generators can be phase locked to an external signal source, which may be a frequency reference or another function generator.
Amplitude modulation (AM), frequency modulation (FM), or phase modulation (PM) may be supported.
Output amplitude up to 10 V peak-to-peak.
Amplitude can be modified, usually by a calibrated attenuator with decade steps and continuous adjustment within each decade.
Some generators provide a DC offset voltage, e.g. adjustable between -5V to +5V.
An output impedance of 50 Ω.
=== Software ===
A completely different approach to function generation is to use software instructions to generate a waveform, with provision for output. For example, a general-purpose digital computer can be used to generate the waveform; if frequency range and amplitude are acceptable, the sound card fitted to most computers can be used to output the generated wave.
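As a concrete example of the software approach, the following Python sketch uses only the standard library to compute sine samples and write them to a WAV file, which can then be played through a sound card; the frequency, level and sample rate are arbitrary choices:

```python
import math
import struct
import wave

# Minimal software function generator: compute one second of a 440 Hz sine
# and write it to a WAV file for playback through a sound card.

RATE, FREQ, SECONDS = 44_100, 440.0, 1.0

frames = b"".join(
    struct.pack("<h", int(32767 * 0.8 * math.sin(2 * math.pi * FREQ * n / RATE)))
    for n in range(int(RATE * SECONDS)))

with wave.open("tone.wav", "wb") as f:
    f.setnchannels(1)       # mono
    f.setsampwidth(2)       # 16-bit samples
    f.setframerate(RATE)
    f.writeframes(frames)
```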
== Circuit elements ==
=== Waveform generator ===
An electronic circuit element used for generating waveforms within other apparatus that can be used in communications and instrumentation circuits, and also in a function generator instrument. Examples are the Exar XR2206 and the Intersil ICL8038 integrated circuits, which can generate sine, square, triangle, ramp, and pulse waveforms at a voltage-controllable frequency.
=== Function generator ===
An electronic circuit element that provides an output proportional to some mathematical function (such as the square root) of its input; such devices are used in feedback control systems and in analog computers. Examples are the Raytheon QK329 square-law tube and the Intersil ICL8048 Log/Antilog Amplifier.
== See also ==
Digital pattern generator
Electronic musical instrument
Wavetek
== References ==
== External links ==
Function Generator & Arbitrary Waveform Generator Guidebook
Waveform Generator Fundamentals
Function Generator Fundamentals
Rostky, George (March 13, 2001), Design classics: the function generator, EE Times, retrieved March 31, 2012. History of the function generator. | Wikipedia/Function_generator |
Electrical telegraphy is point-to-point communication over a distance by means of electric signals sent over wire, a system primarily used from the 1840s until the late 20th century. It was the first electrical telecommunications system and the most widely used of a number of early messaging systems, called telegraphs, that were devised to send text messages more quickly than physically carrying them. Electrical telegraphy can be considered the first example of electrical engineering.
Electrical telegraphy consisted of two or more geographically separated stations, called telegraph offices. The offices were connected by wires, usually supported overhead on utility poles. Many electrical telegraph systems were invented that operated in different ways, but the ones that became widespread fit into two broad categories. First are the needle telegraphs, in which electric current sent down the telegraph line produces electromagnetic force to move a needle-shaped pointer into position over a printed list. Early needle telegraph models used multiple needles, thus requiring multiple wires to be installed between stations. The first commercial needle telegraph system and the most widely used of its type was the Cooke and Wheatstone telegraph, invented in 1837. The second category are armature systems, in which the current activates a telegraph sounder that makes a click; communication on this type of system relies on sending clicks in coded rhythmic patterns. The archetype of this category was the Morse system and the code associated with it, both invented by Samuel Morse in 1838. In 1865, the Morse system became the standard for international communication, using a modified form of Morse's code that had been developed for German railways.
Electrical telegraphs were used by the emerging railway companies to provide signals for train control systems, minimizing the chances of trains colliding with each other. This was built around the signalling block system in which signal boxes along the line communicate with neighbouring boxes by telegraphic sounding of single-stroke bells and three-position needle telegraph instruments.
In the 1840s, the electrical telegraph superseded optical telegraph systems such as semaphores, becoming the standard way to send urgent messages. By the latter half of the century, most developed nations had commercial telegraph networks with local telegraph offices in most cities and towns, allowing the public to send messages (called telegrams) addressed to any person in the country, for a fee.
Beginning in 1850, submarine telegraph cables allowed for the first rapid communication between people on different continents. The telegraph's nearly-instant transmission of messages across continents – and between continents – had widespread social and economic impacts. The electric telegraph led to Guglielmo Marconi's invention of wireless telegraphy, the first means of radiowave telecommunication, which he began in 1894.
In the early 20th century, manual operation of telegraph machines was slowly replaced by teleprinter networks. Increasing use of the telephone pushed telegraphy into only a few specialist uses; its use by the general public dwindled to greetings for special occasions. The rise of the Internet and email in the 1990s largely made dedicated telegraphy networks obsolete.
== History ==
=== Precursors ===
Prior to the electric telegraph, visual systems were used, including beacons, smoke signals, flag semaphore, and optical telegraphs for visual signals to communicate over distances of land.
An auditory predecessor was West African talking drums. In the 19th century, Yoruba drummers used talking drums to mimic human tonal language to communicate complex messages – usually regarding news of birth, ceremonies, and military conflict – over 4–5 mile distances.
Possibly the earliest design and conceptualization for a telegraph system was by the British polymath Robert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to the Royal Society in a 1684 submission in which he outlined many practical details. The system was largely motivated by military concerns, following the Battle of Vienna in 1683.
The first official optical telegraph was invented in France in the 18th century by Claude Chappe and his brothers. The Chappe system would stretch nearly 5,000 km with 556 stations and was used until the 1850s.
=== Early work ===
From early studies of electricity, electrical phenomena were known to travel with great speed, and many experimenters worked on the application of electricity to communications at a distance. All the known effects of electricity – such as sparks, electrostatic attraction, chemical changes, electric shocks, and later electromagnetism – were applied to the problems of detecting controlled transmissions of electricity at various distances.
In 1753, an anonymous writer in the Scots Magazine suggested an electrostatic telegraph. Using one wire for each letter of the alphabet, a message could be transmitted by connecting the wire terminals in turn to an electrostatic machine, and observing the deflection of pith balls at the far end. The writer has never been positively identified, but the letter was signed C.M. and posted from Renfrew leading to a Charles Marshall of Renfrew being suggested. Telegraphs employing electrostatic attraction were the basis of early experiments in electrical telegraphy in Europe, but were abandoned as being impractical and were never developed into a useful communication system.
In 1774, Georges-Louis Le Sage realised an early electric telegraph. The telegraph had a separate wire for each of the 26 letters of the alphabet and its range was only between two rooms of his home.
In 1800, Alessandro Volta invented the voltaic pile, providing a continuous current of electricity for experimentation. This became a source of a low-voltage current that could be used to produce more distinct effects, and which was far less limited than the momentary discharge of an electrostatic machine, which with Leyden jars were the only previously known human-made sources of electricity.
Another very early experiment in electrical telegraphy was an "electrochemical telegraph" created by the German physician, anatomist and inventor Samuel Thomas von Sömmering in 1809, based on an earlier 1804 design by Spanish polymath and scientist Francisco Salva Campillo. Both their designs employed multiple wires (up to 35) to represent almost all Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. An electric current was sequentially applied by the sender through the various wires representing each letter of a message; at the recipient's end, the currents electrolysed the acid in the tubes in sequence, releasing streams of hydrogen bubbles next to each associated letter or numeral. The telegraph receiver's operator would watch the bubbles and could then record the transmitted message. This is in contrast to later telegraphs that used a single wire (with ground return).
Hans Christian Ørsted discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle. In the same year Johann Schweigger invented the galvanometer, with a coil of wire around a compass, that could be used as a sensitive indicator for an electric current. Also that year, André-Marie Ampère suggested that telegraphy could be achieved by placing small magnets under the ends of a set of wires, one pair of wires for each letter of the alphabet. He was apparently unaware of Schweigger's invention at the time, which would have made his system much more sensitive. In 1825, Peter Barlow tried Ampère's idea but only got it to work over 200 feet (61 m) and declared it impractical. In 1830 William Ritchie improved on Ampère's design by placing the magnetic needles inside a coil of wire connected to each pair of conductors. He successfully demonstrated it, showing the feasibility of the electromagnetic telegraph, but only within a lecture hall.
In 1825, William Sturgeon invented the electromagnet, with a single winding of uninsulated wire on a piece of varnished iron, which increased the magnetic force produced by electric current. Joseph Henry improved it in 1828 by placing several windings of insulated wire around the bar, creating a much more powerful electromagnet which could operate a telegraph through the high resistance of long telegraph wires. During his tenure at The Albany Academy from 1826 to 1832, Henry first demonstrated the theory of the 'magnetic telegraph' by ringing a bell through one-mile (1.6 km) of wire strung around the room in 1831.
In 1835, Joseph Henry and Edward Davy independently invented the mercury dipping electrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil. In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals. Davy demonstrated his telegraph system in Regent's Park in 1837 and was granted a patent on 4 July 1838. Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused with potassium iodide and calcium hypochlorite.
=== First working systems ===
The first working telegraph was built by the English inventor Francis Ronalds in 1816 and used static electricity. At the family home on Hammersmith Mall, he set up a complete subterranean system in a 175-yard (160 m) long trench as well as an eight-mile (13 km) long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet and electrical impulses sent along the wire were used to transmit messages. Offering his invention to the Admiralty in July 1816, it was rejected as "wholly unnecessary". His account of the scheme and the possibilities of rapid global communication in Descriptions of an Electrical Telegraph and of some other Electrical Apparatus was the first published work on electric telegraphy and even described the risk of signal retardation due to induction. Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later.
The Schilling telegraph, invented by Baron Schilling von Canstatt in 1832, was an early needle telegraph. It had a transmitting device that consisted of a keyboard with 16 black-and-white keys. These served for switching the electric current. The receiving instrument consisted of six galvanometers with magnetic needles, suspended from silk threads. The two stations of Schilling's telegraph were connected by eight wires; six were connected with the galvanometers, one served for the return current and one for a signal bell. When at the starting station the operator pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations which corresponded to the letters or numbers. Pavel Schilling subsequently improved his apparatus by reducing the number of connecting wires from eight to two.
On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures from Nicholas I of Russia. Schilling's telegraph was tested on a 5-kilometre-long (3.1 mi) experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg and was approved for a telegraph between the imperial palace at Peterhof and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Alexander III to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base.
In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen, installed a 1,200-metre-long (3,900 ft) wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line.
At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss's laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany.
Gauss was convinced that this communication would be of help to his kingdom's towns. Later in the same year, instead of a voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. In 1838, Steinheil installed a telegraph along the Nuremberg–Fürth railway line, built in 1835 as the first German railroad, which was the first earth-return telegraph put into service.
By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters it was required to code. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters.
Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet.
On 24 May 1844, Morse sent Vail the historic first message "WHAT HATH GOD WROUGHT" from the Capitol in Washington to the old Mt. Clare Depot in Baltimore.
== Commercial telegraphy ==
=== Cooke and Wheatstone system ===
The first commercial electrical telegraph was the Cooke and Wheatstone system. A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway over the 13 miles (21 km) from Paddington station to West Drayton in 1838. This was a five-needle, six-wire system, and had the major advantage of displaying the letter being sent so operators did not need to learn a code. The insulation failed on the underground cables between Paddington and West Drayton, and when the line was extended to Slough in 1843, the system was converted to a one-needle, two-wire configuration with uninsulated wires on poles. The cost of installing wires was ultimately more economically significant than the cost of training operators. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were in use at the end of the nineteenth century; some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financier John Lewis Ricardo and Cooke.
=== Wheatstone ABC telegraph ===
Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would advance the pointers at both ends by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century.
=== Morse system ===
The Morse system uses a single wire between offices. At the sending station, an operator taps on a switch called a telegraph key, spelling out text messages in Morse code. Originally, the armature was intended to make marks on paper tape, but operators learned to interpret the clicks and it was more efficient to write down the message directly.
In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, and was based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries. With different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code and was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, hence international messages required retransmission in both directions.
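To make the coding concrete, here is a minimal Python encoder for International Morse code; the table covers only the letters needed for the example, which is Morse's famous first message (originally sent, as noted above, in the American code):

```python
# Minimal International Morse encoder. The table is a standard subset;
# spacing follows the usual convention of one space between letters and
# " / " between words.

MORSE = {
    "A": ".-", "D": "-..", "G": "--.", "H": "....", "O": "---",
    "R": ".-.", "T": "-", "U": "..-", "W": ".--",
}

def encode(text: str) -> str:
    words = text.upper().split()
    return " / ".join(" ".join(MORSE[c] for c in word) for word in words)

print(encode("what hath god wrought"))
# -> .-- .... .- - / .... .- - .... / --. --- -.. / .-- .-. --- ..- --. .... -
```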
In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844. The overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express.
=== Foy–Breguet system ===
France was slow to adopt the electrical telegraph, because of the extensive optical telegraph system built during the Napoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs which had no exposed hardware between stations. The Foy-Breguet telegraph was eventually adopted. This was a two-needle system using two signal wires but displayed in a uniquely different way to other needle telegraphs. The needles made symbols similar to the Chappe optical system symbols, making it more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system.
=== Expansion ===
As well as the rapid expansion of the use of telegraphs along the railways, they soon spread into the field of mass communication, with instruments installed in post offices: the era of mass personal communication had begun. Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, national systems were in operation in major countries.
The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became the Western Union Telegraph Company. Although many countries had telegraph networks, there was no worldwide interconnection. Message by post was still the primary means of communication to countries outside Europe.
Telegraphy was introduced in Central Asia during the 1870s.
=== Telegraphic improvements ===
A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development of telegraphese.
The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute.
In 1846, Alexander Bain patented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court.
For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand.
Royal Earl House developed and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver, and followed this up with a steam-powered version in 1852. Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour.
David Edward Hughes invented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world.
The next improvement was the Baudot code of 1874. French engineer Émile Baudot patented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute.
By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (in Morse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute.
==== Teleprinters ====
An early successful teleprinter was invented by Frederick G. Creed. In Glasgow he created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper contents.
With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so it was over-defined into two "shifts", "letters" and "figures". An explicit, unshared shift code prefaced each set of letters and figures. In 1901, Baudot's code was modified by Donald Murray.
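The shift mechanism can be illustrated with a toy decoder. In the Python sketch below the bit patterns are invented for clarity and are not the real ITA-1/ITA-2 assignments; only the letters/figures shifting behaviour is modelled:

```python
# Illustrative five-bit shifted code in the spirit of the Baudot/Murray
# codes. The bit patterns are invented for this example; the real point is
# the mechanism: two control codes switch the receiver between "letters"
# and "figures" interpretations of the same remaining patterns.

LTRS, FIGS = 0b11111, 0b11011          # shift control codes (illustrative)
LETTER_OF = {0b00001: "A", 0b00010: "B", 0b00011: "C"}
FIGURE_OF = {0b00001: "1", 0b00010: "2", 0b00011: "3"}

def decode(stream):
    table = LETTER_OF                   # start in letters shift
    out = []
    for code in stream:
        if code == LTRS:
            table = LETTER_OF           # explicit shift to letters
        elif code == FIGS:
            table = FIGURE_OF           # explicit shift to figures
        else:
            out.append(table[code])
    return "".join(out)

# "AB", shift to figures for "12", shift back to letters for "C":
print(decode([0b00001, 0b00010, FIGS, 0b00001, 0b00010, LTRS, 0b00011]))
# -> AB12C
```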
In the 1930s, teleprinters were produced by Teletype in the US, Creed in Britain and Siemens in Germany.
By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialling to connect teletypewriters. These resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-style pulse dialling for circuit switching, and then sent data by ITA2. This "type A" Telex routing functionally automated message routing.
The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government.
At the rate of 45.45 (±0.5%) baud – considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication.
Automatic teleprinter exchange service was introduced into Canada by CPR Telegraphs and CN Telegraph in July 1957 and in 1958, Western Union started to build a Telex network in the United States.
==== The harmonic telegraph ====
The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included the duplex and the quadruplex which allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, including Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell.
One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form of frequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators.
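A minimal numerical sketch of this frequency-division idea follows; the frequencies, sample rate and bit patterns are arbitrary example values, and each channel is recovered by correlating the shared line signal against its own tone, standing in for the receiving resonators:

```python
import math

# Two on-off keyed tones share one "wire" (a single sample stream) and are
# separated at the far end by per-element correlation. The element period is
# chosen to hold a whole number of cycles of both tones, so the channels
# are orthogonal over each element.

RATE, BIT_SAMPLES = 8000, 400            # 50 ms per telegraph element
F1, F2 = 500.0, 700.0                    # the two channel frequencies
msg1, msg2 = [1, 0, 1, 1, 0], [0, 1, 1, 0, 1]

def keyed_tone(bits, freq):
    return [b * math.sin(2 * math.pi * freq * n / RATE)
            for b in bits for n in range(BIT_SAMPLES)]

# The shared line carries the sum of both channels:
line = [a + b for a, b in zip(keyed_tone(msg1, F1), keyed_tone(msg2, F2))]

def detect(signal, freq):
    """Recover one channel by correlating each element with its own tone."""
    bits = []
    for i in range(0, len(signal), BIT_SAMPLES):
        chunk = signal[i:i + BIT_SAMPLES]
        power = sum(s * math.sin(2 * math.pi * freq * n / RATE)
                    for n, s in enumerate(chunk))
        bits.append(1 if power > BIT_SAMPLES / 4 else 0)
    return bits

print(detect(line, F1), detect(line, F2))  # -> [1, 0, 1, 1, 0] [0, 1, 1, 0, 1]
```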
With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to the invention of the telephone. (While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.)
=== Oceanic telegraph cables ===
Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way of submarine communications cables was first proposed. One of the primary technical challenges was to sufficiently insulate the submarine cable to prevent the electric current from leaking out into the water. In 1842, the Scottish surgeon William Montgomerie introduced gutta-percha, the adhesive juice of the Palaquium gutta tree, to Europe. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. Gutta-percha was used as insulation on a wire laid across the Rhine between Deutz and Cologne. In 1849, C. V. Walker, electrician to the South Eastern Railway, submerged a 2-mile (3.2 km) wire coated with gutta-percha off the coast from Folkestone, which was tested successfully.
John Watkins Brett, an engineer from Bristol, sought and obtained permission from Louis-Philippe in 1847 to establish telegraphic communication between France and England. The first undersea cable was laid in 1850, connecting the two countries and was followed by connections to Ireland and the Low Countries.
The Atlantic Telegraph Company was formed in London in 1856 to undertake to construct a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. John Pender, one of the men on the Great Eastern, later founded several telecommunications companies primarily laying cables between Britain and Southeast Asia. Earlier transatlantic submarine cable installations were attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form the Eastern Telegraph Company in 1872.) The HMS Challenger expedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables.
Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reports from the rest of the world. The telegraph across the Pacific was completed in 1902, finally encircling the world.
From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent.
=== Cable and Wireless Company ===
Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder, although the name was only adopted in 1934. It was formed from successive mergers including:
The Falmouth, Malta, Gibraltar Telegraph Company
The British Indian Submarine Telegraph Company
The Marseilles, Algiers and Malta Telegraph Company
The Eastern Telegraph Company
The Eastern Extension Australasia and China Telegraph Company
The Eastern and Associated Telegraph Companies
== Telegraphy and longitude ==
Main article: History of longitude § Land surveying and telegraphy
The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other.
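The arithmetic is simple enough to state as a one-line function; the example time difference is invented:

```python
# Longitude from a telegraphed time signal: each hour of local-time
# difference corresponds to 15 degrees of longitude (360 degrees / 24 h).

def longitude_difference(local_time_diff_hours: float) -> float:
    return 15.0 * local_time_diff_hours

# A station whose local noon lags the reference station's by 1 h 30 min
# lies 22.5 degrees further west:
print(longitude_difference(1.5))  # -> 22.5
```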
The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Samuel Morse in 1837, and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore. The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America and the world, and as technical developments improved accuracy and productivity.
The "telegraphic longitude net" soon became worldwide. Transatlantic links between Europe and North America were established in 1866 and 1870. The US Navy extended observations into the West Indies and Central and South America with an additional transatlantic link from South America to Lisbon between 1874 and 1890. British, Russian and US observations created a chain from Europe through Suez, Aden, Madras, Singapore, China and Japan, to Vladivostok, thence to Saint Petersburg and back to Western Europe.
Australia's telegraph network was linked to Singapore's via Java in 1871, and the net circled the globe in 1902 with the connection of the Australia and New Zealand networks to Canada's via the All Red Line. The two determinations of longitudes, one transmitted from east to west and the other from west to east, agreed within one second of arc (1⁄15 second of time – less than 30 metres).
== Telegraphy in war ==
The ability to send telegrams brought obvious advantages to those conducting war. Secret messages were encoded, so interception alone would not be sufficient for the opposing side to gain an advantage. There were also geographical constraints on intercepting the telegraph cables that improved security, however once radio telegraphy was developed interception became far more widespread.
=== Crimean War ===
The Crimean War was one of the first conflicts to use telegraphs and was one of the first to be documented extensively. In 1854, the government in London created a military Telegraph Detachment for the Army commanded by an officer of the Royal Engineers. It was to comprise twenty-five men from the Royal Corps of Sappers & Miners trained by the Electric Telegraph Company to construct and work the first field electric telegraph.
Journalistic recording of the war was provided by William Howard Russell (writing for The Times newspaper) with photographs by Roger Fenton. News from war correspondents kept the public of the nations involved in the war informed of the day-to-day events in a way that had not been possible in any previous war. After the French extended their telegraph lines to the coast of the Black Sea in late 1854, war news began reaching London in two days. When the British laid an underwater cable to the Crimean peninsula in April 1855, news reached London in a few hours. These prompt daily news reports energised British public opinion on the war, which brought down the government and led to Lord Palmerston becoming prime minister.
=== American Civil War ===
During the American Civil War the telegraph proved its value as a tactical, operational, and strategic communication medium and an important contributor to Union victory. By contrast, the Confederacy failed to make effective use of the South's much smaller telegraph network. Before the war, telegraph systems were primarily used in the commercial sector: government buildings were not interconnected with telegraph lines, but relied on runners to carry messages back and forth. The government saw no need for connections within city limits, though it did see the use of connections between cities. As the hub of government, Washington, D.C. had the most connections, but only a few lines ran north and south out of the city. It was not until the Civil War that the government saw the true potential of the telegraph system. Soon after the shelling of Fort Sumter, the South cut telegraph lines running into D.C., which put the city in a state of panic over fears of an immediate Southern invasion.
Within 6 months of the start of the war, the U.S. Military Telegraph Corps (USMT) had laid approximately 300 miles (480 km) of line. By war's end they had laid approximately 15,000 miles (24,000 km) of line, 8,000 for military and 5,000 for commercial use, and had handled approximately 6.5 million messages. The telegraph was not only important for communication within the armed forces, but also in the civilian sector, helping political leaders to maintain control over their districts.
Even before the war, the American Telegraph Company censored suspect messages informally to block aid to the secession movement. During the war, Secretary of War Simon Cameron, and later Edwin Stanton, wanted control over the telegraph lines to maintain the flow of information. Early in the war, one of Stanton's first acts as Secretary of War was to move telegraph lines from ending at McClellan's headquarters to terminating at the War Department. Stanton himself said "[telegraphy] is my right arm". Telegraphy assisted Northern victories, including the Battle of Antietam (1862), the Battle of Chickamauga (1863), and Sherman's March to the Sea (1864).
The telegraph system still had its flaws. The USMT, while the main source of telegraphers and cable, was still a civilian agency. Most operators were first hired by the telegraph companies and then contracted out to the War Department. This created tension between generals and their operators. One source of irritation was that USMT operators did not have to follow military authority. Usually they performed without hesitation, but they were not required to, so Albert Myer created a U.S. Army Signal Corps in February 1863. As the new head of the Signal Corps, Myer tried to get all telegraph and flag signaling under his command, and therefore subject to military discipline. After creating the Signal Corps, Myer pushed to further develop new telegraph systems. While the USMT relied primarily on civilian lines and operators, the Signal Corps' new field telegraph could be deployed and dismantled faster than the USMT's system.
=== First World War ===
During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide. The British government censored telegraph cable companies in an effort to root out espionage and restrict financial transactions with Central Powers nations. British access to transatlantic cables and its codebreaking expertise led to the Zimmermann Telegram incident that contributed to the US joining the war. Despite British acquisition of German colonies and expansion into the Middle East, debt from the war led to Britain's control over telegraph cables to weaken while US control grew.
=== Second World War ===
World War II revived the 'cable war' of 1914–1918. In 1939, German-owned cables across the Atlantic were cut once again, and, in 1940, Italian cables to South America and Spain were cut in retaliation for Italian action against two of the five British cables linking Gibraltar and Malta. Electra House, Cable & Wireless's head office and central cable station, was damaged by German bombing in 1941.
Resistance movements in occupied Europe sabotaged communications facilities such as telegraph lines, forcing the Germans to use wireless telegraphy, which could then be intercepted by Britain.
The Germans developed a highly complex teleprinter attachment (German: Schlüssel-Zusatz, "cipher attachment") that was used for enciphering telegrams, using the Lorenz cipher, between German High Command (OKW) and the army groups in the field. These contained situation reports, battle plans, and discussions of strategy and tactics. Britain intercepted these signals, diagnosed how the encrypting machine worked, and decrypted a large amount of teleprinter traffic.
== End of the telegraph era ==
In America, the end of the telegraph era can be associated with the decline of the Western Union Telegraph Company. Western Union was the leading telegraph provider in America and was seen as the chief competitor to the National Bell Telephone Company; both companies invested in telegraph and telephone technology. Western Union ceded the advantage in telephone technology to Bell because its upper management failed to foresee that the telephone would surpass the then-dominant telegraph. Western Union then lost the legal battle over its telephone patents. This led to Western Union accepting a lesser position in the telephone competition, which in turn diminished the telegraph.
While the telegraph itself was not the focus of the legal battles that occurred around 1878, the companies affected by them were the main powers in telegraphy at the time. Western Union thought that the agreement of 1878 would solidify telegraphy as the long-range communication medium of choice. However, having underestimated the telegraph's future and signed poor contracts, Western Union found itself in decline. AT&T acquired working control of Western Union in 1909 but relinquished it in 1914 under threat of antitrust action. AT&T bought Western Union's electronic mail and Telex businesses in 1990.
Although commercial "telegraph" services are still available in many countries, transmission is usually done via a computer network rather than a dedicated wired connection.
== See also ==
== References ==
== Bibliography ==
Beauchamp, Ken (2001). History of Telegraphy. London: The Institution of Electrical Engineers. ISBN 978-0-85296-792-8.
Bowers, Brian (2001). Sir Charles Wheatstone: 1802–1875. IET. ISBN 0852961030.
Calvert, J. B. (2008). "The Electromagnetic Telegraph".
Copeland, B. Jack, ed. (2006). Colossus: The Secrets of Bletchley Park's Codebreaking Computers. Oxford: Oxford University Press. ISBN 978-0-19-284055-4.
Fahie, John Joseph (1884). A History of Electric Telegraphy, to the Year 1837. London: E. & F.N. Spon. OCLC 559318239.
Figes, Orlando (2010). Crimea: The Last Crusade. London: Allen Lane. ISBN 978-0-7139-9704-0.
Gibberd, William (1966). Australian Dictionary of Biography: Edward Davy.
Hochfelder, David (2012). The Telegraph in America, 1832–1920. Johns Hopkins University Press. pp. 6–17, 138–141. ISBN 9781421407470.
Holzmann, Gerard J.; Pehrson, Björn (1995). The Early History of Data Networks. Wiley. ISBN 0818667826.
Huurdeman, Anton A. (2003). The Worldwide History of Telecommunications. Wiley-Blackwell. ISBN 978-0471205050.
Jones, R. Victor (1999). Samuel Thomas von Sömmering's "Space Multiplexed" Electrochemical Telegraph (1808–1810). Archived from the original on 11 October 2012. Retrieved 1 May 2009. Attributed to Michaelis, Anthony R. (1965), From semaphore to satellite, Geneva: International Telecommunication Union
Kennedy, P. M. (October 1971). "Imperial Cable Communications and Strategy, 1870–1914". The English Historical Review. 86 (341): 728–752. doi:10.1093/ehr/lxxxvi.cccxli.728. JSTOR 563928.
Kieve, Jeffrey L. (1973). The Electric Telegraph: A Social and Economic History. David and Charles. ISBN 0-7153-5883-9. OCLC 655205099.
Mercer, David (2006). The Telephone: The Life Story of a Technology. Greenwood Publishing Group. ISBN 031333207X.
Schwoch, James (2018). Wired into Nature: The Telegraph and the North American Frontier. University of Illinois Press. ISBN 978-0252041778.
== Further reading ==
Botjer, George F. (2015). Samuel F.B. Morse and the Dawn of the Age of Electricity. Lanham, MD: Lexington Books. ISBN 978-1-4985-0140-8 – via Internet Archive.
Cooke, W.F., The Electric Telegraph, Was it invented by Prof. Wheatstone?, London 1856.
Gray, Thomas (1892). "The Inventors of the Telegraph And Telephone". Annual Report of the Board of Regents of the Smithsonian Institution. 71: 639–659. Retrieved 7 August 2009.
Gauß, C. F., Works, Göttingen 1863–1933.
Howe, Daniel Walker, What Hath God Wrought: The Transformation of America, 1815–1848, Oxford University Press, 2007 ISBN 0199743797.
Peterson, M.J. Roots of Interconnection: Communications, Transportation and Phases of the Industrial Revolution, International Dimensions of Ethics Education in Science and Engineering Background Reading, Version 1; February 2008.
Steinheil, C.A., Ueber Telegraphie, München 1838.
Yates, JoAnne. The Telegraph's Effect on Nineteenth Century Markets and Firms, Massachusetts Institute of Technology, pp. 149–163.
== External links ==
Morse Telegraph Club, Inc. (The Morse Telegraph Club is an international non-profit organization dedicated to the perpetuation of the knowledge and traditions of telegraphy.)
"Transatlantic Cable Communications". Canada's Digital Collections. Archived from the original on 29 August 2005.
Shilling's telegraph, an exhibit of the A.S. Popov Central Museum of Communications
History of electromagnetic telegraph
The first electric telegraphs
The Dawn of Telegraphy (in Russian)
Pavel Shilling and his telegraph – article in PCWeek, Russian edition.
Distant Writing – The History of the Telegraph Companies in Britain between 1838 and 1868
NASA – Carrington Super Flare Archived 29 March 2010 at the Wayback Machine NASA 6 May 2008
How Cables Unite The World – a 1902 article about telegraph networks and technology from the magazine The World's Work
"Telegraph" . New International Encyclopedia. 1905.
Indiana telegraph and telephone collection, Rare Books and Manuscripts, Indiana State Library
Wonders of electricity and the elements, being a popular account of modern electrical and magnetic discoveries, magnetism and electric machines, the electric telegraph and the electric light, and the metal bases, salt, and acids from Science History Institute Digital Collections
The electro magnetic telegraph: with an historical account of its rise, progress, and present condition from Science History Institute Digital Collections | Wikipedia/Telegraph_line |
Telegraph Act is a stock short title which used to be used for legislation in the United Kingdom, relating to telegraphy.
The Bill for an Act with this short title may have been known as a Telegraph Bill during its passage through Parliament.
Telegraph Acts may be a generic name either for legislation bearing that short title or for all legislation which relates to telegraphy. It is a term of art.
See also Wireless Telegraphy Act.
== List ==
The Telegraph Act 1863 (26 & 27 Vict. c. 112)
The Telegraph Act Amendment Act 1866 (29 & 30 Vict. c. 3)
The Telegraph Act 1868 (31 & 32 Vict. c. 110)
The Telegraph Act 1869 (32 & 33 Vict. c. 73)
The Telegraph Act 1870 (33 & 34 Vict. c. 88)
The Telegraph Act 1878 (41 & 42 Vict. c. 76)
The Submarine Telegraph Act 1885 (48 & 49 Vict. c. 49)
The Telegraph Act 1885 (48 & 49 Vict. c. 58)
The Submarine Telegraph Act 1886 (50 Vict. c. 3)
The Telegraph (Isle of Man) Act 1889 (52 & 53 Vict. c. 34)
The Telegraph Act 1892 (55 & 56 Vict. c. 59)
The Telegraph Act 1899 (62 & 63 Vict. c. 38)
The Telegraph (Money) Act 1907 (7 Edw. 7. c. 6)
The Telegraph (Construction) Act 1908 (8 Edw. 7. c. 33)
The Telegraph (Arbitration) Act 1909 (9 Edw. 7. c. 20)
The Telegraph (Construction) Act 1911 (1 & 2 Geo. 5. c. 39)
The Telegraph (Construction) Act 1916 (6 & 7 Geo. 5. c. 40)
The Telegraph Acts
The Telegraph Acts 1868, 1869 means ... (the expression is used as a collective title in the 1870 Act and is presumably authorised by the 1869 Act).
The Telegraph Acts 1863 to 1892 means the Telegraph Act 1863, the Telegraph Act Amendment Act 1866, the Telegraph Act 1868, the Telegraph Act 1869, the Telegraph Act 1870, the Telegraph Act 1878, the Telegraph Act 1885, the Telegraph (Isle of Man) Act 1889 and the Telegraph Act 1892.
== See also ==
List of short titles
== References == | Wikipedia/Telegraph_Act |
Electrical telegraphy is point-to-point communication at a distance by means of electric signals sent over wires, a system primarily used from the 1840s until the late 20th century. It was the first electrical telecommunications system and the most widely used of a number of early messaging systems called telegraphs, which were devised to send text messages more quickly than physically carrying them. Electrical telegraphy can be considered the first example of electrical engineering.
Electrical telegraphy consisted of two or more geographically separated stations, called telegraph offices. The offices were connected by wires, usually supported overhead on utility poles. Many electrical telegraph systems were invented that operated in different ways, but the ones that became widespread fit into two broad categories. The first comprises the needle telegraphs, in which electric current sent down the telegraph line produces electromagnetic force to move a needle-shaped pointer into position over a printed list. Early needle telegraph models used multiple needles, thus requiring multiple wires to be installed between stations. The first commercial needle telegraph system and the most widely used of its type was the Cooke and Wheatstone telegraph, invented in 1837. The second category comprises the armature systems, in which the current activates a telegraph sounder that makes a click; communication on this type of system relies on sending clicks in coded rhythmic patterns. The archetype of this category was the Morse system and the code associated with it, both invented by Samuel Morse in 1838. In 1865, the Morse system became the standard for international communication, using a modified form of Morse's code that had been developed for German railways.
Electrical telegraphs were used by the emerging railway companies to provide signals for train control systems, minimizing the chances of trains colliding with each other. This was built around the signalling block system in which signal boxes along the line communicate with neighbouring boxes by telegraphic sounding of single-stroke bells and three-position needle telegraph instruments.
In the 1840s, the electrical telegraph superseded optical telegraph systems such as semaphores, becoming the standard way to send urgent messages. By the latter half of the century, most developed nations had commercial telegraph networks with local telegraph offices in most cities and towns, allowing the public to send messages (called telegrams) addressed to any person in the country, for a fee.
Beginning in 1850, submarine telegraph cables allowed for the first rapid communication between people on different continents. The telegraph's nearly-instant transmission of messages across continents – and between continents – had widespread social and economic impacts. The electric telegraph led to Guglielmo Marconi's invention of wireless telegraphy, the first means of radiowave telecommunication, which he began in 1894.
In the early 20th century, manual operation of telegraph machines was slowly replaced by teleprinter networks. Increasing use of the telephone pushed telegraphy into only a few specialist uses; its use by the general public dwindled to greetings for special occasions. The rise of the Internet and email in the 1990s largely made dedicated telegraphy networks obsolete.
== History ==
=== Precursors ===
Prior to the electric telegraph, visual systems were used, including beacons, smoke signals, flag semaphore, and optical telegraphs for visual signals to communicate over distances of land.
An auditory predecessor was the West African talking drum. In the 19th century, Yoruba drummers used talking drums to mimic tonal human speech, communicating complex messages – usually news of births, ceremonies, and military conflict – over distances of 4–5 miles.
Possibly the earliest design and conceptualization for a telegraph system was by the British polymath Robert Hooke, who gave a vivid and comprehensive outline of visual telegraphy to the Royal Society in a 1684 submission in which he outlined many practical details. The system was largely motivated by military concerns, following the Battle of Vienna in 1683.
The first official optical telegraph was invented in France in the 18th century by Claude Chappe and his brothers. The Chappe system eventually stretched nearly 5,000 km with 556 stations and was used until the 1850s.
=== Early work ===
From early studies of electricity, electrical phenomena were known to travel with great speed, and many experimenters worked on the application of electricity to communications at a distance. All the known effects of electricity – such as sparks, electrostatic attraction, chemical changes, electric shocks, and later electromagnetism – were applied to the problems of detecting controlled transmissions of electricity at various distances.
In 1753, an anonymous writer in the Scots Magazine suggested an electrostatic telegraph. Using one wire for each letter of the alphabet, a message could be transmitted by connecting the wire terminals in turn to an electrostatic machine, and observing the deflection of pith balls at the far end. The writer has never been positively identified, but the letter was signed C.M. and posted from Renfrew leading to a Charles Marshall of Renfrew being suggested. Telegraphs employing electrostatic attraction were the basis of early experiments in electrical telegraphy in Europe, but were abandoned as being impractical and were never developed into a useful communication system.
In 1774, Georges-Louis Le Sage built an early electric telegraph. The telegraph had a separate wire for each of the 26 letters of the alphabet, and its range extended only between two rooms of his home.
In 1800, Alessandro Volta invented the voltaic pile, providing a continuous current of electricity for experimentation. This became a source of a low-voltage current that could be used to produce more distinct effects, and which was far less limited than the momentary discharge of an electrostatic machine, which with Leyden jars were the only previously known human-made sources of electricity.
Another very early experiment in electrical telegraphy was an "electrochemical telegraph" created by the German physician, anatomist and inventor Samuel Thomas von Sömmering in 1809, based on an earlier 1804 design by Spanish polymath and scientist Francisco Salva Campillo. Both their designs employed multiple wires (up to 35) to represent almost all Latin letters and numerals. Thus, messages could be conveyed electrically up to a few kilometers (in von Sömmering's design), with each of the telegraph receiver's wires immersed in a separate glass tube of acid. An electric current was sequentially applied by the sender through the various wires representing each letter of a message; at the recipient's end, the currents electrolysed the acid in the tubes in sequence, releasing streams of hydrogen bubbles next to each associated letter or numeral. The telegraph receiver's operator would watch the bubbles and could then record the transmitted message. This is in contrast to later telegraphs that used a single wire (with ground return).
Hans Christian Ørsted discovered in 1820 that an electric current produces a magnetic field that will deflect a compass needle. In the same year Johann Schweigger invented the galvanometer, with a coil of wire around a compass, that could be used as a sensitive indicator for an electric current. Also that year, André-Marie Ampère suggested that telegraphy could be achieved by placing small magnets under the ends of a set of wires, one pair of wires for each letter of the alphabet. He was apparently unaware of Schweigger's invention at the time, which would have made his system much more sensitive. In 1825, Peter Barlow tried Ampère's idea but only got it to work over 200 feet (61 m) and declared it impractical. In 1830 William Ritchie improved on Ampère's design by placing the magnetic needles inside a coil of wire connected to each pair of conductors. He successfully demonstrated it, showing the feasibility of the electromagnetic telegraph, but only within a lecture hall.
In 1825, William Sturgeon invented the electromagnet, with a single winding of uninsulated wire on a piece of varnished iron, which increased the magnetic force produced by electric current. Joseph Henry improved it in 1828 by placing several windings of insulated wire around the bar, creating a much more powerful electromagnet which could operate a telegraph through the high resistance of long telegraph wires. During his tenure at The Albany Academy from 1826 to 1832, Henry first demonstrated the theory of the "magnetic telegraph" by ringing a bell through one mile (1.6 km) of wire strung around the room in 1831.
In 1835, Joseph Henry and Edward Davy independently invented the mercury dipping electrical relay, in which a magnetic needle is dipped into a pot of mercury when an electric current passes through the surrounding coil. In 1837, Davy invented the much more practical metallic make-and-break relay which became the relay of choice in telegraph systems and a key component for periodically renewing weak signals. Davy demonstrated his telegraph system in Regent's Park in 1837 and was granted a patent on 4 July 1838. Davy also invented a printing telegraph which used the electric current from the telegraph signal to mark a ribbon of calico infused with potassium iodide and calcium hypochlorite.
=== First working systems ===
The first working telegraph was built by the English inventor Francis Ronalds in 1816 and used static electricity. At the family home on Hammersmith Mall, he set up a complete subterranean system in a 175-yard (160 m) long trench as well as an eight-mile (13 km) long overhead telegraph. The lines were connected at both ends to revolving dials marked with the letters of the alphabet, and electrical impulses sent along the wire were used to transmit messages. When Ronalds offered his invention to the Admiralty in July 1816, it was rejected as "wholly unnecessary". His account of the scheme and of the possibilities of rapid global communication, Descriptions of an Electrical Telegraph and of some other Electrical Apparatus, was the first published work on electric telegraphy and even described the risk of signal retardation due to induction. Elements of Ronalds' design were utilised in the subsequent commercialisation of the telegraph over 20 years later.
The Schilling telegraph, invented by Baron Schilling von Canstatt in 1832, was an early needle telegraph. It had a transmitting device consisting of a keyboard with 16 black-and-white keys, which served to switch the electric current. The receiving instrument consisted of six galvanometers with magnetic needles suspended from silk threads. The two stations of Schilling's telegraph were connected by eight wires; six were connected to the galvanometers, one served for the return current and one for a signal bell. When the operator at the starting station pressed a key, the corresponding pointer was deflected at the receiving station. Different positions of black and white flags on different disks gave combinations corresponding to the letters or numbers. Schilling subsequently improved the apparatus by reducing the number of connecting wires from eight to two.
On 21 October 1832, Schilling managed a short-distance transmission of signals between two telegraphs in different rooms of his apartment. In 1836, the British government attempted to buy the design but Schilling instead accepted overtures from Nicholas I of Russia. Schilling's telegraph was tested on a 5-kilometre-long (3.1 mi) experimental underground and underwater cable, laid around the building of the main Admiralty in Saint Petersburg and was approved for a telegraph between the imperial palace at Peterhof and the naval base at Kronstadt. However, the project was cancelled following Schilling's death in 1837. Schilling was also one of the first to put into practice the idea of the binary system of signal transmission. His work was taken over and developed by Moritz von Jacobi who invented telegraph equipment that was used by Tsar Alexander III to connect the Imperial palace at Tsarskoye Selo and Kronstadt Naval Base.
In 1833, Carl Friedrich Gauss, together with the physics professor Wilhelm Weber in Göttingen, installed a 1,200-metre-long (3,900 ft) wire above the town's roofs. Gauss combined the Poggendorff-Schweigger multiplicator with his magnetometer to build a more sensitive device, the galvanometer. To change the direction of the electric current, he constructed a commutator of his own. As a result, he was able to make the distant needle move in the direction set by the commutator on the other end of the line.
At first, Gauss and Weber used the telegraph to coordinate time, but soon they developed other signals and finally, their own alphabet. The alphabet was encoded in a binary code that was transmitted by positive or negative voltage pulses which were generated by means of moving an induction coil up and down over a permanent magnet and connecting the coil with the transmission wires by means of the commutator. The page of Gauss's laboratory notebook containing both his code and the first message transmitted, as well as a replica of the telegraph made in the 1850s under the instructions of Weber are kept in the faculty of physics at the University of Göttingen, in Germany.
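In modern terms, Gauss and Weber's scheme was a binary channel: a positive pulse deflected the distant needle one way, a negative pulse the other, and letters were assigned variable-length sequences of the two deflections. Below is a toy sketch of such an encoding; the codeword assignments are invented for illustration and are not Gauss's actual alphabet, which survives in his notebook.

```python
# '+' stands for a positive pulse (needle deflects right), '-' for a
# negative pulse (needle deflects left). Codewords invented for illustration.
toy_code = {"A": "+", "B": "-", "C": "++", "D": "+-", "E": "-+", "F": "--"}

def transmit(word: str) -> list[str]:
    """Map each letter to its pulse sequence."""
    return [toy_code[ch] for ch in word]

print(transmit("FACE"))  # ['--', '+', '++', '-+']
```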
Gauss was convinced that this communication would be of help to his kingdom's towns. Later in the same year, instead of a voltaic pile, Gauss used an induction pulse, enabling him to transmit seven letters a minute instead of two. The inventors and university did not have the funds to develop the telegraph on their own, but they received funding from Alexander von Humboldt. Carl August Steinheil in Munich was able to build a telegraph network within the city in 1835–1836. In 1838, Steinheil installed a telegraph along the Nuremberg–Fürth railway line, built in 1835 as the first German railroad, which was the first earth-return telegraph put into service.
By 1837, William Fothergill Cooke and Charles Wheatstone had co-developed a telegraph system which used a number of needles on a board that could be moved to point to letters of the alphabet. Any number of needles could be used, depending on the number of characters to be coded. In May 1837 they patented their system. The patent recommended five needles, which coded twenty of the alphabet's 26 letters.
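A quick count shows why five needles cover only twenty letters: each signal deflects two of the five needles, one to the left and one to the right, giving 5 × 4 = 20 distinct combinations. The sketch below illustrates the counting only; the pairing of needle codes to letters is a stand-in, not Cooke and Wheatstone's actual plate layout (their grid omitted C, J, Q, U, X and Z).

```python
from itertools import permutations

# A signal deflects two distinct needles, one left and one right:
# ordered pairs of 5 needles give 5 * 4 = 20 codes.
codes = list(permutations(range(5), 2))
assert len(codes) == 20

# Illustrative pairing only; the historical diamond grid differed.
letters = "ABDEFGHIKLMNOPRSTVWY"  # the 20 letters left after dropping C,J,Q,U,X,Z
needle_code = dict(zip(letters, codes))
print(needle_code["A"])  # (0, 1): needle 0 deflected left, needle 1 right
```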
Samuel Morse independently developed and patented a recording electric telegraph in 1837. Morse's assistant Alfred Vail developed an instrument that was called the register for recording the received messages. It embossed dots and dashes on a moving paper tape by a stylus which was operated by an electromagnet. Morse and Vail developed the Morse code signalling alphabet.
On 24 May 1844, Morse sent Vail the historic first message "WHAT HATH GOD WROUGHT" from the Capitol in Washington to the old Mt. Clare Depot in Baltimore.
== Commercial telegraphy ==
=== Cooke and Wheatstone system ===
The first commercial electrical telegraph was the Cooke and Wheatstone system. A demonstration four-needle system was installed on the Euston to Camden Town section of Robert Stephenson's London and Birmingham Railway in 1837 for signalling rope-hauling of locomotives. It was rejected in favour of pneumatic whistles. Cooke and Wheatstone had their first commercial success with a system installed on the Great Western Railway over the 13 miles (21 km) from Paddington station to West Drayton in 1838. This was a five-needle, six-wire system, and had the major advantage of displaying the letter being sent so operators did not need to learn a code. The insulation failed on the underground cables between Paddington and West Drayton, and when the line was extended to Slough in 1843, the system was converted to a one-needle, two-wire configuration with uninsulated wires on poles. The cost of installing wires was ultimately more economically significant than the cost of training operators. The one-needle telegraph proved highly successful on British railways, and 15,000 sets were in use at the end of the nineteenth century; some remained in service in the 1930s. The Electric Telegraph Company, the world's first public telegraphy company, was formed in 1845 by financier John Lewis Ricardo and Cooke.
=== Wheatstone ABC telegraph ===
Wheatstone developed a practical alphabetical system in 1840 called the A.B.C. System, used mostly on private wires. This consisted of a "communicator" at the sending end and an "indicator" at the receiving end. The communicator consisted of a circular dial with a pointer and the 26 letters of the alphabet (and four punctuation marks) around its circumference. Against each letter was a key that could be pressed. A transmission would begin with the pointers on the dials at both ends set to the start position. The transmitting operator would then press down the key corresponding to the letter to be transmitted. In the base of the communicator was a magneto actuated by a handle on the front. This would be turned to apply an alternating voltage to the line. Each half cycle of the current would advance the pointers at both ends by one position. When the pointer reached the position of the depressed key, it would stop and the magneto would be disconnected from the line. The communicator's pointer was geared to the magneto mechanism. The indicator's pointer was moved by a polarised electromagnet whose armature was coupled to it through an escapement. Thus the alternating line voltage moved the indicator's pointer on to the position of the depressed key on the communicator. Pressing another key would then release the pointer and the previous key, and re-connect the magneto to the line. These machines were very robust and simple to operate, and they stayed in use in Britain until well into the 20th century.
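Mechanically, the communicator and indicator act as two stepping dials kept in lockstep: each half cycle from the magneto advances both pointers one position, and the current stops when the pointer reaches the depressed key. Below is a toy simulation of that stepping loop, with the dial simplified to the 26 letters and four punctuation marks described above; it is a sketch of the principle, not of the actual mechanism.

```python
DIAL = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ.,:?")  # 30 positions, simplified

def send_letter(start: int, key: str) -> int:
    """Step one position per half cycle until the pointer reaches the key."""
    pos = start
    while DIAL[pos] != key:
        pos = (pos + 1) % len(DIAL)  # one half cycle of magneto current
    return pos  # both communicator and indicator now rest on this key

pos = 0  # both pointers at the start position
for ch in "CAB":
    pos = send_letter(pos, ch)
    print(ch, "reached at dial position", pos)
```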
=== Morse system ===
The Morse system uses a single wire between offices. At the sending station, an operator taps on a switch called a telegraph key, spelling out text messages in Morse code. Originally, the armature was intended to make marks on paper tape, but operators learned to interpret the clicks and it was more efficient to write down the message directly.
In 1851, a conference in Vienna of countries in the German-Austrian Telegraph Union (which included many central European countries) adopted the Morse telegraph as the system for international communications. The international Morse code adopted was considerably modified from the original American Morse code, being based on a code used on Hamburg railways (Gerke, 1848). A common code was a necessary step to allow direct telegraph connection between countries; with different codes, additional operators were required to translate and retransmit the message. In 1865, a conference in Paris adopted Gerke's code as the International Morse code, which was henceforth the international standard. The US, however, continued to use American Morse code internally for some time, so international messages required retransmission in both directions.
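As a concrete illustration of the code itself, the sketch below encodes a word into International Morse code. The table is abbreviated to the letters needed for the example, and timing conventions (such as the three-unit dash and inter-letter spacing) are omitted.

```python
# Abbreviated International Morse table, just enough for the demo word.
MORSE = {"A": ".-", "E": ".", "G": "--.", "H": "....",
         "L": ".-..", "P": ".--.", "R": ".-.", "T": "-"}

def encode(word: str) -> str:
    """Encode a word, separating letters with spaces."""
    return " ".join(MORSE[ch] for ch in word.upper())

print(encode("telegraph"))  # - . .-.. . --. .-. .- .--. ....
```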
In the United States, the Morse/Vail telegraph was quickly deployed in the two decades following the first demonstration in 1844. The overland telegraph connected the west coast of the continent to the east coast by 24 October 1861, bringing an end to the Pony Express.
=== Foy–Breguet system ===
France was slow to adopt the electrical telegraph, because of the extensive optical telegraph system built during the Napoleonic era. There was also serious concern that an electrical telegraph could be quickly put out of action by enemy saboteurs, something that was much more difficult to do with optical telegraphs which had no exposed hardware between stations. The Foy-Breguet telegraph was eventually adopted. This was a two-needle system using two signal wires but displayed in a uniquely different way to other needle telegraphs. The needles made symbols similar to the Chappe optical system symbols, making it more familiar to the telegraph operators. The optical system was decommissioned starting in 1846, but not completely until 1855. In that year the Foy-Breguet system was replaced with the Morse system.
=== Expansion ===
As well as expanding rapidly along the railways, telegraphs soon spread into the field of mass communication, with instruments installed in post offices. The era of mass personal communication had begun. Telegraph networks were expensive to build, but financing was readily available, especially from London bankers. By 1852, national systems were in operation in major countries.
The New York and Mississippi Valley Printing Telegraph Company, for example, was created in 1852 in Rochester, New York and eventually became the Western Union Telegraph Company. Although many countries had telegraph networks, there was no worldwide interconnection. Message by post was still the primary means of communication to countries outside Europe.
Telegraphy was introduced in Central Asia during the 1870s.
=== Telegraphic improvements ===
A continuing goal in telegraphy was to reduce the cost per message by reducing hand-work, or increasing the sending rate. There were many experiments with moving pointers, and various electrical encodings. However, most systems were too complicated and unreliable. A successful expedient to reduce the cost per message was the development of telegraphese.
The first system that did not require skilled technicians to operate was Charles Wheatstone's ABC system in 1840 in which the letters of the alphabet were arranged around a clock-face, and the signal caused a needle to indicate the letter. This early system required the receiver to be present in real time to record the message and it reached speeds of up to 15 words a minute.
In 1846, Alexander Bain patented a chemical telegraph in Edinburgh. The signal current moved an iron pen across a moving paper tape soaked in a mixture of ammonium nitrate and potassium ferrocyanide, decomposing the chemical and producing readable blue marks in Morse code. The speed of the printing telegraph was 16 and a half words per minute, but messages still required translation into English by live copyists. Chemical telegraphy came to an end in the US in 1851, when the Morse group defeated the Bain patent in the US District Court.
For a brief period, starting with the New York–Boston line in 1848, some telegraph networks began to employ sound operators, who were trained to understand Morse code aurally. Gradually, the use of sound operators eliminated the need for telegraph receivers to include register and tape. Instead, the receiving instrument was developed into a "sounder", an electromagnet that was energized by a current and attracted a small iron lever. When the sounding key was opened or closed, the sounder lever struck an anvil. The Morse operator distinguished a dot and a dash by the short or long interval between the two clicks. The message was then written out in long-hand.
Royal Earl House developed and patented a letter-printing telegraph system in 1846 which employed an alphabetic keyboard for the transmitter and automatically printed the letters on paper at the receiver, and followed this up with a steam-powered version in 1852. Advocates of printing telegraphy said it would eliminate Morse operators' errors. The House machine was used on four main American telegraph lines by 1852. The speed of the House machine was announced as 2600 words an hour.
David Edward Hughes invented the printing telegraph in 1855; it used a keyboard of 26 keys for the alphabet and a spinning type wheel that determined the letter being transmitted by the length of time that had elapsed since the previous transmission. The system allowed for automatic recording on the receiving end. The system was very stable and accurate and became accepted around the world.
The next improvement was the Baudot code of 1874. French engineer Émile Baudot patented a printing telegraph in which the signals were translated automatically into typographic characters. Each character was assigned a five-bit code, mechanically interpreted from the state of five on/off switches. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute.
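Five on/off switches give 2^5 = 32 possible code points, which is why thirty-two codes is the natural size of a five-bit alphabet. The sketch below illustrates the principle only; the bit patterns are assigned alphabetically for clarity, whereas Baudot's real 1874 assignments were chosen for mechanical and operator convenience.

```python
# Five binary switches yield 2**5 = 32 code points.
assert 2 ** 5 == 32

# Toy assignment: letters mapped to code points in alphabetical order.
# Baudot's actual code and the later ITA alphabets were ordered differently.
codebook = {chr(ord("A") + i): format(i + 1, "05b") for i in range(26)}
print(codebook["A"], codebook["E"])  # 00001 00101 under this toy assignment
```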
By this point, reception had been automated, but the speed and accuracy of the transmission were still limited to the skill of the human operator. The first practical automated system was patented by Charles Wheatstone. The message (in Morse code) was typed onto a piece of perforated tape using a keyboard-like device called the 'Stick Punch'. The transmitter automatically ran the tape through and transmitted the message at the then exceptionally high speed of 70 words per minute.
==== Teleprinters ====
An early successful teleprinter was invented by Frederick G. Creed. In Glasgow he created his first keyboard perforator, which used compressed air to punch the holes. He also created a reperforator (receiving perforator) and a printer. The reperforator punched incoming Morse signals onto paper tape and the printer decoded this tape to produce alphanumeric characters on plain paper. This was the origin of the Creed High Speed Automatic Printing System, which could run at an unprecedented 200 words per minute. His system was adopted by the Daily Mail for daily transmission of the newspaper contents.
With the invention of the teletypewriter, telegraphic encoding became fully automated. Early teletypewriters used the ITA-1 Baudot code, a five-bit code. This yielded only thirty-two codes, so the code was doubled up into two "shifts", "letters" and "figures", giving each code point two meanings; an explicit, unshared shift code prefaced each run of letters or figures. In 1901, Baudot's code was modified by Donald Murray.
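The shift mechanism behaves like a latching typewriter shift: a "letters" or "figures" code changes the meaning of every following code until the opposite shift code arrives, so a shift code need be sent only when the character class changes. Here is a sketch of that logic; the shift symbols are placeholders rather than real five-bit values.

```python
LETTERS_SHIFT, FIGURES_SHIFT = "LTRS", "FIGS"  # placeholder shift symbols

def encode_with_shifts(text: str) -> list[str]:
    """Emit a shift code only when switching between letters and figures."""
    out, mode = [], None
    for ch in text:
        want = "FIGS" if ch.isdigit() else "LTRS"
        if want != mode:  # character class changed: emit the shift code
            out.append(FIGURES_SHIFT if want == "FIGS" else LETTERS_SHIFT)
            mode = want
        out.append(ch)  # stand-in for the character's 5-bit code
    return out

print(encode_with_shifts("AB12CD"))
# ['LTRS', 'A', 'B', 'FIGS', '1', '2', 'LTRS', 'C', 'D']
```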
In the 1930s, teleprinters were produced by Teletype in the US, Creed in Britain and Siemens in Germany.
By 1935, message routing was the last great barrier to full automation. Large telegraphy providers began to develop systems that used telephone-like rotary dialling to connect teletypewriters. The resulting systems were called "Telex" (TELegraph EXchange). Telex machines first performed rotary-telephone-style pulse dialling for circuit switching, and then sent data in ITA2. This "type A" Telex routing functionally automated message routing.
The first wide-coverage Telex network was implemented in Germany during the 1930s as a network used to communicate within the government.
At the rate of 45.45 (±0.5%) baud – considered speedy at the time – up to 25 telex channels could share a single long-distance telephone channel by using voice frequency telegraphy multiplexing, making telex the least expensive method of reliable long-distance communication.
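The channel count can be sanity-checked with rough figures: a voice telephone channel passes roughly 300–3400 Hz, and voice-frequency telegraphy commonly spaced channels of this speed class about 120 Hz apart. Both numbers are assumptions for this back-of-the-envelope sketch, not figures from the text above.

```python
# How many telegraph tone slots fit in one voice channel?
voice_band_hz = 3400 - 300   # assumed usable telephone bandwidth (~3100 Hz)
slot_spacing_hz = 120        # assumed VF telegraphy channel spacing
print(voice_band_hz // slot_spacing_hz)  # 25 channels

# At 45.45 baud, each signalling element lasts about 22 ms.
print(round(1000 / 45.45, 1))  # 22.0 ms
```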
Automatic teleprinter exchange service was introduced into Canada by CPR Telegraphs and CN Telegraph in July 1957 and in 1958, Western Union started to build a Telex network in the United States.
==== The harmonic telegraph ====
The most expensive aspect of a telegraph system was the installation – the laying of the wire, which was often very long. The costs would be better covered by finding a way to send more than one message at a time through the single wire, thus increasing revenue per wire. Early devices included the duplex and the quadruplex which allowed, respectively, one or two telegraph transmissions in each direction. However, an even greater number of channels was desired on the busiest lines. In the latter half of the 1800s, several inventors worked towards creating a method for doing just that, including Charles Bourseul, Thomas Edison, Elisha Gray, and Alexander Graham Bell.
One approach was to have resonators of several different frequencies act as carriers of a modulated on-off signal. This was the harmonic telegraph, a form of frequency-division multiplexing. These various frequencies, referred to as harmonics, could then be combined into one complex signal and sent down the single wire. On the receiving end, the frequencies would be separated with a matching set of resonators.
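The principle can be sketched numerically: several on-off keyed tones are summed onto one simulated wire, and each channel is recovered by correlating the line signal with its own carrier, which is the job the tuned resonators performed. The carrier frequencies and key states below are arbitrary choices for the demonstration.

```python
import numpy as np

fs = 8000                           # samples per second
t = np.arange(fs) / fs              # one second of signal
keyed = {400: 1, 700: 0, 1000: 1}   # carrier frequency (Hz) -> key down?

# All keyed tones share the single wire.
line = sum(k * np.sin(2 * np.pi * f * t) for f, k in keyed.items())

# "Resonator" per channel: project the line signal onto each carrier.
for f in keyed:
    energy = abs(np.dot(line, np.sin(2 * np.pi * f * t))) / fs
    print(f, "Hz:", "on" if energy > 0.25 else "off")
```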
With a set of frequencies being carried down a single wire, it was realized that the human voice itself could be transmitted electrically through the wire. This effort led to the invention of the telephone. (While the work toward packing multiple telegraph signals onto one wire led to telephony, later advances would pack multiple voice signals onto one wire by increasing the bandwidth by modulating frequencies much higher than human hearing. Eventually, the bandwidth was widened much further by using laser light signals sent through fiber optic cables. Fiber optic transmission can carry 25,000 telephone signals simultaneously down a single fiber.)
=== Oceanic telegraph cables ===
Soon after the first successful telegraph systems were operational, the possibility of transmitting messages across the sea by way of submarine communications cables was first proposed. One of the primary technical challenges was to insulate the submarine cable sufficiently to prevent the electric current from leaking out into the water. In 1842, the Scottish surgeon William Montgomerie introduced gutta-percha, the adhesive juice of the Palaquium gutta tree, to Europe. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845 the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. Gutta-percha was used as insulation on a wire laid across the Rhine between Deutz and Cologne. In 1849, C. V. Walker, electrician to the South Eastern Railway, submerged a 2-mile (3.2 km) wire coated with gutta-percha off the coast at Folkestone, which was tested successfully.
John Watkins Brett, an engineer from Bristol, sought and obtained permission from Louis-Philippe in 1847 to establish telegraphic communication between France and England. The first undersea cable was laid in 1850, connecting the two countries and was followed by connections to Ireland and the Low Countries.
The Atlantic Telegraph Company was formed in London in 1856 to undertake the construction of a commercial telegraph cable across the Atlantic Ocean. It was successfully completed on 18 July 1866 by the ship SS Great Eastern, captained by Sir James Anderson, after many mishaps along the way. John Pender, one of the men on the Great Eastern, later founded several telecommunications companies, primarily laying cables between Britain and Southeast Asia. Earlier transatlantic submarine cable installations had been attempted in 1857, 1858 and 1865. The 1857 cable only operated intermittently for a few days or weeks before it failed. The study of underwater telegraph cables accelerated interest in mathematical analysis of very long transmission lines. The telegraph lines from Britain to India were connected in 1870. (Those several companies combined to form the Eastern Telegraph Company in 1872.) The HMS Challenger expedition in 1873–1876 mapped the ocean floor for future underwater telegraph cables.
Australia was first linked to the rest of the world in October 1872 by a submarine telegraph cable at Darwin. This brought news reports from the rest of the world. The telegraph across the Pacific was completed in 1902, finally encircling the world.
From the 1850s until well into the 20th century, British submarine cable systems dominated the world system. This was set out as a formal strategic goal, which became known as the All Red Line. In 1896, there were thirty cable laying ships in the world and twenty-four of them were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent.
=== Cable and Wireless Company ===
Cable & Wireless was a British telecommunications company that traced its origins back to the 1860s, with Sir John Pender as the founder, although the name was only adopted in 1934. It was formed from successive mergers including:
The Falmouth, Malta, Gibraltar Telegraph Company
The British Indian Submarine Telegraph Company
The Marseilles, Algiers and Malta Telegraph Company
The Eastern Telegraph Company
The Eastern Extension Australasia and China Telegraph Company
The Eastern and Associated Telegraph Companies
== Telegraphy and longitude ==
Main article: History of longitude § Land surveying and telegraphy.
The telegraph was very important for sending time signals to determine longitude, providing greater accuracy than previously available. Longitude was measured by comparing local time (for example local noon occurs when the sun is at its highest above the horizon) with absolute time (a time that is the same for an observer anywhere on earth). If the local times of two places differ by one hour, the difference in longitude between them is 15° (360°/24h). Before telegraphy, absolute time could be obtained from astronomical events, such as eclipses, occultations or lunar distances, or by transporting an accurate clock (a chronometer) from one location to the other.
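The conversion itself is simple proportionality, since the Earth turns 360° in 24 hours. Below is a small sketch of the calculation with an invented example; the equatorial figure at the end assumes the standard 40,075 km circumference.

```python
def longitude_difference_deg(time_diff_hours: float) -> float:
    """The Earth rotates 360 degrees in 24 hours: 15 degrees per hour."""
    return time_diff_hours * 15.0

# If local noon at station B lags station A by 2.5 hours,
# B lies 37.5 degrees west of A.
print(longitude_difference_deg(2.5))  # 37.5

# One arcsecond of longitude at the equator is about 31 m.
print(round(40075e3 / (360 * 3600), 1))  # 30.9
```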
The idea of using the telegraph to transmit a time signal for longitude determination was suggested by François Arago to Samuel Morse in 1837, and the first test of this idea was made by Capt. Wilkes of the U.S. Navy in 1844, over Morse's line between Washington and Baltimore. The method was soon in practical use for longitude determination, in particular by the U.S. Coast Survey, and over longer and longer distances as the telegraph network spread across North America and the world, and as technical developments improved accuracy and productivity.
The "telegraphic longitude net" soon became worldwide. Transatlantic links between Europe and North America were established in 1866 and 1870. The US Navy extended observations into the West Indies and Central and South America with an additional transatlantic link from South America to Lisbon between 1874 and 1890. British, Russian and US observations created a chain from Europe through Suez, Aden, Madras, Singapore, China and Japan, to Vladivostok, thence to Saint Petersburg and back to Western Europe.
Australia's telegraph network was linked to Singapore's via Java in 1871, and the net circled the globe in 1902 with the connection of the Australia and New Zealand networks to Canada's via the All Red Line. The two determinations of longitudes, one transmitted from east to west and the other from west to east, agreed within one second of arc (1⁄15 second of time – less than 30 metres).
== Telegraphy in war ==
The ability to send telegrams brought obvious advantages to those conducting war. Secret messages were encoded, so interception alone would not be sufficient for the opposing side to gain an advantage. There were also geographical constraints on intercepting telegraph cables, which improved security; however, once radio telegraphy was developed, interception became far more widespread.
=== Crimean War ===
The Crimean War was one of the first conflicts to use telegraphs and was one of the first to be documented extensively. In 1854, the government in London created a military Telegraph Detachment for the Army commanded by an officer of the Royal Engineers. It was to comprise twenty-five men from the Royal Corps of Sappers & Miners trained by the Electric Telegraph Company to construct and work the first field electric telegraph.
Journalistic recording of the war was provided by William Howard Russell (writing for The Times newspaper) with photographs by Roger Fenton. News from war correspondents kept the public of the nations involved in the war informed of the day-to-day events in a way that had not been possible in any previous war. After the French extended their telegraph lines to the coast of the Black Sea in late 1854, war news began reaching London in two days. When the British laid an underwater cable to the Crimean peninsula in April 1855, news reached London in a few hours. These prompt daily news reports energised British public opinion on the war, which brought down the government and led to Lord Palmerston becoming prime minister.
=== American Civil War ===
During the American Civil War the telegraph proved its value as a tactical, operational, and strategic communication medium and was an important contributor to Union victory. By contrast, the Confederacy failed to make effective use of the South's much smaller telegraph network. Before the war, telegraph systems were used primarily in the commercial sector; government buildings were not interconnected by telegraph lines but relied on runners to carry messages back and forth. The government saw no need to connect lines within city limits, though it did see the value of connections between cities. As the hub of government, Washington, D.C. had the most connections, but only a few lines ran north and south out of the city. It was not until the Civil War that the government saw the true potential of the telegraph system. Soon after the shelling of Fort Sumter, the South cut the telegraph lines running into Washington, putting the city in a state of panic over a feared immediate Southern invasion.
Within six months of the start of the war, the U.S. Military Telegraph Corps (USMT) had laid approximately 300 miles (480 km) of line. By war's end it had laid approximately 15,000 miles (24,000 km) of line, some 8,000 for military and 5,000 for commercial use, and had handled approximately 6.5 million messages. The telegraph was important not only for communication within the armed forces but also in the civilian sector, helping political leaders to maintain control over their districts.
Even before the war, the American Telegraph Company had informally censored suspect messages to block aid to the secession movement. During the war, Secretary of War Simon Cameron, and later Edwin Stanton, sought control over the telegraph lines to manage the flow of information. One of Stanton's first acts as Secretary of War was to have the lines rerouted to terminate at the War Department rather than at McClellan's headquarters. Stanton himself said "[telegraphy] is my right arm". Telegraphy assisted Northern victories, including the Battle of Antietam (1862), the Battle of Chickamauga (1863), and Sherman's March to the Sea (1864).
The telegraph system still had its flaws. The USMT, while the main source of telegraphers and cable, remained a civilian agency: most operators were first hired by the telegraph companies and then contracted out to the War Department. This created tension between generals and their operators, one source of irritation being that USMT operators were not subject to military authority. They usually performed without hesitation, but they were not required to, so Albert Myer created the U.S. Army Signal Corps in February 1863. As the new head of the Signal Corps, Myer tried to bring all telegraph and flag signaling under his command, and therefore under military discipline. After creating the Signal Corps, Myer pushed to develop new telegraph systems. While the USMT relied primarily on civilian lines and operators, the Signal Corps' new field telegraph could be deployed and dismantled faster than the USMT's system.
=== First World War ===
During World War I, Britain's telegraph communications were almost completely uninterrupted, while Britain was able to quickly cut Germany's cables worldwide. The British government censored telegraph cable companies in an effort to root out espionage and to restrict financial transactions with Central Powers nations. British access to transatlantic cables, together with its codebreaking expertise, led to the Zimmermann Telegram incident, which contributed to the US joining the war. Despite British acquisition of German colonies and expansion into the Middle East, war debt caused Britain's control over telegraph cables to weaken while US control grew.
=== Second World War ===
World War II revived the 'cable war' of 1914–1918. In 1939, German-owned cables across the Atlantic were cut once again, and, in 1940, Italian cables to South America and Spain were cut in retaliation for Italian action against two of the five British cables linking Gibraltar and Malta. Electra House, Cable & Wireless's head office and central cable station, was damaged by German bombing in 1941.
Resistance movements in occupied Europe sabotaged communications facilities such as telegraph lines, forcing the Germans to use wireless telegraphy, which could then be intercepted by Britain.
The Germans developed a highly complex teleprinter attachment (German: Schlüssel-Zusatz, "cipher attachment") that was used for enciphering telegrams, using the Lorenz cipher, between German High Command (OKW) and the army groups in the field. These contained situation reports, battle plans, and discussions of strategy and tactics. Britain intercepted these signals, diagnosed how the encrypting machine worked, and decrypted a large amount of teleprinter traffic.
== End of the telegraph era ==
In America, the end of the telegraph era can be associated with the decline of the Western Union Telegraph Company. Western Union was the leading telegraph provider in America and was seen as the chief competitor to the National Bell Telephone Company; both companies invested in telegraph and telephone technology. Western Union ceded the advantage in telephone technology to Bell because its upper management failed to foresee that the telephone would surpass the then-dominant telegraph. Western Union then lost the legal battle over its telephone patents. This led to Western Union accepting a lesser position in the telephone competition, which in turn diminished the telegraph.
While the telegraph itself was not the focus of the legal battles that occurred around 1878, the companies affected by them were the main powers in telegraphy at the time. Western Union thought that the agreement of 1878 would solidify telegraphy as the long-range communication medium of choice. However, having underestimated the telegraph's future and signed poor contracts, Western Union found itself in decline. AT&T acquired working control of Western Union in 1909 but relinquished it in 1914 under threat of antitrust action. AT&T bought Western Union's electronic mail and Telex businesses in 1990.
Although commercial "telegraph" services are still available in many countries, transmission is usually done via a computer network rather than a dedicated wired connection.
== See also ==
== References ==
== Bibliography ==
Beauchamp, Ken (2001). History of Telegraphy. London: The Institution of Electrical Engineers. ISBN 978-0-85296-792-8.
Bowers, Brian (2001). Sir Charles Wheatstone: 1802–1875. IET. ISBN 0852961030.
Calvert, J. B. (2008). "The Electromagnetic Telegraph".
Copeland, B. Jack, ed. (2006). Colossus: The Secrets of Bletchley Park's Codebreaking Computers. Oxford: Oxford University Press. ISBN 978-0-19-284055-4.
Fahie, John Joseph (1884). A History of Electric Telegraphy, to the Year 1837. London: E. & F.N. Spon. OCLC 559318239.
Figes, Orlando (2010). Crimea: The Last Crusade. London: Allen Lane. ISBN 978-0-7139-9704-0.
Gibberd, William (1966). Australian Dictionary of Biography: Edward Davy.
Hochfelder, David (2012). The Telegraph in America, 1832–1920. Johns Hopkins University Press. pp. 6–17, 138–141. ISBN 9781421407470.
Holzmann, Gerard J.; Pehrson, Björn (1995). The Early History of Data Networks. Wiley. ISBN 0818667826.
Huurdeman, Anton A. (2003). The Worldwide History of Telecommunications. Wiley-Blackwell. ISBN 978-0471205050.
Jones, R. Victor (1999). Samuel Thomas von Sömmering's "Space Multiplexed" Electrochemical Telegraph (1808–1810). Archived from the original on 11 October 2012. Retrieved 1 May 2009. Attributed to Michaelis, Anthony R. (1965), From semaphore to satellite, Geneva: International Telecommunication Union
Kennedy, P. M. (October 1971). "Imperial Cable Communications and Strategy, 1870–1914". The English Historical Review. 86 (341): 728–752. doi:10.1093/ehr/lxxxvi.cccxli.728. JSTOR 563928.
Kieve, Jeffrey L. (1973). The Electric Telegraph: A Social and Economic History. David and Charles. ISBN 0-7153-5883-9. OCLC 655205099.
Mercer, David (2006). The Telephone: The Life Story of a Technology. Greenwood Publishing Group. ISBN 031333207X.
Schwoch, James (2018). Wired into Nature: The Telegraph and the North American Frontier. University of Illinois Press. ISBN 978-0252041778.
== Further reading ==
Botjer, George F. (2015). Samuel F.B. Morse and the Dawn of the Age of Electricity. Lanham, MD: Lexington Books. ISBN 978-1-4985-0140-8 – via Internet Archive.
Cooke, W.F., The Electric Telegraph, Was it invented by Prof. Wheatstone?, London 1856.
Gray, Thomas (1892). "The Inventors of the Telegraph And Telephone". Annual Report of the Board of Regents of the Smithsonian Institution. 71: 639–659. Retrieved 7 August 2009.
Gauß, C. F., Works, Göttingen 1863–1933.
Howe, Daniel Walker, What Hath God Wrought: The Transformation of America, 1815–1848, Oxford University Press, 2007 ISBN 0199743797.
Peterson, M.J. Roots of Interconnection: Communications, Transportation and Phases of the Industrial Revolution, International Dimensions of Ethics Education in Science and Engineering Background Reading, Version 1; February 2008.
Steinheil, C.A., Ueber Telegraphie, München 1838.
Yates, JoAnne. The Telegraph's Effect on Nineteenth Century Markets and Firms, Massachusetts Institute of Technology, pp. 149–163.
== External links ==
Morse Telegraph Club, Inc. (The Morse Telegraph Club is an international non-profit organization dedicated to the perpetuation of the knowledge and traditions of telegraphy.)
"Transatlantic Cable Communications". Canada's Digital Collections. Archived from the original on 29 August 2005.
Shilling's telegraph, an exhibit of the A.S. Popov Central Museum of Communications
History of electromagnetic telegraph
The first electric telegraphs
The Dawn of Telegraphy (in Russian)
Pavel Shilling and his telegraph – article in PCWeek, Russian edition.
Distant Writing – The History of the Telegraph Companies in Britain between 1838 and 1868
NASA – Carrington Super Flare Archived 29 March 2010 at the Wayback Machine NASA 6 May 2008
How Cables Unite The World – a 1902 article about telegraph networks and technology from the magazine The World's Work
"Telegraph" . New International Encyclopedia. 1905.
Indiana telegraph and telephone collection, Rare Books and Manuscripts, Indiana State Library
Wonders of electricity and the elements, being a popular account of modern electrical and magnetic discoveries, magnetism and electric machines, the electric telegraph and the electric light, and the metal bases, salt, and acids from Science History Institute Digital Collections
The electro magnetic telegraph: with an historical account of its rise, progress, and present condition from Science History Institute Digital Collections | Wikipedia/Electrical_telegraphy |
A utility pole, commonly referred to as a transmission pole, telephone pole, telecommunication pole, power pole, hydro pole, telegraph pole, or telegraph post, is a column or post used to support overhead power lines and various other public utilities, such as electrical cable, fiber optic cable, and related equipment such as transformers and street lights, depending on its application. Utility poles are used for two different types of power lines: subtransmission lines, which carry higher voltage power between substations, and distribution lines, which distribute lower voltage power to customers.
Electrical wires and cables are routed overhead on utility poles as an inexpensive way to keep them insulated from the ground and out of the way of people and vehicles. Utility poles are usually made out of wood, aluminum alloy, metal, concrete, or composites like fiberglass. A Stobie pole is a multi-purpose pole made of two steel joists held apart by a slab of concrete in the middle, generally found in South Australia.
The first poles were erected in 1843 by telegraph pioneer William Fothergill Cooke on a line along the Great Western Railway. Utility poles were first used in the mid-19th century in America with telegraph systems, starting with Samuel Morse, who attempted to bury a line between Baltimore and Washington, D.C., but moved it above ground when this system proved faulty. Today, underground distribution lines are increasingly used as an alternative to utility poles in residential neighborhoods, due to poles' perceived ugliness, as well as safety concerns in areas with large amounts of snow or ice build-up. Underground lines have also been suggested in areas prone to hurricanes and blizzards as a way to reduce power outages.
== Use ==
Utility poles are commonly used to carry two types of electric power lines: distribution lines (or "feeders") and subtransmission lines. Distribution lines carry power from local substations to customers. They generally carry voltages from 4.6 to 33 kilovolts (kV) for distances up to 30 mi (50 km), and include transformers to step the voltage down from the primary voltage to the lower secondary voltage used by the customer. A service drop carries this lower voltage to the customer's premises.
Subtransmission lines carry higher voltage power from regional substations to local substations. They usually carry 46 kV, 69 kV, or 115 kV for distances up to 60 mi (100 km). 230 kV lines are often supported on H-shaped towers made with two or three poles. Transmission lines carrying voltages of above 230 kV are usually not supported by poles, but by metal pylons (known as transmission towers in the US).
For economic or practical reasons, such as to save space in urban areas, a distribution line is often carried on the same poles as a subtransmission line, mounted under the higher voltage lines; this practice is called "underbuild". Telecommunication cables are usually carried on the same poles that support power lines; poles shared in this fashion are known as joint-use poles. Telecommunication cables may, however, also run on their own dedicated poles.
== Description ==
The standard utility pole in the United States is about 35 ft (10 m) tall and is buried about 6 ft (2 m) in the ground. To meet clearance regulations, however, poles can reach heights of 120 feet (37 m) or more. They are typically spaced about 125 ft (40 m) apart in urban areas, or about 300 ft (100 m) in rural areas, but distances vary widely based on terrain. Joint-use poles are usually owned by one utility, which leases space on them for other cables. In the United States, the National Electrical Safety Code, published by the Institute of Electrical and Electronics Engineers (IEEE) (not to be confused with the National Electrical Code published by the National Fire Protection Association [NFPA]), sets the standards for construction and maintenance of utility poles and their equipment.
=== Pole materials ===
Most utility poles are made of wood, pressure-treated with some type of preservative for protection against rot, fungi and insects. Southern yellow pine is the most widely used species in the United States; however, many species of long straight trees are used to make utility poles, including Douglas fir, jack pine, lodgepole pine, western red cedar, and Pacific silver fir.
Traditionally, the preservative used was creosote, but due to environmental concerns, alternatives such as pentachlorophenol, copper naphthenate and borates are becoming widespread in the United States, where standards for wood preservative materials and wood preservation processes, along with test criteria, are set by ANSI, ASTM, and American Wood Protection Association (AWPA) specifications. Despite the preservatives, wood poles decay and have a life of approximately 25 to 50 years depending on climate and soil conditions, and therefore require regular inspection and remedial preservative treatments. Woodpecker damage to wood poles is the most significant cause of pole deterioration in some parts of the U.S.
Other common utility pole materials are aluminum, steel and concrete, with composites (such as fiberglass) also becoming more prevalent. One particular patented utility pole variant used in Australia is the Stobie pole, made up of two vertical steel posts with a slab of concrete between them.
=== Power distribution wires and equipment ===
On poles carrying both electrical and communications wiring, the electric power distribution lines and associated equipment are mounted at the top of the pole above the communication cables, for safety. The vertical space on the pole reserved for this equipment is called the supply space. The wires themselves are usually uninsulated, and supported by insulators, commonly mounted on a horizontal beam (crossarm). Power is transmitted using the three-phase system, with three wires, or phases, labeled "A", "B", and "C".
Subtransmission lines comprise only these three wires, plus sometimes an overhead ground wire (OGW), also called a "static line" or a "neutral", suspended above them. The OGW acts like a lightning rod, providing a low-resistance path to ground and thus protecting the phase conductors from lightning.
Distribution lines use two systems, either grounded-wye ("Y" on electrical schematics) or delta (Greek letter "Δ" on electrical schematics). A delta system requires only a conductor for each of the three phases. A grounded-wye system requires a fourth conductor, the neutral, whose source is the center of the "Y" and is grounded. However, "spur lines" branching off the main line to provide power to side streets often carry only one or two phase wires, plus the neutral. A wide range of standard distribution voltages are used, from 2,400 V to 34,500 V. On poles near a service drop, there is a pole-mounted step-down distribution transformer to transform the high distribution voltage to the lower secondary voltage provided to the customer. In North America, service drops provide 240/120 V split-phase power for residential and light commercial service, using cylindrical single-phase transformers. In Europe and most other countries, 230 V three phase (230Y400) service drops are used. The transformer's primary is connected to the distribution line through protective devices called fuse cutouts. In the event of an overload, the fuse melts and the device pivots open to provide a visual indication of the problem. They can also be opened manually by linemen using a long insulated rod called a hot stick to disconnect the transformer from the line.
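The secondary voltages above follow from the transformer turns ratio. Below is a minimal illustrative sketch; the 7,200 V primary and 30:1 ratio are assumed figures typical of North American distribution, not values from this article:

```python
# Illustrative sketch (assumed figures): relating a distribution primary
# voltage to the 240/120 V split-phase secondary via an ideal transformer.

def secondary_voltage(primary_v: float, turns_ratio: float) -> float:
    """Ideal transformer: secondary voltage = primary voltage / turns ratio."""
    return primary_v / turns_ratio

primary = 7_200.0   # volts phase-to-neutral (assumed typical value)
ratio = 30.0        # primary:secondary turns ratio (assumed)

full_winding = secondary_voltage(primary, ratio)  # 240 V across the secondary winding
center_tapped = full_winding / 2                  # 120 V from either end to the center tap

print(f"{full_winding:.0f} V across secondary, {center_tapped:.0f} V to center tap")
```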
The pole may be grounded with a heavy bare copper or copper-clad steel wire running down the pole, attached to the metal pin supporting each insulator, and at the bottom connected to a metal rod driven into the ground. Some countries ground every pole, while others only ground every fifth pole and any pole with a transformer on it. This provides a path for leakage currents across the surface of the insulators to reach ground, preventing the current from flowing through the wooden pole, which could cause a fire or shock hazard. It provides similar protection in case of flashovers and lightning strikes. A surge arrester (also called a lightning arrester) may also be installed between the line (ahead of the cutout) and the ground wire for lightning protection. The purpose of the device is to conduct extremely high voltages present on the line directly to ground.
If uninsulated conductors touch due to wind or fallen trees, the resultant sparks can start wildfires. To reduce this problem, aerial bundled conductors are being introduced.
=== Communication cables ===
The communications cables are attached below the electric power lines, in a vertical space along the pole designated the communications space. The communications space is separated from the lowest electrical conductor by the communication worker safety zone, which provides room for workers to maneuver safely while servicing the communication cables, avoiding contact with the power lines.
The most common communication cables found on utility poles are copper or fibre-optic cable (FOC) for telephone lines and coaxial cable for cable television (CATV). Coaxial or optical fibre cables linking computer networks are also increasingly found on poles in urban areas. The cable linking the telephone exchange to local customers is a thick cable lashed to a thin supporting cable, containing hundreds of twisted pair subscriber lines. Each twisted pair line provides a single telephone circuit or local loop to a customer. There may also be FOCs interconnecting telephone exchanges. Like electrical distribution lines, communication cables connect to service drops when used to provide local service to customers.
=== Other equipment ===
Utility poles may also carry other equipment such as street lights, supports for traffic lights and overhead wires for electric trolleys, and cellular network antennas. They can also carry fixtures and decorations for holidays or other events specific to the city where they are located.
Solar panels mounted on utility poles may power auxiliary equipment where the expense of a power line connection is unwanted.
Streetlights and holiday fixtures are powered directly from secondary distribution.
== Pole attachment hardware ==
The primary purpose of pole attachment hardware is to secure the cable and associated aerial plant facilities to poles and to help facilitate necessary plant rearrangements. An aerial plant network requires high-quality reliable hardware to
Structurally support the distribution cable plant
Provide directional guying to accommodate lateral stresses created on the pole by pole line and pole loading configurations
Provide the physical support and protection for drop cable plant from the pole to the customer premises
Transition cable plant from the aerial network to underground and buried plant
Provide the means for safe and effective grounding, bonding, and isolation connections for the metallic and dielectric components of the network.
Functional performance requirements common to pole line hardware for utility poles made of wood, steel, concrete, or Fiber-Reinforced Composite (FRC) materials are contained in Telcordia GR-3174, Generic Requirements for Hardware Attachments for Utility Poles.
=== Attachment hardware by pole type ===
Wood poles
The traditional wood pole material provides great flexibility during placement of hardware and cable apparatus. Holes are easily drilled to fit the exact hardware needs and requirements. In addition, fasteners such as lags and screws are easily applied to wood structures to support outside plant (OSP) apparatus.
Non-wood poles
There are three main non-wood pole materials and structures on which the attachment hardware may be mounted: concrete, steel, and fiber-reinforced composite (FRC). Each material has intrinsic characteristics that need to be considered during the design and manufacture of the attachment hardware.
Concrete poles
The most widespread use of concrete poles is in marine environments and coastal zones where excellent corrosion resistance is required to reduce the impact of sea water, salt fog, and corrosive soil conditions (e.g., marsh). Their heavy weight also helps the concrete poles resist the high winds possible in coastal areas.
The various designs for concrete poles include tapered structures and round poles made of solid concrete; pre-stressed concrete (spun-cast or statically cast); and a hybrid of concrete and steel.
Drilling installed concrete poles is not feasible, so users may wish to have the attachment hardware cast into the concrete during pole manufacture. As a result of these operational difficulties, banded hardware has become the more popular means of attaching cable plant to concrete poles.
Design criteria and requirements for concrete poles can be derived from various industry documents including, but not limited to, ASCE-111, ACI-318, ASTM C935, and ASTM C1089.
Steel poles
Steel poles can provide advantages for high-voltage lines, where taller poles are required for enhanced clearances and longer span requirements. Tubular steel poles are typically made from 11-gauge galvanized steel, with thicker 10- or 7-gauge materials used for some taller poles because of their higher strength and rigidity. For tall tower-type structures, 5-gauge materials are used.
Although steel poles can be drilled on-site with an annular drill bit or standard twist drill, it is not a recommended practice. As with concrete poles, bolt holes could be built into the steel pole during manufacture for use as general attachment points or places for steps to be bolted into the pole.
Welding of attachment hardware or attachment ledges to steel poles may be a feasible alternate approach to help provide reliable attachment points. However, operational and practical hazards of welding in the field may make this process undesirable or uneconomical.
Steel poles should meet industry specifications such as TIA/EIA-222-G, Structural Standard for Antenna Supporting Structures and Antennas (current); the earlier TIA/EIA-222, Structural Standards for Steel Antenna Towers and Antenna Supporting Structures; and TIA/EIA-RS-222; or an equivalent requirement set, to help ensure a robust, good-quality pole is used.
Fiber-reinforced composite (FRC) poles
FRC poles cover a family of pole materials that combine fiberglass (fiber) strength members with a cross-linked polyester resin and a variety of chemical additives to produce a lightweight, weather-resistant structure. FRC poles are hollow and similar to tubular steel poles, with a typical wall thickness of 1⁄4 to 1⁄2 in (6 to 13 mm) and an outer polyurethane coating about 0.002 in (0.05 mm) thick.
As with all the other non-wood poles, FRC poles cannot be climbed with the traditional hooks and gaffs. FRC poles can be pre-drilled by the manufacturer, or holes can be drilled on site. Attachments using lag bolts, teeth, nails, and staples are unacceptable for FRC poles; through-bolts are used instead of lag bolts for maximum bonding to the pole and to avoid loosening of hardware.
The relevant industry documents covering FRC poles include: ASTM D4923, ANSI C136.20, OPCS-03-02, and Telcordia GR-3159, Generic Requirements for Fiber-Reinforced Composite (FRC), Concrete, and Steel Utility Poles.
== Access ==
In some countries, such as the United Kingdom, utility poles have sets of brackets arranged in a standard pattern up the pole to act as hand and foot holds so that maintenance and repair workers can climb the pole to work on the lines. In the United States, such steps have been determined to be a public hazard and are no longer allowed on new poles. Linemen may use climbing spikes called gaffs to ascend wooden poles without steps on them. In the UK, boots fitted with steel loops that go around the pole (known as "Scandinavian Climbers") are also used for climbing poles. In the US, linemen use bucket trucks for the vast majority of poles that are accessible by vehicle.
== Dead-end poles ==
The poles at the end of a straight section of utility line where the line ends or angles off in another direction are called dead-end poles in the United States. Elsewhere they may be referred to as anchor or termination poles. These must carry the lateral tension of the long straight sections of wire. They are usually made with heavier construction. The power lines are attached to the pole by horizontal strain insulators, either placed on crossarms (which are either doubled, tripled, or replaced with a steel crossarm, to provide more resistance to the tension forces) or attached directly to the pole itself.
Dead-end and other poles that support lateral loads have guy-wires to support them. The guys always have strain insulators inserted in their length to prevent any high voltages caused by electrical faults from reaching the lower portion of the cable that is accessible by the public. In populated areas, guy wires are often encased in a yellow plastic or wood tube with reflectors attached to their lower end, so that they can be seen more easily, reducing the chance of people and animals walking into them or vehicles crashing into them.
Another means of providing support for lateral loads is a push brace pole, a second shorter pole that is attached to the side of the first and runs at an angle to the ground. If there is no space for a lateral support, a stronger pole, e.g. a construction of concrete or iron, is used.
== History ==
The system of suspending telegraph wires from poles with ceramic insulators was invented and patented by British telegraph pioneer William Fothergill Cooke. Cooke was the driving force in establishing the electrical telegraph on a commercial basis. With Charles Wheatstone he invented the Cooke and Wheatstone telegraph and founded the world's first telegraph company, the Electric Telegraph Company. Telegraph poles were first used on the Great Western Railway in 1843 when the Cooke and Wheatstone telegraph line was extended to Slough. The line had previously used buried cables, but that system had proved troublesome with failing insulation.: 32 In Britain, the trees used for telegraph poles were either native larch or pine from Sweden and Norway. Poles in early installations were treated with tar, but these were found to last only around seven years. Later poles were treated instead with creosote or copper sulphate as the preservative.: 80
Utility poles were first used in the mid-19th century in America with telegraph systems. In 1844, the United States Congress granted Samuel Morse $30,000 (equivalent to $1,012,400 in 2024) to build a 40-mile telegraph line between Baltimore, Maryland and Washington, D.C. Morse began by having a lead-sheathed cable made. After laying seven miles (11 km) underground, he tested it. He found so many faults with this system that he dug up his cable, stripped off its sheath, bought poles and strung his wires overhead. On February 7, 1844, Morse inserted the following advertisement in the Washington newspaper: "Sealed proposals will be received by the undersigned for furnishing 700 straight and sound chestnut posts with the bark on and of the following dimensions to wit: 'Each post must not be less than eight inches in diameter at the butt and tapering to five or six inches at the top. Six hundred and eighty of said posts to be 24 feet in length, and 20 of them 30 feet in length.'"
In some parts of Australia, wooden poles are rapidly destroyed by termites, so metal poles must be used instead; in much of the interior, wooden poles are also vulnerable to fire. The Oppenheimer pole is a collapsible wrought iron pole in three sections. It is named after Oppenheimer and Company in Germany, but the poles were mostly manufactured in England under license. They were used on the Australian Overland Telegraph Line built in 1872, which connected the continent north to south directly through the centre and linked to the rest of the world through a submarine cable at Darwin. The Stobie pole was invented in 1924 by James Cyril Stobie of the Adelaide Electric Supply Company and first used in South Terrace, Adelaide.
One of the early Bell System lines was the Washington DC–Norfolk line, which was, for the most part, built of square-sawn tapered poles of yellow pine, probably treated to refusal with creosote. "Treated to refusal" means that the manufacturer forces preservatives into the wood until it refuses to accept more, but performance is not guaranteed.
Some of these were still in service after 80 years. The building of pole lines was resisted in some urban areas in the late 19th century, and political pressure for undergrounding remains powerful in many countries.
In Eastern Europe, Russia, and developing countries, many utility poles still carry bare communication wires mounted on insulators, not only along railway lines but also along roads and sometimes even in urban areas. Because errant traffic is uncommon on railways, railway poles are usually shorter. In the United States, electricity is predominantly carried on unshielded aluminum conductors wound around a solid steel core and affixed to rated insulators made from glass, ceramic, or polymer. Telephone, CATV, and FOCs are generally attached directly to the pole without insulators.
In the United Kingdom, much of the rural electricity distribution system is carried on wooden poles. These normally carry electricity at 11 or 33 kV (three phases) from 132 kV substations supplied from pylons to distribution substations or pole-mounted transformers. Wooden poles have also been used for 132 kV lines since the early 1980s; one such design is called the trident. These are usually used on short sections, though the line from Melbourne, Cambridgeshire to near Buntingford, Hertfordshire is quite long. The conductors on these are bare metal connected to the poles by insulators. Wood poles can also be used for low voltage distribution to customers.
Today, utility poles may hold much more than the uninsulated copper wire that they originally supported. Thicker cables containing many twisted pairs, coaxial cable, or even optical fibre may be carried. Simple analogue repeaters or other outside plant equipment have long been mounted on poles, and new digital equipment for multiplexing/demultiplexing or digital repeaters may now be seen. In many places, providers of electricity, television, telephone, street light, traffic signal and other services share poles, either in joint ownership or by renting space to each other. In the United States, ANSI standard O5.1-2008 governs wood pole sizes and strength loading. Utilities that fall under the Rural Electrification Act must also follow the guidelines set forth in RUS Bulletin 1724E-150 (from the US Department of Agriculture) for pole strength and loading.
Steel utility poles are becoming more prevalent in the United States thanks to improvements in engineering and corrosion prevention coupled with lowered production costs. However, premature failure due to corrosion is a concern when compared to wood. The National Association of Corrosion Engineers (NACE) is developing inspection, maintenance, and prevention procedures, similar to those used on wood utility poles, to identify and prevent corrosion-related decay.
== Markings ==
=== Pole brandings ===
British Telecom posts are usually marked with the following information:
'BT' – to mark it as a British Telecom UK Pole (This can also be PO (Post Office) or GPO (General Post Office) depending on the age of the pole)
a horizontal line marking 3 metres from the bottom of the pole
the pole length, typically 8 to 10 metres, and size: '9L' denotes a 9-metre light pole; other letters used are 'M' (medium) and 'S' (stout)
the year of treatment, and therefore generally the year of installation (for example, a pole might be marked as treated in 2003)
the batch and type of wood used
A date of the last official inspection
An alphanumeric designation e.g. DP 242 where DP is an initialism of Distribution Point
If relevant, a red D plate meaning 'Dangerous', indicating that the pole is structurally unsafe to climb or is too close to other hazards
The date on the pole is applied by the manufacturer and refers to the date the pole was "preserved" (treated to withstand the elements).
In the United States, utility poles are marked with information concerning the manufacturer, pole height, ANSI strength class, wood species, original preservative, and year manufactured (vintage) in accordance with ANSI standard O5.1-2008. This is called branding, as it is usually burned into the surface; the resulting mark is sometimes called the "birth mark". Although the position of the brand is determined by ANSI specification, it is essentially just below "eye level" after installation. A rule of thumb for understanding a pole's brand is the manufacturer's name or logo at the top with a two-digit date beneath (sometimes preceded by a month).
Below the date is a two-character wood species abbreviation and one- to three-character preservative. Some wood species may be marked "SP" for southern pine, "WC" for western cedar, or "DF" for Douglas fir. Common preservative abbreviations are "C" for creosote, "P" for pentachlorophenol, and "SK" for chromated copper arsenate (originally referred to salts type K). The next line of the brand is usually the pole's ANSI class, used to determine maximum load; this number ranges from 10 to H6 with a smaller number meaning higher strength. The pole's height (from butt to top) in 5-foot increments is usually to the right of the class separated by a hyphen, although it is not uncommon for older brands to have the height on a separate line. The pole brand is sometimes an aluminum tag nailed in place.
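Putting the layout described above together, a hypothetical decoder for a brand rendered as text lines might look as follows. The four-line layout, the lookup tables, and the example brand are illustrative assumptions, not an official ANSI O5.1 parsing scheme:

```python
# Hypothetical sketch of decoding a US pole brand as described above.
# The line layout and lookup tables are illustrative assumptions.

SPECIES = {"SP": "southern pine", "WC": "western cedar", "DF": "Douglas fir"}
PRESERVATIVE = {"C": "creosote", "P": "pentachlorophenol", "SK": "chromated copper arsenate"}

def decode_brand(lines: list[str]) -> dict:
    """Decode a brand given as text lines: maker, date, species+preservative, class-height."""
    maker = lines[0]
    year = lines[1]                     # two-digit year, possibly preceded by a month
    species, preservative = lines[2].split()
    pole_class, height_ft = lines[3].split("-")
    return {
        "manufacturer": maker,
        "year": year,
        "species": SPECIES.get(species, species),
        "preservative": PRESERVATIVE.get(preservative, preservative),
        "ansi_class": pole_class,       # 10 (weakest) ... 1, then H1-H6 (strongest)
        "height_ft": int(height_ft),    # butt-to-top length, in 5 ft increments
    }

# Example brand: hypothetical maker, year 03, southern pine with creosote, class 4, 40 ft pole
print(decode_brand(["ACME", "03", "SP C", "4-40"]))
```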
Before the practice of branding, many utilities would set a 2- to 4-digit date nail into the pole upon installation. The use of date nails went out of favor during World War II due to war shortages but is still used by a few utilities. These nails are considered valuable to collectors, with older dates being more valuable, and unique markings such as the utilities' name also increasing the value. However, regardless of the value to collectors, all attachments on a utility pole are the property of the utility company, and unauthorized removal is a misdemeanor or felony. (California state law cited as example)
=== Coordinates on pole tags ===
A practice in some areas is to place poles on coordinates on a grid. For example, a Delmarva Power pole in a rural area of the state of Maryland in the United States carries four such tags. The lower two tags are the "X" and "Y" coordinates on that grid. Just as in a coordinate plane used in geometry, X increases as one travels east and Y increases as one travels north. The upper two tags are specific to the subtransmission section of the pole; the first refers to the route number, the second to the specific pole along the route.
However, not all power lines follow the road. In the British region of East Anglia, EDF Energy Networks often adds the Ordnance Survey grid reference coordinates of the pole or substation to the name sign.
In some areas, utility pole name plates may provide valuable coordinate information: a poor man's GPS.
== Pole route ==
A pole route (or pole line in the US) is a telephone link or electrical power line between two or more locations built from multiple uninsulated wires suspended between wooden utility poles. This method of linking is common especially in rural areas, where burying the cables would be expensive. Another situation in which pole routes were extensively used was on the railways, to link signal boxes. Traditionally, prior to around 1965, pole routes were built with open wires along non-electrically operated railways; this necessitated insulation where the wire passed over the pole, thus preventing the signal from becoming attenuated.
On electrically operated railways, open-wire pole routes were usually not built, as too much interference from the overhead wire would occur. On pole routes, wires were separated using spars with insulators spaced along them; in general, four insulators were used per spar. Only one such pole route still exists on the UK rail network, in the highlands of Scotland. There was also a long section in place between Wymondham, Norfolk and Brandon in Suffolk, United Kingdom; however, this was de-wired and removed during March 2009.
== Environmental impact ==
Utility poles are used by birds for nesting and resting. Utility poles and related structures are regarded by some as a form of visual pollution. Many lines are placed underground for this reason, in places where high population density or scenic beauty justifies the expense. Architects design some pylons to be attractive, reducing visual pollution.
Some chemicals used to preserve wood poles including creosote and pentachlorophenol are toxic and have been found in the environment.
The considerable improvement in weathering resistance offered by creosote infusion has long-term drawbacks. In recent years, concerns have been raised about the toxicity of creosote-treated wood waste, such as utility poles. Specifically, their biodegradation can release phenolic compounds in soil, which are considered toxic. Research continues to explore methods to render this waste safe for disposal.
Historically, pole-mounted transformers were filled with a polychlorinated biphenyl (PCB) liquid. PCBs persist in the environment and have adverse effects on animals.
== External links ==
The Telegraph Pole Appreciation Society
Article on Utility-Telecom Joint Use of Poles
Many photographs Archived 2016-08-07 at the Wayback Machine
Photographs of Various U.S. Utility Poles
Hungarian Telephone Poles
Utility Poles @ Ann's Garden
Wood Utility Pole Inspection Methods Archived 2016-03-04 at the Wayback Machine
A photo collection of pole routes throughout the UK and abroad
American Wood Protection Association
USDA Rural Development's Electric Programs Archived 2012-10-16 at the Wayback Machine
GR-60, Generic Requirements for Wooden Utility Poles
A submarine communications cable is a cable laid on the seabed between land-based stations to carry telecommunication signals across stretches of ocean and sea. The first submarine communications cables were laid beginning in the 1850s and carried telegraphy traffic, establishing the first instant telecommunications links between continents, such as the first transatlantic telegraph cable which became operational on 16 August 1858.
Submarine cables first connected all the world's continents (except Antarctica) when Java was connected to Darwin, Northern Territory, Australia, in 1871 in anticipation of the completion of the Australian Overland Telegraph Line in 1872 connecting to Adelaide, South Australia and thence to the rest of Australia.
Subsequent generations of cables carried telephone traffic, then data communications traffic. These early cables used copper wires in their cores, but modern cables use optical fiber technology to carry digital data, which includes telephone, Internet and private data traffic. Modern cables are typically about 25 mm (1 in) in diameter and weigh around 1.4 tonnes per kilometre (2.5 short tons per mile; 2.2 long tons per mile) for the deep-sea sections which comprise the majority of the run, although larger and heavier cables are used for shallow-water sections near shore.
== Early history: telegraph and coaxial cables ==
=== First successful trials ===
After William Cooke and Charles Wheatstone had introduced their working telegraph in 1839, the idea of a submarine line across the Atlantic Ocean began to be thought of as a possible triumph of the future. Samuel Morse proclaimed his faith in it as early as 1840, and in 1842, he submerged a wire, insulated with tarred hemp and India rubber, in the water of New York Harbor, and telegraphed through it. The following autumn, Wheatstone performed a similar experiment in Swansea Bay. A good insulator to cover the wire and prevent the electric current from leaking into the water was necessary for the success of a long submarine line. India rubber had been tried by Moritz von Jacobi, the Prussian electrical engineer, as far back as the early 19th century.
Another insulating gum which could be melted by heat and readily applied to wire made its appearance in 1842. Gutta-percha, the adhesive juice of the Palaquium gutta tree, was introduced to Europe by William Montgomerie, a Scottish surgeon in the service of the British East India Company.: 26–27 Twenty years earlier, Montgomerie had seen whips made of gutta-percha in Singapore, and he believed that it would be useful in the fabrication of surgical apparatus. Michael Faraday and Wheatstone soon discovered the merits of gutta-percha as an insulator, and in 1845, the latter suggested that it should be employed to cover the wire which was proposed to be laid from Dover to Calais. In 1847 William Siemens, then an officer in the army of Prussia, laid the first successful underwater cable using gutta percha insulation, across the Rhine between Deutz and Cologne. In 1849, Charles Vincent Walker, electrician to the South Eastern Railway, submerged 3 km (2 mi) of wire coated with gutta-percha off the coast from Folkestone, which was tested successfully.: 26–27
=== First commercial cables ===
In August 1850, having earlier obtained a concession from the French government, John Watkins Brett's English Channel Submarine Telegraph Company laid the first line across the English Channel, using the converted tugboat Goliath. It was simply a copper wire coated with gutta-percha, without any other protection, and was not successful.: 192–193 However, the experiment served to secure renewal of the concession, and in September 1851, a protected core, or true, cable was laid by the reconstituted Submarine Telegraph Company from a government hulk, Blazer, which was towed across the Channel.: 192–193
In 1853, more successful cables were laid, linking Great Britain with Ireland, Belgium, and the Netherlands, and crossing The Belts in Denmark.: 361 The British & Irish Magnetic Telegraph Company completed the first successful Irish link on May 23 between Portpatrick and Donaghadee using the collier William Hutt.: 34–36 The same ship was used for the link from Dover to Ostend in Belgium, by the Submarine Telegraph Company.: 192–193 Meanwhile, the Electric & International Telegraph Company completed two cables across the North Sea, from Orford Ness to Scheveningen, the Netherlands. These cables were laid by Monarch, a paddle steamer which later became the first vessel with permanent cable-laying equipment.: 195
In 1858, the steamship Elba was used to lay a telegraph cable from Jersey to Guernsey, on to Alderney and then to Weymouth, the cable being completed successfully in September of that year. Problems soon developed with eleven breaks occurring by 1860 due to storms, tidal and sand movements, and wear on rocks. A report to the Institution of Civil Engineers in 1860 set out the problems to assist in future cable-laying operations.
=== Crimean War (1853–1856) ===
The Crimean War was the first conflict in which various forms of telegraphy played a major role. At the start of the campaign there was a telegraph link from Bucharest to London. In the winter of 1854 the French extended the telegraph link to the Black Sea coast. In April 1855 the British laid an underwater cable from Varna to the Crimean peninsula, so that news of the war could reach London in a matter of hours.
=== Transatlantic telegraph cable ===
The first attempt at laying a transatlantic telegraph cable was promoted by Cyrus West Field, who persuaded British industrialists to fund and lay one in 1858. However, the technology of the day was not capable of supporting the project; it was plagued with problems from the outset, and was in operation for only a month. Subsequent attempts in 1865 and 1866 with the world's largest steamship, the SS Great Eastern, used a more advanced technology and produced the first successful transatlantic cable. Great Eastern later went on to lay the first cable reaching to India from Aden, Yemen, in 1870.
=== British dominance of early cable ===
From the 1850s until 1911, British submarine cable systems dominated the most important market, the North Atlantic Ocean. The British had both supply side and demand side advantages. In terms of supply, Britain had entrepreneurs willing to put forth enormous amounts of capital necessary to build, lay and maintain these cables. In terms of demand, Britain's vast colonial empire led to business for the cable companies from news agencies, trading and shipping companies, and the British government. Many of Britain's colonies had significant populations of European settlers, making news about them of interest to the general public in the home country.
British officials believed that depending on telegraph lines that passed through non-British territory posed a security risk, as lines could be cut and messages could be interrupted during wartime. They sought the creation of a worldwide network within the empire, which became known as the All Red Line, and conversely prepared strategies to quickly interrupt enemy communications. Britain's very first action after declaring war on Germany in World War I was to have the cable ship Alert (not the CS Telconia as frequently reported) cut the five cables linking Germany with France, Spain and the Azores, and through them, North America. Thereafter, the only way Germany could communicate was by wireless, and that meant that Room 40 could listen in.
The submarine cables were an economic benefit to trading companies, because owners of ships could communicate with captains when they reached their destination and give directions as to where to go next to pick up cargo based on reported pricing and supply information. The British government had obvious uses for the cables in maintaining administrative communications with governors throughout its empire, as well as in engaging other nations diplomatically and communicating with its military units in wartime. The geographic location of British territory was also an advantage as it included both Ireland on the east side of the Atlantic Ocean and Newfoundland in North America on the west side, making for the shortest route across the ocean, which reduced costs significantly.
A few facts put this dominance of the industry in perspective. In 1896, there were 30 cable-laying ships in the world, 24 of which were owned by British companies. In 1892, British companies owned and operated two-thirds of the world's cables and by 1923, their share was still 42.7 percent. During World War I, Britain's telegraph communications were almost completely uninterrupted, while it was able to quickly cut Germany's cables worldwide.
=== Cable to India, Singapore, East Asia and Australia ===
Throughout the 1860s and 1870s, British cable expanded eastward, into the Mediterranean Sea and the Indian Ocean. An 1863 cable to Bombay (now Mumbai), India, provided a crucial link to Saudi Arabia. In 1870, Bombay was linked to London via submarine cable in a combined operation by four cable companies, at the behest of the British Government. In 1872, these four companies were combined to form the mammoth globe-spanning Eastern Telegraph Company, owned by John Pender. A spin-off from Eastern Telegraph Company was a second sister company, the Eastern Extension, China and Australasia Telegraph Company, commonly known simply as "the Extension." In 1872, Australia was linked by cable to Bombay via Singapore and China and in 1876, the cable linked the British Empire from London to New Zealand.
=== Submarine cables across the Pacific, 1902–1991 ===
The first trans-Pacific cables providing telegraph service were completed in 1902 and 1903, linking the US mainland to Hawaii in 1902 and Guam to the Philippines in 1903. Canada, Australia, New Zealand and Fiji were also linked in 1902 with the trans-Pacific segment of the All Red Line. Japan was connected into the system in 1906. Service beyond Midway Atoll was abandoned in 1941 due to World War II, but the remainder stayed in operation until 1951 when the FCC gave permission to cease operations.
The first trans-Pacific telephone cable was laid from Hawaii to Japan in 1964, with an extension from Guam to the Philippines. Also in 1964, the Commonwealth Pacific Cable System (COMPAC), with 80 telephone channel capacity, opened for traffic from Sydney to Vancouver, and in 1967, the South East Asia Commonwealth (SEACOM) system, with 160 telephone channel capacity, opened for traffic. This system used microwave radio from Sydney to Cairns (Queensland), cable running from Cairns to Madang (Papua New Guinea), Guam, Hong Kong, Kota Kinabalu (capital of Sabah, Malaysia), Singapore, then overland by microwave radio to Kuala Lumpur. In 1991, the North Pacific Cable system was the first regenerative system (i.e., with repeaters) to completely cross the Pacific from the US mainland to Japan. The US portion of NPC was manufactured in Portland, Oregon, from 1989 to 1991 at STC Submarine Systems, and later Alcatel Submarine Networks. The system was laid by Cable & Wireless Marine on the CS Cable Venture.
=== Construction, 19–20th century ===
Transatlantic cables of the 19th century consisted of an outer layer of iron and later steel wire, wrapping India rubber, wrapping gutta-percha, which surrounded a multi-stranded copper wire at the core. The portions closest to each shore landing had additional protective armour wires. Gutta-percha, a natural polymer similar to rubber, had nearly ideal properties for insulating submarine cables, with the exception of a rather high dielectric constant which made cable capacitance high. William Thomas Henley had developed a machine in 1837 for covering wires with silk or cotton thread that he developed into a wire wrapping capability for submarine cable with a factory in 1857 that became W.T. Henley's Telegraph Works Co., Ltd. The India Rubber, Gutta Percha and Telegraph Works Company, established by the Silver family and giving that name to a section of London, furnished cores to Henley's as well as eventually making and laying finished cable. In 1870 William Hooper established Hooper's Telegraph Works to manufacture his patented vulcanized rubber core, at first to furnish other makers of finished cable, that began to compete with the gutta-percha cores. The company later expanded into complete cable manufacture and cable laying, including the building of the first cable ship specifically designed to lay transatlantic cables.
Gutta-percha and rubber were not replaced as a cable insulation until polyethylene was introduced in the 1930s. Even then, the material was only available to the military and the first submarine cable using it was not laid until 1945 during World War II across the English Channel. In the 1920s, the American military experimented with rubber-insulated cables as an alternative to gutta-percha, since American interests controlled significant supplies of rubber but did not have easy access to gutta-percha manufacturers. The 1926 development by John T. Blake of deproteinized rubber improved the impermeability of cables to water.
Many early cables suffered from attack by sea life. The insulation could be eaten, for instance, by species of Teredo (shipworm) and Xylophaga. Hemp laid between the steel wire armouring gave pests a route to eat their way in. Damaged armouring, which was not uncommon, also provided an entrance. Cases of sharks biting cables and attacks by sawfish have been recorded. In one case in 1873, a whale damaged the Persian Gulf Cable between Karachi and Gwadar. The whale was apparently attempting to use the cable to clean off barnacles at a point where the cable descended over a steep drop. The unfortunate whale got its tail entangled in loops of cable and drowned. The cable repair ship Amber Witch was only able to winch up the cable with difficulty, weighed down as it was with the dead whale's body.
=== Bandwidth problems ===
Early long-distance submarine telegraph cables exhibited formidable electrical problems. Unlike modern cables, the technology of the 19th century did not allow for in-line repeater amplifiers in the cable. Large voltages were used to attempt to overcome the electrical resistance of their tremendous length, but the cables' distributed capacitance and inductance combined to distort the telegraph pulses in the line, reducing the cable's bandwidth and severely limiting the data rate for telegraph operation to 10–12 words per minute.
As early as 1816, Francis Ronalds had observed that electric signals were slowed in passing through an insulated wire or core laid underground, and identified the cause as induction, using the analogy of a long Leyden jar. The same effect was noticed by Latimer Clark (1853) on cores immersed in water, and particularly on the lengthy cable between England and The Hague. Michael Faraday showed that the effect was caused by capacitance between the wire and the earth (or water) surrounding it. Faraday had noticed that when a wire is charged from a battery (for example when pressing a telegraph key), the electric charge in the wire induces an opposite charge in the water as it travels along. In 1831, Faraday described this effect in what is now referred to as Faraday's law of induction. As the two charges attract each other, the exciting charge is retarded. The core acts as a capacitor distributed along the length of the cable which, coupled with the resistance and inductance of the cable, limits the speed at which a signal travels through the conductor of the cable.
Early cable designs failed to analyse these effects correctly. Famously, E.O.W. Whitehouse had dismissed the problems and insisted that a transatlantic cable was feasible. When he subsequently became chief electrician of the Atlantic Telegraph Company, he became involved in a public dispute with William Thomson. Whitehouse believed that, with enough voltage, any cable could be driven. Thomson believed that his law of squares showed that retardation could not be overcome by a higher voltage. His recommendation was a larger cable. Because of the excessive voltages recommended by Whitehouse, Cyrus West Field's first transatlantic cable never worked reliably, and eventually short circuited to the ocean when Whitehouse increased the voltage beyond the cable design limit.
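Thomson's law of squares can be stated compactly. The following is a minimal sketch assuming the purely resistive-capacitive (diffusion) model Thomson worked with, in standard per-unit-length notation not taken from this article:

```latex
% Thomson's law of squares under a purely resistive-capacitive cable model:
% the characteristic retardation \tau of a pulse grows as the square of the
% cable length \ell, with r and c the series resistance and shunt capacitance
% per unit length.
\tau \;\propto\; r\,c\,\ell^{2}
% Doubling the length quadruples the retardation, which is why Thomson
% recommended a larger (lower-resistance) cable rather than a higher voltage.
```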
Thomson designed a complex electric-field generator that minimized current by resonating the cable, and a sensitive light-beam mirror galvanometer for detecting the faint telegraph signals. Thomson became wealthy on the royalties of these, and several related inventions. Thomson was elevated to Lord Kelvin for his contributions in this area, chiefly an accurate mathematical model of the cable, which permitted design of the equipment for accurate telegraphy. The effects of atmospheric electricity and the geomagnetic field on submarine cables also motivated many of the early polar expeditions.
Thomson had produced a mathematical analysis of propagation of electrical signals into telegraph cables based on their capacitance and resistance, but since long submarine cables operated at slow rates, he did not include the effects of inductance. By the 1890s, Oliver Heaviside had produced the modern general form of the telegrapher's equations, which included the effects of inductance and which were essential to extending the theory of transmission lines to the higher frequencies required for high-speed data and voice.
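For reference, the modern general form of the telegrapher's equations reads as follows (standard notation; a sketch of the textbook form rather than Heaviside's original presentation):

```latex
% Telegrapher's equations for voltage V(x,t) and current I(x,t) along a line,
% with per-unit-length series resistance R, series inductance L,
% shunt conductance G, and shunt capacitance C.
\frac{\partial V}{\partial x} = -L\,\frac{\partial I}{\partial t} - R\,I,
\qquad
\frac{\partial I}{\partial x} = -C\,\frac{\partial V}{\partial t} - G\,V
% Setting L = G = 0 recovers the diffusion-like model Thomson analysed,
% in which inductance is neglected.
```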
=== Transatlantic telephony ===
While laying a transatlantic telephone cable was seriously considered from the 1920s, the technology required for economically feasible telecommunications was not developed until the 1940s. A first attempt to lay a "pupinized" telephone cable—one with loading coils added at regular intervals—failed in the early 1930s due to the Great Depression.
TAT-1 (Transatlantic No. 1) was the first transatlantic telephone cable system. Between 1955 and 1956, cable was laid between Gallanach Bay, near Oban, Scotland and Clarenville, Newfoundland and Labrador, in Canada. It was inaugurated on September 25, 1956, initially carrying 36 telephone channels.
In the 1960s, transoceanic cables were coaxial cables that transmitted frequency-multiplexed voiceband signals. A high-voltage direct current on the inner conductor powered repeaters (two-way amplifiers placed at intervals along the cable). The first-generation repeaters remain among the most reliable vacuum tube amplifiers ever designed. Later ones were transistorized. Many of these cables are still usable, but have been abandoned because their capacity is too small to be commercially viable. Some have been used as scientific instruments to measure earthquake waves and other geomagnetic events.
=== Other uses ===
In 1942, Siemens Brothers of New Charlton, London, in conjunction with the United Kingdom National Physical Laboratory, adapted submarine communications cable technology to create the world's first submarine oil pipeline in Operation Pluto during World War II.
Active fiber-optic cables may be useful in detecting seismic events, which alter the polarization of light in the cable.
== Modern history ==
=== Optical telecommunications cables ===
In the 1980s, fiber-optic cables were developed. The first transatlantic telephone cable to use optical fiber was TAT-8, which went into operation in 1988. A fiber-optic cable comprises multiple pairs of fibers. Each pair has one fiber in each direction. TAT-8 had two operational pairs and one backup pair. Except for very short lines, fiber-optic submarine cables include repeaters at regular intervals.
Modern optical fiber repeaters use a solid-state optical amplifier, usually an erbium-doped fiber amplifier (EDFA). Each repeater contains separate equipment for each fiber. These comprise signal reforming, error measurement and controls. A solid-state laser dispatches the signal into the next length of fiber. The solid-state laser excites a short length of doped fiber that itself acts as a laser amplifier. As the light passes through the fiber, it is amplified. This system also permits wavelength-division multiplexing, which dramatically increases the capacity of the fiber. EDFA amplifiers were first used in submarine cables in 1995.
Repeaters are powered by a constant direct current passed down the conductor near the centre of the cable, so all repeaters in a cable are in series. Power feed equipment (PFE) is installed at the terminal stations. Typically both ends share the current generation, with one end providing a positive voltage and the other a negative voltage. A virtual earth point exists roughly halfway along the cable under normal operation. The amplifiers or repeaters derive their power from the potential difference across them. The voltage passed down the cable is often anywhere from 3,000 to 15,000 VDC at a current of up to 1,100 mA, with the current increasing as the voltage decreases; at 10,000 VDC the current is up to 1,650 mA. Hence the total amount of power sent into the cable is often up to 16.5 kW.
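As a rough arithmetic check on the figures above (an illustrative sketch, not an engineering model), the feed power is simply the product of the total feed voltage and the series line current:

```python
# Illustrative check of the power-feed figures quoted above: feed power
# is the product of total feed voltage and the series line current.

def feed_power_kw(voltage_v: float, current_ma: float) -> float:
    """Power in kW delivered into the cable by the power feed equipment."""
    return voltage_v * (current_ma / 1000.0) / 1000.0

# 10,000 VDC at 1,650 mA, as quoted above:
print(feed_power_kw(10_000, 1_650))  # -> 16.5 (kW)

# With both terminal stations sharing the feed (one positive, one negative),
# each station supplies roughly half the total voltage, and a virtual earth
# sits near the midpoint of the cable under normal operation.
```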
The optical fiber used in undersea cables is chosen for its exceptional clarity, permitting runs of more than 100 kilometres (62 mi) between repeaters to minimize the number of amplifiers and the distortion they cause. Unrepeated cables are cheaper than repeated cables, but their maximum transmission distance is limited. This transmission distance has increased over the years; in 2014 unrepeated cables of up to 380 kilometres (240 mi) in length were in service, though these require unpowered amplifiers (remote optical pre-amplifiers, described below) positioned roughly every 100 km.
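A minimal link-budget sketch suggests why repeater spacing sits near 100 km. The 0.172 dB/km attenuation is the pure-silica-core figure cited later in this section; the 20 dB amplifier gain is an assumed nominal value, and splices, design margins, and nonlinear effects are deliberately ignored:

```python
# Illustrative span link budget, assuming 0.172 dB/km fiber attenuation
# (the PCSF figure cited below) and an assumed nominal repeater gain.
# Real designs also budget for splices, margins, and nonlinearity.

ATTENUATION_DB_PER_KM = 0.172   # assumed PCSF loss at 1550 nm
REPEATER_GAIN_DB = 20.0         # assumed nominal amplifier gain per span

def span_loss_db(length_km: float) -> float:
    """Fiber loss accumulated over one repeater span."""
    return ATTENUATION_DB_PER_KM * length_km

def max_span_km(gain_db: float) -> float:
    """Longest span whose loss the repeater gain can restore."""
    return gain_db / ATTENUATION_DB_PER_KM

print(round(span_loss_db(100), 1))              # -> 17.2 dB over a 100 km span
print(round(max_span_km(REPEATER_GAIN_DB), 1))  # -> ~116.3 km
```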
The rising demand for these fiber-optic cables outpaced the capacity of providers such as AT&T. Having to shift traffic to satellites resulted in lower-quality signals. To address this issue, AT&T had to improve its cable-laying abilities. It invested $100 million in producing two specialized fiber-optic cable laying vessels. These included laboratories in the ships for splicing cable and testing its electrical properties. Such field monitoring is important because the glass of fiber-optic cable is less malleable than the copper cable that had been formerly used. The ships are equipped with thrusters that increase maneuverability. This capability is important because fiber-optic cable must be laid straight from the stern, which was another factor that copper-cable-laying ships did not have to contend with.
Originally, submarine cables were simple point-to-point connections. With the development of submarine branching units (SBUs), more than one destination could be served by a single cable system. Modern cable systems now usually have their fibers arranged in a self-healing ring to increase their redundancy, with the submarine sections following different paths on the ocean floor. One reason for this development was that the capacity of cable systems had become so large that it was not possible to completely back up a cable system with satellite capacity, so it became necessary to provide sufficient terrestrial backup capability. Not all telecommunications organizations wish to take advantage of this capability, so modern cable systems may have dual landing points in some countries (where back-up capability is required) and only single landing points in other countries where back-up capability is either not required, the capacity to the country is small enough to be backed up by other means, or having backup is regarded as too expensive.
A further redundant-path development over and above the self-healing rings approach is the mesh network whereby fast switching equipment is used to transfer services between network paths with little to no effect on higher-level protocols if a path becomes inoperable. As more paths become available to use between two points, it is less likely that one or two simultaneous failures will prevent end-to-end service.
As of 2012, operators had "successfully demonstrated long-term, error-free transmission at 100 Gbps across Atlantic Ocean" routes of up to 6,000 km (3,700 mi), meaning a typical cable can move tens of terabits per second overseas. Speeds had improved rapidly in the preceding few years: 40 Gbit/s had been offered on that route only three years earlier, in August 2009.
Switching and all-by-sea routing commonly increase the distance, and thus the round-trip latency, by more than 50%. For example, the round-trip delay (RTD) or latency of the fastest transatlantic connections is under 60 ms, close to the theoretical optimum for an all-sea route. While a great-circle route between London and New York City is only 5,600 km (3,500 mi), following it requires traversing several land masses (Ireland, Newfoundland, Prince Edward Island and the isthmus connecting New Brunswick to Nova Scotia), as well as the extremely tidal Bay of Fundy and a land route along Massachusetts' north shore from Gloucester to Boston and through fairly built-up areas to Manhattan itself. In theory, using this partial land route could bring round-trip times below 40 ms (approaching the speed-of-light minimum), not counting switching. Along routes with less land in the way, round-trip times can approach speed-of-light minimums in the long term.
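A back-of-the-envelope sketch of these latency figures follows; the 5,600 km distance is quoted above, the speed of light is a physical constant, and the 1.47 group index for silica fiber is an assumed typical value rather than a figure from this article:

```python
# Back-of-the-envelope round-trip latency for a London-New York route.
# Assumptions: 5,600 km great-circle distance (quoted above) and a
# typical silica-fiber group index of ~1.47 (assumed).

C_KM_PER_MS = 299_792.458 / 1000.0  # speed of light, km per millisecond
FIBER_INDEX = 1.47                  # assumed refractive index of silica fiber

def rtt_ms(distance_km: float, index: float = 1.0) -> float:
    """Round-trip time over a path of the given one-way length."""
    return 2 * distance_km * index / C_KM_PER_MS

print(round(rtt_ms(5_600), 1))               # vacuum straight-line floor: ~37.4 ms
print(round(rtt_ms(5_600, FIBER_INDEX), 1))  # in fiber over the same path: ~54.9 ms
# A real all-sea cable is longer than the great circle, which is why the
# fastest transatlantic connections sit just under 60 ms.
```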
The type of optical fiber used in unrepeated and very long cables is often PCSF (pure silica core fiber), due to its low loss of 0.172 dB per kilometer when carrying 1550 nm laser light. The large chromatic dispersion of PCSF means that its use requires transmission and receiving equipment designed with this in mind; this property can also be used to reduce interference when transmitting multiple channels through a single fiber using wavelength division multiplexing (WDM), which allows multiple optical carrier channels to be transmitted through a single fiber, each carrying its own information. WDM is limited by the optical bandwidth of the amplifiers used to transmit data through the cable and by the spacing between the frequencies of the optical carriers, which is often 50 GHz (0.4 nm) at minimum. The use of WDM can reduce the maximum length of the cable, although this can be overcome by designing equipment with it in mind.
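The 50 GHz to 0.4 nm equivalence follows from the relation between frequency and wavelength spacing, a worked conversion assuming the 1550 nm carrier mentioned above:

```latex
% Channel spacing conversion at a 1550 nm carrier: from \lambda = c/f,
% a frequency spacing \Delta f maps to a wavelength spacing \Delta\lambda.
\Delta\lambda \;=\; \frac{\lambda^{2}}{c}\,\Delta f
\;=\; \frac{(1550\times10^{-9}\,\mathrm{m})^{2}}{3\times10^{8}\,\mathrm{m/s}}
\times 50\times10^{9}\,\mathrm{Hz}
\;\approx\; 4\times10^{-10}\,\mathrm{m} \;=\; 0.4\,\mathrm{nm}
```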
Optical post-amplifiers, used to increase the strength of the signal generated by the optical transmitter, often use a diode-pumped erbium-doped fiber laser. The pump diode is often a high power 980 or 1480 nm laser diode. This setup allows for amplification of up to +24 dBm in an affordable manner. Using an erbium-ytterbium doped fiber instead allows for a gain of +33 dBm; however, the amount of power that can be fed into the fiber is again limited. In single-carrier configurations the dominant limitation is self-phase modulation induced by the Kerr effect, which limits the amplification to +18 dBm per fiber. In WDM configurations the limitation due to cross-phase modulation becomes predominant instead. Optical pre-amplifiers are often used to negate the thermal noise of the receiver. Pumping the pre-amplifier with a 980 nm laser leads to a noise figure of at most 3.5 dB, while a noise figure of 5 dB is usually obtained with a 1480 nm laser. The noise has to be filtered out using optical filters.
Raman amplification can be used to extend the reach or the capacity of an unrepeatered cable by launching two frequencies into a single fiber: one carrying data signals at 1550 nm, and the other pumping them at 1450 nm. Launching the pump light at a power of just one watt leads to an increase in reach of 45 km, or a 6-fold increase in capacity.
Another way to increase the reach of a cable is to use unpowered repeaters called remote optical pre-amplifiers (ROPAs). These still allow a cable to be counted as unrepeatered, since the repeaters require no electrical power, but they do require pump laser light to be transmitted alongside the data carried by the cable; the pump light and the data are often carried in physically separate fibers. The ROPA contains a doped fiber that uses the pump light (often 1480 nm laser light) to amplify the data signals carried on the rest of the fibers.
WDM, or wavelength-division multiplexing, was first implemented in submarine fiber-optic cables from the 1990s to the 2000s, followed by DWDM, or dense wavelength-division multiplexing, around 2007. Each fiber can carry 30 wavelengths at a time. SDM, or spatial-division multiplexing, submarine cables have at least 12 fiber pairs, an increase from the maximum of 8 pairs found in conventional submarine cables, and submarine cables with up to 24 fiber pairs have been deployed. The type of modulation employed in a submarine cable can have a major impact on its capacity. SDM is combined with DWDM to improve capacity.
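The headline capacity of an SDM cable is essentially the product of three factors: fiber pairs, wavelengths per fiber, and the line rate per wavelength. A sketch using the pair and wavelength counts above, with an assumed 100 Gbit/s per carrier (a common but here hypothetical rate):

```python
# Rough aggregate-capacity estimate for an SDM + DWDM cable.
fiber_pairs = 24            # upper end of deployed pair counts, from the text
wavelengths_per_fiber = 30  # from the text
gbit_per_wavelength = 100   # assumed per-carrier line rate

total_tbit_s = fiber_pairs * wavelengths_per_fiber * gbit_per_wavelength / 1000
print(f"aggregate capacity: {total_tbit_s:.0f} Tbit/s")  # 72 Tbit/s
```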
Transponders are used to send data through the cable. The open cable concept allows a submarine cable to be designed independently of the transponders that will be used to transmit data through it. SLTE (submarine line terminal equipment) comprises the transponders and a ROADM (reconfigurable optical add-drop multiplexer) used to handle the signals in the cable under software control; the ROADM improves the reliability of the cable by allowing it to keep operating even when it has faults. This equipment is located inside a cable landing station (CLS). C-OTDR (coherent optical time-domain reflectometry) is used in submarine cables to detect the location of cable faults. The wet plant of a submarine cable comprises the cable itself, branching units, repeaters and possibly OADMs (optical add-drop multiplexers). The SLTE is usually installed in a data center, and it may be possible to purchase capacity in a cable to connect to other points along it or to the internet, for example at the NAP of the Americas, which connects many Latin American ISPs with networks in the US.
=== Investment and finances ===
A typical multi-terabit, transoceanic submarine cable system costs several hundred million dollars to construct. Almost all fiber-optic cables from TAT-8 in 1988 until approximately 1997 were constructed by consortia of operators. For example, TAT-8 counted 35 participants, including most major international carriers at the time such as AT&T Corporation. Two privately financed, non-consortium cables were constructed in the late 1990s, which preceded a massive, speculative rush to construct privately financed cables that peaked at more than $22 billion of investment between 1999 and 2001. This was followed by the bankruptcy and reorganization of cable operators such as Global Crossing, 360networks, FLAG, Worldcom, and Asia Global Crossing. Tata Communications' Global Network (TGN) is the only wholly owned fiber network circling the planet.
Some governments have invested in cables. For example, the Tonga-Fiji Submarine Cable System is owned and operated by Tonga Cable Limited, which developed and manages the cable with financing support from the Asian Development Bank and World Bank. Tonga Cable Limited is a public enterprise 80% owned by the government. In China, three state-owned companies—China Mobile, China Telecom, and China Unicom—have invested in undersea cables. In the United States, the U.S. Navy owns over 40,000 nautical miles of various subsea cables.
Most cables in the 20th century crossed the Atlantic Ocean, to connect the United States and Europe. However, capacity in the Pacific Ocean was much expanded starting in the 1990s. For example, between 1998 and 2003, approximately 70% of undersea fiber-optic cable was laid in the Pacific. This is in part a response to the emerging significance of Asian markets in the global economy.
After decades of heavy investment in already developed markets such as the transatlantic and transpacific routes, efforts increased in the 21st century to expand the submarine cable network to serve the developing world. For instance, in July 2009 an underwater fiber-optic cable connected East Africa to the broader Internet. The company that provided this new cable was SEACOM, which is 75% owned by East African and South African investors. The project was delayed by a month due to increased piracy along the coast.
Investments in cables present a commercial risk because a cable can cover 6,200 km of ocean floor, crossing submarine mountain ranges and rifts. Because of this, most companies only purchase capacity after the cable is finished.
=== Antarctica ===
Antarctica is the only continent not yet reached by a submarine telecommunications cable. Phone, video, and e-mail traffic must be relayed to the rest of the world via satellite links that have limited availability and capacity. Bases on the continent itself are able to communicate with one another via radio, but this is only a local network. To be a viable alternative, a fiber-optic cable would have to be able to withstand temperatures of −80 °C (−112 °F) as well as massive strain from ice flowing up to 10 metres (33 ft) per year. Thus, connecting to the larger Internet backbone with the high bandwidth afforded by fiber-optic cable remains an as-yet infeasible economic and technical challenge in the Antarctic.
=== Arctic ===
The climate-change-induced melting of Arctic ice has provided the opportunity to lay new cable networks linking continents and remote regions. Several projects are underway in the Arctic, including the 12,650 km "Polar Express" and the 14,500 km Far North Fiber. However, scholars have raised environmental concerns about the laying of submarine cables in the region and the general lack of a nuanced regulatory framework. Environmental concerns pertain both to ice-related hazards damaging the cables, and to cable installation disturbing the seabed or the cables' electromagnetic fields and thermal radiation impacting sensitive organisms.
== Importance of submarine cables ==
Submarine cables, while often perceived as 'insignificant' parts of communication infrastructure because they lie hidden in the seabed, are an essential infrastructure of the digital era, carrying 99% of the data traffic across the oceans. This data includes all internet traffic, military transmissions, and financial transactions.
The total carrying capacity of a submarine cable is in the terabits per second, while a satellite typically offers only 1 gigabit per second, a ratio of more than 1,000 to 1. Satellites handle less than 5% – by some estimates as little as 0.5% – of global data transmission, and are less efficient, slower, and more expensive. Satellites are therefore usually considered only for remote areas where conditions make laying submarine cables challenging. Submarine cables are thus the essential technical infrastructure for all internet communication.
=== National security ===
As a result of these cables' cost and usefulness, they are highly valued not only by the corporations building and operating them for profit, but also by national governments. For instance, the Australian government considers its submarine cable systems to be "vital to the national economy". Accordingly, the Australian Communications and Media Authority (ACMA) has created protection zones that restrict activities that could potentially damage cables linking Australia to the rest of the world. The ACMA also regulates all projects to install new submarine cables.
Due to their critical role, disruptions to these cables can lead to communication blackouts and, thus, extensive economic losses. The impact of such disruptions is often exemplified by the 2022 Tonga volcanic eruption, which severed the island's only submarine cable and thus its connectivity to the rest of the world. The cable break was declared a "national crisis", and repairs took several weeks, leaving Tonga largely isolated during a crucial period for disaster response.
Submarine cable infrastructure may even have additional technical advantages, such as carrying SMART environmental sensors supporting national disaster early-warning systems. Furthermore, the cables are predicted to become even more critical as 5G networks, the 'Internet of things' (IoT), and artificial intelligence place growing demands on large data transfers.
=== International security ===
Submarine communication cables are a critical infrastructure within the context of international security. Transmitting massive amounts of sensitive data every day, they are essential for both state operations and private enterprises. One of the catalysts for the amount and sensitivity of data flowing through these cables has been the global rise of cloud computing.
The U.S. military, for example, uses the submarine cable network to transfer data from conflict zones to command staff in the United States (U.S.). Interruption of the cable network during intense operations could have direct consequences for the military on the ground.
The criticality of cable services makes their geopolitical influence profound. Scholars argue that state dominance in cable networks can exert political pressure, or shape global internet governance.
An example of such state dominance in the global cable infrastructure is China's 'Digital Silk Road' strategy, which funds the expansion of Chinese cable networks; the Chinese company HMN Technologies, often criticised for providing networks for other states, holds up to 10% of the global market share. Some critics argue that Chinese investment in critical cable infrastructure – involvement in approximately 25% of global submarine cables, such as the PEACE Cable linking East Africa and Europe – may enable China to reroute data traffic through its own networks and thus apply political pressure. The strategy is countered by the U.S., which supports alternative projects.
== Vulnerabilities of submarine cables ==
Submarine cables are exposed to a variety of potential threats. Many of these threats are accidental, such as by fishing trawlers, ship anchors, earthquakes, turbidity currents, and even shark bites.
A survey of breaks in the Atlantic Ocean and the Caribbean Sea found that, between 1959 and 1996, fewer than 9% were due to natural events. In response to this threat to the communications network, the practice of cable burial has developed. The average incidence of cable faults was 3.7 per 1,000 km (620 mi) per year from 1959 to 1979. That rate was reduced to 0.44 faults per 1,000 km per year after 1985, due to widespread burial of cable starting in 1980.
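These rates translate directly into an expected number of faults per year for a cable of a given length; a minimal sketch using the figures above (the 6,000 km route length is the transatlantic figure quoted earlier):

```python
# Expected annual faults for a transatlantic-length cable, before and
# after widespread burial, using the rates quoted in the text.
def expected_faults(length_km: float, rate_per_1000km_yr: float) -> float:
    return length_km / 1000 * rate_per_1000km_yr

length = 6_000  # km, a typical transatlantic route
print(f"1959-1979 rate: {expected_faults(length, 3.7):.1f} faults/yr")   # 22.2
print(f"post-1985 rate: {expected_faults(length, 0.44):.1f} faults/yr")  # 2.6
```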
Still, cable breaks are by no means a thing of the past, with more than 50 repairs a year in the Atlantic Ocean alone, and significant breaks in 2006, 2008, 2009 and 2011.
Several vulnerabilities of submarine communication cables make them attractive targets for organized crime. The following section explores these vulnerabilities and currently proposed countermeasures to organized crime from different perspectives.
=== Technical perspective ===
==== Technical vulnerabilities ====
The remoteness of these cables in international waters poses significant challenges for continuous monitoring and increases their attractiveness as targets of physical tampering, data theft, and service disruption.
The cables' vulnerability is further compounded by technological advancements, such as the development of unmanned underwater vehicles (UUVs), which enable covert cable damage while avoiding detection. However, even low-tech attacks can impact the cable's security significantly, as demonstrated in 2013, when three divers were arrested for severing the main cable linking Egypt with Europe, drastically lowering Egypt's internet speed.
Even in shallow waters, cables remain exposed to risks, as illustrated in the context of the Korea Strait. Such sea passages are often marked as ‘maritime choke points’ where several nations have conflicting interests, increasing the risk of harm from shipping activities and disputes.
Further, most cable locations are publicly available, making the cables an easy target for criminal acts such as disrupting services or stealing cable materials, which can potentially lead to substantial communication blackouts. Theft of submarine cable has been reported in Vietnam, where more than 11 km of cable went missing in 2007 and, according to media reports, was later found on fishing boats, whose crews had an incentive to sell it.
==== Technical countermeasures ====
Typically, cables are buried where the water is less than 2,000 meters deep, but increasingly they are buried in deeper seabed as a means of protection against high-seas fishing and bottom trawling. This may also be advantageous against physical attacks from organized crime.
Further technical solutions include advanced protective casings and monitoring, for example with UUVs. Such technical solutions, however, can be challenging to implement and are of limited use in the remote areas of the high sea. Other proposed solutions include spatial modelling through protective or safety zones and penalties, increased resources for surveillance, and a more collaborative approach between states and the private sector. However, how to implement and enforce these solutions remains to be determined. The cables' remoteness thus complicates both physical attacks and their protection.
===== Cable repair =====
Shore stations can locate a break in a cable by electrical measurements, such as spread-spectrum time-domain reflectometry (SSTDR), a type of time-domain reflectometry that can be used very quickly in live environments; presently, SSTDR can collect a complete data set in 20 ms. Spread-spectrum signals are sent down the wire, the reflected signal is observed and correlated with a copy of the sent signal, and algorithms are applied to the shape and timing of the signals to locate the break.
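A toy illustration of the correlation step this relies on: a pseudo-random probe sequence is launched, the reflection is a delayed, attenuated copy, and the peak of the cross-correlation gives the round-trip delay, hence the distance to the break. The sample rate, propagation speed, and signal model below are illustrative assumptions, not parameters of any real SSTDR instrument:

```python
import numpy as np

# Toy SSTDR-style break location: correlate a spread-spectrum probe
# with its reflection and convert the lag to a distance.
rng = np.random.default_rng(0)
SAMPLE_RATE_HZ = 100e6    # assumed sampling rate
V_PROP_M_S = 2.0e8        # assumed signal speed in the cable (~2/3 c)

probe = rng.choice([-1.0, 1.0], size=4096)   # PN-like probe sequence
true_break_m = 1500.0
lag = round(2 * true_break_m / V_PROP_M_S * SAMPLE_RATE_HZ)  # round-trip samples

echo = np.zeros(len(probe) + lag)
echo[lag:] = 0.3 * probe                       # attenuated reflection
echo += 0.05 * rng.standard_normal(len(echo))  # measurement noise

corr = np.correlate(echo, probe, mode="full")
best_lag = corr.argmax() - (len(probe) - 1)    # lag of the correlation peak
distance_m = best_lag / SAMPLE_RATE_HZ * V_PROP_M_S / 2
print(f"estimated break at ~{distance_m:.0f} m")  # ~1500 m
```

The spread-spectrum probe matters because its autocorrelation is sharply peaked, so the reflection stands out even against live traffic and noise.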
A cable repair ship will be sent to the location to drop a marker buoy near the break. Several types of grapples are used depending on the situation. If the sea bed in question is sandy, a grapple with rigid prongs is used to plough under the surface and catch the cable. If the cable is on a rocky seabed, the grapple is more flexible, with hooks along its length so that it can adjust to the changing surface. In especially deep water, the cable may not be strong enough to lift as a single unit, so a special grapple that cuts the cable soon after it has been hooked is used and only one length of cable is brought to the surface at a time, whereupon a new section is spliced in. The repaired cable is longer than the original, so the excess is deliberately laid in a "U" shape on the seabed. A submersible can be used to repair cables that lie in shallower waters.
A number of ports near important cable routes became homes to specialized cable repair ships. Halifax, Nova Scotia, was home to a half dozen such vessels for most of the 20th century including long-lived vessels such as the CS Cyrus West Field, CS Minia and CS Mackay-Bennett. The latter two were contracted to recover victims from the sinking of the RMS Titanic. The crews of these vessels developed many new techniques and devices to repair and improve cable laying, such as the "plough".
=== Cybersecurity perspective ===
==== Cyber vulnerabilities ====
Increasingly sophisticated cyber-attacks threaten the data traffic on the cables, with incentives ranging from financial gain to espionage and extortion, by either state or non-state actors. Further, hybrid-warfare tactics can interfere with, or even weaponize, the data transferred by the cables. For example, low-intensity cyber-attacks can be employed for ransomware, data manipulation and theft, opening up a new opportunity for cybercrime and grey-zone tactics in interstate disputes.
The lack of binding international cybersecurity standards may create a gap in dealing with cyber-enabled sabotage that can be exploited by organized crime. However, attributing an incident to a specific actor, or to that actor's motivation, can be challenging, especially in cyberspace.
==== Cyber espionage and Intelligence-gathering ====
The rising sophistication of cyberattacks underscores the vulnerability of submarine cables to cyberespionage, ultimately complicating their security. Techniques like cable tapping, hacking into network management systems, and targeting cable landing stations enable covert data access by intelligence agencies, with Russia, the U.S., and the United Kingdom (U.K.) noted as primary players.
These activities are driven by both strategic and economic motives, with advancements in technology making interception and data manipulation more effective and more difficult to detect. Recent technological advancements increasing the vulnerability include remote access portals and remote network management systems that centralize control over components, enabling attackers to monitor traffic and potentially disrupt data flows.
Intelligence-gathering techniques have been deployed since the late 19th century. Frequently at the beginning of wars, nations have cut the cables of the other sides to redirect the information flow into cables that were being monitored. The most ambitious efforts occurred in World War I, when British and German forces systematically attempted to destroy each other's worldwide communications systems by cutting their cables with surface ships or submarines.
During the Cold War, the United States Navy and National Security Agency (NSA) succeeded in placing wire taps on Soviet underwater communication lines in Operation Ivy Bells.
These historical intelligence-gathering techniques were eventually countered with technological advancements like the widespread use of end-to-end encryption minimizing the threat of wire tapping.
==== Cybersecurity countermeasures ====
Cybersecurity strategies for submarine cables, such as encryption, access controls, and continuous monitoring, primarily focus on preventing unauthorized data access but do not adequately address the physical protection of cables in vulnerable, remote, high-sea areas as stated above.
As a result, while cybersecurity protocols are effective near coastal landing points, their enforcement across vast stretches of the open ocean becomes a challenge. To address these limitations, experts suggest a broader, multi-layered approach that integrates physical security measures with international cooperation and legal frameworks, especially given the jurisdictional ambiguities in international waters.
Multilateral agreements to establish cybersecurity standards specific to submarine cables are highlighted as critical. These agreements can help bridge the jurisdictional ambiguities and often resulting enforcement gaps in international waters, which ultimately hinder effective protection and are frequently exploited by organized crime.
Some scholars advocate for heightened European Union (E.U.) coordination, recommending improvements in surveillance and response capabilities across various agencies, such as the Coast guard and specific telecommunication regulators. Given the central role of private companies in cable ownership, some experts also underscore the need for stronger collaboration between governments and tech firms to pool resources and develop more innovative security measures tailored to this critical infrastructure.
=== Geopolitical perspective ===
==== Geopolitical vulnerabilities ====
Fishing vessels are the leading cause of accidental damage to submarine communication cables. However, academic discussion and recent incidents point to geopolitical tactics influencing cable security more than previously expected. Such tactics exploit the ease with which fishing vessels can blend into regular maritime traffic while carrying out attacks.
The propensity for fishing trawler nets to cause cable faults may well have been exploited during the Cold War. For example, in February 1959, a series of 12 breaks occurred in five American trans-Atlantic communications cables. In response, a U.S. naval vessel, the USS Roy O. Hale, detained and investigated the Soviet trawler Novorosiysk. A review of the ship's log indicated it had been in the region of each of the cables when they broke. Broken sections of cable were also found on the deck of the Novorosiysk. It appeared that the cables had been dragged along by the ship's nets, and then cut once they were pulled up onto the deck to release the nets. The Soviet Union's stance on the investigation was that it was unjustified, but the U.S. cited the Convention for the Protection of Submarine Telegraph Cables of 1884, which Russia had signed (prior to the formation of the Soviet Union), as evidence of a violation of international protocol.
Several media outlets and organizations indicate that Russian fishing vessels, particularly in 2022, passed over a damaged submarine cable up to 20 times, suggesting potential political motives and the possibility of hybrid warfare tactics used from Russia's side. Russian naval activities near submarine cables are often linked to increased hybrid warfare strategies targeting submarine cables, where sabotage is argued to serve as a tool to disrupt communication networks during conflict and destabilise adversaries.
These tactics elevate cable security to a significant geopolitical issue. Criminal actors may further target cables as a means of economic warfare, aiming to destabilize economies or convey political messages. The disruption of submarine communication cables in highly politicised maritime areas thus has a significant political component that is receiving increased attention.
After two cable breaks in the Baltic Sea in November 2024, one between Lithuania and Sweden and the other between Finland and Germany, German Defence Minister Boris Pistorius argued:
"No one believes that these cables were cut accidentally. I also don't want to believe in versions that these were ship anchors that accidentally caused the damage. Therefore, we have to state, without knowing specifically who it came from, that it is a 'hybrid' action. And we also have to assume, without knowing it yet, that it is sabotage."
This statement underlines the current discourse to recognize cable disruptions as threats to national security, which ultimately leads to their securitization in the international context.
==== Geopolitical risks and countermeasures ====
Submarine cables are inherently vulnerable to transnational threats like organized crime. International collaboration to address these threats tends to fall to existing organizations with a cable-specific focus – such as the International Cable Protection Committee (ICPC) – which represent key submarine-cable stakeholders and play a vital role in promoting cooperation and information sharing among them. Such organizations are argued to be crucial to developing and implementing a comprehensive and coordinated global strategy for cable security.
As of 2025, a tense U.S.–China relationship complicates this task, especially in the South China Sea, where there are territorial disputes. China has increasing control and influence over global cable networks, while both it and the US financially support allied-owned cable projects and exert diplomatic pressure and regulatory action, e.g. against Vietnam.
In light of the sabotage of the Nord Stream pipelines in the Baltic Sea, where subsea infrastructure vital to Germany and Russia was physically destroyed, and of other incidents there, NATO has increased patrols and monitoring operations.
=== Legal perspective ===
==== Legal vulnerabilities ====
Submarine cables are internationally regulated within the framework of the United Nations Convention on the Law of the Sea (UNCLOS), in particular through the provisions of Articles 97, 112 and 115, which establish the freedom to lay cables in international waters beyond the continental shelf and provide for measures to protect cables against shipping accidents.
However, submarine cables face significant legal challenges: UNCLOS lacks specific legal protections and enforcement mechanisms against emerging threats, particularly in international waters. This is further complicated by the non-ratification of the treaty by key states such as the U.S. and Turkey. Many countries lack explicit legal provisions criminalizing the destruction or theft of undersea cables, creating jurisdictional ambiguities that organized crime can exploit. Other legal frameworks, such as the 1884 Convention for the Protection of Submarine Telegraph Cables, are outdated and fail to address modern threats like cyberattacks and hybrid-warfare tactics. The unclear jurisdiction and weak enforcement mechanisms demonstrate the difficulty of protecting submarine cables from organized crime.
The Arctic Ocean in particular exemplifies the challenges associated with surveillance and enforcement in vast and remote areas, leaving a legal vacuum that criminals may exploit. In the Arctic, the absence of a central international authority to oversee submarine cable protection, and the reliance on military organizations like NATO, hinder coordinated global responses.
Organizations such as the ICPC thus highlight the need for updated and more comprehensive legal frameworks to ensure the security of submarine cables.
==== Legal countermeasures ====
The legal challenges of protecting submarine cables from organized crime have resulted in recommendations ranging from treaty amendments to domestic law reforms and multi-level governance models.
Some scholars argue that UNCLOS should be updated to protect cables extensively, including cooperative monitoring and enforcement protocols. Additionally, principles from the law of the sea, state responsibility, and the laws on the use of force could be creatively applied to strengthen protections for cables. Enforcement issues could be tackled by aligning domestic laws with UNCLOS, implementing national response protocols, and creating streamlined points of contact for cable incidents. Given the increased involvement of organizations like NATO, others recommend clarifying the roles of military and non-military actors in cable security and enhancing multi-level governance models.
While these proposed legal solutions seem promising, their practical implementation remains a challenge due to the complexity of international treaties, the need for international cooperation, the lack of domestic criminalization of cable damage, and the evolving nature of technological threats. Additionally, while UNCLOS's ambiguous jurisdiction in international waters hinders effective enforcement, limited political interest seems to hamper treaty development.
== Environmental impact ==
The presence of cables in the oceans can be a danger to marine life. With the proliferation of cable installations and today's growing demand for inter-connectivity, the environmental impact is increasing.
Submarine cables can impact marine life in a number of ways.
=== Alteration of the seabed ===
Seabed ecosystems can be disturbed by the installation and maintenance of cables. The effects of cable installation are generally limited to specific areas. The intensity of disturbance depends on the installation method.
Cables are often laid in the so-called benthic zone of the seabed. The benthic zone is the ecological region at the bottom of the sea where benthos such as clams and crabs live, and where the surface sediments – deposits of matter and particles in the water that provide a habitat for marine species – are located.
Sediments can be disturbed by cable installation, for example by trenching with water jets or ploughing. This can lead to reworking of the sediments, altering the substrate of which they are composed.
According to several studies, the biota of the benthic zone is only slightly affected by the presence of cables. However, the presence of cables can trigger behavioral disturbances in living organisms. The main observation is that cables provide a hard substrate for anemone attachment; these organisms are found in large numbers around cables that run through soft sediments, which are not normally suitable for them, and the same is true of flatfish. Although little observed, the presence of cables can also change the water temperature and thereby disturb the surrounding natural habitat.
However, these disturbances are not very persistent over time, and can stabilize within a few days. Cable operators are trying to implement measures to route cables in such a way as to avoid areas with sensitive and vulnerable ecosystems.
=== Entanglement ===
Entanglement of marine animals in cables is one of the main causes of cable damage. Whales – sperm whales in particular – are the main animals that become entangled in cables and damage them. Encounters between these animals and cables can cause injury and sometimes death. Studies carried out between 1877 and 1955 reported 16 cable ruptures caused by whale entanglement, 13 of them by sperm whales. Between 1907 and 2006, 39 such events were recorded. Cable burial techniques are gradually being introduced to prevent such incidents.
=== The risk of fishing ===
Although submarine cables lie on the seabed, fishing activity can damage them. Fishermen using techniques that scrape the seabed, or that drag equipment such as trawls or cages, can damage the cables, resulting in leakage of the fluids and of the chemical and toxic materials of which the cables are made.
Areas with a high density of submarine cables do, however, have the advantage of being more sheltered from fishing. Thanks to limitations and bans on fishing, marine fauna is better protected in these maritime regions, albeit at the expense of the benthic and sedimentary zones. Studies have shown a positive effect on the fauna surrounding cable installation zones.
=== Pollution ===
Submarine cables are made of copper or optical fibers, surrounded by several protective layers of plastic, wire or synthetic materials. Cables can also contain dielectric or hydrocarbon fluids, which act as electrical insulators. These substances can be harmful to marine life.
Fishing, aging cables and marine species that collide with or become entangled in cables can damage cables and spread toxic and harmful substances into the sea. However, the impact of submarine cables is limited compared with other sources of ocean pollution.
There is also a risk of releasing pollutants buried in sediments. When sediments are re-suspended due to the installation of cables, toxic substances such as hydrocarbons may be released.
Preliminary analyses can assess the level of sediment toxicity and select a cable route that avoids the remobilization and dispersion of sediment pollutants. Newer techniques may also make it possible to use less polluting materials in cable construction.
=== Sound waves and electromagnetic waves ===
The installation and maintenance of cables requires machinery and equipment that can generate sound waves or electromagnetic waves, which can disturb animals that rely on such waves to orient themselves in space or to communicate. The underwater sound produced depends on the equipment used, the characteristics of the seabed where the cables are located, and the relief of the area.
Underwater noise and waves can modify the behavior of certain underwater species, for example by disrupting migration, communication or reproduction. Available information suggests that underwater noise generated by submarine cable engineering operations has a limited acoustic footprint and limited duration.
== See also ==
Bathometer
Cable layer
Cable landing point
List of domestic submarine communications cables
List of international submarine communications cables
Loaded submarine cable
Submarine power cable
Transatlantic communications cable
SMART cables
== Notes ==
== References ==
== Further reading ==
Charles Bright (1898). Submarine Telegraphs: Their History, Construction, and Working. Crosby Lockwood and Son.
Vary T. Coates and Bernard Finn (1979). A Retrospective Technology Assessment: The Transatlantic Cable of 1866. San Francisco Press.
Bern Dibner (1959). The Atlantic Cable. Burndy Library.
Dzieza, Josh (April 16, 2024). "The cloud under the sea: The invisible seafaring industry that keeps the internet afloat". The Verge. Retrieved 2024-04-16.
Bernard Finn; Daqing Yang, eds. (2009). Communications Under the Seas:The Evolving Cable Network and Its Implications. MIT Press.
K.R. Haigh (1968). Cableships and Submarine Cables. United States Underseas Cable Corporation.
Norman L. Middlemiss (2000). Cableships. Shield Publications.
Nicole Starosielski (2015). The Undersea Network (Sign, Storage, Transmission). Duke University Press. ISBN 978-0822357551.
John Steele Gordon (2000). A Thread Across the Ocean. World of Books. ISBN 978-0743231275.
== External links ==
The International Cable Protection Committee – includes a register of submarine cables worldwide (though not always updated as often as one might hope)
Timeline of Submarine Communications Cables, 1850–2010
Kingfisher Information Service – Cable Awareness; UK Fisherman's Submarine Cable Awareness site
Orange's Fishermen's/Submarine Cable Information
Oregon Fisherman's Cable Committee Archived 2006-02-03 at the Wayback Machine
=== Articles ===
History of the Atlantic Cable & Submarine Telegraphy – Wire Rope and the Submarine Cable Industry
Mother Earth Mother Board – Wired article by Neal Stephenson about submarine cables
Medford, L. V.; Meloni, A.; Lanzerotti, L. J.; Gregori, G. P. (April 2, 1981). "Geomagnetic induction on a transatlantic communications cable". Nature. 290 (5805): 392–393. Bibcode:1981Natur.290..392M. doi:10.1038/290392a0. ISSN 1476-4687. S2CID 4330089. Retrieved 2022-07-21.
Hunt, Bruce J. (2004). "Lord Cable". Europhysics News. 35 (6): 186. Bibcode:2004ENews..35..186H. doi:10.1051/epn:2004602.
Winkler, Jonathan Reed. Nexus: Strategic Communications and American Security in World War I. (Cambridge, MA: Harvard University Press, 2008) Archived 2008-05-10 at the Wayback Machine Account of how U.S. government discovered strategic significance of communications lines, including submarine cables, during World War I.
Animations from Alcatel showing how submarine cables are installed and repaired
Work begins to repair severed net
Flexibility in Undersea Networks – Ocean News & Technology magazine Dec. 2014
=== Maps ===
Submarine Cable Map by TeleGeography
Map gallery of submarine cable maps by TeleGeography, showing evolution since 2000. 2008 map in the Guardian; 2014 map on CNN.
Map and Satellite views of US landing sites for transatlantic cables
Map and Satellite views of US landing sites for transpacific cables
Positions and Route information of Submarine Cables in the Seas Around the UK
The Marconi Company was a British telecommunications and engineering company founded by the Italian inventor Guglielmo Marconi in 1897. A pioneer of long-distance wireless communication and mass-media broadcasting, it eventually became one of the UK's most successful manufacturing companies.
Its roots were in the Wireless Telegraph & Signal Company, which underwent several changes in name after mergers and acquisitions. In 1999, its defence equipment manufacturing division, Marconi Electronic Systems, merged with British Aerospace (BAe) to form BAE Systems. In 2006, financial difficulties led to the collapse of the remaining company, with the bulk of the business acquired by the Swedish telecommunications company, Ericsson.
== History ==
=== Naming history ===
1897–1900: The Wireless Telegraph & Signal Company
1900–1963: Marconi's Wireless Telegraph Company
1963–1987: Marconi Company Ltd
1987–1998: GEC-Marconi Ltd
1998–1999: Marconi Electronic Systems Ltd
1999–2003: Marconi plc, with Marconi Communications as principal subsidiary
2003–2006: Marconi Corporation plc
=== Early history ===
Marconi's "Wireless Telegraph and Signal Company" was formed on 20 July 1897 after a British patent for wireless technology was granted on 2 July that year. The company opened the world's first radio factory on Hall Street in Chelmsford northeast of London in 1898 and was responsible for some of the most important advances in radio and television. These include:
The diode vacuum tube in 1904 (Fleming)
Transatlantic radio broadcasting between Clifden, Ireland and Glace Bay, Nova Scotia, 17 October 1907
High frequency tuned broadcasting
Formation of the British Broadcasting Company (later to become the independent BBC)
Formation of the Marconi Wireless Telegraph Company of America (assets acquired by RCA in 1920)
Marconi International Marine Communication Co. (M.I.M.C.Co.), founded 1900 in London
Compagnie de Télégraphie sans Fil (C.T.S.F.), founded 1900 in the City of Brussels
Short wave beam broadcasting
Radar
Television
Avionics
The subsidiary Marconi Wireless Telegraph Company of America, also called "American Marconi", was founded in 1899. It was the dominant radio communications provider in the US until the formation of the Radio Corporation of America (RCA) in 1919.
In 1900 the company's name was changed to "Marconi's Wireless Telegraph Company", and Marconi's Wireless Telegraph Training College was established in 1901. The company and factory were moved within Chelmsford to New Street Works in 1912 to allow for production expansion in light of the RMS Titanic disaster. Along with private entrepreneurs, the Marconi company formed the Unione Radiofonica Italiana (URI) in 1924, which was granted a monopoly of radio broadcasts by Mussolini's regime that year. After the war, URI became the RAI, which lives on to this day.
Isaac Shoenberg joined the company in 1914 and became joint general manager in 1924. After leaving Marconi in 1928 he went on to lead research at EMI where he was influential in the development of television broadcasting. The project was a collaboration between EMI and Marconi; for Marconi it was led by Simeon Aisenstein, who had joined Marconi in 1921 after emigrating from the USSR.
In 1939, the Marconi Research Laboratories were founded at Great Baddow, Essex. In 1941 there was a buyout of Marconi-Ekco Instruments to form Marconi Instruments.
=== Operations as English Electric subsidiary ===
English Electric acquired the Marconi Company in 1946 to complement its other operations: heavy electrical engineering, aircraft manufacture and its railway traction business. In 1948 the company was reorganised into four divisions: Communications, Broadcasting, Aeronautics and Radar. These had expanded to 13 manufacturing divisions by 1965 when a further reorganisation took place. The divisions were placed into three groups: Telecommunications, Components and Electronics.
At this time the Marconi Company had facilities at New Street Chelmsford, Baddow, Basildon, Billericay, and Writtle as well as in Wembley, Gateshead and Hackbridge. It also owned Marconi Instruments, Sanders Electronics, Eddystone Radio and Marconi Italiana (based in Genoa, Italy). In 1967 Marconi took over Stratton and Company to form Eddystone Radio.
=== Expansion in Canada ===
In 1903, Marconi founded Marconi's Wireless Telegraph Company of Canada, which was renamed the Canadian Marconi Company in 1925. The radio business of the Canadian Marconi Company has been known as Ultra Electronics TCS since 2002, and its avionics activities as CMC Electronics, owned by Esterline since 2007.
=== Expansion as GEC subsidiary ===
In 1967 or 1968, English Electric was subject to a takeover bid by the Plessey Company but chose instead to accept an offer from the General Electric Company (GEC). Under UK government pressure, the computer section of GEC, English Electric Leo Marconi (EELM), merged with International Computers and Tabulators (ICT) to form International Computers Limited (ICL). The computer interests of Elliott Automation which specialised in real-time computing were amalgamated with those of Marconi's Automation Division to form Marconi-Elliott Computers, later renamed as GEC Computers. In 1968, Marconi Space and Defence Systems and Marconi Underwater Systems were formed.
The Marconi Company continued as the primary defence subsidiary of GEC and was renamed GEC-Marconi in 1987. During the period 1968–1999, GEC-Marconi/MES underwent significant expansion.
Acquisitions which were folded into the company and partnerships established included:
Other acquisitions included:
Divisions of Plessey in 1989 (others acquired by its partner in the deal, Siemens AG, to meet with regulatory approval).
Plessey Avionics
Plessey Naval Systems
Plessey Cryptography
Plessey Electronic Systems (75%)
Sippican
Leigh Instruments
In a major reorganisation of the company, GEC-Marconi was renamed Marconi Electronic Systems in 1996 and was separated from other non-defence assets.
== Since 1999 ==
In 1999, GEC was broken up and parts were sold off. Marconi Electronic Systems, which included its wireless assets, was demerged and sold to British Aerospace, forming BAE Systems.
GEC, realigning itself as a primarily telecommunications company following the MES sale, retained the Marconi brand and renamed itself Marconi plc. BAE were granted limited rights to continue use of the Marconi name in existing partnerships, which had ceased by 2005. Major spending and the dot-com collapse led to a major restructuring of the Marconi group in 2003: in a debt-for-equity swap, shareholders retained 0.5% of the new company, Marconi Corporation plc.
In October 2005 the Swedish firm Ericsson offered to buy the Marconi name and most of the assets. The transaction was completed on 23 January 2006, effective as of 1 January 2006. The remainder of the Marconi company, with some 2,000 staff working on telecommunications infrastructure in the UK and the Republic of Ireland, was renamed Telent.
== See also ==
Aerospace industry in the United Kingdom – Overview of the aerospace industry in the United Kingdom
GEC-Marconi scientist deaths conspiracy theory – Deaths of British scientists, allegedly linked
Imperial Wireless Chain – Radiotelegraphic communications network within the British Empire in the 20th century
Marconiphone – English manufacturer of domestic receiving equipment
Marconi-Osram Valve – former British manufacturer of vacuum tubes
Sinking of the Titanic § 14 April 1912
== References ==
Baker, W.J. (2002) [1970]. History of the Marconi Company. Oxfordshire: Routledge. ISBN 978-1138863934.
== External links ==
Ericsson press release about the acquisition
Catalogue of the Marconi Archives At the Department of Special Collections and Western Manuscripts, Bodleian Library, University of Oxford
Marconi Calling The Life, Science and Achievements of Guglielmo Marconi
History of Marconi House
An engine control unit (ECU), also called an engine control module (ECM), is a device that controls various subsystems of an internal combustion engine. Systems commonly controlled by an ECU include the fuel injection and ignition systems.
The earliest ECUs (used by aircraft engines in the late 1930s) were mechanical-hydraulic units; however, most 21st-century ECUs operate using digital electronics.
== Functions ==
The main functions of the ECU are typically:
Fuel injection system
Ignition system
Idle speed control (typically either via an idle air control valve or the electronic throttle system)
Variable valve timing and/or variable valve lift systems
The sensors used by the ECU include:
=== Secondary ===
Other functions include:
Launch control
Fuel pressure regulator
Rev limiter
Wastegate control and anti-lag
Theft prevention by blocking ignition, in response to input from an immobiliser
In a camless piston engine (an experimental design not currently used in any production vehicles), the ECU has continuous control of when each of the intake and exhaust valves are opened and by how much.
== Early systems ==
One of the earliest attempts to use such a unitized and automated device to manage multiple engine control functions simultaneously was the Kommandogerät system, created by BMW in 1939 and used by the BMW 801 14-cylinder radial engine which powered the Focke-Wulf Fw 190 V5 fighter aircraft. This device replaced the six controls used to initiate hard acceleration with a single control; however, the system could cause surging and stalling problems.
== Usage in motor vehicles ==
In the early 1970s, the Japanese electronics industry began producing integrated circuits and microcontrollers used for controlling engines. The Ford EEC (Electronic Engine Control) system, which utilized the Toshiba TLCS-12 microprocessor, went into mass production in 1975.
The first Bosch engine management system was the Motronic 1.0, introduced in the 1979 BMW 7 Series (E23). This system was based on the existing Bosch Jetronic fuel injection system, to which control of the ignition system was added.
In 1981, a Delco Electronics ECU was used by several Chevrolet and Buick engines to control their fuel system (a closed-loop carburetor) and ignition system. By 1988, Delco Electronics was the leading producer of engine management systems, producing over 28,000 ECUs per day.
== Usage in aircraft engines ==
Such systems are also used for internal combustion engines in other applications. In aeronautical applications, the systems are known as FADECs (full-authority digital engine controls). This kind of electronic control is less common in piston-engined light fixed-wing aircraft and helicopters than in automobiles, because the common aircraft configuration of a carbureted engine with a magneto ignition system does not require electrical power generated by an alternator to run, which is considered a safety advantage.
== See also ==
Air-fuel ratio meter
Check engine light
List of auto parts
On-board diagnostics (OBD)
Powertrain control module (PCM)
== References ==
Cruise Control may refer to:
Cruise control, a system that automatically controls the speed of a motor vehicle
Adaptive cruise control
CruiseControl, software build framework
Cruise Control (play), a 2014 play by David Williamson
"Cruise Control" (Headless Chickens song)
"Cruise Control", a song by the Dixie Dregs from the 1977 album Free Fall
"Cruise Control", a song by Joey Badass from the 2022 album 2000
"Cruise Control", a song by Kylie Minogue from the 2003 album Body Language
"Cruise Control", a song by Mariah Carey from the 2008 album E=MC²
"Cruise Control", a song by Onefour
"Cruise Control", a song by Tower of Power from the 1993 album T.O.P.
Speed 2: Cruise Control, a 1997 film
Speed 2: Cruise Control (soundtrack)
Electronic throttle control (ETC) is an automotive technology that uses electronics to replace the traditional mechanical linkage between the driver's input, such as a foot pedal, and the vehicle's throttle mechanism, which regulates speed or acceleration. This concept is often called drive-by-wire, and sometimes accelerate-by-wire or throttle-by-wire.
== Operation ==
A typical ETC system consists of three major components: (i) an accelerator pedal module (ideally with two or more independent sensors), (ii) a throttle valve that can be opened and closed by an electric motor (sometimes referred to as an electric or electronic throttle body (ETB)), and (iii) a powertrain or engine control module (PCM or ECM). The ECM is a type of electronic control unit (ECU), which is an embedded system that employs software to determine the required throttle position by calculations from data measured by other sensors, including the accelerator pedal position sensors, engine speed sensor, vehicle speed sensor, and cruise control switches. The electric motor is then used to open the throttle valve to the desired angle via a closed-loop control algorithm within the ECM.
== Benefits ==
The benefits of electronic throttle control are largely unnoticed by most drivers because the aim is to make the vehicle power-train characteristics seamlessly consistent irrespective of prevailing conditions, such as engine temperature, altitude, and accessory loads. Electronic throttle control is also working 'behind the scenes' to dramatically improve the ease with which the driver can execute gear changes and deal with the dramatic torque changes associated with rapid accelerations and decelerations.
Electronic throttle control facilitates the integration of features such as cruise control, traction control, stability control, and precrash systems and others that require torque management, since the throttle can be moved irrespective of the position of the driver's accelerator pedal. ETC provides some benefit in areas such as air-fuel ratio control, exhaust emissions and fuel consumption reduction, and also works in concert with other technologies such as gasoline direct injection.
== Failure modes ==
There is no mechanical linkage between the accelerator pedal and the throttle valve in electronic throttle control. Instead, the position of the throttle valve (i.e., the amount of air admitted to the engine) is fully controlled by the ETC software via the electric motor. Simply commanding the electric motor to open or close the valve would be open-loop control and would be inaccurate. Thus most, if not all, current ETC systems use closed-loop feedback, such as PID control, whereby the ECU commands the throttle to open or close a certain amount, the throttle position sensor(s) are continually read, and the software makes appropriate adjustments to reach the desired amount of engine power.
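A minimal sketch of this closed-loop idea: a discrete PID update drives a crude first-order model of the throttle plate toward a target angle, with the motor drive saturated at its limits. All numbers (gains, plate dynamics, time step) are illustrative, not taken from any production ECU:

```python
# Toy closed-loop throttle-position control: a discrete PID update
# acting on a simple first-order throttle-plate model. Illustrative only.
def simulate(target_deg=45.0, dt=0.001, steps=2000):
    kp, ki, kd = 8.0, 40.0, 0.05          # assumed controller gains
    angle, integral, prev_err = 0.0, 0.0, target_deg
    for _ in range(steps):
        err = target_deg - angle           # SP - PV, from the position sensor
        integral += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integral + kd * deriv
        u = max(-100.0, min(100.0, u))     # motor drive saturation, in %
        angle += (0.9 * u - 0.5 * angle) * 10 * dt  # toy plate dynamics
        prev_err = err
    return angle

print(f"plate angle after 2 s: {simulate():.1f} deg")  # settles near 45
```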
There are two primary types of throttle position sensor (TPS): a potentiometer or a non-contact Hall-effect sensor (a magnetic device). A potentiometer is satisfactory for non-critical applications such as the volume control on a radio. However, potentiometers are mechanical in nature: they can wear, can be contaminated by dirt and dust, and can cause erratic operation in a motor vehicle. This is an insidious failure mode, as it may produce no symptoms until total failure. The more reliable solution is a magnetic (or optical) coupling, which makes no physical contact and so is not subject to failure by wear. All cars with a TPS have what is known as a limp-home mode. A car enters limp-home mode when the accelerator pedal, the engine control computer and the throttle can no longer communicate and function together: the engine control computer shuts down the signal to the throttle position motor, and a set of springs in the throttle sets it to a fast idle – fast enough to get the transmission in gear, but not so fast that driving becomes dangerous.
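One widely used defensive pattern against sensor failure is to fit two pedal sensors with different transfer functions and cross-check them every cycle, falling back to limp-home behavior on disagreement. The sketch below is a hypothetical illustration of that pattern only; the voltage scales, tolerance, and function are assumptions, not any specific manufacturer's logic:

```python
# Hypothetical dual-sensor plausibility check. Sensor 2 is assumed to
# report half the voltage of sensor 1, so a single correlated electrical
# fault is unlikely to fool the comparison.
def pedal_position(v1: float, v2: float, tol: float = 0.10):
    """Return (position 0..1, limp_home_flag) from two sensor voltages."""
    pos1 = v1 / 5.0          # sensor 1: 0-5 V full scale (assumption)
    pos2 = v2 / 2.5          # sensor 2: 0-2.5 V full scale (assumption)
    if abs(pos1 - pos2) > tol or not (0.0 <= pos1 <= 1.0):
        return 0.0, True     # implausible reading: request limp-home mode
    return (pos1 + pos2) / 2, False

print(pedal_position(2.5, 1.25))  # (0.5, False) -- sensors agree
print(pedal_position(2.5, 0.30))  # (0.0, True)  -- disagreement, limp-home
```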
Software or electronic failures within the ETC have been suspected by some to be responsible for alleged incidents of unintended acceleration. A series of investigations by the U.S. National Highway Traffic Safety Administration (NHTSA) was unable to determine the cause of all reported incidents of unintended acceleration in 2002-and-later model year Toyota and Lexus vehicles. A February 2011 report issued by a team from NASA (which studied the source code and electronics of a 2005 Camry model at the request of NHTSA) did not rule out software malfunctions as a potential cause. In October 2013, the first jury to hear evidence about Toyota's source code (from expert witness Michael Barr) found Toyota liable for the death of a passenger in a September 2007 unintended-acceleration collision in Oklahoma.
== References ==
Linear control encompasses control systems and control theory based on negative feedback, in which a control signal is produced to maintain the controlled process variable (PV) at the desired setpoint (SP). There are several types of linear control system with different capabilities.
== Proportional control ==
Proportional control is a type of linear feedback control system in which a correction is applied to the controlled variable which is proportional to the difference between the desired value (SP) and the measured value (PV). Two classic mechanical examples are the toilet bowl float proportioning valve and the fly-ball governor.
The proportional control system is more complex than an on–off control system but simpler than the proportional-integral-derivative (PID) control system used, for instance, in an automobile cruise control. On–off control works for systems that do not require high accuracy or responsiveness, but it is not effective for rapid and timely corrections. Proportional control overcomes this by modulating the manipulated variable (MV), such as a control valve, at a gain level that avoids instability while applying correction as fast as practicable.
A drawback of proportional control is that it cannot eliminate the residual SP–PV error, as it requires an error to generate a proportional output. A PI controller can be used to overcome this. The PI controller uses a proportional term (P) to remove the gross error, and an integral term (I) to eliminate the residual offset error by integrating the error over time.
In some systems, there are practical limits to the range of the MV. For example, a heater has a limit to how much heat it can produce and a valve can open only so far. Adjustments to the gain simultaneously alter the range of error values over which the MV is between these limits. The width of this range, in units of the error variable and therefore of the PV, is called the proportional band (PB).
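A concrete reading of the proportional band, as a minimal sketch: with the MV limited to 0–100% and the error expressed in percent of the PV span, the error range over which the MV traverses its full output is 100% divided by the gain. The gain value and setpoints below are illustrative:

```python
# Proportional-only controller with output limits, illustrating the
# proportional band PB = 100% / gain (both in percent of span).
def p_controller(gain: float):
    def output(sp: float, pv: float) -> float:
        u = gain * (sp - pv)            # correction proportional to error
        return max(0.0, min(100.0, u))  # actuator limits, 0-100 %
    return output

gain = 4.0
ctrl = p_controller(gain)
print(f"proportional band: {100 / gain:.0f}% of span")  # 25%
print(ctrl(50.0, 40.0))   # error 10 -> output 40.0
print(ctrl(50.0, 10.0))   # error 40 -> output saturates at 100.0
```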
=== Furnace example ===
When controlling the temperature of an industrial furnace, it is usually better to control the opening of the fuel valve in proportion to the current needs of the furnace. This helps avoid thermal shocks and applies heat more effectively.
At low gains, only a small corrective action is applied when errors are detected. The system may be safe and stable but may be sluggish in response to changing conditions. Errors will remain uncorrected for relatively long periods of time and the system is overdamped. If the proportional gain is increased, such systems become more responsive and errors are dealt with more quickly. There is an optimal value for the gain setting when the overall system is said to be critically damped. Increases in loop gain beyond this point lead to oscillations in the PV and such a system is underdamped. Adjusting gain to achieve critically damped behavior is known as tuning the control system.
In the underdamped case, the furnace heats quickly. Once the setpoint is reached, stored heat within the heater sub-system and in the walls of the furnace will keep the measured temperature rising beyond what is required. After rising above the setpoint, the temperature falls back and eventually heat is applied again. Any delay in reheating the heater sub-system allows the furnace temperature to fall further below the setpoint and the cycle repeats. The temperature oscillations that an underdamped furnace control system produces are undesirable.
In a critically damped system, as the temperature approaches the setpoint, the heat input begins to be reduced, the rate of heating of the furnace has time to slow and the system avoids overshoot. Overshoot is also avoided in an overdamped system but an overdamped system is unnecessarily slow to initially reach a setpoint response to external changes to the system, e.g. opening the furnace door.
== PID control ==
Pure proportional controllers must operate with residual error in the system. Though PI controllers eliminate this error they can still be sluggish or produce oscillations. The PID controller addresses these final shortcomings by introducing a derivative (D) action to retain stability while responsiveness is improved.
=== Derivative action ===
The derivative is concerned with the rate-of-change of the error with time: if the measured variable approaches the setpoint rapidly, then the actuator is backed off early to allow it to coast to the required level; conversely, if the measured value begins to move rapidly away from the setpoint, extra effort is applied, in proportion to that rapidity, to help move it back.
On control systems involving motion control of a heavy item like a gun or camera on a moving vehicle, the derivative action of a well-tuned PID controller can allow it to reach and maintain a setpoint better than most skilled human operators. Over-applied derivative action, however, can lead to oscillations.
=== Integral action ===
The integral term magnifies the effect of long-term steady-state errors, applying an ever-increasing effort until the error is removed. In the example of the furnace above working at various temperatures, if the heat being applied does not bring the furnace up to setpoint, for whatever reason, integral action increasingly moves the proportional band relative to the setpoint until the PV error is reduced to zero and the setpoint is achieved.
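The textbook discrete form of the full controller combines the three terms. The sketch below is a generic positional PID, not the algorithm of any specific commercial controller; the gains and the sample period dt are assumptions to be tuned per plant.

```python
class PID:
    """Minimal positional PID controller (illustrative, untuned)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, pv):
        error = setpoint - pv
        # I: accumulate error over time to remove residual offset.
        self.integral += error * self.dt
        # D: react to the rate of change of the error.
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```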
=== Ramp up % per minute ===
Some controllers include the option to limit the "ramp up % per minute". This option can be very helpful in stabilizing small boilers (3 MBTUH), especially during light loads, such as in summer. A utility boiler "unit may be required to change load at a rate of as much as 5% per minute (IEA Coal Online - 2, 2007)".
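A ramp limit of this kind can be implemented by moving the working setpoint toward the target no faster than the configured rate. The sketch below is one plausible realization; the percent-of-full-scale convention is an assumption.

```python
def ramp_limited_setpoint(current_sp, target_sp, max_pct_per_min, dt):
    """Advance the working setpoint toward target_sp, limited to
    max_pct_per_min percent of full scale (taken as 100) per minute;
    dt is the update interval in seconds."""
    max_step = max_pct_per_min * dt / 60.0
    step = max(-max_step, min(max_step, target_sp - current_sp))
    return current_sp + step
```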
== Other techniques ==
It is possible to filter the PV or error signal. Doing so can help reduce instability or oscillations by reducing the response of the system to undesirable frequencies. Many systems have a resonant frequency. By filtering out that frequency, stronger overall feedback can be applied before oscillation occurs, making the system more responsive without shaking itself apart.
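A single-pole low-pass filter on the PV is the simplest version of this idea; the smoothing factor alpha below is an illustrative tuning parameter, with smaller values attenuating high frequencies more strongly at the cost of added lag.

```python
def filtered_pv(previous, raw_pv, alpha=0.2):
    """Exponential (single-pole) low-pass filter of the process variable."""
    return previous + alpha * (raw_pv - previous)
```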
Feedback systems can be combined. In cascade control, one control loop applies control algorithms to a measured variable against a setpoint but then provides a varying setpoint to another control loop rather than affecting process variables directly. If a system has several different measured variables to be controlled, separate control systems will be present for each of them.
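Reusing the PID sketch from the previous section, a cascade arrangement looks like the following: the outer (temperature) loop produces the setpoint for the inner (flow) loop, and only the inner loop drives the valve. The loop variables and gains are illustrative.

```python
outer = PID(kp=2.0, ki=0.1, kd=0.0, dt=1.0)    # slow temperature loop
inner = PID(kp=1.0, ki=0.5, kd=0.0, dt=0.1)    # fast flow loop

temp_setpoint, measured_temp, measured_flow = 350.0, 300.0, 12.0
flow_setpoint = outer.update(temp_setpoint, measured_temp)
valve_command = inner.update(flow_setpoint, measured_flow)
```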
Control engineering in many applications produces control systems that are more complex than PID control. Examples of such field applications include fly-by-wire aircraft control systems, chemical plants, and oil refineries. Model predictive control systems are designed using specialized computer-aided-design software and empirical mathematical models of the system to be controlled.
== See also ==
Linear system
Linear time-invariant system
Nonlinear control
== References == | Wikipedia/Linear_control |
A compact controller is a generic name given to a small autonomous controller which can control one or several control loops. They are also known as panel-mounted, discrete, dedicated, or universal process controllers. The controllers can be easily configured to control most types of control loop. Simple versions have a numerical display of the process values. High-end compact controllers are available with a touchscreen and a graphical representation of the control loop or the system.
In addition to closed-loop control, compact controllers can also take over sequencing tasks and thus run the process sequence, or parts of it, independently. Compact controllers can be found in almost all industries. For example, the program controller function is often used in the food industry, or in hardening processes to define specific temperature profiles.
== Construction ==
Compact controllers are either fixed or modular and can therefore be extended. The inputs for the actual value are often universal and can be configured for different types of sensors and signals. Digital inputs are also available for the detection of switching operations. Various binary switching elements are available as outputs, such as relay, semiconductor relay, logic and MOSFET outputs, and are used either for controlling binary actuators or for control signals. The analog outputs can be configured as voltage or current outputs, e.g. 4–20 mA or 0–10 V, and are used for the continuous control of analog actuators such as control valves, thyristor power controllers or frequency converters.
Operation, parameterization and configuration can be carried out from the device front. In addition, configuration programs are supplied that present the settings to the user in a clearly arranged form. The connection between PC and controller can be established via USB, TCP/IP or serial interfaces.
== References == | Wikipedia/Compact_controller |
In telecommunications, signaling is the use of signals for controlling communications. This may constitute an information exchange concerning the establishment and control of a telecommunication circuit and the management of the network.
== Classification ==
Signaling systems may be classified based on several principal characteristics.
=== In-band and out-of-band signaling ===
In the public switched telephone network (PSTN), in-band signaling is the exchange of call control information within the same physical channel, or within the same frequency band, that the message (the callers' voice) is using. An example is dual-tone multi-frequency signaling (DTMF), which is used on most telephone lines to customer premises.
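Because DTMF is in-band, each keypress is simply audio sharing the voice channel: the sum of one low-group and one high-group sine tone. The sketch below synthesizes such a tone pair using the standard DTMF frequency grid; the sample rate and duration are illustrative choices.

```python
import numpy as np

# Standard DTMF grid: (low-group, high-group) frequencies in Hz.
DTMF = {"1": (697, 1209), "2": (697, 1336), "3": (697, 1477),
        "4": (770, 1209), "5": (770, 1336), "6": (770, 1477),
        "7": (852, 1209), "8": (852, 1336), "9": (852, 1477),
        "*": (941, 1209), "0": (941, 1336), "#": (941, 1477)}

def dtmf_tone(key, duration=0.1, rate=8000):
    """Return `duration` seconds of the two-tone signal for one keypress."""
    f_low, f_high = DTMF[key]
    t = np.arange(int(duration * rate)) / rate
    return 0.5 * (np.sin(2 * np.pi * f_low * t) +
                  np.sin(2 * np.pi * f_high * t))
```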
Out-of-band signaling is telecommunication signaling on a dedicated channel separate from that used for the message. Out-of-band signaling has been used since Signaling System No. 6 (SS6) was introduced in the 1970s, and also in Signalling System No. 7 (SS7), introduced in 1980, which became the standard for signaling among exchanges internationally.
In the mid-20th century, supervision signals on long-distance trunks in North America were primarily in-band, for example at 2600 Hz, necessitating a notch filter to prevent interference. Late in the century, all supervisory signals had been moved out of band. With the advent of digital trunks, supervision signals are carried by robbed bits in the T-carrier or by bits in the E-carrier dedicated to signaling.
=== Line versus register signaling ===
Line signaling is concerned with conveying information on the state of the line or channel, such as on-hook, off-hook (answer supervision and disconnect supervision, together referred to as supervision), ringing, and hook flash.
Register signaling is concerned with conveying addressing information, such as the calling and/or called telephone number. In the early days of telephony, with operators handling calls, the addressing information was given by voice, as in "Operator, connect me to Mr. Smith please". In the first half of the 20th century, addressing information was conveyed using a rotary dial, which rapidly breaks the line current into pulses, with the number of pulses conveying the address. Finally, starting in the second half of the century, address signaling has been by DTMF.
=== Channel-associated versus common-channel signaling ===
Channel-associated signaling (CAS) employs a signaling channel that is dedicated to a specific bearer channel.
Common-channel signaling (CCS) employs a signaling channel which conveys signaling information relating to multiple bearer channels. These bearer channels, therefore, have their signaling channel in common.
=== Compelled signaling ===
Compelled signaling refers to signaling where the receipt of each signal from an originating register needs to be explicitly acknowledged before the next signal can be sent.
Most forms of R2 register signaling are compelled, while R1 multi-frequency signaling is not.
The term is only relevant in the case of signaling systems that use discrete signals (e.g. a combination of tones to denote one digit), as opposed to signaling systems which are message-oriented (such as SS7 and ISDN Q.931) where each message is able to convey multiple items of information (e.g. multiple digits of the called telephone number).
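The essential mechanic of compelled signaling is a per-signal handshake. The toy sketch below shows it with a hypothetical channel object; the class and method names are invented for illustration and do not correspond to any real signaling API.

```python
class LoopbackChannel:
    """Toy far end that acknowledges every signal immediately.
    A real exchange would also time out and retransmit."""
    def send(self, signal):
        self.pending = signal
    def wait_for_ack(self):
        self.pending = None   # far end has consumed the signal

def send_compelled(digits, channel):
    """Send each digit only after the previous one was acknowledged."""
    for digit in digits:
        channel.send(digit)
        channel.wait_for_ack()

send_compelled("5551234", LoopbackChannel())
```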
=== Subscriber versus trunk signaling ===
Subscriber signaling refers to the signaling between the telephone and the telephone exchange. Trunk signaling is the signaling between exchanges.
== Examples ==
Every signaling system can be characterized along each of the above axes of classification. A few examples:
DTMF is an in-band, channel-associated register signaling system. It is not compelled.
SS7 (e.g., TUP or ISUP) is an out-of-band, common-channel signaling system that incorporates both line and register signaling.
Metering pulses (depending on the country, these are 50 Hz, 12 kHz or 16 kHz pulses sent by the exchange to payphones or metering boxes) are out-of-band (because they do not fall within the frequency range used by the telephony signal, which is 300 through 3400 Hz) and channel-associated. They are generally regarded as line signaling, although this is open to debate.
E and M signaling (E&M) is an out-of-band channel-associated signaling system. The base system is intended for line signaling, but if decadic pulses are used it can also convey register information. E&M line signaling is however usually paired with DTMF register signaling.
By contrast, the L1 signaling system (which typically employs a 2280 Hz tone of various durations) is an in-band channel-associated signaling system, as was the 2600 Hz single-frequency (SF) system formerly used in the Bell System.
Loop start, ground start, reverse battery and revertive pulse systems are all DC, thus out of band, and all are channel-associated since the DC currents are on the talking wires.
Whereas common-channel signaling systems are out-of-band by definition, and in-band signaling systems are also necessarily channel-associated, the above metering pulse example demonstrates that there exist channel-associated signaling systems which are out-of-band.
== Protocols ==
A signaling protocol is a type of communications protocol for encapsulating the signaling between communication endpoints and switching systems to establish or terminate a connection and to identify the state of connection.
The following is a list of signaling protocols:
ALOHA
Digital Subscriber Signalling System No. 1 (EDSS1)
Dual-tone multi-frequency signaling
H.248
H.323
H.225.0
Jingle
Media Gateway Control Protocol (MGCP)
Megaco
Regional System R1
NBAP (Node B Application Part)
Signalling System R2
Session Initiation Protocol
Signaling System No. 5
Signaling System No. 6
Signaling System No. 7
Skinny Client Control Protocol (SCCP, Skinny)
Q.931
QSIG
== See also ==
Control character
In-band control
Metadata
Out-of-band control
== References ==
This article incorporates public domain material from Federal Standard 1037C. General Services Administration. Archived from the original on 2022-01-22. (in support of MIL-STD-188). | Wikipedia/Control_signal |
A control system is a device or set of devices to manage, command, direct or regulate the behavior of other devices or systems. A control mechanism is a process used by a control system.
Control system may also refer to:
== General control systems ==
Distributed control system, where control elements are not centralized
Fuzzy control system, a control system that analyses and manipulates continuous variables (as opposed to discrete variables)
Hierarchical control system
Industrial control system
Real-time control system (disambiguation), several meanings
== Specific control systems ==
Kite control systems
Lighting control system
=== Computer control systems ===
Fire-control system, which assists a weapons system in firing speed and accuracy
Networked control system, a hierarchical control system implemented by a computer network
Revision Control System, which automates various processes to regulate and maintain data revisions
Source Code Control System
=== Vehicle control systems ===
Aircraft flight control systems, which assists pilots in flying an aircraft
Airborne early warning and control systems
Cruise control, a system that maintains the speed of a vehicle
Autonomous cruise control system, a cruise control system that uses radar or laser input to detect its surroundings
Environmental control system (aircraft), which controls environmental factors within an aircraft
Reaction control system of a spacecraft, assisting in attitude control and steering
Traction control system, which maintains the traction between a vehicle's wheels and the travelling surface
=== Biological control systems ===
Cell nucleus, the central control system in every biological cell
Regulome, the entire set of control systems within a cell
Homeostasis, the regulation and maintenance of the physical properties of an internal environment, particularly within an organism
Endocrine system, the organ system most responsible for the control of biological processes
Brain, the control system of the central nervous system and of all cognition
Gene regulation, which controls what genes are and are not transcribed or expressed
== See also ==
Control System, 2012 album by Ab-Soul
Control theory
Controller (control theory)
Motion control
Regulation (disambiguation)
Systems control | Wikipedia/Control_system_(disambiguation) |
Guidance, navigation and control (abbreviated GNC, GN&C, or G&C) is a branch of engineering dealing with the design of systems to control the movement of vehicles, especially automobiles, ships, aircraft, and spacecraft. In many cases these functions can be performed by trained humans. However, because of the speed of, for example, a rocket's dynamics, human reaction time is too slow to control this movement. Therefore, systems—now almost exclusively digital electronic—are used for such control. Even in cases where humans can perform these functions, it is often the case that GNC systems provide benefits such as alleviating operator work load, smoothing turbulence, fuel savings, etc. In addition, sophisticated applications of GNC enable automatic or remote control.
Guidance refers to the determination of the desired path of travel (the "trajectory") from the vehicle's current location to a designated target, as well as desired changes in velocity, rotation and acceleration for following that path.
Navigation refers to the determination, at a given time, of the vehicle's location and velocity (the "state vector") as well as its attitude.
Control refers to the manipulation of the forces, by way of steering controls, thrusters, etc., needed to execute guidance commands while maintaining vehicle stability.
== Parts ==
Guidance, navigation, and control systems consist of three essential parts: navigation, which tracks current location; guidance, which leverages navigation data and target information to direct flight control "where to go"; and control, which accepts guidance commands to effect changes in aerodynamic and/or engine controls.
Navigation
is the art of determining where you are, a science that saw tremendous focus in 1714 with the Longitude prize. Navigation aids either measure position from a fixed point of reference (e.g., a landmark, the North Star, a LORAN beacon), measure position relative to a target (e.g., radar, infrared), or track movement from a known position or starting point (e.g., IMU). Today's complex systems use multiple approaches to determine current position. For example, today's most advanced navigation systems are embodied within the anti-ballistic missile: the RIM-161 Standard Missile 3 leverages GPS, IMU and ground-segment data in the boost phase and relative position data for intercept targeting. Complex systems typically have multiple redundancy to address drift, improve accuracy (e.g., relative to a target) and address isolated system failure. Navigation systems therefore take multiple inputs from many different sensors, both internal to the system and/or external (e.g., ground-based updates). The Kalman filter provides the most common approach to combining navigation data from multiple sensors to resolve current position; a minimal scalar sketch follows after these definitions.
Guidance
is the "driver" of a vehicle. It takes input from the navigation system (where am I) and uses targeting information (where do I want to go) to send signals to the flight control system that will allow the vehicle to reach its destination (within the operating constraints of the vehicle). The "targets" for guidance systems are one or more state vectors (position and velocity) and can be inertial or relative. During powered flight, guidance is continually calculating steering directions for flight control. For example, the Space Shuttle targets an altitude, velocity vector, and gamma to drive main engine cut off. Similarly, an Intercontinental ballistic missile also targets a vector. The target vectors are developed to fulfill the mission and can be preplanned or dynamically created.
Control
Flight control is accomplished either aerodynamically or through powered controls such as engines. Guidance sends signals to flight control. A Digital Autopilot (DAP) is the interface between guidance and control. Guidance and the DAP are responsible for calculating the precise instruction for each flight control. The DAP provides feedback to guidance on the state of flight controls.
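As noted under Navigation above, the Kalman filter is the most common way to combine data from multiple sensors. In its simplest scalar form, one filter step blends the running estimate with each new fix in proportion to their uncertainties; the noise variances q and r below are assumed tuning values, not taken from any real navigation system.

```python
def kalman_step(x, p, z, q=1e-3, r=1.0):
    """Fuse measurement z into estimate x (variance p).
    q: process noise variance, r: measurement noise variance."""
    p = p + q              # predict: uncertainty grows between fixes
    k = p / (p + r)        # Kalman gain: weight given to the new fix
    x = x + k * (z - x)    # update the estimate toward the measurement
    p = (1.0 - k) * p      # updated (reduced) uncertainty
    return x, p
```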
== Examples ==
GNC systems are found in essentially all autonomous or semi-autonomous systems. These include:
Autopilots
Driverless cars, like Mars rovers or those participating in the DARPA Grand Challenge
Guided missiles
Precision-guided airdrop systems
Reaction control systems for spacecraft
Spacecraft launch vehicles
Unmanned aerial vehicles
Auto-steering tractors
Autonomous underwater vehicle
Related examples are:
Celestial navigation is a position fixing technique that was devised to help sailors cross the featureless oceans without having to rely on dead reckoning to enable them to strike land. Celestial navigation uses angular measurements (sights) between the horizon and a common celestial object. The Sun is most often measured. Skilled navigators can use the Moon, planets or one of 57 navigational stars whose coordinates are tabulated in nautical almanacs. Historical tools include a sextant, watch and ephemeris data. Today's space shuttle, and most interplanetary spacecraft, use optical systems to calibrate inertial navigation systems: Crewman Optical Alignment Sight (COAS), Star Tracker.
Inertial Measurement Units (IMUs) are the primary inertial system for maintaining current position (navigation) and orientation in missiles and aircraft. They are complex machines with one or more rotating gyroscopes that can rotate freely in three degrees of motion within a complex gimbal system. IMUs are "spun up" and calibrated prior to launch. A minimum of three separate IMUs are in place within most complex systems. In addition to relative position, the IMUs contain accelerometers which can measure acceleration in all axes. The position data, combined with the acceleration data, provide the necessary inputs to "track" the motion of a vehicle. IMUs have a tendency to "drift" due to friction and limited sensor accuracy. Error correction to address this drift can be provided via ground-link telemetry, GPS, radar, optical celestial navigation and other navigation aids. When targeting another (moving) vehicle, relative vectors become paramount. In this situation, navigation aids which provide updates of position relative to the target are more important. In addition to the current position, inertial navigation systems also typically estimate a predicted position for future computing cycles. See also Inertial navigation system.
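The core of inertial tracking is double integration of sensed acceleration, which is also why small sensor biases accumulate into drift. A one-axis sketch, illustrative only and ignoring attitude and gravity compensation:

```python
def dead_reckon_step(position, velocity, accel_measured, dt):
    """Integrate one accelerometer sample into velocity and position.
    Any constant bias in accel_measured grows quadratically in position,
    which is the drift that external fixes must correct."""
    velocity += accel_measured * dt
    position += velocity * dt
    return position, velocity
```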
Astro-inertial guidance is a sensor fusion/information fusion of the Inertial guidance and Celestial navigation.
Long-range Navigation (LORAN) : This was the predecessor of GPS and was (and to an extent still is) used primarily in commercial sea transportation. The system fixes the ship's position from the timing differences of signals received from known transmitters (hyperbolic navigation).
Global Positioning System (GPS) : GPS was designed by the US military with the primary purpose of addressing "drift" within the inertial navigation of submarine-launched ballistic missiles (SLBMs) prior to launch. GPS transmits two signal types: military and commercial. The accuracy of the military signal is classified but can be assumed to be well under 0.5 meters. The GPS space segment is composed of 24 to 32 satellites in medium Earth orbit at an altitude of approximately 20,200 km (12,600 mi). The satellites are in six specific orbits and transmit highly accurate time and satellite location information which can be used to derive distances and calculate position.
Radar/Infrared/Laser : This form of navigation provides information to guidance relative to a known target; it has both civilian (e.g. rendezvous) and military applications.
active (employs own radar to illuminate the target),
passive (detects target's radar emissions),
semiactive radar homing,
Infrared homing : This form of guidance is used exclusively for military munitions, specifically air-to-air and surface-to-air missiles. The missile's seeker head homes in on the infrared (heat) signature from the target's engines (hence the term "heat-seeking missile"),
Ultraviolet homing, used in the FIM-92 Stinger, which is more resistant to countermeasures than IR homing systems
Laser guidance : A laser designator device calculates relative position to a highlighted target. Most are familiar with the military uses of the technology in laser-guided bombs. The space shuttle crew leverages a hand-held device to feed information into rendezvous planning. The primary limitation of this device is that it requires a line of sight between the target and the designator.
Terrain contour matching (TERCOM). Uses a ground scanning radar to "match" topography against digital map data to fix current position. Used by cruise missiles such as the Tomahawk (missile family).
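A one-dimensional toy version of this matching conveys the idea: slide the measured elevation profile along the stored map strip and keep the offset with the smallest squared mismatch. Real systems work in two dimensions with far more careful statistics.

```python
def best_map_offset(measured, map_strip):
    """Return the offset into map_strip that best matches the measured
    terrain-elevation profile (least sum of squared differences)."""
    n = len(measured)
    best_off, best_err = 0, float("inf")
    for off in range(len(map_strip) - n + 1):
        err = sum((measured[i] - map_strip[off + i]) ** 2 for i in range(n))
        if err < best_err:
            best_off, best_err = off, err
    return best_off
```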
== See also ==
== References ==
== External links ==
AIAA GNC Conference (annual)
Academic Earth: Aircraft Systems Engineering: Lecture 16 GNC. Phil Hattis – MIT
Georgia Tech: GNC: Theory and Applications
NASA Shuttle Technology: GNC Archived 24 September 2016 at the Wayback Machine
Boeing: Defense, Space & Security: International Space Station: GNC
Princeton Satellite Systems: GNC of High-Altitude Airships. Joseph Mueller Archived 11 June 2014 at the Wayback Machine
CEAS: EuroGNC Conference | Wikipedia/Guidance,_navigation,_and_control |
Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
== Models for device design ==
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator. "Geometry" does not mean only readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (a floating-gate memory device charged by an avalanche process, for example, poses some unusual modeling challenges). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and is used to inform the process designer about any necessary process improvements. Once the process gets close to manufacture, the predicted device characteristics are compared with measurement on test devices to check that the process and device models are working adequately.
Although long ago the device behavior modeled in this way was very simple – mainly drift plus diffusion in simple geometries – today many more processes must be modeled at a microscopic level; for example, leakage currents in junctions and oxides, complex transport of carriers including velocity saturation and ballistic transport, quantum mechanical effects, use of multiple materials (for example, Si-SiGe devices, and stacks of different dielectrics) and even the statistical effects due to the probabilistic nature of ion placement and carrier transport inside the device. Several times a year the technology changes and simulations have to be repeated. The models may require change to reflect new physical effects, or to provide greater accuracy. The maintenance and improvement of these models is a business in itself.
These models are very computer intensive, involving detailed spatial and temporal solutions of coupled partial differential equations on three-dimensional grids inside the device.
Such models are slow to run and provide detail not needed for circuit design. Therefore, faster transistor models oriented toward circuit parameters are used for circuit design.
== Models for circuit design ==
Transistor models are used for almost all modern electronic design work. Analog circuit simulators such as SPICE use models to predict the behavior of a design. Most design work is related to integrated circuit designs which have a very large tooling cost, primarily for the photomasks used to create the devices, and there is a large economic incentive to get the design working without any iterations. Complete and accurate models allow a large percentage of designs to work the first time.
Modern circuits are usually very complex. The performance of such circuits is difficult to predict without accurate computer models, including but not limited to models of the devices used. The device models include effects of transistor layout: width, length, interdigitation, proximity to other devices; transient and DC current–voltage characteristics; parasitic device capacitance, resistance, and inductance; time delays; and temperature effects; to name a few items.
=== Large-signal nonlinear models ===
Nonlinear, or large signal transistor models fall into three main types:
==== Physical models ====
These are models based upon device physics, approximating the physical phenomena within a transistor. Parameters within these models are based upon physical properties such as oxide thicknesses, substrate doping concentrations, carrier mobility, etc. In the past these models were used extensively, but the complexity of modern devices makes them inadequate for quantitative design. Nonetheless, they find a place in hand analysis (that is, at the conceptual stage of circuit design), for example, for simplified estimates of signal-swing limitations.
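A classic example of such a physics-based relation is the simplified Ebers–Moll expression for a bipolar transistor in forward-active operation, in which the collector current depends exponentially on the base–emitter voltage through physical constants rather than fitted ones. The parameter values below are merely representative.

```python
import math

def collector_current(v_be, i_s=1e-15, v_t=0.02585):
    """Ic ~= Is * exp(Vbe / Vt): i_s is the saturation current (A) and
    v_t the thermal voltage (V) near room temperature."""
    return i_s * math.exp(v_be / v_t)
```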
==== Empirical models ====
This type of model is entirely based upon curve fitting, using whatever functions and parameter values most adequately fit measured data to enable simulation of transistor operation. Unlike a physical model, the parameters in an empirical model need have no fundamental basis, and will depend on the fitting procedure used to find them. The fitting procedure is key to the success of these models if they are to be used to extrapolate to designs lying outside the range of data to which the models were originally fitted. Such extrapolation is a hope for these models, but it has not been fully realized so far.
=== Small-signal linear models ===
Small-signal or linear models are used to evaluate stability, gain, noise and bandwidth, both in the conceptual stages of circuit design (to decide between alternative design ideas before computer simulation is warranted) and using computers. A small-signal model is generated by taking derivatives of the current–voltage curves about a bias point or Q-point. As long as the signal is small relative to the nonlinearity of the device, the derivatives do not vary significantly, and can be treated as standard linear circuit elements.
An advantage of small signal models is they can be solved directly, while large signal nonlinear models are generally solved iteratively, with possible convergence or stability issues. By simplification to a linear model, the whole apparatus for solving linear equations becomes available, for example, simultaneous equations, determinants, and matrix theory (often studied as part of linear algebra), especially Cramer's rule. Another advantage is that a linear model is easier to think about, and helps to organize thought.
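Numerically, linearization is just differentiation of the large-signal curve at the Q-point. For instance, the transconductance can be estimated by a central finite difference; the step size below is an illustrative choice.

```python
def small_signal_gm(i_of_v, v_q, dv=1e-6):
    """Estimate gm = dI/dV at bias point v_q from a large-signal I-V
    function i_of_v, using a central difference."""
    return (i_of_v(v_q + dv) - i_of_v(v_q - dv)) / (2.0 * dv)

# e.g. gm of the exponential model sketched earlier, at Vbe = 0.65 V:
# gm = small_signal_gm(collector_current, 0.65)
```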
==== Small-signal parameters ====
A transistor's parameters represent its electrical properties. Engineers employ transistor parameters in production-line testing and in circuit design. A group of the transistor's parameters sufficient to predict circuit gain, input impedance, and output impedance constitutes its small-signal model.
A number of different two-port network parameter sets may be used to model a transistor. These include:
Transmission parameters (T-parameters),
Hybrid-parameters (h-parameters),
Impedance parameters (z-parameters),
Admittance parameters (y-parameters), and
Scattering parameters (S-parameters).
Scattering parameters, or S parameters, can be measured for a transistor at a given bias point with a vector network analyzer. S parameters can be converted to another parameter set using standard matrix algebra operations.
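As one example of such a conversion, with the same real reference impedance z0 on every port, the impedance matrix follows from Z = z0 * (I + S) * inv(I - S). A sketch:

```python
import numpy as np

def s_to_z(s, z0=50.0):
    """Convert a scattering matrix to an impedance matrix, assuming a
    uniform real reference impedance z0 on every port."""
    i = np.eye(s.shape[0])
    return z0 * (i + s) @ np.linalg.inv(i - s)
```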
== Popular models ==
Gummel–Poon model
Ebers–Moll model
Hybrid-pi model
H-parameter model
== See also ==
Bipolar junction transistor § Theory and modeling
Safe operating area
Electronic design automation
Electronic circuit simulation
Semiconductor device modeling
== References ==
== External links ==
Agilent EEsof EDA, IC-CAP Parameter Extraction and Device Modeling Software http://eesof.tm.agilent.com/products/iccap_main.html | Wikipedia/Transistor_models |
The Compact Model Coalition (formerly the Compact Model Council) is a working group in the electronic design automation (EDA) industry formed to choose, maintain and promote the use of standard semiconductor device models. Commercial and industrial analog simulators (such as SPICE) need to add device models as technology advances (see Moore's law) and earlier models become inaccurate. Before this group was formed, new transistor models were largely proprietary, which severely limited the choice of simulators that could be used.
It was formed in August 1996 for the purpose of developing and standardizing the use and implementation of SPICE models and the model interfaces. In May 2013, the Silicon Integration Initiative (Si2) and TechAmerica announced the transfer of the Compact Model Council to Si2 and its renaming to the Compact Model Coalition.
To develop and maintain the models, the CMC works with device modeling and simulation experts belonging to an international collection of universities and research institutions. In alphabetical order, the present development organizations are Auburn University (SiGe group), CEA-LETI, Hiroshima University (HiSIM Research Center), Macquarie University, TU Dresden (HiCUM development), UC Berkeley (BSIM Group), and University of Waterloo (WEIS Group). Though the development is done at these different institutions, all of them follow the same Verilog-A coding standards and QA standards, and they all go through a common beta testing and release process. CMC maintains a workgroup for each standardized model, composed of interested industry members and the model developers.
Most of the CMC development is industry-funded, supported by dues from CMC member companies. The member companies primarily are silicon design companies, silicon foundry companies, Integrated Device Manufacturers (IDM), and silicon design EDA companies.
Production releases of industry-funded CMC models are available to the public free of charge. The CMC member companies have access to earlier pre-production versions of those models, and they have the opportunity to help direct the evolution of those models.
The industry-funded CMC models, listed alphabetically, are:
To address the increasing need for reliability (ageing) simulation, the CMC nominated the OMI interface as the new EDA-vendor-independent solution for ageing simulations. Technically, the interface is very close to the TMI2 interface developed by TSMC. The standardization will allow silicon foundries to develop a common set of ageing models that will work with all significant analog simulators.
The CMC has also released a document of recommended best practices for Verilog-A code ("CMC Policy on Standardization of Verilog-A Model Code") and a Verilog-A linter program called VAMPyRE, available on GitHub, both of which can be freely accessed to help increase the quality of model code for all model developers worldwide.
The CMC continues to evaluate new models for standardization. New models are submitted to the Coalition, their technical merits are discussed, and potential standard models are then voted on.
In 2025, the CMC started a new initiative, setting up and running the first International Compact Modeling Conference (ICMC), to be held on June 26–27, 2025, in San Francisco, co-located with the DAC 2025 conference.
== See also ==
Electronic circuit simulation
== References ==
== External links ==
Member list at CMC website
Site map of CMC website including links to working groups | Wikipedia/Compact_Model_Coalition |
Mechanical engineering is the study of physical machines and mechanisms that may involve force and movement. It is an engineering branch that combines engineering physics and mathematics principles with materials science, to design, analyze, manufacture, and maintain mechanical systems. It is one of the oldest and broadest of the engineering branches.
Mechanical engineering requires an understanding of core areas including mechanics, dynamics, thermodynamics, materials science, design, structural analysis, and electricity. In addition to these core principles, mechanical engineers use tools such as computer-aided design (CAD), computer-aided manufacturing (CAM), computer-aided engineering (CAE), and product lifecycle management to design and analyze manufacturing plants, industrial equipment and machinery, heating and cooling systems, transport systems, motor vehicles, aircraft, watercraft, robotics, medical devices, weapons, and others.
Mechanical engineering emerged as a field during the Industrial Revolution in Europe in the 18th century; however, its development can be traced back several thousand years around the world. In the 19th century, developments in physics led to the development of mechanical engineering science. The field has continually evolved to incorporate advancements; today mechanical engineers are pursuing developments in such areas as composites, mechatronics, and nanotechnology. It also overlaps with aerospace engineering, metallurgical engineering, civil engineering, structural engineering, electrical engineering, manufacturing engineering, chemical engineering, industrial engineering, and other engineering disciplines to varying amounts. Mechanical engineers may also work in the field of biomedical engineering, specifically with biomechanics, transport phenomena, biomechatronics, bionanotechnology, and modelling of biological systems.
== History ==
The application of mechanical engineering can be seen in the archives of various ancient and medieval societies. The six classic simple machines were known in the ancient Near East. The wedge and the inclined plane (ramp) have been known since prehistoric times. Several, mainly older, sources credit Mesopotamian civilization with the invention of the wheel; however, some recent sources either suggest that it was invented independently in both Mesopotamia and Eastern Europe or credit prehistoric Eastern Europeans with the invention. The lever mechanism first appeared around 5,000 years ago in the Near East, where it was used in a simple balance scale and to move large objects in ancient Egyptian technology. The lever was also used in the shadoof water-lifting device, the first crane machine, which appeared in Mesopotamia circa 3000 BC. The earliest evidence of pulleys dates back to Mesopotamia in the early 2nd millennium BC.
The saqiyah was developed in the Kingdom of Kush during the 4th century BC. It relied on animal power, reducing the requirement for human energy. Reservoirs in the form of hafirs were developed in Kush to store water and boost irrigation. Bloomeries and blast furnaces were developed during the seventh century BC in Meroe. Kushite sundials applied mathematics in the form of advanced trigonometry.
The earliest practical water-powered machines, the water wheel and watermill, first appeared in the Persian Empire, in what are now Iraq and Iran, by the early 4th century BC. In ancient Greece, the works of Archimedes (287–212 BC) influenced mechanics in the Western tradition. The geared Antikythera mechanism was an analog computer invented around the 2nd century BC.
In Roman Egypt, Heron of Alexandria (c. 10–70 AD) created the first steam-powered device (Aeolipile). In China, Zhang Heng (78–139 AD) improved a water clock and invented a seismometer, and Ma Jun (200–265 AD) invented a chariot with differential gears. The medieval Chinese horologist and engineer Su Song (1020–1101 AD) incorporated an escapement mechanism into his astronomical clock tower two centuries before escapement devices were found in medieval European clocks. He also invented the world's first known endless power-transmitting chain drive.
The cotton gin was invented in India by the 6th century AD, and the spinning wheel was invented in the Islamic world by the early 11th century. Dual-roller gins appeared in India and China between the 12th and 14th centuries. The worm gear roller gin appeared in the Indian subcontinent during the early Delhi Sultanate era of the 13th to 14th centuries.
During the Islamic Golden Age (7th to 15th century), Muslim inventors made remarkable contributions in the field of mechanical technology. Al-Jazari, who was one of them, wrote his famous Book of Knowledge of Ingenious Mechanical Devices in 1206 and presented many mechanical designs.
In the 17th century, important breakthroughs in the foundations of mechanical engineering occurred in England and the Continent. The Dutch mathematician and physicist Christiaan Huygens invented the pendulum clock in 1657, which was the first reliable timekeeper for almost 300 years, and published a work dedicated to clock designs and the theory behind them. In England, Isaac Newton formulated his laws of motion and developed calculus, which would become the mathematical basis of physics. Newton was reluctant to publish his works for years, but he was finally persuaded to do so by his colleagues, such as Edmond Halley. Gottfried Wilhelm Leibniz, who earlier designed a mechanical calculator, is also credited with developing the calculus during the same time period.
During the early 19th-century Industrial Revolution, machine tools were developed in England, Germany, and Scotland, bringing with them manufacturing machines and the engines to power them. This allowed mechanical engineering to develop as a separate field within engineering. The first British professional society of mechanical engineers, the Institution of Mechanical Engineers, was formed in 1847, thirty years after the civil engineers formed the first such professional society, the Institution of Civil Engineers. On the European continent, Johann von Zimmermann (1820–1901) founded the first factory for grinding machines in Chemnitz, Germany in 1848.
In the United States, the American Society of Mechanical Engineers (ASME) was formed in 1880, becoming the third such professional engineering society, after the American Society of Civil Engineers (1852) and the American Institute of Mining Engineers (1871). The first schools in the United States to offer an engineering education were the United States Military Academy in 1817, an institution now known as Norwich University in 1819, and Rensselaer Polytechnic Institute in 1825. Education in mechanical engineering has historically been based on a strong foundation in mathematics and science.
== Education ==
Degrees in mechanical engineering are offered at various universities worldwide. Mechanical engineering programs typically take four to five years of study depending on the place and university and result in a Bachelor of Engineering (B.Eng. or B.E.), Bachelor of Science (B.Sc. or B.S.), Bachelor of Science Engineering (B.Sc.Eng.), Bachelor of Technology (B.Tech.), Bachelor of Mechanical Engineering (B.M.E.), or Bachelor of Applied Science (B.A.Sc.) degree, in or with emphasis in mechanical engineering. In Spain, Portugal and most of South America, where neither B.S. nor B.Tech. programs have been adopted, the formal name for the degree is "Mechanical Engineer", and the course work is based on five or six years of training. In Italy the course work is based on five years of education and training, but in order to qualify as an engineer one has to pass a state exam at the end of the course. In Greece, the coursework is based on a five-year curriculum.
In the United States, most undergraduate mechanical engineering programs are accredited by the Accreditation Board for Engineering and Technology (ABET) to ensure similar course requirements and standards among universities. The ABET web site lists 302 accredited mechanical engineering programs as of 11 March 2014. Mechanical engineering programs in Canada are accredited by the Canadian Engineering Accreditation Board (CEAB), and most other countries offering engineering degrees have similar accreditation societies.
In Australia, mechanical engineering degrees are awarded as Bachelor of Engineering (Mechanical) or similar nomenclature, although there are an increasing number of specialisations. The degree takes four years of full-time study to achieve. To ensure quality in engineering degrees, Engineers Australia accredits engineering degrees awarded by Australian universities in accordance with the global Washington Accord. Before the degree can be awarded, the student must complete at least 3 months of on-the-job work experience in an engineering firm. Similar systems are also present in South Africa and are overseen by the Engineering Council of South Africa (ECSA).
In India, to become an engineer, one needs to have an engineering degree like a B.Tech. or B.E., have a diploma in engineering, or complete a course in an engineering trade like fitter from an Industrial Training Institute (ITI) to receive an "ITI Trade Certificate" and also pass the All India Trade Test (AITT) in an engineering trade conducted by the National Council of Vocational Training (NCVT), by which one is awarded a "National Trade Certificate". A similar system is used in Nepal.
Some mechanical engineers go on to pursue a postgraduate degree such as a Master of Engineering, Master of Technology, Master of Science, Master of Engineering Management (M.Eng.Mgt. or M.E.M.), a Doctor of Philosophy in engineering (Eng.D. or Ph.D.) or an engineer's degree. The master's and engineer's degrees may or may not include research. The Doctor of Philosophy includes a significant research component and is often viewed as the entry point to academia. The Engineer's degree exists at a few institutions at an intermediate level between the master's degree and the doctorate.
=== Coursework ===
Standards set by each country's accreditation society are intended to provide uniformity in fundamental subject material, promote competence among graduating engineers, and to maintain confidence in the engineering profession as a whole. Engineering programs in the U.S., for example, are required by ABET to show that their students can "work professionally in both thermal and mechanical systems areas." The specific courses required to graduate, however, may differ from program to program. Universities and institutes of technology will often combine multiple subjects into a single class or split a subject into multiple classes, depending on the faculty available and the university's major area(s) of research.
The fundamental subjects required for mechanical engineering usually include:
Mathematics (in particular, calculus, differential equations, and linear algebra)
Basic physical sciences (including physics and chemistry)
Statics and dynamics
Strength of materials and solid mechanics
Materials engineering, composites
Thermodynamics, heat transfer, energy conversion, and HVAC
Fuels, combustion, internal combustion engine
Fluid mechanics (including fluid statics and fluid dynamics)
Mechanism and Machine design (including kinematics and dynamics)
Instrumentation and measurement
Manufacturing engineering, technology, or processes
Vibration, control theory and control engineering
Hydraulics and Pneumatics
Mechatronics and robotics
Engineering design and product design
Drafting, computer-aided design (CAD) and computer-aided manufacturing (CAM)
Mechanical engineers are also expected to understand and be able to apply basic concepts from chemistry, physics, tribology, chemical engineering, civil engineering, and electrical engineering. All mechanical engineering programs include multiple semesters of mathematical classes including calculus, and advanced mathematical concepts including differential equations, partial differential equations, linear algebra, differential geometry, and statistics, among others.
In addition to the core mechanical engineering curriculum, many mechanical engineering programs offer more specialized programs and classes, such as control systems, robotics, transport and logistics, cryogenics, fuel technology, automotive engineering, biomechanics, vibration, optics and others, if a separate department does not exist for these subjects.
Most mechanical engineering programs also require varying amounts of research or community projects to gain practical problem-solving experience. In the United States it is common for mechanical engineering students to complete one or more internships while studying, though this is not typically mandated by the university. Cooperative education is another option. Research on future work skills puts demand on study components that feed students' creativity and innovation.
== Job duties ==
Mechanical engineers research, design, develop, build, and test mechanical and thermal devices, including tools, engines, and machines.
Mechanical engineers typically do the following:
Analyze problems to see how mechanical and thermal devices might help solve the problem.
Design or redesign mechanical and thermal devices using analysis and computer-aided design.
Develop and test prototypes of devices they design.
Analyze the test results and change the design as needed.
Oversee the manufacturing process for the device.
Manage a team of professionals in specialized fields like mechanical drafting and design, prototyping, 3D printing, and/or CNC machining.
Mechanical engineers design and oversee the manufacturing of many products ranging from medical devices to new batteries. They also design power-producing machines such as electric generators, internal combustion engines, and steam and gas turbines as well as power-using machines, such as refrigeration and air-conditioning systems.
Like other engineers, mechanical engineers use computers to help create and analyze designs, run simulations and test how a machine is likely to work.
=== License and regulation ===
Engineers may seek license by a state, provincial, or national government. The purpose of this process is to ensure that engineers possess the necessary technical knowledge, real-world experience, and knowledge of the local legal system to practice engineering at a professional level. Once certified, the engineer is given the title of Professional Engineer (in the United States, Canada, Japan, South Korea, Bangladesh and South Africa), Chartered Engineer (in the United Kingdom, Ireland, India and Zimbabwe), Chartered Professional Engineer (in Australia and New Zealand) or European Engineer (in much of the European Union).
In the U.S., to become a licensed Professional Engineer (PE), an engineer must pass the comprehensive FE (Fundamentals of Engineering) exam, work a minimum of 4 years as an Engineering Intern (EI) or Engineer-in-Training (EIT), and pass the "Principles and Practice" or PE (Practicing Engineer or Professional Engineer) exams. The requirements and steps of this process are set forth by the National Council of Examiners for Engineering and Surveying (NCEES), composed of engineering and land surveying licensing boards representing all U.S. states and territories.
In Australia (Queensland and Victoria), an engineer must be registered as a Professional Engineer within the state in which they practice, for example Registered Professional Engineer of Queensland or Victoria (RPEQ or RPEV, respectively).
In the UK, current graduates require a BEng plus an appropriate master's degree or an integrated MEng degree, a minimum of 4 years post graduate on the job competency development and a peer-reviewed project report to become a Chartered Mechanical Engineer (CEng, MIMechE) through the Institution of Mechanical Engineers. CEng MIMechE can also be obtained via an examination route administered by the City and Guilds of London Institute.
In most developed countries, certain engineering tasks, such as the design of bridges, electric power plants, and chemical plants, must be approved by a professional engineer or a chartered engineer. "Only a licensed engineer, for instance, may prepare, sign, seal and submit engineering plans and drawings to a public authority for approval, or to seal engineering work for public and private clients." This requirement can be written into state and provincial legislation, such as in the Canadian provinces, for example the Ontario or Quebec's Engineer Act.
In other countries, such as the UK, no such legislation exists; however, practically all certifying bodies maintain a code of ethics independent of legislation, that they expect all members to abide by or risk expulsion.
=== Salaries and workforce statistics ===
The total number of engineers employed in the U.S. in 2015 was roughly 1.6 million. Of these, 278,340 were mechanical engineers (17.28%), the largest discipline by size. In 2012, the median annual income of mechanical engineers in the U.S. workforce was $80,580. The median income was highest when working for the government ($92,030), and lowest in education ($57,090). In 2014, the total number of mechanical engineering jobs was projected to grow 5% over the next decade. As of 2009, the average starting salary was $58,800 with a bachelor's degree.
== Subdisciplines ==
The field of mechanical engineering can be thought of as a collection of many mechanical engineering science disciplines. Several of these subdisciplines which are typically taught at the undergraduate level are listed below, with a brief explanation and the most common application of each. Some of these subdisciplines are unique to mechanical engineering, while others are a combination of mechanical engineering and one or more other disciplines. Most work that a mechanical engineer does uses skills and techniques from several of these subdisciplines, as well as specialized subdisciplines. Specialized subdisciplines, as used in this article, are more likely to be the subject of graduate studies or on-the-job training than undergraduate research. Several specialized subdisciplines are discussed in this section.
=== Mechanics ===
Mechanics is, in the most general sense, the study of forces and their effect upon matter. Typically, engineering mechanics is used to analyze and predict the acceleration and deformation (both elastic and plastic) of objects under known forces (also called loads) or stresses. Subdisciplines of mechanics include:
Statics, the study of how forces affect non-moving bodies under known loads
Dynamics, the study of how forces affect moving bodies. Dynamics includes kinematics (about movement, velocity, and acceleration) and kinetics (about forces and resulting accelerations).
Mechanics of materials, the study of how different materials deform under various types of stress
Fluid mechanics, the study of how fluids react to forces
Kinematics, the study of the motion of bodies (objects) and systems (groups of objects), while ignoring the forces that cause the motion. Kinematics is often used in the design and analysis of mechanisms.
Continuum mechanics, a method of applying mechanics that assumes that objects are continuous (rather than discrete)
Mechanical engineers typically use mechanics in the design or analysis phases of engineering. If the engineering project were the design of a vehicle, statics might be employed to design the frame of the vehicle, in order to evaluate where the stresses will be most intense. Dynamics might be used when designing the car's engine, to evaluate the forces in the pistons and cams as the engine cycles. Mechanics of materials might be used to choose appropriate materials for the frame and engine. Fluid mechanics might be used to design a ventilation system for the vehicle (see HVAC), or to design the intake system for the engine.
=== Mechatronics and robotics ===
Mechatronics is a combination of mechanics and electronics. It is an interdisciplinary branch of mechanical engineering, electrical engineering and software engineering that is concerned with integrating electrical and mechanical engineering to create hybrid automation systems. In this way, machines can be automated through the use of electric motors, servo-mechanisms, and other electrical systems in conjunction with special software. A common example of a mechatronics system is a CD-ROM drive. Mechanical systems open and close the drive, spin the CD and move the laser, while an optical system reads the data on the CD and converts it to bits. Integrated software controls the process and communicates the contents of the CD to the computer.
Robotics is the application of mechatronics to create robots, which are often used in industry to perform tasks that are dangerous, unpleasant, or repetitive. These robots may be of any shape and size, but all are preprogrammed and interact physically with the world. To create a robot, an engineer typically employs kinematics (to determine the robot's range of motion) and mechanics (to determine the stresses within the robot).
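As a small example of the kinematics involved, the reachable position of a planar two-link arm follows directly from its joint angles; the link lengths below are arbitrary illustrative values.

```python
import math

def end_effector(theta1, theta2, l1=0.5, l2=0.3):
    """Forward kinematics of a planar two-link arm: joint angles in
    radians to the (x, y) position of the end effector, in meters."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y
```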
Robots are used extensively in industrial automation engineering. They allow businesses to save money on labor, perform tasks that are either too dangerous or too precise for humans to perform them economically, and to ensure better quality. Many companies employ assembly lines of robots, especially in Automotive Industries and some factories are so robotized that they can run by themselves. Outside the factory, robots have been employed in bomb disposal, space exploration, and many other fields. Robots are also sold for various residential applications, from recreation to domestic applications.
=== Structural analysis ===
Structural analysis is the branch of mechanical engineering (and also civil engineering) devoted to examining why and how objects fail, and to fixing the objects and improving their performance. Structural failures occur in two general modes: static failure and fatigue failure. Static structural failure occurs when, upon being loaded (having a force applied), the object being analyzed either breaks or is deformed plastically, depending on the criterion for failure. Fatigue failure occurs when an object fails after a number of repeated loading and unloading cycles. Fatigue failure occurs because of imperfections in the object: a microscopic crack on the surface of the object, for instance, will grow slightly with each cycle (propagation) until the crack is large enough to cause ultimate failure.
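One widely used empirical description of this cycle-by-cycle crack growth, not named in the text above but standard in fatigue analysis, is the Paris law da/dN = C * (dK)^m with dK = Y * dSigma * sqrt(pi * a). The sketch below integrates it numerically; all material constants shown are placeholders, not data for any real material.

```python
import math

def cycles_to_failure(a0, a_crit, delta_sigma, C=1e-12, m=3.0, Y=1.0,
                      da=1e-6):
    """Integrate the Paris law from initial crack length a0 (m) to the
    critical length a_crit (m) under stress range delta_sigma."""
    a, cycles = a0, 0.0
    while a < a_crit:
        delta_k = Y * delta_sigma * math.sqrt(math.pi * a)
        cycles += da / (C * delta_k ** m)   # dN = da / (C * dK^m)
        a += da
    return cycles
```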
Failure is not simply defined as when a part breaks, however; it is defined as when a part does not operate as intended. Some systems, such as the perforated top sections of some plastic bags, are designed to break. If these systems do not break, failure analysis might be employed to determine the cause.
Structural analysis is often used by mechanical engineers after a failure has occurred, or when designing to prevent failure. Engineers often use online documents and books such as those published by ASM to aid them in determining the type of failure and possible causes.
Once theory is applied to a mechanical design, physical testing is often performed to verify calculated results. Structural analysis may be used in an office when designing parts, in the field to analyze failed parts, or in laboratories where parts might undergo controlled failure tests.
=== Thermodynamics and thermo-science ===
Thermodynamics is an applied science used in several branches of engineering, including mechanical and chemical engineering. At its simplest, thermodynamics is the study of energy, its use and transformation through a system. Typically, engineering thermodynamics is concerned with changing energy from one form to another. As an example, automotive engines convert chemical energy (enthalpy) from the fuel into heat, and then into mechanical work that eventually turns the wheels.
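To make the energy-conversion chain concrete, a small sketch can compare an engine's actual thermal efficiency against the Carnot limit set by its operating temperatures; all figures below are illustrative placeholders, not measured data.

```python
# Thermo sketch: actual vs. ideal (Carnot) efficiency for a heat engine.
# All numbers are illustrative placeholders.

fuel_energy_in = 100.0   # kJ of chemical energy released
work_out = 30.0          # kJ of useful mechanical work delivered

t_hot, t_cold = 1800.0, 300.0   # peak combustion and ambient temperatures, K

actual_eff = work_out / fuel_energy_in
carnot_eff = 1.0 - t_cold / t_hot   # upper bound between these temperatures

print(f"actual efficiency: {actual_eff:.0%}")   # 30%
print(f"Carnot limit:      {carnot_eff:.0%}")   # ~83%
```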
Thermodynamics principles are used by mechanical engineers in the fields of heat transfer, thermofluids, and energy conversion. Mechanical engineers use thermo-science to design engines and power plants, heating, ventilation, and air-conditioning (HVAC) systems, heat exchangers, heat sinks, radiators, refrigeration, insulation, and others.
=== Design and drafting ===
Drafting or technical drawing is the means by which mechanical engineers design products and create instructions for manufacturing parts. A technical drawing can be a computer model or hand-drawn schematic showing all the dimensions necessary to manufacture a part, as well as assembly notes, a list of required materials, and other pertinent information. A U.S. mechanical engineer or skilled worker who creates technical drawings may be referred to as a drafter or draftsman. Drafting has historically been a two-dimensional process, but computer-aided design (CAD) programs now allow the designer to create in three dimensions.
Instructions for manufacturing a part must be fed to the necessary machinery, either manually, through programmed instructions, or through the use of a computer-aided manufacturing (CAM) or combined CAD/CAM program. Optionally, an engineer may also manually manufacture a part using the technical drawings. However, with the advent of computer numerically controlled (CNC) manufacturing, parts can now be fabricated without the need for constant technician input. Manual manufacturing steps generally consist of spray coating, surface finishing, and other processes that cannot economically or practically be done by a machine.
Drafting is used in nearly every subdiscipline of mechanical engineering, and by many other branches of engineering and architecture. Three-dimensional models created using CAD software are also commonly used in finite element analysis (FEA) and computational fluid dynamics (CFD).
== Modern tools ==
Many mechanical engineering companies, especially those in industrialized nations, have incorporated computer-aided engineering (CAE) programs into their existing design and analysis processes, including 2D and 3D solid modeling computer-aided design (CAD). This method has many benefits, including easier and more exhaustive visualization of products, the ability to create virtual assemblies of parts, and the ease of use in designing mating interfaces and tolerances.
Other CAE programs commonly used by mechanical engineers include product lifecycle management (PLM) tools and analysis tools used to perform complex simulations. Analysis tools may be used to predict product response to expected loads, including fatigue life and manufacturability. These tools include finite element analysis (FEA), computational fluid dynamics (CFD), and computer-aided manufacturing (CAM).
Using CAE programs, a mechanical design team can quickly and cheaply iterate the design process to develop a product that better meets cost, performance, and other constraints. No physical prototype need be created until the design nears completion, allowing hundreds or thousands of designs to be evaluated, instead of a relative few. In addition, CAE analysis programs can model complicated physical phenomena which cannot be solved by hand, such as viscoelasticity, complex contact between mating parts, or non-Newtonian flows.
As mechanical engineering begins to merge with other disciplines, as seen in mechatronics, multidisciplinary design optimization (MDO) is being used with other CAE programs to automate and improve the iterative design process. MDO tools wrap around existing CAE processes, allowing product evaluation to continue even after the analyst goes home for the day. They also use sophisticated optimization algorithms to more intelligently explore possible designs, often finding better, innovative solutions to difficult multidisciplinary design problems.
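A toy version of such an optimization loop, assuming SciPy is available, might minimize a member's mass subject to a stress constraint via a simple penalty method; the geometry, loads, and limits below are invented for illustration and stand in for the far richer multidisciplinary models real MDO tools wrap.

```python
from scipy.optimize import minimize_scalar

# Toy design-optimization loop in the spirit of MDO: find the lightest
# square cross-section for a tension member that still satisfies a
# stress limit. All numbers are hypothetical.
LOAD = 50_000.0       # axial load, N
LENGTH = 1.0          # member length, m
DENSITY = 7850.0      # steel density, kg/m^3
STRESS_LIMIT = 250e6  # allowable stress, Pa
PENALTY = 1e6         # penalty weight for violating the constraint

def mass_with_penalty(side):
    area = side * side
    mass = DENSITY * area * LENGTH
    stress = LOAD / area
    violation = max(0.0, stress - STRESS_LIMIT)
    return mass + PENALTY * violation   # penalized objective

res = minimize_scalar(mass_with_penalty, bounds=(1e-3, 0.1), method="bounded")
print(f"optimal side: {res.x*1000:.1f} mm")  # ~sqrt(50e3/250e6) = 14.1 mm
```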
== Areas of research ==
Mechanical engineers are constantly pushing the boundaries of what is physically possible in order to produce safer, cheaper, and more efficient machines and mechanical systems. Some technologies at the cutting edge of mechanical engineering are listed below (see also exploratory engineering).
=== Micro electro-mechanical systems (MEMS) ===
Micron-scale mechanical components such as springs, gears, and fluidic and heat transfer devices are fabricated from a variety of substrate materials such as silicon, glass, and polymers like SU8. Examples of MEMS components are the accelerometers used as car airbag sensors and in modern cell phones, gyroscopes for precise positioning, and microfluidic devices used in biomedical applications.
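Conceptually, a MEMS accelerometer can be sketched as a proof mass on a micro-spring: the sensed acceleration follows from the measured proof-mass deflection via Newton's second law. The parameter values below are purely illustrative, not taken from any device datasheet.

```python
# Idealized MEMS accelerometer sketch: a proof mass on a silicon spring.
# Acceleration is inferred from the proof-mass deflection: a = k*x / m.
# Values are illustrative, not from a real device datasheet.
mass = 1e-9          # proof mass, kg (order of a microgram)
k = 1.0              # effective spring stiffness, N/m

def sensed_acceleration(deflection_m):
    return k * deflection_m / mass   # Newton's second law rearranged

g = 9.81
for x_nm in (1, 10, 50):             # deflections in nanometres
    a = sensed_acceleration(x_nm * 1e-9)
    print(f"deflection {x_nm:3d} nm -> {a/g:5.2f} g")
```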
=== Friction stir welding (FSW) ===
Friction stir welding, a relatively new welding technique, was invented in 1991 by The Welding Institute (TWI). This innovative steady-state (non-fusion) process joins materials previously considered un-weldable, including several aluminum alloys. It may play an important role in the future construction of airplanes, potentially replacing rivets. Current uses of this technology include welding the seams of the aluminum main Space Shuttle external tank, the Orion Crew Vehicle, the Boeing Delta II and Delta IV expendable launch vehicles and the SpaceX Falcon 1 rocket; armor plating for amphibious assault ships; and welding the wings and fuselage panels of the Eclipse 500 aircraft from Eclipse Aviation, among a growing pool of uses.
=== Composites ===
Composites or composite materials are a combination of materials which provide different physical characteristics than either material separately. Composite material research within mechanical engineering typically focuses on designing (and, subsequently, finding applications for) stronger or more rigid materials while attempting to reduce weight, susceptibility to corrosion, and other undesirable factors. Carbon fiber reinforced composites, for instance, have been used in such diverse applications as spacecraft and fishing rods.
=== Mechatronics ===
Mechatronics is the synergistic combination of mechanical engineering, electronic engineering, and software engineering. The discipline of mechatronics began as a way to combine mechanical principles with electrical engineering. Mechatronic concepts are used in the majority of electro-mechanical systems. Typical electro-mechanical sensors used in mechatronics are strain gauges, thermocouples, and pressure transducers.
=== Nanotechnology ===
At the smallest scales, mechanical engineering becomes nanotechnology—one speculative goal of which is to create a molecular assembler to build molecules and materials via mechanosynthesis. For now that goal remains within exploratory engineering. Areas of current mechanical engineering research in nanotechnology include nanofilters, nanofilms, and nanostructures, among others.
=== Finite element analysis ===
Finite element analysis is a computational tool used to estimate stress, strain, and deflection of solid bodies. It discretizes the body into a mesh of user-defined element size and computes physical quantities at each node; the more nodes there are, the higher the precision. The field is not new, as the basis of finite element analysis (FEA), or the finite element method (FEM), dates back to 1941, but the evolution of computers has made FEA/FEM a viable option for the analysis of structural problems. Many commercial software applications such as NASTRAN, ANSYS, and ABAQUS are widely used in industry for research and the design of components. Some 3D modeling and CAD software packages have added FEA modules. In recent times, cloud simulation platforms like SimScale have become more common.
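A minimal instance of the method, assuming NumPy, is the classic one-dimensional bar: assemble element stiffness matrices over a mesh, fix one end, and solve for nodal displacements under a tip load. Increasing the element count (more nodes) refines the answer, as noted above; all numbers are illustrative.

```python
import numpy as np

# Minimal 1D finite element sketch: an axially loaded elastic bar fixed
# at the left end, pulled by a tip force. Illustrative numbers only.
E, A, L = 200e9, 1e-4, 1.0   # Young's modulus (Pa), area (m^2), length (m)
F = 10_000.0                 # tip load, N
n_el = 4                     # number of elements (more -> higher precision)

le = L / n_el
k_el = E * A / le * np.array([[1.0, -1.0], [-1.0, 1.0]])  # element stiffness

K = np.zeros((n_el + 1, n_el + 1))
for e in range(n_el):                 # assemble global stiffness matrix
    K[e:e + 2, e:e + 2] += k_el

f = np.zeros(n_el + 1)
f[-1] = F                             # point load at the free end

# Apply the fixed boundary condition at node 0 and solve K u = f.
u = np.zeros(n_el + 1)
u[1:] = np.linalg.solve(K[1:, 1:], f[1:])

print(u)   # exact tip deflection: F*L/(E*A) = 0.5 mm
```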
Other techniques such as the finite difference method (FDM) and the finite volume method (FVM) are employed to solve problems relating to heat and mass transfer, fluid flows, fluid–surface interaction, etc.
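As a hedged illustration of the finite difference approach to such heat-transfer problems, the snippet below marches the 1D heat equation ∂T/∂t = α ∂²T/∂x² forward in time with an explicit scheme; the geometry and material values are invented.

```python
import numpy as np

# Explicit finite-difference sketch for 1D transient heat conduction
# in a rod with fixed-temperature ends. Illustrative values only.
alpha = 1e-4          # thermal diffusivity, m^2/s
L, n = 0.1, 21        # rod length (m) and number of grid points
dx = L / (n - 1)
dt = 0.4 * dx * dx / alpha   # respects stability limit dt <= dx^2/(2*alpha)

T = np.full(n, 20.0)         # rod initially at 20 C
T[0], T[-1] = 100.0, 100.0   # both ends held at 100 C

for _ in range(500):  # march forward in time
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[0], T[-1] = 100.0, 100.0

print(T.round(1))     # interior creeps toward the 100 C boundary value
```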
=== Biomechanics ===
Biomechanics is the application of mechanical principles to biological systems, such as humans, animals, plants, organs, and cells. Biomechanics also aids in creating prosthetic limbs and artificial organs for humans. Biomechanics is closely related to engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems.
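Consistent with the claim that simple Newtonian mechanics often suffices, a first-pass biomechanics estimate needs little more than σ = F/A; the anatomical numbers below are rough illustrative values, not clinical data.

```python
import math

# Back-of-envelope biomechanics: compressive stress in a femur modeled
# as a hollow circular column. Dimensions are rough illustrative values.
body_mass = 80.0                     # kg
force = 3 * body_mass * 9.81         # ~3x body weight, e.g. during walking
d_outer, d_inner = 0.028, 0.016      # cortical bone outer/inner diameters, m

area = math.pi / 4 * (d_outer**2 - d_inner**2)
stress = force / area
print(f"compressive stress: {stress/1e6:.1f} MPa")  # small vs. bone strength
```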
In the past decade, reverse engineering of materials found in nature such as bone matter has gained funding in academia. The structure of bone matter is optimized for its purpose of bearing a large amount of compressive stress per unit weight. The goal is to replace crude steel with bio-material for structural design.
Over the past decade the finite element method (FEM) has also entered the biomedical sector, highlighting further engineering aspects of biomechanics. FEM has since established itself as an alternative to in vivo surgical assessment and gained wide acceptance in academia. The main advantage of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomical structure without being subject to ethical restrictions. This has led FE modelling to the point of becoming ubiquitous in several fields of biomechanics, while several projects have even adopted an open-source philosophy (e.g. BioSpine).
=== Computational fluid dynamics ===
Computational fluid dynamics, usually abbreviated as CFD, is a branch of fluid mechanics that uses numerical methods and algorithms to solve and analyze problems that involve fluid flows. Computers are used to perform the calculations required to simulate the interaction of liquids and gases with surfaces defined by boundary conditions. With high-speed supercomputers, better solutions can be achieved. Ongoing research yields software that improves the accuracy and speed of complex simulation scenarios such as turbulent flows. Initial validation of such software is performed using a wind tunnel with the final validation coming in full-scale testing, e.g. flight tests.
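At its simplest, the numerical machinery behind CFD can be sketched with a first-order upwind finite-difference scheme transporting a scalar pulse in a uniform 1D flow; this toy discretization, with invented values throughout, stands in for the vastly larger solvers used in practice.

```python
import numpy as np

# Toy CFD sketch: first-order upwind finite differences transporting a
# scalar pulse with a uniform velocity field. Illustrative values only.
u = 1.0               # flow velocity, m/s
L, n = 1.0, 101       # domain length (m) and grid points
dx = L / (n - 1)
dt = 0.5 * dx / u     # CFL number of 0.5 keeps the scheme stable

phi = np.exp(-((np.linspace(0, L, n) - 0.2) / 0.05) ** 2)  # initial pulse

for _ in range(80):   # advect the pulse downstream
    phi[1:] -= u * dt / dx * (phi[1:] - phi[:-1])  # upwind difference
    phi[0] = 0.0                                    # inflow boundary

peak = np.argmax(phi) * dx
print(f"pulse peak now near x = {peak:.2f} m")  # initial 0.2 m peak moved ~0.4 m
```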
=== Acoustical engineering ===
Acoustical engineering is one of many sub-disciplines of mechanical engineering and is the application of acoustics, the study of sound and vibration. These engineers work to reduce noise pollution in mechanical devices and in buildings by soundproofing or removing sources of unwanted noise. The study of acoustics can range from designing a more efficient hearing aid, microphone, headphone, or recording studio to enhancing the sound quality of an orchestra hall. Acoustical engineering also deals with the vibration of different mechanical systems.
== Related fields ==
Manufacturing engineering, aerospace engineering, automotive engineering, and marine engineering are at times grouped with mechanical engineering. A bachelor's degree in these areas typically differs from a mechanical engineering degree by only a few specialized classes.
== See also ==
Automobile engineering
Index of mechanical engineering articles
== References ==
== Further reading ==
Burstall, Aubrey F. (1965). A History of Mechanical Engineering. The MIT Press. ISBN 978-0-262-52001-0.
Marks' Standard Handbook for Mechanical Engineers (11 ed.). McGraw-Hill. 2007. ISBN 978-0-07-142867-5.
Oberg, Erik; Franklin D. Jones; Holbrook L. Horton; Henry H. Ryffel; Christopher McCauley (2016). Machinery's Handbook (30th ed.). New York: Industrial Press Inc. ISBN 978-0-8311-3091-6.
== External links ==
Mechanical engineering at MTU.edu | Wikipedia/mechanical_engineering |
The following outline is provided as an overview of and topical guide to industrial machinery:
== Essence of industrial machinery ==
Heavy equipment
Hardware
Industrial process
Machine
Machine tool
Tool
== Industrial machines ==
Agricultural equipment
Assembly line
Industrial robot
Oil refinery
Packaging and labeling
Paper mill
Sawmill
Smelter
Water wheel
== Industrial processes ==
Bessemer process
Food processing
Manufacturing
Mining
Packaging and labeling
== History of industrial machinery ==
History of agricultural machinery
History of assembly lines
History of the Bessemer process
History of heavy equipment
History of industrial robots
History of machines
History of machine tools
History of oil refineries
History of packaging and labeling
History of paper mills
History of smelting
History of water wheels
== See also ==
Outline of industry
== External links == | Wikipedia/Industrial_machinery |
Industrial technology is the use of engineering and manufacturing technology to make production faster, simpler, and more efficient. The industrial technology field employs creative and technically proficient individuals who can help a company achieve efficient and profitable productivity.
Industrial technology programs typically include instruction in optimization theory, human factors, organizational behavior, industrial processes, industrial planning procedures, computer applications, and report and presentation preparation.
Planning and designing manufacturing processes and equipment is the main aspect of being an industrial technologist. An industrial technologist is often responsible for implementing certain designs and processes.
== Accreditation and certification ==
The USA-based Association of Technology, Management, and Applied Engineering (ATMAE), accredits selected collegiate programs in Industrial Technology in the USA. An instructor or graduate of an Industrial Technology program may choose to become a Certified Technology Manager (CTM) by sitting for a rigorous exam administered by ATMAE covering Production Planning & Control, Safety, Quality, and Management/Supervision.
ATMAE program accreditation is recognized by the Council for Higher Education Accreditation (CHEA) for accrediting Industrial Technology programs. CHEA recognizes ATMAE in the U.S. for accrediting associate, baccalaureate, and master's degree programs in technology, applied technology, engineering technology, and technology-related disciplines delivered by nationally or regionally accredited institutions in the United States.
== Knowledge base ==
"A career in industrial technology typically entails formal education from an accredited college or university. Opportunities are available to professionals with all levels of education. Those who hold associate degrees typically qualify for the entry-level technician and technologist positions, such as in the maintenance and operation of machinery. Bachelor's degree-holders could fill management and engineering positions, such as plant manager, production supervisor and quality systems engineering technologist. A graduate degree in industrial technology could qualify individuals for jobs in research, teaching and upper-level management".
Industrial Technology includes wide-ranging subject matter and could be viewed as an amalgamation of industrial engineering and business topics with a focus on practicality and management of technical systems with less focus on actual engineering of those systems.
Typical curriculum at a four-year university might include courses on manufacturing process, technology and impact on society, mechanical and electronic systems, quality assurance and control, materials science, packaging, production and operations management, and manufacturing facility planning and design. In addition, the Industrial Technologist may have exposure to more vocational-style education in the form of courses on CNC manufacturing, welding, and other tools-of-the-trade in manufacturing.
== Industrial technologist ==
Most positions obtained by industrial technology program graduates are applied engineering and/or management oriented. Since "industrial technologist" is not a common job title in the United States, the actual bachelor's or associate degree earned by the individual is obscured by the job title he or she receives. Typical job titles for industrial technologists holding a bachelor's degree include quality systems engineer, manufacturing engineer, industrial engineer, plant manager, and production supervisor. Typical job titles for industrial technologists holding a two-year associate degree include project technologist, manufacturing technologist, and process technologist.
A technologist curriculum may focus or specialize in a certain technical area of study. Examples of this includes electronics, manufacturing, construction, graphics, automation/robotics, CADD, nanotechnology, aviation, etc.
== Technological development in industry ==
A major subject of study is technological development in industry. This has been defined as:
the introduction of new tools and techniques for performing given tasks in production, distribution, data processing (etc.);
the mechanization of the production process, or the achievement of a state of greater autonomy of technical production systems from human control, responsibility, or intervention;
changes in the nature and level of integration of technical production systems, or enhanced interdependence;
the development, utilization, and application of new scientific ideas, concepts, and information in production and other processes; and
enhancement of technical performance capabilities, or increase in the efficiency of tools, equipment, and techniques in performing given tasks.
Studies in this area often employ a multi-disciplinary research methodology and shade off into the wider analysis of business and economic growth (development, performance). The studies are often based on a mixture of industrial field research and desk-based data analysis and aim to be of interest and use to practitioners in business management and investment (etc.) as well as academics. In engineering, construction, textiles, food and drugs, chemicals and petroleum, and other industries, the focus has been on not only the nature and factors facilitating and hampering the introduction and utilization of new technologies but also the impact of new technologies on the production organization (etc.) of firms and various social and other wider aspects of the technological development process.
How and when technological development in industry is performed: technological processes are always based on materials, equipment, human skills, and operating circumstances. If any of these parameters changes, the technology must be re-calibrated to match the designed product. Such re-calibration is not considered a technology change, because an industrial technology is essentially an engineering guide for achieving the required specifications of the designed product. To calibrate an industrial technology, documented manufacturing experiments are carried out until the final product specifications are matched, based on the original technology, the newly changed parameters, and scientific principles. Finally, the new change is documented as an addition to the original industrial technology for that case. Whenever an industrial technology is applied for the first time, or after a long stoppage, its processes should be tested on preliminary samples as a re-calibration step.
== References == | Wikipedia/Industrial_equipment |
A Master of Science in Management (abbreviated as MS Management or MSM) is a professional degree with a focus on management.
In terms of content, it is similar to the Master of Business Administration (MBA) degree as it contains identical management courses but is open to prospective postgraduate candidates at any level in their career unlike MBA programs that have longer course credit requirements and only accept mid-career professionals. In many cases it is synonymous with the Master of Management (MiM) and is also related to the Master of Science in Commerce (MS-Comm or MS-Com).
== Subjects ==
Graduates holding an MSc in Management have commonly studied the following subjects:
Business Ethics
Corporate and Business Strategy
Economics
Engineering management
Entrepreneurship
Finance
Financial Management and managerial accounting
Human Resources Management and Organizational Behavior
Management Information Systems
Management Theory
Marketing or Marketing Management
Operations Management and Supply Chain Management
Protected Area Management
Personal student dissertation (thesis)
In Canada, a highly specialized MSc in Management is also quite common (ex: MSc in Management in Finance and Accounting). These degrees are meant to provide students with a highly specialized set of skills for industry or for further academic study.
== Comparison to MBA ==
As is the case with the MBA degree, as the number of schools granting MSc in Management degrees has grown, so has the diversity of characteristics defining these programs. In most cases, the MSc in Management is an academic degree with few or no requirements for previous job experience, while the MBA is also a professional degree for persons with a minimum of 2–3 years of job experience or second-class lower-division honours. However, there are also schools where the MSM degree is granted only to managers with extensive (typically 10 years or more) work and managerial experience. Whereas MBA programs are open to people from all academic disciplines, about one third of the MSc in Management programs worldwide require a first degree in business or economics.
Some claim the MSc degree is more theory-oriented, and some programs do focus on specific skill set development for managers, while the MBA degree can be more practice-oriented and financially focused. In some schools, the MSc in Management degree studies the academic discipline of Management, while the MBA degree studies the academic discipline of Business Administration. Thus, some MSc degree programs focus on research in a specialized area, while the MBA degree would place more emphasis on strategy. According to one school, "While the MBA program focuses on the practical application of management theory, the M.Sc. in Management will provide for an advanced-level conceptual foundation in a student’s chosen field, and allow for the pursuit of highly focused research through a master’s level thesis."
Both degrees contain strong professional focus and are both very well suited for professionals wishing to improve positions in their respective industries. Most MSc in Management programs contain very directed content geared towards development of a particular set of leadership skills for the mid-career professional looking to improve their credentials. Both the MBA and the MSc in Management can be completed online or in-person in roughly 1-2 years depending on the school providing the programs.
Persons admitted to the degree of MSc in Management are entitled to add the designation MSc or MSM after their names (e.g. Domeng Gomez MSc), while those holding an MBA can add the designation MBA (e.g. Domeng Gomez MBA).
While the MBA degree originated in the United States, the MSc in Management degree is of European origin. Demand for the MBA appears to be saturated, whereas demand for the Master in Management is increasing.
== See also ==
MPhil
Master of Business Administration (MBA)
Master of Management (MM)
Master of Science in Finance
Master of Commerce
Doctor of Management
== References == | Wikipedia/Master_of_Science_in_Management |
The Master of Science in Project Management (M.S.P.M.), also known as the Master in Project Management (M.P.M.), is a professional advanced degree in project management. Such a degree is not only for future project managers but also opens opportunities in consultancy, evaluation of investment projects, business analysis, business development, operations management, supply chain management, business administration, or any other area of business administration or management. These master's programs usually provide a general education revolving around business organization.
While programs may vary, most curricula are designed to provide professionals with the knowledge, skills and abilities to lead and manage effectively. Lecture and laboratory sessions require the application of critical thinking to problem solving within notional and actual situations. Students normally engage in the study of concepts, methodologies and analytic techniques necessary for successful leadership of programs/projects within complex organizations. Curricula typically focus on problem solving and decision-making using case studies, teaming exercises, hands-on applications, active participation, research and integrative exercises.
Candidates for M.P.M. programs are required to have at least an associate or bachelor's degree from an accredited university, generally related to business administration or engineering. Most programs require 36–42 graduate credits and a thesis or final project.
== References ==
== External links ==
What PMI Accreditation Means
Project Management Institute (PMI)
Global Accreditation Center (GAC)
Master in Project Management with Agile Methodologies | Wikipedia/Master_of_Science_in_Project_Management |
A Master of Science in Administration or Master of Science in Accounting degree (abbreviated MScA or MSA) is a type of Master of Science degree awarded by universities. The field of study came into existence in the mid-to-late 1970s. The focus of the MSA program is management skills and the program is designed to develop and train management graduates who may serve in administrative positions in the private or public sector.
The MSA program is a branch of the Master of Public Administration (MPA) and Master of Business Administration (MBA). The MSA combines courses from several fields, including psychology, economics, political science, statistics, computer science, business administration, technology and resource management. The MSA has similarities to the MPA, as it focuses on organizational behavior, microeconomics, public finance, research methods, policy process and policy analysis, ethics, management, and performance measurement. The similarities with the MBA include the focus on economics, organizational behavior, marketing, accounting, operations management, international business, information technology management, supply chain management, and government policy.
Universities that currently offer this degree include Arizona State University; Boston University; Central Michigan University; Pepperdine University; University of West Florida; Université Laval, Québec, Canada; and HEC Montréal. | Wikipedia/Master_of_Science_in_Administration |
The Master of Accountancy (MAcc, MAcy, or MAccy), alternatively Master of Science in Accounting (MSA or MSAcy) or Master of Professional Accountancy (MPAcy, MPAcc, MPA or MPAc), is a graduate professional degree designed to prepare students for public accounting; academic-focused variants are also offered.
In the United States, the program provides students with the 150 credit hours of coursework required by most states before taking the Uniform Certified Public Accountant Examination.
This specialty program usually runs one to two years in length and contains ten to twelve three-semester-credit courses (30 to 36 semester hours total). The program may consist of all graduate accounting courses or a combination of graduate accounting courses and graduate management, tax, leadership, and other graduate business electives. The program is designed not only to prepare students for the CPA examination but also to provide a strong knowledge of accounting principles and business applications.
Similar graduate programs exist in Canada, where certain universities such as Brock University's Goodman School of Business, Carleton University's Sprott School of Business, University of Saskatchewan's Edwards School of Business, and University of Waterloo's School of Accounting and Finance offer master's programs and waive all education requirements up until the Common Final Examination (CFE) in order to become a Canadian CPA.
A Master of Professional Accounting can also be obtained from Australian universities to qualify for the Australian CPA, IPA or CA.
As above, in other countries the degree's purpose may differ. Where the Bachelor of Accountancy is the prerequisite for professional practice, for example in South Africa, the Master of Accountancy then comprises specialized coursework in a specific area of accountancy (computer auditing, taxation...), as opposed to CPA preparation as above. It may also be offered as a research based program, granting access to doctoral programs.
Graduates entering corporate accounting or consulting often additionally (alternatively) pursue the Certified Management Accountant (CMA), Certified Internal Auditor (CIA) or other such certifications.
== See also ==
Accounting scholarship
Accounting § Education, training and qualifications
Bachelor of Accountancy
Certified Public Accountant
Chartered Professional Accountant
Enrolled Agent
List of master's degrees
Master of Laws
Master of Taxation
== References == | Wikipedia/Master_of_Science_in_Accounting |
A Master of Science in Nursing (MSN) is an advanced-level postgraduate degree for registered nurses and is considered an entry-level degree for nurse educators and managers. The degree may also prepare a nurse to seek a career as a nurse administrator, health policy expert, or clinical nurse leader. The MSN may be used as a prerequisite for doctorate-level nursing education and is the minimum degree required to become an advanced practice registered nurse such as a nurse practitioner, clinical nurse specialist, nurse anesthetist, or nurse midwife.
This graduate-level degree may focus on one or more of many different advanced nursing specialties such as acute care, adult, family, gerontology, neonatology, pediatric, psychiatric, or women's health.
More recently, universities have begun to offer Master of Science pre-registration nursing courses, which cover the registration process and nurse training of the undergraduate course, but with master's-level academic components. This course was initially started at the University of the West of Scotland in the UK and has since been included at other universities.
== See also ==
Associate of Science in Nursing
Bachelor of Science in Nursing
Diploma in Nursing
Doctor of Nursing Practice
National League for Nursing Accrediting Commission
Nurse education
Nursing school
== External links ==
CCNE - Commission on Collegiate Nursing Education - Accrediting body that "ensures the quality and integrity of baccalaureate and graduate education programs preparing effective nurses."
ACEN - Accreditation Commission for Nursing Education - Accrediting body that is "responsible for the specialized accreditation of nursing education programs." | Wikipedia/Master_of_Science_in_Nursing |
This list refers to specific master's degrees in North America. Please see master's degree for a more general overview.
== Accountancy ==
Master of Accountancy (MAcc, MAc, MAcy or MPAcc), alternatively Master of Professional Accountancy (MPAcy or MPA), or Master Science in Accountancy (MSAcy) is typically a one-year, non-thesis graduate program designed to prepare graduates for public accounting and to provide them with the 150 credit hours required by most states before taking the CPA exam.
Master of Accounting (MAcc) is an 8-month degree offered by the University of Waterloo, School of Accounting and Finance in Canada that satisfies the 51 credit hours and CKE exam requirement needed to write the Chartered Accountant Uniform Final Exam (UFE) in the province of Ontario. The School also delivers a Master of Taxation program. The School is housed in the Faculty of Arts.
Master of Professional Accounting (MPAcc) is a two-year, non-thesis graduate program offered by the University of Saskatchewan in Canada. In the United States, the University of Texas at Austin offers a Masters of Professional Accounting (MPA) degree.
== Administration ==
Master of Business Administration (MBA), Master of Management (MAM), Master of Accountancy (MAcy), Master of Science in Taxation (MST), Master of Science in Finance (MSF), Master of Business and Organizational Leadership (MBOL), Master of Engineering Management (MEM), Master of Health Administration (MHA), Master of Not-for-Profit Leadership (MNPL), Master of Public Policy (MPP), Master of Policy, Planning and, Management (MPPM), Master of Public Administration (MPA), Master of International Affairs (MIA), Master of Global Affairs (MGA), Master of Strategic Planning for Critical Infrastructures (MSPCI), Master of Science in Strategic Leadership (MSSL), and Master of Science in Management (MSM) are professional degrees focusing on management for the private and public sectors, both domestic and international.
== Adult Education ==
While other advanced degree education programs tend to be more widely known, the Master of Science in Adult Education provides professional educators with expert-level tools for success in the adult learning environment and advancement in educational leadership. As the name suggests, this degree program provides ample opportunity for the student to take a more scientific approach to the study of education. Many M.S. Adult Education programs offer concentrations in Community Service and Health Sciences (non-profit realm), Human Resources, Technology (distance learning), and Training and Development (corporate or for-profit environment).
== Advanced Study ==
In the United States the Master of Advanced Study (M.A.S.) also the Master of Advanced Studies (MAS) degree is a post-graduate professional degree issued by numerous academic institutions, but most notably by the University of California. M.A.S. programs tend to "concentrate on a set of coordinated coursework with culminating projects or papers rather than emphasizing student research" and frequently are structured as interdisciplinary offerings.
In Canada, the Master of Advanced Study degree is an independent research degree.
Advanced Studies programs tend to be interdisciplinary and tend to be focused toward meeting the needs of professionals rather than academics.
== Appalachian Studies ==
The Master of Appalachian Studies focuses on research into culture, e.g. music, sociology, and sustainability, within the cultural and geographic region of Appalachia. This degree primarily develops understanding of the historical, political, geographic, and socio-economic circumstances that have led Appalachia and similar regions to become what they are today.
== Applied Anthropology ==
The Master of Applied Anthropology (MAA) is a two-year program focused on training non-academic anthropologists. The University of Maryland, College Park developed this program to encourage entrepreneurial approaches to careers outside academia, where most new anthropologists are likely to seek and find employment. For this reason, it is considered a professional degree rather than a liberal arts degree.
== Applied Politics ==
The Master of Applied Politics is a 2-year master's degree program offered by The Ray C. Bliss Institute of Applied Politics at The University of Akron. It is one of the few professional master's degree programs in the United States focusing on practical politics and efforts to influence political decisions. This includes winning elections, campaigning, fund raising, influencing legislation and strengthening political organizations. MAP graduates have gone on to manage campaigns, run for political office, join polling and fundraising firms, and start their own consulting firms.
== Applied Sciences ==
The two master's programs offered in Management Sciences provide both course work and research opportunities in the areas of operations research, information systems, management of technology, engineering, and other areas. Operations research, mathematical modeling, economics, organizational behavior, and other related concepts underlie success in almost all areas of management. Refer to the Master of Engineering degree section for more information.
== Architecture ==
The 4+3 or 5-year Master of Architecture (M.Arch. I) is a first professional degree, after which one is eligible for internship credit [and subsequent exam] required for licensure. The 2-year Master of Architecture (M.Arch. II) is a graduate-level program which assumes previous coursework in architecture (B.Arch. or M. Arch I).
== Archival Studies ==
The Master of Archival Studies degree is awarded following completion of a program of courses in archival science, records management and preservation. The degree was first offered at the University of British Columbia (Canada), and is currently offered at Clayton State University (Georgia). The Master of Archives and Records Administration is offered by San Jose State University (California).
== Bioinformatics ==
The Master of Science in Bioinformatics degree builds on a background in biology and computing. Students learn how to develop software tools for the storage and manipulation of biological data. Graduates typically work in the biotechnology or pharmaceutical industries or in biomedical research.
== Biomedical Sciences ==
The Master of Biomedical Sciences (MBS) degree prepares students for medical schools, related health professions, and other biomedical careers. The curriculum integrates graduate level human biological sciences with skill development in critical thinking, communication and teamwork.
== Broadcast Journalism ==
The Master of Broadcast Journalism (MBJ) degree prepares students for reporting and journalism in television and radio broadcasting, i.e. on the scene reporting and newsroom newscasting and meteorology.
== Chemistry ==
The Master of Science in Chemistry is a degree that prepares recipients for jobs as higher-level industrial chemists and laboratory technicians, and for doctorate programs in chemistry. Schools often offer two programs: a coursework-based master's and a research-based master's. The coursework master's is completed through a number of graduate-level chemistry classes and may require the recipient to complete a research proposal to demonstrate their expertise. The research master's is completed through a certain number of hours devoted to academic chemistry research, classes related to the research being performed, and a thesis presenting the research completed during the master's and its impact on the field.
== Christian Education ==
The Master of Arts in Christian Education is a seminary degree primarily designed for those in the field of church ministry. Various specializations include children's ministry and youth ministry, among others. Thus, many children's pastors and youth pastors obtain the degree, while senior pastors usually pursue the Master of Divinity degree. The degree is usually obtained in 2–3 years.
== City and Regional Planning ==
Master of City and Regional Planning (MCRP) is a professional degree in the study of urban planning.
== Clinical Medical Science ==
Clinical Medical Science is a professional degree awarded to Physician Assistants.
== Communication ==
The Department of Communication of the University of Ottawa offers a Master of Arts (MA) in Communication degree with thesis or with research paper.
The program focuses on five fields of specialization: media studies; organizational communication; health communication; identity and diversity in communication; government communication.
Both teaching and research explore major issues related to new information and communication technologies in media and organizations at the national and international levels.
The academic department of Art History & Communication Studies at McGill University offers Master of Arts (M.A.) degrees, which are differentiated either as Interdisciplinary (Thesis/Non-Thesis) or as Noninterdisciplinary (Thesis/Non-Thesis) programs. The duration for a Non-Thesis option is two years of full-time study. The period for a Thesis option may last longer, depending also on the required level of courses and complexity of the thesis. Students who are admitted to the Interdisciplinary Thesis option Communication Studies - Gender and Women's Studies (incl. e.g. psychology and/or other subjects) might have to earn "very high research" credits at the 700-PhD-level, and they may need to complete their program in the maximum of three years of full-time candidature.
The Communication & Media Arts Department at Lancaster Bible College offers a Master of Arts (MA) degree in Strategic Communication Leadership.
== Computer Science ==
The Master of Science in Computer Science and Master of Science in Information Technology are graduate degrees for information technology professionals and computer engineers. They are generally based on core computer science subjects where knowledge can be used for advanced work especially in the information technology industry.
== Community Health and Prevention Research ==
A Master of Science in Community Health and Prevention Research is a graduate degree for students interested in advancing health in communities through evidence based science. The degree is similar to a public health degree with an emphasis on epidemiology, measurement, research, and statistics in the coursework though with a strong applied focus and emphasis on community engagement; theory and applied principles of behavior change; and intervention development, evaluation, and dissemination. Programs may combine in-class instruction, faculty and peer-to-peer mentoring, with community-based internships.
== Criminal Justice ==
The Master of Criminal Justice is a professional degree in the study of criminal justice. The program is designed as a terminal degree for professionals in the field of criminal justice or as preparation for doctoral programs. It may also be referred to as a Master of Science in Justice Administration (M.S.J.A.).
== Cross-Cultural and International Education ==
This master's degree, offered by Bowling Green State University in Bowling Green, Ohio, prepares professional educators to be effective leaders in the internationalization of schools and communities and to be positive facilitators of cross-cultural understanding. Students complete the MACIE program with a capstone seminar or a master's thesis.
== Cultural Studies ==
This master's degree is a one or two year degree that allows students to engage the heterogeneous body of theories and practices associated with cultural studies and critical theory in the critical investigation of culture.
== Cyber Security ==
The Master of Information and Cybersecurity (MICS) is an interdisciplinary degree program that examines computer security technologies as well as human factors in information security, including the economic, legal, behavioral, and ethical impacts of the cybersecurity domain.
A Master of Science in Cyber Security is typically seated within the computer science discipline and is focused on the technical aspects of cybersecurity.
Other cybersecurity master's degree programs focus on policy and legal aspects of cybersecurity.
== Data Science ==
Master in Interdisciplinary Data Science, the University of Michigan School of Information's Master of Applied Data Science (MADS), and Master of Information and Data Science (MIDS) are professional graduate degrees in Data Science designed to help meet the need for knowledgeable data scientists who can answer important questions with data-backed insights, by drawing upon computer science, social sciences, statistics, management, and law.
== Dentistry ==
The Master of Science in Dentistry is a post-graduate degree awarded for those with a dental degree (BDS, DMD, DDS, BDent, BChD, etc.), who have completed a post-graduate level course of study.
== Digital Media ==
The Master of Digital Media is a professional degree in the study of digital media, which includes entertainment technology, and can be defined as media experiences made possible by the advent of primarily computer-mediated digital technologies (e.g., electronic games and special effects in motion pictures). This is also called the Master of Interactive Technology (MIT), which is offered at SMU Guildhall, or Master of Entertainment Technology (Carnegie Mellon).
== Dispute Resolution ==
Dispute Resolution as a master's degree program, a first in Australia, focuses on the wide range of non-adversarial dispute resolution processes. The subject accommodates distinct streams that include commerce, family, community, and court-annexed programs. This subject is an introduction to the philosophy, theory, and practice of an area of increasing importance in all professions, business, and government. Dispute resolution processes are now integrated into the adversarial framework as well as being applied to an ever-widening range of private and public situations. This emerging practice of professional dispute resolution converges within and outside the legal profession.
== Divinity ==
The Master of Divinity (M.Div.) is the first professional degree in ministry (in the United States and Canada) and is a common academic degree among theological seminaries. It typically takes students three years to complete. Other theology degree titles used are Master of Theology (Th.M. or M.Th.), Master of Theological Studies (M.T.S.), Master of Arts in Practical Theology (M.A.P.T.), Master of Sacred Theology (S.T.M.) and Master of ecclesiastical Philosophy (M.EPh.).
== Education ==
Master of Education degrees are similar to MA, MS, and MSc, where the subject studied is education. In some states in the United States, teachers can earn teacher licensure with a bachelor's degree, but some states require a master's degree within a set number of years as continuing education. Other education-related master's degrees conferred in the United States are Master of Arts in Teaching (M.A.T.), Master of Science in Instruction (M.S.I.), Master of Science in education (M.S.Ed. or M.S.E.), Master of Arts in education (M.A.Ed.), Master of Adult Education (M.Ad. Ed.), and Master of Music Education (M.Mus.Ed.).
A Master of Education degree, or M.Ed., is a professional, graduate-level degree geared toward individuals who are seeking to move beyond the classroom into administrative-level positions or other specialized roles. It is generally not a degree leading to teaching at a college level, though it can very well prepare individuals for employment in higher education management and student personnel administration, as well as becoming adjunct college instructors. Many online M.Ed. programs offer a specialization in educational leadership. Over the past few years, however, the opportunity to specialize in educational technology has also become increasingly available. While many M.Ed. graduates seek to become principals and school district administrators, others become reading or technology specialists. The Master of Education degree is sometimes referred to as a practitioner's degree, because of its immediate and practical application to the school environment.
A Master of Arts in Education is perhaps the most flexible degree in the field, and often allows an educator to specialize in one of several concentrations. In addition to taking core classes in educational philosophy, child psychology, educational ethics, and education research methods, teachers pursuing this advanced education degree generally specialize in one of several fields.
Educational professionals who are looking to remain in the classroom often opt to pursue an online Master of Arts in education with a concentration in either elementary or secondary education. At many universities, a concentration in special education is also available. Individuals who are looking to leave the classroom often pursue concentrations in educational leadership, technology, or counseling. This list is by no means complete, as each university offers its own options for specialization.
Overall, the M.A. in Education includes more of the theoretical study of education than most of the other advanced degree options. The Master of Arts in Education also offers an extremely high level of flexibility, and can help to advance careers both inside and outside of the classroom.
While the other advanced degree programs tend to be more widely known, the Master of Science in Education can also provide professional educators with the tools needed for success in the classroom and advancement in educational leadership. As the name suggests, this degree program provides ample opportunity for the student to take a more scientific approach to the study of education. Many of those individuals who choose to follow the scientific route concentrate on topics like instructional technology or educational research.
In many instances, M.S. Education programs that take a scientific slant tend to include coursework in statistics and educational evaluation and measurement. Educators who pursue the more scientific path generally leave the classroom, and in many instances, the school. They have excellent job prospects in the educational research sector. Many go on to work with school districts, state governments, or private research organizations to assess student performance and suggest policies that will boost student achievement. Others supervise technology initiatives for schools or school districts, work in distance education, or pursue doctoral studies.
Other individuals who pursue an MS in Education opt for a less scientific course of study, such as educational leadership, or literacy. In some instances, these programs resemble the previously discussed Master of Education degree, but at other schools, these programs place a much greater focus on the scientific aspects of studying education. In either case, the same opportunities for advancement as a school administrator should be available, regardless of whether one has earned a degree of Master of Science in education, a Master of Arts in education, or a Master of Education.
== Educational Technology ==
The Association for Educational Communications and Technology (AECT) defines the field as "the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources." Programs typically include courses on instructional design, learning theories, educational media, instructional messaging, related theory, and research methods. Some institutions use the term instructional technology to refer to their programs. Although some experts within the field distinguish between educational and instructional technology, on a practical level the two are essentially synonymous.
Master's in Educational Technology programs are offered in at least one university in nearly every US state, and in many countries outside of the United States, including Australia, Canada, China, Singapore, South Korea, and Turkey.
One of the oldest programs in North America is based in the Department of Education at Concordia University in Montreal, Quebec, Canada, which has graduated approximately 2,000 master's degree and over 150 PhD students in its 50 years. Students can study full- or part-time, preparing to use their skills and knowledge to design curricula and programs, integrate technology, advise on educational technology policy, and conduct related research for schools, higher education, workplace learning, and community and informal learning.
The University of British Columbia (UBC) in Vancouver, British Columbia, Canada offers a part-time program within the Faculty of Education, focusing on curriculum design and technology integration.
The one-year professional Masters in Educational Technology and Applied Learning Science (METALS) is an interdisciplinary program offered by Carnegie Mellon University in Pittsburgh, Pennsylvania. It is jointly taught by the Human-Computer Interaction Institute in the School of Computer Science and the Department of Psychology in the Dietrich College of Humanities and Social Sciences. The program is an outgrowth of the research conducted by the National Science Foundation's Science of Learning Center, LearnLab, in which more than 200 researchers produced over 1,600 publications and talks. METALS trains students to design, develop, and evaluate evidence-based programs for learning in settings that range from schools to homes, workplaces to museums, and online to offline learning environments. Students with backgrounds in psychology, education, computer science, design, information technology, or business are encouraged to apply.
== Electronic Business Technologies ==
The Master in Electronic Business Technologies (MEBT) is an interdisciplinary master's program offered at the University of Ottawa in Ontario, Canada.
== Engineering ==
The Master of Engineering (Magister in Ingeniaria) degree is awarded to students who have done graduate work at the master's level in the field of engineering. In the United States, engineering candidates are typically awarded MS degrees, although a growing number of schools also offer an MEng (e.g. the University of California, Berkeley). The distinction between the two programs varies between schools, but the MS is largely considered an academic degree, whereas the MEng is a professional degree. In the UK and Canada, candidates are generally awarded MSc, MASc or MEng degrees.
In Canada, the Master of Applied Science (MASc) is awarded to master's degree students with a research focus (having completed work leading to a thesis), while an MEng is awarded to master's degree students with a coursework focus and the completion of a research paper. The distinction between MASc and MEng is not definite since some universities grant only an MEng and some universities grant only an MASc, be it either research or coursework-focused.
In Francophone universities, the master's degree is referred to as a Maîtrise. The Master of Applied Science translates to Maîtrise des sciences appliquées and is abbreviated MScA. The Maîtrise in Canada is not equivalent to the Maîtrise in France, nor is the Baccalauréat; Canadian French-language degree and title nomenclature is consistent with North American custom. The MEng title translates to MIng, though this title cannot be used in one's signature in Québec (nor can BIng), as the title ing. (equivalent of P.Eng. in other provinces) is reserved for members of the provincial board of engineers, the Ordre des ingénieurs du Québec.
The Master of Science in Engineering is a post-graduate degree to be differentiated from the Master of Engineering. It requires a thesis and qualifies students holding it to apply for a Doctor of Philosophy (PhD) in Engineering.
== Environment ==
The Master of Environment (MEnv) is available at Concordia University, the Université de Sherbrooke, and the University of Colorado Boulder. The Master of Environmental Science (MEnvSc) is offered at University of Toronto Scarborough.
== Environmental Management ==
The Master of Environmental Science and Management (MESM) is offered by UC Santa Barbara. The Master of Land and Water Systems is available at the University of British Columbia.
== Finance ==
The Master of Science in Finance is a common degree in the corporate finance and investment finance world. It is considered the financial service industry's answer to accounting's Master of Accountancy (MAcc) degree.
== Fine Arts ==
The Master of Fine Arts (M.F.A.) is a two to three-year terminal degree in a creative field of study, such as theatre arts, creative writing, filmmaking, or studio art.
== Foreign Service ==
The Master of Science in Foreign Service (MSFS) is a two-year degree program offered by Georgetown University's Edmund A. Walsh School of Foreign Service. Established in 1922, it is the first international relations graduate program in the United States. The 48-credit multidisciplinary curriculum emphasizes both theory and practice to educate international affairs professionals in the public, private, and non-profit sectors. Foundational courses in international relations, economics, and history are complemented by specialized courses in students’ areas of concentration: international development, politics and security, science and technology, global business and finance. In addition to course requirements, the degree requires successful passing of the oral examination, proficiency in a foreign language, and completion of an internship and leadership requirements.
== Forensic Science ==
The Master of Forensic Sciences (MFS) is a specialized professional degree designed for law enforcement, lab personnel, attorneys, investigators and other professionals. The Master of Science in Forensic Science is offered by John Jay College of Criminal Justice at City University of New York.
The Master of Science in Forensic Science and Law is a degree program available at Duquesne University. It combines the applications of forensic science with the study of law and the legal use of forensic evidence before a court of law.
Universities offering degree programs in this field have applied for accreditation from the Forensic Science Education Programs Accreditation Commission of the American Academy of Forensic Sciences (AAFS).
== Forestry ==
The Master of Forestry (MF) degree is offered by Yale School of the Environment. The two-year MF degree is accredited by the Society of American Foresters (SAF) and prepares students for careers in sustainable natural resource management and policy. The curriculum is divided into three stages and focuses on the complex relationships among the science, management, and policy of forest resources. Students are also required to complete a summer internship and a capstone.
A similar 48-credit MF degree at Duke University's Nicholas School of the Environment is also accredited by the SAF and can be pursued on its own or concurrently with the Master of Environmental Management (MEM) degree or with degrees from other professional schools at Duke and the University of North Carolina at Chapel Hill.
== Global Affairs ==
The Master of Science in Global Affairs is a degree program available at New York University. The 42-credit curriculum is designed to help students unravel the complex relationships between nations and key international factors and make sense of world events. Coursework covers subjects ranging from economic globalization and the issues facing developing countries to conflict resolution and international law.
The Master of Global Affairs (MGA) is a two-year professional degree offered by the Munk School within the University of Toronto. The interdisciplinary degree aims to equip students with an awareness of global financial systems, global civil society, and global governance to prepare students for strategic thinking and responsible leadership on global issues.
The Master of Arts in Global Governance (MAGG), offered by the Balsillie School of International Affairs, is designed to be completed in 16 months: two terms of course work, a third term in which students complete a major research paper, followed by a fourth term as an intern working on global governance issues in the public or private sector, a research institute, or an NGO. The selection process for the MAGG is highly competitive, and only 15-18 students are admitted per year.
A Master of Global Affairs program is offered at the University of Notre Dame, Rice University, and the University of Toronto.
== Health Administration ==
Master of Health Administration (MHA) is a two-year degree similar to an MBA but focused on health care systems rather than businesses in general.
== Health Science ==
The Master of Health Science is awarded to students who have completed a post-graduate course of study in health sciences or health policy fields, usually associated with the public health field. The MHS is often a more focused program for public health professionals, often with non-health professional backgrounds. This degree is abbreviated as MHSc in Canada. In the US, related degree programs such as the Master of Science in Health Sciences (MSHS) are becoming more common as universities and medical schools develop more degree specialties.
== Historic Preservation ==
The Master of Science in Historic Preservation (MSHP) is a graduate degree, often offered through schools and colleges of architecture, which focuses on the theory and practical elements of preserving buildings of historic importance. There are only a few programs in the United States; they tend to focus on architectural conservation, design, history/theory, preservation planning, building analysis, and preservation law.
The Master of Arts in Historic Preservation provides training in the research, documentation, and preservation of the historic built environment. Typically the MAHP is differentiated from the Master of Historic Preservation (MHP) by an emphasis on historic research and writing, and is usually housed in history departments.
The Master of Historic Preservation (MHP) is a two- to two-and-a-half-year degree in the field of historic preservation. The MHP is usually considered a terminal degree, although a few PhD programs offer historic preservation as a concentration within another field such as community planning. MHP programs are commonly housed in history, planning, or architecture departments. Interdisciplinary by nature, they typically consist of courses in architectural history, history and theory of preservation practice, cultural landscape preservation, historic resource documentation and evaluation, community planning, and rehabilitation philosophy and practice.
== History ==
The Master of Arts in History is a graduate degree in the study of history.
== Human-Computer Interaction ==
The Master of Human-Computer Interaction is a professional degree that focuses on training and research in topics related to human-computer interaction (HCI). Although HCI touches on areas of research covered by computer science, psychology, cognitive science, the social sciences, design, media, and other fields of study, it is often categorized under departments of computer science or information science, and students who pursue graduate studies in this area usually receive a degree of Master of Computer Science or Master of Information Science.
A limited number of institutions offer a master's degree directly in HCI. The Master of Human-Computer Interaction (MHCI), a program at Carnegie Mellon University offered by the Human-Computer Interaction Institute (HCII), was the first program solely focused on professional training for students who wish to pursue a career in a human-computer interaction related area. Similar programs are offered by other institutions, such as the Master of Human-Computer Interaction and Design (MHCI+D) at the University of Washington, the Master of Science in Human-Computer Interaction (MS in HCI) at Georgia Tech, and the Master of Science in HCI at Indiana University Bloomington. These programs often combine academic learning experiences in research and design with industry training in client relations and project management. Students are usually expected to gain professional proficiency in HCI-related topics in an industry setting.
== Human Factors ==
The Master of Science in Human Factors (MSHF) degree focuses on human factors and ergonomics in systems and processes.
== Humanities ==
The Master of Arts in Humanities degree requires two years of study at an accredited college.
== Industrial and Labor Relations ==
The Master of Industrial and Labor Relations is a graduate degree in the field of industrial and labor relations.
== Industrial Design ==
Master of Industrial Design is a two- or three-year program in the field of industrial design.
The MID acronym is also used for a Master of International Development, which is a postgraduate degree in the study of developmental economics, non-governmental organizations and civil society, development planning, environmental sustainability, and human security.
== Information ==
The Master of Science in Information (MSI) is a graduate degree designed for information science professionals.
== Information Management ==
The Master of Science in Information Management (MSIM) is a graduate degree designed for information management professionals.
== Information Systems ==
The Master of Science in Information Systems (MSIS) is a graduate degree designed for information systems professionals. The Master of Information Systems (MIS or MSIS) is a 2-year degree geared towards professionals trained in both management and information systems. The combination of both fields is often referred to as management information systems. The Master of Information Systems is sometimes extended with additional courses such as those focused on management (MISM or MIS/M) or information security (MISIS, MIS/IS).
== Information Technology ==
The Master of Science in Information Technology is a graduate degree designed for information technology professionals. It is generally based on core computer science subjects whose knowledge can be applied to advanced work, especially in the information technology industry. Whereas the bachelor's-level degree provides a well-balanced foundation in information technology, the master's degree allows the student not only to advance core knowledge further but also to specialize in selected technology areas of interest. The degree is much sought after by employers in the information technology marketplace, as it indicates that the holder is competent in the key areas of the information technology sector and has advanced this competence with specialized knowledge, research, and publications.
== Interactive Media ==
The Master of Science in Interactive Media is a graduate degree offered by Quinnipiac University in Hamden, Connecticut. Through a balance of courses in interactive theory, media production, programming, Web design, and animation, students learn how to transform traditional media and original content into multimedia productions. The combination of study in the intellectual and production aspects of interactive media prepares students to be innovative thinkers who understand the shift from legacy media to online media.
== International Business ==
The Master of International Business (MIB) degree is a postgraduate degree designed to develop the capabilities and resources of managers in the global economy. It is intended for those seeking to establish or accelerate a career in international business.
Emphasizing the practical application of specialized knowledge, the program equips management with skills tailored to the international business environment.
The Master of International Business focuses on strategic planning for international operations and provides an in-depth understanding of the organizational capabilities required for international operations, including specialized functions such as international marketing, finance and human resource management.
The degree may be thought of as an MBA with a particular focus on multinational corporations.
== International Development ==
The Master of International Development (MID) is a postgraduate degree in the study of developmental economics, non-governmental organizations and civil society, development planning, environmental sustainability, and human security.
== International Economics and Finance ==
The Master of Arts in international economics and finance is a two-year degree in the field of economics.
== International Hotel Management ==
The Master of Arts in International Hotel Management is a two-year professional graduate degree awarded by Royal Roads University [1] that prepares individuals to succeed in senior and executive hospitality positions within the accommodations sector, including hotels, resorts, and cruise ships. RRU delivers this 39-credit program through a combination of 15 credits during three short-term residencies (two in Victoria, BC, Canada and one in an international location) and 24 credits through online distance learning.
== Internet Technology ==
Internet technology degrees are available online and on campus. The Internet Technology MS degree at the Seidenberg School of Computer Science and Information Systems at Pace University consists of a 12-credit foundational core, followed by a 12-credit concentration in E-Commerce or Security and Information Assurance. Other programs are available at other universities in the United States.
== Jurisprudence ==
Master of Jurisprudence (M.J.) is sometimes used as an alternative name for both the Master of Laws and the Master of Juridical Science. Offered within United States law schools, students in an M.J. curriculum are often business professionals or Juris Doctor degree holders who wish to enhance their knowledge of a specialized field of law. A Master of Jurisprudence is highly beneficial for those who need an in-depth understanding of the law in current executive-level positions. M.J. students are required to develop a comprehensive understanding of the operation of law as it applies to a specified area of law. An M.J. program combines graduate-level legal courses with MBA-style courses in concentrated areas of study. Master of Jurisprudence program offerings include, but are not limited to, degrees in Business and Corporate Governance Law, Health Law and Policy, and Child and Family Law. The M.J. program is typically 24 credit hours and can be completed in two years, or longer depending on the law student's enrollment status.
== Landscape Architecture ==
The Master of Landscape Architecture (MLA) degree is a professional degree in the field of landscape architecture.
== Laws ==
Master of Laws (LL.M.) is an advanced degree in law, pursued after earning a first degree in law within the U.S. or abroad, such as a Juris Doctor (J.D.). The LL.M. program typically lasts one year if taken full-time. For foreign law graduates, the LL.M. is similar to a 'study abroad program' and offers a general overview of the American Legal System. Domestic U.S. law graduates pursue the LL.M. for different reasons, largely academic. With the exception of LL.M. programs in highly specialized areas where advanced knowledge in a field is useful (e.g., Taxation, International Taxation, Intellectual Property; etc.), the Master of Laws is designed for those intending to teach law, whereas the J.D. is a professional doctorate.
== Leadership ==
The Master of Science in Leadership is an alternative to, not a substitute for, the traditional Master of Business Administration (MBA) degree. The MSL degree requirements may include some business courses that are required in an MBA program. However, this degree program concentrates heavily on leader-follower interactions, cross-cultural communications, coaching, team development, project leadership, and behavioral motivation theories; it does not concentrate on financial or quantitative analysis, marketing, or accounting, which are common in MBA programs. The degree program appeals to people already established in their careers. The MSL degree is similar to the Master of Science in Organizational Leadership (MSOL) degree.
== Liberal Studies ==
The Master of Arts in Liberal Studies (MALS), Master of Liberal Arts (MLA, ALM), and Master of Liberal Studies (MLS) are interdisciplinary master's degrees. Characteristics that distinguish these degrees from others include curricular flexibility, interdisciplinary synthesis via a master's thesis or capstone project, and optional part-time enrollment.
== Library Science ==
A Master of Library Science (MLS) degree is the culmination of an interdisciplinary program encompassing information science, information management, librarianship, and related topics. Modern variants include the Master of Library and Information Studies (MLIS), Master of Science in Information Studies (MSIS), Master of Librarianship, Master of Information Management and Systems (MIMS), Master of Science in Library Science (MSLS), and others. Some universities use standard degree titles such as Master of Arts (University of Iowa) and Master of Science (University of Illinois) for their library science master's degrees, while others, such as the University of Michigan, use Master of Arts in Library Science (AMLS).
== Logistics, Trade, and Transportation ==
The Master of Science in Logistics, Trade, and Transportation (MS LTT) at the University of Southern Mississippi is an interdisciplinary program of 30 total credit hours. The program can be completed in one year and customized to meet career advancement needs. It comprises courses in logistics, supply chain management, global trade and economic development, business, and other areas.
== Management ==
The Master of Arts in Management (MAM), in the United States, is a professional graduate degree that prepares business professionals for senior-level management positions and executive leadership of organizations and corporations (for-profit, nonprofit, and public sector). The business MAM degree should not be confused with master's degrees in arts management, which may be more numerous in the U.S. The MAM is a specialized degree that focuses studies on all areas of business, e.g., strategic planning and leadership, marketing analysis and strategy, operations management, project management, human resource management, organizational design and development, finance, accounting, management and contract negotiations, statistical methods and applications, economic theory, and research.
A characteristic that tends to distinguish the MAM and MBA business degrees from other master's degrees in business is the absence of in-depth study within one particular area of business. M.S. business degrees typically focus on traditional areas like finance, accounting, and information systems. The MAM student's courses cover advanced management and strategic leadership as they apply to all areas of business, e.g., accounting, finance, operations management, marketing, strategic planning, and human resources. The MAM student "masters the art of management." MBA and MAM degrees are both master's-level business degrees that cover broad and general content.
Master of Science programs in management involve coursework focusing on one area of business, such as management information systems, finance, or accounting.
== Management in the Network Economy ==
Management in the Network Economy is a one- or two-year interdisciplinary post-graduate program that uniquely blends information economics, technology management and business administration, in order to forge leaders able to understand and manage the complexity of organizations and markets in the digital economy.
In certain universities, such as the Catholic University in Italy, this master's encompasses typical courses of a Master of Information Systems Management (MISM or MIS/M) along with the business knowledge offered by a Master of Business Administration (MBA) or a Master of Business and Organizational Leadership (MBOL).
== Mathematics ==
The master's degree in mathematics may be in either pure or applied mathematics; it is usually awarded as an MA, sometimes as an MS.
== Marketing and Communication ==
The Master of Science in Marketing and Communication (MCM) is an integrated marketing communication graduate degree offered at Franklin University in Columbus, Ohio.
The Master of Science in Integrated Marketing Communications (IMC) is a graduate degree offered at West Virginia University in Morgantown, WV, through WVU's Reed College of Media.
== Marketing Research ==
Master of Marketing Research (MMR) is a specialized degree in marketing focusing on research. It is sometimes called a Master of Science in Marketing Research (MSMR).
== Mass Communication ==
The Master of Mass Communications is a two- to three-year degree in the field of journalism and mass communications that prepares degree candidates for careers in media management. Students typically undertake courses in media law, marketing, integrated communications, research methods, and management.
== Medical Education ==
The Master of Science in Medical Education prepares physicians for careers as academic clinician-educator-scholars through didactic training in medical education and research methods, and mentored education research.
== Medical Science ==
The Master of Medical Science is a two-year postgraduate degree in medical research, usually for those already holding a Doctor of Medicine degree.
== Ministry ==
The Master of Ministry is a two- to three-year multidisciplinary degree with a program typically designed to apply appropriate theological principles to practice-based settings and serve as a foundation for an original research project. Such a program responds to the need for structured learning and theological development among professionals serving Church, non-profit, public, and private sector organizations.
Typical concentrations (or majors) include: missions, evangelism, pastoral counseling, chaplaincy, church growth and development, Christian administration, homiletics, spiritual formation, pastoral theology, church administration, Biblical counseling, clergy, Biblical archeology, religious education, Christian management, church music, social work, and spiritual direction.
== Music ==
Master of Music, usually abbreviated M.Mus. or M.M., is a one- to two-year graduate degree that combines advanced studies in an applied area of specialization (usually music performance, composition, or conducting) with graduate-level academic study in subjects such as music history, music theory, or music pedagogy. The degree prepares students to be professional performers, conductors, and composers, and is often required as the minimum teaching credential for university, college, and conservatory instrumental or vocal teaching positions. Other related degrees include the Master of Music Education (M.Mus.Ed.), Master of Arts in Music Education (M.A.), Master of Sacred Music (M.S.M.), and Master of Worship Studies (M.W.S.).
== Natural Resources ==
The Master of Natural Resources (MNR) program is a graduate degree program designed for natural resource instructors and practitioners. The program, offered remotely and in person from several institutions, focuses on several aspects of natural resource policy, management, and assessment. MNR degrees often do not include a thesis, instead opting for 30-35 credits of courses taught by industry professionals and research professors.
== Natural Sciences ==
The Master of Science in Natural Sciences (MSNS) program is a graduate degree program designed for elementary, middle and high school science teachers, stressing content and the processes of natural sciences.
== Nonprofit Management ==
Master of Nonprofit Organizations (MNPO or MNO) and Master of Nonprofit Management (MNM) programs offer specialized, graduate-level knowledge for individuals currently working in the nonprofit sector or in organizations that partner with the nonprofit sector or for those seeking a career in the nonprofit sector. The program provides advanced knowledge in nonprofit management, resource development, strategic planning, and program evaluation that serves to enhance the education and career development of students. This degree program provides opportunities for students to prepare for employment or to advance their careers as administrators in nonprofit organizations. MNPO, MNO, and MNM programs are offered through a range of academic units, including schools and departments of social work, business, management, public administration, and independent units.
== Nurse Anesthesia ==
The Master of Science in Nurse Anesthesia degree prepares students to master the intellectual and technical skills required to become competent in the safe administration of anesthesia.
== Nursing ==
The Master of Science in Nursing (MSN) is the most common title for a graduate professional degree in nursing. A few schools also use the titles Master of Nursing or Master of Arts. Admittance into an MSN program requires that the applicant be a registered nurse (RN), hold an up-to-date license, and have successfully completed a Bachelor of Science in Nursing (BSN) degree program. A full-time student can expect to complete the degree program in 18 to 24 months; a part-time student could take anywhere from three to five years to complete an MSN program.
Completion of an MSN/NP program earns nursing professionals the title of Nurse Practitioner (NP). While registered nurses administer medication and perform basic diagnostic tests, nurse practitioners order and analyze diagnostic tests, as well as prescribe treatments. A nurse practitioner's entry-level salary is typically $20,000 more than a registered nurse's.
== Occupational Therapy ==
The Master of Occupational Therapy is awarded to students who have completed a post-graduate course of study, and is now the entry-level degree for this profession. This degree is sometimes also conferred as a Master of Science in Occupational Therapy.
== Organizational Leadership ==
The Master of Science in Organizational Leadership (MSOL) is a multidisciplinary master's program that focuses on values-based leadership. The courses focus on the development of relationships between organizational members, effective decision-making processes, and an understanding of how modern technology can best support leaders. The MSOL degree is an alternative to, not a substitute for, an MBA; the programs differ in content and purpose. The MSOL degree is multidisciplinary and focuses more on people and organization issues and less on business topics such as finance, accounting, and marketing. For example, MSOL students take courses in psychology and philosophy as well as courses in business and management. The MSOL degree is intended for those who are already established in a career; in contrast, those who are preparing to enter the world of work or change careers often seek an MBA degree. Although the MBA degree is more widely known, degree programs like the MSOL are being developed across the country because of increasing demand for ethical organizational leadership.
== Pacific International Affairs ==
The Master of Pacific International Affairs degree is a professional master's degree that provides training in various aspects of international affairs, including international business management, international politics, public policy, international environmental policy and development, and non-profit management. The program requires mastery of a Pacific Rim language, quantitative and economic analysis techniques, and a regional focus.
== Pharmacy ==
The Master of Pharmacy degree is awarded to students who have completed the four-year undergraduate pharmacy course. Students who complete three years of the course but do not finish it are usually awarded a bachelor's degree in pharmaceutical science.
== Philosophy ==
In the United States and Canada, a Master of Philosophy or Magister Philosophiae (MPhil) degree is sometimes awarded to ABD (all but dissertation) doctoral candidates who have completed all coursework, passed their written and oral examinations, and met any other special requirements before beginning work on the doctoral dissertation. Such programs generally award the M.A. to students who have completed all coursework and preliminary exams (about two years after the B.A.), and the M.Phil. after advanced exams (comprehensives) and all language requirements have been met, and a dissertation topic approved (usually a year after the M.A.).
In other countries, assuming all requirements are met, the MPhil degree is generally awarded after about one year of full-time study towards a doctorate. The MPhil is considered equivalent to the former French DEA (Diplôme d'études approfondies) and Spanish DEA (Diploma de Estudios Avanzados).
== Physician Assistant Studies ==
The Master of Physician Assistant Studies is a professional degree providing training in the profession of a physician assistant to practice medicine based on the medical school model. This degree is also sometimes seen as an MS in Physician Assistant Studies (MSPAS) or the Master of Physician Assistant Practice (MPAP).
== Professional Counseling ==
The Master of Arts in Professional Counseling (MAPC) is a two-year program that prepares individuals for the independent professional practice of psychological counseling, involving the rendering of therapeutic services to individuals and groups experiencing psychological problems and exhibiting distress symptoms. The program includes instruction in counseling theory, therapeutic intervention strategies, patient/counselor relationships, testing and assessment methods and procedures, group therapy, marital and family therapy, child and adolescent therapy, supervised counseling practice, ethical standards, and applicable regulations.
== Professional Studies ==
The Master of Professional Studies (MPS or MProfStuds) is a terminal interdisciplinary degree and is sometimes used by programs that do not fit into any traditional categories. In some cases it is used as a replacement for an MFA in programs with a heavy technology focus, such as NYU's Interactive Telecommunications Program. Other programs use it for organizational studies or interdisciplinary social science programs.
== Professional Writing ==
The Master of Professional Writing (MPW) degree is a professional graduate degree program that prepares candidates for a wide variety of writing-related positions in business, education, publishing, and the arts. Coursework in three concentrations - applied writing, composition and rhetoric, and creative writing - allows students to gain theoretical and practical knowledge in various fields of professional writing.
== Project Management ==
The Master of Project Management is a terminal professional degree awarded to students who have completed a post-graduate course of study, and is usually associated with construction management, urban planning, or architecture and engineering design management. A limited number of universities and schools worldwide are accredited by the Global Accreditation Center of the Project Management Institute (PMI); to earn accreditation, they must meet the standards of this leading association for project management professionals.
== Public Administration ==
The Master of Public Administration (M.P.A. or MPA; MAP in Québec) degree is one of several Master's level professional public affairs degrees that provides training in public policy and project and program implementation (more recently known as public management).
MPA programs focus on public administration at the local, state/provincial, national/federal and supranational levels, as well as in the nongovernmental organization (NGO) and nonprofit sectors. In the course of its history the MPA degree has become more interdisciplinary by drawing from fields such as economics, sociology, anthropology, political science, and regional planning in order to equip MPA graduates with skills and knowledge covering a broad range of topics and disciplines relevant to the public sector. A core curriculum of a typical MPA program usually includes courses on microeconomics, public finance, research methods / statistics, policy process and policy analysis, ethics, public management, leadership, planning & GIS, and program evaluation/performance measurement. Depending on their interest, MPA students can focus their studies on a variety of public sector fields such as urban planning, emergency management, transportation, health care (particularly public health), economic development, urban management, community development, education, non-profits, information technology, environmental policy, etc.
== Public Health ==
The Master of Public Health and Master of Science in Public Health degrees are awarded to students who have completed a post-graduate course of study in public health. The MPH is considered a management/leadership degree specific to the fields related to public health, while the MSPH is considered an academic degree with a focus on empirical research methodologies.
== Public Management ==
The Master of Public Management is offered through Carnegie Mellon University's Heinz College. This master's-level degree is conferred upon students who have completed a post-graduate course of study, and is usually associated with broadening the students' understanding of social, political, technological, and economic processes, as well as paradigms of organizational and human behavior.
== Public Policy ==
The Master of Public Policy is a master's-level professional degree that provides training in policy analysis and program evaluation at public policy schools. Over time, the curricula of the Master of Public Policy and the Master of Public Administration (M.P.A.) degrees have blended and converged, due to the realization that policy analysis and program evaluation could benefit from an understanding of public administration, and vice versa. However, MPP programs still place more emphasis on policy analysis, research, and evaluation, while MPA programs place more emphasis on the operationalization of public policies and the design of effective programs and projects to achieve public policy goals. Over the years MPP programs have become more interdisciplinary, drawing from economics, sociology, anthropology, politics, and regional planning. Depending on their interests, MPP students can concentrate in many policy areas including, but not limited to, urban policy, global policy, social policy, health policy, non-profit management, transportation, economic development, education, and information technology. Students interested in pursuing a degree program focused entirely on global public policy can also consider Master of Public Policy and Global Affairs (M.P.P.G.A.) programs.
== Public Service ==
The Master of Arts in Public Service (MAPS) degree is a professional graduate degree program which offers specializations in areas of administration of justice, dispute resolution, health care administration, leadership studies, and non-profit studies.[2]
== Quality Assurance ==
The Master of Science in Quality Assurance (MSQA) is a graduate degree designed for quality management professionals in diverse industries including service, manufacturing, software, government and health care organizations.
== Resource Management ==
The Master of Resource Management (M.R.M.) is designed for recent graduates from a range of disciplines, and for individuals with experience in private organizations or public agencies dealing with natural resources and the environment. Relevant disciplines of undergraduate training or experience include fields such as biology, engineering, chemistry, forestry and geology, as well as business administration, economics, geography, planning and a variety of social sciences. The M.R.M. degree provides training for professional careers in private or public organizations and preparation for further training for research and academic careers.
== Sacred Music ==
The Master of Sacred Music is a graduate degree combining academic studies in theology with applied studies in music.
== Security Technologies ==
Master of Science in Security Technologies (MSST) is a 32-credit, thesis-based graduate program covering security as it relates to technology, intelligence collection, policy, law, cyber and physical security management, and security methodology. The program is within the College of Science and Engineering at the University of Minnesota.
== Social Work ==
The Master of Social Work (MSW) is a professional graduate degree preparing students to become professional social workers, typically in either direct practice or community practice. MSW programs require students to complete an extensive field practicum, under mentorship of a senior social worker. MSW programs in the United States are accredited by the Council on Social Work Education.
The degree title MSW is not used by all social work schools in the US; for example, the University of Chicago uses A.M. and Columbia University uses M.S.
== Strategic Leadership ==
The Master of Science in Strategic Leadership (MSSL) is an executive graduate degree in organizational leadership and management development, teaching the skills and knowledge needed to work effectively with people, organizational systems, and complex information. MSSL objectives embody development of a leadership skill set, strategies for problem solving, and solutions to facilitate and manage change applicable in business and not-for-profit environments.
== Taxation ==
The Master of Science in Taxation (MST) is a professional graduate degree designed for Certified Public Accountants (CPAs) and other tax professionals.
== Teaching ==
Coursework and practice leading to a Master of Arts in Teaching (MAT) degree is intended to prepare individuals for a teaching career in a specific subject of middle and/or secondary-level curricula (i.e., middle or high school). The MAT differs from the MEd degree in that the course requirements are dominated by classes in the subject area to be taught (e.g., foreign language, math, science, etc.) rather than educational theory; and that the MAT candidate does not already hold a teaching credential whereas the MEd candidate will. The MAT often is the initial teacher education program for those who hold a bachelor's degree in the subject that they intend to teach. Work toward most MAT degrees will, however, necessarily include classes on educational theory in order to meet program and state requirements. Work toward the MAT degree may also include practica (i.e., student teaching). This abbreviation is also sometimes used to refer to a Master's in Theology (see ThM).
The Master of Arts in Teaching, or MAT, differs from the M.Ed. and the other Master's degrees in education primarily in that the majority of coursework focuses on the subject to be taught (i.e. history, English, math, biology, etc.) rather than on educational theory. While some online MAT programs offer a more general overview of the foundations of effective teaching, most MAT programs combine the study of widely established ‘best practices’ in the classroom with a focus on teaching within a specific discipline. Either way, the MA in Teaching is truly a teaching degree. Individuals who pursue the Master of Arts in Teaching generally choose to remain in the classroom. An MAT can also provide an educator with the appropriate credentials to become a department chairperson.
== Theology ==
The Master of Divinity (M.Div.) is the first professional degree in ministry (in the United States and Canada) and is a common academic degree among theological seminaries. It is typically three years in length. Other theology degree titles used are Master of Theology (Th.M. or M.Th.), Master of Theological Studies (M.T.S.), Master of Arts in Practical Theology (M.A.P.T.), and Master of Sacred Theology (S.T.M.).
== Urban Planning ==
The Master of Urban Planning (MUP), Master of City and Regional Planning (MCRP), Master of Urban and Regional Planning (MURP), Master of Environmental Design (MEDes (planning)) and Master of City Planning (MCP) are professional degrees in the study of urban planning.
== Urban Studies ==
The Master of Urban Studies degree is primarily focused on urban issues, including planning.
== References == | Wikipedia/Master_of_Science_in_Computer_Science |
Occupational therapy (OT), also known as ergotherapy, is a healthcare profession. Ergotherapy derives from the Greek ergon, which relates to work, acting, and being active. Occupational therapy is based on the assumption that engaging in meaningful activities, also referred to as occupations, is a basic human need and that purposeful activity has a health-promoting and therapeutic effect. Occupational science, the study of humans as 'doers' or 'occupational beings', was developed by interdisciplinary scholars, including occupational therapists, in the 1980s.
The World Federation of Occupational Therapists (WFOT) defines occupational therapy as 'a client-centred health profession concerned with promoting health and wellbeing through occupation. The primary goal of occupational therapy is to enable people to participate in the activities of everyday life. Occupational therapists achieve this outcome by working with people and communities to enhance their ability to engage in the occupations they want to, need to, or are expected to do, or by modifying the occupation or the environment to better support their occupational engagement'.
Many of the Member Organisations of WFOT have agreed a national definition of occupational therapy. In New Zealand occupational therapy is translated into Māori as 'whakaora ngangahau'. 'Whakaora' means 'to restore to health' and 'ngangahau' is an adjective meaning 'active, spirited, zealous'.
Education programmes leading to entry to practice as an occupational therapist can be at diploma, baccalaureate, bachelor's, master's or doctoral level. Information about entry-level education programmes, currently or previously approved by WFOT, is available on the WFOT website.
Occupational therapy is an allied health profession. In England, allied health professions (AHPs) are the third largest clinical workforce in health and care. Fifteen professions, with 352,593 registrants, are regulated by the Health and Care Professions Council in the United Kingdom.
== History ==
The earliest evidence of using occupations as a method of therapy can be found in ancient times. In c. 100 BCE, Greek physician Asclepiades treated patients with a mental illness humanely using therapeutic baths, massage, exercise, and music. Later, the Roman Celsus prescribed music, travel, conversation and exercise to his patients. However, by medieval times the use of these interventions with people with mental illness was rare, if not nonexistent.
=== Moral treatment and graded activity ===
In late 18th-century Europe, doctors such as Philippe Pinel and Johann Christian Reil reformed the mental asylum system. Their institutions used rigorous work and leisure activities. This became part of what was known as moral treatment. Although it was thriving in Europe, interest in the reform movement fluctuated in the United States throughout the 19th century.
In the late 19th and early 20th centuries, the establishment of public health measures to control infectious diseases included the building of fever hospitals. Patients with tuberculosis were recommended to have a regime of prolonged bed rest followed by a gradual increase in exercise.
This was a time in which the rising incidence of disability related to industrial accidents, tuberculosis, and mental illness brought about an increasing social awareness of the issues involved.
The Arts and Crafts movement that took place between 1860 and 1910 also impacted occupational therapy. The movement emerged against the monotony and lost autonomy of factory work in the developed world. Arts and crafts were used to promote learning through doing, provided a creative outlet, and served as a way to avoid boredom during long hospital stays.
From the late 1870s, Scottish tuberculosis doctor Robert William Philip prescribed graded activity, from complete rest through gentle exercise to activities such as digging, sawing, carpentry, and window cleaning. During this period a farm colony near Edinburgh and a village settlement near Papworth in England were established, both of which aimed to employ people in appropriate long-term work prior to their return to open employment.
=== Development into a health profession ===
In the United States, the health profession of occupational therapy was conceived in the early 1910s as a reflection of the Progressive Era. Early professionals merged highly valued ideals, such as a strong work ethic and the importance of crafting with one's own hands, with scientific and medical principles.
American social worker Eleanor Clarke Slagle (1870-1942) is considered to be the "mother" of occupational therapy. Slagle proposed habit training as a primary occupational therapy model of treatment. Based on the philosophy that engagement in meaningful routines shapes a person's wellbeing, habit training focused on creating structure and balance between work, rest, and leisure. In 1912, she became director of a department of occupational therapy at The Henry Phipps Psychiatric Clinic in Baltimore.
=== World War I ===
In 1915, Slagle worked at the first occupational therapy training program, the Henry B. Favill School of Occupations at Hull House in Chicago.
British-Canadian teacher and architect Thomas B. Kidner was appointed vocational secretary of the Canadian Military Hospitals Commission in January 1916. He was given the duty of preparing soldiers returning from World War I to return to their former vocational duties, or retraining soldiers no longer able to perform their previous duties. He developed a program that engaged soldiers recovering from wartime injuries or tuberculosis in occupations even while they were still bedridden. Once the soldiers were sufficiently recovered, they would work in a curative workshop and eventually progress to an industrial workshop before being placed in an appropriate work setting. He used occupations (daily activities) as a medium for manual training and for helping injured individuals return to productive duties such as work.

The entry of the United States into World War I in April 1917 was a crucial event in the history of the profession. Up until this time, occupational therapy was not formalised into a profession. U.S. involvement in the war led to an escalating number of injured and disabled soldiers, which presented a daunting challenge to those in command.
The inaugural meeting of the National Society for the Promotion of Occupational Therapy (NSPOT) was held in Clifton Springs, New York, 15-17 March 1917. The meeting was attended by six founders: George Edward Barton, William Rush Dunton, Eleanor Clarke Slagle, Thomas B. Kidner, Susan Cox Johnson, and Isabel Gladwin Newton Barton. Susan E. Tracy and Herbert James Hall did not attend but are considered near founders of the Society.
The military enlisted the assistance of NSPOT to recruit and train over 1,200 "reconstruction aides" to help with the rehabilitation of those wounded in the war.
Dunton's 1918 article "The Principles of Occupational Therapy" appeared in the journal Public Health, and laid the foundation for the textbook he published in 1919 entitled Reconstruction Therapy.
Dunton struggled with "the cumbersomeness of the term occupational therapy", as he thought it lacked the "exactness of meaning which is possessed by scientific terms". Other titles such as "work-cure", "ergo therapy" (ergo being the Greek root for "work"), and "creative occupations" were discussed as substitutes, but ultimately, none possessed the broad meaning that the practice of occupational therapy demanded in order to capture the many forms of treatment that existed from the beginning. NSPOT formally adopted the name "occupational therapy" for the field in 1921.
=== Inter-war period ===
There was a struggle to keep people in the profession during the post-war years. Emphasis shifted from the altruistic war-time mentality to the financial, professional, and personal satisfaction that comes with being a therapist. To make the profession more appealing, practice was standardized, as was the curriculum. Entry and exit criteria were established, and the American Occupational Therapy Association advocated for steady employment, decent wages, and fair working conditions. Via these methods, occupational therapy sought and obtained medical legitimacy in the 1920s.
The emergence of occupational therapy challenged the views of mainstream scientific medicine. Instead of focusing purely on the medical model, occupational therapists argued that a complex combination of social, economic, and biological reasons cause dysfunction. Principles and techniques were borrowed from many disciplines—including but not limited to physical therapy, nursing, psychiatry, rehabilitation, self-help, orthopedics, and social work—to enrich the profession's scope.
The 1920s and 1930s were a time of establishing standards of education and laying the foundation of the profession and its organization. Eleanor Clarke Slagle proposed a 12-month course of training in 1922, and these standards were adopted in 1923. In 1928, William Rush Dunton published another textbook, Prescribing Occupational Therapy. Educational standards were expanded to a total training time of 18 months in 1930 to place the requirements for professional entry on par with those of other professions. By the early 1930s, AOTA had established educational guidelines and accreditation procedures.
Margaret Barr Fulton became the first US qualified occupational therapist to work in the United Kingdom in 1925. She qualified at the Philadelphia School in the United States and was appointed to the Aberdeen Royal Hospital for mental patients where she worked until her retirement in 1963. US-style OT was introduced into England by Dr Elizabeth Casson who had visited similar establishments in America. (Casson had also earlier worked under the transformative English social reformer Octavia Hill.) In 1929 she established her own residential clinic in Bristol, Dorset House, for "women with mental disorders", and worked as its medical director. It was here in 1930 that she founded the first school of occupational therapy in the UK.
The Scottish Association of Occupational Therapists was founded in 1932. The profession was served in the rest of the UK by the Association of Occupational Therapists from 1936. (The two later merged to form what is today the Royal College of Occupational Therapists in 1974.)
=== World War II ===
With the US entry into World War II and the ensuing skyrocketing demand for occupational therapists to treat those injured in the war, the field of occupational therapy underwent dramatic growth and change. Occupational therapists needed to be skilled not only in the use of constructive activities such as crafts, but also increasingly in the use of activities of daily living.
The body that is now Occupational Therapy Australia began in 1944.
=== Post-World War II ===
Another occupational therapy textbook, edited by Helen S. Willard and Clare S. Spackman, was published in the United States in 1947. The profession continued to grow and redefine itself in the 1950s. In 1954, AOTA created the Eleanor Clarke Slagle Lectureship Award in its namesake's honor. Each year, this award recognizes a member of AOTA "who has creatively contributed to the development of the body of knowledge of the profession through research, education, or clinical practice." The profession also began to assess the potential for the use of trained assistants in the attempt to address the ongoing shortage of qualified therapists, and educational standards for occupational therapy assistants were implemented in 1960.
The 1960s and 1970s were a time of ongoing change and growth for the profession as it struggled to incorporate new knowledge and cope with the recent and rapid growth of the profession in the previous decades. New developments in the areas of neurobehavioral research led to new conceptualizations and new treatment approaches, possibly the most groundbreaking being the sensory integrative approach developed by A. Jean Ayres.
The profession has continued to grow and expand its scope and settings of practice. Occupational science, the study of occupation, was founded in 1989 by Elizabeth Yerxa at the University of Southern California as an academic discipline to provide foundational research on occupation to support and advance the practice of occupation-based occupational therapy, as well as offer a basic science to study topics surrounding "occupation".
In addition, occupational therapy practitioners' roles have expanded to include political advocacy (from a grassroots base to higher legislation); for example, in 2010 PL 111-148, titled the Patient Protection and Affordable Care Act, included a habilitation clause that was passed in large part due to AOTA's political efforts. Furthermore, occupational therapy practitioners have been striving personally and professionally toward concepts of occupational justice and other human rights issues that have both local and global impacts. The World Federation of Occupational Therapists' Resource Centre has many position statements on occupational therapy's roles regarding participation in human rights issues.
In 2021, U.S. News & World Report ranked occupational therapy #19 on its list of '100 Best Jobs'.
== Practice frameworks ==
An occupational therapist works systematically with a client through a sequence of actions called an "occupational therapy process." There are several versions of this process. All practice frameworks include the components of evaluation (or assessment), intervention, and outcomes. This process provides a framework through which occupational therapists assist and contribute to promoting health and ensures structure and consistency among therapists.
=== Occupational Therapy Practice Framework (OTPF, United States) ===
The Occupational Therapy Practice Framework (OTPF) is the core competency of occupational therapy in the United States. The OTPF is divided into two sections: domain and process. The domain includes the environment and client factors, such as the individual's motivation, health status, and status in performing occupational tasks. The domain takes the contextual picture into account to help the occupational therapist understand how to diagnose and treat the patient. The process is the set of actions taken by the therapist to implement a plan and strategy to treat the patient.
=== Canadian Practice Process Framework ===
The Canadian Model of Client-Centred Enablement (CMCE) embraces occupational enablement as the core competency of occupational therapy, and the Canadian Practice Process Framework (CPPF) as the core process of occupational enablement in Canada. The CPPF has eight action points and three contextual elements; the action points include setting the stage, evaluating, agreeing on an objective plan, implementing the plan, monitoring and modifying, and evaluating the outcome. A central element of this process model is the focus on identifying both client and therapist strengths and resources prior to developing the outcomes and action plan.
=== International Classification of Functioning, Disability and Health (ICF) ===
The International Classification of Functioning, Disability and Health (ICF) is the World Health Organisation's framework to measure health and ability by illustrating how these components impact one's function. This relates very closely to the Occupational Therapy Practice Framework, as it is stated that "the profession's core beliefs are in the positive relationship between occupation and health and its view of people as occupational beings". The ICF is built into the 2nd edition of the practice framework. Activities and participation examples from the ICF overlap Areas of Occupation, Performance Skills, and Performance Patterns in the framework. The ICF also includes contextual factors (environmental and personal factors) that relate to the framework's context. In addition, body functions and structures classified within the ICF help describe the client factors described in the Occupational Therapy Practice Framework. Further exploration of the relationship between occupational therapy and the components of the ICIDH-2 (revision of the original International Classification of Impairments, Disabilities, and Handicaps (ICIDH), which later became the ICF) was conducted by McLaughlin Gray.
It is noted in the literature that occupational therapists should use specific occupational therapy vocabulary along with the ICF in order to ensure correct communication about specific concepts. The ICF might lack certain categories to describe what occupational therapists need to communicate to clients and colleagues. It also may not be possible to exactly match the connotations of the ICF categories to occupational therapy terms. The ICF is not an assessment and specialized occupational therapy terminology should not be replaced with ICF terminology. The ICF is an overarching framework for current therapy practices.
== Occupations ==
According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework: Domain and Process, 4th Edition (OTPF-4), occupations are defined as "everyday activities that people do as individuals, in families, and with communities to occupy time and bring meaning and purpose to life. Occupations include things people need to, want to, and are expected to do". Occupations are central to a client's (person's, group's, or population's) health, identity, and sense of competence, and have particular meaning and value to that client. Occupations include activities of daily living (ADLs), instrumental activities of daily living (IADLs), education, work, play, leisure, social participation, and rest and sleep.
== Practice settings ==
According to the 2019 Salary and Workforce Survey by the American Occupational Therapy Association, occupational therapists work in a wide variety of practice settings, including: hospitals (28.6%), schools (18.8%), long-term care facilities/skilled nursing facilities (14.5%), free-standing outpatient (13.3%), home health (7.3%), academia (6.9%), early intervention (4.4%), community (2.4%), mental health (2.2%), and other (1.6%). According to the AOTA, the most common primary work setting for occupational therapists is in hospitals. Also according to the survey, 46% of occupational therapists work in urban areas, 39% work in suburban areas, and the remaining 15% work in rural areas.
The Canadian Institute for Health Information (CIHI) found that, as of 2020, nearly half (46.1%) of occupational therapists worked in hospitals, 43.2% worked in community health, 3.6% worked in long-term care (LTC), and 7.1% worked in "other" settings, including government, industry, manufacturing, and commercial settings. The CIHI also found that 68% of occupational therapists in Canada work in urban settings and only 3.7% work in rural settings.
== Areas of practice in the United States ==
=== Children and youth ===
Occupational therapists work with infants, toddlers, children, youth, and their families in a variety of settings, including schools, clinics, homes, hospitals, and the community. Evaluation assesses the child's ability to engage in daily, meaningful occupations; the underlying skills (or performance components), which may be physical, cognitive, or emotional in nature; and the fit between the client's skills and the environments and contexts in which the client functions. OT intervention involves evaluating a young person's occupational performance in areas such as feeding, play, socialization (in ways that align with the young person's neurodiversity), daily living skills, and attending school. In planning treatment, occupational therapists work in collaboration with the children and teens themselves, parents, caregivers, and teachers in order to develop functional goals within a variety of occupations meaningful to the young client.
Early intervention addresses the daily functioning of children from birth to three years of age. OTs who practice in early intervention support a family's ability to care for their child with special needs and promote the child's function and participation in the most natural environment. Each eligible child is required to have an Individualized Family Service Plan (IFSP) that focuses on the family's goals for the child. An OT may also serve as the family's service coordinator and facilitate the team process of creating the IFSP.
Objectives that an occupational therapist addresses with children and youth may take a variety of forms. Examples are as follows:
Providing rehabilitation activities to children with neuromuscular disabilities such as cerebral palsy.
Supporting self-regulation within neurodivergent children whose neurobiology does not align with the sensory environment or the contexts in which they function.
Facilitating coping skills to a child with generalized anxiety disorder.
Consulting with teachers, psychologists, social workers, parents/caregivers, and other professionals who work with children regarding modifications, accommodations and supports in a variety of areas, such as sensory processing, motor planning, visual processing, and executive function skills.
Providing individualized treatment for sensory processing differences.
Providing splinting and caregiver education in a hospital burn unit.
Instructing caregivers in regard to mealtime intervention for autistic children who have feeding challenges.
Facilitating handwriting development through providing intervention to develop fine motor and writing readiness skills in school-aged children.
In the United States, pediatric occupational therapists work in the school setting as a "related service" for children with an Individual Education Plan (IEP). Every student who receives special education and related services in the public school system is required by law to have an IEP, which is a very individualized plan designed for each specific student (U.S. Department of Education, 2007). Related services are "developmental, corrective, and other supportive services as are required to assist a child with a disability to benefit from special education," and include a variety of professions such as speech–language pathology and audiology services, interpreting services, psychological services, and physical and occupational therapy.
As a related service, occupational therapists work with children with varying disabilities to address those skills needed to access the special education program and support academic achievement and social participation throughout the school day (AOTA, n.d.-b). In doing so, occupational therapists help children fulfill their role as students and prepare them to transition to post-secondary education, career and community integration (AOTA, n.d.-b).
Occupational therapists have specific knowledge to increase participation in school routines throughout the day, including:
Modifying the school environment to allow physical access for children with disabilities
Providing assistive technology to support student success
Helping to plan instructional activities for implementation in the classroom
Supporting the needs of students with significant challenges, such as helping to determine methods for alternate assessment of learning
Helping students develop the skills necessary to transition to post-high school employment, independent living or further education (AOTA).
Other settings, such as homes, hospitals, and the community are important environments where occupational therapists work with children and teens to promote their independence in meaningful, daily activities. Outpatient clinics offer a growing OT intervention referred to as "Sensory Integration Treatment". This therapy, provided by experienced and knowledgeable pediatric occupational therapists, was originally developed by A. Jean Ayres, an occupational therapist. Sensory integration therapy is an evidence-based practice which enables children to better process and integrate sensory input from the child's body and from the environment, thus improving his or her emotional regulation, ability to learn, behavior, and functional participation in meaningful daily activities.
Recognition of occupational therapy programs and services for children and youth is increasing worldwide. Occupational therapy for both children and adults is now recognized by the United Nations as a human right which is linked to the social determinants of health. As of 2018, there are over 500,000 occupational therapists working worldwide (many of whom work with children) and 778 academic institutions providing occupational therapy instruction.
=== Health and wellness ===
According to the American Occupational Therapy Association's (AOTA) Occupational Therapy Practice Framework, 3rd Edition, the domain of occupational therapy is described as "Achieving health, well-being, and participation in life through engagement in occupation". Occupational therapy practitioners have a distinct value in their ability to utilize daily occupations to achieve optimal health and well-being. By examining an individual's roles, routines, environment, and occupations, occupational therapists can identify the barriers in achieving overall health, well-being and participation.
Occupational therapy practitioners can intervene at the primary, secondary, and tertiary levels to promote health and wellness. Health and wellness can be addressed in all practice settings, with the goals of preventing disease and injury and promoting healthy lifestyle practices for those with chronic diseases. Two occupational therapy programs that have emerged targeting health and wellness are the Lifestyle Redesign Program and the REAL Diabetes Program.
Occupational therapy interventions for health and wellness vary in each setting:
==== School ====
Occupational therapy practitioners target school-wide advocacy for health and wellness, including bullying prevention, backpack awareness, recess promotion, school lunches, and PE inclusion. They also work extensively with students with learning disabilities, such as those on the autism spectrum.
A study conducted in Switzerland showed that a large majority of occupational therapists collaborate with schools, half of them providing direct services within mainstream school settings. The results also show that services were mainly provided to children with medical diagnoses, focusing on the school environment rather than the child's disability.
==== Outpatient ====
Occupational therapy practitioners conduct 1:1 treatment sessions and group interventions to address: leisure, health literacy and education, modified physical activity, stress/anger management, healthy meal preparation, and medication management.
==== Acute care ====
Occupational therapy practitioners in acute care assess whether a patient has the cognitive, emotional and physical ability as well as the social supports needed to live independently and care for themselves after discharge from the hospital. Occupational therapists are uniquely positioned to support patients in acute care as they focus on both clinical and social determinants of health.
Services delivered by occupational therapists in acute care include:
Direct rehabilitation interventions, individually or in group settings, to address the physical, emotional and cognitive skills that are required for the patient to perform self-care and other important activities.
Caregiver training to assist patients after discharge.
Recommendations for adaptive equipment for increased safety and independence with activities of daily living (e.g. aids for getting dressed, shower chairs for bathing, and medication organizers for self-administering medications).
Home safety assessments, with suggested modifications for improved safety and function after discharge.
Occupational therapists use a variety of models, including the Model of Human Occupation, the Person-Environment-Occupation model, and the Canadian Model of Occupational Performance, to adopt a client-centered approach to discharge planning. Hospital spending on occupational therapy services in acute care was found to be the single most significant spending category in reducing the risk of hospital readmission for heart failure, pneumonia, and acute myocardial infarction.
==== Community-based ====
Occupational therapy practitioners develop and implement community-wide programs to assist in disease prevention and encourage healthy lifestyles by conducting education classes for prevention, facilitating gardening, offering ergonomic assessments, and offering enjoyable leisure and physical activity programs.
=== Mental health ===
Occupational therapy's foundation in mental health is deeply rooted in the moral treatment movement, which sought to replace the harsh treatment of mental disorders with the establishment of healthy routines and engagement in meaningful activities. This movement significantly influenced the development of occupational therapy, particularly through the contributions of early 20th-century practitioners and theorists like Adolf Meyer, who emphasized a holistic approach to mental health care (Christiansen & Haertl, 2014).
According to the American Occupational Therapy Association (AOTA), occupational therapy is based on the principle that "active engagement in occupation promotes, facilitates, supports, and maintains health and participation" (AOTA, 2017). Occupations refer to individuals' activities to structure their time and provide meaning. The primary goals of occupational therapy include promoting physical and mental health and well-being and establishing, restoring, maintaining, and improving function and quality of life for individuals at risk of or affected by physical or mental health disorders (AOTA, 2017).
==== Education and professional qualifications ====
Occupational therapists require a master's degree or clinical doctorate, while occupational therapy assistants need at least an associate's degree. Their education encompasses extensive mental health-related topics, including biological, physical, social, and behavioral sciences, and supervised clinical experiences culminating in full-time internships. Both must pass national examinations and meet state licensure requirements. Occupational therapists apply mental and physical health knowledge, focusing on participation and occupation, using performance-based assessments to understand the relationship between occupational participation and well-being. Their education covers various aspects of mental health, including neurophysiological changes, human development, historical and contemporary perspectives on mental health, and current diagnostic criteria. This comprehensive training prepares occupational therapy practitioners to address the complex interplay of client variables, activity demands, and environmental factors in promoting health and managing health challenges (Bazyk & Downing, 2017).
==== Occupational therapy's role in mental health practice ====
Occupational therapy practitioners play a critical role in mental health by using therapeutic activities to promote mental health and support full participation in life for individuals at risk of or experiencing psychiatric, behavioral, and substance use disorders. They work across the lifespan and in various settings, including homes, schools, workplaces, community environments, hospitals, outpatient clinics, and residential facilities (AOTA, 2017). Occupational therapists and occupational therapy assistants assume diverse roles, such as case managers, care coordinators, group facilitators, community mental health providers, consultants, program developers, and advocates. Their interventions aim to facilitate engagement in meaningful occupations, enhance role performance, and improve overall well-being. This involves analyzing, adapting, and modifying tasks and environments to support clients' goals and optimal engagement in daily activities (AOTA, 2017).
Occupational therapy practitioners utilize clinical reasoning, informed by various theoretical perspectives and evidence-based approaches, to guide evaluation and intervention. They are skilled in analyzing the complex interplay among client variables, activity demands, and the environments where participation occurs. For an individual experiencing mental health issues, the ability to participate actively in occupations may be hindered. For example, an individual diagnosed with depression or anxiety may experience interruptions in sleep, difficulty completing self-care tasks, decreased motivation to participate in leisure activities, decreased concentration for school or job-related work, and avoidance of social interactions.
Occupational therapy utilizes the public health approach to mental health (WHO, 2001) which emphasizes the promotion of mental health as well as the prevention of, and intervention for, mental illness. This model highlights the distinct value of occupational therapists in mental health promotion, prevention, and intensive interventions across the lifespan (Miles et al., 2010). Below are the three major levels of service:
==== Tier 3: intensive interventions ====
Intensive interventions are provided for individuals with identified mental, emotional, or behavioral disorders that limit daily functioning, interpersonal relationships, feelings of emotional well-being, and the ability to cope with challenges in daily life. Occupational therapy practitioners are committed to the recovery model which focuses on enabling persons with mental health challenges through a client-centered process to live a meaningful life in the community and reach their potential (Champagne & Gray, 2011).
The focus of intensive interventions (direct services to individuals or groups, and consultation) includes: engagement in occupation to foster recovery or "reclaiming mental health," resulting in optimal levels of community participation, daily functioning, and quality of life; functional assessment and intervention, including skills training, accommodations, and compensatory strategies (Brown, 2012); and the identification and implementation of healthy habits, rituals, and routines to support wellness.
==== Tier 2: targeted services ====
Targeted services are designed to prevent mental health problems in persons who are at risk of developing mental health challenges, such as those who have emotional experiences (e.g., trauma, abuse), situational stressors (e.g., physical disability, bullying, social isolation, obesity) or genetic factors (e.g., family history of mental illness). Occupational therapy practitioners are committed to early identification of and intervention for mental health challenges in all settings.
The focus of targeted services (small groups, consultation, accommodations, education) includes: engagement in occupations to promote mental health and diminish early symptoms; small therapeutic groups (Olson, 2011); and environmental modifications to enhance participation (e.g., creating sensory-friendly classroom, home, or work environments).
==== Tier 1: universal services ====
Universal services are provided to all individuals with or without mental health or behavioral problems, including those with disabilities and illnesses (Barry & Jenkins, 2007). Occupational therapy services focus on mental health promotion and prevention for all: encouraging participation in health-promoting occupations (e.g., enjoyable activities, healthy eating, exercise, adequate sleep); fostering self-regulation and coping strategies (e.g., mindfulness, yoga); promoting mental health literacy (e.g., knowing how to take care of one's mental health and what to do when experiencing symptoms associated with ill mental health). Occupational therapy practitioners develop universal programs and embed strategies to promote mental health and well-being in a variety of settings, from schools to the workplace.
The focus of universal services (individual, group, school-wide, or employee/organizational level) includes: universal programs that help all individuals successfully participate in occupations that promote positive mental health (Bazyk, 2011); educational and coaching strategies with a wide range of relevant stakeholders, focusing on mental health promotion and prevention; the development of coping strategies and resilience; and environmental modifications and supports to foster participation in health-promoting occupations.
=== Productive aging ===
Occupational therapists work with older adults to maintain independence, participate in meaningful activities, and live fulfilling lives. Some examples of areas that occupational therapists address with older adults are driving, aging in place, low vision, and dementia or Alzheimer's disease (AD). When addressing driving, driver evaluations are administered to determine whether drivers are safe behind the wheel. To enable the independence of older adults at home, occupational therapists perform falls risk assessments, assess clients' functioning in their homes, and recommend specific home modifications. When addressing low vision, occupational therapists modify tasks and the environment. While working with individuals with AD, occupational therapists focus on maintaining quality of life, ensuring safety, and promoting independence.
=== Geriatrics/productive aging ===
Occupational therapists address all aspects of aging, from health promotion to the treatment of various disease processes. The goal of occupational therapy for older adults is to ensure that they can maintain independence, and to reduce the health care costs associated with hospitalization and institutionalization. In the community, occupational therapists can assess an older adult's ability to drive and determine whether they are safe to do so. If an individual is found unsafe to drive, the occupational therapist can assist with finding alternative transportation options.

Occupational therapists also work with older adults in their homes as part of home care. In the home, an occupational therapist can work on fall prevention, maximizing independence with activities of daily living, ensuring safety, and enabling the person to stay in the home for as long as they wish. An occupational therapist can also recommend home modifications to ensure safety in the home. Many older adults have chronic conditions such as diabetes, arthritis, and cardiopulmonary conditions; occupational therapists can help manage these conditions by offering education on energy conservation or coping strategies.

Occupational therapists work with older adults not only in their homes, but also in hospitals, nursing homes, and post-acute rehabilitation. In nursing homes, the role of the occupational therapist includes working with clients and caregivers on education for safe care, modifying the environment, addressing positioning needs, and enhancing IADL skills. In post-acute rehabilitation, occupational therapists work with clients to get them back home, at their prior level of function, after a hospitalization for an illness or accident. Occupational therapists also play a unique role for those with dementia: the therapist may modify the environment to ensure safety as the disease progresses, and provide caregiver education to prevent burnout. In palliative and hospice care, the goal is to ensure that the roles and occupations the individual finds meaningful continue to be meaningful; if the person is no longer able to perform these activities, the occupational therapist can offer new ways to complete them, taking into consideration the environment along with psychosocial and physical needs. Beyond these traditional settings, occupational therapists also work in senior centers and assisted living facilities (ALFs).
=== Visual impairment ===
Visual impairment is one of the top 10 disabilities among American adults. Occupational therapists work with other professions, such as optometrists, ophthalmologists, and certified low vision therapists, to maximize the independence of persons with a visual impairment by using their remaining vision as efficiently as possible. AOTA's promotional goal of "Living Life to Its Fullest" speaks to learning who people are and what they want to do, particularly when promoting participation in meaningful activities regardless of a visual impairment. Populations that may benefit from occupational therapy include older adults, persons with traumatic brain injury, adults with the potential to return to driving, and children with visual impairments. Visual impairments addressed by occupational therapists may be characterized as one of two types: low vision or neurological visual impairment. An example of a neurological impairment is cortical visual impairment (CVI), defined as "...abnormal or inefficient vision resulting from a problem or disorder affecting the parts of the brain that provide sight". The following section discusses the role of occupational therapy in working with people with visual impairments.
Occupational therapy for older adults with low vision includes task analysis, environmental evaluation, and modification of tasks or the environment as needed. Many occupational therapy practitioners work closely with optometrists and ophthalmologists to address visual deficits in acuity, visual field, and eye movement in people with traumatic brain injury, including providing education on compensatory strategies to complete daily tasks safely and efficiently. Adults with a stable visual impairment may benefit from occupational therapy through the provision of a driving assessment and an evaluation of the potential to return to driving. Lastly, occupational therapy practitioners enable children with visual impairments to complete self-care tasks and participate in classroom activities using compensatory strategies.
=== Adult rehabilitation ===
Occupational therapists address the need for rehabilitation following an injury or impairment. When planning treatment, occupational therapists address the physical, cognitive, psychosocial, and environmental needs involved in adult populations across a variety of settings.
Occupational therapy in adult rehabilitation may take a variety of forms:
Working with adults with autism at day rehabilitation programs to promote successful relationships and community participation through instruction on social skills
Increasing the quality of life for an individual with cancer by engaging them in occupations that are meaningful, providing anxiety and stress reduction methods, and suggesting fatigue management strategies
Coaching individuals with hand amputations how to put on and take off a myoelectrically controlled limb as well as training for functional use of the limb
Pressure sore prevention for those with sensation loss, such as in spinal cord injuries
Using and implementing new technology such as speech-to-text software and Nintendo Wii video games
Communicating via telehealth methods as a service delivery model for clients who live in rural areas
Working with adults who have had a stroke to regain their activities of daily living
=== Assistive technology ===
Occupational therapy practitioners, or occupational therapists (OTs), are uniquely poised to educate, recommend, and promote the use of assistive technology to improve the quality of life for their clients. OTs are able to understand the unique needs of the individual in regards to occupational performance and have a strong background in activity analysis to focus on helping clients achieve goals. Thus, the use of varied and diverse assistive technology is strongly supported within occupational therapy practice models.
=== Travel occupational therapy ===
Because of the rising need for occupational therapy practitioners in the U.S., many facilities are opting for travel occupational therapy practitioners, who are willing to travel, often out of state, to work temporarily in a facility. Assignments can range from 8 weeks to 9 months, but typically last 13–26 weeks. Travel therapists work in many different settings, but the highest need is in home health and skilled nursing facility settings. There are no additional educational requirements to become a travel occupational therapy practitioner; however, different state licensure guidelines and practice acts must be followed. According to ZipRecruiter, as of July 2019 the national average salary for a full-time travel therapist was $86,475, with a range of $62,500 to $100,000 across the United States. Most commonly (43%), travel occupational therapists enter the industry between the ages of 21 and 30.
=== Occupational justice ===
The practice area of occupational justice relates to the "benefits, privileges and harms associated with participation in occupations" and the effects related to access or denial of opportunities to participate in occupations. This theory brings attention to the relationship between occupations, health, well-being, and quality of life. Occupational justice can be approached individually and collectively. The individual path includes disease, disability, and functional restrictions. The collective way consists of public health, gender and sexual identity, social inclusion, migration, and environment. The skills of occupational therapy practitioners enable them to serve as advocates for systemic change, impacting institutions, policy, individuals, communities, and entire populations. Examples of populations that experience occupational injustice include refugees, prisoners, homeless persons, survivors of natural disasters, individuals at the end of their life, people with disabilities, elderly living in residential homes, individuals experiencing poverty, children, immigrants, and LGBTQI+ individuals.
For example, the role of an occupational therapist working to promote occupational justice may include:
Analyzing tasks and modifying activities and environments to minimize barriers to participation in meaningful activities of daily living.
Addressing physical and mental aspects that may hinder a person's functional ability.
Providing intervention that is relevant to the client, family, and social context.
Contributing to global health by advocating for individuals with disabilities to participate in meaningful activities on a global level; occupational therapists are involved with the World Health Organization (WHO), non-governmental organizations, community groups, and policymaking to influence the health and well-being of individuals with disabilities worldwide.
Occupational therapy practitioners' role in occupational justice is not only to align with perceptions of procedural and social justice, but also to advocate for the inherent need for meaningful occupation and the ways it promotes a just society, well-being, and quality of life among people in their context. Clinicians are encouraged to consider occupational justice in their everyday practice, with the intention of helping people participate in the tasks they want and need to do.
=== Occupational injustice ===
In contrast, occupational injustice relates to conditions wherein people are deprived of, excluded from, or denied opportunities that are meaningful to them. Types of occupational injustice, with examples from OT practice, include:
Occupational deprivation: The exclusion from meaningful occupations due to external factors that are beyond the person's control. For example, a person with difficulties with functional mobility may find it challenging to reintegrate into the community due to transportation barriers.
OTs can help in raising awareness and bringing communities together to reduce occupational deprivation
OTs can recommend the removal of environmental barriers to facilitate occupation, whilst designing programs that enable engagement.
Advocating by providing information to policymakers to prevent unintended occupational deprivation and to increase social cohesion and inclusion
Occupational apartheid: The exclusion of a person in chosen occupations due to personal characteristics such as age, gender, race, nationality, or socioeconomic status. An example can be seen in children with developmental disabilities from low socioeconomic backgrounds whose families would opt out of therapy due to financial constraints.
OTs providing interventions within a segregated population must focus on increasing occupational engagement through large-scale environmental modification and occupational exploration.
OTs can address occupational engagement through group and individual skill-building opportunities, as well as community-based experiences that explore free and local resources
Occupational marginalization: Relates to how implicit norms of behavior or societal expectations prevent a person from engaging in a chosen occupation. For example, a child with physical impairments may only be offered table-top leisure activities, instead of sports, as an extracurricular activity because of the functional limitations caused by those impairments.
OTs can design, develop, and/or provide programs that mitigate the negative impacts of occupational marginalization and enhance optimal levels of performance and wellbeing that enable participation
Occupational imbalance: The limited participation in a meaningful occupation brought about by another role in a different occupation. This can be seen in the situation of a caregiver of a person with a disability who also has to fulfill other roles such as being a parent to other children, a student, or a worker.
OTs can advocate for fostering supportive environments for participation in occupations that promote individuals' well-being, and for building healthy public policy
Occupational alienation: The imposition of an occupation that does not hold meaning for the person. In the OT profession, this manifests in the provision of rote activities that do not relate to the client's goals or interests.
OTs can develop individualized activities tailored to the interests of the individual to maximize their potential.
OTs can design, develop and promote programs that can be inclusive and provide a variety of choices that the individual can engage in.
Within occupational therapy practice, injustice may ensue in situations wherein professional dominance, standardized treatments, laws, and political conditions negatively impact the occupational engagement of clients. Awareness of these injustices enables therapists to reflect on their own practice and consider ways of approaching clients' problems while promoting occupational justice.
=== Community-based therapy ===
As occupational therapy (OT) has grown and developed, community-based practice has blossomed from an emerging area of practice into a fundamental part of occupational therapy practice (Scaffa & Reitz, 2013). Community-based practice allows OTs to work with clients and other stakeholders, such as families, schools, employers, agencies, service providers, stores, day treatment and day care, and others who may influence the degree of success the client will have in participating. It also allows the therapist to see what is actually happening in context and to design interventions relevant to what might support the client's participation and what is impeding it. Community-based practice crosses all of the categories within which OTs practice, from physical to cognitive and mental health to spiritual; all types of clients may be seen in community-based settings. The role of the OT may also vary, from advocate to consultant, direct care provider to program designer, adjunctive services to therapeutic leader.
=== Nature-based therapy ===
Nature-based interventions and outdoor activities may be incorporated into occupational therapy practice as they can provide therapeutic benefits in various ways. Examples include therapeutic gardening, animal-assisted therapy (AAT), and adventure therapy.
For instance, parents reported improvement in the emotional regulation and social engagement of their children with autism spectrum disorder (ASD) in a study of parental perceptions regarding the outcomes of AAT conducted with trained dogs. They also observed reductions in problematic behaviors. A source cited in the study found similar results with AAT employing horses and llamas.
Gardening in a group setting may serve as a complementary intervention in stroke rehabilitation; in addition to being mentally restful and conducive to social connection, it helps patients master skills and can remind them of experiences from their past. Royal Rehab's Productive Garden Project in Australia, managed by a horticultural therapist, allows patients and practitioners to participate in meaningful activity outside the usual healthcare settings. Thus, tending a garden helps facilitate experiential activities, perhaps attaining a better balance between clinical and real-life pursuits during rehabilitation, in lieu of mainly relying on clinical interventions.
For adults with acquired brain injury, nature-based therapy has been found to improve motor abilities, cognitive function, and general quality of life. Contributing to a theoretical understanding of such successes in nature-based approaches are: nature's positive impact on problem solving and the refocusing of attention; an innate human connection with, and positive response to, the natural world; an increased sense of well-being when in contact with nature; and the emotional, nonverbal, and cognitive aspects of human-environment interaction.
== Education ==
Worldwide, there is a range of qualifications required to practice as an occupational therapist or occupational therapy assistant. Depending on the country and expected level of practice, degree options include the associate degree, bachelor's degree, entry-level master's degree, post-professional master's degree, entry-level doctorate (OTD), post-professional doctorate (DrOT or OTD), Doctor of Clinical Science in OT (CScD), Doctor of Philosophy in Occupational Therapy (PhD), and combined OTD/PhD degrees.
Both occupational therapist and occupational therapy assistant roles exist internationally. Currently in the United States, dual points of entry exist for both OT and OTA programs: for OT programs, an entry-level master's or an entry-level doctorate; for OTA programs, an associate or a bachelor's degree.
The World Federation of Occupational Therapists (WFOT) has minimum standards for the education of OTs, which were revised in 2016. All educational programs around the world need to meet these minimum standards, which may be supplemented by academic standards set by a country's national accreditation organization. As part of the minimum standards, all programs must have a curriculum that includes practice placements (fieldwork). Examples of fieldwork settings include: acute care, inpatient hospital, outpatient hospital, skilled nursing facilities, schools, group homes, early intervention, home health, and community settings.
The profession of occupational therapy is based on a wide theoretical and evidence-based background. The OT curriculum focuses on the theoretical basis of occupation through multiple facets of science, including occupational science, anatomy, physiology, biomechanics, and neurology. This scientific foundation is integrated with knowledge from psychology, sociology and other disciplines.
In the United States, Canada, and other countries around the world, there is a licensure requirement. In order to obtain an OT or OTA license, one must graduate from an accredited program, complete fieldwork requirements, and pass a national certification examination.
== Philosophical underpinnings ==
The philosophy of occupational therapy has evolved over the history of the profession. The philosophy articulated by the founders owed much to the ideals of romanticism, pragmatism and humanism, which are collectively considered the fundamental ideologies of the past century.
One of the most widely cited early papers about the philosophy of occupational therapy was presented by Adolf Meyer, a psychiatrist who had emigrated to the United States from Switzerland in the late 19th century and who was invited to present his views to a gathering of the new Occupational Therapy Society in 1922. At the time, Dr. Meyer was one of the leading psychiatrists in the United States and head of the new psychiatry department and Phipps Clinic at Johns Hopkins University in Baltimore, Maryland.
William Rush Dunton, a supporter of the National Society for the Promotion of Occupational Therapy, now the American Occupational Therapy Association, sought to promote the ideas that occupation is a basic human need, and that occupation is therapeutic. From his statements came some of the basic assumptions of occupational therapy, which include:
Occupation has a positive effect on health and well-being.
Occupation creates structure and organizes time.
Occupation brings meaning to life, culturally and personally.
Occupations are individual. People value different occupations.
These assumptions have been developed over time and are the basis of the values that underpin the Codes of Ethics issued by the national associations. The relevance of occupation to health and well-being remains the central theme.
In the 1950s, criticism from medicine and the multitude of disabled World War II veterans resulted in the emergence of a more reductionistic philosophy. While this approach led to developments in technical knowledge about occupational performance, clinicians became increasingly disillusioned and re-considered these beliefs. As a result, client centeredness and occupation have re-emerged as dominant themes in the profession. Over the past century, the underlying philosophy of occupational therapy has evolved from being a diversion from illness, to treatment, to enablement through meaningful occupation.
Three commonly mentioned philosophical precepts of occupational therapy are that occupation is necessary for health, that its theories are based on holism, and that its central components are people, their occupations (activities), and the environments in which those activities take place. However, there have been some dissenting voices. Mocellin, in particular, advocated abandoning the notion of health through occupation, proclaiming it obsolete in the modern world, and questioned the appropriateness of advocating holism when practice rarely supports it. Some values formulated by the American Occupational Therapy Association have also been critiqued as therapist-centric and as failing to reflect the modern reality of multicultural practice.
In recent times occupational therapy practitioners have challenged themselves to think more broadly about the potential scope of the profession, and expanded it to include working with groups experiencing occupational injustice stemming from sources other than disability. Examples of new and emerging practice areas would include therapists working with refugees, children experiencing obesity, and people experiencing homelessness.
== Theoretical frameworks ==
A distinguishing facet of occupational therapy is that therapists often espouse the use of theoretical frameworks to frame their practice. Many have argued, however, that the use of theory complicates everyday clinical care and is not necessary to provide patient-driven care.
Note that terminology differs between scholars. An incomplete list of theoretical bases for framing a human and their occupations includes the following:
=== Generic models ===
Generic models are the overarching title given to a collation of compatible knowledge, research and theories that form conceptual practice. More generally they are defined as "those aspects which influence our perceptions, decisions and practice".
The Person Environment Occupation Performance model (PEOP) was originally published in 1991 (Charles Christiansen & M. Carolyn Baum) and describes an individual's performance based on four elements: environment, person, performance, and occupation. The model focuses on the interplay of these components and how this interaction works to inhibit or promote successful engagement in occupation.
==== Occupation-focused practice models ====
Occupational Therapy Intervention Process Model (OTIPM) (Anne Fisher and others)
Occupational Performance Process Model (OPPM)
Model of Human Occupation (MOHO) (Gary Kielhofner and others)
MOHO was first published in 1980. It explains how people select, organise and undertake occupations within their environment. The model is supported with evidence generated over thirty years and has been successfully applied throughout the world.
Canadian Model of Occupational Performance and Engagement (CMOP-E)
This framework originated in 1997 with the Canadian Association of Occupational Therapists (CAOT) as the Canadian Model of Occupational Performance (CMOP). It was expanded in 2007 by Polatajko, Townsend and Craik to add engagement. The framework upholds the view that three components (the person, the environment, and occupation) are related, with engagement added to encompass occupational performance. A visual model depicts the person at the center as a triangle, whose three points represent cognitive, affective, and physical components around a spiritual center. The person triangle is surrounded by an outer ring symbolizing the environmental context and an inner ring symbolizing the occupational context.
Occupational Performance Model – Australia (OPM-A) (Chris Chapparo & Judy Ranka)
The OPM-A was conceptualized in 1986, with its current form launched in 2006. The OPM-A illustrates the complexity of occupational performance and the scope of occupational therapy practice, and provides a framework for occupational therapy education.
Kawa (River) Model (Michael Iwama)
==== Biopsychosocial models ====
Engel's biopsychosocial model takes into account how disease and illness can be affected by social, environmental, psychological, and bodily functions. The model is unique in that it treats the client's subjective experience and the client-provider relationship as factors in wellness. It also accounts for cultural diversity, as many countries have different societal norms and beliefs. It is a multifactorial, multidimensional model for understanding the causes of disease, and it takes a person-centered approach in which the provider has a more participatory and reflective role.
Other models which incorporate biological (body and brain), psychological (mind), and social (relational, attachment) elements influencing human health include interpersonal neurobiology (IPNB), polyvagal theory (PVT), and the dynamic-maturational model of attachment and adaptation (DMM). The latter two in particular provide detail about the source, mechanism and function of somatic symptoms. Kasia Kozlowska describes how she uses these models to better connect with clients and to understand complex human illness, and how she includes occupational therapists as part of a team to address functional somatic symptoms. Her research indicates that children with functional neurological disorders (FND) utilize higher, or more challenging, DMM self-protective attachment strategies to cope with their family environments, and examines how those strategies impact functional somatic symptoms.
Pamela Meredith and colleagues have been exploring the relationship between the attachment system and psychological and neurobiological systems with implications for how occupational therapists can improve their approach and techniques. They have found correlations between attachment and adult sensory processing, distress, and pain perception. In a literature review, Meredith identified a number of ways that occupational therapists can effectively apply an attachment perspective, sometimes uniquely.
=== Frames of reference ===
Frames of reference are an additional knowledge base for the occupational therapist to develop their treatment or assessment of a patient or client group. Though there are conceptual models (listed above) that allow the therapist to conceptualise the occupational roles of the patient, it is often important to use further reference to embed clinical reasoning. Therefore, many occupational therapists will use additional frames of reference to both assess and then develop therapy goals for their patients or service users.
Biomechanical frame of reference
The biomechanical frame of reference is primarily concerned with motion during occupation. It is used with individuals who experience limitations in movement, inadequate muscle strength or loss of endurance in occupations. The frame of reference was not originally compiled by occupational therapists, and therapists should translate it to the occupational therapy perspective, to avoid the risk of movement or exercise becoming the main focus.
Rehabilitative (compensatory)
Neurofunctional (Gordon Muir Giles and Clark-Wilson)
Dynamic systems theory
Client-centered frame of reference
This frame of reference is developed from the work of Carl Rogers. It views the client as the center of all therapeutic activity, and the client's needs and goals direct the delivery of the occupational therapy process.
Cognitive-behavioural frame of reference
Ecology of human performance model
The recovery model
Sensory integration
The sensory integration framework is commonly implemented in clinical, community, and school-based occupational therapy practice. It is most frequently used with children with developmental delays and developmental disabilities such as autism spectrum disorder, sensory processing disorder, and dyspraxia. Core features of sensory integration in treatment include providing opportunities for the client to experience and integrate feedback using multiple sensory systems, providing therapeutic challenges to the client's skills, integrating the client's interests into therapy, organizing the environment to support the client's engagement, facilitating a physically safe and emotionally supportive environment, modifying activities to support the client's strengths and weaknesses, and creating sensory opportunities within the context of play to develop intrinsic motivation. While sensory integration is traditionally implemented in pediatric practice, there is emerging evidence for the benefits of sensory integration strategies for adults.
== See also ==
Busy work
Occupational apartheid
Occupational therapy and substance use disorder
Occupational therapy in the management of cerebral palsy
Occupational therapy in Greece
Occupational therapy in the United Kingdom
== References ==
American Occupational Therapy Association (2014c). Occupational therapy practice framework: Domain and process (3rd ed.). American Journal of Occupational Therapy, 68(Suppl. 1), S1–S48. https://doi.org/10.5014/ajot.2014.682006
American Occupational Therapy Association (2017). Mental Health Promotion, Prevention, and Intervention in Occupational Therapy Practice. The American Journal of Occupational Therapy. 71(Suppl. 2). https://doi.org/10.5014/ajot.2017.716S03
Christiansen, C. H., & Haertl, K. (2014). A contextual history of occupational therapy. In B. A. B. Schell, G. Gillen, & M. E. Scaffa (Eds.), Willard and Spackman's occupational therapy (12th ed., pp. 9–34). Philadelphia: Lippincott Williams & Wilkins.
== External links ==
World Federation of Occupational Therapists
The Master of Economics (MEcon or MEc) is a postgraduate master's degree in economics comprising training in economic theory, econometrics, and/or applied economics. The degree is also offered as an MS or MSc, MA or MCom in economics; variants are the Master in Economic Sciences (MEconSc) and the Master of Applied Economics.
== Structure ==
The degree may be offered as a terminal degree or as additional preparation for doctoral study, and is sometimes offered as a professional degree, such as the emerging MPS in Applied Economics.
The program emphases and curricula will differ correspondingly. The course of study for the master's degree lasts from one to two years. A thesis is often required, particularly for terminal degrees.
Many universities (in the United States) do not offer the master's degree directly; rather, the degree is routinely awarded as a master's degree "en route", after completion of a designated phase of the PhD program in economics.
Entry requirements are undergraduate work in (calculus-based) economics, at least at the "intermediate" level and often as a major, together with a sufficient level of mathematical training, including courses in probability and statistics, often (multivariable) calculus and linear algebra, and sometimes mathematical analysis.
== Curriculum ==
Typically, the curriculum is structured around core topics, with any optional coursework complementary to the program focus. The core modules are usually in microeconomic theory, macroeconomic theory and econometrics.
At this level, the topics covered include microfoundations and dynamic stochastic general equilibrium; models allow for heterogeneity, relaxing the idea of a representative agent.
Sometimes, topics from heterodox economics are introduced.
Econometrics extends the undergraduate domain to multiple linear regression and multivariate time series, and introduces simultaneous equation methods and generalized linear models.
Game theory and computational economics are often included.
Some (doctoral) programs include core work in economic history.
Theory-focused degrees will tend to cover these core topics more mathematically, and emphasize econometric theory as opposed to econometric techniques and software; these will also require a separate course in mathematical economics.
Note, though, that regardless of focus, most programs "now place a marked emphasis on the primacy of mathematics", and many universities thus also require "Quantitative Techniques for Economics" or "Math for Economics", especially where mathematical economics is not a core course.
The optional or additional coursework will depend on the program's emphasis.
In theory-focused degrees, and those preparing students for doctoral work, this coursework is often in these same core topics, but in greater depth.
In terminal or applied or career-focused degrees, options may include public finance, labour-, financial-, development-, industrial-, health- or agricultural economics.
These degrees may also allow for a specialization in one of these areas, and may be named correspondingly (for example, Master's in Financial Economics, Master's in International Economics, Master's in Development Economics, Master's in Sustainable Economic Development, and Master's in Agricultural Economics).
Recently, the more general Master of Applied Economics combines economic theory with selections from finance and data analytics, including machine learning and data science.
== See also ==
Bachelor of Economics
Category:Economics schools
Civilekonom; Civiløkonom; Siviløkonom; Cand.oecon.
Economics education
Humanistic economics
Economist § Professions
European Joint Master degree in Economics
QEM
EMLE
Outline of economics
== References ==
== External links and references ==
Discussion
Graduate Degrees in Economics, The American Economic Association
Masters degrees in economics: How to be a leader of the future, The Independent
Is an M.A. in Economics a Waste of Time?, economics.about.com (archived)
Getting an MBA vs. a Master's in Finance or Economics, mbapodcaster.com
Applying to graduate school in economics, davidson.edu
Books to Study Before Going to Graduate School in Economics (archived 2011-07-07 at the Wayback Machine), economics.about.com
Lists of programs
Alphabetical List of U.S. Programs
The Master of Finance is a master's degree awarded by universities or graduate schools preparing students for careers in finance. The degree is often titled Master in Finance (M.Fin., MiF, MFin), or Master of Science in Finance (MSF in North America, and MSc in Finance in the UK and Europe). In the U.S. and Canada the program may be positioned as a professional degree. Particularly in Australia, the degree may be offered as a Master of Applied Finance (MAppFin). In some cases, the degree is offered as a Master of Management in Finance (MMF). More specifically focused and titled degrees are also offered.
== Structure ==
MSF and M.Fin / MSc programs differ as to career preparation and hence degree focus, with the former centered on financial management and investment management, and the latter on more technical roles (although, see below for further discussion of this distinction). Both degree types, though, emphasize quantitative topics, and may also offer some non-quantitative elective coursework, such as corporate governance, business ethics and business strategy. Programs generally require one to two years of study, and are often offered as a non-thesis degree.
The MSF program typically prepares graduates for careers in corporate finance, investment banking and investment management. The core curriculum is thus focused on managerial finance, corporate finance and investment analysis. These topics are usually preceded by more fundamental coursework in economics, (managerial) accounting, and "quantitative methods" (usually time value of money and business statistics). In many programs, these fundamental topics are a prerequisite for admission or assumed as known, and where they are part of the curriculum, students with an appropriate background may be exempt from them. The program usually concludes with coursework in advanced topics, where several areas are integrated or applied, such as portfolio management, financial modeling, mergers and acquisitions, real options, and lately Fintech; in some programs quantitative finance, analytics, and managerial economics may also be offered as advanced courses.
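To make the "time value of money" component of that fundamental coursework concrete, the sketch below shows the standard present-value calculation such courses begin with; it is a minimal illustration of the textbook formula PV = FV / (1 + r)^n, not material drawn from any particular program, and the function name and figures are invented for the example.

```python
# Minimal sketch of a core "time value of money" calculation:
# the present value of a single future cash flow, discounted at a
# periodic rate r over n periods: PV = FV / (1 + r)**n.

def present_value(future_value: float, rate: float, periods: int) -> float:
    """Discount a single future cash flow back to today."""
    return future_value / (1.0 + rate) ** periods

if __name__ == "__main__":
    # $1,000 received in 5 years, discounted at 8% per year,
    # is worth roughly $680.58 today.
    print(round(present_value(1_000, 0.08, 5), 2))
```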
The M.Fin / MSc prepares graduates for more technical roles, and thus "focuses on the theory and practice of finance" with a "strong emphasis on financial economics in addition to financial engineering and computational methods." The MSF core topics are (often) also covered, although in (substantially) less detail. Elective work includes specific topics in quantitative finance and computational finance, but also in corporate finance, private equity and the like; several of the MSF advanced topics, such as real options and managerial economics, will thus also be offered, differing here in their more technical orientation. For coverage of quantitative finance as compared to more specialized degrees, see below. Topics (or specializations) in data science, machine learning and business analytics are becoming common.
The MSF-M.Fin distinction is not absolute: some MSF programs, although general in coverage, are "quantitatively rigorous" or offer a "quantitative track" (and may be STEM-designated), while others are specifically technically oriented, or, in some cases, even offer a finance and mathematics dual degree.
Also, although the "MSc in Finance" generally corresponds to the M.Fin, many schools offer a range of MSc programs where finance may be combined with accountancy and/or management, and these then correspond to the MSF.
MMF programs may, similarly, offer either broad- or specialized finance coverage.
Many MSc programs are further specialized, with the degree as a whole focused on, for example, financial management, behavioral finance, Islamic finance, personal finance / financial planning, or wealth management.
As mentioned, these degrees may be specifically titled, e.g. MSc in Investment Management, Master of Financial Planning, MSc Financial Management, Masters in Corporate Finance, and MS in Fintech.
Degrees in Applied Risk Management / ERM may be offered here, while more technical and mathematical programs are usually offered through an MQF or similar; see below.
The MAppFin spans the MSF-M.Fin spectrum in terms of available specializations and corresponding coursework; it differs in that it is "for and by practitioners" and therefore "blends... finance theory with industry practice", as appropriate to the specialization. Similar to the MSc, programs are sometimes specifically focused on, for example, financial planning or banking, as opposed to more general coverage of finance. Some universities offer both the MAppFin and the MFin, with the latter requiring additional semesters and coursework (and alone providing access to doctoral study). These programs may also differ as to entrance requirements.
Programs require a bachelor's degree for admission, but many do not require that the undergraduate major be in finance, economics, or even general business. The usual requirement is a sufficient level of numeracy, often including exposure to probability / statistics and calculus. The M.Fin and MSc will often require more advanced topics such as multivariate calculus, linear algebra and differential equations; these may also require a greater background in Finance or Economics than the MSF. Some programs may require work experience (sometimes at the managerial level), particularly if the candidate lacks a relevant undergraduate degree.
== Comparison with other qualifications ==
Although there is some overlap with an MBA, the finance Masters provides a broader and deeper exposure to finance, but more limited exposure to general management topics. Thus, the program focuses on finance and financial markets, while an MBA, by contrast, is more diverse, covering general aspects of business, such as human resource management and operations management. At the same time, an MBA without a specialization in finance will not have covered many of the topics dealt with in the MSF (breadth), and, often, even where there is a specialization, the areas covered may be treated in less depth (certainly as regards the M.Fin).
MBA candidates will sometimes "dual major" with an MBA/MSF — certain universities also offer this combination as a joint degree — or later pursue an M.Fin degree to gain specialized finance knowledge; some universities offer an advanced certificate in finance appended to the MBA, allowing students to complete coursework beyond the standard finance specialization.
Other specialized business Masters, such as the MSM (Finance) and the MCom (Finance), similarly correspond closely to the MSF. Note that the latter Master of Commerce is often theory-centric, placing less emphasis on practice; at the same time, notwithstanding its foundational courses in business, it often shares the same electives as the MFin.
As above, some MSF and all M.Fin programs overlap with degrees in financial engineering, computational finance and mathematical finance; see Master of Quantitative Finance (MQF). Note, however, that the treatment of any common topics (usually financial modeling, derivatives and risk management) will differ as to level of detail and approach. The MSF deals with these topics conceptually, as opposed to technically, and the overlap is therefore slight: although practical, these topics are too technical for a generalist finance degree, and the exposure will be limited to the generalist level. The M.Fin / MSc, on the other hand, cover these topics in a substantively mathematical fashion, and the treatment is often identical. The distinction here, though, is that these place relatively more emphasis on financial theory than the MQF, and also allow for electives outside of quantitative finance; at the same time, their range of quantitative electives is often smaller. Entrance requirements to the MQF are significantly more mathematical than for the MSF, while for the M.Fin / MSc the requirements may be identical.
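As an illustration of the "substantively mathematical" treatment of derivatives described above, the sketch below implements the textbook Black-Scholes price of a European call option, the kind of model an M.Fin / MSc (or MQF) course derives in full and an MSF course treats only conceptually. It is a generic example, not taken from any particular curriculum, and the parameter values are invented.

```python
# Illustrative sketch: Black-Scholes price of a European call,
# C = S*N(d1) - K*exp(-r*T)*N(d2), where
# d1 = (ln(S/K) + (r + sigma^2/2)*T) / (sigma*sqrt(T)) and d2 = d1 - sigma*sqrt(T).
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

if __name__ == "__main__":
    # Spot 100, strike 100, 5% rate, 20% volatility, one year to expiry.
    print(round(black_scholes_call(100, 100, 0.05, 0.20, 1.0), 2))  # about 10.45
```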
A Master of Financial Economics focuses on theoretical finance, and on developing models and theory. The overlap with the M.Fin / MSc, then, as with the MQF, is often substantial. As regards the MSF, on the other hand, although the two programs do differ in the weight assigned to theory, there is some overlap: firstly, some MSF curricula do include a formal study of finance theory; secondly, even where the theory is not studied formally, MSF programs do cover the assumptions underpinning the models studied (at least in overview); thirdly, many financial economics programs include coverage of individual financial instruments, corporate finance and portfolio management, although this treatment is usually less practical. (As regards managerial economics, similar comments apply: the course is taught to strengthen the theoretical underpinning of the degree; however, since the emphasis is on application, the theory is not developed in depth.) At some universities, the more general Master of Applied Economics combines economic theory with selections from finance and data analytics.
The Chartered Financial Analyst (CFA) designation is sometimes compared to a Masters in Finance. In fact, several universities have embedded a significant percentage of the CFA Program "Candidate Body of Knowledge" into their degree programs; and the degree title may reflect this: "Master in Financial Analysis" or similar.
In general, though, the CFA program is focused on portfolio management and investment analysis, and provides more depth in these areas than the standard Finance Masters, whereas for other areas of finance the CFA coverage is in less depth. Likewise, several programs have curricula aligned with the FRM / PRM, or the CAIA (note that the so-called "Indian C.F.A." is, in fact, a master's degree).
A further distinction — as regards all such designations — is that (most) Masters programs include practice on, for example, the Bloomberg Terminal, and in building advanced financial models, while "hands on" training of this sort will not (typically) be included in a professional certification program.
== See also ==
Outline of finance
List of master's degrees
Master of Financial Economics
Master of Quantitative Finance
Master of Economics
QEM
Category:Professional certification in finance
Bachelor of Finance
Financial analyst § Qualification
== References ==
== External links ==
FT Ranking of post-experience Masters in Finance programmes
FT Ranking of pre-experience Masters in Finance programmes
Master of Corporate Communication (MCC), or Master of Science in Corporate Communication (MSc.CC), is a post-graduate master's degree designed to prepare communication professionals who will in time function as corporate communication officers (CCOs) at a strategic level in the organization. The MCC program structure and admissions are similar to those of the Master of Business Administration and Master of Science in Management degrees. At some universities the equivalent of the MCC is a Master's in Communication, a Master (of Arts) in Public Relations, or a Master (of Science) in Communication Management.
== Admission ==
B-school MCC admission committees normally evaluate applicants based on résumé, letters of recommendation, and work experience, as well as on a candidate's bachelor's degree GPA and, if applicable, graduate GPA.
Based on these indicators, the committee decides whether the applicant can handle the academic rigor of the program and demonstrates considerable leadership potential. The committee also looks for applicants who can improve diversity in the classroom and contribute positively to the student body as a whole.
== Programme structure ==
The MCC structure varies from program to program, but typically resembles one of the five major types of MBAs (distance-learning, part-time, accelerated, two-year, or executive).
The MCC program for professionals often resembles a distance-learning, part-time, accelerated, or two-year MBA program. Most programs begin with a set of required courses and then offer more specialized courses two-thirds of the way through the program.
== Courses ==
Depending on the university, the general management courses may include, but are not limited to, the following:
Accounting
Corporate strategy
Data analysis
Industrial organization
Issues management
Organizational behavior
Reputation management
Strategic marketing
Some of the more subject-oriented topics (electives) in an MCC program include:
Corporate branding
Integrated marketing communication
Business and marketing strategy
Business, marketing, advertising and media planning
Crisis communication
Internal communication and change management
Stakeholder relations and communication
Organizational identity
Corporate social responsibility / corporate shared value
Organizational reputation management
Sponsoring and partnerships
Commercial communication law
Corporate communication and market research methods
KPIs and performance analytics
Public affairs
Issues management
== Graduation requirements ==
Graduation requirements for the full-time MCC program differ from those for executive professionals. The total number of credits required for graduation in a regular MCC program may differ from one university to another; however, in countries practicing the Bologna system of education, the required number of credits for a master's program is usually no less than 120 ECTS. The number of credits for executive master's programs is usually not less than 60 ECTS, depending on the university.
== Europe ==
In 1997 the Corporate Communication Centre, an entity of the Rotterdam School of Management of the Erasmus University Rotterdam, started the Master of Corporate Communication program, developed by Cees van Riel. On a worldwide basis, it was the first program of its kind offered at university level. Since then at least four other European universities have started offering MCC programs. At present the MCC is also offered at Aarhus University, IE Business School in Madrid, Spain (which also offers an Executive Master of Corporate Communication degree), Sorbonne University, Universität Leipzig, the University of Navarra, Rome Business School in Rome, Italy, and Università della Svizzera Italiana.
In 2008 the Master of Corporate Communication Program at the Rotterdam School of Management was accredited by the NVAO, the accreditation organisation of the Netherlands and Flanders, to hold the Master of Science in Corporate Communication (MSc. CC) title.
== External links ==
Master of Corporate Communication, Rotterdam School of Management
RSM on Financial Times Top 10 Masters in Management Rankings 2008
University of Paris, Communication des entreprises et des institutions
University of Lugano, MCC
Master in Corporate Communication, IE School of Communication, IE University
Master in Corporate Communication, TRACOR, The Communication Arts Institute
Master's Program in Corporate Communication, Aalto University School of Business
== References ==
A Master of Science in Engineering (MSE) is an academic graduate degree awarded by universities in many countries. It is differentiated from a Master of Engineering (a professional degree).
An MSE can require completion of a thesis and qualifies the holder to apply for a program leading to a Doctor of Philosophy (often abbreviated PhD or DPhil) in engineering, while a Master of Engineering can require completion of a project rather than a thesis and usually does not qualify its holder to apply for a PhD or DPhil in engineering science.
The MSE is considered equivalent to the Diplom degree in engineering in countries that do not have a specific distinction between the MSE and the Master of Engineering.
In the UK the MEng is an extended undergraduate degree, and the MSc is a one-year postgraduate degree for those who already have a BEng. Both contribute to obtaining a chartered engineer qualification.
== See also ==
Engineering education, about the structure in many countries
Engineer's degree
== References ==
The Master of Science in Information Systems (MSIS), the Master of Science in Management Information Systems (MSMIS) and the Masters in Management Information Systems (MMIS) are specialized master's degree programs usually offered in a university's College of Business or in integrated Information Science and Technology colleges. The MSIS degree is designed for those managing the design and development of information technology, especially the information systems development process.
The MSIS degree is thought to be functionally equivalent to a Master of Information Systems/Technology Management; however, the two are distinguishable in that the latter has a much more even balance of information science/systems and business/management content. MSMIS and MMIS degrees are recognized by the Association to Advance Collegiate Schools of Business (AACSB) and the Accreditation Council for Business Schools and Programs (ACBSP).
A joint committee of Association for Information Systems (AIS) and Association for Computing Machinery (ACM) members developed a model curriculum for the MSIS in 2006.
== References ==
This list refers to specific master's degrees in North America. Please see master's degree for a more general overview.
== Accountancy ==
Master of Accountancy (MAcc, MAc, MAcy or MPAcc), alternatively Master of Professional Accountancy (MPAcy or MPA), or Master of Science in Accountancy (MSAcy), is typically a one-year, non-thesis graduate program designed to prepare graduates for public accounting and to provide them with the 150 credit hours required by most states before taking the CPA exam.
Master of Accounting (MAcc) is an 8-month degree offered by the University of Waterloo, School of Accounting and Finance in Canada that satisfies the 51 credit hours and CKE exam requirement needed to write the Chartered Accountant Uniform Final Exam (UFE) in the province of Ontario. The School also delivers a Master of Taxation program. The School is housed in the Faculty of Arts.
Master of Professional Accounting (MPAcc) is a two-year, non-thesis graduate program offered by the University of Saskatchewan in Canada. In the United States, the University of Texas at Austin offers a Master of Professional Accounting (MPA) degree.
== Administration ==
Master of Business Administration (MBA), Master of Management (MAM), Master of Accountancy (MAcy), Master of Science in Taxation (MST), Master of Science in Finance (MSF), Master of Business and Organizational Leadership (MBOL), Master of Engineering Management (MEM), Master of Health Administration (MHA), Master of Not-for-Profit Leadership (MNPL), Master of Public Policy (MPP), Master of Policy, Planning, and Management (MPPM), Master of Public Administration (MPA), Master of International Affairs (MIA), Master of Global Affairs (MGA), Master of Strategic Planning for Critical Infrastructures (MSPCI), Master of Science in Strategic Leadership (MSSL), and Master of Science in Management (MSM) are professional degrees focusing on management for the private and public sectors, both domestic and international.
== Adult Education ==
While other advanced degree education programs tend to be more widely known, the Master of Science in Adult Education provides professional educators with expert-level tools for success in the adult learning environment and advancement in educational leadership. As the name suggests, this degree program provides ample opportunity for the student to take a more scientific approach to the study of education. Many M.S. Adult Education programs offer concentrations in Community Service and Health Sciences (non-profit realm), Human Resources, Technology (distance learning), and Training and Development (corporate or for-profit environment).
== Advanced Study ==
In the United States the Master of Advanced Study (M.A.S.), also Master of Advanced Studies (MAS), is a post-graduate professional degree issued by numerous academic institutions, but most notably by the University of California. M.A.S. programs tend to "concentrate on a set of coordinated coursework with culminating projects or papers rather than emphasizing student research" and frequently are structured as interdisciplinary offerings.
In Canada, the Master of Advanced Study degree is an independent research degree.
Advanced Studies programs tend to be interdisciplinary and tend to be focused toward meeting the needs of professionals rather than academics.
== Appalachian Studies ==
The Master of Appalachian Studies focuses on research into culture, i.e. music, sociology, and sustainability, within the cultural and geographic region of Appalachia. This degree primarily develops understanding of the historical, political, geographic, and socio-economic circumstances that have led Appalachia and similar regions to become what they are today.
== Applied Anthropology ==
The Master of Applied Anthropology (MAA) is a two-year program focused on training non-academic anthropologists. The University of Maryland, College Park developed this program to encourage entrepreneurial approaches to careers outside academia, where most new anthropologists are likely to seek and find employment. For this reason, it is considered a professional degree rather than a liberal arts degree.
== Applied Politics ==
The Master of Applied Politics is a 2-year master's degree program offered by The Ray C. Bliss Institute of Applied Politics at The University of Akron. It is one of the few professional master's degree programs in the United States focusing on practical politics and efforts to influence political decisions. This includes winning elections, campaigning, fund raising, influencing legislation and strengthening political organizations. MAP graduates have gone on to manage campaigns, run for political office, join polling and fundraising firms, and start their own consulting firms.
== Applied Sciences ==
The two Master's programs offered in Management Sciences provide both course work and research opportunities in operations research, information systems, management of technology, engineering and other areas. Operations research, mathematical modeling, economics, organizational behavior, and other related concepts underlie success in almost all areas of management. Refer to the Master of Engineering degree section for more information.
== Architecture ==
The 4+3 or 5-year Master of Architecture (M.Arch. I) is a first professional degree, after which one is eligible for internship credit [and subsequent exam] required for licensure. The 2-year Master of Architecture (M.Arch. II) is a graduate-level program which assumes previous coursework in architecture (B.Arch. or M. Arch I).
== Archival Studies ==
The Master of Archival Studies degree is awarded following completion of a program of courses in archival science, records management and preservation. The degree was first offered at the University of British Columbia (Canada), and is currently offered at Clayton State University (Georgia). The Master of Archives and Records Administration is offered by San Jose State University (California).
== Bioinformatics ==
The Master of Science in Bioinformatics degree builds on a background in biology and computing. Students learn how to develop software tools for the storage and manipulation of biological data. Graduates typically work in the biotechnology or pharmaceutical industries or in biomedical research, where career prospects are generally strong.
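As a small illustration of the "software tools for the storage and manipulation of biological data" mentioned above, the sketch below computes the GC content of a DNA sequence, a representative introductory exercise; the function and sample sequence are invented for the example and are not drawn from any specific course.

```python
# Minimal sketch of a typical bioinformatics exercise: computing the
# GC content (fraction of G and C bases) of a DNA sequence.

def gc_content(sequence: str) -> float:
    """Return the fraction of G and C bases in a DNA sequence."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    return (seq.count("G") + seq.count("C")) / len(seq)

if __name__ == "__main__":
    print(f"{gc_content('ATGCGCGTATTCCGGA'):.2%}")  # 56.25%
```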
== Biomedical Sciences ==
The Master of Biomedical Sciences (MBS) degree prepares students for medical schools, related health professions, and other biomedical careers. The curriculum integrates graduate level human biological sciences with skill development in critical thinking, communication and teamwork.
== Broadcast Journalism ==
The Master of Broadcast Journalism (MBJ) degree prepares students for reporting and journalism in television and radio broadcasting, e.g. on-the-scene reporting, newsroom newscasting, and meteorology.
== Chemistry ==
The Master of Science in Chemistry is a degree that prepares recipients for jobs as higher-level industrial chemists and laboratory technicians, and for doctorate programs in chemistry. Schools often offer two programs: a coursework-based master's and a research-based master's. The coursework master's is offered through completion of a number of graduate-level chemistry classes and may require the recipient to complete a research proposal to demonstrate their expertise. The research master's is offered through completion of a certain number of hours devoted to academic chemistry research, classes related to the research being performed, and the completion of a thesis presenting the research completed during the master's and its impact on the field.
== Christian Education ==
The Master of Arts in Christian Education is a seminary degree primarily designed for those in the field of church ministry. Various specializations include children's ministry and youth ministry, among others. Thus, many children's pastors and youth pastors obtain the degree, while senior pastors usually pursue the Master of Divinity degree. The degree is usually obtained in 2–3 years.
== City and Regional Planning ==
Master of City and Regional Planning (MCRP) is a professional degree in the study of urban planning.
== Clinical Medical Science ==
Clinical Medical Science is a professional degree awarded to Physician Assistants.
== Communication ==
The Department of Communication of the University of Ottawa offers a Master of Arts (MA) in Communication degree with thesis or with research paper.
The program focuses on five fields of specialization: media studies; organizational communication; health communication; identity and diversity in communication; government communication.
Both teaching and research explore major issues related to new information and communication technologies in media and organizations at the national and international levels.
The academic department of Art History & Communication Studies at McGill University offers Master of Arts (M.A.) degrees, which are differentiated either as Interdisciplinary (Thesis/Non-Thesis) or as Non-interdisciplinary (Thesis/Non-Thesis) programs. The duration of the Non-Thesis option is two years of full-time study. The Thesis option may take longer, depending on the required level of courses and the complexity of the thesis. Students admitted to the Interdisciplinary Thesis option in Communication Studies - Gender and Women's Studies (including, e.g., psychology and/or other subjects) might have to earn "very high research" credits at the 700 (PhD) level, and may need to complete their program within a maximum of three years of full-time candidature.
The Communication & Media Arts Department at Lancaster Bible College offers a Master of Arts (MA) degree in Strategic Communication Leadership.
== Computer Science ==
The Master of Science in Computer Science and Master of Science in Information Technology are graduate degrees for information technology professionals and computer engineers. They are generally based on core computer science subjects where knowledge can be used for advanced work especially in the information technology industry.
== Community Health and Prevention Research ==
A Master of Science in Community Health and Prevention Research is a graduate degree for students interested in advancing health in communities through evidence-based science. The degree is similar to a public health degree, with an emphasis on epidemiology, measurement, research, and statistics in the coursework, though with a strong applied focus and emphasis on community engagement; theory and applied principles of behavior change; and intervention development, evaluation, and dissemination. Programs may combine in-class instruction and faculty and peer-to-peer mentoring with community-based internships.
== Criminal Justice ==
The Master of Criminal Justice is a professional degree in the study of criminal justice. The program is designed as a terminal degree for professionals in the field of criminal justice or as preparation for doctoral programs. It may also be referred to as a Master of Science in Justice Administration (M.S.J.A.).
== Cross-Cultural and International Education ==
This master's degree, offered by Bowling Green State University in Bowling Green, Ohio, prepares professional educators to be effective leaders in the internationalization of schools and communities and to be positive facilitators of cross-cultural understanding. Students complete the MACIE program with a capstone seminar or a master's thesis.
== Cultural Studies ==
This master's degree is a one- or two-year degree that allows students to engage the heterogeneous body of theories and practices associated with cultural studies and critical theory in the critical investigation of culture.
== Cyber Security ==
The Master of Information and Cybersecurity (MICS) is an interdisciplinary degree program that examines computer security technologies as well as human factors in information security, including the economic, legal, behavioral, and ethical impacts of the cybersecurity domain.
A Master of Science in Cyber Security is typically seated within the computer science discipline and is focused on the technical aspects of cybersecurity.
Other cybersecurity master's degree programs focus on policy and legal aspects of cybersecurity.
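To give a flavor of the "technical aspects of cybersecurity" such programs cover, the sketch below shows salted password hashing with PBKDF2 from Python's standard library, a standard defensive technique for storing credentials; it is a minimal classroom-style example, not drawn from any particular program's syllabus, and the function names and iteration count are illustrative choices.

```python
# Minimal sketch of salted password hashing with PBKDF2 (Python stdlib),
# so that stored credentials resist rainbow-table and brute-force attacks.
import hashlib
import hmac
import os
from typing import Optional, Tuple

def hash_password(password: str, salt: Optional[bytes] = None) -> Tuple[bytes, bytes]:
    """Return (salt, derived_key) for storage; the plaintext is never stored."""
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, key

def verify_password(password: str, salt: bytes, stored_key: bytes) -> bool:
    """Recompute the key and compare in constant time."""
    return hmac.compare_digest(hash_password(password, salt)[1], stored_key)

if __name__ == "__main__":
    salt, key = hash_password("correct horse battery staple")
    print(verify_password("correct horse battery staple", salt, key))  # True
    print(verify_password("wrong guess", salt, key))                   # False
```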
== Data Science ==
Master in Interdisciplinary Data Science, the University of Michigan School of Information's Master of Applied Data Science (MADS), and Master of Information and Data Science (MIDS) are professional graduate degrees in Data Science designed to help meet the need for knowledgeable data scientists who can answer important questions with data-backed insights, by drawing upon computer science, social sciences, statistics, management, and law.
== Dentistry ==
The Master of Science in Dentistry is a post-graduate degree awarded for those with a dental degree (BDS, DMD, DDS, BDent, BChD, etc.), who have completed a post-graduate level course of study.
== Digital Media ==
The Master of Digital Media is a professional degree in the study of digital media, which includes entertainment technology, and can be defined as media experiences made possible by the advent of primarily computer-mediated digital technologies (e.g., electronic games and special effects in motion pictures). This is also called the Master of Interactive Technology (MIT), which is offered at SMU Guildhall, or Master of Entertainment Technology (Carnegie Mellon).
== Dispute Resolution ==
Dispute Resolution as a master's degree program, a first in Australia, focuses on the wide range of non-adversarial dispute resolution processes. The subject accommodates distinct streams that include commerce, family, community and court-annexed programs. It is an introduction to the philosophy, theory and practice of an area of increasing importance in all professions, business and government. Dispute resolution processes are now integrated into the adversarial framework, as well as being applied to an ever-widening range of private and public situations. This emerging practice of professional dispute resolution converges within and outside the legal profession.
== Divinity ==
The Master of Divinity (M.Div.) is the first professional degree in ministry (in the United States and Canada) and is a common academic degree among theological seminaries. It typically takes students three years to complete. Other theology degree titles used are Master of Theology (Th.M. or M.Th.), Master of Theological Studies (M.T.S.), Master of Arts in Practical Theology (M.A.P.T.), Master of Sacred Theology (S.T.M.) and Master of ecclesiastical Philosophy (M.EPh.).
== Education ==
Master of Education degrees are similar to MA, MS, and MSc, where the subject studied is education. In some states in the United States, teachers can earn teacher licensure with a bachelor's degree, but some states require a master's degree within a set number of years as continuing education. Other education-related master's degrees conferred in the United States are Master of Arts in Teaching (M.A.T.), Master of Science in Instruction (M.S.I.), Master of Science in education (M.S.Ed. or M.S.E.), Master of Arts in education (M.A.Ed.), Master of Adult Education (M.Ad. Ed.), and Master of Music Education (M.Mus.Ed.).
A Master of Education degree, or M.Ed., is a professional, graduate-level degree geared toward individuals who are seeking to move beyond the classroom into administrative-level positions or other specialized roles. It is generally not a degree leading to teaching at a college level, though it can very well prepare individuals for employment in higher education management and student personnel administration, as well as becoming adjunct college instructors. Many online M.Ed. programs offer a specialization in educational leadership. Over the past few years, however, the opportunity to specialize in educational technology has also become increasingly available. While many M.Ed. graduates seek to become principals and school district administrators, others become reading or technology specialists. The Master of Education degree is sometimes referred to as a practitioner's degree, because of its immediate and practical application to the school environment.
A Master of Arts in Education is perhaps the most flexible degree in the field, and often allows an educator to specialize in one of several concentrations. In addition to taking core classes in educational philosophy, child psychology, educational ethics, and education research methods, teachers pursuing this advanced education degree generally specialize in one of several fields.
Educational professionals who are looking to remain in the classroom often opt to pursue an online Master of Arts in education with a concentration in either elementary or secondary education. At many universities, a concentration in special education is also available. Individuals who are looking to leave the classroom often pursue concentrations in educational leadership, technology, or counseling. This list is by no means complete, as each university offers its own options for specialization.
Overall, the M.A. in Education includes more of the theoretical study of education than most of the other advanced degree options. The Master of Arts in Education also offers an extremely high level of flexibility, and can help to advance careers both inside and outside of the classroom.
While the other advanced degree programs tend to be more widely known, the Master of Science in Education can also provide professional educators with the tools needed for success in the classroom and advancement in educational leadership. As the name suggests, this degree program provides ample opportunity for the student to take a more scientific approach to the study of education. Many of those individuals who choose to follow the scientific route concentrate on topics like instructional technology or educational research.
In many instances, M.S. Education programs that take a scientific slant tend to include coursework in statistics and educational evaluation and measurement. Educators who pursue the more scientific path generally leave the classroom, and in many instances, the school. They have excellent job prospects in the educational research sector. Many go on to work with school districts, state governments, or private research organizations to assess student performance and suggest policies that will boost student achievement. Others supervise technology initiatives for schools or school districts, work in distance education, or pursue doctoral studies.
Other individuals who pursue an MS in Education opt for a less scientific course of study, such as educational leadership, or literacy. In some instances, these programs resemble the previously discussed Master of Education degree, but at other schools, these programs place a much greater focus on the scientific aspects of studying education. In either case, the same opportunities for advancement as a school administrator should be available, regardless of whether one has earned a degree of Master of Science in education, a Master of Arts in education, or a Master of Education.
== Educational Technology ==
The Association for Educational Communications and Technology (AECT) defines the field as "the study and ethical practice of facilitating learning and improving performance by creating, using and managing appropriate technological processes and resources." Programs typically include courses on instructional design, learning theories, educational media, instructional messaging, related theory, and research methods. Some institutions use the term instructional technology to refer to their programs. Although some experts within the field distinguish between educational and instructional technology, on a practical level, the two are essentially synonymous.
Master's in Educational Technology programs are offered in at least one university in nearly every US state, and in many countries outside of the United States, including Australia, Canada, China, Singapore, South Korea, and Turkey.
One of the oldest programs in North America is based in the Department of Education at Concordia University in Montreal, Quebec, Canada, which has graduated approximately 2,000 master's students and over 150 PhD students in its 50 years. Students can study full- or part-time, preparing to use their skills and knowledge to design curricula and programs, integrate technology, advise on educational technology policy, and conduct related research for schools, higher education, workplace learning, and community and informal learning.
The University of British Columbia (UBC) in Vancouver, British Columbia, Canada offers a part-time program within the Faculty of Education, focusing on curriculum design and technology integration.
The one-year professional Masters in Educational Technology and Applied Learning Science (METALS) is an interdisciplinary program offered by Carnegie Mellon University in Pittsburgh, Pennsylvania. It is jointly taught by the Human-Computer Interaction Institute in the School of Computer Science and the Department of Psychology in the Dietrich College of Humanities and Social Sciences. The program is an outgrowth of the research conducted by the National Science Foundation's Science of Learning Center, LearnLab, in which more than 200 researchers produced over 1600 publications and talks. METALS trains students to design, develop and evaluate evidence-based programs for learning in settings that range from schools to homes, workplaces to museums, and online to offline learning environments. Students with backgrounds in psychology, education, computer science, design, information technology, or business are encouraged to apply.
== Electronic Business Technologies ==
The Master in Electronic Business Technologies (MEBT) is an interdisciplinary master's program offered at the University of Ottawa in Ontario, Canada.
== Engineering ==
The Master of Engineering (Magister in Ingeniaria) degree is awarded to students who have done graduate work at the master's level in the field of engineering. In the United States, engineering candidates are typically awarded MS degrees, although a growing number of schools also offer an MEng (e.g. the University of California, Berkeley). The distinction between the two programs varies between schools, but the MS is largely considered an academic degree, whereas the MEng is a professional degree. In the UK and Canada, candidates are generally awarded MSc, MASc or MEng degrees.
In Canada, the Master of Applied Science (MASc) is awarded to master's degree students with a research focus (having completed work leading to a thesis), while an MEng is awarded to master's degree students with a coursework focus and the completion of a research paper. The distinction between MASc and MEng is not definite since some universities grant only an MEng and some universities grant only an MASc, be it either research or coursework-focused.
In Francophone universities, the Master's Degree is referred to as a Maîtrise. The Master of Applied Science translates to Maîtrise des sciences appliquées and is abbreviated MScA. The Maîtrise in Canada is not equivalent to the Maîtrise in France, nor is the Baccalauréat. Canadian French-language degree and title nomenclature are consistent with North American custom. The MEng title translates to MIng, though this title cannot be used in one's signature in Québec (nor can BIng), as the title ing. (equivalent of P.Eng. in other provinces) is reserved for members of the provincial board of engineers, the Ordre des ingénieurs du Québec.
The Master of Science in Engineering is a post-graduate degree to be differentiated from the Master of Engineering. It requires a thesis and qualifies students holding it to apply for a Doctor of Philosophy (PhD) in Engineering.
== Environment ==
The Master of Environment (MEnv) is available at Concordia University, the Université de Sherbrooke, and the University of Colorado Boulder. The Master of Environmental Science (MEnvSc) is offered at University of Toronto Scarborough.
== Environmental Management ==
The Master of Environmental Science and Management (MESM) is offered by UC Santa Barbara. The Master of Land and Water Systems is available at the University of British Columbia.
== Finance ==
The Master of Science in Finance is a common degree in the corporate finance and investment finance world. It is considered the financial services industry's answer to accounting's Master of Accountancy (MAcc) degree.
== Fine Arts ==
The Master of Fine Arts (M.F.A.) is a two to three-year terminal degree in a creative field of study, such as theatre arts, creative writing, filmmaking, or studio art.
== Foreign Service ==
The Master of Science in Foreign Service (MSFS) is a two-year degree program offered by Georgetown University's Edmund A. Walsh School of Foreign Service. Established in 1922, it is the first international relations graduate program in the United States. The 48-credit multidisciplinary curriculum emphasizes both theory and practice to educate international affairs professionals in the public, private, and non-profit sectors. Foundational courses in international relations, economics, and history are complemented by specialized courses in students’ areas of concentration: international development, politics and security, science and technology, global business and finance. In addition to course requirements, the degree requires successful passing of the oral examination, proficiency in a foreign language, and completion of an internship and leadership requirements.
== Forensic Science ==
The Master of Forensic Sciences (MFS) is a specialized professional degree designed for law enforcement, lab personnel, attorneys, investigators and other professionals. The Master of Science in Forensic Science is offered by John Jay College of Criminal Justice at City University of New York.
The Master of Science in Forensic Science and Law is a degree program available at Duquesne University. It combines all applications of forensic science with law and its application and legal use before a court of law.
Universities offering degree programs in this field have applied for accreditation from the American Academy of Forensic Sciences (AAFS)'s Forensic Science Education Programs Accreditation Commission.
== Forestry ==
The Master of Forestry (MF) degree is offered by Yale School of the Environment. The two-year MF degree is accredited by the Society of American Foresters (SAF) and prepares students for careers in sustainable natural resource management and policy. The curriculum is divided into three stages and focuses on the complex relationships among the science, management, and policy of forest resources. Students are also required to complete a summer internship and a capstone.
A similar 48-credit MF degree at Duke University's Nicholas School of the Environment is also accredited by the SAF and can be pursued on its own or concurrently with the Master of Environmental Management (MEM) degree or with degrees from other professional schools at Duke and the University of North Carolina at Chapel Hill.
== Global Affairs ==
The Master of Science in Global Affairs is a degree program available at New York University. The 42-credit curriculum is designed to help students unravel the complex relationships between nations and key international factors and make sense of world events. Coursework covers subjects ranging from economic globalization and the issues facing developing countries to conflict resolution and international law.
The Master of Global Affairs (MGA) is a two-year professional degree offered by the Munk School within the University of Toronto. The interdisciplinary degree aims to equip students with an awareness of global financial systems, global civil society, and global governance to prepare students for strategic thinking and responsible leadership on global issues.
The Master of Arts in Global Governance (MAGG), offered by the Balsillie School of International Affairs and designed to be completed in 16 months, consists of two terms of course work, a third term in which students complete a major research paper, followed by a fourth term as an intern working on global governance issues in the public or private sector, a research institute, or an NGO. The selection process for the MAGG is highly competitive and only 15-18 students are admitted per year.
A Master of Global Affairs program is offered at the University of Notre Dame, Rice University, and the University of Toronto.
== Health Administration ==
Master of Health Administration (MHA) is a two-year degree similar to an MBA but instead is focused on health care systems rather than businesses in general.
== Health Science ==
The Master of Health Science is awarded to students who have completed a post-graduate course of study in health sciences or health policy fields, usually associated with the public health field. The MHS is often a more focused program for public health professionals, often those with non-health professional backgrounds. This degree is abbreviated as MHSc in Canada. New US degree programs of this same type, such as the MSHS (Master of Science in Health Sciences), are also becoming more common as universities and medical schools develop more degree specialties.
== Historic Preservation ==
The Master of Science in Historic Preservation (MSHP) is a graduate degree, often offered through schools and colleges of architecture, which focuses on the theory and practical elements of preserving buildings of historic importance. There are only a few programs in the United States, but all of the programs tend to focus on architectural conservation, design, history/theory, preservation planning, building analysis, and preservation law.
The Master of Arts in Historic Preservation provides training in the research, documentation, and preservation of the historic built environment. Typically the MAHP is differentiated from the Master of Historic Preservation MHP by an emphasis on historic research and writing, and is usually housed in history departments.
== Historic Preservation ==
The Master of Historic Preservation (MHP) is a two to two and a half year degree in the field of historic preservation. The MHP is usually considered a terminal degree, although a few PhD programs exist offering historic preservation as a concentration within another field such as community planning. Commonly MHP programs are housed in history, planning, or architecture departments. Interdisciplinary by nature, MHP programs typically consist of courses in architectural history, history and theory of preservation practice, cultural landscape preservation, historic resource documentation and evaluation, community planning, and rehabilitation philosophy and practice.
== History ==
The Master of Arts in History is a graduate degree in the study of history.
== Human-Computer Interaction ==
The Master of Human-Computer Interaction is a professional degree that focuses on training and research in topics related to human-computer interaction (HCI). While HCI touches upon areas of research covered by computer science, psychology, cognitive science, the social sciences, design, media and other fields of study, it is often categorized under the department of computer science or information science, and students who pursue graduate studies in this area usually receive a degree of Master of Computer Science or Master of Information Science.
A limited number of institutions offer a master's degree directly in HCI. The Master of Human-Computer Interaction (MHCI), a program at Carnegie Mellon University offered by the Human-Computer Interaction Institute (HCII), is the first program solely focused on professional training for students who wish to pursue a career in a human-computer interaction related area. Similar programs are offered by other institutions, such as the Master of Human-Computer Interaction and Design (MHCI+D) at the University of Washington, the Master of Science in Human-Computer Interaction (MS in HCI) at Georgia Tech, and the Master of Science in HCI at Indiana University, Bloomington. These programs often consist of academic learning experiences in research and design and industry training in client relations and project management. Students are usually expected to gain professional proficiency in HCI-related topics in an industry setting.
== Human Factors ==
The Master of Science in Human Factors (MSHF) degree focuses on human factors and ergonomics in systems and processes.
== Humanities ==
The Master of Arts in Humanities degree requires two years of study at an accredited college.
== Industrial and Labor Relations ==
Industrial and Labor Relations.
== Industrial Design ==
Master of Industrial Design is a two- or three-year program in the field of industrial design.
The MID acronym is also used for a Master of International Development; see the International Development section below.
== Information ==
The Master of Science in Information (MSI) is a graduate degree designed for information science professionals.
== Information Management ==
The Master of Science in Information Management (MSIM) is a graduate degree designed for information management professionals.
== Information Systems ==
The Master of Science in Information Systems (MSIS) is a graduate degree designed for information systems professionals. The Master of Information Systems (MIS or MSIS) is a 2-year degree geared towards professionals trained in both management and information systems. The combination of both fields is often referred to as management information systems. The Master of Information Systems is sometimes extended with additional courses such as those focused on management (MISM or MIS/M), or information security (MISIS, MIS/IS).
== Information Technology ==
The Master of Science in Information Technology is a graduate degree designed for information technology professionals. It is generally based on core computer science subjects whose knowledge can be used for advanced work, especially in the information technology industry. Whereas the bachelor's degree provides a well-balanced foundation in information technology, the master's degree allows the student not only further advancement of core knowledge, but also an opportunity to specialize in selected technology areas of interest. The Master of Information Technology is one of the most sought-after degrees in the field of computer science and information technology, both among students and among employers in the information technology marketplace. The degree indicates that its holder is competent in all key areas of the information technology sector and has advanced this competence with specialized knowledge, research, and publications.
== Interactive Media ==
The Master of Science in Interactive Media is a graduate degree offered by Quinnipiac University in Hamden, Connecticut. Through a balance of courses in interactive theory, media production, programming, Web design, and animation, students learn how to transform traditional media and original content into multimedia productions. The combination of study in the intellectual and production aspects of interactive media produces innovative thinkers who understand the shift from legacy media to online.
== International Business ==
The Master of International Business (MIB) degree is a postgraduate degree designed to develop the capabilities and resources of managers in the global economy. It is ideal for those seeking to establish or accelerate a career in international business.
Emphasizing the practical application of specialized knowledge, the program equips management with skills tailored to the international business environment.
The Master of International Business focuses on strategic planning for international operations and provides an in-depth understanding of the organizational capabilities required for international operations, including specialized functions such as international marketing, finance and human resource management.
The degree may be thought of as an MBA with a particular focus on multinational corporations.
== International Development ==
The Master of International Development (MID) is a postgraduate degree in the study of developmental economics, non-governmental organizations and civil society, development planning, environmental sustainability, and human security.
== International Economics and Finance ==
The Master of Arts in international economics and finance is a two-year degree in the field of economics.
== International Hotel Management ==
The Master of Arts in International Hotel Management is a two-year professional graduate degree awarded by Royal Roads University that prepares individuals to succeed in senior and executive hospitality positions within the accommodations sector, including hotels, resorts, and cruise ships. RRU delivers this 39-credit program through a combination of 15 credits during three short-term residencies (two in Victoria, BC, Canada and one in an international location) and 24 credits through online distance learning.
== Internet Technology ==
Internet Technology degrees are available online and on campus. The Internet Technology MS degree at The Seidenberg School of Computer Science and Information Systems at Pace University consists of a 12-credit foundational core, followed by a 12-credit concentration in E-Commerce or Security and Information Assurance. Other programs are available at other universities in the United States.
== Jurisprudence ==
Master of Jurisprudence (M.J.) is sometimes used as an alternative name for both the Master of Laws and the Master of Juridical Science. Offered within United States law schools, students of an M.J. curriculum are often business professionals and/or Juris Doctor degree holders who wish to enhance their knowledge in a specialized field of law. A Master of Jurisprudence is highly beneficial for those who need an in-depth understanding of the law within current executive-level positions. M.J. students are required to develop a comprehensive understanding of the operation of law as it applies to a specified area of law. An M.J. program combines graduate-level legal courses with MBA-style courses in concentrated areas of study. Master of Jurisprudence program offerings include, but aren't limited to, degrees in Business and Corporate Governance Law, Health Law and Policy, and Child and Family Law. The M.J. program is typically 24 credit hours and can be completed in two years, or longer depending on the student's enrollment status.
== Landscape Architecture ==
The Master of Landscape Architecture (MLA) degree is a professional degree in the field of landscape architecture.
== Laws ==
Master of Laws (LL.M.) is an advanced degree in law, pursued after earning a first degree in law within the U.S. or abroad, such as a Juris Doctor (J.D.). The LL.M. program typically lasts one year if taken full-time. For foreign law graduates, the LL.M. is similar to a 'study abroad program' and offers a general overview of the American Legal System. Domestic U.S. law graduates pursue the LL.M. for different reasons, largely academic. With the exception of LL.M. programs in highly specialized areas where advanced knowledge in a field is useful (e.g., Taxation, International Taxation, Intellectual Property; etc.), the Master of Laws is designed for those intending to teach law, whereas the J.D. is a professional doctorate.
== Leadership ==
The Master of Science in Leadership is an alternative to, not a substitute for, the traditional Master of Business Administration (MBA) degree. The MSL degree requirements may include some business courses that are required in an MBA program. However, this degree program concentrates heavily on leader-follower interactions, cross-cultural communications, coaching, team development, project leadership, and behavioral motivation theories; it does not concentrate on financial or quantitative analysis, marketing, or accounting, which are common in MBA programs. The degree program appeals to people already in well-established careers. The MSL degree is similar to the Master of Science in Organizational Leadership (MSOL) degree.
== Liberal Studies ==
The Master of Arts in Liberal Studies (MALS) Master of Liberal Arts (MLA, ALM) and Master of Liberal Studies (MLS) are interdisciplinary master's degrees. Characteristics that distinguish these degrees from others include curricular flexibility, interdisciplinary synthesis via Master's thesis or capstone project, and optional part-time enrollment.
== Library Science ==
A Master of Library Science (MLS) degree is the culmination of an interdisciplinary program encompassing information science, information management, librarianship, and/or related topics. Modern variants include the Master of Library and Information Studies (MLIS), Master of Science in Information Studies (MSIS), Master of Librarianship, Master of Information Management and Systems (MIMS), Master of Science in Library Science (MSLS), and others. Some universities use standard degree titles such as Master of Arts (University of Iowa) and Master of Science (University of Illinois) for their library science master's degrees, while others, such as the University of Michigan, use Master of Arts in Library Science (AMLS).
== Logistics, Trade, and Transportation ==
The Master of Science in Logistics, Trade, and Transportation (MS LTT) at the University of Southern Mississippi is an interdisciplinary program of 30 total credit hours. The LTT program can be completed in one year and customized to meet career advancement needs. The MS LTT program comprises logistics, supply chain management, global trade and economic development, business, and other courses.
== Management ==
The Master of Arts in management (MAM), in the United States, is a professional graduate degree that prepares business professionals for senior level management positions and executive leadership of organizations and corporations (for-profit, nonprofit and public sector). The business MAM degree should not be confused with Master's degrees in Arts Management, which may be more numerous in the U.S. The MAM degree is a specialized degree that focuses studies on all areas of business, e.g., strategic planning and leadership, marketing analysis and strategy, operations management, project management, human resource management, organizational design and development, finance, accounting, management and contract negotiations, statistical methods and applications, economic theory, and research.
A characteristic that tends to distinguish the MAM and MBA business degrees from other Master's degrees in business is the absence of in-depth study within one particular area of business. M.S. business degrees typically focus on traditional areas like Finance, Accounting, and Information Systems. The MAM student's courses cover advanced management and strategic leadership as they apply to all areas of business, e.g., accounting, finance, operations management, marketing, strategic planning, and human resources. The MAM student "masters the art of management." MBA and MAM degrees are both Master's level business degrees that cover broad and general content.
Master of Science programs in Management involve coursework focusing on one area of business, such as Management Information Systems, Finance, or Accounting.
== Management in the Network Economy ==
Management in the Network Economy is a one- or two-year interdisciplinary post-graduate program that uniquely blends information economics, technology management and business administration, in order to forge leaders able to understand and manage the complexity of organizations and markets in the digital economy.
In certain universities, like the Catholic University in Italy, this Master encompasses typical courses of a Master of Information Systems Management (MISM or MIS/M) together with the business knowledge gained from a Master of Business Administration (MBA) or a Master of Business and Organizational Leadership (MBOL).
== Mathematics ==
The master's degree in mathematics may be in either pure or applied mathematics; it is usually an MA, sometimes an MS.
== Marketing and Communication ==
The Master of Science in Marketing and Communication (MCM) is an integrated marketing communication graduate degree offered at Franklin University in Columbus, Ohio.
The Master of Science in Integrated Marketing Communications (IMC) is a graduate degree offered at West Virginia University in Morgantown, WV through WVU's Reed College of Media.
== Marketing Research ==
Master of Marketing Research (MMR) is a specialized degree in marketing focusing in research. Sometimes called a Master of Science in Marketing Research (MSMR).
== Mass Communication ==
The Master of Mass Communications is a two- to three-year degree in the field of journalism and mass communications that prepares degree candidates for careers in media management. Students typically undertake courses in media law, marketing, integrated communications, research methods, and management.
== Medical Education ==
The Master of Science in Medical Education prepares physicians for careers as academic clinician-educator-scholars through didactic training in medical education and research methods, and mentored education research.
== Medical Science ==
The Master of Medical Science is a two-year postgraduate degree in medical research, usually for those who already hold a Doctor of Medicine degree.
== Ministry ==
The Master of Ministry is a two- to three-year multidisciplinary degree with a program typically designed to apply appropriate theological principles to practice-based settings and serve as a foundation for an original research project. Such a program responds to the need for structured learning and theological development among professionals serving Church, non-profit, public, and private sector organizations.
Typical concentrations (or majors) include: Missions, evangelism, pastoral counseling, chaplaincy, church growth and development, Christian administration, homiletics, spiritual formation, pastoral theology, Church administration, Biblical counseling, clergy, Biblical archeology, religious education, Christian management, church music, social work, spiritual direction.
== Music ==
Master of Music, usually abbreviated M.Mus. or M.M., is a one- to two-year graduate degree that combines advanced studies in an applied area of specialization (usually music performance, composition, or conducting) with graduate-level academic study in subjects such as music history, music theory, or music pedagogy. The degree prepares students to be professional performers, conductors, and composers, and is often required as the minimum teaching credential for university, college, and conservatory instrumental or vocal teaching positions. Other related degrees include the Master of Music Education (M.Mus.Ed.), Master of Arts in Music Education (M.A.), Master of Sacred Music (M.S.M.), and Master of Worship Studies (M.W.S.).
== Natural Resources ==
The Master of Natural Resources (MNR) program is a graduate degree program designed for natural resource instructors and practitioners. The program, offered remotely and in person from several institutions, focuses on several aspects of natural resource policy, management, and assessment. MNR degrees often do not include a thesis, instead opting for 30-35 credits of courses taught by industry professionals and research professors.
== Natural Sciences ==
The Master of Science in Natural Sciences (MSNS) program is a graduate degree program designed for elementary, middle and high school science teachers, stressing content and the processes of natural sciences.
== Nonprofit Management ==
Master of Nonprofit Organizations (MNPO or MNO) and Master of Nonprofit Management (MNM) programs offer specialized, graduate-level knowledge for individuals currently working in the nonprofit sector or in organizations that partner with the nonprofit sector or for those seeking a career in the nonprofit sector. The program provides advanced knowledge in nonprofit management, resource development, strategic planning, and program evaluation that serves to enhance the education and career development of students. This degree program provides opportunities for students to prepare for employment or to advance their careers as administrators in nonprofit organizations. MNPO, MNO, and MNM programs are offered through a range of academic units, including schools and departments of social work, business, management, public administration, and independent units.
== Nurse Anesthesia ==
The Master of Science in Nurse Anesthesia degree prepares students to master the intellectual and technical skills required to become competent in the safe administration of anesthesia.
== Nursing ==
The Master of Science in Nursing (MSN) is the most common title for a graduate professional degree in nursing. A few schools also use the titles Master of Nursing or Master of Arts. Admittance into an MSN program requires that the professional be a registered nurse (RN), have an up-to-date license, and have successfully completed a Bachelor of Science in Nursing (BSN) degree program. A full-time student can expect to complete the degree program in 18 to 24 months; a part-time student could take anywhere from three to five years.
Completion of an MSN/NP program earns nursing professionals the title of Nurse Practitioner (NP). While registered nurses administer medication and perform basic diagnostic tests, nurse practitioners order and analyze diagnostic tests, as well as prescribe treatments. A nurse practitioner's entry-level salary is typically $20,000 more than a registered nurse's.
== Occupational Therapy ==
The Master of Occupational Therapy is awarded to students who have completed a post-graduate course of study, and is now the entry-level degree for this profession. This degree is sometimes also conferred as a Master of Science in Occupational Therapy.
== Organizational Leadership ==
The Master of Science in Organizational Leadership (MSOL) is a multidisciplinary master's program that focuses on values-based leadership. The courses focus on the development of relationships between organizational members, effective decision-making processes, and an understanding of how modern technology can best support leaders. The MSOL degree is an alternative to, not a substitute for, an MBA; the programs differ in content and purpose. The MSOL degree is multidisciplinary and focuses more on people and organizational issues and less on business topics such as finance, accounting and marketing. For example, MSOL students take courses in psychology and philosophy as well as in business and management. The MSOL degree is intended for those who are already established in a career, whereas those who are preparing to enter the world of work or change careers often seek an MBA degree. Although the MBA degree is more widely known, degree programs like the MSOL are being developed across the country because of increasing demand for ethical organizational leadership.
== Pacific International Affairs ==
The Master of Pacific International Affairs degree is a professional master's degree that provides training in various aspects of international affairs including International Business Management, International Politics, Public Policy, International Environmental Policy and Development and Non-Profit Management. The program requires mastery in a Pacific Rim language, quantitative and economic analysis techniques and a regional focus.
== Pharmacy ==
The Master of Pharmacy degree is awarded to students who have completed the four-year undergraduate Pharmacy course. Students who complete three years of the course but do not finish it are usually awarded a bachelor's degree in Pharmaceutical Science.
== Philosophy ==
In the United States and Canada, a Master of Philosophy or Magister Philosophiae (MPhil) degree is sometimes awarded to ABD (all but dissertation) doctoral candidates who have completed all coursework, passed their written and oral examinations, and met any other special requirements before beginning work on the doctoral dissertation. Such programs generally award the M.A. to students who have completed all coursework and preliminary exams (about two years after the B.A.), and the M.Phil. after advanced exams (comprehensives) and all language requirements have been met, and a dissertation topic approved (usually a year after the M.A.).
In other countries, assuming all requirements are met, the MPhil degree is generally awarded after about one year of full-time study towards a doctorate. The MPhil is considered equivalent to the former French DEA (Diplôme d'études approfondies) and Spanish DEA (Diploma de Estudios Avanzados).
== Physician Assistant Studies ==
The Master of Physician Assistant Studies is a professional degree providing training in the profession of a physician assistant to practice medicine based on the medical school model. This degree is also sometimes seen as an MS in Physician Assistant Studies (MSPAS) or a Master of Physician Assistant Practice (MPAP).
== Professional Counseling ==
The Master of Arts in Professional Counseling (MAPC) is a two-year program that prepares individuals for the independent professional practice of psychological counseling, involving the rendering of therapeutic services to individuals and groups experiencing psychological problems and exhibiting distress symptoms. The curriculum includes instruction in counseling theory, therapeutic intervention strategies, patient/counselor relationships, testing and assessment methods and procedures, group therapy, marital and family therapy, child and adolescent therapy, supervised counseling practice, ethical standards, and applicable regulations.
== Professional Studies ==
The Master of Professional Studies (MPS or MProfStuds) is a terminal interdisciplinary degree and is sometimes used by programs that do not fit into any traditional categories. In some cases it is used as a replacement for an MFA in programs with heavy technology focuses, such as NYU's Interactive Telecommunications Program. Other programs use it for Organizational Studies or interdisciplinary Social Science programs.
== Professional Writing ==
The Master of Professional Writing (MPW) degree is a professional graduate degree program that prepares candidates for a wide variety of writing-related positions in business, education, publishing, and the arts. Coursework in three concentrations - applied writing, composition and rhetoric, and creative writing - allows students to gain theoretical and practical knowledge in various fields of professional writing.
== Project Management ==
The Master of Project Management is a terminal professional degree awarded to students who have completed a post-graduate course of study, and is usually associated with construction management, urban planning, or architecture and engineering design management. A limited number of universities and schools worldwide are accredited by the Global Accreditation Center of the Project Management Institute (PMI), which requires them to meet the standards of the leading association for project management professionals.
== Public Administration ==
The Master of Public Administration (M.P.A. or MPA; MAP in Québec) degree is one of several Master's level professional public affairs degrees that provides training in public policy and project and program implementation (more recently known as public management).
MPA programs focus on public administration at the local, state/provincial, national/federal and supranational levels, as well as in the nongovernmental organization (NGO) and nonprofit sectors. In the course of its history the MPA degree has become more interdisciplinary by drawing from fields such as economics, sociology, anthropology, political science, and regional planning in order to equip MPA graduates with skills and knowledge covering a broad range of topics and disciplines relevant to the public sector. A core curriculum of a typical MPA program usually includes courses on microeconomics, public finance, research methods / statistics, policy process and policy analysis, ethics, public management, leadership, planning & GIS, and program evaluation/performance measurement. Depending on their interest, MPA students can focus their studies on a variety of public sector fields such as urban planning, emergency management, transportation, health care (particularly public health), economic development, urban management, community development, education, non-profits, information technology, environmental policy, etc.
== Public Health ==
The Master of Public Health and Master of Science in Public Health degrees are awarded to students who have completed a post-graduate course of study in Public Health. The MPH is considered a management/leadership degree specific to the fields related to public health, while the MSPH is considered an academic degree, with a focus on empirical research methodologies.
== Public Management ==
The Master of Public Management is offered through Carnegie Mellon University's Heinz College. This master's-level degree is conferred upon students who have completed a post-graduate course of study, and is usually associated with broadening the students' understanding of social, political, technological and economic processes, as well as paradigms of organizational and human behavior.
== Public Policy ==
The Master of Public Policy is a master's-level professional degree that provides training in policy analysis and program evaluation at public policy schools. Over time, the curricula of the Master of Public Policy and the Master of Public Administration (M.P.A.) degrees have blended and converged, due to the realization that policy analysis and program evaluation could benefit from an understanding of public administration, and vice versa. However, MPP programs still place more emphasis on policy analysis, research and evaluation, while MPA programs place more emphasis on the operationalization of public policies and the design of effective programs and projects to achieve public policy goals. Over the years MPP programs have become more interdisciplinary, drawing from economics, sociology, anthropology, politics, and regional planning. Depending on their interests, MPP students can concentrate in many policy areas including, but not limited to, urban policy, global policy, social policy, health policy, non-profit management, transportation, economic development, education, information technology, etc. Students interested in pursuing a degree program focused entirely on global public policy can also consider Master of Public Policy and Global Affairs (M.P.P.G.A.) programs.
== Public Service ==
The Master of Arts in Public Service (MAPS) degree is a professional graduate degree program that offers specializations in areas of administration of justice, dispute resolution, health care administration, leadership studies, and non-profit studies.
== Quality Assurance ==
The Master of Science in Quality Assurance (MSQA) is a graduate degree designed for quality management professionals in diverse industries including service, manufacturing, software, government and health care organizations.
== Resource Management ==
The Master of Resource Management (M.R.M.) is designed for recent graduates from a range of disciplines, and for individuals with experience in private organizations or public agencies dealing with natural resources and the environment. Relevant disciplines of undergraduate training or experience include fields such as biology, engineering, chemistry, forestry and geology, as well as business administration, economics, geography, planning and a variety of social sciences. The M.R.M. degree provides training for professional careers in private or public organizations and preparation for further training for research and academic careers.
== Sacred Music ==
The Master of Sacred Music is a graduate degree combining academic studies in theology with applied studies in music.
== Security Technologies ==
Master of Science in Security Technologies (MSST) is a 32-credit, thesis-based graduate program covering security as it relates to technology, intelligence collection, policy, law, cyber and physical security management, and security methodology. The program is housed within the College of Science and Engineering at the University of Minnesota.
== Social Work ==
The Master of Social Work (MSW) is a professional graduate degree preparing students to become professional social workers, typically in either direct practice or community practice. MSW programs require students to complete an extensive field practicum, under mentorship of a senior social worker. MSW programs in the United States are accredited by the Council on Social Work Education.
The degree title MSW is not used in the US by all social work schools. The University of Chicago uses A.M. and Columbia University uses M.S. to name a few of the exceptions.
== Strategic Leadership ==
The Master of Science in Strategic Leadership (MSSL) graduate degree is an Executive Program in Organizational Leadership and Management Development, teaching the skills and knowledge in working effectively with people, organizational systems, and complex information. MSSL objectives embody development of a leadership skill set, strategies for problem solving, and solutions to facilitate and manage change applicable in business and not-for-profit environments.
== Taxation ==
The Master of Science in Taxation (MST) is a professional graduate degree designed for Certified Public Accountants (CPAs) and other tax professionals.
== Teaching ==
Coursework and practice leading to a Master of Arts in Teaching (MAT) degree is intended to prepare individuals for a teaching career in a specific subject of middle and/or secondary-level curricula (i.e., middle or high school). The MAT differs from the MEd degree in that the course requirements are dominated by classes in the subject area to be taught (e.g., foreign language, math, science, etc.) rather than educational theory, and in that the MAT candidate does not already hold a teaching credential whereas the MEd candidate does. The MAT often is the initial teacher education program for those who hold a bachelor's degree in the subject that they intend to teach. Work toward most MAT degrees will, however, necessarily include classes on educational theory in order to meet program and state requirements. Work toward the MAT degree may also include practica (i.e., student teaching). This abbreviation is also sometimes used to refer to a Master's in Theology (see ThM).
The Master of Arts in Teaching, or MAT, differs from the M.Ed. and the other Master's degrees in education primarily in that the majority of coursework focuses on the subject to be taught (i.e. history, English, math, biology, etc.) rather than on educational theory. While some online MAT programs offer a more general overview of the foundations of effective teaching, most MAT programs combine the study of widely established ‘best practices’ in the classroom with a focus on teaching within a specific discipline. Either way, the MA in Teaching is truly a teaching degree. Individuals who pursue the Master of Arts in Teaching generally choose to remain in the classroom. An MAT can also provide an educator with the appropriate credentials to become a department chairperson.
== Theology ==
The Master of Divinity (M.Div.) is the first professional degree in ministry (in the United States and Canada) and is a common academic degree among theological seminaries. It is typically three years in length. Other theology degree titles used are Master of Theology (Th.M. or M.Th.), Master of Theological Studies (M.T.S.), Master of Arts in Practical Theology (M.A.P.T.), and Master of Sacred Theology (S.T.M.).
== Urban Planning ==
The Master of Urban Planning (MUP), Master of City and Regional Planning (MCRP), Master of Urban and Regional Planning (MURP), Master of Environmental Design (MEDes (planning)) and Master of City Planning (MCP) are professional degrees in the study of urban planning.
== Urban Studies ==
The Master of Urban Studies degree is primarily focused on urban issues, including planning issues.
== References == | Wikipedia/Master_of_Science_in_Foreign_Service |
The Nelder–Mead method (also downhill simplex method, amoeba method, or polytope method) is a numerical method used to find the minimum or maximum of an objective function in a multidimensional space. It is a direct search method (based on function comparison) and is often applied to nonlinear optimization problems for which derivatives may not be known. However, the Nelder–Mead technique is a heuristic search method that can converge to non-stationary points on problems that can be solved by alternative methods.
The Nelder–Mead technique was proposed by John Nelder and Roger Mead in 1965, as a development of the method of Spendley et al.
== Overview ==
The method uses the concept of a simplex, which is a special polytope of n + 1 vertices in n dimensions. Examples of simplices include a line segment in one-dimensional space, a triangle in two-dimensional space, a tetrahedron in three-dimensional space, and so forth.
The method approximates a local optimum of a problem with n variables when the objective function varies smoothly and is unimodal. Typical implementations minimize functions, and we maximize $f(\mathbf{x})$ by minimizing $-f(\mathbf{x})$.
For example, a suspension bridge engineer has to choose how thick each strut, cable, and pier must be. These elements are interdependent, but it is not easy to visualize the impact of changing any specific element. Simulation of such complicated structures is often extremely computationally expensive to run, possibly taking upwards of hours per execution. The Nelder–Mead method requires, in the original variant, no more than two evaluations per iteration, except for the shrink operation described later, which is attractive compared to some other direct-search optimization methods. However, the overall number of iterations to proposed optimum may be high.
Nelder–Mead in n dimensions maintains a set of n + 1 test points arranged as a simplex. It then extrapolates the behavior of the objective function measured at each test point in order to find a new test point and to replace one of the old test points with the new one, and so the technique progresses. The simplest approach is to replace the worst point with a point reflected through the centroid of the remaining n points. If this point is better than the best current point, then we can try stretching exponentially out along this line. On the other hand, if this new point isn't much better than the previous value, then we are stepping across a valley, so we shrink the simplex towards a better point. An intuitive explanation of the algorithm from "Numerical Recipes":
The downhill simplex method now takes a series of steps, most steps just moving the point of the simplex where the function is largest (“highest point”) through the opposite face of the simplex to a lower point. These steps are called reflections, and they are constructed to conserve the volume of the simplex (and hence maintain its nondegeneracy). When it can do so, the method expands the simplex in one or another direction to take larger steps. When it reaches a “valley floor”, the method contracts itself in the transverse direction and tries to ooze down the valley. If there is a situation where the simplex is trying to “pass through the eye of a needle”, it contracts itself in all directions, pulling itself in around its lowest (best) point.
Unlike modern optimization methods, the Nelder–Mead heuristic can converge to a non-stationary point, unless the problem satisfies stronger conditions than are necessary for modern methods. Modern improvements over the Nelder–Mead heuristic have been known since 1979.
Many variations exist depending on the actual nature of the problem being solved. A common variant uses a constant-size, small simplex that roughly follows the gradient direction (which gives steepest descent). Visualize a small triangle on an elevation map flip-flopping its way down a valley to a local bottom. This method is also known as the flexible polyhedron method. This, however, tends to perform poorly against the method described in this article because it makes small, unnecessary steps in areas of little interest.
== One possible variation of the NM algorithm ==
(This approximates the procedure in the original Nelder–Mead article.)
We are trying to minimize the function $f(\mathbf{x})$, where $\mathbf{x}\in\mathbb{R}^{n}$. Our current test points are $\mathbf{x}_{1},\ldots,\mathbf{x}_{n+1}$.
Note: $\alpha$, $\gamma$, $\rho$ and $\sigma$ are respectively the reflection, expansion, contraction and shrink coefficients. Standard values are $\alpha = 1$, $\gamma = 2$, $\rho = 1/2$ and $\sigma = 1/2$.
For the reflection, since $\mathbf{x}_{n+1}$ is the vertex with the highest associated value among the vertices, we can expect to find a lower value at the reflection of $\mathbf{x}_{n+1}$ in the opposite face formed by all vertices $\mathbf{x}_{i}$ except $\mathbf{x}_{n+1}$.
For the expansion, if the reflection point $\mathbf{x}_{r}$ is the new minimum along the vertices, we can expect to find interesting values along the direction from the centroid $\mathbf{x}_{o}$ of the remaining points to $\mathbf{x}_{r}$.
Concerning the contraction, if $f(\mathbf{x}_{r}) > f(\mathbf{x}_{n})$, we can expect that a better value will be inside the simplex formed by all the vertices $\mathbf{x}_{i}$.
Finally, the shrink handles the rare case that contracting away from the largest point increases $f$, something that cannot happen sufficiently close to a non-singular minimum. In that case we contract towards the lowest point in the expectation of finding a simpler landscape. However, Nash notes that finite-precision arithmetic can sometimes fail to actually shrink the simplex, and implemented a check that the size is actually reduced.
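Taken together, these operations define one iteration of the method. The following Python sketch implements a simplified single step with the standard coefficients; the function name `nelder_mead_step` and the use of only the inside contraction are our own simplifications, not the exact procedure of the original article.

```python
import numpy as np

def nelder_mead_step(f, xs, alpha=1.0, gamma=2.0, rho=0.5, sigma=0.5):
    """One simplified Nelder-Mead iteration on the n+1 points in xs."""
    xs = sorted(xs, key=f)                     # xs[0] is best, xs[-1] is worst
    xo = np.mean(xs[:-1], axis=0)              # centroid of all but the worst
    xr = xo + alpha * (xo - xs[-1])            # reflection
    if f(xs[0]) <= f(xr) < f(xs[-2]):
        xs[-1] = xr                            # accept the reflected point
    elif f(xr) < f(xs[0]):                     # try expansion
        xe = xo + gamma * (xr - xo)
        xs[-1] = xe if f(xe) < f(xr) else xr
    else:                                      # inside contraction
        xc = xo + rho * (xs[-1] - xo)
        if f(xc) < f(xs[-1]):
            xs[-1] = xc
        else:                                  # shrink towards the best point
            xs = [xs[0]] + [xs[0] + sigma * (x - xs[0]) for x in xs[1:]]
    return xs

# Illustration on the Rosenbrock banana function
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
pts = [np.array(p) for p in [(-1.2, 1.0), (0.0, 1.0), (-1.0, 0.0)]]
for _ in range(500):
    pts = nelder_mead_step(rosen, pts)
print(min(pts, key=rosen))                     # should approach (1, 1)
```

In practice one would use a vetted implementation, for example `scipy.optimize.minimize(rosen, x0, method='Nelder-Mead')`, rather than hand-rolling the loop.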
== Initial simplex ==
The initial simplex is important. Indeed, a too-small initial simplex can lead to a local search; consequently, the NM method can get stuck more easily. This simplex should therefore depend on the nature of the problem. However, the original article suggested a simplex where an initial point is given as $\mathbf{x}_{1}$, with the others generated with a fixed step along each dimension in turn. Thus the method is sensitive to the scaling of the variables that make up $\mathbf{x}$.
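As a sketch, such a starting simplex can be built as follows; the step size is an assumed placeholder that should be adapted to the problem's scaling:

```python
import numpy as np

def initial_simplex(x1, step=0.1):
    """x1 plus one point stepped along each coordinate axis in turn."""
    x1 = np.asarray(x1, dtype=float)
    return [x1] + [x1 + step * e for e in np.eye(len(x1))]
```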
== Termination ==
Criteria are needed to break the iterative cycle. Nelder and Mead used the sample standard deviation of the function values of the current simplex. If these fall below some tolerance, then the cycle is stopped and the lowest point in the simplex returned as a proposed optimum. Note that a very "flat" function may have almost equal function values over a large domain, so that the solution will be sensitive to the tolerance. Nash adds the test for shrinkage as another termination criterion. Note that programs terminate, while iterations may converge.
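A minimal sketch of this standard-deviation test, with an assumed tolerance value:

```python
import numpy as np

def should_stop(f_values, tol=1e-8):
    """Stop when the sample standard deviation of the simplex's function values is small."""
    return np.std(f_values, ddof=1) < tol
```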
== See also ==
== References ==
== Further reading ==
Avriel, Mordecai (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 978-0-486-43227-4.
Coope, I. D.; Price, C. J. (2002). "Positive Bases in Numerical Optimization". Computational Optimization and Applications. 21 (2): 169–176. doi:10.1023/A:1013760716801. S2CID 15947440.
Gill, Philip E.; Murray, Walter; Wright, Margaret H. (1981). "Methods for Multivariate Non-Smooth Functions". Practical Optimization. New York: Academic Press. pp. 93–96. ISBN 978-0-12-283950-4.
Kowalik, J.; Osborne, M. R. (1968). Methods for Unconstrained Optimization Problems. New York: Elsevier. pp. 24–27. ISBN 0-444-00041-0.
Swann, W. H. (1972). "Direct Search Methods". In Murray, W. (ed.). Numerical Methods for Unconstrained Optimization. New York: Academic Press. pp. 13–28. ISBN 978-0-12-512250-4.
== External links ==
Nelder–Mead (Downhill Simplex) explanation and visualization with the Rosenbrock banana function
John Burkardt: Nelder–Mead code in Matlab - note that a variation of the Nelder–Mead method is also implemented by the Matlab function fminsearch.
Nelder-Mead optimization in Python in the SciPy library.
nelder-mead - A Python implementation of the Nelder–Mead method
NelderMead() - A Go/Golang implementation
SOVA 1.0 (freeware) - Simplex Optimization for Various Applications
[1] - HillStormer, a practical tool for nonlinear, multivariate and linear constrained Simplex Optimization by Nelder Mead. | Wikipedia/Nelder-Mead_method |
An adjoint equation is a linear differential equation, usually derived from its primal equation using integration by parts. Gradient values with respect to a particular quantity of interest can be efficiently calculated by solving the adjoint equation. Methods based on solution of adjoint equations are used in wing shape optimization, fluid flow control and uncertainty quantification.
== Example: Advection-Diffusion PDE ==
Consider the following linear, scalar advection-diffusion equation for the primal solution $u(\vec{x})$, in the domain $\Omega$ with Dirichlet boundary conditions:

$$\begin{aligned}\nabla \cdot \left({\vec {c}}u-\mu \nabla u\right)&=f,\qquad {\vec {x}}\in \Omega ,\\u&=b,\qquad {\vec {x}}\in \partial \Omega .\end{aligned}$$
Let the output of interest be the following linear functional:

$$J(u)=\int _{\Omega }gu\,dV.$$
Derive the weak form by multiplying the primal equation with a weighting function $w(\vec{x})$ and performing integration by parts:

$$B(u,w)=L(w),$$
where,

$$\begin{aligned}B(u,w)&=\int _{\Omega }w\,\nabla \cdot \left({\vec {c}}u-\mu \nabla u\right)dV\\&=\int _{\partial \Omega }w\left({\vec {c}}u-\mu \nabla u\right)\cdot {\vec {n}}\,dA-\int _{\Omega }\nabla w\cdot \left({\vec {c}}u-\mu \nabla u\right)dV,\qquad {\text{(integration by parts)}}\\L(w)&=\int _{\Omega }wf\,dV.\end{aligned}$$
Then, consider an infinitesimal perturbation to $L(w)$ which produces an infinitesimal change in $u$ as follows:

$$\begin{aligned}B(u+u',w)&=L(w)+L'(w)\\B(u',w)&=L'(w).\end{aligned}$$
Note that the solution perturbation $u'$ must vanish at the boundary, since the Dirichlet boundary condition does not admit variations on $\partial \Omega$.
Using the weak form above and the definition of the adjoint $\psi(\vec{x})$ given below:

$$\begin{aligned}L'(\psi )&=J(u')\\B(u',\psi )&=J(u'),\end{aligned}$$
we obtain:

$$\int _{\partial \Omega }\psi \left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}\,dA-\int _{\Omega }\nabla \psi \cdot \left({\vec {c}}u'-\mu \nabla u'\right)dV=\int _{\Omega }gu'\,dV.$$
Next, use integration by parts to transfer derivatives of $u'$ into derivatives of $\psi$:
$$\begin{aligned}\int _{\partial \Omega }\psi \left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}\,dA-\int _{\Omega }\nabla \psi \cdot \left({\vec {c}}u'-\mu \nabla u'\right)dV-\int _{\Omega }gu'\,dV&=0\\\int _{\partial \Omega }\psi \left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}\,dA+\int _{\Omega }u'\left(-{\vec {c}}\cdot \nabla \psi \right)dV+\int _{\Omega }\nabla u'\cdot \left(\mu \nabla \psi \right)dV-\int _{\Omega }gu'\,dV&=0\\\int _{\partial \Omega }\psi \left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}\,dA+\int _{\Omega }u'\left(-{\vec {c}}\cdot \nabla \psi \right)dV+\int _{\partial \Omega }u'\left(\mu \nabla \psi \right)\cdot {\vec {n}}\,dA-\int _{\Omega }u'\nabla \cdot \left(\mu \nabla \psi \right)dV-\int _{\Omega }gu'\,dV&=0\qquad {\text{(repeating integration by parts on the diffusion volume term)}}\\\int _{\Omega }u'\left[-{\vec {c}}\cdot \nabla \psi -\nabla \cdot \left(\mu \nabla \psi \right)-g\right]dV+\int _{\partial \Omega }\psi \left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}\,dA+\int _{\partial \Omega }u'\left(\mu \nabla \psi \right)\cdot {\vec {n}}\,dA&=0.\end{aligned}$$
The adjoint PDE and its boundary conditions can be deduced from the last equation above. Since $u'$ is generally non-zero within the domain $\Omega$, it is required that $-{\vec {c}}\cdot \nabla \psi -\nabla \cdot \left(\mu \nabla \psi \right)-g$ be zero in $\Omega$, in order for the volume term to vanish. Similarly, since the primal flux $\left({\vec {c}}u'-\mu \nabla u'\right)\cdot {\vec {n}}$ is generally non-zero at the boundary, we require $\psi$ to be zero there in order for the first boundary term to vanish. The second boundary term vanishes trivially since the primal boundary condition requires $u'=0$ at the boundary.
Therefore, the adjoint problem is given by:

$$\begin{aligned}-{\vec {c}}\cdot \nabla \psi -\nabla \cdot \left(\mu \nabla \psi \right)&=g,\qquad {\vec {x}}\in \Omega ,\\\psi &=0,\qquad {\vec {x}}\in \partial \Omega .\end{aligned}$$
Note that the advection term reverses the sign of the convective velocity $\vec{c}$ in the adjoint equation, whereas the diffusion term remains self-adjoint.
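The practical payoff is that a single adjoint solve yields the sensitivity of $J$ to the source term everywhere at once. The sketch below demonstrates this on a 1-D upwind finite-difference discretization; all grid sizes, coefficient values, and names are illustrative assumptions, and the check rests on the discrete-adjoint identity $\mathrm{d}J/\mathrm{d}f = A^{-T}g\,\Delta x$ rather than on the continuous derivation above.

```python
import numpy as np

n = 99                       # interior grid points on (0, 1)
dx = 1.0 / (n + 1)
c, mu = 1.0, 0.1             # advection speed and diffusivity (assumed values)

# Upwind advection + central diffusion with homogeneous Dirichlet BCs (b = 0)
A = (np.diag((c / dx + 2 * mu / dx**2) * np.ones(n))
     + np.diag((-c / dx - mu / dx**2) * np.ones(n - 1), -1)
     + np.diag((-mu / dx**2) * np.ones(n - 1), 1))

x = np.linspace(dx, 1 - dx, n)
f = np.sin(np.pi * x)        # source term (assumed)
g = np.ones(n)               # output weight: J approximates the integral of u

u = np.linalg.solve(A, f)            # primal solve
psi = np.linalg.solve(A.T, dx * g)   # adjoint solve: A^T psi = dx * g

# psi[i] is dJ/df_i; verify one component against a finite difference
i, eps = n // 2, 1e-6
J = dx * g @ u
f2 = f.copy()
f2[i] += eps
J2 = dx * g @ np.linalg.solve(A, f2)
print(psi[i], (J2 - J) / eps)        # the two numbers should agree closely
```

Note how the transposed matrix plays the role of the adjoint operator: its sub- and super-diagonals are swapped, which is the discrete counterpart of the reversed advection velocity noted above.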
== See also ==
Adjoint state method
Costate equations
== References ==
Jameson, Antony (1988). "Aerodynamic Design via Control Theory". Journal of Scientific Computing. 3 (3): 233–260. doi:10.1007/BF01061285. hdl:2060/19890004037. S2CID 7782485. | Wikipedia/Adjoint_equation |
In metallurgy, solid solution strengthening is a type of alloying that can be used to improve the strength of a pure metal. The technique works by adding atoms of one element (the alloying element) to the crystalline lattice of another element (the base metal), forming a solid solution. The local nonuniformity in the lattice due to the alloying element makes plastic deformation more difficult by impeding dislocation motion through stress fields. In contrast, alloying beyond the solubility limit can form a second phase, leading to strengthening via other mechanisms (e.g. the precipitation of intermetallic compounds).
== Types ==
Depending on the size of the alloying element, a substitutional solid solution or an interstitial solid solution can form. In both cases, atoms are visualised as rigid spheres where the overall crystal structure is essentially unchanged. The rationale of crystal geometry to atom solubility prediction is summarized in the Hume-Rothery rules and Pauling's rules.
Substitutional solid solution strengthening occurs when the solute atom is large enough that it can replace solvent atoms in their lattice positions. Some alloying elements are only soluble in small amounts, whereas some solvent and solute pairs form a solution over the whole range of binary compositions. Generally, higher solubility is seen when solvent and solute atoms are similar in atomic size (15% according to the Hume-Rothery rules) and adopt the same crystal structure in their pure form. Examples of completely miscible binary systems are Cu-Ni and the Ag-Au face-centered cubic (FCC) binary systems, and the Mo-W body-centered cubic (BCC) binary system.
Interstitial solid solutions form when the solute atom is small enough (radii up to 57% of the radii of the parent atoms) to fit at interstitial sites between the solvent atoms. The atoms crowd into the interstitial sites, causing the bonds of the solvent atoms to compress and thus deform (this rationale can be explained with Pauling's rules). Elements commonly used to form interstitial solid solutions include H, Li, Na, N, C, and O. Carbon in iron (steel) is one example of an interstitial solid solution.
== Mechanism ==
The strength of a material is dependent on how easily dislocations in its crystal lattice can be propagated. These dislocations create stress fields within the material depending on their character. When solute atoms are introduced, local stress fields are formed that interact with those of the dislocations, impeding their motion and causing an increase in the yield stress of the material, which means an increase in strength of the material. This gain is a result of both lattice distortion and the modulus effect.
When solute and solvent atoms differ in size, local stress fields are created that can attract or repel dislocations in their vicinity. This is known as the size effect. By relieving tensile or compressive strain in the lattice, the solute size mismatch can put the dislocation in a lower energy state. In substitutional solid solutions, these stress fields are spherically symmetric, meaning they have no shear stress component. As such, substitutional solute atoms do not interact with the shear stress fields characteristic of screw dislocations. Conversely, in interstitial solid solutions, solute atoms cause a tetragonal distortion, generating a shear field that can interact with edge, screw, and mixed dislocations. The attraction or repulsion of the dislocation to the solute atom depends on whether the atom sits above or below the slip plane. For example, consider an edge dislocation encountering a smaller solute atom above its slip plane. In this case, the interaction energy is negative, resulting in attraction of the dislocation to the solute, because the compressed volume lying above the dislocation core reduces the dislocation energy. If the solute atom were positioned below the slip plane, the dislocation would be repelled by the solute. However, the overall interaction energy between an edge dislocation and a smaller solute is negative, because the dislocation spends more time at sites with attractive energy. This is also true for a solute atom larger than the solvent atom. Thus, the interaction energy dictated by the size effect is generally negative.
The elastic modulus of the solute atom can also determine the extent of strengthening. For a "soft" solute with an elastic modulus lower than that of the solvent, the interaction energy due to modulus mismatch (Umodulus) is negative, which reinforces the size interaction energy (Usize). In contrast, Umodulus is positive for a "hard" solute, which results in a lower total interaction energy than for a soft atom, even though the interaction force is negative (attractive) in both cases as the dislocation approaches the solute. The maximum force (Fmax) necessary to tear the dislocation away from the lowest energy state (i.e. the solute atom) is therefore greater for the soft solute than for the hard one. As a result, a soft solute will strengthen a crystal more than a hard solute, due to the synergistic strengthening from combining the size and modulus effects.
The elastic interaction effects (i.e. size and modulus effects) dominate solid-solution strengthening for most crystalline materials. However, other effects, including charge and stacking fault effects, may also play a role. For ionic solids, where electrostatic interaction dictates bond strength, the charge effect is also important. For example, the addition of a divalent ion to a monovalent material may strengthen the electrostatic interaction between the solute and the charged matrix atoms that comprise a dislocation. However, this strengthening is weaker than the elastic strengthening effects. For materials containing a higher density of stacking faults, solute atoms may interact with the stacking faults either attractively or repulsively. This lowers the stacking fault energy, leading to repulsion of the partial dislocations, which thus makes the material stronger.
Surface carburizing, or case hardening, is one example of solid solution strengthening in which the density of solute carbon atoms is increased close to the surface of the steel, resulting in a gradient of carbon atoms throughout the material. This provides superior mechanical properties to the surface of the steel without having to use a higher-cost material for the component.
== Governing equations ==
Solid solution strengthening increases the yield strength of the material by increasing the shear stress, $\tau$, required to move dislocations:

$$\Delta \tau =Gb\,\epsilon ^{3/2}{\sqrt {c}}$$
where $c$ is the concentration of the solute atoms, $G$ is the shear modulus, $b$ is the magnitude of the Burgers vector, and $\epsilon$ is the lattice strain due to the solute. This is composed of two terms, one describing lattice distortion and the other the local modulus change.
$$\epsilon =|\epsilon _{G}-\beta \epsilon _{a}|$$

Here, $\epsilon _{G}$ is the term that captures the local modulus change, $\beta$ is a constant dependent on the solute atoms, and $\epsilon _{a}$ is the lattice distortion term.
The lattice distortion term can be described as:

$$\epsilon _{a}={\frac {\Delta a}{a\,\Delta c}},$$

where $a$ is the lattice parameter of the material.
Meanwhile, the local modulus change is captured in the following expression:

$$\epsilon _{G}={\frac {\Delta G}{G\,\Delta c}},$$

where $G$ is the shear modulus of the solute material.
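To see how the two terms combine, here is a small sketch evaluating the relations above for two hypothetical solutes; every numerical value is an assumed placeholder rather than measured data.

```python
import math

beta = 3.0                    # solute-dependent constant (assumed)
c = 0.02                      # solute concentration in atomic fraction
solutes = {
    # name: (eps_G, eps_a), both per unit concentration (assumed values)
    "soft, oversized solute":  (-0.6,  0.10),
    "hard, undersized solute": ( 0.4, -0.05),
}

for name, (eps_G, eps_a) in solutes.items():
    eps = abs(eps_G - beta * eps_a)       # combined mismatch parameter
    rel = eps**1.5 * math.sqrt(c)         # proportional to delta tau / (G b)
    print(f"{name}: eps = {eps:.3f}, relative strengthening = {rel:.4f}")
```

Consistent with the discussion above, the soft solute here produces the larger combined mismatch and hence the greater relative strengthening.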
== Implications ==
In order to achieve noticeable material strengthening via solution strengthening, one should alloy with solutes of higher shear modulus, hence increasing the local shear modulus in the material. In addition, one should alloy with elements of different equilibrium lattice constants. The greater the difference in lattice parameter, the higher the local stress fields introduced by alloying.
Alloying with elements of higher shear modulus or of very different lattice parameters will increase the stiffness and introduce local stress fields respectively. In either case, the dislocation propagation will be hindered at these sites, impeding plasticity and increasing yield strength proportionally with solute concentration.
Solid solution strengthening depends on:
Concentration of solute atoms
Shear modulus of solute atoms
Size of solute atoms
Valency of solute atoms (for ionic materials)
For many common alloys, rough experimental fits can be found for the additional strengthening in the form:
$$\Delta \sigma _{s}=k_{s}{\sqrt {c}}$$

where $k_{s}$ is a solid solution strengthening coefficient and $c$ is the concentration of solute in atomic fractions.
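As a sketch of how such a fit might be obtained, the coefficient $k_{s}$ can be estimated by least squares from measured strength increments; the data points below are invented for illustration.

```python
import numpy as np

c = np.array([0.005, 0.01, 0.02, 0.04])        # solute atomic fractions (invented)
d_sigma = np.array([11.0, 15.2, 22.1, 30.8])   # strength increments in MPa (invented)

sqrt_c = np.sqrt(c)
k_s = float(sqrt_c @ d_sigma / (sqrt_c @ sqrt_c))   # least-squares slope for d_sigma = k_s * sqrt(c)
print(f"k_s is approximately {k_s:.1f} MPa per square root of atomic fraction")
```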
Nevertheless, one should not add so much solute as to precipitate a new phase. This occurs if the concentration of the solute reaches a certain critical point given by the binary system phase diagram. This critical concentration therefore puts a limit to the amount of solid solution strengthening that can be achieved with a given material.
== Examples ==
=== Aluminum alloys ===
Aluminum alloys offer an example in which solid solution strengthening occurs by adding magnesium and manganese to the aluminum matrix. Commercially, Mn is added in the AA3xxx series and Mg in the AA5xxx series. Mn addition to aluminum alloys assists in the recrystallization and recovery of the alloy, which influences the grain size as well. Both of these systems are used in low- to medium-strength applications, with appreciable formability and corrosion resistance.
=== Nickel-based superalloys ===
Many nickel-based superalloys depend on solid solution as a strengthening mechanism. The most popular example is the Inconel family; many of these alloys contain chromium and iron, with further additions of cobalt, molybdenum, niobium, and titanium. The nickel-based superalloys are well known for their intensive industrial use, especially in the aeronautical and aerospace industries, due to their superior mechanical and corrosion properties at high temperatures.
An example of the use of nickel-based superalloys in industry is turbine blades. In practice, one such alloy, known as MAR-M200, is solid solution strengthened by chromium, tungsten, and cobalt in the matrix and is also precipitation hardened by carbide and boride precipitates at the grain boundaries. A key factor for these turbine blades is the grain size: an increase in grain size can lead to a significant reduction in the creep strain rate. An example of this reduced strain rate in MAR-M200 can be seen in the figures to the right, where the bottom figure corresponds to a grain size of 100 μm and the top figure to a grain size of 10 mm.
This reduced strain rate is extremely important for turbine blade operation, because the blades undergo significant mechanical stress at high temperatures, which can lead to the onset of creep deformation. Precise control of grain size in nickel-based superalloys is therefore key to creep resistance, mechanical reliability, and longevity. Grain size can be controlled through manufacturing techniques such as directional solidification and single-crystal casting.
=== Stainless steel ===
Stainless steel is one of the most commonly used metals in many industries, and solid solution strengthening is one of the mechanisms used to enhance the properties of the alloy. Austenitic steels mainly contain chromium, nickel, molybdenum, and manganese. They are used mostly for cookware, kitchen equipment, and marine applications because of their good corrosion resistance in saline environments.
=== Titanium alloys ===
Titanium and titanium alloys are widely used in aerospace, medical, and maritime applications. The best-known titanium alloy that adopts solid solution strengthening is Ti-6Al-4V. The addition of oxygen to pure Ti also provides solid solution strengthening, whereas adding it to the Ti-6Al-4V alloy does not have the same influence.
=== Copper alloys ===
Bronze and brass are both solid-solution-strengthened copper alloys. Bronze is the result of adding about 12% tin to copper, while brass is the result of adding about 34% zinc to copper. Both of these alloys are used in coin production, ship hardware, and art.
== See also ==
Strength of materials
Strengthening mechanisms of materials
== References ==
== External links ==
The Strengthening of Iron and Steel | Wikipedia/Solid_solution_strengthening |
Methods have been devised to modify the yield strength, ductility, and toughness of both crystalline and amorphous materials. These strengthening mechanisms give engineers the ability to tailor the mechanical properties of materials to suit a variety of different applications. For example, the favorable properties of steel result from interstitial incorporation of carbon into the iron lattice. Brass, a binary alloy of copper and zinc, has superior mechanical properties compared to its constituent metals due to solution strengthening. Work hardening (such as beating a red-hot piece of metal on an anvil) has also been used for centuries by blacksmiths to introduce dislocations into materials, increasing their yield strengths.
== Basic description ==
Plastic deformation occurs when large numbers of dislocations move and multiply so as to result in macroscopic deformation. In other words, it is the movement of dislocations in the material which allows for deformation. To enhance a material's mechanical properties (i.e. increase the yield and tensile strength), one needs to introduce a mechanism that inhibits the mobility of these dislocations. Whatever the mechanism may be (work hardening, grain size reduction, etc.), it hinders dislocation motion and renders the material stronger than before.
The stress required to cause dislocation motion is orders of magnitude lower than the theoretical stress required to shift an entire plane of atoms, so this mode of stress relief is energetically favorable. Hence, the hardness and strength (both yield and tensile) critically depend on the ease with which dislocations move. Pinning points, or locations in the crystal that oppose the motion of dislocations, can be introduced into the lattice to reduce dislocation mobility, thereby increasing mechanical strength. Dislocations may be pinned by stress field interactions with other dislocations and solute particles, or by physical barriers from second-phase precipitates forming along grain boundaries. There are five main strengthening mechanisms for metals, each of which is a method to prevent dislocation motion and propagation, or to make it energetically unfavorable for the dislocation to move. For a material that has been strengthened by some processing method, the amount of force required to start irreversible (plastic) deformation is greater than it was for the original material.
In amorphous materials such as polymers, amorphous ceramics (glass), and amorphous metals, the lack of long range order leads to yielding via mechanisms such as brittle fracture, crazing, and shear band formation. In these systems, strengthening mechanisms do not involve dislocations, but rather consist of modifications to the chemical structure and processing of the constituent material.
The strength of materials cannot infinitely increase. Each of the mechanisms explained below involves some trade-off by which other material properties are compromised in the process of strengthening.
== Strengthening mechanisms in metals ==
=== Work hardening ===
The primary species responsible for work hardening are dislocations. Dislocations interact with each other by generating stress fields in the material. The interaction between the stress fields of dislocations can impede dislocation motion by repulsive or attractive interactions. Additionally, if two dislocations cross, dislocation line entanglement occurs, causing the formation of a jog which opposes dislocation motion. These entanglements and jogs act as pinning points, which oppose dislocation motion. As both of these processes are more likely to occur when more dislocations are present, there is a correlation between dislocation density and shear strength.
The shear strengthening provided by dislocation interactions can be described by:
$$\Delta \tau _{d}=\alpha Gb{\sqrt {\rho _{\perp }}}$$

where $\alpha$ is a proportionality constant, $G$ is the shear modulus, $b$ is the Burgers vector, and $\rho _{\perp }$ is the dislocation density.
Dislocation density is defined as the dislocation line length per unit volume:

$$\rho _{\perp }={\frac {\ell }{\ell ^{3}}}$$
Similarly, the axial strengthening will be proportional to the dislocation density:

$$\Delta \sigma _{y}\propto Gb{\sqrt {\rho _{\perp }}}$$
This relationship does not apply when dislocations form cell structures. When cell structures are formed, the average cell size controls the strengthening effect.
Increasing the dislocation density increases the yield strength, resulting in a higher shear stress required to move the dislocations. This process is easily observed while working a material (by cold working, in metals). Theoretically, the strength of a material with no dislocations would be extremely high (σ ≈ G/10) because plastic deformation would require the simultaneous breaking of many bonds. However, at moderate dislocation densities of around 10⁷–10⁹ dislocations/m², the material exhibits a significantly lower mechanical strength. Analogously, it is easier to move a rubber rug across a surface by propagating a small ripple through it than by dragging the whole rug. At dislocation densities of 10¹⁴ dislocations/m² or higher, the strength of the material becomes high once again. The dislocation density cannot be increased without limit, however, because the material would then lose its crystalline structure.
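To make the scaling concrete, here is a minimal Python sketch of the Taylor-type relation above; the values of α, G, b, and the dislocation densities are rough, copper-like assumptions chosen purely for illustration.

```python
import math

# Taylor-type work hardening: delta_tau = alpha * G * b * sqrt(rho).
# All numeric values are illustrative assumptions (roughly copper-like).
def taylor_hardening(alpha: float, G: float, b: float, rho: float) -> float:
    """Shear-strength increase (Pa) from a forest dislocation density rho (m^-2)."""
    return alpha * G * b * math.sqrt(rho)

G_CU = 48e9      # shear modulus, Pa (assumed)
B_CU = 0.256e-9  # Burgers vector magnitude, m (assumed)

for rho in (1e10, 1e12, 1e14):
    dtau = taylor_hardening(0.3, G_CU, B_CU, rho)
    print(f"rho = {rho:.0e} m^-2 -> delta_tau = {dtau / 1e6:6.1f} MPa")
```

Note how the square-root dependence means a hundredfold increase in dislocation density raises the strengthening only tenfold, consistent with the gradual hardening observed during cold work.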
=== Solid solution strengthening and alloying ===
For this strengthening mechanism, solute atoms of one element are added to another, resulting in either substitutional or interstitial point defects in the crystal (see Figure on the right). The solute atoms cause lattice distortions that impede dislocation motion, increasing the yield stress of the material. Solute atoms have stress fields around them which can interact with those of dislocations. The presence of solute atoms imparts compressive or tensile stresses to the lattice, depending on solute size, which interfere with nearby dislocations, causing the solute atoms to act as potential barriers.
The shear stress required to move dislocations in a material is:
{\displaystyle \Delta \tau =Gb{\sqrt {c}}\epsilon ^{3/2}}
where c is the solute concentration and ε is the lattice strain caused by the solute.
Increasing the concentration of the solute atoms will increase the yield strength of a material, but there is a limit to the amount of solute that can be added; the phase diagram of the material and alloy should be consulted to ensure that a second phase is not created.
In general, the solid solution strengthening depends on the concentration of the solute atoms, shear modulus of the solute atoms, size of solute atoms, valency of solute atoms (for ionic materials), and the symmetry of the solute stress field. The magnitude of strengthening is higher for non-symmetric stress fields because these solutes can interact with both edge and screw dislocations, whereas symmetric stress fields, which cause only volume change and not shape change, can only interact with edge dislocations.
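Since the prefactor conventions for this relation vary across the literature, the sketch below illustrates only the scaling Δτ ∝ √c · ε^(3/2); the concentrations and misfit strain are hypothetical values.

```python
# Scaling sketch for solid-solution strengthening: delta_tau ∝ sqrt(c) * eps**1.5.
# c is the solute atomic fraction, eps the solute-induced lattice strain;
# both values below are hypothetical.
def relative_strengthening(c: float, eps: float) -> float:
    return (c ** 0.5) * (eps ** 1.5)

base = relative_strengthening(c=0.01, eps=0.05)     # 1 at.% solute
doubled = relative_strengthening(c=0.02, eps=0.05)  # 2 at.% solute
print(f"doubling c multiplies the strengthening by {doubled / base:.3f}")  # sqrt(2)
```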
=== Precipitation hardening ===
In most binary systems, alloying above a concentration given by the phase diagram will cause the formation of a second phase. A second phase can also be created by mechanical or thermal treatments. The particles that compose the second phase precipitates act as pinning points in a similar manner to solutes, though the particles are not necessarily single atoms.
The dislocations in a material can interact with the precipitate particles in one of two ways (see Figure 2). If the precipitate particles are small, the dislocations would cut through them. As a result, new surfaces (b in Figure 2) of the particle would be exposed to the matrix and the particle-matrix interfacial energy would increase. For larger precipitate particles, looping or bowing of the dislocations would occur, resulting in the dislocations getting longer. Hence, at a critical radius of about 5 nm, dislocations will preferably cut across the obstacle, while at a radius of about 30 nm, the dislocations will readily bow or loop to overcome it.
The mathematical descriptions are as follows:
For particle bowing:
{\displaystyle \Delta \tau ={Gb \over L-2r}}
For particle cutting:
{\displaystyle \Delta \tau ={\gamma \pi r \over bL}}
where L is the mean spacing between particles, r is the particle radius, and γ is the particle-matrix interfacial energy.
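Whichever mechanism requires the lower stress is the one that operates, so the crossover from cutting to bowing with increasing particle radius can be illustrated numerically. The shear modulus, Burgers vector, interfacial energy, and particle spacing below are assumed, illustrative values, not data for any specific alloy.

```python
import math

G = 48e9      # shear modulus, Pa (assumed)
b = 0.25e-9   # Burgers vector, m (assumed)
gamma = 0.1   # particle-matrix interfacial energy, J/m^2 (assumed)
L = 100e-9    # mean particle spacing, m (assumed)

for r in (2e-9, 5e-9, 15e-9, 30e-9):         # particle radius
    bowing = G * b / (L - 2 * r)              # Orowan bowing stress
    cutting = gamma * math.pi * r / (b * L)   # particle cutting stress
    mode = "cutting" if cutting < bowing else "bowing"
    print(f"r = {r * 1e9:4.0f} nm: bowing {bowing / 1e6:6.1f} MPa, "
          f"cutting {cutting / 1e6:6.1f} MPa -> {mode}")
```

With these assumed numbers the cutting stress grows linearly with r while the bowing stress grows only as the spacing closes, reproducing the qualitative picture above: small particles are cut, large ones are bowed around.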
=== Dispersion strengthening ===
Dispersion strengthening is a type of particulate strengthening in which incoherent precipitates attract and pin dislocations. These particles are typically larger than those in the Orowan precipitation hardening discussed above. Dispersion strengthening remains effective at high temperatures, whereas precipitation strengthening from heat treatments is typically limited to temperatures much lower than the melting temperature of the material. One common type of dispersion strengthening is oxide dispersion strengthening.
=== Grain boundary strengthening ===
In a polycrystalline metal, grain size has a tremendous influence on the mechanical properties. Because grains usually have varying crystallographic orientations, grain boundaries arise. While undergoing deformation, slip motion will take place. Grain boundaries act as an impediment to dislocation motion for the following two reasons:
1. A dislocation must change its direction of motion due to the differing orientations of adjacent grains.
2. Slip planes are discontinuous from one grain to the next.
The stress required to move a dislocation from one grain to another in order to plastically deform a material depends on the grain size. The average number of dislocations per grain decreases with decreasing average grain size (see Figure 3). A lower number of dislocations per grain results in a lower dislocation 'pressure' building up at grain boundaries, making it more difficult for dislocations to move into adjacent grains. This relationship is the Hall-Petch relationship and can be described mathematically as follows:
{\displaystyle \sigma _{y}=\sigma _{y,0}+{k \over {d^{x}}}}
where k is a constant, d is the average grain diameter, σ_y,0 is the original yield stress, and the exponent x is commonly taken to be 1/2.
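A short numerical sketch of the relation, taking the common exponent x = 1/2; the friction stress and Hall-Petch coefficient below are rough, assumed values of the kind reported for mild steels.

```python
import math

# Hall-Petch relation with x = 1/2: sigma_y = sigma_0 + k / sqrt(d).
def hall_petch(sigma_0: float, k: float, d: float) -> float:
    """Yield strength (MPa) for average grain diameter d (m)."""
    return sigma_0 + k / math.sqrt(d)

SIGMA_0 = 70.0  # friction stress, MPa (assumed)
K = 0.74        # Hall-Petch coefficient, MPa*m^0.5 (assumed)

for d_um in (100, 10, 1):
    sigma = hall_petch(SIGMA_0, K, d_um * 1e-6)
    print(f"d = {d_um:3d} um -> sigma_y = {sigma:5.0f} MPa")
```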
The fact that yield strength increases with decreasing grain size comes with the caveat that the grain size cannot be decreased indefinitely. As the grain size decreases, more free volume is generated, resulting in lattice mismatch. Below approximately 10 nm, the grain boundaries tend to slide instead, a phenomenon known as grain-boundary sliding. If the grain size gets too small, it becomes more difficult to fit dislocations within a grain, and the stress required to move them drops. It was not possible to produce materials with grain sizes below 10 nm until recently, so the discovery that strength decreases below a critical grain size is still finding new applications.
=== Transformation hardening ===
This method of hardening is used for steels.
High-strength steels generally fall into three basic categories, classified by the strengthening mechanism employed.
1. solid-solution-strengthened steels (rephos steels)
2. grain-refined steels, or high-strength low-alloy (HSLA) steels
3. transformation-hardened steels
Transformation-hardened steels are the third type of high-strength steels. These steels use predominantly higher levels of C and Mn along with heat treatment to increase strength. The finished product has a duplex microstructure of ferrite with varying levels of degenerate martensite, which allows for varying levels of strength. There are three basic types of transformation-hardened steels: dual-phase (DP), transformation-induced plasticity (TRIP), and martensitic steels.
The annealing process for dual-phase steels consists of first holding the steel in the α + γ temperature region for a set period of time. During that time, C and Mn diffuse into the austenite, leaving a ferrite of greater purity. The steel is then quenched so that the austenite transforms into martensite while the ferrite remains on cooling. The steel is then subjected to a temper cycle to allow some level of martensite decomposition. By controlling the amount of martensite in the steel, as well as the degree of tempering, the strength level can be controlled. Depending on processing and chemistry, the strength level can range from 350 to 960 MPa.
TRIP steels also use C and Mn, along with heat treatment, to retain small amounts of austenite and bainite in a ferrite matrix. Thermal processing for TRIP steels again involves annealing the steel in the α + γ region for a period of time sufficient to allow C and Mn to diffuse into austenite. The steel is then quenched to a point above the martensite start temperature and held there, allowing the formation of bainite, an austenite decomposition product. While at this temperature, more C is allowed to enrich the retained austenite, which in turn lowers the martensite start temperature to below room temperature. Upon final quenching, a metastable austenite is retained in the predominantly ferrite matrix, along with small amounts of bainite and other forms of decomposed austenite. This combination of microstructures has the added benefits of higher strength and resistance to necking during forming, offering great improvements in formability over other high-strength steels. Essentially, as a TRIP steel is formed, it becomes much stronger. Tensile strengths of TRIP steels are in the range of 600–960 MPa.
Martensitic steels are also high in C and Mn. These are fully quenched to martensite during processing. The martensite structure is then tempered back to the appropriate strength level, adding toughness to the steel. Tensile strengths for these steels range as high as 1500 MPa.
== Strengthening mechanisms in amorphous materials ==
=== Polymer ===
Polymers fracture via breaking of inter- and intramolecular bonds; hence, the chemical structure of these materials plays a major role in increasing strength. For polymers consisting of chains which easily slide past each other, chemical and physical cross-linking can be used to increase rigidity and yield strength. In thermoset polymers (thermosetting plastic), disulfide bridges and other covalent cross-links give rise to a hard structure which can withstand very high temperatures. These cross-links are particularly helpful in improving the tensile strength of materials which contain much free volume prone to crazing, typically glassy brittle polymers. In thermoplastic elastomers, phase separation of dissimilar monomer components leads to association of hard domains within a sea of soft phase, yielding a physical structure with increased strength and rigidity. If yielding occurs by chains sliding past each other (shear bands), the strength can also be increased by introducing kinks into the polymer chains via unsaturated carbon-carbon bonds.
Adding filler materials such as fibers, platelets, and particles is a commonly employed technique for strengthening polymer materials. Fillers such as clay, silica, and carbon network materials have been extensively researched and used in polymer composites in part due to their effect on mechanical properties. Stiffness-confinement effects near rigid interfaces, such as those between a polymer matrix and stiffer filler materials, enhance the stiffness of composites by restricting polymer chain motion. This is especially pronounced where fillers are chemically treated to interact strongly with polymer chains, increasing the anchoring of polymer chains to the filler interfaces and thus further restricting the motion of chains away from the interface. Stiffness-confinement effects have been characterized in model nanocomposites, and these studies show that composites with length scales on the order of nanometers dramatically increase the effect of the fillers on polymer stiffness.
Increasing the bulkiness of the monomer unit via incorporation of aryl rings is another strengthening mechanism. The anisotropy of the molecular structure means that these mechanisms are heavily dependent on the direction of applied stress. While aryl rings drastically increase rigidity along the direction of the chain, these materials may still be brittle in perpendicular directions. Macroscopic structure can be adjusted to compensate for this anisotropy. For example, the high strength of Kevlar arises from a stacked multilayer macrostructure where aromatic polymer layers are rotated with respect to their neighbors. When loaded oblique to the chain direction, ductile polymers with flexible linkages, such as oriented polyethylene, are highly prone to shear band formation, so macroscopic structures which place the load parallel to the draw direction would increase strength.
Mixing polymers is another method of increasing strength, particularly with materials that show crazing preceding brittle fracture such as atactic polystyrene (APS). For example, by forming a 50/50 mixture of APS with polyphenylene oxide (PPO), this embrittling tendency can be almost completely suppressed, substantially increasing the fracture strength.
Interpenetrating polymer networks (IPNs), consisting of interlacing crosslinked polymer networks that are not covalently bonded to one another, can lead to enhanced strength in polymer materials. The use of an IPN approach imposes compatibility (and thus macroscale homogeneity) on otherwise immiscible blends, allowing for a blending of mechanical properties. For example, silicone-polyurethane IPNs show increased tear and flexural strength over base silicone networks, while preserving the high elastic recovery of the silicone network at high strains. Increased stiffness can also be achieved by pre-straining polymer networks and then sequentially forming a secondary network within the strained material. This takes advantage of the anisotropic strain hardening of the original network (chain alignment from stretching of the polymer chains) and provides a mechanism whereby the two networks transfer stress to one another due to the imposed strain on the pre-strained network.
=== Glass ===
Many silicate glasses are strong in compression but weak in tension. By introducing compression stress into the structure, the tensile strength of the material can be increased. This is typically done via two mechanisms: thermal treatment (tempering) or chemical bath (via ion exchange).
In tempered glasses, air jets are used to rapidly cool the top and bottom surfaces of a softened (hot) slab of glass. Since the surface cools quicker, there is more free volume at the surface than in the bulk melt. The core of the slab then pulls the surface inward, resulting in an internal compressive stress at the surface. This substantially increases the tensile strength of the material as tensile stresses exerted on the glass must now resolve the compressive stresses before yielding.
{\displaystyle \sigma _{y,modified}=\sigma _{y,0}+\sigma _{compressive}}
Alternately, in chemical treatment, a glass slab containing network formers and modifiers is submerged into a molten salt bath containing ions larger than those present in the modifier. Due to the concentration gradient of the ions, mass transport takes place: as the larger cations diffuse from the molten salt into the surface, they replace the smaller modifier ions. The larger ions squeezing into the surface introduce a compressive stress in the glass's surface. A common example is the treatment of sodium oxide modified silicate glass in molten potassium nitrate, in which larger potassium ions replace smaller sodium ions near the surface.
Examples of chemically strengthened glass are Gorilla Glass developed and manufactured by Corning, AGC Inc.'s Dragontrail and Schott AG's Xensation.
== Composite strengthening ==
Many of the basic strengthening mechanisms can be classified based on their dimensionality. At 0-D there is precipitate and solid solution strengthening, with particulates as the strengthening phase; at 1-D there is work/forest hardening, with line dislocations as the hardening mechanism; and at 2-D there is grain boundary strengthening, with the surface energy of granular interfaces providing strength improvement. The two primary types of composite strengthening, fiber reinforcement and laminar reinforcement, fall in the 1-D and 2-D classes, respectively. The anisotropy of fiber and laminar composite strength reflects these dimensionalities. The primary idea behind composite strengthening is to combine materials with complementary strengths and weaknesses to create a material which transfers load onto the stiffer material but benefits from the ductility and toughness of the softer material.
=== Fiber reinforcement ===
Fiber-reinforced composites (FRCs) consist of a matrix of one material containing parallel embedded fibers. There are two variants of fiber-reinforced composites, one with stiff fibers and a ductile matrix and one with ductile fibers and a stiff matrix. The former variant is exemplified by fiberglass, which contains very strong but delicate glass fibers embedded in a softer plastic matrix resilient to fracture. The latter variant is found in almost all buildings as reinforced concrete, with ductile, high tensile-strength steel rods embedded in brittle, high compressive-strength concrete. In both cases, the matrix and fibers have complementary mechanical properties, and the resulting composite material is therefore more practical for applications in the real world.
For a composite containing aligned, stiff fibers which span the length of the material and a soft, ductile matrix, the following descriptions provide a rough model.
==== Four stages of deformation ====
The condition of a fiber-reinforced composite under applied tensile stress along the direction of the fibers can be decomposed into four stages, from small strain to large strain. Since the stress is parallel to the fibers, the deformation is described by the isostrain condition, i.e., the fiber and matrix experience the same strain. At each stage, the composite stress σ_c is given in terms of the volume fractions of the fiber and matrix (V_f, V_m), the Young's moduli of the fiber and matrix (E_f, E_m), the strain of the composite ε_c, and the stresses of the fiber and matrix as read from a stress-strain curve, σ_f(ε_c) and σ_m(ε_c).
Stage 1: Both fiber and matrix remain in the elastic strain regime. In this stage, we also note that the composite Young's modulus is a simple weighted sum of the two component moduli.
{\displaystyle \sigma _{c}=V_{f}\epsilon _{c}E_{f}+V_{m}\epsilon _{c}E_{m}}
{\displaystyle E_{c}=V_{f}E_{f}+V_{m}E_{m}}
Stage 2: The fiber remains in the elastic regime but the matrix yields and plastically deforms.
{\displaystyle \sigma _{c}=V_{f}\epsilon _{c}E_{f}+V_{m}\sigma _{m}(\epsilon _{c})}
Stage 3: Both fiber and matrix yield and plastically deform. This stage often features significant Poisson strain, which is not captured by the model below.
{\displaystyle \sigma _{c}=V_{f}\sigma _{f}(\epsilon _{c})+V_{m}\sigma _{m}(\epsilon _{c})}
Stage 4: The fiber fractures while the matrix continues to plastically deform. While in reality the fractured pieces of fiber still contribute some strength, that contribution is left out of this simple model.
{\displaystyle \sigma _{c}\approx V_{m}\sigma _{m}(\epsilon _{c})}
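The four stages can be collected into one small piecewise model. The sketch below assumes an elastic fiber and an elastic-perfectly-plastic matrix with made-up property values; sigma_f and sigma_m stand in for the stress-strain curves that would be read from experiment.

```python
# Isostrain rule-of-mixtures model for the four stages above.
# All material values are illustrative assumptions.
def composite_stress(Vf, sigma_f, sigma_m, eps, fiber_fractured=False):
    """Composite stress (Pa) at strain eps under isostrain loading."""
    Vm = 1.0 - Vf
    if fiber_fractured:                # stage 4: matrix alone carries the load
        return Vm * sigma_m(eps)
    return Vf * sigma_f(eps) + Vm * sigma_m(eps)  # stages 1-3

Ef, Em, matrix_yield = 70e9, 3e9, 50e6         # Pa (assumed)
sigma_f = lambda e: Ef * e                      # fiber stays elastic
sigma_m = lambda e: min(Em * e, matrix_yield)   # matrix yields at 50 MPa

# At eps = 0.02 the matrix has yielded (stage 2): ~590 MPa.
print(composite_stress(0.4, sigma_f, sigma_m, eps=0.02) / 1e6, "MPa")
```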
==== Tensile strength ====
Due to the heterogeneous nature of FRCs, they also feature multiple tensile strengths (TS), one corresponding to each component. Given the assumptions outlined above, the first tensile strength corresponds to failure of the fibers, with some support from the matrix plastic deformation strength, and the second to failure of the matrix.
{\displaystyle TS_{1}=V_{f}TS_{f}+V_{m}\sigma _{m}(\epsilon _{c})}
{\displaystyle TS_{2}=V_{m}TS_{m}}
==== Anisotropy (Orientation effects) ====
As a result of the aforementioned dimensionality (1-D) of fiber reinforcement, significant anisotropy is observed in its mechanical properties. The following equations model the tensile strength of a FRC as a function of the misalignment angle θ between the fibers and the applied force, the stresses in the parallel (θ = 0°) and perpendicular (θ = 90°) cases (σ_∥ and σ_⊥), and the shear strength of the matrix (τ_my).
Small Misalignment Angle (longitudinal fracture)
The angle is small enough to maintain load transfer onto the fibers and to prevent delamination of the fibers. The misaligned stress samples a slightly larger cross-sectional area of the fiber, so the strength of the fiber is not just maintained but actually increases compared to the parallel case.
{\displaystyle TS(\theta )={\frac {\sigma _{||}}{\cos ^{2}(\theta )}}}
Significant Misalignment Angle (shear failure)
The angle is large enough that the load is not effectively transferred to the fibers and the matrix experiences enough strain to fracture.
{\displaystyle TS(\theta )={\frac {\tau _{my}}{\sin(\theta )\cos(\theta )}}}
Near Perpendicular Misalignment Angle (transverse fracture)
The angle is close to 90°, so most of the load remains in the matrix and tensile transverse matrix fracture is the dominant failure condition. This can be seen as complementary to the small-angle case, with a similar form but with the angle 90° − θ.
{\displaystyle TS(\theta )={\frac {\sigma _{\perp }}{\sin ^{2}(\theta )}}}
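Since the operative failure mode at any angle is the one requiring the lowest stress, the three expressions are commonly combined by taking their minimum. The strengths used below (σ_∥ = 1000 MPa, σ_⊥ = 50 MPa, τ_my = 60 MPa) are assumed, illustrative values.

```python
import math

# Minimum-of-three-modes model for FRC tensile strength vs. misalignment.
def frc_strength(theta_deg, sigma_par, sigma_perp, tau_my):
    t = math.radians(theta_deg)
    modes = []
    if math.cos(t) > 1e-9:
        modes.append(sigma_par / math.cos(t) ** 2)              # longitudinal
        if math.sin(t) > 1e-9:
            modes.append(tau_my / (math.sin(t) * math.cos(t)))  # matrix shear
    if math.sin(t) > 1e-9:
        modes.append(sigma_perp / math.sin(t) ** 2)             # transverse
    return min(modes)

for angle in (2, 30, 60, 88):
    ts = frc_strength(angle, sigma_par=1000, sigma_perp=50, tau_my=60)
    print(f"{angle:2d} deg -> TS = {ts:6.0f} MPa")
```

With these values the failure mode shifts from longitudinal fracture near 0°, to matrix shear at intermediate angles, to transverse fracture near 90°, which is the anisotropy described above.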
=== Laminar reinforcement ===
== Applications ==
Strengthening of materials is useful in many applications. A primary application of strengthened materials is for construction. In order to have stronger buildings and bridges, one must have a strong frame that can support high tensile or compressive load and resist plastic deformation. The steel frame used to make the building should be as strong as possible so that it does not bend under the entire weight of the building. Polymeric roofing materials would also need to be strong so that the roof does not cave in when there is build-up of snow on the rooftop.
Research is also currently being done to increase the strength of metallic materials through the addition of polymer materials, such as bonded carbon fiber reinforced polymer (CFRP).
== Current research ==
=== Molecular dynamics simulation assisted studies ===
The molecular dynamics (MD) method has been widely applied in materials science as it can yield information about the structure, properties, and dynamics on the atomic scale that cannot be easily resolved with experiments. The fundamental mechanism behind MD simulation is classical mechanics, in which the force exerted on a particle is given by the negative gradient of the potential energy with respect to the particle's position. Therefore, a standard procedure in MD simulation is to divide time into discrete steps and repeatedly solve the equations of motion over these intervals to update the positions and energies of the particles. Direct observation of atomic arrangements and particle energetics on the atomic scale makes MD a powerful tool to study microstructural evolution and strengthening mechanisms.
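The update loop described above can be sketched with the velocity-Verlet integrator, a standard MD scheme. The one-dimensional harmonic potential and all numeric values here are illustrative assumptions, not a production force field.

```python
# Velocity-Verlet integration of one particle in a 1-D harmonic potential
# U(x) = k x^2 / 2, so the force is F = -dU/dx = -k x.
def force(x: float, k: float = 1.0) -> float:
    return -k * x

def velocity_verlet(x, v, dt, steps, m=1.0):
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt ** 2  # advance position
        f_new = force(x)                       # force at the new position
        v += 0.5 * (f + f_new) / m * dt        # advance velocity
        f = f_new
    return x, v

# One oscillation period is 2*pi for k = m = 1, so 628 steps of dt = 0.01
# should return the particle near its starting state (1.0, 0.0).
print(velocity_verlet(x=1.0, v=0.0, dt=0.01, steps=628))
```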
=== Grain boundary strengthening ===
There have been extensive studies of different strengthening mechanisms using MD simulation. These studies reveal microstructural evolution that cannot be easily observed in experiments or predicted by simplified models. Han et al. investigated the grain boundary strengthening mechanism and the effects of grain size in nanocrystalline graphene through a series of MD simulations. Previous studies had observed inconsistent grain size dependence of the strength of graphene at the nm length scale, and the conclusions remained unclear. Therefore, Han et al. used MD simulation to directly observe the structural evolution of graphene with nanosized grains. The nanocrystalline graphene samples were generated with randomly shaped and distributed grains to simulate well-annealed polycrystalline samples. The samples were then loaded with uniaxial tensile stress, and the simulations were carried out at room temperature. By decreasing the grain size of graphene, Han et al. observed a transition from inverse pseudo Hall-Petch behavior to pseudo Hall-Petch behavior, with a critical grain size of 3.1 nm. Based on the arrangement and energetics of the simulated particles, the inverse pseudo Hall-Petch behavior can be attributed to the creation of stress concentration sites caused by the increased density of grain boundary junctions; cracks preferentially nucleate at these sites and the strength decreases. When the grain size is below the critical value, however, the stress concentration at the grain boundary junctions decreases because of stress cancellation between 5- and 7-membered ring defects. This cancellation helps graphene sustain the tensile load and exhibit pseudo Hall-Petch behavior. This study explains the previously inconsistent experimental observations and provides an in-depth understanding of the grain boundary strengthening mechanism of nanocrystalline graphene, which cannot be easily obtained from either in-situ or ex-situ experiments.
=== Precipitate strengthening ===
There are also MD studies of precipitate strengthening mechanisms. Shim et al. applied MD simulations to study the strengthening effects of nanosized body-centered-cubic (bcc) Cu precipitates in bcc Fe. As discussed in the previous section, precipitate strengthening is caused by the interaction between dislocations and precipitates, so the character of the dislocations plays an important role in the strengthening effect. A screw dislocation in bcc metals has very complicated features, including a non-planar core and twinning-anti-twinning asymmetry. This complicates the analysis and modeling of the strengthening mechanism, and it cannot be easily revealed by high-resolution electron microscopy. Thus, Shim et al. simulated coherent bcc Cu precipitates with diameters ranging from 1 to 4 nm embedded in the bcc Fe matrix. A screw dislocation was then introduced and driven to glide on a {112} plane by an increasing shear stress until it detached from the precipitates; the shear stress causing detachment is regarded as the critical resolved shear stress (CRSS). Shim et al. observed that the screw dislocation velocity in the twinning direction is 2–4 times larger than that in the anti-twinning direction. The reduced velocity in the anti-twinning direction is mainly caused by a transition in the screw dislocation glide from the kink-pair to the cross-kink mechanism. In the twinning direction, by contrast, a screw dislocation overcomes precipitates of 1–3.5 nm by shearing them. It was also observed that the screw dislocation detachment mechanism for the larger, transformed precipitates involves annihilation-and-renucleation in the twinning direction and Orowan looping in the anti-twinning direction. Fully characterizing these mechanisms experimentally would require intensive transmission electron microscopy analysis, and a comprehensive characterization is normally hard to obtain.
=== Solid solution strengthening and alloying ===
A similar study was done by Zhang et al. on the solid solution strengthening of Co, Ru, and Re at different concentrations in fcc Ni. An edge dislocation was positioned at the center of the Ni cell, with its slip system set to <110>{111}. Shear stress was then applied to the top and bottom surfaces of the Ni cell, with a solute atom (Co, Ru, or Re) embedded at the center, at 300 K. Previous studies have shown that the conventional view of size and modulus effects cannot fully explain the solid solution strengthening caused by Re in this system, due to their small values. Zhang et al. went a step further and combined first-principles DFT calculations with MD to study the influence of the stacking fault energy (SFE) on strengthening, since partial dislocations form easily in this material structure. The MD simulation results indicate that Re atoms exert a strong drag on edge dislocation motion, and the DFT calculations reveal a dramatic increase in SFE, which is due to the interaction between host atoms and solute atoms located in the slip plane. Similar relations have also been found for fcc Ni containing Ru and Co.
=== Limitation of the MD studies of strengthening mechanisms ===
These studies show great examples of how the MD method can assist studies of strengthening mechanisms and provide more insight at the atomic scale. However, it is important to note the limitations of the method.
To obtain accurate MD simulation results, it is essential to build a model that properly describes the interatomic potential based on the bonding. Interatomic potentials are approximations rather than exact descriptions of interactions, and the accuracy of the description varies significantly with the system and the complexity of the potential form. For example, if the bonding is dynamic, meaning that bonding changes depending on atomic positions, a dedicated interatomic potential is required for the MD simulation to yield accurate results. Interatomic potentials therefore need to be tailored to the bonding. The following interatomic potential models are commonly used in materials science: the Born-Mayer potential, the Morse potential, the Lennard-Jones potential, and the Mie potential. Although they give very similar results for the variation of potential energy with particle separation, there are non-negligible differences in their repulsive tails, which make each potential better suited to materials systems with particular types of chemical bonding.
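As an illustration of such pair potentials, here is a minimal sketch of the Lennard-Jones 12-6 form; the ε and σ values are rough, argon-like assumptions, not fitted parameters.

```python
# Lennard-Jones 12-6 pair potential: U(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
def lennard_jones(r_nm: float, eps_kj: float = 0.996, sigma_nm: float = 0.340) -> float:
    """Pair energy in kJ/mol at separation r_nm (nm); argon-like defaults."""
    sr6 = (sigma_nm / r_nm) ** 6
    return 4.0 * eps_kj * (sr6 * sr6 - sr6)

# The well minimum sits at r = 2**(1/6) * sigma, where U = -eps.
r_min = 2 ** (1 / 6) * 0.340
print(f"U(r_min) = {lennard_jones(r_min):.3f} kJ/mol")  # ~ -0.996
```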
In addition to the inherent errors of interatomic potentials, the number of atoms and the number of time steps in MD are limited by computational power. Nowadays it is common to simulate MD systems containing many millions of atoms, but even so, the length scale of a simulation is limited to roughly a micron. The time steps in MD are also very small, so a long simulation will only yield results on the time scale of a few nanoseconds. To further extend the simulated time scale, it is common to apply a bias potential that changes the barrier height, thereby accelerating the dynamics. This method is called hyperdynamics. Properly applied, it can typically extend simulation times to microseconds.
=== Nanostructure fabrication for material strengthening ===
Based on the strengthening mechanisms discussed above, researchers are also working on enhancing strength by purposely fabricating nanostructures in materials. Several representative methods are introduced here, including hierarchical nanotwinned structures, pushing the limit of grain size for strengthening, and dislocation engineering.
=== Hierarchical nanotwinned structures ===
As mentioned above, hindering dislocation motion imparts great strengthening to materials. Nanoscale twins – crystalline regions related by mirror symmetry – can effectively block dislocation motion due to the microstructure change at the interface. The formation of hierarchical nanotwinned structures pushes this hindrance effect to the extreme through the construction of a complex 3D nanotwinned network. The careful design of hierarchical nanotwinned structures is thus of great importance for creating materials with super strength. For instance, Yue et al. constructed a diamond composite with a hierarchically nanotwinned structure by manipulating the synthesis pressure. The obtained composite showed higher strength than typical engineering metals and ceramics.
=== Pushing the limit of grain size for strengthening ===
The Hall-Petch effect shows that the yield strength of materials increases with decreasing grain size. However, many researchers have found that nanocrystalline materials soften when the grain size decreases below a critical point; this is called the inverse Hall-Petch effect. The interpretation of this phenomenon is that extremely small grains cannot support the dislocation pileups that provide extra stress concentration in larger grains. At this point, the strengthening mechanism changes from dislocation-dominated strain hardening to softening governed by grain boundary processes such as grain rotation and grain growth. Typically, the inverse Hall-Petch effect appears at grain sizes ranging from 10 nm to 30 nm, making it hard for nanocrystalline materials to achieve high strength. To push the limit of grain size for strengthening, grain rotation and grain growth can be hindered by stabilizing the grain boundaries.
The construction of nanolaminated structures with low-angle grain boundaries is one method of obtaining ultrafine-grained materials with ultra-high strength. Lu et al. applied very-high-rate shear deformation with high strain gradients to the top surface layer of a bulk Ni sample, introducing nanolaminated structures. This material exhibits an ultra-high hardness, higher than that of any reported ultrafine-grained nickel. The exceptional strength results from the low-angle grain boundaries, whose low-energy states are efficient at enhancing structural stability.
Another method of stabilizing grain boundaries is the addition of nonmetallic impurities. Nonmetallic impurities often aggregate at grain boundaries and can affect the strength of materials by changing the grain boundary energy. Rupert et al. conducted first-principles simulations to study the impact of common nonmetallic impurities on the Σ5 (310) grain boundary energy in Cu. They claimed that a decrease in the covalent radius of the impurity and an increase in its electronegativity lead to an increase in the grain boundary energy and further strengthening of the material. For instance, boron stabilized the grain boundaries by enhancing the charge density among the adjacent Cu atoms, improving cohesion across the boundary.
=== Dislocation engineering ===
Previous studies of the impact of dislocation motion on strengthening mainly focused on high dislocation densities, which enhance strength at the cost of reduced ductility. Engineering the structure and distribution of dislocations is a promising route to comprehensively improving the performance of a material.
Solutes tend to aggregate at dislocations, which makes them promising for dislocation engineering. Kimura et al. performed atom probe tomography and observed the aggregation of niobium atoms at dislocations. The segregation energy was calculated to be almost the same as the grain boundary segregation energy. That is to say, the interaction between niobium atoms and dislocations hindered the recovery of the dislocations and thus strengthened the material.
Introducing dislocations with heterogeneous characteristics could also be utilized for material strengthening. Lu et al. introduced ordered oxygen complexes into TiZrHfNb alloy. Unlike the traditional interstitial strengthening, the introduction of the ordered oxygen complexes enhanced the strength of the alloy without the sacrifice of ductility. The mechanism was that the ordered oxygen complexes changed the dislocation motion mode from planar slip to wavy slip and promoted double cross-slip.
== See also ==
Grain boundary strengthening
Precipitation strengthening
Solid solution strengthening
Strength of materials
Tempering (metallurgy)
Work hardening
== References ==
== External links ==
Grain boundary strengthening in alumina by rare earth impurities
Mechanism of grain boundary strengthening of steels
An open source Matlab toolbox for analysis of slip transfer through grain boundaries | Wikipedia/Strengthening_mechanisms_of_materials |
The pound per square inch (abbreviation: psi) or, more accurately, pound-force per square inch (symbol: lbf/in2), is a unit of measurement of pressure or of stress based on avoirdupois units and used primarily in the United States. It is the pressure resulting from a force with magnitude of one pound-force applied to an area of one square inch. In SI units, 1 psi is approximately 6,895 pascals.
The pound per square inch absolute (psia) is used to make it clear that the pressure is relative to a vacuum rather than the ambient atmospheric pressure. Since atmospheric pressure at sea level is around 14.7 psi (101 kilopascals), this will be added to any pressure reading made in air at sea level. The converse is pound per square inch gauge (psig), indicating that the pressure is relative to atmospheric pressure. For example, a bicycle tire pumped up to 65 psig at a local sea-level atmospheric pressure of 14.7 psi will have an absolute pressure of 79.7 psia (14.7 psi + 65 psi). When gauge pressure is referenced to something other than ambient atmospheric pressure, the unit is pound per square inch differential (psid).
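The gauge-to-absolute bookkeeping in the tire example is a single addition; a minimal sketch, using the nominal 14.7 psi sea-level value quoted above:

```python
ATM_SEA_LEVEL_PSI = 14.7  # nominal sea-level atmospheric pressure

def psig_to_psia(psig: float, atm: float = ATM_SEA_LEVEL_PSI) -> float:
    """Convert gauge pressure (psig) to absolute pressure (psia)."""
    return psig + atm

print(psig_to_psia(65))  # bicycle tire example: 79.7 psia
```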
== Multiples ==
The kilopound per square inch (ksi) is a scaled unit derived from psi, equivalent to a thousand psi (1000 lbf/in2).
ksi are not widely used for gas pressures. They are mostly used in materials science, where the tensile strength of a material is measured as a large number of psi.
The conversion in SI units is 1 ksi = 6.895 MPa, or 1 MPa = 0.145 ksi.
The megapound per square inch (Mpsi) is another multiple equal to a million psi. It is used in mechanics for the elastic modulus of materials, especially for metals.
The conversion in SI units is 1 Mpsi = 6.895 GPa, or 1 GPa = 0.145 Mpsi.
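Both multiples convert with the same 6.895 factor, shifted by the SI prefix; a small sketch (the 29 Mpsi figure is the commonly quoted elastic modulus of steel, used here only as an example):

```python
KSI_TO_MPA = 6.894757   # 1 ksi in MPa
MPSI_TO_GPA = 6.894757  # 1 Mpsi in GPa

print(f"58 ksi  = {58 * KSI_TO_MPA:.0f} MPa")   # ASTM A36 tensile strength
print(f"29 Mpsi = {29 * MPSI_TO_GPA:.0f} GPa")  # elastic modulus of steel
```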
== Magnitude ==
Inch of water: 0.036 psid
Blood pressure – clinically normal human blood pressure (120/80 mmHg): 2.32 psig / 1.55 psig
Natural gas piped to residential consumer appliances: 4–6 psig
Boost pressure provided by an automotive turbocharger (common): 6–15 psig
NFL football: 12.5–13.5 psig
Atmospheric pressure at sea level (standard): 14.7 psia
Automobile tire overpressure (common): 32 psig
Bicycle tire overpressure (common): 65 psig
Workshop or garage air tools: 90 psig
Railway air brakes or road brakes reservoir overpressure (common): 90–120 psig
Road racing bicycle tire overpressure: 120 psig
Steam locomotive fire tube boiler (UK, 20th century): 150–280 psig
Union Pacific Big Boy steam locomotive boiler: 300 psig
US Navy steam boiler pressure: 800 psi
Natural gas pipelines: 800–1,000 psig
Full SCBA (self-contained breathing apparatus) for IDLH (non-fire) atmospheres: 2,216 psig
Nuclear reactor primary loop: 2300 psi
Full SCUBA (self-contained underwater breathing apparatus) tank overpressure (common): 3,000 psig
Full SCBA (self-contained breathing apparatus) for interior firefighting operations: 4,500 psig
Airbus A380 hydraulic system: 5,000 psig
Land Rover Td5 diesel engine fuel injection pressure: 22,500 psi
Ultimate tensile strength of ASTM A36 steel: 58,000 psi
Water jet cutter: 40,000–100,000 psig
== Conversions ==
The conversions to and from SI are computed from exact definitions but result in a repeating decimal.
{\displaystyle {\begin{aligned}1\,\mathrm {lbf/in^{2}} &={\frac {0.453\,592\,37\,\mathrm {kg} \times 9.806\,65\,\mathrm {m/s^{2}} }{(0.0254\,\mathrm {m} )^{2}}}={\frac {8\,896\,443\,230\,521}{1\,290\,320\,000}}\,\mathrm {Pa} \approx 6894.757\,\mathrm {Pa} \\1\,\mathrm {Pa} &={\frac {1\,290\,320\,000}{8\,896\,443\,230\,521}}\,\mathrm {lbf/in^{2}} \approx 0.000\,145\,0377\,\mathrm {lbf/in^{2}} \\1\,\mathrm {kPa} &\approx 0.145\,0377\,\mathrm {lbf/in^{2}} \end{aligned}}}
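The exact definition above carries directly into code; a minimal sketch:

```python
# Exact definition: 1 lbf/in^2 = (0.45359237 kg * 9.80665 m/s^2) / (0.0254 m)^2.
PSI_TO_PA = 0.45359237 * 9.80665 / 0.0254 ** 2  # ~6894.757 Pa

def psi_to_pa(psi: float) -> float:
    return psi * PSI_TO_PA

def pa_to_psi(pa: float) -> float:
    return pa / PSI_TO_PA

print(f"1 psi     = {psi_to_pa(1):.3f} Pa")        # 6894.757
print(f"101325 Pa = {pa_to_psi(101325):.3f} psi")  # standard atmosphere, ~14.696
```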
Approximate conversions (rounded to some arbitrary number of digits, except when denoted by "≡") are shown in the following table.
== See also ==
Conversion of units: Pressure or mechanical stress
Pressure: Units
== References ==
== External links ==
Pressure measurement primer
Online pressure conversions | Wikipedia/Pounds-force_per_square_inch |
Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.
Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and the term was coined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector, from municipal public works departments through to federal government agencies, and in the private sector, from locally based firms to Fortune Global 500 companies.
== History ==
=== Civil engineering as a discipline ===
Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental.
One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. Brahmagupta, an Indian mathematician, used arithmetic in the 7th century AD, based on Hindu-Arabic numerals, for excavation (volume) computations.
=== Civil engineering profession ===
Engineering has been an aspect of life since the beginnings of human existence. The earliest practice of civil engineering may have commenced between 4000 and 2000 BC in ancient Egypt, the Indus Valley civilization, and Mesopotamia (ancient Iraq) when humans started to abandon a nomadic existence, creating a need for the construction of shelter. During this time, transportation became increasingly important leading to the development of the wheel and sailing.
Until modern times there was no clear distinction between civil engineering and architecture, and the term engineer and architect were mainly geographical variations referring to the same occupation, and often used interchangeably. The constructions of pyramids in Egypt (c. 2700–2500 BC) constitute some of the first instances of large structure constructions in history. Other ancient historic civil engineering constructions include the Qanat water management system in modern-day Iran (the oldest is older than 3000 years and longer than 71 kilometres (44 mi)), the Parthenon by Iktinos in Ancient Greece (447–438 BC), the Appian Way by Roman engineers (c. 312 BC), the Great Wall of China by General Meng T'ien under orders from Ch'in Emperor Shih Huang Ti (c. 220 BC) and the stupas constructed in ancient Sri Lanka like the Jetavanaramaya and the extensive irrigation works in Anuradhapura. The Romans developed civil structures throughout their empire, including especially aqueducts, insulae, harbors, bridges, dams and roads.
In the 18th century, the term civil engineering was coined to incorporate all things civilian as opposed to military engineering. In 1747, the first institution for the teaching of civil engineering, the École Nationale des Ponts et Chaussées, was established in France; and more examples followed in other European countries, like Spain (Escuela Técnica Superior de Ingenieros de Caminos, Canales y Puertos). The first self-proclaimed civil engineer was John Smeaton, who constructed the Eddystone Lighthouse. In 1771 Smeaton and some of his colleagues formed the Smeatonian Society of Civil Engineers, a group of leaders of the profession who met informally over dinner. Though there was evidence of some technical meetings, it was little more than a social society.
In 1818 the Institution of Civil Engineers was founded in London, and in 1820 the eminent engineer Thomas Telford became its first president. The institution received a Royal charter in 1828, formally recognising civil engineering as a profession. Its charter defined civil engineering as:the art of directing the great sources of power in nature for the use and convenience of man, as the means of production and of traffic in states, both for external and internal trade, as applied in the construction of roads, bridges, aqueducts, canals, river navigation and docks for internal intercourse and exchange, and in the construction of ports, harbours, moles, breakwaters and lighthouses, and in the art of navigation by artificial power for the purposes of commerce, and in the construction and application of machinery, and in the drainage of cities and towns.
=== Civil engineering education ===
The first private college to teach civil engineering in the United States was Norwich University, founded in 1819 by Captain Alden Partridge. The first degree in civil engineering in the United States was awarded by Rensselaer Polytechnic Institute in 1835. The first such degree to be awarded to a woman was granted by Cornell University to Nora Stanton Blatch in 1905.
In the UK during the early 19th century, the division between civil engineering and military engineering (served by the Royal Military Academy, Woolwich), coupled with the demands of the Industrial Revolution, spawned new engineering education initiatives: the Class of Civil Engineering and Mining was founded at King's College London in 1838, mainly as a response to the growth of the railway system and the need for more qualified engineers, the private College for Civil Engineers in Putney was established in 1839, and the UK's first Chair of Engineering was established at the University of Glasgow in 1840.
== Education ==
Civil engineers typically possess an academic degree in civil engineering. The length of study is three to five years, and the completed degree is designated as a bachelor of technology, or a bachelor of engineering. The curriculum generally includes classes in physics, mathematics, project management, design and specific topics in civil engineering. After taking basic courses in most sub-disciplines of civil engineering, they move on to specialize in one or more sub-disciplines at advanced levels. While an undergraduate degree (BEng/BSc) normally provides successful students with industry-accredited qualifications, some academic institutions offer post-graduate degrees (MEng/MSc), which allow students to further specialize in their particular area of interest.
== Practicing engineers ==
In most countries, a bachelor's degree in engineering represents the first step towards professional certification, and a professional body certifies the degree program. After completing a certified degree program, the engineer must satisfy a range of requirements including work experience and exam requirements before being certified. Once certified, the engineer is designated as a professional engineer (in the United States, Canada and South Africa), a chartered engineer (in most Commonwealth countries), a chartered professional engineer (in Australia and New Zealand), or a European engineer (in most countries of the European Union). There are international agreements between relevant professional bodies to allow engineers to practice across national borders.
The benefits of certification vary depending upon location. For example, in the United States and Canada, "only a licensed professional engineer may prepare, sign and seal, and submit engineering plans and drawings to a public authority for approval, or seal engineering work for public and private clients." This requirement is enforced under provincial law such as the Engineers Act in Quebec. No such legislation has been enacted in other countries including the United Kingdom. In Australia, state licensing of engineers is limited to the state of Queensland. Almost all certifying bodies maintain a code of ethics which all members must abide by.
Engineers must obey contract law in their contractual relationships with other parties. In cases where an engineer's work fails, they may be subject to the law of tort of negligence, and in extreme cases, criminal charges. An engineer's work must also comply with numerous other rules and regulations such as building codes and environmental law.
== Sub-disciplines ==
There are a number of sub-disciplines within the broad field of civil engineering. General civil engineers work closely with surveyors and specialized civil engineers to design grading, drainage, pavement, water supply, sewer service, dams, electric and communications supply. General civil engineering is also referred to as site engineering, a branch of civil engineering that primarily focuses on converting a tract of land from one usage to another. Site engineers spend time visiting project sites, meeting with stakeholders, and preparing construction plans. Civil engineers apply the principles of geotechnical engineering, structural engineering, environmental engineering, transportation engineering and construction engineering to residential, commercial, industrial and public works projects of all sizes and levels of construction.
=== Coastal engineering ===
Coastal engineering is concerned with managing coastal areas. In some jurisdictions, the terms sea defense and coastal protection mean defense against flooding and erosion, respectively. Coastal defense is the more traditional term, but coastal management has become popular as well.
=== Construction engineering ===
Construction engineering involves planning and execution, transportation of materials, and site development based on hydraulic, environmental, structural, and geotechnical engineering. As construction firms tend to have higher business risk than other types of civil engineering firms, construction engineers often engage in more business-like transactions, such as drafting and reviewing contracts, evaluating logistical operations, and monitoring supply prices.
=== Earthquake engineering ===
Earthquake engineering involves designing structures to withstand hazardous earthquake exposures. Earthquake engineering is a sub-discipline of structural engineering. The main objectives of earthquake engineering are to understand the interaction of structures with the shaking ground, foresee the consequences of possible earthquakes, and design, construct and maintain structures that perform during an earthquake in compliance with building codes.
=== Environmental engineering ===
Environmental engineering is the contemporary term for sanitary engineering, though sanitary engineering traditionally had not included much of the hazardous waste management and environmental remediation work covered by environmental engineering. Public health engineering and environmental health engineering are other terms being used.
Environmental engineering deals with treatment of chemical, biological, or thermal wastes, purification of water and air, and remediation of contaminated sites after waste disposal or accidental contamination. Among the topics covered by environmental engineering are pollutant transport, water purification, waste water treatment, air pollution, solid waste treatment, recycling, and hazardous waste management. Environmental engineers administer pollution reduction, green engineering, and industrial ecology. Environmental engineers also compile information on environmental consequences of proposed actions.
=== Forensic engineering ===
Forensic engineering is the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury or damage to property. The consequences of failure are dealt with by the law of product liability. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. The subject is applied most commonly in civil law cases, although it may be of use in criminal law cases. Generally the purpose of a Forensic engineering investigation is to locate cause or causes of failure with a view to improve performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents.
=== Geotechnical engineering ===
Geotechnical engineering studies rock and soil supporting civil engineering systems. Knowledge from the field of soil science, materials science, mechanics, and hydraulics is applied to safely and economically design foundations, retaining walls, and other structures. Environmental efforts to protect groundwater and safely maintain landfills have spawned a new area of research called geo-environmental engineering.
Identification of soil properties presents challenges to geotechnical engineers. Boundary conditions are often well defined in other branches of civil engineering, but unlike steel or concrete, the material properties and behavior of soil are difficult to predict due to its variability and limitation on investigation. Furthermore, soil exhibits nonlinear (stress-dependent) strength, stiffness, and dilatancy (volume change associated with application of shear stress), making studying soil mechanics all the more difficult. Geotechnical engineers frequently work with professional geologists, Geological Engineering professionals and soil scientists.
=== Materials science and engineering ===
Materials science is closely related to civil engineering. It studies the fundamental characteristics of materials and deals with ceramics such as concrete and asphalt concrete mixes, strong metals such as aluminum and steel, and thermosetting polymers including polymethylmethacrylate (PMMA) and carbon fibers.
Materials engineering involves protection and prevention (paints and finishes). Alloying combines two types of metals to produce another metal with desired properties. It incorporates elements of applied physics and chemistry. With recent media attention on nanoscience and nanotechnology, materials engineering has been at the forefront of academic research. It is also an important part of forensic engineering and failure analysis.
=== Site development and planning ===
Site development, also known as site planning, is focused on the planning and development potential of a site as well as addressing possible impacts from permitting issues and environmental challenges.
=== Structural engineering ===
Structural engineering is concerned with the structural design and structural analysis of buildings, bridges, towers, flyovers (overpasses), tunnels, off shore structures like oil and gas fields in the sea, aerostructure and other structures. This involves identifying the loads which act upon a structure and the forces and stresses which arise within that structure due to those loads, and then designing the structure to successfully support and resist those loads. The loads can be self weight of the structures, other dead load, live loads, moving (wheel) load, wind load, earthquake load, load from temperature change etc. The structural engineer must design structures to be safe for their users and to successfully fulfill the function they are designed for (to be serviceable). Due to the nature of some loading conditions, sub-disciplines within structural engineering have emerged, including wind engineering and earthquake engineering.
Design considerations will include strength, stiffness, and stability of the structure when subjected to loads which may be static, such as furniture or self-weight, or dynamic, such as wind, seismic, crowd or vehicle loads, or transitory, such as temporary construction loads or impact. Other considerations include cost, constructibility, safety, aesthetics and sustainability.
=== Surveying ===
Surveying is the process by which a surveyor measures certain dimensions that occur on or near the surface of the Earth. Surveying equipment such as levels and theodolites are used for accurate measurement of angular deviation, horizontal, vertical and slope distances. With computerization, electronic distance measurement (EDM), total stations, GPS surveying and laser scanning have to a large extent supplanted traditional instruments. Data collected by survey measurement is converted into a graphical representation of the Earth's surface in the form of a map. This information is then used by civil engineers, contractors and realtors to design from, build on, and trade, respectively. Elements of a structure must be sized and positioned in relation to each other and to site boundaries and adjacent structures.
Although surveying is a distinct profession with separate qualifications and licensing arrangements, civil engineers are trained in the basics of surveying and mapping, as well as geographic information systems. Surveyors also lay out the routes of railways, tramway tracks, highways, roads, pipelines and streets as well as position other infrastructure, such as harbors, before construction.
Land surveying
In the United States, Canada, the United Kingdom and most Commonwealth countries land surveying is considered to be a separate and distinct profession. Land surveyors are not considered to be engineers, and have their own professional associations and licensing requirements. The services of a licensed land surveyor are generally required for boundary surveys (to establish the boundaries of a parcel using its legal description) and subdivision plans (a plot or map based on a survey of a parcel of land, with boundary lines drawn inside the larger parcel to indicate the creation of new boundary lines and roads), both of which are generally referred to as cadastral surveying. They collect data on important geological features below and on the land.
Construction surveying
Construction surveying is generally performed by specialized technicians. Unlike a land surveyor's plan, the resulting plan does not have legal status. Construction surveyors perform the following tasks:
Surveying existing conditions of the future work site, including topography, existing buildings and infrastructure, and underground infrastructure when possible;
"lay-out" or "setting-out": placing reference points and markers that will guide the construction of new structures such as roads or buildings;
Verifying the location of structures during construction;
As-Built surveying: a survey conducted at the end of the construction project to verify that the work authorized was completed to the specifications set on plans.
=== Transportation engineering ===
Transportation engineering is concerned with moving people and goods efficiently, safely, and in a manner conducive to a vibrant community. This involves specifying, designing, constructing, and maintaining transportation infrastructure which includes streets, canals, highways, rail systems, airports, ports, and mass transit. It includes areas such as transportation design, transportation planning, traffic engineering, some aspects of urban engineering, queueing theory, pavement engineering, Intelligent Transportation System (ITS), and infrastructure management.
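Queueing theory enters transportation engineering in problems such as delay at toll booths or ramp meters. As a rough illustration, the classic M/M/1 formulas relate assumed arrival and service rates to average queue size and waiting time:

```python
# Average delay at a single service point (e.g. a toll booth) using the
# M/M/1 queueing formulas; arrival and service rates are assumed.

def mm1_metrics(lam, mu):
    """lam: arrivals per minute, mu: services per minute (requires lam < mu)."""
    rho = lam / mu           # utilization
    n_avg = rho / (1 - rho)  # mean number of vehicles in the system
    w_avg = 1 / (mu - lam)   # mean time in the system, minutes
    return rho, n_avg, w_avg

rho, n_avg, w_avg = mm1_metrics(lam=8.0, mu=10.0)  # 8 arrivals vs 10 served per minute
print(f"utilization {rho:.0%}, {n_avg:.1f} vehicles in system, {w_avg:.2f} min in system")
```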
=== Municipal or urban engineering ===
Municipal engineering is concerned with municipal infrastructure. This involves specifying, designing, constructing, and maintaining streets, sidewalks, water supply networks, sewers, street lighting, municipal solid waste management and disposal, storage depots for various bulk materials used for maintenance and public works (salt, sand, etc.), public parks and cycling infrastructure. In the case of underground utility networks, it may also include the civil portion (conduits and access chambers) of the local distribution networks of electrical and telecommunications services. It can also include the optimization of waste collection and bus service networks. Some of these disciplines overlap with other civil engineering specialties, however municipal engineering focuses on the coordination of these infrastructure networks and services, as they are often built simultaneously, and managed by the same municipal authority. Municipal engineers may also design the site civil works for large buildings, industrial plants or campuses (i.e. access roads, parking lots, potable water supply, treatment or pretreatment of waste water, site drainage, etc.)
=== Water resources engineering ===
Water resources engineering is concerned with the collection and management of water (as a natural resource). As a discipline, it therefore combines elements of hydrology, environmental science, meteorology, conservation, and resource management. This area of civil engineering relates to the prediction and management of both the quality and the quantity of water in both underground (aquifers) and above ground (lakes, rivers, and streams) resources. Water resource engineers analyze and model very small to very large areas of the earth to predict the amount and content of water as it flows into, through, or out of a facility. However, the actual design of the facility may be left to other engineers.
Hydraulic engineering concerns the flow and conveyance of fluids, principally water. This area of civil engineering is intimately related to the design of pipelines, water supply network, drainage facilities (including bridges, dams, channels, culverts, levees, storm sewers), and canals. Hydraulic engineers design these facilities using the concepts of fluid pressure, fluid statics, fluid dynamics, and hydraulics, among others.
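One workhorse relation in hydraulic design is Manning's equation for uniform open-channel flow. The sketch below applies it to a rectangular channel; the width, depth, roughness coefficient, and slope are assumed values, not drawn from any particular project.

```python
# Discharge of a rectangular open channel via Manning's equation (SI units).
# The geometry, roughness n, and slope below are assumed values.

def manning_discharge(b, y, n, S):
    """b: width (m), y: flow depth (m), n: Manning roughness, S: channel slope."""
    A = b * y                  # flow area
    P = b + 2 * y              # wetted perimeter
    R = A / P                  # hydraulic radius
    return (1.0 / n) * A * R ** (2.0 / 3.0) * S ** 0.5   # discharge, m^3/s

Q = manning_discharge(b=3.0, y=1.2, n=0.013, S=0.002)    # e.g. a concrete-lined channel
print(f"Q = {Q:.2f} m^3/s")   # about 9.5 m^3/s for these values
```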
=== Civil engineering systems ===
Civil engineering systems is a discipline that promotes using systems thinking to manage complexity and change in civil engineering within its broader public context. It posits that the proper development of civil engineering infrastructure requires a holistic, coherent understanding of the relationships between all of the crucial factors that contribute to successful projects, while at the same time emphasizing the importance of attention to technical detail. Its purpose is to help integrate the entire civil engineering project life cycle, from conception through planning, design, construction and operation to decommissioning.
== See also ==
=== Associations ===
== References ==
== Further reading ==
Blockley, David (2014). Structural Engineering: a very short introduction. New York: Oxford University Press. ISBN 978-0-19-967193-9.
Chen, W.F.; Liew, J.Y. Richard, eds. (2002). The Civil Engineering Handbook. CRC Press. ISBN 978-0-8493-0958-8.
Muir Wood, David (2012). Civil Engineering: a very short introduction. New York: Oxford University Press. ISBN 978-0-19-957863-4.
Ricketts, Jonathan T.; Loftin, M. Kent; Merritt, Frederick S., eds. (2004). Standard handbook for civil engineers (5 ed.). McGraw Hill. ISBN 978-0-07-136473-7.
== External links ==
The Institution of Civil Engineers
Civil Engineering Software Database
The Institution of Civil Engineering Surveyors
Civil engineering classes, from MIT OpenCourseWare | Wikipedia/civil_engineering |
Wastewater treatment is a process which removes contaminants from wastewater, converting it into an effluent that can be returned to the water cycle with an acceptable impact on the environment. It is also possible to reuse the effluent; this process is called water reclamation. The treatment process takes place in a wastewater treatment plant. There are several kinds of wastewater, each treated at the appropriate type of wastewater treatment plant. For domestic wastewater (also called municipal wastewater or sewage), the treatment plant is called a sewage treatment plant. For industrial wastewater, treatment takes place in a separate industrial wastewater treatment plant, or in a sewage treatment plant; in the latter case it usually follows pre-treatment. Further types of wastewater treatment plants include agricultural wastewater treatment plants and leachate treatment plants.
One common process in wastewater treatment is phase separation, such as sedimentation; biological and chemical processes such as oxidation, and polishing, are other examples. The main by-product from wastewater treatment plants is a sludge that is usually treated in the same or another wastewater treatment plant. Biogas can be another by-product if the process uses anaerobic treatment. Treated wastewater can be reused as reclaimed water. The main purpose of wastewater treatment is for the treated wastewater to be disposed of or reused safely. However, before it is treated, the options for disposal or reuse must be considered so that the correct treatment process is applied.
The term "wastewater treatment" is often used to mean "sewage treatment".
== Types of treatment plants ==
Wastewater treatment plants may be distinguished by the type of wastewater to be treated. There are numerous processes that can be used to treat wastewater depending on the type and extent of contamination. The treatment steps include physical, chemical and biological treatment processes.
Types of wastewater treatment plants include:
Sewage treatment plants
Industrial wastewater treatment plants
Agricultural wastewater treatment plants
Leachate treatment plants
=== Sewage treatment plants ===
=== Industrial wastewater treatment plants ===
=== Agricultural wastewater treatment plants ===
=== Leachate treatment plants ===
Leachate treatment plants are used to treat leachate from landfills. Treatment options include: biological treatment, mechanical treatment by ultrafiltration, treatment with active carbon filters, electrochemical treatment including electrocoagulation by various proprietary technologies and reverse osmosis membrane filtration using disc tube module technology.
== Unit processes ==
The unit processes involved in wastewater treatment include physical processes such as settlement or flotation and biological processes such as oxidation or anaerobic treatment. Some wastewaters require specialized treatment methods. At the simplest level, treatment of most wastewaters is carried out through separation of solids from liquids, usually by sedimentation. By progressively converting dissolved material into solids, usually a biological floc or biofilm, which is then settled out or separated, an effluent stream of increasing purity is produced.
=== Phase separation ===
Phase separation transfers impurities into a non-aqueous phase. Phase separation may occur at intermediate points in a treatment sequence to remove solids generated during oxidation or polishing. Grease and oil may be recovered for fuel or saponification. Solids often require dewatering of sludge in a wastewater treatment plant. Disposal options for dried solids vary with the type and concentration of impurities removed from water.
==== Sedimentation ====
Solids such as stones, grit, and sand may be removed from wastewater by gravity when density differences are sufficient to overcome dispersion by turbulence. This is typically achieved using a grit channel designed to produce an optimum flow rate that allows grit to settle and other less-dense solids to be carried forward to the next treatment stage. Gravity separation of solids is the primary treatment of sewage, where the unit process is called "primary settling tanks" or "primary sedimentation tanks". It is also widely used for the treatment of other types of wastewater. Solids that are denser than water will accumulate at the bottom of quiescent settling basins. More complex clarifiers also have skimmers to simultaneously remove floating grease such as soap scum and solids such as feathers, wood chips, or condoms. Containers like the API oil-water separator are specifically designed to separate non-polar liquids.
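For fine particles settling in a quiescent basin, the terminal velocity can be estimated with Stokes' law, which holds only for slow, laminar settling. A minimal sketch with assumed particle and fluid properties:

```python
# Terminal settling velocity of a small particle in water via Stokes' law.
# Valid only for slow, laminar settling (Re << 1); property values are assumed.

def stokes_velocity(d, rho_p, rho_f=998.0, mu=1.0e-3, g=9.81):
    """d: particle diameter (m), rho_p/rho_f: particle/fluid densities (kg/m^3),
    mu: dynamic viscosity (Pa*s). Returns the settling velocity in m/s."""
    return g * (rho_p - rho_f) * d ** 2 / (18.0 * mu)

v = stokes_velocity(d=50e-6, rho_p=2650.0)   # a 50-micron sand grain
print(f"settling velocity ~ {v * 1000:.2f} mm/s")
```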
=== Biological and chemical processes ===
==== Oxidation ====
Oxidation reduces the biochemical oxygen demand of wastewater, and may reduce the toxicity of some impurities. Secondary treatment converts organic compounds into carbon dioxide, water, and biosolids through oxidation and reduction reactions. Chemical oxidation is widely used for disinfection.
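A treatment stage's oxidation performance is often summarized as the fraction of biochemical oxygen demand removed. A minimal sketch; the influent and effluent concentrations below are invented:

```python
# Percentage of biochemical oxygen demand (BOD) removed across a treatment
# stage; the influent and effluent concentrations are invented.

def removal_efficiency(c_in, c_out):
    """c_in, c_out: BOD concentrations in mg/L. Returns percent removed."""
    return 100.0 * (c_in - c_out) / c_in

print(f"{removal_efficiency(c_in=250.0, c_out=20.0):.0f}% BOD removed")  # -> 92%
```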
===== Biochemical oxidation (secondary treatment) =====
===== Chemical oxidation =====
Advanced oxidation processes are used to remove some persistent organic pollutants and residual concentrations remaining after biochemical oxidation. Disinfection by chemical oxidation kills bacteria and microbial pathogens by adding oxidants such as ozone, chlorine or hypochlorite to wastewater. These oxidants then break down complex compounds in the organic pollutants into simple compounds such as water, carbon dioxide, and salts.
==== Anaerobic treatment ====
Anaerobic wastewater treatment processes, such as the upflow anaerobic sludge blanket (UASB) and expanded granular sludge bed (EGSB) reactors, are also widely applied in the treatment of industrial wastewaters and biological sludge.
=== Polishing ===
Polishing refers to treatments made in further advanced treatment steps after the above methods (also called "fourth stage" treatment). These treatments may also be used independently for some industrial wastewater. Chemical reduction or pH adjustment minimizes the chemical reactivity of wastewater following chemical oxidation. Carbon filtering removes remaining contaminants and impurities by chemical adsorption onto activated carbon. Filtration through sand or fabric filters is the most common method used in municipal wastewater treatment.
== See also ==
List of largest wastewater treatment plants
List of wastewater treatment technologies
Water treatment
== References ==
== External links ==
Media related to Wastewater treatment at Wikimedia Commons | Wikipedia/Waste_water_treatment |
A science park (also called a "university research park", "technology park", "technopark", "technopolis", "technopole", or a "science and technology park" [STP]) is a property-based development that accommodates and fosters the growth of tenant firms and that is affiliated with a university (or government and private research bodies) based on proximity, ownership, and/or governance. This is so that knowledge can be shared, innovation promoted, technology transferred, and research outcomes progressed to viable commercial products. Science parks are also often perceived as contributing to national economic development, stimulating the formation of new high-technology firms, attracting foreign investment and promoting exports.
== Background ==
The world's first university research park, Stanford Research Park was launched in 1951 as a cooperative venture between Stanford University and the City of Palo Alto. Another early university research park was Research Triangle Park in North Carolina, which was launched in 1959. In 1969, Pierre Laffitte founded the Sophia Antipolis Science Park in France. Laffitte had travelled widely and developed a theory of "cross-fertilisation" where individuals could benefit mutually by the exchange of thoughts in many fields including culture, science and the arts.
Science parks are elements of the infrastructure of the global "knowledge economy". They provide locations that foster innovation and the development and commercialization of technology, and where governments, universities and private companies may collaborate. Their tenants work in fields such as information technology, pharmaceuticals, science and engineering. Science parks may also offer a number of shared resources, such as incubators, programs and collaboration activities, uninterruptible power supply, telecommunications hubs, reception and security, management offices, bank offices, convention centers, parking, and internal transportation.
Science parks also aim to bring together people who assist the developers of technology to bring their work to commercial fruition, for example, experts in intellectual property law. They can be attractive to university students who may interact with prospective employers and encourage students to remain in the local area.
Science parks may be designed to enhance the quality of life of the workers. For example, they might be built with sports facilities, restaurants, crèches or pleasant outdoor areas. Apart from tenants, science parks create jobs for the local community.
Science parks are specific locations and differ from the wider area high-technology business districts in that they are more organized, planned, and managed. They differ from science centres in that they lead to commercialized products from research. They differ from industrial parks which focus on manufacturing and from business parks which focus on business office locations.
Science parks are found worldwide. They are most common in developed countries. In North America there are over 170 science parks. For example, in the 1980s, North Carolina State University, Raleigh lacked space. New possible sites included the state mental-health property and the Diocese of Raleigh property on 1,000 acres (4.0 km2) surrounding the Lake Raleigh Reservoir. The university's Centennial Campus was developed. Sandia Science and Technology Park, NASA Research Park at Ames and the East Tennessee Technology Park at Oak Ridge National Laboratory are examples of research parks that have been developed by or adjacent to US Federal government laboratories.
Science and technology park (STP) activity across the European Union has approximately doubled over the last 11–12 years, driven by the growth of the longer standing parks and the emergence of new parks. There are now an estimated 366 STPs in the EU member states that manage about 28 million m2 of completed building floor space, hosting circa 40,000 organisations that employ approximately 750,000 people, mostly in high value added jobs. In the period from 2000 – 2012, total capital investment into EU STPs was circa €11.7 billion (central estimate). During the same period, STPs spent circa €3 billion on the professional business support and innovation services they either deliver or finance to assist both their tenants and other similar knowledge based businesses in their locality.
Increasingly, the reasons why STPs are sound investments for public sector support are becoming better understood and articulated. The evidence base shows that better STPs are not simply the landlords of attractive and well specified office style buildings. Rather, they are complex organisations, often with multiple owners having objectives aligned with important elements of economic development public policy as well as an imperative to be financially self-sustaining in the longer term.
== Definitions ==
The Association of University Research Parks (AURP), is a non-profit association consisting of university-affiliated science parks, almost entirely based in North America. It defines "university research and science parks" as "property-based ventures with certain characteristics, including master planned property and buildings designed primarily for private/public research and development facilities, high technology and science based companies and support services; contractual, formal or operational relationships with one or more science or research institutions of higher education; roles in promoting the university's research and development through industry partnerships, assisting in the growth of new ventures and promoting economic development; roles in aiding the transfer of technology and business skills between university and industry teams and roles in promoting technology-led economic development for the community or region."
The International Association of Science Parks and Areas of Innovation (IASP), the worldwide network of science parks and areas of innovation, defines a science park as "an organisation managed by specialised professionals, whose main aim is to increase the wealth of its community by promoting the culture of innovation and the competitiveness of its associated businesses and knowledge-based institutions.
To enable these goals to be met, a Science Park stimulates and manages the flow of knowledge and technology amongst universities, R&D institutions, companies and markets; it facilitates the creation and growth of innovation-based companies through incubation and spin-off processes; and provides other value-added services together with high quality space and facilities."
The Cabral-Dahab Science Park Management Paradigm was first presented by Regis Cabral in ten points in 1990. According to this management paradigm, a science park must: "have access to qualified research and development personnel in the areas of knowledge in which the park has its identity; be able to market its high valued products and services; have the capability to provide marketing expertise and managerial skills to firms, particularly small and medium-sized enterprises, lacking such a resource; be inserted in a society that allows for the protection of product or process secrets, via patents, security or any other means; be able to select or reject which firms enter the park". A science park should: "have a clear identity, quite often expressed symbolically, as the park's name choice, its logo or the management discourse; have a management with established or recognized expertise in financial matters, and which has presented long-term economic development plans; have the backing of powerful, dynamic and stable economic actors, such as a funding agency, political institution or local university; include in its management an active person of vision, with the power of decision and with the high and visible profile, who is perceived by relevant actors in society as embodying the interface between academia and industry, long-term plans and good management; and include a prominent percentage of consultancy firms, as well as technical service firms, including laboratories and quality control firms".
The World Intellectual Property Organization defines science and technology parks as territories usually affiliated with a university or a research institution, which accommodate and foster the growth of companies based therein through technology transfer and open innovation.
== List of science parks ==
Some science parks include:
== See also ==
Science Parks For School
Business cluster
Business incubator
Cluster development
Mega-Site
== References ==
== Sources ==
Battelle Technology Partnership Practice and Association of University Research Parks (2007) Characteristics and Trends in North American Research Parks. 21st Century Directions [1].
Cabral R. and Dahab S. S. (1993) "Science parks in developing countries: the case of BIORIO in Brazil" in Biotechnology Review, vol 1, p 165 - 178.
Cabral R. (1998) "Refining the Cabral-Dahab Science Park Management Paradigm" in Int. J. Technology Management vol 16 p 813 - 818.
Cabral R. (ed.) (2003) The Cabral-Dahab Science Park Management Paradigm in Asia-Pacific, Europe and the Americas Uminova Centre, Umeå, Sweden.
Echols A. E. and Meredith J. W. (1998) "A case study of the Virginia Tech Corporation Research Centre in the context of the Cabral-Dahab Paradigm, with comparison to other US research parks" in Int. J. Technology Management vol 16 p 761 - 777.
Flaghouse (2018) https://estateintel.com/development-flaghouse-abuja-technology-village-abuja/ retrieved 20/6/19.
Gregory, C. and Zoneveld, J. (2015) ULI Netherlands: Greg Clark discusses technology, real estate and the innovation economy [2].
Heilbron J. (ed.) and Cabral R. (2003) "Development, Science" in The Oxford Companion to The History of Modern Science Oxford University Press, New York, p 205 - 207.
National Research Council. (2009) Understanding Research, Science and Technology Parks: Global Best Practices: Report of a Symposium Washington, DC: The National Academies Press. [3].
Morisson A. (August 2005) Economic zones in the ASEAN. Industrial Parks, Special Economic Zones, Eco-Industrial Parks, Innovation Districts as Strategies for Industrial Competitiveness [4], UNIDO Country Office in Vietnam.
University Economic Development Association. (2019) Higher Education Engagement in Economic Development: Foundations for Strategy and Practice [5]
== External links ==
Ankidyne Science Park
International Association of Science Parks
Association of University Research Parks
UK Science Park Association
Cabral Dahab Science Park Management Paradigm
| Wikipedia/Science_park |
The rational planning model is a model of the planning process involving a number of rational actions or steps. Taylor (1998) outlines five steps, as follows:
Definition of the problems and/or goals;
Identification of alternative plans/policies;
Evaluation of alternative plans/policies;
Implementation of plans/policies;
Monitoring of effects of plans/policies.
The rational planning model is used in planning and designing neighborhoods, cities, and regions. It has been central in the development of modern urban planning and transportation planning. The model has many limitations, particularly the lack of guidance on involving stakeholders and the community affected by planning, and other models of planning, such as collaborative planning, are now also widely used.
The very similar rational decision-making model, as it is called in organizational behavior, is a multi-step process for making logically sound decisions that aims to follow an orderly path from problem identification through to solution.
== Method ==
Rational decision-making or planning follows a series of steps detailed below:
=== Verify, define, detail the problem, give solution or alternative to the problem ===
Verifying, defining and detailing the problem (problem definition, goal definition, information gathering). This step includes recognizing the problem, defining an initial solution, and starting primary analysis. Examples of this are creative devising, creative ideas, inspirations, breakthroughs, and brainstorms.
The very first step, which is often overlooked by top-level management, is defining the exact problem. Though problem identification may seem obvious, many times it is not. Framing is an essential part of defining the problem situation: with correct framing, the situation is identified and any previous experience with the same kind of situation can be drawn upon. The rational decision-making model is often a group-based process, and if the problem is not identified properly, each member of the group might work from a different definition of the problem.
=== Generate all possible solutions ===
This step yields two to three final solutions to the problem and a preliminary implementation at the site. In planning, examples of this are Planned Unit Developments and downtown revitalizations.
This activity is best done in groups, as different people may contribute different ideas or alternative solutions to the problem. Without alternative solutions, there is a chance of arriving at a non-optimal or irrational decision. Exploring the alternatives requires gathering information, and technology may help with this.
=== Generate objective assessment criteria ===
Evaluative criteria are measurements to determine the success or failure of alternatives. This step contains secondary and final analysis along with secondary solutions to the problem. Examples of this are site suitability and site sensitivity analysis. After thoroughly defining the problem, exploring all possible alternatives for that problem, and gathering information, this step calls for evaluating the information and the possible options to anticipate the consequences of each possible alternative. At this point, the criteria for measuring the success or failure of the decision need to be considered.
The rational model of planning rests largely on objective assessment.
=== Choose the best solution generated ===
This step comprises a final solution and secondary implementation to the site. At this point the process has developed into different strategies of how to apply the solutions to the site.
Based on the criteria of assessment and the analysis done in previous steps, choose the best solution generated. These four steps form the core of the Rational Decision Making Model.
=== Implement the preferred alternative ===
This step includes final implementation at the site and preliminary monitoring of the outcomes and results of the site. This step is the building/renovation part of the process.
=== Monitor and evaluate outcomes and results ===
=== Feedback ===
Modify future decisions and actions taken based on the above evaluation of outcomes.
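The core of the model (steps 1 through 4) can be sketched as a weighted-criteria scoring exercise. This is only an illustration of the model's logic; the alternatives, criteria, weights, and scores below are invented:

```python
# Weighted-criteria scoring sketch of the rational decision-making model.
# The alternatives, criteria, weights, and scores are invented.

weights = {"cost": 0.40, "feasibility": 0.35, "community_impact": 0.25}  # assessment criteria

alternatives = {                       # generated solutions, scored 0-10 per criterion
    "plan_a": {"cost": 7, "feasibility": 8, "community_impact": 6},
    "plan_b": {"cost": 9, "feasibility": 5, "community_impact": 6},
    "plan_c": {"cost": 7, "feasibility": 7, "community_impact": 9},
}

def score(alt):
    """Aggregate an alternative's scores using the criterion weights."""
    return sum(weights[c] * alt[c] for c in weights)

for name, alt in alternatives.items():
    print(f"{name}: {score(alt):.2f}")

best = max(alternatives, key=lambda name: score(alternatives[name]))  # choose the best
print(f"choose: {best}")
```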
== Discourse of rational planning model used in policy making ==
The rational model of decision-making is a process for making sound decisions in policy making in the public sector. Rationality is defined as “a style of behavior that is appropriate to the achievement of given goals, within the limits imposed by given conditions and constraints”. It is important to note that the model makes a series of assumptions in order for it to work, such as:
The model must be applied in a system that is stable,
The government is a rational and unitary actor and that its actions are perceived as rational choices,
The policy problem is unambiguous,
There are no limitations of time or cost.
Indeed, some of the assumptions identified above are also pinpointed in a study written by the historian H.A. Drake, as he states:
In its purest form, the Rational Actor approach presumes that such a figure [as Constantine] has complete freedom of action to achieve goals that he or she has articulated through a careful process of rational analysis involving full and objective study of all pertinent information and alternatives. At the same time, it presumes that this central actor is so fully in control of the apparatus of government that a decision once made is as good as implemented. There are no staffs on which to rely, no constituencies to placate, no generals or governors to cajole. By attributing all decision making to one central figure who is always fully in control and who acts only after carefully weighing all options, the Rational Actor method allows scholars to filter out extraneous details and focus attention on central issues.
Furthermore, as we have seen, in the context of policy, rational models are intended to achieve maximum social gain. For this purpose, Simon outlines a step-by-step mode of analysis to achieve rational decisions. Ian Thomas describes Simon's steps as follows:
Intelligence gathering— data and potential problems and opportunities are identified, collected and analyzed.
Identifying problems
Assessing the consequences of all options
Relating consequences to values— with all decisions and policies there will be a set of values which will be more relevant (for example, economic feasibility and environmental protection) and which can be expressed as a set of criteria, against which performance (or consequences) of each option can be judged.
Choosing the preferred option— given the full understanding of all the problems and opportunities, all the consequences and the criteria for judging options.
Along similar lines, in their study ‘Regulating biotechnology: a rational-political model of policy development’, Wiktorowicz and Deber describe the rational approach to policy development. The main steps involved in making a rational decision for these authors are the following:
The comprehensive organization and analysis of the information
The potential consequences of each option
The probability that each potential outcome would materialize
The value (or utility) placed on each potential outcome.
The approach of Wiktorowicz and Deber is similar to Simon and they assert that the rational model tends to deal with “the facts” (data, probabilities) in steps 1 to 3, leaving the issue of assessing values to the final step. According to Wiktorowicz and Deber values are introduced in the final step of the rational model, where the utility of each policy option is assessed.
Many authors have attempted to interpret the above-mentioned steps, amongst others Patton and Sawicki, who summarize the model (originally presented as a figure) in the following steps:
Defining the problem by analyzing the data and the information gathered.
Identifying the decision criteria that will be important in solving the problem. The decision maker must determine the relevant factors to take into account when making the decision.
A brief list of the possible alternatives must be generated; these could succeed to resolve the problem.
A critical analysis and evaluation of each alternative against the criteria is carried out. For example, strength and weakness tables of each alternative are drawn up and used as a basis for comparison. The decision maker then weights the previously identified criteria in order to give the alternative policies a correct priority in the decision.
The decision-maker evaluates each alternative against the criteria and selects the preferred alternative.
The policy is implemented.
The model of rational decision-making has also proven to be very useful in several decision-making processes in industries outside the public sphere. Nonetheless, many criticisms of the model arise from claims that it is impractical and rests on unrealistic assumptions. For instance, it is a difficult model to apply in the public sector because social problems can be very complex, ill-defined and interdependent. The problem lies in the thinking procedure implied by the model, which is linear and can face difficulties with extraordinary problems or social problems which have no linear sequence of events. This latter argument can be best illustrated by the words of Thomas R. Dye, the president of the Lincoln Center for Public Service, who wrote in his book `Understanding Public Policy´ the following passage:
There is no better illustration of the dilemmas of rational policy making in America than in the field of health…the first obstacle to rationalism is defining the problem. Is our goal to have good health — that is, whether we live at all (infant mortality), how well we live (days lost to sickness), and how long we live (life spans and adult mortality)? Or is our goal to have good medical care — frequent visits to the doctor, well-equipped and accessible hospitals, and equal access to medical care by rich and poor alike?
The problems faced when using the rational model arise in practice because social and environmental values can be difficult to quantify and forge consensus around. Furthermore, the assumptions stated by Simon are never fully valid in a real world context.
However, as Thomas states the rational model provides a good perspective since in modern society rationality plays a central role and everything that is rational tends to be prized. Thus, it does not seem strange that “we ought to be trying for rational decision-making”.
=== Decision criteria for policy analysis — Step 2 ===
As illustrated in Figure 1, rational policy analysis can be broken into 6 distinct stages of analysis. Step 2 highlights the need to understand which factors should be considered as part of the decision making process. At this part of the process, all the economic, social, and environmental factors that are important to the policy decision need to be identified and then expressed as policy decision criteria. For example, the decision criteria used in the analysis of environmental policy is often a mix of —
Ecological impacts — such as biodiversity, water quality, air quality, habitat quality, species population, etc.
Economic efficiency — commonly expressed as benefits and costs.
Distributional equity — how policy impacts are distributed amongst different demographics. Factors that can affect the distribution of impacts include location, ethnicity, income, and occupation.
Social/Cultural acceptability — the extent to which the policy action may be opposed by current social norms or cultural values.
Operational practicality — the capacity required to actually operationalize the policy.
Legality — the potential for the policy to be implemented under current legislation versus the need to pass new legislation that accommodates the policy.
Uncertainty — the degree to which the level of policy impacts can be known.
Some criteria, such as economic benefit, will be more easily measurable or definable, while others such as environmental quality will be harder to measure or express quantitatively. Ultimately though, the set of decision criteria needs to embody all of the policy goals, and overemphasising the more easily definable or measurable criteria, will have the undesirable impact of biasing the analysis towards a subset of the policy goals.
The process of identifying a suitably comprehensive decision criteria set is also vulnerable to being skewed by pressures arising at the political interface. For example, decision makers may tend to give "more weight to policy impacts that are concentrated, tangible, certain, and immediate than to impacts that are diffuse, intangible, uncertain, and delayed." For example, with a cap-and-trade system for carbon emissions the net financial cost in the first five years of policy implementation is a far easier impact to conceptualise than the more diffuse and uncertain impact of a country's improved position to influence global negotiations on climate change action.
=== Decision methods for policy analysis — Step 5 ===
Displaying the impacts of policy alternatives can be done using a policy analysis matrix (PAM) such as that shown in Table 1. As shown, a PAM provides a summary of the policy impacts for the various alternatives, and examination of the matrix can reveal the tradeoffs associated with the different alternatives.
Table 1. Policy analysis matrix (PAM) for SO2 emissions control.
Once policy alternatives have been evaluated, the next step is to decide which policy alternative should be implemented. This is shown as step 5 in Figure 1. At one extreme, comparing the policy alternatives can be relatively simple if all the policy goals can be measured using a single metric and given equal weighting. In this case, the decision method is an exercise in benefit cost analysis (BCA).
At the other extreme, the numerous goals will require the policy impacts to be expressed using a variety of metrics that are not readily comparable. In such cases, the policy analyst may draw on the concept of utility to aggregate the various goals into a single score. With the utility concept, each impact is given a weighting such that 1 unit of each weighted impact is considered to be equally valuable (or desirable) with regards to the collective well-being.
Weimer and Vining also suggest that the "go, no go" rule can be a useful method for deciding amongst policy alternatives. Under this decision-making regime, some or all policy impacts can be assigned thresholds which are used to eliminate at least some of the policy alternatives. In their example, one criterion "is to minimize SO2 emissions" and so a threshold might be a reduction in SO2 emissions "of at least 8.0 million tons per year". As such, any policy alternative that does not meet this threshold can be removed from consideration. If only a single policy alternative satisfies all the impact thresholds then it is the one that is considered a "go" for each impact. Otherwise it might be that all but a few policy alternatives are eliminated and those that remain need to be more closely examined in terms of their trade-offs so that a decision can be made.
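The "go, no go" rule is straightforward to express concretely: each impact is assigned a threshold, and any alternative failing one is eliminated before trade-offs are weighed. A minimal sketch with invented alternatives and thresholds (the SO2 figure echoes the example above; everything else is made up):

```python
# "Go, no go" screening: eliminate any policy alternative that fails an
# impact threshold. Alternatives and figures are invented for illustration.

alternatives = {
    # impacts: SO2 reduction (million tons/yr) and cost (billion currency units)
    "policy_a": {"so2_reduction": 9.1, "cost": 4.2},
    "policy_b": {"so2_reduction": 7.4, "cost": 2.1},   # fails the SO2 threshold
    "policy_c": {"so2_reduction": 8.3, "cost": 3.0},
}

thresholds = {
    "so2_reduction": lambda v: v >= 8.0,   # "at least 8.0 million tons per year"
    "cost": lambda v: v <= 5.0,            # invented budget ceiling
}

survivors = [
    name for name, impacts in alternatives.items()
    if all(check(impacts[k]) for k, check in thresholds.items())
]
print("still in consideration:", survivors)   # -> ['policy_a', 'policy_c']
```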
=== Case study of rational policy analysis ===
To demonstrate the rational analysis process as described above, consider the policy paper “Stimulating the use of biofuels in the European Union: Implications for climate change policy” by Lisa Ryan, in which the substitution of fossil fuels with biofuels was proposed in the European Union (EU) between 2005 and 2010 as part of a strategy to mitigate greenhouse gas emissions from road transport, increase security of energy supply and support the development of rural communities.
Considering the steps of Patton and Sawicki model as in Figure 1 above, this paper only follows components 1 to 5 of the rationalist policy analysis model:
Defining The Problem – the report identifies that transportation fuels pose two important challenges for the European Union (EU). First, under the provisions of the Kyoto Protocol to the Climate Change Convention, the EU has agreed to an absolute cap on greenhouse gas emissions, while at the same time increased consumption of transportation fuels has resulted in a trend of increasing greenhouse gas emissions from this source. Second, the dependence upon oil imports from the politically volatile Middle East generates concern over price fluctuations and possible interruptions in supply. Alternative fuel sources therefore need to be substituted for fossil fuels to mitigate GHG emissions in the EU.
Determine the Evaluation Criteria – this policy sets environmental impacts/benefits (reduction of GHGs as a means of reducing climate change effects) and economic efficiency (the costs of converting to biofuels as an alternative to fossil fuels and the costs of producing biofuels from the different potential sources) as its decision criteria. However, the paper does not address the social impacts this policy may have, nor does it compare the operational challenges involved between the different categories of biofuels considered.
Identifying Alternative Policies – The European Commission foresees that three alternative transport fuels: hydrogen, natural gas, and biofuels, will replace transport fossil fuels, each by 5% by 2020.
Evaluating Alternative Policies – Biofuels are an alternative motor vehicle fuel produced from biological material and are promoted as a transitional step until more advanced technologies have matured. By modelling the efficiency of the biofuel options the authors compute the economic and environmental costs of each biofuel option as per the evaluation criteria mentioned above.
Select The Preferred Policy – After comparing the economic and environmental costs, the authors suggest that the overall best biofuel comes from sugarcane in Brazil. The current cost of subsidising the price difference between European biofuels and fossil fuels per tonne of CO2 emissions saved is calculated to be €229–2000. If the production of European biofuels for transport is to be encouraged, exemption from excise duties is the instrument that incurs the least transaction costs, as no separate administrative or collection system needs to be established. A number of entrepreneurs are already producing biofuels profitably at the lower margin of the costs specified here, once an excise duty rebate is given. It is likely that growth in the volume of the business will engender both economies of scale and innovation that will reduce costs substantially.
== Requirements and limitations ==
There are, however, many assumptions and requirements without which the rational decision model fails; all of them therefore have to be considered.
The model assumes that the decision maker has, or can obtain, adequate information, in terms of quality, quantity and accuracy. This applies to the situation as well as to the alternatives. It further assumes substantive knowledge of the cause-and-effect relationships relevant to the evaluation of the alternatives; in other words, a thorough knowledge of all the alternatives and of the consequences of each. Finally, it assumes that the decision maker can rank the alternatives and choose the best of them.
The following are the limitations for the Rational Decision Making Model:
requires a great deal of time
requires a great deal of information
assumes rational, measurable criteria are available and agreed upon
assumes accurate, stable and complete knowledge of all the alternatives, preferences, goals and consequences
assumes a rational, reasonable, non-political world
== Current status ==
While the rational planning model was innovative at its conception, its concepts are controversial and its processes questioned today, and the model has fallen out of mass use over the last decade. Rather than conceptualising human agents as rational planners, Lucy Suchman argues, agents can better be understood as engaging in situated action. Going further, Guy Benveniste argued that the rational model could not be implemented without taking the political context into account.
== See also ==
Rationality and power
== Sources ==
See working paper #2 at http://ewp.uoregon.edu/publications/working
== References == | Wikipedia/Rational_planning_model |
Crime statistics refer to systematic, quantitative results about crime, as opposed to crime news or anecdotes. Notably, crime statistics can be the result of two rather different processes:
scientific research, such as criminological studies, victimisation surveys;
official figures, such as published by the police, prosecution, courts, and prisons.
However, in their research, criminologists often draw on official figures as well.
== Methods ==
There are several methods for measuring crime. Public surveys are occasionally conducted to estimate the amount of crime that has not been reported to police. Such surveys are usually more reliable for assessing trends. However, they also have their limitations: they generally do not produce statistics useful for local crime prevention, often ignore offenses against children, and do not count offenders brought before the criminal justice system.
Law enforcement agencies in some countries offer compilations of statistics for various types of crime.
Two major methods for collecting crime data are law enforcement reports, which only reflect crimes that are reported, recorded, and not subsequently cancelled, and victim studies (victimization surveys), which rely on individual memory and honesty. For less frequent crimes such as intentional homicide and armed robbery, reported incidents are generally more reliable, but suffer from under-recording; for example, "no-criming" in the United Kingdom results in over one third of reported violent crimes not being recorded by the police. Because laws and practices vary between jurisdictions, comparing crime statistics between and even within countries can be difficult: typically only violent deaths (homicide or manslaughter) can reliably be compared, due to consistent and high reporting and relatively clear definitions.
The U.S. has two major data collection programs, the Uniform Crime Reports from the FBI and the National Crime Victimization Survey from the Bureau of Justice Statistics. However, the U.S. has no comprehensive infrastructure to monitor crime trends and report the information to related parties such as law enforcement.
Research using a series of victim surveys in 18 countries of the European Union, funded by the European Commission, has reported (2005) that the level of crime in Europe has fallen back to the levels of 1990, and notes that levels of common crime have shown declining trends in the U.S., Canada, Australia and other industrialized countries as well. The European researchers say a general consensus identifies demographic change as the leading cause for this international trend. Although homicide and robbery rates rose in the U.S. in the 1980s, by the end of the century they had declined by 40%.
However, the European research suggests that "increased use of crime prevention measures may indeed be the common factor behind the near universal decrease in overall levels of crime in the Western world", since decreases have been most pronounced in property crime and less so, if at all, in contact crimes.
== Counting rules ==
Relatively few standards exist and none that permit international comparability beyond a very limited range of offences. However, many jurisdictions accept the following:
There must be a prima facie case that an offence has been committed before it is recorded; that is, either police find evidence of an offence or receive a believable allegation of an offence being committed. Some jurisdictions count offending only when certain processes happen, such as when an arrest is made, a ticket is issued, charges are laid in court, or only upon securing a conviction.
Multiple reports of the same offence usually count as one offence. Some jurisdictions count each report separately, others count each victim of offending separately.
Where several offences are committed at the same time, in one act of offending, only the most serious offence is counted. Some jurisdictions record and count each and every offence separately; others count cases, or offenders, that can be prosecuted.
Where multiple offenders are involved in the same act of offending, only one act is counted when counting offences, but each offender is counted when apprehended.
Offending is counted at the time it comes to the attention of a law enforcement officer. Some jurisdictions record and count offending at the time it occurs.
As "only causing pain" is counted as assault in some countries, it let higher assault rates except in Austria, Finland, Germany, the Netherlands, Portugal and Sweden. But there are exceptions, like Czech Republic and Latvia. France was the contrasting exception having a high assault ratio without counting minor assaults.
Offending that is a breach of the law but for which no punishment exists is often not counted. For example: Suicide, which is technically illegal in most countries, may not be counted as a crime, although attempted suicide and assisting suicide are.
Traffic offences and other minor offences that might be dealt with by fines rather than imprisonment are often not counted as crime, although separate statistics may be kept for this sort of offending.
== Surveys ==
Because of the difficulties in quantifying how much crime actually occurs, researchers generally take two approaches to gathering statistics about crime: official law enforcement records and victimisation surveys.
However, officers can only record crime that comes to their attention, and might not record a matter as a crime if it is considered minor or is not perceived as a crime by the officer concerned.
For example, when faced with a domestic violence dispute between a couple, a law enforcement officer may decide it is far less trouble to arrest the male party to the dispute, because the female may have children to care for, despite both parties being equally culpable for the dispute. This sort of pragmatic decision-making shapes what enters official records. In victimisation surveys, by contrast, participants are asked if they have been victims of crime, without needing to provide any supporting evidence. In these surveys it is the participant's perception, or opinion, that a crime occurred, or even their understanding about what constitutes a crime, that is being measured.
As a consequence, differing methodologies may make comparisons with other surveys difficult.
Such surveys show that some types of crime are well reported, while other types of crime are under-reported. These surveys also give insights as to why crime is reported, or not. They show that the need to make an insurance claim, the need to seek medical assistance, and the seriousness of an offence tend to increase the level of reporting, while the inconvenience of reporting, the involvement of intimate partners and the nature of the offending tend to decrease it.
This allows degrees of confidence to be assigned to various crime statistics. For example: Motor vehicle thefts are generally well reported because the victim may need to make the report for an insurance claim, while domestic violence, domestic child abuse and sexual offences are frequently significantly under-reported because of the intimate relationships involved, embarrassment and other factors that make it difficult for the victim to make a report.
Attempts to use victimisation surveys from different countries for international comparison had failed in the past. A standardised survey project called the International Crime Victims Survey (ICVS) was therefore set up to allow comparison across countries; results from this project have been briefly discussed earlier in this article. In 2019, the Global Organized Crime Index found that DRC had the highest rate of criminality.
Annual estimates of crimes committed in the United States range from eleven to thirty million, as many acts go unreported.
An estimated hundred million Americans have a criminal record.
== Classification ==
While most jurisdictions could probably agree about what constitutes a murder, what constitutes a homicide may be more problematic, and what constitutes a crime against the person could vary widely. Differences in legislation often mean that the ingredients of offences vary between jurisdictions.
The International Crime Victims Survey has been conducted in over 70 countries to date and has become a de facto standard for defining common crimes. A complete list of participating countries and the 11 defined crimes can be found at the project web site.
In March 2015 the UNODC published the first version of the "International Classification of Crime for Statistical Purposes" (ICCS). According to the UNODC, there have been more than three million homicides in Africa since the year 2000.
== Measures ==
Beyond simple counts of offences, more complex measures involve counting the numbers of discrete victims and offenders, as well as repeat victimisation rates and recidivism. Repeat victimisation involves measuring how often the same victim is subjected to a repeat occurrence of an offence, often by the same offender. Repetition rate measures are often used to assess the effectiveness of interventions.
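Two of the simplest derived measures are a rate per 100,000 population, which makes jurisdictions of different sizes comparable, and a repeat-victimisation rate. A minimal sketch with invented figures:

```python
# Crime rate per 100,000 population and a repeat-victimisation rate.
# All figures are invented for illustration.

def rate_per_100k(offences, population):
    return offences / population * 100_000

incidents_per_victim = {"v1": 1, "v2": 3, "v3": 1, "v4": 2}   # victim -> incidents

offences = sum(incidents_per_victim.values())                  # 7 offences in total
repeat_victims = sum(1 for n in incidents_per_victim.values() if n > 1)
repeat_rate = repeat_victims / len(incidents_per_victim)       # share victimised 2+ times

print(f"rate: {rate_per_100k(offences, population=52_000):.1f} per 100,000")
print(f"repeat victimisation: {repeat_rate:.0%} of victims")
```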
== List of crime statistics ==
Assault#Statistics
Bribery#Statistics
Burglary#Statistics
Domestic violence#By country
Estimates of sexual violence#By country
Fraud#Statistics
Gang population
Global Organized Crime Index
Intimate partner sexual violence#Incidence by country
Kidnapping#Statistics
List of countries by intentional homicide rate
Homicide statistics by gender
List of countries by incarceration rate
List of countries with annual rates and counts for killings by law enforcement officers
Money laundering#Statistics
Moral statistics
Motor vehicle theft#Statistics
Rape statistics
Robbery#Robbery statistics
Sexual assault#By country
Theft#Statistics
=== By country ===
Crime in the United States
United States cities by crime rate
Crime statistics in the United Kingdom
== See also ==
Capital punishment by country
Clearance rate
Crime drop
Crime science
Dark figure of crime
Immigration and crime
List of national legal systems
Questionnaire
Self report study
Statistical correlations of criminal behaviour
The International Crime Victims Survey
Victim study
Victimology
== Notes ==
== Further reading ==
Van Dijk, J. J. M. (2008). The World of crime; breaking the silence on problems of crime, justice and development. Thousand Oaks: Sage Publications.
Catalano, S. M. (2006). The Measurement of Crime: Victim Reporting and Police Recording. New York: LFB Scholarly Pub. ISBN 1-59332-155-4.
Jupp, V. (1989). Methods of Criminological Research. Contemporary Social Research Series. London: Unwin Hyman. ISBN 0-04-445066-4.
Van der Westhuizen, J. (1981). Measurement of crime. Pretoria: University of South Africa. ISBN 0-86981-197-5.
Van Dijk, J. J. M.; van Kesteren, J. N.; Smit, P. (2008). Criminal Victimisation in International Perspective, Key findings from the 2004-2005 ICVS and EU ICS (PDF). The Hague: Boom Legal Publishers. Archived from the original (PDF) on 2008-06-25.
== External links ==
crime-statistics.co.uk, UK Crime Statistics and Crime Statistic Comparisons
A Continent of Broken Windows – Alexander, Gerard The Weekly Standard (Volume 11, Issue 10, 21 November 2005)
United States: Uniform Crime Report -- State Statistics from 1960 - 2005
Experience and Communication as explanations for Criminal Risk Perception
Regional crime rates 2011
Crime statistics for 2013 released, FBI | Wikipedia/Crime_rate |
The Industrial Age is a period of history that encompasses the changes in economic and social organization that began around 1760 in Great Britain and later in other countries, characterized chiefly by the replacement of hand tools with power-driven machines such as the power loom and the steam engine, and by the concentration of industry in large establishments.
While it is commonly believed that the Industrial Age was supplanted by the Information Age in the late 20th century, a view that has become common since the Revolutions of 1989, much of the Third World economy is still based on manufacturing. Mobile phones are now commonplace even in the poorest countries, enabling access to global information networks; even though many developing countries remain largely industrial, the Information Age is increasingly taking hold.
== Origins ==
Huge changes in agricultural methods made the Industrial Revolution possible. This agricultural revolution started with changes in farming in the Netherlands, later developed by the British.
The Industrial Age began in Great Britain in the mid-18th century and was fueled by coal mining from places such as Wales and County Durham.
The Industrial Revolution began in Great Britain because it had the factors of production: land (all natural resources), capital, and labour. Britain had plenty of harbors that enabled trade. It had access to capital, such as goods and money, for example tools, machinery, equipment, and inventory. Lastly, Britain had an abundance of labor, in this case industrial workers. There are many other conditions that help explain why the Industrial Revolution began in Great Britain. The British Isles and the colonies overseas represented huge markets that created a large demand for British goods. Britain also had one of the largest spheres of influence due to its massive navy and merchant marine. The British government's concern for commercial interests was also important.
== The textile industry ==
The cotton industry was the first industry to go through mechanization, the use of automatic machinery to increase production. The domestic system arose when businesses began importing raw cotton and employing spinners and weavers to make it into cloth in their homes. James Hargreaves invented the spinning jenny, which could produce eight times as much thread as a single spinning wheel, and Richard Arkwright developed a version driven by water. Later Arkwright opened a spinning mill, which marked the beginning of the factory system. In 1785, Edmund Cartwright invented a loom powered by water.
== Steam engines ==
In 1712, Thomas Newcomen produced the first successful steam engine, and in 1769, James Watt patented the modern steam engine. As a result, steam replaced water as industry's major power source.
The steam engine allowed for steamboats and locomotives, which made transportation much faster. By the mid-19th century the Industrial Revolution had spread to Continental Europe and North America, and since then it has spread to most of the world.
The Industrial Age is defined by mass production, broadcasting, the rise of the nation state, power, modern medicine and running water. The quality of human life has increased dramatically during the Industrial Age. Life expectancy today worldwide is more than twice as high as it was when the Industrial Revolution began.
== See also ==
Information Age
Imagination Age
== References ==
Savannah College of Art and Design (SCAD) is a private art school with locations in Savannah, Georgia; Atlanta, Georgia; and Lacoste, France. It was founded in 1978 to provide degrees in programs not yet offered in the southeast of the United States. The university enrolls more than 16,000 students from across the United States and around the world with international students comprising up to 17 percent of the student population. SCAD is accredited by the Southern Association of Colleges and Schools Commission on Colleges and other professional accrediting bodies.
== History ==
Richard G. Rowan, Paula S. Wallace, May L. Poetter and Paul E. Poetter legally incorporated the Savannah College of Art and Design on September 29, 1978. In September 1979, the university first began offering classes with four staff members, seven faculty members, and 71 students. Initially, the school offered eight majors: ceramics, graphic design, historic preservation, textile design, interior design, painting, photography, and printmaking. In May 1981, the first graduate received a degree. The following year, the first graduating class received degrees. In 1982, the enrollment grew to more than 500 students, then to 1,000 in 1986, and 2,000 in 1989. In 2014, the university enrolled more than 11,000 students.
In the late 1980s and early 1990s, a rash of faculty suicides prompted a nervous reaction from school administrators. The resulting unrest led a competing art school to open downtown, igniting an "all-out war."
Student unrest grew in the early 1990s regarding student representation within the school, culminating in 1992 with the detonation of an explosive device at the administration building, and two more later that year, at the Savannah Civic Center.
SCAD opened a study abroad location in Lacoste, France, in 2002 that provides programming for the various academic departments offered by the university's degree-granting locations. It launched an online learning program in 2003 that U.S. News & World Report ranks among the best in the nation for bachelor's programs. In 2005 the university opened a location in Midtown Atlanta that merged with the Atlanta College of Art in 2006. In September 2010, SCAD opened a Hong Kong location in the Sham Shui Po district.
Richard Rowan (who was married to Paula Wallace at the time) served as president of the college from its inception in 1978 until April 2000, when SCAD's board of trustees promoted him to chancellor. As chancellor, Rowan spent most of his time traveling and recruiting international students and staff. In 2001, he resigned the job and left the college.
Paula S. Wallace is the current president. Wallace, formerly Paula S. Rowan, served as SCAD's provost and dean of academics before becoming president. As president, Wallace directs the internal management of the institution. She has led the development of several annual events, such as the Sidewalk Arts Festival, the Savannah Film Festival, a fashion show, SCAD Style, the deFine Art Festival, the Art Educators' Forum and Rising Star. Questions have been raised about the unusual pay packages granted to Wallace and her family: Paula Wallace received $9.6 million in compensation in 2014, and 13 members of her family have received $60 million over the past 20 years.
The university's second museum, SCAD FASH Museum of Fashion + Film, opened in 2015, at SCAD Atlanta.
In 2018, a student started a petition calling for better mental health services for students after two suicides occurred after the beginning of the 2018 academic year. In 2019, SCAD increased the number of professional counseling staff and created Bee Well, which provides virtual and physical counseling, wellness workshops, and a 24/7 toll-free emotional support hotline.
In March 2020, in response to the COVID-19 pandemic, SCAD transitioned to entirely virtual learning for all students, while allowing international students and others to remain in residence halls following social distancing protocols.
In June 2020, SCAD discontinued studies at its Hong Kong location, citing concerns about student safety and academic quality following the 2019–20 Hong Kong protests and the COVID-19 pandemic. The North Kowloon Magistracy will be returned to the city.
In June 2020, in the midst of Black Lives Matter protests around the U.S., SCAD created an office of inclusion and announced related initiatives to address systemic racism, including the addition of 15 endowed scholarships for Black students.
== Campus ==
=== Facilities ===
SCAD's efforts to work with the city of Savannah to preserve its architectural heritage include restoring buildings for use as college facilities, for which it has been recognized by the American Institute of Architects, the National Trust for Historic Preservation, the Historic Savannah Foundation and the Victorian Society of America. The college campus includes 67 buildings throughout the grid-and-park system of downtown Savannah. Many buildings are on the 22 squares of the old town, which are laden with monuments and live oaks and have a Southern-Gothic feel.
Located in Atlanta's Midtown, SCAD Atlanta includes classroom and exhibition space, computer labs, library, photography darkrooms, printmaking and sculpture studios, a dining hall, fitness center, swimming pool and residence hall. SCAD Atlanta's Ivy Hall (also known as the Edward C. Peters House) opened in 2008 after extensive restoration. In 2009, SCAD Atlanta opened the Digital Media Center.
The SCAD Lacoste campus is made up of 15th- and 16th-century structures. The campus includes an art gallery, guest houses, a computer lab and a printmaking lab. In Hong Kong, SCAD occupies the renovated historic North Kowloon Magistracy Building, with more than 80,000 square feet (7,400 m2). It is equipped with classrooms, meeting areas, computer labs, an art gallery and a library.
The college's first academic building was the Savannah Volunteer Guards Armory, which was purchased and renovated in 1979. Built in 1892, the Romanesque Revival red brick structure is included on the National Register of Historic Places. Originally named Preston Hall, the building was renamed Poetter Hall in honor of co-founders May and Paul Poetter. SCAD soon expanded rapidly, acquiring buildings in Savannah's downtown historic and Victorian districts, restoring old and often derelict buildings that had exhausted their original functions.
The college operates four libraries: Jen Library in Savannah, Georgia; ACA Library in Atlanta, Georgia; Hong Kong Library in Hong Kong; and Lacoste Library in Lacoste, France. Many additional resources are available via the eLearning Library.
The most notable of the group, for the size of its collection, is the Jen Library, which houses approximately 42,000 books, 11,000 bound volumes of periodicals, and 1,600 videotapes in an 85,000-square-foot building. The building itself once served as a Maas Brothers department store before being acquired and repurposed by the university. Its structural and design features include a large glass staircase and floor-to-ceiling windows on opposite corners of the building. The Jen Library houses multiple rare collections of books and visual arts materials, including the Don Bluth Collection of Animation and the Newton Collection of British and American Art. It is also home to the Gutstein Gallery, an assemblage of contemporary art from nationally recognized artists as well as SCAD alumni.
In April 2021, the college announced plans to expand its film and digital media studio, which would make it the largest college movie studio in the country. Plans include a new digital stage and three new soundstages housed at a 10.9-acre backlot.
=== Student housing ===
In Atlanta, the university provides three residence halls, ACA Residence Hall of SCAD, Brookwood Courtyard, and the Forty. The Hong Kong residence hall is the Hong Kong Gold Coast residences. The residence halls in Savannah are Barnard Village, Boundary Village, Montgomery House, Oglethorpe House, Turner House, Chatham House, Victory Village, Turner Annex, and the Hive student housing complex, consisting of Apiary, Bumble, Colony, Dance, Everest, Flower, Garden, and Honey at The Hive. Students in Lacoste live in Maison Pitot, Fortunee, Renard, Murier, Olivier, and Basse.
=== Museums and galleries ===
SCAD operates museums, galleries, and exhibition spaces across its campuses, including the SCAD Museum of Art, located on the site of the former Central of Georgia Railway headquarters in Savannah, Georgia, and SCAD FASH Museum of Fashion + Film in Atlanta, Georgia. Rafael Gomes is the director of fashion exhibitions and has curated several shows including "Robert Fairer Backstage Pass: Dior, Galliano, Jacobs, and McQueen".
University galleries include Gutstein Gallery, Pei Ling Chan Gallery, Pinnacle Gallery and La Galerie Bleue in Savannah; Gallery 1600, Trois Gallery and Gallery See in Atlanta; and Moot Gallery in Hong Kong.
== Academics ==
SCAD offers fine art degrees. In Fall 2019, SCAD enrolled more than 14,840 students (12,167 undergraduates; 2,673 postgraduates) from all 50 states, and more than 110 countries. As of 2020, international student enrollment was 17 percent.
=== Accreditation ===
SCAD is accredited by the Commission on Colleges of the Southern Association of Colleges and Schools to award bachelor's and master's degrees. The university confers Bachelor of Arts, Bachelor of Fine Arts, Master of Architecture, Master of Arts, Master of Arts in Teaching, Master of Fine Arts and Master of Urban Design degrees, as well as undergraduate and graduate certificates. The professional M.Arch. degree is accredited by the National Architectural Accrediting Board. The Master of Arts in Teaching degrees offered by SCAD are approved by the Georgia Professional Standards Commission. SCAD is licensed by the South Carolina Commission on Higher Education. The SCAD interior design Bachelor of Fine Arts degree is accredited by the Council for Interior Design Accreditation.
=== Study abroad ===
The university offers a study-abroad campus in Lacoste, France. In Fall 2010, SCAD opened SCAD Hong Kong in the former North Kowloon Magistracy.
=== Schools and departments ===
The university is divided into nine schools:
School of Building Arts
School of Business Innovation
School of Communication Arts
School of Design
School of Fashion
School of Digital Media
School of Entertainment Arts
School of Fine Arts
School of Liberal Arts
== Student activities ==
There are 80 student organizations related to academic and non-academic programs and activities. SCAD has no fraternities or sororities.
=== Student media ===
The university has multiple student-run media organizations at its Savannah and Atlanta locations.
Savannah
District, a news publication printed from 1995 to 2008, now published online only
The Manor, an online fashion magazine published since 2014
Port City Review, an annual literary and arts journal published since 2013
The HoneyDripper, a sequential art and illustration blog published since 2016
SCAD Radio, an online webcasting station broadcasting since 2002
Women's Empowerment Club (WEC), a discussion-based group dedicated to intersectional feminism and social awareness
Atlanta
The Connector, a news publication printed from 2006 to 2008, now published online only
SCAN Magazine, a quarterly general interest magazine published since 2009
SCAD Atlanta Radio, an online webcasting station broadcasting since 2007
SCADMC, an online gaming media experience since 2024
== Athletics ==
=== SCAD Savannah Bees ===
The athletic teams of the SCAD Savannah campus are called the Bees. The college is a member of the National Association of Intercollegiate Athletics (NAIA) and has competed primarily in the Sun Conference (known as the Florida Sun Conference (FSC) until after the 2007–08 school year) since the 2004–05 academic year. The Bees competed as an NAIA Independent during the 2003–04 school year, having previously been NAIA members from 1987–88 (when the school began intercollegiate athletics) to 1991–92, and competed in the Division III ranks of the National Collegiate Athletic Association (NCAA) as an NCAA D-III Independent from 1992–93 to 2002–03.
SCAD Savannah competes in 22 intercollegiate varsity sports. Men's sports include bowling, cross country, cycling, golf, lacrosse, soccer, swimming, tennis and track & field (indoor and outdoor); while women's sports include bowling, cross country, cycling, golf, lacrosse, soccer, swimming, tennis and track & field (indoor and outdoor); and co-ed sports include equestrian and eSports. Former sports included men's & women's basketball, cheerleading and co-ed fishing.
Club/intramural sports
Fencing is offered as a club sport. Opportunities for athletics participation also exist through the college's intramural programs. Volleyball, beach volleyball, basketball, soccer, flag football, softball and various other activities are available at the intramural level.
NCAA to NAIA
On June 17, 2003, Savannah College of Art and Design executive vice president Brian Murphy and athletic director Jud Damon announced that the university would be changing athletic affiliation from the Division III ranks of the NCAA and re-joining the NAIA. SCAD had been a Division III member since 1992, but would now be joining the Florida Sun Conference. The college was a member of the NAIA from 1987 to 1992 and renewed membership in the NAIA and the FSC (now the Sun Conference) beginning with the 2003–04 season.
=== SCAD Atlanta Bees ===
The athletic teams of the SCAD Atlanta campus are likewise called the Bees. The college is a member of the National Association of Intercollegiate Athletics (NAIA) and has competed primarily in the Appalachian Athletic Conference (AAC) since the 2012–13 academic year, after spending two seasons as an NAIA Independent within the Association of Independent Institutions (AII), from 2010–11 (when the campus began intercollegiate athletics and joined the NAIA) to 2011–12.
SCAD Atlanta competes in 16 intercollegiate varsity sports. Men's sports include bowling, cross country, cycling, fencing, golf, tennis and track & field (indoor and outdoor); women's sports include bowling, cross country, cycling, fencing, golf, tennis and track & field (indoor and outdoor).
Origins
In 2010, SCAD Atlanta entered the NAIA in men's and women's golf, men's and women's tennis and men's and women's cross country.
== Annual events ==
=== Savannah Film Festival ===
The college holds numerous lectures, performances and film screenings at two historic theaters it owns, the Trustees Theater and the Lucas Theatre for the Arts. These theaters are also used once a year for the Savannah Film Festival in late October/early November. Past guests of the festival include Roger Ebert, Peter O'Toole, Tommy Lee Jones, Norman Jewison, Ellen Burstyn, Sir Ian McKellen, Oliver Stone, Liam Neeson, James Franco, Sidney Lumet, Miloš Forman, Michael Douglas, Woody Harrelson, John Goodman, Claire Danes, James Gandolfini, Patrick Stewart, Holly Hunter and many others. With an average attendance of more than 40,000, the event includes a week of lectures, workshops and screenings of student and professional films. There is also a juried competition.
=== deFINE ART ===
Founded in 2010, deFINE ART brings leading contemporary artists to Savannah and Atlanta annually in February to present new projects, commissioned works, and new performances. Since 2010, guests have included artists such as Lawrence Weiner, Marilyn Minter, Hank Willis Thomas, Carlos Cruz-Diez, and others.
=== Sidewalk Arts and Sand Arts Festivals ===
Each April, SCAD hosts the Sidewalk Arts Festival in downtown Forsyth Park. The festival consists primarily of the chalk-drawing competition, which is divided into group and individual categories of students, alumni and prospective students. Similar is the Sand Arts Festival. This sand festival is held every spring on the beaches of nearby Tybee Island. Contestants can work alone or in groups of up to four people. The competition is divided into sand relief, sand sculpture, sand castle and wind sculpture divisions.
=== Other events ===
Individual departments host yearly and quarterly shows to promote student work. Annual festivals such as SCAD AnimationFest, SCAD GamingFest and SCAD aTVfest, and events such as SCAD Style, offer opportunities for networking.
Students also attend, en masse, non-SCAD-affiliated events held in the historic district, such as the Savannah Jazz Festival and the St. Patrick's Day celebration.
== Controversies ==
=== Clarence Thomas Center for Historic Preservation ===
SCAD has received repeated backlash for naming one of its academic halls after Supreme Court Justice Clarence Thomas. Thomas was born and raised in Savannah, and served as an altar boy at a convent located at 439 East Broad Street. In 2010, the building was acquired by the school and renamed the Clarence Thomas Center for Historic Preservation, with Thomas attending the dedication. Following renewed interest in the Anita Hill hearings during Brett Kavanaugh's Supreme Court nomination, several petitions were formed by SCAD students and alumni demanding the school change the building's name. Despite one petition receiving over 2,000 signatures, SCAD refused to rename the building. Students also launched a petition to keep Thomas' name on the building, which received over 18,000 signatures. In 2022, in response to the Supreme Court's decision to overturn Roe v. Wade in Dobbs v. Jackson Women's Health Organization, SCAD once again received backlash over the building's name. Thomas voted with the majority holding that the U.S. Constitution did not confer a right to abortion, returning to the individual states the power to regulate any aspect of abortion not protected by federal law, a decision that sparked protests across the country and in Savannah. Another petition, started by a SCAD student, amassed over 2,000 signatures. Following this renewed backlash, SCAD removed the sign with Thomas' name from the building, but issued no statement on the matter.
=== Impact on Savannah ===
SCAD has had a significant impact on tourism in Savannah. In a report published by SCAD in 2018, the school claimed to have generated over $3 billion for the city and attracted 14.5 million visitors. A similar report by SCAD in 2020 claimed that the school’s Atlanta and Savannah campuses brought in $766.2 million in annual economic impact for the state. Yet many Savannah residents and SCAD students have expressed dissatisfaction with SCAD’s growth, specifically in Savannah. SCAD does not pay property taxes in Savannah, and the continued growth of the school’s facilities has raised property taxes in many of Savannah’s lower-income neighborhoods. In 2023, the first large-scale protest against SCAD’s expansion was held by community members at the SCAD Museum of Art in response to SCAD’s continued displacement of black families in Savannah. The school has issued no comment on the matter.
In 2022, it was reported that SCAD has claimed "nearly $800 million of property out of local tax revenue" while luring luxury developers to further displace local residents.
=== Racial discrimination lawsuits ===
Between 2020 and 2022, three former instructors at SCAD filed suit claiming racial discrimination and retaliation for speaking out.
=== Bobby Zarem lawsuit ===
In 2014, the New York Post reported that former SCAD employee and influential publicist Bobby Zarem was suing SCAD for dismissing him after he spoke out about a series of sexual assaults on campus.
=== Censure of SCAD ===
SCAD was censured by the American Association of University Professors (AAUP) for issues surrounding academic freedom, tenure, and the dismissal of faculty members. The first censure came in 1993. After the school worked with the AAUP to be removed from the list in 2010, the organization and the school came to an impasse, and in 2012 the AAUP renewed its censure, which remains in place as of 2025.
== Notable faculty ==
== Notable alumni ==
== References ==
== External links ==
Official website
Athletics website (Savannah)
Athletics website (Atlanta)
An industrial district (ID) is a place where workers and firms, specialised in a main industry and auxiliary industries, live and work. The concept was initially used by Alfred Marshall to describe some aspects of the industrial organisation of nations. By the end of the 1990s, industrial districts in developed and developing countries had gained recognition in international debates on industrialisation and policies of regional development.
== History of the term ==
The term was first used by Alfred Marshall in The Principles of Economics (1890, 1922); in his Industry and Trade, Marshall speaks of a "thickly peopled industrial district".
The term was also used in political struggle. The 1917 handbook of the Industrial Workers of the World states:
"In order that every given industrial district shall have complete industrial solidarity among the workers in all industries as well as among the workers of each an INDUSTRIAL DISTRICT COUNCIL is formed ..."
The term also appears in English literature, for instance in a 1920 short story by D. H. Lawrence, You Touched Me (also known as "Hadrian"):
"Matilda and Emmie were already old maids. In a thorough industrial district, it is not easy for the girls who have expectations above the common to find husbands. The ugly industrial town was full of men, young men who were ready to marry. But they were all colliers or pottery-hands, mere workmen."
The strong specialisation of the workers, and an appropriate supply of public goods and institutions, are sustained by an "Industrial Atmosphere" related to a locally developing division of labour. Competences and knowledge are shared informally through processes of learning by doing and learning by using, and this promotes innovation over time. Local firms, families and civic organisations are connected by way of both market mechanisms and non-market mechanisms, like trust within bilateral or team exchanges, and collective action supporting the availability of local industrial, social and environmental infrastructure. The notion that firms located in geographical proximity benefit from agglomeration effects in having a common or collective infrastructure is also frequently cited as one of the main bases of the industrial district literature.
== Recent evolution of the use of the term ==
Within the study of economics, the term has evolved. Giacomo Becattini rediscovered the concept to describe the Italian industrial configuration of the middle of the 20th century. Since the 1980s, the dynamic industrial development of the North, East and Centre (NEC) of Italy, where geographical concentrations of specialised small and medium-sized enterprises (SMEs) grew up after the Second World War, has led to increasing attention to Marshall's seminal works. A growing literature, with an accompanying cloud of definitions of what is meant by an industrial district (e.g. cluster), has characterised the international debate. Industrial districts in Italy have a coherent location and a narrow specialisation profile, e.g. Prato in woollen fabric, Sassuolo in ceramic tiles or Brenta in ladies' footwear.
The success of SME-based Italian districts in the last century, and the alternating fortunes of the current ones, have led researchers to investigate some related aspects more thoroughly. The general characteristics of the ID are consistent with gradual change supported by processes of innovation from below, or decentralized industrial creativity. However, globalisation demanded non-gradual changes of the historical IDs, and technical and organisational difficulties could beset them.
In the Industry 4.0 era, the specialised capabilities of these areas appear able to encourage the emergence of new artisans and makers in the context of adapted models like the "ID mark 3.0".
== See also ==
Company town
Creative city
Industrial park
Mega-Site
Mill town
== References ==
Discourse on the Method of Rightly Conducting One's Reason and of Seeking Truth in the Sciences (French: Discours de la Méthode pour bien conduire sa raison, et chercher la vérité dans les sciences) is a philosophical and autobiographical treatise published by René Descartes in 1637. It is best known as the source of the famous quotation "Je pense, donc je suis" ("I think, therefore I am", or "I am thinking, therefore I exist"), which occurs in Part IV of the work. A similar argument, without this precise wording, is found in Meditations on First Philosophy (1641), and a Latin version of the same statement Cogito, ergo sum is found in Principles of Philosophy (1644).
Discourse on the Method is one of the most influential works in the history of modern philosophy, and important to the development of natural sciences. In this work, Descartes tackles the problem of skepticism, which had previously been studied by other philosophers. While addressing some of his predecessors and contemporaries, Descartes modified their approach to account for a truth he found to be incontrovertible; he started his line of reasoning by doubting everything, so as to assess the world from a fresh perspective, clear of any preconceived notions.
The book was originally published in Leiden, in the Netherlands. Later, it was translated into Latin and published in 1656 in Amsterdam. The book was intended as an introduction to three works: Dioptrique, Météores, and Géométrie. Géométrie contains Descartes's initial concepts that later developed into the Cartesian coordinate system. The text was written and published in French so as to reach a wider audience than Latin, the language in which most philosophical and scientific texts were written and published at that time, would have allowed. Most of Descartes' other works were written in Latin.
Together with Meditations on First Philosophy, Principles of Philosophy and Rules for the Direction of the Mind, it forms the base of the epistemology known as Cartesianism.
== Organization ==
The book is divided into six parts, described in the author's preface as:
Various considerations touching the Sciences
The principal rules of the Method which the Author has discovered
Certain of the rules of Morals which he has deduced from this Method
The reasonings by which he establishes the existence of God and of the Human Soul
The order of the Physical questions which he has investigated, and, in particular, the explication of the motion of the heart and of some other difficulties pertaining to Medicine, as also the difference between the soul of man and that of the brutes
What the Author believes to be required in order to greater advancement in the investigation of Nature than has yet been made, with the reasons that have induced him to write
=== Part I: Various scientific considerations ===
Descartes begins by allowing himself some wit:
Good sense is, of all things among men, the most equally distributed; for every one thinks himself so abundantly provided with it, that those even who are the most difficult to satisfy in everything else, do not usually desire a larger measure of this quality than they already possess.
A similar observation can be found in Hobbes, when he writes about human abilities, specifically wisdom and "their own wit": "But this proveth rather that men are in that point equal, than unequal. For there is not ordinarily a greater sign of the equal distribution of anything than that every man is contented with his share," but also in Montaigne, whose formulation indicates that it was a commonplace at the time: "Tis commonly said that the justest portion Nature has given us of her favors is that of sense; for there is no one who is not contented with his share." Descartes continues with a warning:
For to be possessed of a vigorous mind is not enough; the prime requisite is rightly to apply it. The greatest minds, as they are capable of the highest excellences, are open likewise to the greatest aberrations; and those who travel very slowly may yet make far greater progress, provided they keep always to the straight road, than those who, while they run, forsake it.
Descartes describes his disappointment with his education: "[A]s soon as I had finished the entire course of study…I found myself involved in so many doubts and errors, that I was convinced I had advanced no farther…than the discovery at every turn of my own ignorance." He notes his special delight with mathematics, and contrasts its strong foundations to "the disquisitions of the ancient moralists [which are] towering and magnificent palaces with no better foundation than sand and mud."
=== Part II: Principal rules of the Method ===
Descartes was in Germany, attracted thither by the wars in that country, and he describes his intent with a "building metaphor" (see also: Neurath's boat). He observes that buildings, cities or nations that have been planned by a single hand are more elegant and commodious than those that have grown organically. He resolves not to build on old foundations, nor to lean upon principles which he had taken on faith in his youth. Descartes seeks to ascertain the true method by which to arrive at the knowledge of whatever lies within the compass of his powers. He presents four precepts:
The first was never to accept anything for true which I did not clearly know to be such; that is to say, carefully to avoid precipitancy and prejudice, and to comprise nothing more in my judgment than what was presented to my mind so clearly and distinctly as to exclude all ground of doubt.
The second, to divide each of the difficulties under examination into as many parts as possible, and as might be necessary for its adequate solution.
The third, to conduct my thoughts in such order that, by commencing with objects the simplest and easiest to know, I might ascend by little and little, and, as it were, step by step, to the knowledge of the more complex; assigning in thought a certain order even to those objects which in their own nature do not stand in a relation of antecedence and sequence.
And the last, in every case to make enumerations so complete, and reviews so general, that I might be assured that nothing was omitted.
=== Part III: Morals and Maxims of conducting the Method ===
Descartes uses the analogy of rebuilding a house from secure foundations, and extends the analogy to the idea of needing a temporary abode while his own house is being rebuilt. Descartes adopts the following "three or four" maxims in order to remain effective in the "real world" while experimenting with his method of radical doubt. They form a rudimentary belief system from which to act before his new system is fully developed:
The first was to obey the laws and customs of my country, adhering firmly to the faith in which, by the grace of God, I had been educated from my childhood; and regulating my conduct in every other matter according to the most moderate opinions, and the farthest removed from extremes, which should happen to be adopted in practice with general consent of the most judicious of those among whom I might be living.
Be as firm and resolute in my actions as I was able.
Endeavor always to conquer myself rather than fortune, and change my desires rather than the order of the world, and in general, accustom myself to the persuasion that, except our own thoughts, there is nothing absolutely in our power; so that when we have done our best in things external to us, our ill-success cannot possibly be failure on our part.
Finally, Descartes states his resolute belief that there is no better use of his time than to cultivate his reason and to advance his knowledge of the truth according to his method.
=== Part IV: Proof of God and the Soul ===
Applying the method to itself, Descartes challenges his own reasoning and reason itself. But Descartes believes three things are not susceptible to doubt and the three support each other to form a stable foundation for the method. He cannot doubt that something has to be there to do the doubting: I think, therefore I am. The method of doubt cannot doubt reason as it is based on reason itself. By reason there exists a God, and God is the guarantor that reason is not misguided. Descartes supplies three different proofs for the existence of God, including what is now referred to as the ontological proof of the existence of God.
=== Part V: Physics, the heart, and the soul of man and animals ===
Descartes briefly sketches how in an unpublished treatise (published posthumously as Le Monde) he had laid out his ideas regarding the laws of nature, the sun and stars, the moon as the cause of "ebb and flow" (meaning the tides), gravitation, light, and heat. Describing his work on light, he states:
[I] expounded at considerable length what the nature of that light must be which is found in the sun and the stars, and how thence in an instant of time it traverses the immense spaces of the heavens.
His work on such physico-mechanical laws is, however, framed as applying not to our world but to a theoretical "new world" created by God
somewhere in the imaginary spaces [with] matter sufficient to compose ... [a "new world" in which He] ... agitate[d] variously and confusedly the different parts of this matter, so that there resulted a chaos as disordered as the poets ever feigned, and after that did nothing more than lend his ordinary concurrence to nature, and allow her to act in accordance with the laws which he had established.
Descartes does this "to express my judgment regarding ... [his subjects] with greater freedom, without being necessitated to adopt or refute the opinions of the learned." (Descartes' hypothetical world would be a deistic universe.)
He goes on to say that he "was not, however, disposed, from these circumstances, to conclude that this world had been created in the manner I described; for it is much more likely that God made it at the first such as it was to be." Despite this admission, it seems that Descartes' project for understanding the world was that of re-creating creation—a cosmological project which aimed, through Descartes' particular brand of experimental method, to show not merely the possibility of such a system, but to suggest that this way of looking at the world—one with (as Descartes saw it) no assumptions about God or nature—provided the only basis upon which he could see knowledge progressing (as he states in Book II).
Thus, in Descartes' work, we can see some of the fundamental assumptions of modern cosmology in evidence—the project of examining the historical construction of the universe through a set of quantitative laws describing interactions which would allow the ordered present to be constructed from a chaotic past.
He goes on to the motion of the blood in the heart and arteries, endorsing the findings of "a physician of England" about the circulation of blood, referring to William Harvey and his work De motu cordis in a marginal note. But he then disagrees strongly about the function of the heart as a pump, ascribing the motive power of the circulation to heat rather than muscular contraction. He observes that these motions seem to be totally independent of what we think, and concludes that our bodies are separate from our souls.
He does not seem to distinguish between mind, spirit, and soul, all of which he identifies with our faculty for rational thinking. Hence the term "I think, therefore I am." All three of these words (particularly "mind" and "soul") can be signified by the single French term âme.
=== Part VI: Prerequisites for advancing the investigation of Nature ===
Descartes begins by obliquely referring to the recent trial of Galileo for heresy and the Church's condemnation of heliocentrism; he explains that for these reasons he has held back his own treatise from publication. However, he says, because people have begun to hear of his work, he is compelled to publish these small parts of it (that is, the Discourse, Dioptrique, Météores, and Géométrie) so that people will not wonder why he does not publish.
The discourse ends with some discussion of scientific experimentation: Descartes believes that experimentation is indispensable, time-consuming, and yet not easily delegated to others. He exhorts the reader to investigate the claims laid out in Dioptrique, Météores, and Géométrie and communicate their findings or criticisms to his publisher; he commits to publishing any such queries he receives along with his answers.
== Influencing future science ==
Skepticism had previously been discussed by philosophers such as Sextus Empiricus, Al-Kindi, Al-Ghazali, Francisco Sánchez and Michel de Montaigne. Descartes started his line of reasoning by doubting everything, so as to assess the world from a fresh perspective, clear of any preconceived notions or influences.
This is summarized in the book's first precept to "never to accept anything for true which I did not clearly know to be such". This method of pro-foundational skepticism is considered to be the start of modern philosophy.
== Quotations ==
"The most widely shared thing in the world is good sense, for everyone thinks he is so well provided with it that even those who are the most difficult to satisfy in everything else do not usually desire to have more good sense than they have. It is not likely that everyone is mistaken in this…" (part I, AT p. 1 sq.)
"I know how very liable we are to delusion in what relates to ourselves; and also how much the judgments of our friends are to be suspected when given in our favor." (part I, AT p. 3)
"… I believed that I had already given sufficient time to languages, and likewise to the reading of the writings of the ancients, to their histories and fables. For to hold converse with those of other ages and to travel, are almost the same thing." (part I, AT p. 6)
"Of philosophy I will say nothing, except that when I saw that it had been cultivated for so many ages by the most distinguished men; and that yet there is not a single matter within its sphere which is still not in dispute and nothing, therefore, which is above doubt, I did not presume to anticipate that my success would be greater in it than that of others." (part I, AT p. 8)
"… I entirely abandoned the study of letters, and resolved no longer to seek any other science than the knowledge of myself, or of the great book of the world.…" (part I, AT p. 9)
"The first was to include nothing in my judgments than what presented itself to my mind so clearly and distinctly that I had no occasion to doubt it." (part II, AT p. 18)
"… In what regards manners, everyone is so full of his own wisdom, that there might be as many reformers as heads.…" (part VI, AT p. 61)
"… And although my speculations greatly please myself, I believe that others have theirs, which perhaps please them still more." (part VI, AT p. 61)
== See also ==
Mathesis universalis
Great Conversation
== References ==
== External links ==
Descartes, René (1637). Discours de la méthode pour bien conduire sa raison et chercher la vérité dans les sciences, plus la dioptrique, les météores et la géométrie (in French), BnF Gallica
Discourse on the Method at Project Gutenberg
Discours de la Méthode at Project Gutenberg (édition Victor Cousin, Paris 1824)
Discours de la méthode, par Adam et Tannery, Paris 1902 (academic standard edition of the original text, 1637), PDF, 80 pages, 362 kB.
Contains Discourse on the Method, slightly modified for easier reading
Free audiobook at librivox.org or at audioofclassics
A village design statement (VDS) is a document in English rural planning practice that describes the distinctive characteristics of a locality and provides design guidance to influence future development and improve the physical qualities of the area.
Drawing up the VDS provides an opportunity for communities to describe how they feel the physical character of their parish can be enhanced. Rural community councils support local communities in the production of village design statements.
== See also ==
Parish plan
Auroville Village Action Group
Village development committee (India)
== References ==
== External links ==
Chelmsford Borough VDS
Rippingale Village Design Committee website
Model dwellings companies (MDCs) were a group of private companies in Victorian Britain that sought to improve the housing conditions of the working classes by building new homes for them, at the same time receiving a competitive rate of return on any investment. The principle of philanthropic intention with capitalist return was given the label "five per cent philanthropy".
== Background ==
The precursor to the aims of MDCs was the work of Edwin Chadwick and others in exposing the sanitary conditions of slums in large metropolitan areas. Once Chadwick's reforms had been implemented, poverty remained rife in the overcrowded inner cities, and reformers had to look elsewhere for solutions to the problems of the working class. The publication of Engels' The Condition of the Working Class in England in 1844 and The Communist Manifesto, as well as fear of further uprisings such as that of the Chartists in 1848, increased concern for the welfare of the working class amongst the middle and upper classes.
== Model dwellings ==
Out of this environment, various societies and companies were formed to meet the housing needs of the working classes. Improved accommodation was seen as a way of ameliorating overcrowding, as well as the moral and sanitary problems resulting from it. The movement started in a small way in London, with the Metropolitan Association for Improving the Dwellings of the Industrious Classes and the Society for Improving the Condition of the Labouring Classes finding difficulty in raising sufficient capital to build commercially viable projects. Support from public figures and demonstrations at the Great Exhibition improved public awareness, if not investment.
The middle of the century saw the peak in MDC building, with around twenty-eight separate companies operating in London prior to the 1875 Cross Act. The movement picked up pace again after the Act, which granted local authorities the right to clear slum dwellings; however, the companies' entrepreneurial focus was restricted by an inability to make a competitive return and by the intervention of large-scale municipal housing. The most successful builders post-1875 were those making a smaller return, such as the Four Per Cent Industrial Dwellings Company and the East End Dwellings Company, often founded on religious principles as much as commercial ones.
== Companies ==
=== The Society for Improving the Condition of the Labouring Classes ===
The first of these companies was formed out of the Labourer's Friend Society, which in 1844 agreed to change its name and purpose towards building houses for labourers that might be adopted by others as a template. Their first urban building project was completed in 1846 at Bagnigge Wells, Pentonville, designed by Henry Roberts.
Although the Society for Improving the Condition of the Labouring Classes (SICLC) had the Prince Consort as its first president and contributed to the Great Exhibition of 1851, their block dwellings, in particular, were subject to criticism. The design of SICLC dwellings paid particular attention to sanitation and ventilation but was otherwise functional and utilitarian, and the resulting estate was seen as grim and unpleasant.
=== The Metropolitan Association for Improving the Dwellings of the Industrious Classes ===
The Metropolitan Association for Improving the Dwellings of the Industrious Classes (MAIDIC) was formed in 1841, earlier than the SICLC, but spent several years acquiring capital to begin its building projects. These commenced after the company obtained a royal charter which established it on more commercial grounds, guaranteeing a minimum return of five per cent on investment. This was outlined in the company's resolution:
That an association be formed for the purpose of providing the labouring man with an increase of the comforts and conveniences of life, with full return to the capitalist.
The first MAIDIC blocks were completed in 1848 in Old St Pancras Road, comprising twenty-one two-room apartments and ninety three-room apartments, again on an "associated" model, that is, with shared amenities such as lavatories and kitchens. This type of large block residence with shared facilities became the norm for model dwellings companies.
The MAIDIC was one of the largest MDCs and by 1900 housed over 6,000 people.
=== The Peabody Trust ===
The Peabody Trust was founded after an unprecedented donation in 1862 of £150,000, by the American banker George Peabody for the good of the poor in London. A committee was set up to choose the most appropriate way to spend the money, and it was decided to build a number of block dwellings for the very poorest of the city. These apartments were of similar design to other companies, but rents were offered at lower levels, leading to complaints from other MDCs.
Tenancy in a Peabody Dwelling came with strict rules: rents had to be paid weekly and punctually, and many trades were not permitted to be carried on at the dwellings. There was also a night-time curfew and a set of moral standards to be adhered to.
=== The Improved Industrial Dwellings Company ===
The largest MDC working in central London was the Improved Industrial Dwellings Company (IIDC), founded by Sir Sydney Waterlow in 1863, which housed around 30,000 individuals by 1900. Its rigorous selection procedure, rules and financial regulations meant that the IIDC was one of the more financially successful of these firms.
=== The Artizans', Labourers' and General Dwellings Company ===
The Artizans' Company became one of the largest of the MDCs, concentrating on suburban, low-rise estates rather than the central, high-rise model of other companies. It was founded by a former labourer, William Austin, in 1867 and immediately set about building and selling model dwellings first in Battersea, then Salford, Gosport and elsewhere. Their first major contribution to the MDC movement came at Shaftesbury Park in Battersea, a large, suburban estate opened by Lord Shaftesbury in 1872 as a "workmen's city" for "clerks, artisans and labourers". Building continued at a larger estate in Kilburn, Queen's Park, then a still larger estate at Hornsey, Noel Park, and finally Leigham Court in Streatham. The company also diversified into block dwellings and other, more commercially minded estates such as Pinnerwood Park near Harrow.
By 1900, the Artizans' Company provided dwellings for 42,000 people in over 6,400 residences.
=== East End Dwellings Company ===
The EEDC was founded in 1882 by a committee from the parish of St Jude, Whitechapel, headed by Canon Samuel Barnett. The company was one of the most successful providers of housing to the very poor in the East End of London, being founded along religious lines rather than preoccupied with capital return on investment, the preoccupation that was the biggest reason behind earlier builders' lack of success.
Following Octavia Hill's principles of female residence managers, the company employed female rent collectors including Beatrice Potter (later Webb, co-founder of the London School of Economics) and Ella Pycroft. The company built a large number of dwellings in what is now the London Borough of Tower Hamlets, starting with Katharine Buildings in 1885.
=== Four Per Cent Industrial Dwellings Company ===
The Four Per Cent Company was founded by a group of Anglo-Jewish philanthropists in 1885, headed by the banker Nathan Rothschild, 1st Baron Rothschild. They built large residences across Spitalfields and Whitechapel, later branching out towards Hackney and South London, with a remit to provide (although not exclusively) for destitute Jews in the East End.
The company later renamed itself the Industrial Dwellings Society (1885) Ltd., and is today known as IDS.
=== Other companies ===
There were a large number of companies operating in the nineteenth century, particularly in London, around twenty-eight at the time of the Cross Act. Other names include the South London Dwellings Company (founded by Emma Cons), the Chelsea Park Dwellings Company, the National Dwellings Society, the City and Central Dwellings Company, the London Labourers' Dwellings Society (founded by William Alexander Greenhill), the Real Property Investment Association and later the Guinness Trust, Lewis Trust and Sutton Trust.
Outside of London, the Pilrig Model Dwellings Company and Edinburgh Co-Operative Building Company were active in Edinburgh, Scotland, building a number of what have come to be referred to as colony houses. Other companies, such as the Chester Cottage Improvement Company and the Newcastle upon Tyne Improved Industrial Dwellings Company built in specific areas only. Other buildings were erected by individuals, such as Hugh Jackson's New Court, in Camden Town, London, and Sir James Gowans' Rosebank Cottages in Edinburgh.
The Newcastle upon Tyne Improved Industrial Dwellings Company was set up by James Hall of Hall Brothers Steamship Company, Tynemouth, after visiting Sir Sydney Waterlow's establishment in London. It built 108 flats at Garth Heads between 1869 and 1878; the chairman, directors and shareholders were mostly prominent local businessmen. The company was wound up in 1968 and the buildings at Garth Heads are currently used for private student accommodation.
== Other schemes ==
=== Baroness Burdett Coutts ===
Baroness Burdett-Coutts was a private philanthropist who gave to many and varied charitable endeavours. One of the most significant private contributions to the provision of working-class housing was Columbia Square in Bethnal Green, a block estate completed in 1857. Architecturally, it was a precursor to the imposing Peabody Dwellings, having been designed by Peabody's architect, Henry Darbishire. The addition of a grand marketplace modelled on the Sainte-Chapelle in Paris made the design distinct, but the project was seen overall as a failure, finally being demolished in 1960.
== Criticism and support ==
=== Contemporary ===
The MDC movement was strongly supported by individuals like Lord Shaftesbury, who was president of the Artizan's Company for some time, for providing a plan to "completely alter for the better the domiciliary habits of the people of the metropolis". Others, such as Engels, criticised the movement as "Proudhonist", and a means of ensuring the longevity of capitalism through a process of embourgeoisement.
=== Other ===
In the twentieth century and beyond, opinions over the MDC movements have tended towards two positions. The first, adopted by free market economists, asserts that the financial success of some of these companies shows that they could have been a significant help to the poor, if their operation was not interrupted by the arrival of social housing in the form of London County Council estates. Others argue that the failure of MDCs to meet the needs of the very poorest demonstrates that they were a stepping stone towards the inevitable necessity of state intervention to solve the housing crisis.
MDCs have been particularly criticised for failing to provide for the very poorest of society, concentrating on the labour aristocracy, the upper strata of the working classes.
== Further reading ==
Dennis, R. (1989) The Geography of Victorian Values: philanthropic housing in London, 1840–1900. Journal of Historical Geography 15(1), pp. 40–54
Morris, S. (2001) Market solutions for social problems: working-class housing in nineteenth-century London. Economic History Review 54(3), pp. 525–54
Stedman Jones, G. (1984) Outcast London: a study in the relationship between classes in Victorian society. London: Penguin
Tarn, J.N. (1973) Five Per Cent Philanthropy. London: Cambridge University Press
Wohl, A.S. (1977) The Eternal Slum: housing and social policy in Victorian London. London: Edward Arnold
== See also ==
List of existing model dwellings
Prince Albert's Model Cottage
== References ==
A model village is a mostly self-contained community, built from the late 18th century onwards by landowners and business magnates to house their workers. "Model" implies an ideal to which other developments could aspire. Although the villages are located close to the workplace, they are generally physically separated from them and often consist of relatively high-quality housing, with integrated community amenities and attractive physical environments.
== Great Britain and Ireland ==
According to Jeremy Burchardt, the term model village was first used by the Victorians to describe the new settlements created on the rural estates of the landed gentry in the eighteenth century. As landowners sought to improve their estates for aesthetic reasons, new landscapes were created and the cottages of the poor were demolished and rebuilt out of sight of their country house vistas. However, according to the Oxford English Dictionary (2024), the first use of the term model village is post-Victorian, dating to 1906.
Starting in the 18th century, new villages were created at Nuneham Courtenay, where the village was rebuilt as plain brick dwellings on either side of the main road; at Milton Abbas, where the village was moved and rebuilt in a rustic style; and at Blaise Hamlet in Bristol, which had individually designed buildings, some with thatched roofs.
The Swing Riots of 1830 highlighted poor housing in the countryside, with its attendant ill health and immorality, and the responsibility of landowners to provide cottages with basic sanitation. The best landlords provided accommodation, but many adopted a paternalistic attitude when they built model dwellings, imposing their own standards on the tenants and charging low rents while paying low wages.
As the Industrial Revolution took hold, industrialists who built factories in rural locations provided housing for workers clustered around the workplace. An early example of an industrial model village was New Lanark, built by Robert Owen. Philanthropic coal owners provided decent accommodation for miners from the early nineteenth century. Earl Fitzwilliam, a paternalistic colliery owner, provided houses near his coal pits in Elsecar near Barnsley that were "...of a class superior in size and arrangement, and in conveniences attached, to those of working classes." They had four rooms and a pantry, and outside a small garden and pig sty.
Others were established by Edward Akroyd at Copley between 1849 and 1853 and at Akroydon in 1861–63; Akroyd employed George Gilbert Scott. Titus Salt built a model village at Saltaire. Henry Ripley, owner of Bowling Dyeworks, began construction of Ripley Ville in Bradford in 1866. Industrial communities were established at Price's Village by Price's Patent Candle Company and at Aintree in 1888 by Hartley's, the jam makers. William Lever's Port Sunlight had a village green, and its houses espoused an idealised rural vernacular style. The Quaker industrialists George Cadbury and the Rowntrees built model villages by their factories: Cadbury built Bournville between 1898 and 1905, with a second phase from 1914, and New Earswick was built in 1902 for the Rowntrees.
As coal mining expanded villages were built to house coal miners. In Yorkshire, Grimethorpe, Goldthorpe, Woodlands, Fitzwilliam and Bottom Boat were built to house workers at the collieries. The architect who designed Woodlands and Creswell Model Villages, Percy B. Houfton was influential in the development of the garden city movement.
In the 1920s, Silver End model village in Essex was built for Francis Henry Crittall. Its houses were designed in an art deco-style with flat roofs and Crittall windows.
=== England ===
(Chronological order)
Trowse, Norfolk (1805)
Blaise Hamlet, Gloucestershire (1811)
Selworthy, Somerset (1828)
Barrow Bridge, Bolton (1830s)
Vulcan Village, Merseyside (1833)
Snelston, Derbyshire (1840s)
Swindon Railway Village, Wiltshire (1840s)
Withnell Fold, Lancashire (1844)
Meltham, Yorkshire (1850)
Bromborough Pool ("Price's Village"), Merseyside (1853)
Saltaire, Yorkshire (1853)
Akroydon, Yorkshire (1859)
Nenthead, Cumberland (1861)
New Sharlston Colliery Village, Yorkshire (1864)
Ripley Ville, Yorkshire (1866)
Copley, Yorkshire (1874)
Howe Bridge, Lancashire (1873–79)
Bournville, Worcestershire (1879)
Barwick, Hertfordshire (1888)
Port Sunlight, Merseyside (1888)
Creswell Model Village, Derbyshire (1895)
New Bolsover model village, Derbyshire (1896)
Vickerstown, Lancashire (1901)
New Earswick, Yorkshire (1904)
Woodlands, Yorkshire (1905)
Whiteley Village, Surrey (1907)
The Garden Village, Kingston upon Hull, Yorkshire (1908)
Silver End, Essex (1926)
Stewartby, Bedfordshire (1926)
=== Ireland ===
Milford, County Armagh, Northern Ireland (1800s)
Portlaw, County Waterford, Republic of Ireland (1825)
Sion Mills, County Tyrone, Northern Ireland (1835)
Bessbrook, County Armagh, Northern Ireland (1845)
Laurelvale, County Armagh, Northern Ireland (1850s)
Model Village, County Cork (1910s; usually called Tower, the name of the pre-existing hamlet)
=== Scotland ===
New Lanark, Lanarkshire (1786)
=== Wales ===
Tremadog, Caernarfonshire (1798)
Elan Village, Powys (1892)
Portmeirion, Merioneth (1925)
== Europe ==
=== Czech Republic ===
Zlín, located in Moravia, was organized and built by Tomáš Baťa to house and efficiently organize the workers of Bata Shoes.
=== Germany ===
Stadt des KdF-Wagens was built for the Volkswagen factory.
=== Italy ===
Crespi d'Adda, in the Lombardy region, is a well-preserved model workers' village and a World Heritage Site since 1995. It was built from scratch, starting in 1878, to provide housing and social services for the workers of a cotton textile factory on the banks of the river Adda.
=== Spain ===
Nuevo Baztán, outside Madrid, dates from the mercantilist and entrepreneurial ambitions of an early eighteenth-century industrialist.
== Australasia ==
=== Australia ===
Australian Newsprint Mills established a worker's village at Boyer, Tasmania to accommodate workers of the Boyer Mill
Cadbury established the Cadbury's Estate in Claremont, Tasmania in 1921
EZ Industries constructed homes at Lutana, Tasmania for workers of the nearby Risdon Zinc Works, commencing in 1916
=== New Zealand ===
Barrhill was laid out by its Scottish owner for the workers on his large sheep farm
== Asia ==
=== China ===
Huawei Ox Horn Campus, research and development buildings of technology company Huawei
== See also ==
Company town
New Towns in the United Kingdom
Garden city movement
== References ==
=== Citations ===
=== Bibliography ===
== Further reading ==
Gillian Darley, Villages of Vision: A Study of Strange Utopias, first published 1975 (Architectural Press; paperback 1978, Paladin) and republished with a fully revised gazetteer in 2007 (Five Leaves Publications)
== External links == | Wikipedia/Model_village |
Prototype theory is a theory of categorization in cognitive science, particularly in psychology and cognitive linguistics, in which there is a graded degree of belonging to a conceptual category, and some members are more central than others. It emerged in 1971 with the work of psychologist Eleanor Rosch, and it has been described as a "Copernican Revolution" in the theory of categorization for its departure from the traditional Aristotelian categories. It has been criticized by those that still endorse the traditional theory of categories, like linguist Eugenio Coseriu and other proponents of the structural semantics paradigm.
In this prototype theory, any given concept in any given language has a real world example that best represents this concept. For example: when asked to give an example of the concept furniture, a couch is more frequently cited than, say, a wardrobe. Prototype theory has also been applied in linguistics, as part of the mapping from phonological structure to semantics.
In formulating prototype theory, Rosch drew in part on previous insights, in particular Wittgenstein's (1953) formulation of a category model based on family resemblance and Roger Brown's How shall a thing be called? (1958).
== Overview and terminology ==
The term prototype, as used in psychologist Eleanor Rosch's study "Natural Categories", initially denoted a stimulus that takes a salient position in the formation of a category because it is the first stimulus to be associated with that category. Rosch later redefined it as the most central member of a category.
Rosch and others developed prototype theory as a response to, and radical departure from, the classical theory of concepts, which defines concepts by necessary and sufficient conditions. Necessary conditions are the features every instance of a concept must present; sufficient conditions are those whose presence guarantees that an entity falls under the concept. Rather than defining concepts by features, prototype theory defines categories based on either a specific artifact of that category or on a set of entities within the category that represent a prototypical member. The prototype of a category can be understood in lay terms as the object or member of a class most often associated with that class. The prototype is the center of the class, with all other members ranged progressively further from it, which leads to the gradation of categories: not every member of the class is equally central in human cognition. As in the example of furniture above, couch is more central than wardrobe. Contrary to the classical view, prototypes and gradations lead to an understanding of category membership not as an all-or-nothing matter, but as a web of interlocking, overlapping categories.
Further development of prototype theory by psychologist James Hampton and others replaced the notion of the prototype as the most typical exemplar with the proposal that a prototype is a bundle of correlated features. These features may or may not be true of all members of the class (necessary or defining features), but they will all be associated with being a typical member of the class. By this means, two aspects of concept structure can be explained. Some exemplars are more typical of a category than others because they are a better fit to the concept prototype, having more of the features. Importantly, Hampton's prototype model also explains the vagueness that can occur at the boundary of conceptual categories. While some may think of pictures, telephones or cookers as atypical furniture, others will say they are not furniture at all. Membership of a category can be a matter of degree, and the same features that give rise to typicality structure are also responsible for graded degrees of category membership.
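Hampton's feature-bundle proposal lends itself to a simple computational reading: score an item by the prototype features it carries, weighted by how strongly each feature is associated with the category, and treat membership as the score clearing some threshold. The sketch below is a minimal illustration of that reading, not Hampton's published model; the features, weights and threshold are all invented for the example.

```python
# Minimal sketch of a feature-bundle prototype, in the spirit of Hampton's
# proposal. Feature weights and the membership threshold are invented for
# illustration; they are not drawn from any published dataset.

FURNITURE_PROTOTYPE = {
    # feature: weight (how strongly the feature is associated with FURNITURE)
    "used in the home": 1.0,
    "has a flat surface or seat": 0.8,
    "made of rigid material": 0.6,
    "movable": 0.4,
}

def typicality(item_features: set[str], prototype: dict[str, float]) -> float:
    """Sum the weights of the prototype features the item possesses,
    normalised to [0, 1]. Higher scores mean a more typical member."""
    total = sum(prototype.values())
    score = sum(w for f, w in prototype.items() if f in item_features)
    return score / total

couch = {"used in the home", "has a flat surface or seat",
         "made of rigid material", "movable"}
wardrobe = {"used in the home", "made of rigid material"}
telephone = {"used in the home", "movable"}

for name, feats in [("couch", couch), ("wardrobe", wardrobe),
                    ("telephone", telephone)]:
    t = typicality(feats, FURNITURE_PROTOTYPE)
    # A threshold turns graded typicality into (vague) category membership.
    print(f"{name}: typicality={t:.2f}, member={t >= 0.5}")
```

With these made-up numbers the couch scores 1.0, the wardrobe about 0.57, and the telephone lands exactly at the threshold, mirroring the disputed boundary cases described above.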
In cognitive linguistics it has been argued that linguistic categories also have a prototype structure, like categories of common words in a language.
== Categories ==
=== Basic level categories ===
The other notion related to prototypes is that of a basic level in cognitive categorization. Basic categories are relatively homogeneous in terms of sensory-motor affordances — a chair is associated with bending of one's knees, a fruit with picking it up and putting it in your mouth, etc. At the subordinate level (e.g. [dentist's chairs], [kitchen chairs] etc.) few significant features can be added to that of the basic level; whereas at the superordinate level, these conceptual similarities are hard to pinpoint. A picture of a chair is easy to draw (or visualize), but drawing furniture would be more difficult.
Psychologists Eleanor Rosch, Carolyn Mervis and colleagues defined the basic level as that level that has the highest degree of cue validity and category validity. Thus, a category like [animal] may have a prototypical member, but no cognitive visual representation. On the other hand, basic categories in [animal], i.e. [dog], [bird], [fish], are full of informational content and can easily be categorized in terms of Gestalt and semantic features. Basic level categories tend to have the same parts and recognizable images.
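Cue validity has a standard probabilistic reading: the validity of a feature f as a cue for a category c is the conditional probability P(c | f). A minimal sketch of estimating it from co-occurrence counts, using invented observations:

```python
# Estimating cue validity P(category | feature) by simple counting.
# The observations are invented for illustration.

observations = [
    ("dog", {"has fur", "barks", "four legs"}),
    ("dog", {"has fur", "barks", "four legs"}),
    ("cat", {"has fur", "four legs"}),
    ("bird", {"has feathers", "flies"}),
]

def cue_validity(feature: str, category: str) -> float:
    """P(category | feature): of the observations showing the feature,
    the proportion that belong to the category."""
    with_feature = [c for c, feats in observations if feature in feats]
    if not with_feature:
        return 0.0
    return with_feature.count(category) / len(with_feature)

print(cue_validity("barks", "dog"))    # 1.0   -> a highly valid cue for DOG
print(cue_validity("has fur", "dog"))  # ~0.67 -> shared with CAT, less valid
```

In Rosch and Mervis's terms, the basic level is the level at which the summed cue validity of a category's features peaks, which is why [dog] is more informative than either [animal] above it or a subordinate below it.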
Clearly semantic models based on attribute-value pairs fail to identify privileged levels in the hierarchy. Functionally, it is thought that basic level categories are a decomposition of the world into maximally informative categories. Thus, they
maximize the number of attributes shared by members of the category, and
minimize the number of attributes shared with other categories
However, the notion of basicness as a single level can be problematic. Linguistically, types of bird (swallow, robin, gull) look basic level: they have mono-morphemic nouns, fall under the superordinate BIRD, and have subordinates expressed by noun phrases (herring gull, male robin). Yet in psychological terms, bird itself behaves as a basic level term. At the same time, atypical birds such as ostrich and penguin are themselves basic level terms, having very distinct outlines and not sharing obvious parts with other birds.
More problems arise when the notion of a prototype is applied to lexical categories other than the noun. Verbs, for example, seem to defy a clear prototype: [to run] is hard to split up in more or less central members.
In her 1975 paper, Rosch asked 200 American college students to rate, on a scale of 1 to 7, whether they regarded certain items as good examples of the category furniture. These items ranged from chair and sofa, ranked number 1, to a love seat (number 10), to a lamp (number 31), all the way to a telephone, ranked number 60.
While one may differ from this list in terms of cultural specifics, the point is that such a graded categorization is likely to be present in all cultures. Further evidence that some members of a category are more privileged than others came from experiments involving:
1. Response Times: in which queries involving prototypical members (e.g. is a robin a bird) elicited faster response times than for non-prototypical members.
2. Priming: when primed with the higher-level (superordinate) category, subjects were faster in identifying whether two words are the same. Thus, after flashing furniture, the equivalence of chair-chair is detected more rapidly than that of stove-stove.
3. Exemplars: When asked to name a few exemplars, the more prototypical items came up more frequently.
Subsequent to Rosch's work, prototype effects have been investigated widely in areas such as colour cognition, and also for more abstract notions: subjects may be asked, e.g. "to what degree is this narrative an instance of telling a lie?". Similar work has been done on actions (verbs like look, kill, speak, walk [Pulman:83]), adjectives like "tall", etc.
Another aspect in which prototype theory departs from traditional Aristotelian categorization is that there does not appear to be a sharp divide between natural kind categories (bird, dog) and artifact categories (toys, vehicles).
A common comparison is between the use of prototypes and the use of exemplars in category classification. Medin, Altom, and Murphy found that participants who used a mixture of prototype and exemplar information judged categories more accurately. Participants who were presented with prototype values classified based on similarity to stored prototypes and stored exemplars, whereas participants who had experience only with exemplars relied solely on similarity to stored exemplars. Smith and Minda looked at the use of prototypes and exemplars in dot-pattern category learning. They found that participants relied on prototypes more than on exemplars, with the prototype at the center of the category and the exemplars surrounding it.
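The contrast between the two strategies can be made concrete: a prototype classifier compares a new item to one summary point per category (here, the mean of its members), while an exemplar classifier compares it to every stored member. The toy two-dimensional stimuli below are invented in the spirit of dot-pattern studies, not a reconstruction of any published materials; category A is given one deliberately atypical member so that the two strategies can disagree.

```python
# Prototype vs. exemplar classification on invented 2-D stimuli.
import math

categories = {
    "A": [(1.0, 1.0), (1.2, 0.8), (0.8, 1.1), (2.6, 2.6)],  # last member is atypical
    "B": [(3.0, 3.0), (2.8, 3.2), (3.1, 2.9)],
}

def classify_by_prototype(x):
    """One summary representation per category: the mean of its members."""
    prototypes = {
        c: tuple(sum(dim) / len(members) for dim in zip(*members))
        for c, members in categories.items()
    }
    return min(prototypes, key=lambda c: math.dist(x, prototypes[c]))

def classify_by_exemplars(x):
    """Compare against every stored member; the nearest exemplar wins."""
    return min(
        categories,
        key=lambda c: min(math.dist(x, e) for e in categories[c]),
    )

probe = (2.4, 2.4)
print(classify_by_prototype(probe))  # "B": nearest to B's averaged prototype
print(classify_by_exemplars(probe))  # "A": nearest to A's atypical stored exemplar
```

Here the two rules disagree: the probe sits nearest B's averaged prototype but nearest a stored exemplar of A, the kind of case that mixtures of the two information sources speak to.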
== Distance between concepts ==
The notion of prototypes is related to Wittgenstein's (later) discomfort with the traditional notion of category. This influential theory has resulted in a view of semantic components more as possible rather than necessary contributors to the meaning of texts. His discussion on the category game is particularly incisive:

Consider for example the proceedings that we call 'games'. I mean board games, card games, ball games, Olympic games, and so on. What is common to them all? Don't say, "There must be something common, or they would not be called 'games'"--but look and see whether there is anything common to all. For if you look at them you will not see something common to all, but similarities, relationships, and a whole series of them at that. To repeat: don't think, but look! Look for example at board games, with their multifarious relationships. Now pass to card games; here you find many correspondences with the first group, but many common features drop out, and others appear. When we pass next to ball games, much that is common is retained, but much is lost. Are they all 'amusing'? Compare chess with noughts and crosses. Or is there always winning and losing, or competition between players? Think of patience. In ball games there is winning and losing; but when a child throws his ball at the wall and catches it again, this feature has disappeared. Look at the parts played by skill and luck; and at the difference between skill in chess and skill in tennis. Think now of games like ring-a-ring-a-roses; here is the element of amusement, but how many other characteristic features have disappeared! And we can go through the many, many other groups of games in the same way; can see how similarities crop up and disappear. And the result of this examination is: we see a complicated network of similarities overlapping and criss-crossing: sometimes overall similarities, sometimes similarities of detail.
Wittgenstein's theory of family resemblance describes the phenomenon when people group concepts based on a series of overlapping features, rather than by one feature which exists throughout all members of the category. For example, basketball and baseball share the use of a ball, and baseball and chess share the feature of a winner, etc., rather than one defining feature of "games". Therefore, there is a distance between focal, or prototypical members of the category, and those that continue outwards from them, linked by shared features.
Peter Gärdenfors has elaborated a possible partial explanation of prototype theory in terms of multi-dimensional feature spaces called conceptual spaces, where a category is defined in terms of a conceptual distance. More central members of a category are "between" the peripheral members. He postulates that most natural categories exhibit a convexity in conceptual space, in that if x and y are elements of a category, and if z is between x and y, then z is also likely to belong to the category.
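In a Euclidean conceptual space the convexity postulate has a direct geometric reading: any point between two category members should itself be predicted to belong to the category. A minimal sketch of betweenness on invented two-dimensional quality dimensions:

```python
# Betweenness in a Euclidean conceptual space (Gärdenfors-style sketch).
# The quality dimensions and the points are invented for illustration.
import math

def between(x, y, z, tol=1e-9):
    """True if z lies on the line segment from x to y: the detour through z
    adds nothing to the straight-line distance."""
    return math.isclose(math.dist(x, z) + math.dist(z, y),
                        math.dist(x, y), abs_tol=tol)

x, y = (1.0, 2.0), (5.0, 4.0)   # two known members of a category
z = (3.0, 3.0)                  # the midpoint of the segment
w = (3.0, 5.0)                  # a point off the segment

print(between(x, y, z))  # True  -> convexity predicts z is also a member
print(between(x, y, w))  # False -> no prediction for w from these two members
```

Convexity then amounts to the claim that a natural category's region of the space is closed under this betweenness relation.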
== Combining categories ==
Within language we find instances of combined categories, such as tall man or small elephant. Combining categories was a problem for extensional semantics, where the semantics of a word such as red is to be defined as the set of objects having this property. This does not apply as well to modifiers such as small; a small mouse is very different from a small elephant.
These combinations pose a lesser problem for prototype theory. In the case of adjectives (e.g. tall), one encounters the question of whether the prototype of [tall] is a 6-foot-tall man or a 400-foot skyscraper. The solution emerges by contextualizing the notion of prototype in terms of the object being modified. This extends even more radically to compounds such as red wine or red hair, which are hardly red in the prototypical sense; the red indicates merely a shift from the prototypical colour of wine or hair respectively. The addition of red shifts the prototype from that of hair to that of red hair: the prototype is changed by additional specific information, combining features from the prototypes of red and wine.
== Dynamic structure and distance ==
Mario Mikulincer, Dov Paz, and Perry Kedem focused on the dynamic nature of prototypes and on how represented semantic categories actually change with emotional states. Their four-part study assessed the relationships between situational stress and trait anxiety on the one hand and, on the other, the hierarchical level at which people categorize semantic stimuli, the way people categorize natural objects, the narrowing of the breadth of categories, and the proneness to use less inclusive levels of categorization instead of more inclusive ones.
== Critique ==
Prototype theory has been criticized by those that still endorse the classic theory of categories, like linguist Eugenio Coseriu and other proponents of the structural semantics paradigm.
=== Exemplar theory ===
Douglas L. Medin and Marguerite M. Schaffer showed by experiment that a context theory of classification which derives concepts purely from exemplars (cf. exemplar theory) worked better than a class of theories that included prototype theory.
=== Graded categorization ===
Linguists, including Stephen Laurence writing with Eric Margolis, have suggested problems with prototype theory. In their 1999 paper, they raise several issues, one of which is that prototype theory does not intrinsically guarantee graded categorization. When subjects were asked to rank how well certain members exemplify the category, they rated some members above others; for example, robins were seen as "birdier" than ostriches. But when asked whether these categories are all-or-nothing or have fuzzier boundaries, the subjects stated that they were sharply defined, all-or-nothing categories. Laurence and Margolis concluded that "prototype structure has no implication for whether subjects represent a category as being graded" (p. 33).
=== Compound concepts ===
Daniel Osherson and Edward Smith raised the issue of pet fish, for which the prototype might be a guppy kept in a bowl in someone's house. The prototype for pet might be a dog or cat, and the prototype for fish might be a trout or salmon. However, the features of these prototypes are not present in the prototype for pet fish; this prototype must therefore be generated from something other than its constituent parts.
James Hampton found that prototypes for conjunctive concepts such as pet fish are produced by a compositional function operating on the features of each concept. Initially, all features of each concept are added to the prototype of the conjunction. There is then a consistency check: for example, pets are warm and cuddly, but fish cannot be; fish are often eaten for dinner, but pets never are. Hence the conjunctive prototype fails to inherit features of either concept that are incompatible with the other concept. A final stage in the process looks for knowledge of the class in long-term memory and, if the class is familiar, may add extra features, a process called "extensional feedback". The model was tested by showing how apparently logical syntactic conjunctions or disjunctions, such as "a sport which is also a game", "vehicles that are not machines", or "fruits or vegetables", fail to conform to Boolean set logic. Chess is considered to be a sport which is a game, but is not considered to be a sport. Mushrooms are considered to be either a fruit or a vegetable, but when asked separately very few people consider them to be a vegetable and no one considers them to be a fruit.
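Hampton's composition function, as described above, can be read as a three-step pipeline: pool the features of both prototypes, drop those failing the consistency check, then add features supplied by extensional feedback. The sketch below is a schematic rendering of that description with invented features; it is not Hampton's published model.

```python
# Schematic sketch of Hampton-style prototype composition for PET FISH.
# Features, conflicts, and the feedback entries are invented for illustration.

PET = {"kept at home", "warm and cuddly", "named by owner"}
FISH = {"lives in water", "has fins", "eaten for dinner"}

# Features that fail the consistency check against the other concept:
# fish cannot be warm and cuddly; pets are not eaten for dinner.
INCOMPATIBLE = {"warm and cuddly", "eaten for dinner"}

# Extra features from knowledge of the familiar class ("extensional feedback").
FEEDBACK = {"small", "kept in a bowl or tank"}

def compose(a, b, incompatible, feedback):
    pooled = a | b                      # 1. pool the features of both prototypes
    consistent = pooled - incompatible  # 2. consistency check
    return consistent | feedback        # 3. extensional feedback

print(sorted(compose(PET, FISH, INCOMPATIBLE, FEEDBACK)))
```

Running it yields a PET FISH prototype that keeps "lives in water" and "kept at home", loses "warm and cuddly" and "eaten for dinner", and gains "kept in a bowl or tank", roughly the guppy-in-a-bowl picture Osherson and Smith describe.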
Antonio Lieto and Gian Luca Pozzato have proposed a typicality-based compositional logic (TCL) that is able to account for both complex human-like concept combinations (like the PET-FISH problem) and conceptual blending. Their framework shows how concepts expressed as prototypes can account for the phenomenon of prototypical compositionality in concept combination.
== See also ==
Composite photography
Composite portrait – compositing of images such as faces to produce an ideal type
Exemplar theory – Psychological categorization proposal
Family resemblance – Philosophical idea popularized by Ludwig Wittgenstein
Folksonomy – Classification based on users' tags
Frame semantics – Linguistic theory
Intuitive statistics – cognitive phenomenon where organisms use data to make generalizations and predictions about the world
Platonic ideal – Philosophical theory attributed to Plato
Semantic feature-comparison model
Similarity (philosophy) – Relation of resemblance between objects
== Footnotes ==
== References == | Wikipedia/Context_theory |
Crime prevention through environmental design (CPTED) is a system for developing the built environment to reduce the possibility of opportunistic crime and limit the perception of crime in a given neighborhood.
CPTED originated in the United States around 1960, when urban designers recognized that urban renewal strategies were undermining the social framework needed for self-policing. Architect Oscar Newman created the concept of "defensible space", developed further by criminologist C. Ray Jeffery, who coined the term CPTED. The growing interest in environmental criminology led to detailed study of specific topics such as natural surveillance, access control, and territoriality. The "broken windows" principle, that neglected zones invite crime, reinforced the need for good property maintenance to assert visible ownership of space. Appropriate environmental design can also increase the perceived likelihood of detection and apprehension, the most significant crime deterrent. There has also been new interest in the interior design of prisons as an environment that significantly affects offending decisions.
Wide-ranging recommendations to architects include planting trees and shrubs, eliminating escape routes, correcting the use of lighting, and encouraging pedestrian and bicycle traffic in streets. Tests show that the application of CPTED measures reduces criminal activity.
== History ==
CPTED was initially coined and formulated by criminologist C. Ray Jeffery. A more limited approach, termed defensible space, was developed concurrently by architect Oscar Newman. Both men built on the previous work of Elizabeth Wood, Jane Jacobs and Schlomo Angel. Jeffery's book, "Crime Prevention Through Environmental Design", came out in 1971, but his work was ignored throughout the 1970s. Newman's book, "Defensible Space: Crime Prevention Through Urban Design", came out in 1972. His principles were widely adopted but with mixed success. The defensible space approach was subsequently revised with additional built environment approaches supported by CPTED. Newman represented this as CPTED and credited Jeffery as the originator of the term. Newman's CPTED-improved defensible space approach enjoyed broader success and resulted in a reexamination of Jeffery's work. Jeffery continued to expand the multi-disciplinary aspects of the approach, publishing further advances, the last in 1990. The Jeffery CPTED model is more comprehensive than the Newman CPTED model, which limits itself to the built environment. Later CPTED models were developed based on the Newman model, that of criminologist Tim Crowe being the most popular.
As of 2004, CPTED is popularly understood to refer strictly to the Newman/Crowe type models, with the Jeffery model treated more as a multi-disciplinary approach to crime prevention that incorporates biology and psychology, a situation accepted even by Jeffery himself. (Robinson, 1996). A revision of CPTED, initiated in 1997 and termed 2nd Generation CPTED, adapts CPTED to offender individuality, further indicating that Jeffery's work is not popularly considered to be already a part of CPTED. In 2012, Woodbridge introduced and developed CPTED in prison and showed how design flaws allowed criminals to keep offending.
=== 1960s ===
In the 1960s, Elizabeth Wood developed guidelines for addressing security issues while working with the Chicago Housing Authority, emphasizing design features that would support natural surveillability. Her guidelines were never implemented, but they stimulated some of the original thinking that led to CPTED.
Jane Jacobs' book, The Death and Life of Great American Cities (1961), argued that urban diversity and vitality were being destroyed by urban planners and their urban renewal strategies. She was challenging the basic tenets of urban planning of the time: that neighborhoods should be isolated from each other, that an empty street is safer than a crowded one, and that the car represents progress over the pedestrian. An editor for Architectural Forum magazine (1952–1964), she had no formal training in urban planning, but her work emerged as a founding text for a new way of seeing cities. She felt that the way cities were being designed and built meant that the general public could not develop the social framework needed for effective self-policing. She pointed out that the new forms of urban design broke down many of the traditional controls on criminal behavior, for example, the ability of residents to watch the street and the presence of people using the street both night and day. She suggested that the lack of "natural guardianship" in the environment promoted crime. Jacobs developed the concept that crime flourishes when people do not meaningfully interact with their neighbors. In Death and Life, Jacobs listed the three attributes needed to make a city street safe: a clear distinction between private and public space; diversity of use; and a high level of pedestrian use of the sidewalks.
Schlomo Angel pioneered CPTED and studied under noted planner Christopher Alexander. Angel's Ph.D. thesis, Discouraging Crime Through City Planning, (1968), was a study of street crime in Oakland, CA. In it, he states: "The physical environment can exert a direct influence on crime settings by delineating territories, reducing or increasing accessibility by the creation or elimination of boundaries and circulation networks, and by facilitating surveillance by the citizenry and the police." He asserted that crime was inversely related to the level of activity on the street and that the commercial strip environment was particularly vulnerable to crime because it thinned out activity, making it easier for people to commit street crime. Angel developed and published CPTED concepts in 1970 in work supported and widely distributed by the United States Department of Justice (Luedtke, 1970).
=== 1970s ===
The phrase crime prevention through environmental design (CPTED) was first used by C. Ray Jeffery, a criminologist from Florida State University. The phrase began to gain acceptance after publishing his 1971 book of the same name.
Jeffery's work was based on the precepts of experimental psychology represented in modern learning theory. (Jeffery and Zahm, 1993:329) Jeffery's CPTED concept arose out of his experiences with a rehabilitative project in Washington, D.C., that attempted to control the school environment of juveniles in the area. Rooted deeply in the psychological learning theory of B.F. Skinner, Jeffery's CPTED approach emphasized the role of the physical environment in the development of pleasurable and painful experiences for the offender that would have the capacity to alter behavioral outcomes. His original CPTED model was a stimulus-response (S-R) model positing that the organism learned from environmental punishments and reinforcements. Jeffery "emphasized material rewards ... and the use of the physical environment to control behavior" (Jeffery and Zahm, 1993:330). The major idea here was that by removing the reinforcements for crime, it would not occur. (Robinson, 1996)
An often overlooked contribution of Jeffery in his 1971 book is outlining four critical factors in crime prevention that have stood the test of time. These are the degrees to which one can manipulate the opportunity for a crime to occur, the motivation for the crime to occur, the risk to the offender if the crime occurs, and the history of the offender who might consider committing the crime. The first three of these are within the control of the potential victim while the last is not.
Jeffery's work was ignored throughout the 1970s for reasons that have received little attention. Jeffery explains that when the world wanted prescriptive design solutions, his work presented a comprehensive theory and used it to identify a wide range of crime prevention functions that should drive design and management standards.
Concurrent with Jeffery's essentially theoretical work was Oscar Newman and George Rand's empirical study of the crime-environment connection, conducted in the early 1970s. As an architect, Newman emphasized specific design features, an emphasis missing in Jeffery's work. Newman's "Defensible Space: Crime Prevention Through Urban Design" (1972) includes an extensive discussion of crime related to the physical form of housing, based on analysis of crime data from New York City public housing. "Defensible Space" changed the nature of the crime prevention and environmental design field. Within two years of its publication, substantial federal funding became available to demonstrate and study defensible space concepts.
As established by Newman, defensible space must contain two components. First, defensible space should allow people to see and be seen continuously. Ultimately, this diminishes residents' fear because they know that a potential offender can easily be observed, identified, and apprehended. Second, people must be willing to intervene or report crime when it occurs. Increasing the sense of security in settings where people live and work encourages people to take control of those areas and assume a role of ownership. When people feel safe in their neighborhoods, they are more likely to interact with one another and intervene when crime occurs. These components remain central to most implementations of CPTED as of 2004.
In 1977, Jeffery's second edition of Crime Prevention Through Environmental Design expanded his theoretical approach to embrace a more complex model of behavior in which variable physical environments, offender behavior as individuals, and behavior of individual members of the general public have reciprocal influences on one another. This laid the foundation for Jeffery to develop a behavioral model to predict the effects of modifying both the external and internal environments of individual offenders.
=== 1980s ===
By the 1980s, the defensible space prescriptions of the 1970s were determined to have mixed effectiveness. They worked best in residential settings, especially where residents were relatively free to respond to cues to increase social interaction. Defensible space design tools were observed to be marginally effective in institutional and commercial settings. As a result, Newman and others moved to improve defensible space, adding CPTED-based features. They also deemphasized less effective aspects of defensible space. Contributions to the advance of CPTED in the 1980s included:
The "broken windows" theory, put forth by James Q. Wilson and George L. Kelling in 1982, explored the impact that visible deterioration and neglect in neighborhoods have on behavior. Property maintenance was added as a CPTED strategy on par with surveillance, access control, and territoriality. The Broken Windows theory may go hand in hand with CPTED. Crime is attracted to the areas that are not taken care of or that have been abandoned. CPTED adds a feeling of pride and ownership to the community. With no more "broken windows" in specific neighborhoods, crime will continue to decline and eventually fall out entirely.
Canadian academics Patricia and Paul Brantingham published Environmental Criminology in 1981. According to the authors, a crime occurs when all essential elements are present: a law, an offender, a target, and a place. They characterize these as "the four dimensions of crime", with environmental criminology studying the last of the four.
British criminologists Ronald V. Clarke and Patricia Mayhew developed their "situational crime prevention" approach: reducing the opportunity to offend by improving the design and management of the environment.
Criminologist Timothy Crowe developed his CPTED training programs.
=== 1990s ===
Criminology: An Interdisciplinary Approach (1990) was Jeffery's final contribution to CPTED. The Jeffery CPTED model evolved into one which assumes that
The environment never influences behavior directly, but only through the brain. Any model of crime prevention must include both the brain and the physical environment. ... Because the approach contained in Jeffery's CPTED model is today based on many fields, including scientific knowledge of modern brain sciences, a focus on only external environmental crime prevention is inadequate as it ignores another entire dimension of CPTED – i.e., the internal environment. (Robinson, 1996)
Crime Prevention Through Environmental Design (1991) by criminologist Tim Crowe provided a solid foundation for CPTED's progress throughout the 1990s.
From 1994 through 2002, Sparta Consulting Corporation, led by Severin Sorensen, CPP, managed the US government's largest CPTED technical assistance and training program, titled Crime Prevention Through Environmental Design (CPTED) in Public Housing Technical Assistance and Training Program and funded by the US Department of Housing and Urban Development. During this period, Sorensen worked with Ronald V. Clarke and the Sparta team to develop a new CPTED curriculum that used situational crime prevention as the underlying theoretical basis for CPTED measures. The curriculum was developed and delivered to public and assisted housing stakeholders, and follow-up CPTED assessments were conducted at various sites. The Sparta-led CPTED projects showed statistical reductions in self-reported FBI UCR Part I crimes of between 17% and 76%, depending on the basket of CPTED measures employed, in specific high-crime, low-income settings in the United States.
In 1996, Oscar Newman published an update to his earlier CPTED works, titled, Creating Defensible Space, Institute for Community Design Analysis, Office of Planning and Development Research (PDR), US Department of Housing and Urban Development (HUD).
In 1997, Greg Saville and Gerry Cleveland wrote an article, 2nd Generation CPTED, exhorting CPTED practitioners to consider the original social ecology origins of CPTED, including social and psychological issues beyond the built environment.
=== 2000s ===
By 2004, elements of the CPTED approach had gained wide international acceptance due to law enforcement efforts to embrace it. The CPTED term "environment" is commonly used to refer to the external environment of the place. Jeffery's intention that CPTED also embrace the internal environment of the offender seems to have been lost, even on those promoting the expansion of CPTED to include social ecology and psychology under the banner of 2nd Generation CPTED.
In 2012, Woodbridge introduced and developed the concept of CPTED within a prison environment, a place where crime continues after conviction. Jeffery's understanding of the criminal mind from his study in rehabilitative facilities over forty years ago was now being used to reduce crime in those same types of facilities. Woodbridge showed how prison design allowed offending to continue and introduced changes to reduce crime.
CPTED techniques are increasingly benefiting from integration with design technologies. For instance, models of proposed buildings developed in Building Information Modeling may be imported into video game engines to assess their resilience to different forms of crime.
== Strategies for the built environment ==
CPTED strategies rely on influencing offender decisions that precede criminal acts. Research into criminal behavior shows that the decision to offend or not to offend is more influenced by cues to the perceived risk of being caught than by cues to reward or ease of entry. The certainty of being caught is the central deterrence for criminals, not the severity of the punishment. By emphasizing a certainty of capture, criminal actions can be decreased. Consistent with this research, CPTED-based strategies enhance the perceived risk of detection and apprehension.
Consistent with the widespread implementation of defensible space guidelines in the 1970s, most implementations of CPTED by 2004 were based solely upon the theory that the proper design and effective use of the built environment can reduce crime, reduce the fear of crime, and improve the quality of life. Built environment implementations of CPTED seek to dissuade offenders from committing crimes by manipulating the built environment in which those crimes occur. The six main concepts, according to Moffat, are territoriality, surveillance, access control, image/maintenance, activity support, and target hardening. Applying these strategies is crucial when trying to prevent crime in any neighborhood, whether crime-ridden or not.
Natural surveillance and access control strategies limit the opportunity for crime. Territorial reinforcement promotes social control through a variety of measures. Image/maintenance and activity support provide the community with reassurance and the ability to inhibit crime through citizen activities. Target hardening strategies work within CPTED, delaying entry sufficiently to ensure a certainty of capture in the criminal mind.
=== Natural surveillance ===
Natural surveillance increases the perceived risk of attempting deviant actions by improving the visibility of potential offenders to the general public. Natural surveillance occurs by designing the placement of physical features, activities, and people in such a way as to maximize the visibility of the space and its users, fostering positive social interaction among legitimate users of private and public space. Potential offenders feel increased scrutiny and thus inherently perceive an increase in risk. This perceived increase in risk extends to the perceived lack of viable and covert escape routes.
Design streets to increase pedestrian and bicycle traffic
Place windows overlooking sidewalks and parking lots.
Leave window shades open.
Use passing vehicular traffic as a surveillance asset.
Create landscape designs that provide surveillance, especially near designated and opportunistic entry points.
Use the shortest, least sight-limiting fence appropriate for the situation.
Use transparent weather vestibules at building entrances.
When creating lighting design, avoid poorly placed lights that create blind spots for potential observers and miss critical areas. Ensure potential problem areas are well lit: pathways, stairs, entrances/exits, parking areas, ATMs, phone kiosks, mailboxes, bus stops, children's play areas, recreation areas, pools, laundry rooms, storage areas, dumpster and recycling areas, etc.
Avoid too-bright security lighting that creates blinding glare and deep shadows, hindering potential observers' views. Eyes adapt to night lighting and have trouble adjusting to severe lighting disparities. Using lower-intensity lights often requires more fixtures.
Use shielded or cut-off luminaires to control glare.
Place lighting along pathways and other pedestrian-use areas at proper heights for lighting the faces of the people in the space (and to identify the faces of potential attackers).
Use curved streets with multiple viewpoints to multiple houses' entrances, making the escape route difficult to follow.
Mechanical and organizational measures can complement natural surveillance measures. For example, closed-circuit television (CCTV) cameras can be added in areas where window surveillance is unavailable.
=== Natural access control ===
Natural access control limits the opportunity for crime by taking steps to differentiate between public space and private space. Natural access control occurs by selectively placing entrances and exits, fencing, lighting, and landscape to limit access or control flow.
Use a single, clearly identifiable point of entry
Use structures to divert persons to reception areas
Incorporate maze entrances in public restrooms. This avoids the isolation that is produced by an anteroom or double-door entry system
Use low, thorny bushes beneath ground-level windows. Use rambling or climbing thorny plants next to fences to discourage intrusion.
Eliminate design features that provide access to roofs or upper levels
In the front yard, use waist-level, picket-type fencing along residential property lines to control access and encourage surveillance.
Use a locking gate between the front and backyards.
Use shoulder-level, open-type fencing along lateral residential property lines between side yards and extending to between backyards. They should be sufficiently unencumbered with landscaping to promote social interaction between neighbors.
Use substantial, high, closed fencing (for example, masonry) between a backyard and a public alley instead of a wall that blocks the view from all angles.
Natural access control complements mechanical and operational access control measures, such as target hardening.
=== Natural territorial reinforcement ===
Territorial reinforcement promotes social control through an increased space definition and improved proprietary concern.
An environment designed to delineate private space does two things. First, it creates a sense of ownership. Owners have a vested interest and are more likely to challenge intruders or report them to the police. Second, the sense of owned space creates an environment where "strangers" or "intruders" stand out and are more easily identified. Natural territorial reinforcement uses buildings, fences, pavement, signs, lighting, and landscape to express ownership and define public, semi-public, and private spaces. Additionally, these objectives can be achieved by assignment of space to designated users in previously unassigned locations.
Maintain premises and landscaping to communicate an alert and active presence occupying the space.
Provide trees in residential areas. Research results indicate that contrary to traditional views within the law enforcement community, outdoor residential spaces with more trees are seen as significantly more attractive, safer, and more likely to be used than similar spaces without trees.
Restrict private activities to defined private areas.
Display security system signage at access points.
Avoid chain link fencing and razor-wire fence topping, as they communicate the absence of a physical presence and a reduced risk of being detected.
Placing amenities such as seating or refreshments in common areas in a commercial or institutional setting helps to attract larger numbers of desired users.
Scheduling activities in common areas increases proper use, attracts more people, and increases the perception that these areas are controlled.
Motion sensor lights at all entry points into the residence.
Territorial reinforcement measures make the intended user feel safe and make the potential offender aware of a substantial risk of apprehension or scrutiny. When people take pride in what they own and take the proper measures to protect their belongings, crime is deterred from those areas because it makes it more of a challenge.
=== Other CPTED elements ===
Support and maintenance activities complement physical design elements.
==== Maintenance ====
Maintenance is an expression of ownership of property. Deterioration indicates less control by a site's intended users and a greater tolerance of disorder. The broken windows theory is a valuable tool for understanding the importance of maintenance in deterring crime. Its proponents support a zero-tolerance approach to property maintenance, observing that a broken window will entice vandals to break more nearby windows; the sooner broken windows are fixed, the less likely such vandalism is to occur. Graffiti works the same way: the faster it is painted over, the less likely vandals are to repeat it, because no one saw what was done. A positive image in the community shows a sense of pride and self-worth that no one can take away from the property owner.
==== Activity support ====
Activity support increases the use of a built environment for safe activities in order to increase the risk of detection of criminal and undesirable activities. Natural surveillance by the intended users is casual; there is no specific plan for people to watch out for criminal activity. By placing "children at play" signs and signs for particular activities in the area, the citizens of that area become more involved in what is happening around them. They will be more attuned to who is and is not supposed to be there and to what looks suspicious.
== Effectiveness and criticism ==
CPTED strategies are most successful when they inconvenience the end user least and when the design process relies upon the combined efforts of environmental designers, land managers, community activists, and law enforcement professionals. The strategies cannot succeed without the community's help: they require the whole community in a location to make the environment safer. A meta-analysis of multiple-component CPTED initiatives in the United States found that they decreased robberies by between 30% and 84% (Casteel and Peek-Asa, 2000).
In terms of effectiveness, a more accurate title for the strategy would be crime deterrence through environmental design, since CPTED does not always prevent offenders from committing crimes. CPTED relies upon changes to the physical environment that will cause an offender to make certain behavioral decisions, some of which will include desisting from crime. Those changes deter rather than conclusively "prevent" behavior.
== See also ==
Crime-Free Multi-Housing
Environmental psychology
Hostile architecture
Social architecture
Urban vitality
== Notes ==
== References ==
== External links ==
International CPTED Association
European Designing Out Crime Association
Stichting Veilig Ontwerp en Beheer (the Netherlands)
Crime prevention and the built environment.
Washington State University CPTED Annotated Bibliography. Url last accessed May 6, 2006.
https://www.crcpress.com/21st-Century-Security-and-CPTED-Designing-for-Critical-Infrastructure-Protection/Atlas/p/book/9781439880210 | Wikipedia/Crime_prevention_through_environmental_design |
The Planning Inspectorate (sometimes referred to as PINS) is an executive agency of the Ministry of Housing, Communities and Local Government of the United Kingdom Government with responsibility for making decisions and providing recommendations and advice on a range of land use planning-related issues across England. It also makes recommendations on nationally significant infrastructure projects in Wales.
== History ==
The Planning Inspectorate traces its roots back to the Housing, Town Planning, &c. Act 1909 and the birth of the planning system in the UK. John Burns (1858–1943), the first member of the working class to become a government Minister, was President of the Local Government Board and responsible for the 1909 Housing Act. He appointed Thomas Adams (1871–1940) as Town Planning Assistant – a precursor to the current role of Chief Planning Inspector.
Subsequent Acts have included the Housing, Town Planning, &c. Act 1919, the Town Planning Act 1925, and the Town and Country Planning Acts of 1932, 1947 and 1990.
Between 1977 and 2001 the inspectorate was based in Tollgate House, Bristol before moving to its current headquarters at Temple Quay House, Bristol.
The National Planning Policy Framework (Community Involvement) Bill 2013-14 proposed to abolish the Planning Inspectorate.
On 9 May 2019, in a Written Statement, the Welsh Government signalled its intention to establish a separate, dedicated Planning Inspectorate for Wales due to the ongoing divergence of the regimes in England and Wales. On 1 October 2021, the staff and functions of the Planning Inspectorate for Wales transferred back to the Welsh Government. The new division of the Welsh Government is called Planning and Environment Decisions Wales (Welsh: Penderfyniadau Cynllunio ac Amgylchedd Cymru).
In 2024, the Planning Inspectorate rejected a proposal to build 1,322 homes a year in Oxford amid a local housing crisis. The Planning Inspectorate said there were no exceptional circumstances justifying the need for more homes.
== Organisation and work ==
The Inspectorate is headquartered at Temple Quay House in Bristol.
The Inspectorate employs a full range of other professions found in similar organisations. Most of its Inspectors are salaried, but some are commercial contractors. Until about 2023 the commercial contractors were called Non-Salaried Inspectors (NSIs).
When deciding appeals, Planning Inspectors are appointed by the Secretary of State and said 'to stand in the shoes of the Secretary of State'. For planning related appeals this authority comes from Schedule 6 to the Town and Country Planning Act 1990 and the Town and Country Planning (Determination of Appeals by Appointed Persons) (Prescribed Classes) Regulations 1997 (SI 1997/420). Planning related appeals mostly occur when local planning authorities refuse to grant planning permission, listed building consent, advertisement consent, permission for works to protected trees, lawful development certificates; or serve an enforcement notice requiring an alleged breach of planning control to end. The Town and Country Planning Act 1990 (as amended) is the primary legislation for the appeals system.
Applications for Nationally Significant Infrastructure Projects are considered by one to five Inspectors appointed to a formal Examining Authority. After considering the application, the Examining Authority makes a written recommendation to the relevant Secretary of State (e.g. Secretary of State for Transport for a road scheme), who then decides the application. The Planning Act 2008 (as amended) contains the consenting regime for Nationally Significant Infrastructure Projects.
The Local Plans system is covered by the Planning and Compulsory Purchase Act 2004. All local areas are expected to produce development plans. Inspectors examine those plans on behalf of the Secretary of State to ensure they meet legal tests and are consistent with national policy. They then report their findings to the council or other body that prepared the plan.
Frameworks established by related legislation cover other areas of work such as Compulsory Purchase Orders, applications in designated poorly performing local planning authorities, environmental appeals for the Environment Agency, public rights of way and commons casework for the Department for Environment, Food and Rural Affairs, and a range of highways and transport orders for the Department for Transport.
The Planning Inspectorate has three primary roles:
to help communities shape where they live;
to operate a fair and sustainable planning system; and
to help meet future infrastructure needs.
== See also ==
Planning and Environment Decisions Wales, for similar functions in Wales
Scottish Executive Inquiry Reporters' Unit, for similar functions in Scotland
Planning Appeals Commission, for similar functions in Northern Ireland
== References ==
== External links ==
The Planning Inspectorate
Appeals Casework Portal
National Infrastructure Planning | Wikipedia/Planning_Inspectorate |
Sir John Leopold Egan (born 7 November 1939) is a British industrialist, associated with businesses in the automotive, airports, construction and water industries. He was chief executive and chairman of Jaguar Cars from 1980 to 1990 and chairman of Jaguar plc from 1985 to 1990, and then served as chief executive of BAA from 1990 to 1999. He is also notable for chairing the construction industry task force that produced the 1998 Egan Report (Rethinking Construction) and the follow-up report, Accelerating Change, in 2002. During 2004 he undertook the Egan Review of Skills for Sustainable Communities for the Blair government. In the same year, after completing two years as president of the Confederation of British Industry, he was appointed chairman of Severn Trent.
== Career ==
John Egan was born in Rawtenstall, Lancashire, the son of a garage owner. The family moved to Coventry where he went to Bablake School. He studied petroleum engineering at Imperial College London and subsequently from 1962 to 1966 worked for Shell in the Middle East. After further studies, this time at London Business School, he moved to AC Delco in 1968 and then British Leyland where he played a part in boosting the fortunes of its Unipart business.
After a four-year spell as Corporate Parts Director of Massey Ferguson, Egan was appointed chairman of Jaguar Cars in 1980, turning round what had been a struggling business. The carmaker, which had faced closure when he took over, was sold ten years later to Ford for £1.6bn, at which point (March 1990) Egan moved to become chief executive of BAA.
Egan then assumed a variety of non-executive business roles and served as president of the CBI from 2002 to 2004, when he took on the chairmanship of Midlands water company Severn Trent.
Egan is currently the president of the Jaguar Drivers' Club, the only Jaguar owners' club to be officially sanctioned by Sir William Lyons and Jaguar Cars.
== Roles ==
His other major roles include:
non-executive chairman of motor distributor Inchcape plc
non-executive chairman of Harrison Lovegrove
non-executive vice chairman Legal & General
chairman, Asite Ltd (2001–2004)
president, Confederation of British Industry (2002–2004)
chairman Central London Partnership
chairman, London Tourist Board (September 1993 – December 1997)
president, London Tourist Board
deputy chairman of London First
vice-president, Marketing Council
member of Council, Institute of Directors
president Institute of Management
chancellor, Coventry University (2007–2017)
president, Jaguar Drivers' Club
== Honours ==
Honours include:
Honorary Graduate, Doctor of Laws, University of Bath, 1988
Bicentenary Medal of the Royal Society of Arts, 1995
Honorary doctorate, Brunel University, 1997
Deputy Lieutenant of the County of Warwickshire
Knighted in the 1986 Birthday Honours
Honorary Texas Citizen 1985
Fellow of Imperial College 1987
Senior Fellow Royal College of Art 1987
Fellow of London Business School 1988
MBA of the year 1988
University of Manitoba Distinguished International Entrepreneur 1989
University of Westminster Doctor of Letters 1998
Coventry Award of Merit.
== References == | Wikipedia/John_Egan_(industrialist) |
The Farmer Review of the UK Construction Labour Model, commonly known as the Farmer Review or by its subtitle Modernise or Die, was a 2016 report commissioned by the British Government. Written by industry veteran Mark Farmer, it identified key failings in the British construction industry. Farmer stated that research and development was almost non-existent, productivity was low and cost inflation high. He also noted a lack of skilled workers required to deliver the government's infrastructure and housebuilding targets. Farmer made ten key recommendations for the industry to follow which included reform of the Construction Industry Training Board, greater use of off-site construction techniques, greater promotion of the industry to school children, reform of tax and planning processes and for implementation of a 0.5% tax on clients in projects that do not follow the recommendations. The government later agreed to implement all of the recommendations except for the additional taxation.
== Report ==
The report was written by Mark Farmer, the founding director and CEO of Cast Consultancy and an expert with 25 years' experience in the construction industry. It had been commissioned through the UK Construction Leadership Council at the request of the departments for Communities and Local Government (CLG) and for Business, Energy and Industrial Strategy (BEIS). The government was keen to reform an industry with a record of poor productivity, a poor image as a career among young people, a reliance on casual labour, under-investment in training and a fragmented supply chain; repeated government reviews since the early 1990s had done little to transform it. Farmer produced an 80-page report in October 2016 and subtitled it "Modernise or Die – Time to decide the industry's future."
Farmer stated that he had found that the arrangements for training in the industry were dysfunctional and that investment in research and development was almost non-existent. As a result, the sector's productivity was low, levels of innovation were poor and cost inflation was high. He also found a shortage of skilled workers exacerbated by a larger number of people leaving the industry each year than were joining it. On the latter issue Farmer stated that the available workforce would decrease by 25% over the following ten years, a scenario that would be significantly worsened by the impact of Brexit. This shortage could jeopardise the industry's ability to meet the infrastructure and housebuilding targets set by the government. He was particularly critical of the housebuilding side of the industry.
Farmer's review made ten key recommendations:
For the Construction Leadership Council (CLC) to co-ordinate the implementation of the recommendations and to itself be reformed to better represent the make-up of the industry.
For the Construction Industry Training Board (CITB) to be fundamentally reformed and a review undertaken of its current training levy to make it more efficient.
For contractors, consultants, clients and the government to work within the framework provided by the CLC to improve relationships and collaboration across the supply chain and to increase investment in research and development.
For contractors, consultants, clients and the government to work within the framework provided by the CLC to investigate innovative solutions to the housing shortage, for example through the sharing of off-site construction facilities with small and medium enterprises.
For the CITB to change its grant funding process to focus on those skills relevant to the future of the industry and for industry bodies and professional institutions to become more involved in the way talent is developed.
For the CITB or a new body to take charge of improving the public image of the industry particularly through school outreach programmes aimed at those aged 11 and over.
For the government to take a more interventionist stance on the industry with new policies on education, planning, tax and employment. For the government to implement reform of the way section 106 agreements are made to streamline the planning process. For the Treasury to reform the Construction Industry Scheme to disincentivise false declarations of self-employment.
For the government to encourage use of pre-fabricated components in construction (off-site construction methods) by providing tax incentives for research and development funding, by encouraging housing authorities to specify such techniques and by introducing advantages in the planning system for schemes using these methods.
For the government to implement a new housebuilding pipeline, similar to the national infrastructure pipeline already in place, to give private sector companies greater visibility of the predicted future demand for housing.
For the government to introduce a tax of 0.5% of the total project value on every client that does not implement the other recommendations. For this tax to be introduced within the next five years and for all money raised to be reinvested into technological development.
Farmer warned that the British government should not simply cherry-pick the ideas that it liked or that were easiest to implement but should take the review as a whole. He wanted change to be driven primarily by private sector clients, who account for around 75% by value of all construction work commissioned in the UK. Farmer said at the time of his report's release that "we're entering a phase in construction where productivity's going to make or break the industry" and that "the only way we're going to [deliver the government's infrastructure and commercial targets] is by getting more productive". He was keen to see the industry make greater use of robotics, machine learning and automated planning decisions by use of digital design.
== Impact ==
The official government response to the report was issued in July 2017 and written by Baron Prior, Parliamentary Under Secretary of State at BEIS; Alok Sharma, Housing and Planning minister at CLG; and Anne Milton, Minister of State for Skills and Apprenticeships at the Department for Education and Minister for Women. They agreed to implement all of Farmer's recommendations except the last, the 0.5% project value tax, saying they feared it would reduce the number of construction projects carried out. They committed to a reform of the CITB and its levy and stated that there was support for greater use of off-site manufacturing in an existing housing white paper. Farmer commented that he was happy that the government had agreed to implement the vast majority of his recommendations and that it was working to bring about a "modern and fit for purpose construction industry".
The review was described in the press at the time of its release as a "damning report" that warned that the industry faced "inexorable decline" unless it reformed. Other commentators stated that, to help contractors achieve the recommendations, more major long-term projects and larger frameworks would need to be brought forward to provide certainty that contractors would be able to achieve a return on their investment. In July 2018 the Farmer Review was cited as an example of the reforms needed for the construction industry in New Zealand. The CLC's July 2018 report Procuring for Value built on the Farmer Review and advocated serious reform of the procurement and tender process, with a shift away from the "lowest cost bid" mentality towards an approach that takes into account quality and risk.
== References ==
Jacobs Solutions Inc. is an American international technical professional services firm based in Dallas. The company provides engineering, technical, professional, and construction services as well as scientific and specialty consulting for a broad range of clients globally, including companies, organizations, and government agencies. Jacobs ranked No. 1 on Engineering News-Record (ENR)'s Top 500 Design Firms in each year from 2018 to 2023, and on Trenchless Technology's Top 50 Trenchless Engineering Firms in each year from 2018 to 2021. Its worldwide annual revenue was over $14 billion in the 2021 fiscal year, and earnings rose to $477 million.
== Overview ==
Jacobs Engineering was founded in 1947, by Joseph J. Jacobs. The company's chief executive officer is Bob Pragada. He has been the CEO since January 2023. Steve Demetriou, the CEO from 2015 to 2023, now serves as the executive chair. The previous president and CEO was Craig L. Martin from 2006 until 2014.
The company is publicly traded as a Fortune 500 company. As of September 2018, Jacobs had more than 80,800 employees globally, and more than 400 offices in North America, South America, Europe, the Middle East, Australia, Africa, and Asia. In October 2016, the company moved its headquarters from Pasadena, California to Dallas.
On August 9, 2017, the Pentagon awarded a $4.6 billion Integrated Research & Development for Enterprise Solutions (IRES) follow-on contract to Jacobs Technology Inc., a unit of Jacobs Engineering Group Inc., to provide products and services for the Missile Defense Agency and its Missile Defense Integration and Operations Center. In October 2018, Jacobs agreed to sell its Energy, Chemicals and Resources (ECR) segment to WorleyParsons, a company in North Sydney, Australia. In April 2021, the Institute on Taxation and Economic Policy listed the top 55 corporations which paid $0 in taxes for the year 2020. Jacobs' federal income taxes for that year were negative $37 million, an effective tax rate of −17.4%.
As of 2023, the company forms part of the Dow Jones Sustainability Indices. In 2024, Jacobs spun off its Critical Mission Solutions and Cyber and Intelligence Government Services businesses, which merged with privately held Amentum Government Services Holding LLC to create a new, publicly traded company, Amentum.
== Acquisitions ==
On December 10, 1998, it was announced that Jacobs would acquire the closely held engineering firm Sverdrup for $200 million. In 2001, Jacobs acquired the international operations of Law Engineering and Environmental Services of Atlanta, including the UK-based international consultancy Sir Alexander Gibb & Partners (Gibb Ltd).
In FY 2007, Jacobs acquired the privately held planning, engineering and design firm, Edwards and Kelcey of Morristown, New Jersey for an undisclosed amount. In FY 2008, Jacobs spent $264 million to acquire Carter and Burgess, Lindsey Engineering and a 60% stake in Zamel and Turbag Consulting Engineers. In FY 2010, Jacobs acquired TechTeam, Tybrin, and Jordan, Jones and Goulding. They paid $259.5 million for the three companies.
In FY 2014, Jacobs announced it had completed a merger transaction with Sinclair Knight Merz (SKM), a 6,900-person professional services firm headquartered in Sydney. The purchase price was an enterprise value of AUS$1.2 billion (US$1.1 billion) plus adjustments for cash, debt and other items. On August 2, 2017, Jacobs acquired CH2M Hill, based in Englewood, Colorado, an engineering firm in the infrastructure and government service sectors, including water, transportation, environmental and nuclear, in a $3.27 billion cash-and-stock deal.
In March 2020, Jacobs acquired Wood Nuclear, the nuclear services arm of John Wood Group of the UK, for £250 million, adding 2000 staff. Jacobs' total UK workforce was now almost 11,000. In December 2020, Jacobs announced it would be investing in PA Consulting based in London, in a deal valued at £1.825 billion. Completion of the deal was expected to take place by the end of Q1 2021. On February 7, 2022, Jacobs announced that it would enter into a joint venture with the Qatar based entity Locus Engineering Management and Services Co. W.L.L, an Asset Management company with interests in building maintenance, infrastructure, oil and gas support services, and engineering. The terms of the venture were not disclosed.
== Controversies ==
=== Kingston coal ash cleanup ===
The Kingston Fossil Plant coal fly ash slurry spill was an environmental and industrial disaster which occurred on Monday, December 22, 2008, when a dike ruptured at a coal ash pond at the Tennessee Valley Authority's Kingston Fossil Plant in Harriman, Tennessee, releasing 1.1 billion U.S. gallons (4.2 million cubic meters) of coal fly ash slurry. The Tennessee Valley Authority hired Jacobs Engineering to clean up the spill. In the years after the spill, a number of workers at the cleanup site suffered health effects.
As early as 2012, workers began to report illnesses which they believed were caused by the cleanup, and by the ten-year anniversary of the event, hundreds of workers had been sickened and more than 30 had died. In May 2023, it was reported that more than 50 workers had died and 150 were sick. In 2013, 50 workers and their families filed a lawsuit against contractor Jacobs Engineering. They were represented by Knoxville lawyer James K. Scott, and the lawsuit was dismissed in 2014 by judge Thomas A. Varlan, chief judge of the U.S. District Court for the Eastern District of Tennessee. This ruling was reversed by the U.S. Court of Appeals for the Sixth Circuit after evidence was discovered that Jacobs Engineering had misled the workers about the dangers of coal ash.
A federal jury ruled in favor of the workers seeking compensation in November 2018. The ruling held that Jacobs Engineering had failed to keep the workers safe from environmental hazards, and had misled them about the dangers of coal ash, mainly by claiming that extra protective equipment, such as masks and protective clothing, was unnecessary. In phase two of the trial, the Kingston cleanup workers would be able to seek damages. In April 2020, 52 workers rejected a $10 million settlement offered by Jacobs Engineering.
=== Hinkley Point ===
Jacobs Engineering is building the Hinkley Point C nuclear reactor, which has been controversial for its excessive delays and cost overruns. "It's three times over cost and three times over time where it's been built in Finland and France," said Paul Dorfman of the UCL (University College London) Energy Institute. The companies involved have been accused of a conflict of interest because the company advising the UK on cost management was owned by Jacobs Engineering while Jacobs was also working for the company managing an Électricité de France (EDF) project. Thus, a subsidiary of a company hired by EDF was advising the UK on how much money to grant EDF.
=== Woonsocket Regional Wastewater Treatment Facility ===
The Rhode Island Department of Environmental Management is investigating the WRWTF plant, which is run by Jacobs, for the spillage of an estimated 10 million gallons of incompletely treated wastewater into the Blackstone River in June 2022. Previous investigations resulted in letters of noncompliance being issued to Jacobs in 2020 and 2021.
== See also ==
Top 100 Contractors of the U.S. federal government
== References ==
== External links ==
Official website
Business data for Jacobs Solutions Inc.
Heidelberg Materials UK is a British-based building materials company, headquartered in Maidenhead. Previously known as Hanson UK, the company has been a subsidiary of the German company HeidelbergCement since August 2007, and was formerly listed on the London Stock Exchange and a constituent of the FTSE 100 Index.
Originally trading as the Wiles Group, the company was transformed into Hanson Trust Ltd by James Hanson and Gordon White in 1964. Over a thirty-year period, Hanson pursued a principal strategy of raising shareholder value through a series of acquisitions. Several large businesses were purchased throughout the 1980s, such as the United Drapery Stores in 1983, Imperial Tobacco in 1986 and Kidde in 1987. Some of these acquisitions drew criticism and opposition. During 1991, Hanson Plc attempted its largest acquisition yet, of Imperial Chemical Industries (ICI), but this was hotly contested and ultimately unsuccessful.
By the start of the 1990s, Hanson Plc had become a sizable conglomerate and one of the largest firms based in Britain. However, amid negative perceptions of the conglomerate model, the company was reorganised during the mid-1990s into four separate listed firms: Hanson plc, Imperial Tobacco, The Energy Group and Millennium Chemicals. In 2007, HeidelbergCement purchased Hanson Plc for £8 billion to create the second largest cement and building materials company in the world. In October 2023, the company announced that it was rebranding as Heidelberg Materials.
== History ==
=== Growth through acquisition ===
Originally known as Hanson Trust plc, the company was built up by James Hanson, later Lord Hanson, and Gordon White, later Baron White of Hull, who created Hanson Trust out of the former Wiles Group in 1964.
Hanson and White were willing to take a wide range of measures to maximise value, including mass redundancies, and therefore attracted opposition and accusations that they were asset strippers. From 1979, the company was successful from the shareholders' point of view and respected during the early 1980s; Hanson (who donated millions of pounds to the Conservatives) was given a life peerage by Britain's then-Prime Minister, Margaret Thatcher, in June 1983. It has been alleged that Hanson benefitted from political favouritism that may have swayed decisions made by the Monopolies and Mergers Commission (MMC).
One of the most notable takeovers, at least to the general public, was the acquisition in 1983 of the United Drapery Stores (otherwise known as UDS Group), which owned many of Britain's best-known high street clothes shops and department stores, including John Collier, Richard Shops and the chain of Allders department stores. To fund this purchase, Hanson broke up UDS, selling John Collier via a management buyout and Richard Shops to Habitat, keeping only the core department store business. In January 1986, Hanson bought SCM, an American chemicals-to-typewriters business, which included the paper division that was formerly the Allied Paper Corporation. Hanson promptly sold most of the SCM business units and the headquarters building in New York City for a significant profit.
Hanson's most significant single purchase was probably its takeover of Imperial Tobacco Group in 1986. Hanson paid £2.5 billion for the group then undertook a major reorganisation; divestitures netted £2.3 billion, leaving Hanson with the hugely profitable tobacco business for "next to nothing." Hanson sold off the food brand, Golden Wonder, to Dalgety plc in 1986. Hanson was also involved in the politically charged Westland affair of the mid-1980s, giving its backing to the successful British Government-backed bidder for the British aerospace firm Westland Helicopters.
In mid-1987, the firm acquired the American consumer products group Kidde at a cost of $1.7 billion; during October of that year, Black Monday hit and stock valuations plunged, leading to criticism that Hanson had allegedly overpaid for Kidde. In November 1988, Hanson bought Consolidated Gold Fields for £3.5 billion. The Gold Survey was taken on by a new company, now known as GFMS.
In mid-1991, the company attempted to acquire Imperial Chemical Industries (ICI), a business that was once viewed by many in Britain as the nation's leading company but was by then in decline. Hanson had acquired a 2.8 per cent stake in the company as part of its hostile takeover attempt, which ICI's management team chose to oppose. The envisioned acquisition became hotly contested and controversial, partially as it would have been the biggest takeover in British history at that point. In October 1991, Hanson opted not to proceed with the deal.
In September 1991, Hanson acquired Beazer, a major British housebuilder, for $609 million. Two years later, it also purchased a portion of the Watt Housing Corporation in a $116 million (£76 million) deal.
During the mid-1990s, conglomerates were falling out of favour with the investment community. Some of the manufacturing businesses were spun off as US Industries in February 1995. In January 1996, Hanson ended its time as a diversified conglomerate by breaking itself up into four separate listed companies: Hanson plc, Imperial Tobacco, The Energy Group and Millennium Chemicals. This restructuring had reportedly cost the group £95 million in professional fees by August 1996.
=== Building materials focus ===
During December 1997, Lord Hanson stepped down as chairman. Led by Andrew Dougal, chief executive from 1997 until 2002, the company focused on building materials. By December 1999, Hanson had become the world's biggest aggregates supplier and the second largest supplier of ready-mixed concrete. In November 1999, Hanson acquired Australian building materials business Pioneer International.
In early 2002, Dougal parted ways with Hanson, leaving with a controversially large pay-off (variously reported at between £400,000 and £660,000, plus a pension top-up of £636,700).
=== Acquisition by HeidelbergCement ===
In May 2007, HeidelbergCement announced its intent to purchase Hanson Plc for £11 per share, which valued it at approximately £8 billion. This deal made the combined company the second largest cement and building materials company in the world. The transaction was completed through Heidelberg subsidiary Lehigh UK on 22 August 2007. In December 2014, Heidelberg Cement agreed to sell its Hanson Building Products division to the private equity firm Lone Star for £900 million.
During 2023, Hanson was reportedly planning the construction of a new carbon capture facility that was aimed at reducing the emissions from their Padeswood cement works. The British government chose Hanson, along with other companies, to present progress plans for carbon reduction solutions.
In October 2023, the company announced that it was rebranding as Heidelberg Materials, as part of a branding rationalisation by its parent company.
== Operations ==
The principal markets of Heidelberg Materials UK are the major conurbations in England and Wales and the central belt of Scotland. The company supplies heavy building materials such as ready-mixed concrete, asphalt and cement to the UK construction industry.
In March 2024, residents of Glyncoch, near Pontypridd in South Wales, began a series of protests after the Minister for Climate Change, Julie James, overrode the local authority's opposition to an extension of quarrying. The successful appeal will allow a further 15.7 million tonnes of rock to be extracted for road surfacing and runways. Quarry operations will continue until 2047 and will come within 164 metres of schools and housing, as well as destroying a community green space and a wildlife sanctuary.
The appeal report claimed that "The dust assessments concluded that the potential impacts associated with both the continuation of existing activities and the proposed extension would be slight adverse at most" and that "From all that I have seen and read there are no objections or concerns relating to landscape, visual impact, ecology, hydrology, cultural heritage, agricultural land quality impacts".
== See also ==
Hanson Cement
== References ==
== External links ==
Official website
Domestic housing in the United Kingdom presents a possible opportunity for achieving the 20% overall cut in UK greenhouse gas emissions targeted by the Government for 2010. However, the process of achieving that drop is proving problematic given the very wide range of age and condition of the UK housing stock.
== Carbon emissions ==
Although carbon emissions from housing have remained fairly stable since 1990 (due to the increase in household energy use having been compensated for by the 'dash for gas'), housing accounted for around 30% of all the UK's carbon dioxide emissions in 2004 (40 million tonnes of carbon) up from 26.42% in 1990 as a proportion of the UK's total emissions. The Select Committee on Environmental Audit noted that emissions from housing could constitute over 55% of the UK's target for carbon emissions in 2050.
A 2006 report commissioned by British Gas estimated the average carbon emissions for housing in each of the local authorities in Great Britain, the first time that this had been done. This indicated that housing in Uttlesford (Essex) produced the highest emissions (8,092 kg of carbon dioxide per dwelling), roughly 250% of the figure for Camden (London), which produced the least (averaging 3,255 kg). Among the 23 towns included, Reading had the highest emissions (6,189 kg), with Hull the lowest (4,395 kg). The variations are due to a number of factors, including the age, size and type of the housing stock, together with the efficiency of heating systems, the mix of fuels used, the ownership of appliances, occupancy levels and the habits of the occupants.
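As a quick arithmetic check on the comparison above, the ratio of the highest to the lowest average can be computed directly; this is a minimal sketch using only the figures quoted in this section.

```python
# Average CO2 emissions per dwelling (kg/year), from the 2006 report figures above.
uttlesford, camden = 8092, 3255

ratio = uttlesford / camden
print(f"Uttlesford average is {ratio:.2f}x Camden's ({ratio:.0%} of it)")
# -> about 2.49x, i.e. roughly 250% of the Camden figure
```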
=== Zero carbon ambition ===
In the December 2006 Pre-Budget Report, the Government announced their 'ambition' that all new homes will be 'zero-carbon' by 2016 (i.e. built to zero-carbon building standards). To encourage this, an exemption from Stamp duty land tax was to be granted, lasting until 2012, for all new zero-carbon homes up to £500,000 in value.
Whilst some organisations applauded the initial announcement of the scheme in the pre-budget statement from the then UK Chancellor, Gordon Brown, others were concerned about the government's ability to deliver on the promise.
== Domestic energy use ==
In 2003 the housing stock in the United Kingdom was amongst the least energy efficient in Europe. In 2004, housing (including space heating, hot water, lighting, cooking, and appliances) accounted for 30.23% of all energy use in the UK (up from 27.70% in 1990). The figure for London is higher at approximately 37%.
In view of the progressive tightening of the Building Regulations' requirements for energy efficiency since the 1970s (see the history section below), it might be expected that a significant cut in domestic energy use would have occurred; however, this has not yet been the case.
Although insulation standards have been increasing, so has the standard of home heating. In 1970, only 31% of homes had central heating. By 2003 it had been installed in 92% of British homes, leading in turn to a rise in the average temperature within them (from 12.1 °C to 18.2 °C). Even in homes with central heating, average temperatures rose 4.55 °C during this period.
At the same time, the increase in the number of households, increasing numbers of domestic electrical appliances, an increase in the number of light fittings, a reduction in the average number of occupants per household, plus other factors, led to an increase in the domestic share of total national energy consumption from around 25% in 1970 to about 30% in 2001, and the trend remained upward (BRE figures).
The figures for energy consumed by end use in 2003 were:
Space heating - 60.51% (57.61% in 1990)
Water heating - 23.60% (25.23% in 1990)
Appliances and lighting - 13.15% (13.4% in 1990)
Cooking - 2.74% (3.76% in 1990)
== The Green Deal ==
The Green Deal provided low-interest loans for energy efficiency improvements, repaid through the energy bills of the properties on which the upgrades were performed. These debts passed to new occupiers when they took over payment of the energy bills. The cost of the loan repayments was supposed to be less than the savings on the bills from the upgrades, although this was a guideline rather than a legally enforceable guarantee. It was believed that tying repayment to energy bills would give investors a secure return.
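The repayment condition described above is often called the 'golden rule'. The sketch below shows the shape of that check only; the function name and figures are hypothetical illustrations, not part of any official Green Deal calculation.

```python
def meets_golden_rule(annual_repayment: float, estimated_annual_saving: float) -> bool:
    """Return True if the expected bill savings cover the loan repayments.

    Under the Green Deal this was a guideline, not a guarantee: actual
    savings depended on occupant behaviour and energy prices.
    """
    return annual_repayment <= estimated_annual_saving

# Hypothetical example: a loft and cavity package repaid at £300 a year,
# expected to save £350 a year on bills.
print(meets_golden_rule(annual_repayment=300.0, estimated_annual_saving=350.0))  # True
```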
The Green Deal for the domestic property market was launched in October 2012. The Commercial Green Deal was launched in January 2012 and released in a series of stages to help with the varying needs and requirements of commercial properties.
== Building regulations ==
The 1965 Building Regulations introduced the first limits on the amount of energy that could be lost through certain elements of the fabric of new houses. This thermal performance was expressed in the imperial units of the time (the amount of heat in BTU lost per square foot, for each degree Fahrenheit of temperature difference between inside and outside).
The 1965 regulations were the first set of truly nationwide standards; prior to this, local authorities set their own local regimes. Part F of the 1965 regulations defines minimum thermal performance, and schedule 11 of the same document gives examples of compliant methods for builders and architects to refer to. While novel at the time, by modern standards these limits set quite a low target: unfilled cavity walls, 2 inches of loft insulation and uninsulated concrete floors were deemed adequate for the reference standards.
The 1972 regulations retained the same standards but converted them to metric units and the modern u-value (the amount of heat lost in watts per square metre, for each degree Celsius of temperature difference between inside and outside).
In effect, the target insulation levels were u-values of 1.42 W/m2·K for the roof and floor and 1.7 W/m2·K for external walls. Modern performance standards require values less than 20% of these. The u-value approach is slightly regressive in that richer people live in bigger houses, which tend to have a lower ratio of surface area to floor area, although they are often detached, which can offset the advantage over smaller row houses.
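To make the u-value concrete: the fabric heat loss through an element is its u-value multiplied by its area and by the inside–outside temperature difference. The sketch below compares the 1972 wall target with a typical modern limit; the wall area, the temperatures and the modern figure of 0.3 W/m2·K are illustrative assumptions.

```python
def fabric_heat_loss(u_value: float, area_m2: float, delta_t: float) -> float:
    """Heat loss in watts: U (W/m2.K) x area (m2) x temperature difference (K)."""
    return u_value * area_m2 * delta_t

# Illustrative 30 m2 external wall, 20 C inside, 0 C outside.
old = fabric_heat_loss(1.7, 30, 20)  # 1972 target for external walls
new = fabric_heat_loss(0.3, 30, 20)  # an assumed typical modern limit
print(f"1972 wall: {old:.0f} W; modern wall: {new:.0f} W ({new / old:.0%} of the older figure)")
# -> 1020 W vs 180 W, about 18%
```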
These first limits were tightened following the 1973 oil crisis, and on several subsequent occasions, including the addition of limits for doors and windows in 2000 that effectively required double glazing for the first time (see below). Despite this, UK insulation requirements have been introduced later, or remained lower, than the EU average.
=== Changes in 2006 ===
The energy policy of the United Kingdom through the 2003 Energy White Paper articulated directions for more energy efficient building construction. Hence, the year 2006 saw a significant tightening of energy efficiency requirements within the Building Regulations (for earlier regulations, see separate section below).
With the long-term aim of cutting overall emissions by 60% by 2050, and by 80% by 2100, the intention of the 2006 changes was to cut energy use in new housing by 20% compared to a similar building constructed to the 2002 standards. The changes were the first to the regulations brought about by the desire to reduce emissions, though some have raised doubts about whether they will actually achieve the 20% cut (see criticisms section).
In the 2006 regulations, the u-value was replaced as the primary measure of energy efficiency by the Dwelling Carbon Dioxide Emission Rate (DER), an estimate of carbon dioxide emissions per m2 of floor area. This is calculated using the Government's Standard Assessment Procedure for Energy Rating of Dwellings (SAP 2005).
In addition to the levels of insulation provided by the structure of the building, the DER also takes into account the airtightness of the building, the efficiency of space and water heating, the efficiency of lighting, any savings from solar power or other energy generation technologies employed, and other factors. For the first time, it also became compulsory to upgrade the energy efficiency of existing houses when extensions or certain other works are carried out.
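A DER is expressed in kilograms of carbon dioxide per square metre of floor area per year. The toy calculation below shows only the shape of such a figure; the fuel quantities and emission factors are assumptions, and the real SAP 2005 procedure is far more detailed, covering airtightness, heating efficiency, lighting, on-site generation and more.

```python
# Toy dwelling emission rate; NOT the SAP 2005 procedure.
floor_area_m2 = 90.0
annual_energy_kwh = {"gas": 12000.0, "electricity": 3000.0}  # assumed usage
emission_factor = {"gas": 0.19, "electricity": 0.42}         # assumed kgCO2/kWh

total_co2 = sum(kwh * emission_factor[fuel] for fuel, kwh in annual_energy_kwh.items())
der = total_co2 / floor_area_m2
print(f"{der:.1f} kgCO2/m2 per year")  # about 39.3 with these assumptions
```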
Some organisations have raised doubts over the claim that the changes will result in a 20% saving. Issues cited have included alleged problems with the calculation methods, the limitations of the modelling software, and the specification of the reference building used in the model. For example, a 2005 study sponsored by the Pilkington Energy Efficiency Trust indicated that the savings would only be in the region of 9%.
There are also concerns about enforcement, with a Building Research Establishment study in 2004 indicating that 60% of new homes do not conform to existing regulations. A 2006 survey for the Energy Saving Trust revealed that Building Control Officers considered energy efficiency 'a low priority' and that few would take any action over failure to comply with the Building Regulations because the matter 'seemed trivial'.
=== Further changes ===
In December 2006, the government announced their ambition that all new housing should be built to zero-carbon standards from 2016; i.e., that the carbon emitted during a typical year should be balanced by renewable energy generation. The UK was the first country in the world to adopt such a policy, and the initiative was generally welcomed by the industry in principle, despite some subsequent concern over the practicalities.
On 1 April 2011 the WWF resigned from the taskforce on Zero-Carbon homes, stating that 'the zero-carbon policy is now in tatters' after the Government unilaterally decided to change the scope of the 'zero carbon' policy to exclude some emissions not currently covered by the building regulations. The UK Green Building Council estimated that the change, published at the time of the March 2011 budget, will result in only two-thirds of the emissions of a new home being mitigated.
In 2004, the Government indicated that the next revision to the energy performance standards of the Building Regulations would be in 2010. In the consultation document Building a Greener Future: Towards Zero Carbon Development it was proposed that the 2010 revision should require a further 25% improvement in the energy/carbon performance, in line with the 2004 proposals. It was further envisaged that there would be a 44% improvement in 2013, compared to 2006 levels. This would then be followed by the adoption of a zero carbon requirement in 2016, applied to all home energy use including appliances. These steps in performance would align the energy efficiency requirement of the Building Regulations with those of Levels 3, 4 and 6 of the Code for Sustainable Homes in 2010, 2013 and 2016 respectively.
== Home energy labelling ==
Originally, from June 2007, all homes (and other buildings) in the UK would have to undergo Energy Performance Certification (also commonly known as an EPC certificate) before they were sold or let, in order to meet the requirements of the European Energy Performance of Buildings Directive (Directive 2002/91/EC). The scheme provides the owner or landlord with an 'energy label' so that they can demonstrate the energy efficiency of the property, and this is also included in the new Home Information Packs. The scheme has been criticised for its methodology and superficial approach, especially for old buildings: for example, it ignores the low heat transmission of thick walls, and it recommends compact fluorescent lamps, which can damage sensitive textiles and paintings.
It is hoped that energy labelling will raise awareness of energy efficiency, and encourage upgrading to make properties more marketable. Incentives may be available for carrying out energy conservation measures. Research by comparethemarket.com showed homes across the UK were worth more when scored highly in an EPC.
For new buildings, SAP calculations form the basis for the certification, while RDSAP (Reduced Data SAP) is used to assess existing properties. It is estimated that only 10% of the nation's housing will score above 60 on the scale, although most will score above 40.
== Other rating schemes ==
Another rating scheme of note is the Government sponsored EcoHomes rating, mostly used in public sector housing, and only applicable to new properties or major refurbishments. This actually measures a range of sustainability issues, of which energy efficiency is only one. EcoHomes is to be replaced by the Government's Code for Sustainable Homes in 2007.
The Energy Saving Trust set requirements for 'good practice' and 'advanced practice' for achieving lower energy buildings, while the Association for Environment Conscious Building's CarbonLite programme specifies Silver and Gold standards, the latter approaching a zero energy building.
In Wales, where 'zero-carbon homes' are the aspiration for 2011 (although 2012 is more likely), the requirement is for the Code for Sustainable Homes or equivalent. This has opened the door for standards like Passivhaus and the CarbonLite programme. Another, lesser-known building type, which does not rely on airtightness for its energy rating, is the Bio-Solar-Haus: it is built from renewable resources and has a breathable structure, making it healthier to live in.
== Grants ==
The Government's low carbon buildings programme was launched in 2006 to replace the earlier Clear Skies and Solar PV programmes. It offers grants towards the costs of solar thermal heating, small wind turbine, micro hydro, ground source heat pump, and biomass installations. As of January 2007 funding for grants is proving insufficient to meet demand.
A similar scheme, the Scottish Community and Household Renewables Initiative operates in Scotland, which also offers grants towards the cost of air source heat pumps.
== Local government ==
Under the Home Energy Conservation Act 1995, local authorities are required to consider measures to improve the energy efficiency of all residential accommodation in their areas, although they are not required to implement any measures. Most local authorities provide free advice on energy conservation and some also provide home visits, often targeting those in social housing and the fuel poor. Some also demand minimum levels of energy efficiency in newly constructed buildings. It was expected that the Act would result in a 30% cut in energy usage between 1996 and 2010. An overall cumulative improvement of 14.7% was reported to DEFRA for the year ending March 2004, but a large part of this would have happened without HECA.
In the South, most local authority housing was sold off in the 1980s and 1990s under the Right to Buy scheme, so the remaining stock is small. Much social housing has also been transferred to housing associations.
== Demonstration and pioneering projects ==
One of the most important energy efficiency demonstration projects was the 1986 Energy World exhibition in Milton Keynes, which attracted international interest. Fifty-one houses were built, designed to be at least 30% more efficient than the Building Regulations then in force. This was calculated using the Milton Keynes Energy Cost Index (MKECI), a test-bed for the subsequent SAP rating system and the National Home Energy Rating scheme. Energy World was preceded by the earlier Salford low-energy houses, built in the early 1980s, which remain 40% more efficient than required by the 2010 Building Regulations.
The Beddington Zero Energy Development (BedZED), a non-traditional housing scheme of 82 dwellings near Beddington, London, included zero fossil energy usage as one of its key design features. The project was completed in 2002 and is the UK's largest eco-development. As designed, the energy used is generated from renewables on site. In use, BedZED has yielded considerable useful feedback, not least that energy efficiency and passive design features deliver more reliable reductions in carbon emissions than active systems. Due to their superinsulation, the properties use 88% less energy (measured) for space heating compared to those built to the 2002 Building Regulations, while the reduction for water heating is 57%. Measured electrical use for cooking, appliances and occupants' plug loads ('unregulated energy' consumption) is some 55% lower than UK norms.
The Green Building in Manchester City Centre has been built to high energy efficiency standards and won a 2006 Civic Trust Award for its sustainable design. The cylindrical shape of the ten-storey tower minimises the surface area relative to the volume, ensuring less energy is lost through thermal dissipation. Other technologies include solar water heating, a wind turbine and triple glazing.
The South Yorkshire Energy Centre at Heeley City Farm in Sheffield is an example of refurbishing an existing property to show the options available.
The EcoHouse in Leicester is to be renovated in 2011 to provide a demonstration of Retrofit for the Future energy efficiency standards.
The Old Home SuperHome initiative features many owner-occupied existing-home retrofits that achieve a 60% carbon saving and can be visited by the public. Many of the homes have dramatically improved their energy efficiency to achieve these carbon savings, while some have also installed renewable energy technologies.
== International comparisons ==
International comparisons of particular note include:
The 1977 Danish BR77 standard (the first to set demanding energy efficiency requirements).
The SBN-80 (Svensk Bygg Norm) 1980 Swedish Building Standards, which in 1983 was in advance of the UK 2002 standards.
The voluntary Canadian R-2000 standard, to which around 14,000 houses had been built in the 10 years to 1992.
Since then many more have been built in Canada, in Japan, and in various other countries including a number in the UK. Currently energy savings of 30% to 40% are typically achieved in Canada.
The voluntary German Passivhaus standard. Properties built to the standards use approximately 85% less energy and produce 95% less carbon dioxide compared to properties built to the UK's 2002 standards. Over 6,000 such houses have been built across several European countries.
The voluntary Swiss Minergie standard, which requires that general energy consumption must not be higher than 75% of that of average buildings and that fossil-fuel consumption must not be higher than 50% of the consumption of such buildings, and the Minergie-P standard, requiring virtually zero energy consumption.
== Research ==
In 2005, the Select Committee on Environmental Audit expressed their concern that there was a lack of significant funding for research and development of sustainable construction methods, with funding for the Building Research Establishment having been "drastically" cut in the previous 4 years. As a result, many of the sustainable building materials used in the UK are imported from Germany, Switzerland and Austria—some of the countries that have been prominent in research.
== Existing housing stock ==
Even if all new housing does become zero carbon by 2016, the energy efficiency of the remainder of the housing stock would need to be addressed.
The 2006 Review of the Sustainability of Existing Buildings revealed that 6.1 million homes lacked an adequate thickness of loft insulation, 8.5 million homes had uninsulated cavity walls, and that there is a potential to insulate 7.5 million homes that have solid external walls. These three measures alone have the potential to save 8.5 million tonnes of carbon emissions each year. Despite this, 95% of homeowners think that the heating of their own home is currently effective.
See UK Government policy for improving home energy efficiency for further information on policies from 1945 to 2016 and their effectiveness.
== Historic building regulations energy efficiency requirements ==
The u-value limits introduced in 1965 were:
1.7 for walls
1.4 for roofs
Following the 1973 oil crisis, these were tightened in 1976 to:
1.0 for exposed walls, exposed floors and non-solid ground floors
1.7 for semi-exposed walls
1.8 average for walls and windows combined
0.6 for roofs
1985 saw the second tightening of these limits, to:
0.6 for exposed walls, floors and ground floors
1.0 for semi-exposed walls
0.35 for roofs
These limits were reduced again in 1990:
0.45 for exposed walls, floors and ground floors
0.6 for semi-exposed walls
0.25 for roofs
plus a requirement that the area of windows should not be more than 15% of the floor area.
As with the 2006 changes, it was predicted that the introduction of these limits would result in a 20% reduction in energy use for heating. A survey by Liverpool John Moores University predicted that the actual figure would be 6% (Johnson, JA "Building Regulations Research Project").
In the 1995 Building Regulations, the insulation requirements were set at the following U-values:
0.45 for exposed walls, floors and ground floors
0.6 for semi-exposed walls and floors
0.25 for roofs
the limit on window area was raised to 22.5%
The 2002 regulations reduced the U-values, and made additional elements of the building fabric subject to control. Although there was in practice considerable flexibility and the ability to 'trade off' reductions in one area for increases in another, the 'target' limits became:
0.35 for walls
0.25 for floors
0.20 or 0.25 for pitched roofs (depending on the construction)
0.16 for flat roofs
2.2 for metal framed doors and windows
2.0 for other doors and windows
the limit on window area was raised again to 25%
Similar limits were introduced in Scotland in 2002 and 2006, though with a lower limit of 0.3 or 0.27 for walls, and some other variations.
It was claimed by the Government that these measures would cut the heating requirement by 25% compared to the 1995 Regulations. It was subsequently also claimed that they had achieved a 50% cut compared to the 1990 Regulations.
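Percentage savings quoted against different baselines compound multiplicatively rather than additively. The sketch below, assuming both claims refer to the same heating requirement, shows what the pair of claims implies about the 1995 regulations relative to 1990.

```python
# What the two claims imply jointly, assuming multiplicative compounding.
cut_2002_vs_1995 = 0.25  # claimed cut of the 2002 regs relative to 1995
cut_2002_vs_1990 = 0.50  # claimed cut of the 2002 regs relative to 1990

implied_1995_vs_1990 = 1 - (1 - cut_2002_vs_1990) / (1 - cut_2002_vs_1995)
print(f"Implied 1995-vs-1990 cut: {implied_1995_vs_1990:.0%}")  # about 33%
```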
While the u-value ceased to be the sole consideration in 2006, u-value limits similar to those in the 2002 regulations still apply, but are no longer sufficient by themselves. The DER and the TER (Target Emission Rate), calculated through either the UK Government's Standard Assessment Procedure for Energy Rating of Dwellings (SAP rating), 2005 edition, or the newer SBEM (Simplified Building Energy Model), which is aimed at non-dwellings, became the only acceptable calculation methods. Several commercial energy modelling software packages have since been verified as producing acceptable evidence by BRE Global and the UK Government. Calculations using previous versions of SAP had been an optional way of demonstrating compliance since 1991(?). They are now a statutory requirement (Building Regulation 17C et al.) for all building regulations applications involving new dwellings/buildings and large extensions to existing non-domestic buildings.
== See also ==
== References ==
== External links ==
BRE: Domestic Energy Fact File Archived 18 May 2006 at the Wayback Machine
Energy Efficiency in Existing Buildings: The Role of Building Regulations Archived 27 September 2007 at the Wayback Machine
DEFRA - Domestic energy research
University of Oxford - Oxford University 40% House initiative
Leeds Metropolitan University: York Energy Demonstration Project (1998) Archived 27 September 2007 at the Wayback Machine
Heriot-Watt University's Tarbase Project
Resources
Centre for Alternative Technology
The Association for Environment Conscious Building
In the media
Jan 2007, HBF: Landmark summit to determine how to deliver Government's environmental vision
Sep 2006, New Builder: Housing Minister: Homes should exceed Scandinavian specifications in 10 years
Jul 2006, BBC: Eco-targets at risk unless households cut resource use
The Strategic Forum for Construction is a United Kingdom construction industry organisation established in 2001 as the principal point of liaison between UK government and the major construction membership organisations. It also enables different representatives of the UK industry to discuss strategic issues facing construction and to develop joint strategies for industry improvement.
== History ==
The Strategic Forum was established by ministers in 2001 as a successor to the Construction Industry Board (established following a recommendation in the 1994 Latham Report) and the Construction Task Force, established by the then Deputy Prime Minister John Prescott in 1997. Parts of the construction industry had withdrawn support for the Construction Industry Board, so construction minister Nick Raynsford MP established it initially as a Government-funded body. The Task Force had produced the 1998 Egan Report, and Sir John Egan was appointed the Forum's first chairman.
In 2002, the Construction Industry Council, with backing from other umbrella bodies and Raynsford's successor as construction minister, Brian Wilson MP, changed the Forum to an independent industry group; Peter Rogers of property developer Stanhope plc succeeded Egan as chairman, serving until 2006. Prior to its 2016 reformation, the Forum was chaired by Lord O'Neill; the Forum is now chaired on a rotating basis by representatives from each of its six members.
The Forum has been repeatedly criticised for not speaking on behalf of the entire industry (a role also claimed by the CBI's construction council). In August 2012, the then chief construction adviser Paul Morrell, speaking in a personal capacity, proposed to radically shake up the Forum's governance structure to present a unified industry voice to lobby the government, with Balfour Beatty chief executive Ian Tyler to chair a new advisory council to the Government Construction Board. The forum's role also came under scrutiny following the government's 2013 formation of a Construction Leadership Council.
== Structure ==
The Strategic Forum initially had six key sector representatives, each looking after the interests of a particular sector:
industry clients (represented by the Construction Clients' Group, which includes public and private sector organisations, including Defence Estates, the BBC, BAA and Royal Mail, responsible for significant annual investment in construction projects)
professionals (Construction Industry Council - this presents the views of professional membership organisations, research institutions and specialist construction business associations, collectively comprising over 350,000 professionals and around 25,000 construction consultancy firms)
contractors (the UK Contractors Group and the Construction Alliance - UKCG accounted for about 30% of total industry output, while the Alliance represents over 13,500 firms, mainly SME contractors)
specialist contractors (National Specialist Contractors Council and Specialist Engineering Contractors Group - the NSCC presents the views of 32 specialist trade organisations, while the main umbrella organisations in the SECG represent 60,000 companies, mostly SMEs)
product suppliers and manufacturers (Construction Products Association - the CPA represents 85% by value, of all UK manufacturers and suppliers of construction products, and covers 43 sector trade associations)
site workers (trade unions, represented by UCATT)
In February 2016, the Forum was relaunched. Changes reflected a 2015 reformation of the Construction Leadership Council, and merger of the National Specialist Contractors Council with the UK Contractors Group to form Build UK. Now excluding site worker representation, the reconstituted Forum's membership comprised:
Construction Clients' Group
Construction Industry Council
Build UK
Construction Alliance
Specialist Engineering Contractors Group (superseded in 2021 by Actuate UK)
Construction Products Association
The Home Builders Federation and CBI also attend meetings.
== Activities ==
In September 2002, the Strategic Forum published Accelerating Change. This set a headline target that 50% of projects should be undertaken by integrated teams and supply chains by 2007 (progress was made, but the target was not achieved). To help achieve the target, in 2003 it published an online Integration toolkit.
The Strategic Forum seeks to promote and to monitor industry progress on six key areas (described in its Construction Commitments):
Procurement and integration
Commitment to people
Client leadership
Sustainability
Design quality
Health and safety
As appropriate the Forum works with other bodies including Constructing Excellence (which also provides administrative support to the Construction Clients' Group) and CITB.
== References ==
== External links ==
Official website
A building control body is an organisation authorised by the Building Act 1984 (as amended on 1 October 2023 by the Building Safety Act 2022) to control building work that is subject to the Building Regulations in England and Wales (similar systems exist in Northern Ireland, and in Scotland, where the term 'building standards' is used). Such regulations or standards are also known as building codes in other parts of the world.
== Overview ==
Building control roles are exercised by public officers within local authorities and by private sector employees of Registered Building Control Approvers (RBCAs) which replaced the former "Approved Inspectors", once licensed by CICAIR Ltd, a body authorised by the Secretary of State for Housing, Communities and Local Government under the Building Act 1984 (as amended).
In England and Wales, each local authority is the "Local Building Control Authority" (LBCA). The collective term "Local Authority Building Control" (LABC) refers to the non-statutory organisation (a 'trade body') of that name representing all local authority building control services in England and Wales. LABC is controlled by its members, the local authorities, and operates a range of generic websites offering advice and information on the Building Regulations to both building industry professionals and the general public.
The title "building control officer" (BCO) (also known as a "building inspector" or a "building control surveyor") is used predominantly by local authorities, which confer the title of "officer" to many staff who have regulatory, supervision or enforcement roles.
In 2021, the House of Commons considered a draft Building Safety Bill to implement post-Grenfell Tower fire inquiry recommendations for better safety in the erection of future higher-risk buildings, and better management of all existing (and still under construction) higher-risk blocks of flats and student accommodation (over six floors or 18 m above ground level). The Building Safety Act 2022 is now statute law in England.
Building regulations are a devolved area of law, and different administrative regimes exist across the four nations that make up the United Kingdom (UK). Scotland has always had very different "building safety standards" under its separate Scottish legal system.
== Qualifications and appointment ==
The Building Safety Act 2022 created a regulated and legally protected profession for all building control professionals. To practise in the public sector with local authorities and/or in the private sector as employees of RBCA companies or as self-employed individuals, all individual building control professionals must register with the Building Safety Regulator (BSR), a statutory body created by the Building Safety Act 2022. Thus, since 1 October 2023, it has been a criminal offence to claim to be an RBI (Registered Building Inspector) unless registered with the BSR.
There are now three main non-statutory professional bodies - the Royal Institution of Chartered Surveyors (RICS), the Chartered Institute of Building (CIOB) and the Chartered Association of Building Engineers (CABE) - with members in the construction industry and local authority or private sector "building control".
In July 2019, there were 95 Approved Inspectors operating in the UK, but rising insurance premiums following the Grenfell disaster meant some could be forced out of business.
== Functions ==
The main function of building control is to ensure that the requirements of the building regulations are met in all types of non-exempt development. Generally they examine plans, specifications and other documents submitted for approval, and survey work as it proceeds. Most building control surveyors are now actively involved at design stage for many schemes and are acknowledged to provide valuable input at all stages of development.
Many building control surveyors who work for local authorities are involved with other legislation such as safety at sports grounds, dealing with dangerous structures and demolitions, and various development and building matters.
Local authorities (as the Local Building Control Authority) and the Building Safety Regulator have statutory powers under the Building Act 1984 (as amended by the Building Safety Act 2022) to administer and enforce compliance with the relevant requirements of the building regulations and to have work altered or removed that does not comply. These powers have not been conferred on anyone working in the private sector.
There is a clear legal duty on all RBIs to ensure compliance with the relevant requirements of the building regulations; a mandatory code of professional conduct must be followed.
=== Dutyholders ===
The Building Regulations 2010 were amended on 1 October 2023 to impose clear legal duties on all clients, designers (architects, etc.) and contractors (builders, installers, specialists, etc.) to comply with the relevant requirements of the Building Regulations 2010 (see the new Part 2A of the Building Regulations 2010).
== Organisations ==
Local Authority Building Control (LABC) is a non-statutory membership organisation representing the 370 local authority building control teams in England.
LABSS (Local Authority Building Standards Scotland) is a not-for-profit membership organisation representing all local authority building standards verifiers in Scotland.
Formed in 1996, the Association of Consultant Approved Inspectors (ACAI) promotes private sector building control as a commercial, professional and cost-effective alternative to local authority inspectors.
The ACAI and LABC joined with the CABE, CIOB and RICS to form the Building Control Alliance, incorporated in 2008. The Alliance was dissolved in 2018.
The Association of Registered Building Inspectors (ARBI) is currently being formed by RBIs, following the new legislation of 1 October 2023.
== See also ==
Building regulations approval
Building regulations in the United Kingdom
Energy efficiency in British housing
Planning permission in the United Kingdom
== Notes and references ==
== External links ==
The Building Regulations 2010 (SI 2010/2214)
The Building (Registered Building Control Approvers) Regulations 2024 (SI 2024/110)
The Hardy Cross method is an iterative method for determining the flow in pipe network systems where the inputs and outputs are known, but the flow inside the network is unknown.
The method was first published in November 1936 by its namesake, Hardy Cross, a structural engineering professor at the University of Illinois at Urbana–Champaign. The Hardy Cross method is an adaptation of the Moment distribution method, which was also developed by Hardy Cross as a way to determine the forces in statically indeterminate structures.
The introduction of the Hardy Cross method for analyzing pipe flow networks revolutionized municipal water supply design. Before the method was introduced, solving complex pipe systems for distribution was extremely difficult due to the nonlinear relationship between head loss and flow. The method was later made obsolete by computer solving algorithms employing the Newton–Raphson method or other numerical methods that eliminate the need to solve nonlinear systems of equations by hand.
== History ==
In 1930, Hardy Cross published a paper called "Analysis of Continuous Frames by Distributing Fixed-End Moments", in which he described the moment distribution method, which would change the way engineers in the field performed structural analysis. The moment distribution method was used to determine the forces in statically indeterminate structures and allowed engineers to safely design structures from the 1930s through the 1960s, until the development of computer-oriented methods. In November 1936, Cross applied the same geometric method to solving pipe network flow distribution problems and published a paper called "Analysis of flow in networks of conduits or conductors."
== Derivation ==
The Hardy Cross method is an application of continuity of flow and continuity of potential to iteratively solve for flows in a pipe network. In the case of pipe flow, conservation of flow means that the flow in is equal to the flow out at each junction in the pipe. Conservation of potential means that the total directional head loss along any loop in the system is zero (assuming that a head loss counted against the flow is actually a head gain).
Hardy Cross developed two methods for solving flow networks. Each method starts by maintaining either continuity of flow or potential, and then iteratively solves for the other.
=== Assumptions ===
The Hardy Cross method assumes that the flow going in and out of the system is known and that the pipe length, diameter, roughness and other key characteristics are also known or can be assumed. The method also assumes that the relation between flow rate and head loss is known, but the method does not require any particular relation to be used.
In the case of water flow through pipes, a number of methods have been developed to determine the relationship between head loss and flow. The Hardy Cross method allows for any of these relationships to be used.
The general relationship between head loss and flow is:
$h_f = k \cdot Q^n$
where k is the head loss per unit flow and n is the flow exponent. In most design situations the values that make up k, such as pipe length, diameter, and roughness, are taken to be known or assumed and thus the value of k can be determined for each pipe in the network. The values that make up k and the value of n change depending on the relation used to determine head loss. However, all relations are compatible with the Hardy Cross method.
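For instance, under the assumption of a coefficient $k = 0.5$ and an exponent $n = 2$ (illustrative values only), a pipe carrying a flow of $Q = 4$ would lose $h_f = 0.5 \cdot 4^2 = 8$ units of head, and doubling the flow would quadruple the loss.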
It is also worth noting that the Hardy Cross method can be used to solve simple circuits and other flow-like situations. In the case of simple circuits, $V = K \cdot I$ is equivalent to $h_f = k \cdot Q^n$.
By setting the coefficient k to K, the flow rate Q to I and the exponent n to 1, the Hardy Cross method can be used to solve a simple circuit. However, because the relation between the voltage drop and current is linear, the Hardy Cross method is not necessary and the circuit can be solved using non-iterative methods.
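Concretely, substituting $n = 1$ into the flow correction derived in the proof below gives $\Delta I = -\frac{\Sigma R I_0}{\Sigma R}$ for a loop of resistors, which reduces to a simple linear mesh-current adjustment.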
=== Method of balancing heads ===
The method of balancing heads uses an initial guess that satisfies continuity of flow at each junction and then balances the flows until continuity of potential is also achieved over each loop in the system.
==== Proof (r denotes k) ====
The following proof is taken from Hardy Cross's paper "Analysis of flow in networks of conduits or conductors", and can be verified against the National Programme on Technology Enhanced Learning's Water and Wastewater Engineering pages and Fundamentals of Hydraulic Engineering Systems by Robert J. Houghtalen.
If the initial guess of flow rates in each pipe is correct, the change in head over a loop in the system, $\Sigma rQ^n$, would be equal to zero. However, if the initial guess is not correct, then the change in head will be non-zero and a change in flow, $\Delta Q$, must be applied. The new flow rate, $Q = Q_0 + \Delta Q$, is the sum of the old flow rate and some change in flow rate such that the change in head over the loop is zero. The sum of the change in head over the new loop will then be $\Sigma r(Q_0 + \Delta Q)^n = 0$.
The value of $\Sigma r(Q_0 + \Delta Q)^n$ can be approximated using the Taylor expansion:
$\Sigma r(Q_0 + \Delta Q)^n = \Sigma r\left(Q_0^n + nQ_0^{n-1}\Delta Q + \cdots\right) = 0$
For a small $\Delta Q$ compared to $Q_0$, the higher-order terms vanish, leaving:
$\Sigma r\left(Q_0^n + nQ_0^{n-1}\Delta Q\right) = 0$
Solving for $\Delta Q$:

$\Sigma rQ_0^n = -\Sigma nrQ_0^{n-1}\Delta Q$

$\Delta Q = -\frac{\Sigma rQ_0^n}{\Sigma nrQ_0^{n-1}}$
The change in flow that will balance the head over the loop is approximated by $\Delta Q = -\frac{\Sigma rQ_0^n}{\Sigma nrQ_0^{n-1}}$. However, this is only an approximation, due to the terms that were ignored from the Taylor expansion. The change in head over the loop may not be zero, but it will be smaller than under the initial guess. Repeatedly finding a new $\Delta Q$ will converge to the correct solution.
==== Process ====
The method is as follows:
Guess the flows in each pipe, making sure that the total inflow is equal to the total outflow at each junction. (The guess doesn't have to be good, but a good guess will reduce the time it takes to find the solution.)
Determine each closed loop in the system.
For each loop, determine the clockwise head losses and counter-clockwise head losses. Head loss in each pipe is calculated using $h_f = rQ^n$. Clockwise head losses are from flows in the clockwise direction, and likewise for counter-clockwise.
Determine the total head loss in each loop, $\Sigma rQ^n$, by subtracting the counter-clockwise head loss from the clockwise head loss.
For each loop, find $\Sigma nrQ^{n-1}$ without reference to direction (all values should be positive).
The change in flow is equal to $\frac{\Sigma rQ^n}{\Sigma nrQ^{n-1}}$.
If the change in flow is positive, apply it to all pipes of the loop in the counter-clockwise direction. If the change in flow is negative, apply it to all pipes of the loop in the clockwise direction.
Continue from step 3 until the change in flow is within a satisfactory range.
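The procedure above translates directly into code. The following is a minimal Python sketch of the method of balancing heads; the Pipe class and the loop representation (lists of (pipe, clockwise) pairs) are illustrative assumptions, not a standard library API.

```python
# Minimal sketch of the Hardy Cross method of balancing heads.
# A pipe shared between two loops appears in both loop lists, so the
# corrections from each loop accumulate on it, as in step 7 above.

class Pipe:
    def __init__(self, r, q):
        self.r = r  # head-loss coefficient in h_f = r * Q^n
        self.q = q  # current flow, positive in the pipe's own direction

def hardy_cross(loops, n=2.0, tol=1e-6, max_iter=100):
    """loops: list of loops, each a list of (pipe, clockwise) pairs."""
    for _ in range(max_iter):
        worst = 0.0
        for loop in loops:
            # Steps 3-4: directional head loss (clockwise minus counter-clockwise).
            head = sum((1 if cw else -1) * p.r * abs(p.q) ** n for p, cw in loop)
            # Step 5: denominator without reference to direction (all positive).
            denom = sum(n * p.r * abs(p.q) ** (n - 1) for p, cw in loop)
            # Steps 6-7: correction applied against the sign of the imbalance.
            dq = -head / denom
            for p, cw in loop:
                p.q += dq if cw else -dq
            worst = max(worst, abs(dq))
        if worst < tol:  # step 8: stop once corrections are small enough
            break
    return loops
```

Because each correction adds the same $\Delta Q$ all the way around a closed loop, continuity of flow at the junctions, established by the initial guess, is preserved automatically.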
=== Method of balancing flows (section incomplete) ===
The method of balancing flows uses an initial guess that satisfies continuity of potential over each loop and then balances the flows until continuity of flow is also achieved at each junction.
== Advantages of the Hardy Cross method ==
=== Simple mathematics ===
The Hardy Cross method is useful because it relies on only simple mathematics, circumventing the need to solve a system of equations. Without the Hardy Cross method, engineers would have to solve complex systems of equations with variable exponents that cannot easily be solved by hand.
=== Self correcting ===
The Hardy Cross method iteratively corrects for the mistakes in the initial guess used to solve the problem. Subsequent mistakes in calculation are also iteratively corrected. If the method is followed correctly, the proper flow in each pipe can still be found if small mathematical errors are consistently made in the process. As long as the last few iterations are done with attention to detail, the solution will still be correct. In fact, it is possible to intentionally leave off decimals in the early iterations of the method to run the calculations faster.
== Example ==
The Hardy Cross method can be used to calculate the flow distribution in a pipe network. Consider the example of a simple pipe flow network shown at the right. For this example, the inflow and outflow are each 10 liters per second. We take n to be 2; the head loss per unit flow, r, and the initial flow guess for each pipe are as shown in the figure.
We solve the network by method of balancing heads, following the steps outlined in method process above.
1. The initial guesses are set up so that continuity of flow is maintained at each junction in the network.
2. The loops of the system are identified as loop 1-2-3 and loop 2-3-4.
3. The head losses in each pipe are determined.
For loop 1-2-3, the sum of the clockwise head losses is 25 and the sum of the counter-clockwise head losses is 125.
For loop 2-3-4, the sum of the clockwise head losses is 125 and the sum of the counter-clockwise head losses is 25.
4. The total clockwise head loss in loop 1-2-3 is $25 - 125 = -100$. The total clockwise head loss in loop 2-3-4 is $125 - 25 = 100$.
5. The value of $\Sigma nrQ^{n-1}$ is determined for each loop. It is found to be 60 in both loops (due to symmetry), as shown in the figure.
6. The change in flow is found for each loop using the equation $\frac{\Sigma rQ^n}{\Sigma nrQ^{n-1}}$. For loop 1-2-3, the change in flow is equal to $-100/60 = -1.66$, and for loop 2-3-4 the change in flow is equal to $100/60 = 1.66$.
7. The change in flow is applied across the loops. For loop 1-2-3, the change in flow is negative so its absolute value is applied in the clockwise direction. For loop 2-3-4, the change in flow is positive so its absolute value is applied in the counter-clockwise direction. For pipe 2-3, which is in both loops, the changes in flow are cumulative.
The process then repeats from step 3 until the change in flow becomes sufficiently small or goes to zero.
3. The total head loss in loop 1-2-3 is recalculated with the updated flows. Notice that the clockwise head loss is now equal to the counter-clockwise head loss. This means that the flow in this loop is balanced and the flow rates are correct. The total head loss in loop 2-3-4 will also be balanced (again due to symmetry).
In this case, the method found the correct solution in one iteration. For other networks, it may take multiple iterations until the flows in the pipes are correct or approximately correct.
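As a check of the Pipe and hardy_cross sketch defined earlier on the simplest possible network (a hypothetical single loop of two parallel pipes, not the network in the figure), the converged flows should equalize the head losses, $r_1 Q_1^2 = r_2 Q_2^2$:

```python
# Hypothetical example: two parallel pipes carrying a total of 10 L/s,
# with assumed coefficients r1 = 1 and r2 = 5 and n = 2. The first
# correction, 100/60 ~ 1.67, mirrors the loop totals quoted above.
p1 = Pipe(r=1.0, q=5.0)   # traversed clockwise around the loop
p2 = Pipe(r=5.0, q=5.0)   # traversed counter-clockwise
hardy_cross([[(p1, True), (p2, False)]], n=2.0)
print(f"Q1 = {p1.q:.2f} L/s, Q2 = {p2.q:.2f} L/s")          # ~6.91 and ~3.09
print(f"head check: {1.0 * p1.q**2:.1f} vs {5.0 * p2.q**2:.1f}")  # both ~47.7
```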
== See also ==
Pipe network analysis
Moment distribution method
== References ==
Energy development is the field of activities focused on obtaining sources of energy from natural resources. These activities include the production of renewable, nuclear, and fossil fuel derived sources of energy, and the recovery and reuse of energy that would otherwise be wasted. Energy conservation and efficiency measures reduce the demand for energy development, and can benefit society through improvements to environmental issues.
Societies use energy for transportation, manufacturing, illumination, heating and air conditioning, and communication, for industrial, commercial, agricultural and domestic purposes. Energy resources may be classified as primary resources, where the resource can be used in substantially its original form, or as secondary resources, where the energy source must be converted into a more conveniently usable form. Non-renewable resources are significantly depleted by human use, whereas renewable resources are produced by ongoing processes that can sustain indefinite human exploitation.
Thousands of people are employed in the energy industry. The conventional industry comprises the petroleum industry, the natural gas industry, the electrical power industry, and the nuclear industry. New energy industries include the renewable energy industry, comprising alternative and sustainable manufacture, distribution, and sale of alternative fuels.
== Classification of resources ==
Energy resources may be classified as primary resources, suitable for end use without conversion to another form, or secondary resources, where the usable form of energy required substantial conversion from a primary source. Examples of primary energy resources are wind power, solar power, wood fuel, fossil fuels such as coal, oil and natural gas, and uranium. Secondary resources are those such as electricity, hydrogen, or other synthetic fuels.
Another important classification is based on the time required to regenerate an energy resource. "Renewable resources" are those that recover their capacity in a time span significant to human needs. Examples are hydroelectric power or wind power, when the natural phenomena that are the primary source of energy are ongoing and not depleted by human demands. Non-renewable resources are those that are significantly depleted by human usage and that will not recover their potential significantly during human lifetimes. An example of a non-renewable energy source is coal, which does not form naturally at a rate that would support human use.
== Fossil fuels ==
Fossil fuel (primary non-renewable fossil) sources burn coal or hydrocarbon fuels, which are the remains of the decomposition of plants and animals. There are three main types of fossil fuels: coal, petroleum, and natural gas. Another fossil fuel, liquefied petroleum gas (LPG), is principally derived from the production of natural gas. Heat from burning fossil fuel is used either directly for space heating and process heating, or converted to mechanical energy for vehicles, industrial processes, or electrical power generation. These fossil fuels are part of the carbon cycle and allow solar energy stored in the fuel to be released.
The use of fossil fuels in the 18th and 19th century set the stage for the Industrial Revolution.
Fossil fuels make up the bulk of the world's current primary energy sources. In 2005, 81% of the world's energy needs was met from fossil sources. The technology and infrastructure for the use of fossil fuels already exist. Liquid fuels derived from petroleum deliver much usable energy per unit of weight or volume, which is advantageous when compared with lower energy density sources such as batteries. Fossil fuels are currently economical for decentralized energy use.
Energy dependence on imported fossil fuels creates energy security risks for dependent countries. Oil dependence in particular has led to war, funding of radicals, monopolization, and socio-political instability.
Fossil fuels are non-renewable resources, which will eventually decline in production and become exhausted. While the processes that created fossil fuels are ongoing, fuels are consumed far more quickly than the natural rate of replenishment. Extracting fuels becomes increasingly costly as society consumes the most accessible fuel deposits. Extraction of fossil fuels results in environmental degradation, such as the strip mining and mountaintop removal for coal.
Fuel efficiency is a form of thermal efficiency, meaning the efficiency of a process that converts the chemical potential energy contained in a carrier fuel into kinetic energy or work. Fuel economy, the energy efficiency of a particular vehicle, is given as a ratio of distance travelled per unit of fuel consumed. Weight-specific efficiency (efficiency per unit weight) may be stated for freight, and passenger-specific efficiency (vehicle efficiency per passenger) for passenger transport. The inefficient atmospheric combustion (burning) of fossil fuels in vehicles, buildings, and power plants contributes to urban heat islands.
Conventional production of oil peaked, conservatively, between 2007 and 2010. In 2010, it was estimated that an investment of $8 trillion in non-renewable resources would be required to maintain current levels of production for 25 years. In 2010, governments subsidized fossil fuels by an estimated $500 billion a year. Fossil fuels are also a source of greenhouse gas emissions, leading to concerns about global warming if consumption is not reduced.
The combustion of fossil fuels leads to the release of pollution into the atmosphere. The fossil fuels are mainly carbon compounds. During combustion, carbon dioxide is released, and also nitrogen oxides, soot and other fine particulates. The carbon dioxide is the main contributor to recent climate change.
Other emissions from fossil fuel power station include sulphur dioxide, carbon monoxide (CO), hydrocarbons, volatile organic compounds (VOC), mercury, arsenic, lead, cadmium, and other heavy metals including traces of uranium.
A typical coal plant generates billions of kilowatt hours of electrical power per year.
== Nuclear ==
=== Fission ===
Nuclear power is the use of nuclear fission to generate useful heat and electricity. Fission of uranium produces nearly all economically significant nuclear power. Radioisotope thermoelectric generators form a very small component of energy generation, mostly in specialized applications such as deep space vehicles.
Nuclear power plants, excluding naval reactors, provided about 5.7% of the world's energy and 13% of the world's electricity in 2012.
In 2013, the IAEA reported that there were 437 operational nuclear power reactors in 31 countries, although not every reactor was producing electricity. In addition, there were approximately 140 naval vessels using nuclear propulsion in operation, powered by some 180 reactors. As of 2013, attaining a net energy gain from sustained nuclear fusion reactions, excluding natural fusion power sources such as the Sun, remained an ongoing area of international physics and engineering research. More than 60 years after the first attempts, commercial fusion power production remains unlikely before 2050.
There is an ongoing debate about nuclear power. Proponents, such as the World Nuclear Association, the IAEA and Environmentalists for Nuclear Energy contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Opponents contend that nuclear power poses many threats to people and the environment.
Nuclear power plant accidents include the Chernobyl disaster (1986), Fukushima Daiichi nuclear disaster (2011), and the Three Mile Island accident (1979). There have also been some nuclear submarine accidents. In terms of lives lost per unit of energy generated, analysis has determined that nuclear power has caused less fatalities per unit of energy generated than the other major sources of energy generation. Energy production from coal, petroleum, natural gas and hydropower has caused a greater number of fatalities per unit of energy generated due to air pollution and energy accident effects. However, the economic costs of nuclear power accidents is high, and meltdowns can take decades to clean up. The human costs of evacuations of affected populations and lost livelihoods is also significant.
Comparisons have been made between nuclear power's latent deaths, such as from cancer, and the immediate deaths per unit of energy generated (GWeyr) from other energy sources. Such studies do not count fossil-fuel-related cancer and other indirect deaths created by fossil fuel consumption in their "severe accident" classification, which covers accidents with more than five fatalities.
As of 2012, according to the IAEA, there were 68 civil nuclear power reactors under construction worldwide in 15 countries, approximately 28 of them in the People's Republic of China (PRC). The most recent reactor to be connected to the electrical grid, as of May 2013, was at the Hongyanhe Nuclear Power Plant in the PRC on February 17, 2013. In the United States, two new Generation III reactors are under construction at Vogtle. U.S. nuclear industry officials expect five new reactors to enter service by 2020, all at existing plants. In 2013, four aging, uncompetitive reactors were permanently closed.
Recent experiments in extraction of uranium use polymer ropes that are coated with a substance that selectively absorbs uranium from seawater. This process could make the considerable volume of uranium dissolved in seawater exploitable for energy production. Since ongoing geologic processes carry uranium to the sea in amounts comparable to the amount that would be extracted by this process, in a sense the sea-borne uranium becomes a sustainable resource.
Nuclear power is a low carbon power generation method of producing electricity, with an analysis of the literature on its total life cycle emission intensity finding that it is similar to renewable sources in a comparison of greenhouse gas (GHG) emissions per unit of energy generated. Since the 1970s, nuclear fuel has displaced about 64 gigatonnes of carbon dioxide equivalent (GtCO2-eq) greenhouse gases, that would have otherwise resulted from the burning of oil, coal or natural gas in fossil-fuel power stations.
==== Nuclear power phase-out and pull-backs ====
Japan's 2011 Fukushima Daiichi nuclear accident, which occurred in a reactor design from the 1960s, prompted a rethink of nuclear safety and nuclear energy policy in many countries. Germany decided to close all its reactors by 2022, and Italy has banned nuclear power. Following Fukushima, in 2011 the International Energy Agency halved its estimate of additional nuclear generating capacity to be built by 2035.
===== Fukushima =====
Following the 2011 Fukushima Daiichi nuclear disaster – the second-worst nuclear incident, which displaced 50,000 households after radioactive material leaked into the air, soil and sea, and led to radiation checks and bans on some shipments of vegetables and fish – a global survey of public support for energy sources was published by Ipsos (2011), and nuclear fission was found to be the least popular.
==== Fission economics ====
The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments ride on the choice of an energy source. Nuclear power plants typically have high capital costs for building the plant, but low direct fuel costs. In recent years there has been a slowdown of electricity demand growth and financing has become more difficult, which affects large projects such as nuclear reactors, with very large upfront costs and long project cycles which carry a large variety of risks. In Eastern Europe, a number of long-established projects are struggling to find finance, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where cheap gas is available and its future supply relatively secure, this also poses a major problem for nuclear projects.
Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date all operating nuclear power plants were developed by state-owned or regulated utility monopolies where many of the risks associated with construction costs, operating performance, fuel price, and other factors were borne by consumers rather than suppliers. Many countries have now liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants.
==== Costs ====
Costs are likely to go up for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. While first-of-their-kind designs, such as the EPRs under construction, are behind schedule and over budget, of the seven South Korean APR-1400s presently under construction worldwide, two are in South Korea at the Hanul Nuclear Power Plant and four are in the United Arab Emirates at the planned Barakah nuclear power plant, the largest nuclear station construction project in the world as of 2016. The first reactor, Barakah-1, is 85% completed and on schedule for grid connection during 2017.
Two of the four EPRs under construction (in Finland and France) are significantly behind schedule and substantially over cost.
== Renewable sources ==
Renewable energy is generally defined as energy that comes from resources which are naturally replenished on a human timescale such as sunlight, wind, rain, tides, waves and geothermal heat. Renewable energy replaces conventional fuels in four distinct areas: electricity generation, hot water/space heating, motor fuels, and rural (off-grid) energy services.
Including traditional biomass usage, about 19% of global energy consumption is accounted for by renewable resources. Wind power has become a prominent renewable energy source; global wind power capacity increased by 12% in 2021. While not the case for all countries, in 58% of sampled countries renewable energy consumption was linked to a positive impact on economic growth. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. National renewable energy markets are projected to continue to grow strongly in the coming decade and beyond.
Unlike other energy sources, renewable energy sources are not as restricted by geography. Additionally, the deployment of renewable energy produces economic benefits while combating climate change. Rural electrification has been studied at multiple sites, showing positive effects on commercial spending, appliance use, and other activities requiring electricity. Renewable energy growth in at least 38 countries has been driven by high electricity usage rates. International support for promoting renewable sources like solar and wind has continued to grow.
While many renewable energy projects are large-scale, renewable technologies are also suited to rural and remote areas and developing countries, where energy is often crucial to human development. To ensure human development continues sustainably, governments around the world are beginning to research potential ways to implement renewable sources into their countries and economies. For example, the UK Government's Department of Energy and Climate Change created a 2050 Pathways mapping tool to educate the public on land competition between energy supply technologies. This tool lets users understand the limitations and potential of their surrounding land and country in terms of energy production.
=== Hydroelectricity ===
Hydroelectricity is electric power generated by hydropower; the force of falling or flowing water. In 2015 hydropower generated 16.6% of the world's total electricity and 70% of all renewable electricity and was expected to increase about 3.1% each year for the following 25 years.
Hydropower is produced in 150 countries, with the Asia-Pacific region generating 32 percent of global hydropower in 2010. China is the largest hydroelectricity producer, with 721 terawatt-hours of production in 2010, representing around 17 percent of domestic electricity use. There are now three hydroelectricity plants larger than 10 GW: the Three Gorges Dam in China, Itaipu Dam across the Brazil/Paraguay border, and Guri Dam in Venezuela.
The cost of hydroelectricity is relatively low, making it a competitive source of renewable electricity. The average cost of electricity from a hydro plant larger than 10 megawatts is 3 to 5 U.S. cents per kilowatt-hour. Hydro is also a flexible source of electricity since plants can be ramped up and down very quickly to adapt to changing energy demands. However, damming interrupts the flow of rivers and can harm local ecosystems, and building large dams and reservoirs often involves displacing people and wildlife. Once a hydroelectric complex is constructed, the project produces no direct waste, and has a considerably lower output level of the greenhouse gas carbon dioxide than fossil fuel powered energy plants.
=== Wind ===
Wind power harnesses the power of the wind to propel the blades of wind turbines. These turbines cause the rotation of magnets, which creates electricity. Wind towers are usually built together on wind farms. There are offshore and onshore wind farms. Global wind power capacity has expanded rapidly to 336 GW in June 2014, and wind energy production was around 4% of total worldwide electricity usage, and growing rapidly.
Wind power is widely used in Europe, Asia, and the United States. Several countries have achieved relatively high levels of wind power penetration, such as 21% of stationary electricity production in Denmark, 18% in Portugal, 16% in Spain, 14% in Ireland, and 9% in Germany in 2010. By 2011, at times over 50% of electricity in Germany and Spain came from wind and solar power. As of 2011, 83 countries around the world were using wind power on a commercial basis.
Many of the world's largest onshore wind farms are located in the United States, China, and India. Most of the world's largest offshore wind farms are located in Denmark, Germany and the United Kingdom. The two largest offshore wind farms are currently the 630 MW London Array and Gwynt y Môr.
=== Solar ===
=== Biofuels ===
A biofuel is a fuel that contains energy from geologically recent carbon fixation. These fuels are produced from living organisms; examples of this carbon fixation occur in plants and microalgae. Biofuels are made by biomass conversion (biomass refers to recently living organisms, most often plants or plant-derived materials). Biomass can be converted to convenient energy-containing substances in three different ways: thermal conversion, chemical conversion, and biochemical conversion, resulting in fuel in solid, liquid, or gas form. Biofuels have increased in popularity because of rising oil prices and the need for energy security.
Bioethanol is an alcohol made by fermentation, mostly from carbohydrates produced in sugar or starch crops such as corn or sugarcane. Cellulosic biomass, derived from non-food sources, such as trees and grasses, is also being developed as a feedstock for ethanol production. Ethanol can be used as a fuel for vehicles in its pure form, but it is usually used as a gasoline additive to increase octane and improve vehicle emissions. Bioethanol is widely used in the USA and in Brazil. Current plant design does not provide for converting the lignin portion of plant raw materials to fuel components by fermentation.
Biodiesel is made from vegetable oils and animal fats. Biodiesel can be used as a fuel for vehicles in its pure form, but it is usually used as a diesel additive to reduce levels of particulates, carbon monoxide, and hydrocarbons from diesel-powered vehicles. Biodiesel is produced from oils or fats using transesterification and is the most common biofuel in Europe. Research is also underway on producing renewable fuels by decarboxylation.
In 2010, worldwide biofuel production reached 105 billion liters (28 billion gallons US), up 17% from 2009, and biofuels provided 2.7% of the world's fuels for road transport, a contribution largely made up of ethanol and biodiesel. Global ethanol fuel production reached 86 billion liters (23 billion gallons US) in 2010, with the United States and Brazil as the world's top producers, accounting together for 90% of global production. The world's largest biodiesel producer is the European Union, accounting for 53% of all biodiesel production in 2010. As of 2011, mandates for blending biofuels exist in 31 countries at the national level and in 29 states or provinces. The International Energy Agency has a goal for biofuels to meet more than a quarter of world demand for transportation fuels by 2050 to reduce dependence on petroleum and coal.
=== Geothermal ===
Geothermal energy is thermal energy generated and stored in the Earth. Thermal energy is the energy that determines the temperature of matter. The geothermal energy of the Earth's crust originates from the original formation of the planet (20%) and from radioactive decay of minerals (80%). The geothermal gradient, which is the difference in temperature between the core of the planet and its surface, drives a continuous conduction of thermal energy in the form of heat from the core to the surface. The adjective geothermal originates from the Greek roots γη (ge), meaning earth, and θερμος (thermos), meaning hot.
Earth's internal heat is thermal energy generated from radioactive decay and continual heat loss from Earth's formation. Temperatures at the core-mantle boundary may reach over 4000 °C (7,200 °F). The high temperature and pressure in Earth's interior cause some rock to melt and solid mantle to behave plastically, resulting in portions of mantle convecting upward since it is lighter than the surrounding rock. Rock and water is heated in the crust, sometimes up to 370 °C (700 °F).
From hot springs, geothermal energy has been used for bathing since Paleolithic times and for space heating since ancient Roman times, but it is now better known for electricity generation. Worldwide, 11,400 megawatts (MW) of geothermal power is online in 24 countries in 2012. An additional 28 gigawatts of direct geothermal heating capacity is installed for district heating, space heating, spas, industrial processes, desalination and agricultural applications in 2010.
Geothermal power is cost effective, reliable, sustainable, and environmentally friendly, but has historically been limited to areas near tectonic plate boundaries. Recent technological advances have dramatically expanded the range and size of viable resources, especially for applications such as home heating, opening a potential for widespread exploitation. Geothermal wells release greenhouse gases trapped deep within the earth, but these emissions are much lower per energy unit than those of fossil fuels. As a result, geothermal power has the potential to help mitigate global warming if widely deployed in place of fossil fuels.
The Earth's geothermal resources are theoretically more than adequate to supply humanity's energy needs, but only a very small fraction may be profitably exploited. Drilling and exploration for deep resources is very expensive. Forecasts for the future of geothermal power depend on assumptions about technology, energy prices, subsidies, and interest rates. Pilot programs like EWEB's customer opt in Green Power Program show that customers would be willing to pay a little more for a renewable energy source like geothermal. But as a result of government assisted research and industry experience, the cost of generating geothermal power has decreased by 25% over the past two decades. In 2001, geothermal energy cost between two and ten US cents per kWh.
=== Oceanic ===
Marine Renewable Energy (MRE) or marine power (also sometimes referred to as ocean energy, ocean power, or marine and hydrokinetic energy) refers to the energy carried by the mechanical energy of ocean waves, currents, and tides, shifts in salinity gradients, and ocean temperature differences. MRE has the potential to become a reliable and renewable energy source because of the cyclical nature of the oceans. The movement of water in the world's oceans creates a vast store of kinetic energy or energy in motion. This energy can be harnessed to generate electricity to power homes, transport, and industries.
The term marine energy encompasses both wave power, i.e. power from surface waves, and tidal power, i.e. obtained from the kinetic energy of large bodies of moving water. Offshore wind power is not a form of marine energy, as wind power is derived from the wind, even if the wind turbines are placed over water. The oceans have a tremendous amount of energy and are close to many if not most concentrated populations. Ocean energy has the potential to provide a substantial amount of new renewable energy around the world.
Marine energy technology is in its first stage of development. To be developed, MRE needs efficient methods of storing, transporting, and capturing ocean power, so it can be used where needed. Over the past year, countries around the world have started implementing market strategies for MRE to commercialize. Canada and China introduced incentives, such as feed-in tariffs (FiTs), which are above-market prices for MRE that allow investors and project developers a stable income. Other financial strategies consist of subsidies, grants, and funding from public-private partnerships (PPPs). China alone approved 100 ocean projects in 2019. Portugal and Spain recognize the potential of MRE in accelerating decarbonization, which is fundamental to meeting the goals of the Paris Agreement. Both countries are focusing on solar and offshore wind auctions to attract private investment, ensure cost-effectiveness, and accelerate MRE growth. Ireland sees MRE as a key component to reduce its carbon footprint. The Offshore Renewable Energy Development Plan (OREDP) supports the exploration and development of the country's significant offshore energy potential. Additionally, Ireland has implemented the Renewable Electricity Support Scheme (RESS) which includes auctions designed to provide financial support for communities, increase technology diversity, and guarantee energy security.
However, while research is increasing, there have been concerns associated with threats to marine mammals, habitats, and potential changes to ocean currents. MRE can be a renewable energy source for coastal communities helping their transition from fossil fuel, but researchers are calling for a better understanding of its environmental impacts. Because ocean-energy areas are often isolated from both fishing and sea traffic, these zones may provide shelter from humans and predators for some marine species. MRE devices can be an ideal home for many fish, crayfish, mollusks, and barnacles; and may also indirectly affect seabirds, and marine mammals because they feed on those species. Similarly, such areas may create an "artificial reef effect" by boosting biodiversity nearby. Noise pollution generated from the technology is limited, also causing fish and mammals living in the area of the installation to return. In the most recent State of Science Report about MRE, the authors claim that there is no evidence for fish, mammals, or seabirds to be injured by either collision, noise pollution, or the electromagnetic field. The uncertainty of its environmental impact comes from the low quantity of MRE devices in the ocean today where data is collected.
=== 100% renewable energy ===
The incentive to use 100% renewable energy, for electricity, transport, or even total primary energy supply globally, has been motivated by global warming and other ecological as well as economic concerns. Renewable energy use has grown much faster than anyone anticipated. The Intergovernmental Panel on Climate Change has said that there are few fundamental technological limits to integrating a portfolio of renewable energy technologies to meet most of total global energy demand. At the national level, at least 30 nations around the world already have renewable energy contributing more than 20% of energy supply. Also, Stephen W. Pacala and Robert H. Socolow have developed a series of "stabilization wedges" that can allow us to maintain our quality of life while avoiding catastrophic climate change, and "renewable energy sources," in aggregate, constitute the largest number of their "wedges."
Mark Z. Jacobson says producing all new energy with wind power, solar power, and hydropower by 2030 is feasible and existing energy supply arrangements could be replaced by 2050. Barriers to implementing the renewable energy plan are seen to be "primarily social and political, not technological or economic". Jacobson says that energy costs with a wind, solar, water system should be similar to today's energy costs.
Similarly, in the United States, the independent National Research Council has noted that "sufficient domestic renewable resources exist to allow renewable electricity to play a significant role in future electricity generation and thus help confront issues related to climate change, energy security, and the escalation of energy costs ... Renewable energy is an attractive option because renewable resources available in the United States, taken collectively, can supply significantly larger amounts of electricity than the total current or projected domestic demand."
Critics of the "100% renewable energy" approach include Vaclav Smil and James E. Hansen. Smil and Hansen are concerned about the variable output of solar and wind power, but Amory Lovins argues that the electricity grid can cope, just as it routinely backs up nonworking coal-fired and nuclear plants with working ones.
Google spent $30 million on their "Renewable Energy Cheaper than Coal" project to develop renewable energy and stave off catastrophic climate change. The project was cancelled after concluding that a best-case scenario for rapid advances in renewable energy could only result in emissions 55 percent below the fossil fuel projections for 2050.
== Increased energy efficiency ==
Although increasing the efficiency of energy use is not energy development per se, it may be considered under the topic of energy development since it makes existing energy sources available to do work.
Efficient energy use reduces the amount of energy required to provide products and services. For example, insulating a home allows a building to use less heating and cooling energy to maintain a comfortable temperature. Installing fluorescent lamps or natural skylights reduces the amount of energy required for illumination compared to incandescent light bulbs. Compact fluorescent lights use two-thirds less energy and may last 6 to 10 times longer than incandescent lights. Improvements in energy efficiency are most often achieved by adopting an efficient technology or production process.
Reducing energy use may save consumers money, if the energy savings offset the cost of an energy-efficient technology. Reducing energy use also reduces emissions. According to the International Energy Agency, improved energy efficiency in buildings, industrial processes and transportation could reduce global energy demand in 2050 to around 8% below today's level, while serving an economy more than twice as big and a population about 2 billion larger.
Energy efficiency and renewable energy are said to be the twin pillars of sustainable energy policy. In many countries energy efficiency is also seen to have a national security benefit because it can be used to reduce the level of energy imports from foreign countries and may slow down the rate at which domestic energy resources are depleted.
It has been found "that for OECD countries, wind, geothermal, hydro and nuclear have the lowest hazard rates among energy sources in production".
== Transmission ==
While new sources of energy are only rarely discovered or made possible by new technology, distribution technology continually evolves. The use of fuel cells in cars, for example, is an anticipated delivery technology. This section presents the various delivery technologies that have been important to historic energy development. They all rely in some way on the energy sources listed in the previous section.
=== Shipping and pipelines ===
Coal, petroleum and their derivatives are delivered by boat, rail, or road. Petroleum and natural gas may also be delivered by pipeline, and coal via a slurry pipeline. Fuels such as gasoline and LPG may also be delivered via aircraft. Natural gas pipelines must maintain a certain minimum pressure to function correctly. The higher costs of ethanol transportation and storage are often prohibitive.
=== Wired energy transfer ===
Electricity grids are the networks used to transmit and distribute power from production source to end user, which may be hundreds of kilometres apart. Sources include electrical generation plants such as nuclear reactors and coal-burning power plants. A combination of substations and transmission lines is used to maintain a constant flow of electricity. Grids may suffer from transient blackouts and brownouts, often due to weather damage. During certain extreme space weather events, solar wind can interfere with transmissions. Grids also have a predefined carrying capacity or load that cannot safely be exceeded. When power requirements exceed what is available, failures are inevitable. To prevent problems, power is then rationed.
Industrialised countries such as Canada, the US, and Australia are among the highest per capita consumers of electricity in the world, which is possible thanks to a widespread electrical distribution network. The US grid is one of the most advanced, although infrastructure maintenance is becoming a problem. CurrentEnergy provides a real-time overview of the electricity supply and demand for California, Texas, and the Northeast of the US. African countries with small-scale electrical grids have a correspondingly low annual per capita usage of electricity. One of the most powerful power grids in the world supplies power to the state of Queensland, Australia.
=== Wireless energy transfer ===
Wireless power transfer is a process whereby electrical energy is transmitted from a power source to an electrical load that does not have a built-in power source, without the use of interconnecting wires. Currently available technology is limited to short distances and relatively low power level.
Orbiting solar power collectors would require wireless transmission of power to Earth. The proposed method involves creating a large beam of microwave-frequency radio waves, which would be aimed at a collector antenna site on the Earth. Formidable technical challenges exist to ensure the safety and profitability of such a scheme.
== Storage ==
Energy storage is accomplished by devices or physical media that store energy to perform useful operation at a later time. A device that stores energy is sometimes called an accumulator.
All forms of energy are either potential energy (e.g. chemical, gravitational, electrical energy, temperature differential, latent heat, etc.) or kinetic energy (e.g. momentum). Some technologies provide only short-term energy storage, and others can be very long-term, such as power to gas using hydrogen or methane and the storage of heat or cold between opposing seasons in deep aquifers or bedrock. A wind-up clock stores potential energy (in this case mechanical, in the spring tension), a battery stores readily convertible chemical energy to operate a mobile phone, and a hydroelectric dam stores energy in a reservoir as gravitational potential energy. Ice storage tanks store ice (thermal energy in the form of latent heat) at night to meet peak demand for cooling. Fossil fuels such as coal and gasoline store ancient energy derived from sunlight by organisms that later died, became buried and over time were then converted into these fuels. Even food (which is made by the same process as fossil fuels) is a form of energy stored in chemical form.
== History ==
Humanity has sought energy sources endlessly: since prehistory, when fire was discovered for warmth and for roasting food; through the Middle Ages, when populations built windmills to grind wheat; to the modern era, in which nations can generate electricity by splitting the atom.
Except for nuclear, geothermal and tidal power, all other energy sources come from current solar insolation or from fossil remains of plant and animal life that relied upon sunlight. Ultimately, solar energy itself is the result of the Sun's nuclear fusion. Geothermal power from hot, hardened rock above the magma of the Earth's core is the result of the decay of radioactive materials present beneath the Earth's crust, and nuclear fission relies on man-made fission of heavy radioactive elements in the Earth's crust; in both cases these elements were produced in supernova explosions before the formation of the Solar System.
Since the beginning of the Industrial Revolution, the question of the future of energy supplies has been of interest. In 1865, William Stanley Jevons published The Coal Question, in which he saw that the reserves of coal were being depleted and that oil was an ineffective replacement. In 1914, the U.S. Bureau of Mines stated that total production had been 5.7 billion barrels (910,000,000 m3). In 1956, geophysicist M. King Hubbert deduced that U.S. oil production would peak between 1965 and 1970, and that oil production would peak "within half a century", on the basis of 1956 data. In 1989, Colin Campbell predicted a peak. In 2004, OPEC estimated that, with substantial investments, it would nearly double oil output by 2025.
=== Sustainability ===
The environmental movement has emphasized sustainability of energy use and development. Renewable energy is sustainable in its production; the available supply will not be diminished for the foreseeable future - millions or billions of years. "Sustainability" also refers to the ability of the environment to cope with waste products, especially air pollution. Sources which have no direct waste products (such as wind, solar, and hydropower) are brought up on this point. With global demand for energy growing, the need to adopt various energy sources is growing. Energy conservation is an alternative or complementary process to energy development. It reduces the demand for energy by using it efficiently.
=== Resilience ===
Some observers contend that idea of "energy independence" is an unrealistic and opaque concept. The alternative offer of "energy resilience" is a goal aligned with economic, security, and energy realities. The notion of resilience in energy was detailed in the 1982 book Brittle Power: Energy Strategy for National Security. The authors argued that simply switching to domestic energy would not be secure inherently because the true weakness is the often interdependent and vulnerable energy infrastructure of a country. Key aspects such as gas lines and the electrical power grid are often centralized and easily susceptible to disruption. They conclude that a "resilient energy supply" is necessary for both national security and the environment. They recommend a focus on energy efficiency and renewable energy that is decentralized.
In 2008, former Intel Corporation Chairman and CEO Andrew Grove looked to energy resilience, arguing that complete independence is unfeasible given the global market for energy. He describes energy resilience as the ability to adjust to interruptions in the supply of energy. To that end, he suggests the U.S. make greater use of electricity. Electricity can be produced from a variety of sources. A diverse energy supply will be less affected by the disruption in supply of any one source. He reasons that another feature of electrification is that electricity is "sticky" – meaning the electricity produced in the U.S. is to stay there because it cannot be transported overseas. According to Grove, a key aspect of advancing electrification and energy resilience will be converting the U.S. automotive fleet from gasoline-powered to electric-powered. This, in turn, will require the modernization and expansion of the electrical power grid. As organizations such as The Reform Institute have pointed out, advancements associated with the developing smart grid would facilitate the ability of the grid to absorb vehicles en masse connecting to it to charge their batteries.
=== Present and future ===
Extrapolations from current knowledge to the future offer a choice of energy futures. Some predictions parallel the Malthusian catastrophe hypothesis. Numerous complex model-based scenarios have been developed, as pioneered by Limits to Growth. Modeling approaches offer ways to analyze diverse strategies, and hopefully to find a road to rapid and sustainable development of humanity. Short-term energy crises are also a concern of energy development. Extrapolations lack plausibility, particularly when they predict a continual increase in oil consumption.
Energy production usually requires an energy investment. Drilling for oil or building a wind power plant requires energy. The fossil fuel resources that are left are often increasingly difficult to extract and convert. They may thus require increasingly higher energy investments. If investment is greater than the value of the energy produced by the resource, it is no longer an effective energy source. These resources are no longer an energy source but may be exploited for value as raw materials. New technology may lower the energy investment required to extract and convert the resources, although ultimately basic physics sets limits that cannot be exceeded.
Between 1950 and 1984, as the Green Revolution transformed agriculture around the globe, world grain production increased by 250%. The energy for the Green Revolution was provided by fossil fuels in the form of fertilizers (natural gas), pesticides (oil), and hydrocarbon-fueled irrigation. The peaking of world hydrocarbon production (peak oil) may lead to significant changes, and require sustainable methods of production. One vision of a sustainable energy future involves all human structures on the earth's surface (i.e., buildings, vehicles and roads) doing artificial photosynthesis (using sunlight to split water as a source of hydrogen, and absorbing carbon dioxide to make fertilizer) more efficiently than plants.
With the contemporary space industry's economic activity, related private spaceflight, and manufacturing industries that go into Earth orbit or beyond, delivering energy to those regions will require further energy development. Researchers have contemplated space-based solar power for collecting solar power for use on Earth. Space-based solar power has been researched since the early 1970s and would require the construction of collector structures in space. Its advantage over ground-based solar power is a higher intensity of light and no weather to interrupt power collection.
== Energy technology ==
Energy technology is an interdisciplinary engineering science having to do with the efficient, safe, environmentally friendly, and economical extraction, conversion, transportation, storage, and use of energy, targeted towards yielding high efficiency whilst skirting side effects on humans, nature, and the environment.
For people, energy is an overwhelming need, and as a scarce resource, it has been an underlying cause of political conflicts and wars. The gathering and use of energy resources can be harmful to local ecosystems and may have global outcomes.
Energy is also the capacity to do work. We can get energy from food. Energy can take different forms, such as kinetic, potential, mechanical, heat and light. Energy is required by individuals and by society as a whole for lighting, heating, cooking, running industries, operating transportation and so forth. Broadly, there are two types of energy depending on the source:
1. Renewable energy sources
2. Non-renewable energy sources
=== Interdisciplinary fields ===
As an interdisciplinary science, energy technology is linked with many fields in sundry, overlapping ways:
Physics, for thermodynamics and nuclear physics.
Chemistry, for fuel, combustion, air pollution, flue gas, battery technology and fuel cells.
Electrical engineering.
Engineering, often for fluid energy machines such as combustion engines, turbines, pumps and compressors.
Geography, for geothermal energy and exploration for resources.
Mining, for petrochemical and fossil fuels.
Agriculture and forestry, for sources of renewable energy.
Meteorology, for wind and solar energy.
Water and waterways, for hydropower.
Waste management, for environmental impact.
Transportation, for energy-saving transportation systems.
Environmental studies, for studying the effect of energy use and production on the environment, nature and climate change.
Lighting technology, for interior and exterior natural and artificial lighting design, installations, and energy savings.
Energy cost/benefit analysis, for simple payback and life-cycle costing of recommended energy efficiency and conservation measures.
=== Electrical engineering ===
Electric power engineering deals with the production and use of electrical energy, which can entail the study of machines such as generators, electric motors and transformers. Infrastructure involves substations and transformer stations, power lines and electrical cables. Load management and power management over networks significantly affect overall energy efficiency. Electric heating is also widely used and researched.
=== Thermodynamics ===
Thermodynamics deals with the fundamental laws of energy conversion and is drawn from theoretical physics.
=== Thermal and chemical energy ===
Thermal and chemical energy are intertwined with chemistry and environmental studies. Combustion technology covers burners and chemical engines of all kinds, grates and incinerators, along with their energy efficiency, pollution and operational safety.
Exhaust gas purification technology aims to lessen air pollution through various mechanical, thermal and chemical cleaning methods. Emission control technology is a field of process and chemical engineering. Boiler technology deals with the design, construction and operation of steam boilers and turbines (also used in nuclear power generation, see below), drawing on applied mechanics and materials engineering.
Energy conversion technology covers internal combustion engines, turbines, pumps, fans and so on, which are used for transportation, mechanical energy and power generation. High thermal and mechanical loads raise operational safety concerns that are addressed through many branches of applied engineering science.
=== Nuclear energy ===
Nuclear technology deals with nuclear power production from nuclear reactors, along with the processing of nuclear fuel and disposal of radioactive waste, drawing from applied nuclear physics, nuclear chemistry and radiation science.
Nuclear power generation has been politically controversial in many countries for several decades, but the electrical energy produced through nuclear fission is of worldwide importance. There are high hopes that fusion technologies will one day replace most fission reactors, but this remains a research area of nuclear physics.
=== Renewable energy ===
Renewable energy has many branches.
==== Wind power ====
Wind turbines convert wind energy into electricity by connecting a spinning rotor to a generator. Wind turbines draw energy from atmospheric currents and are designed using aerodynamics along with knowledge taken from mechanical and electrical engineering. As the wind passes across the aerodynamic rotor blades, it creates an area of higher pressure on one side of each blade and an area of lower pressure on the other. This pressure difference produces lift and drag forces; because the lift force is stronger than the drag force, the rotor, which is connected to a generator, spins. Electricity is then generated as the aerodynamic force is converted into rotation of the generator.
Recognized as one of the most efficient renewable energy sources, wind power is becoming increasingly relevant and widely used around the world. Wind power does not use any water in the production of energy, making it a good energy source for areas without much water. Wind energy could also still be produced if the climate changes in line with current predictions, as it relies solely on wind.
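The conversion described above is commonly summarized by the standard relation P = ½ρACpv³, where ρ is the air density, A the rotor swept area, v the wind speed and Cp the power coefficient (bounded by the Betz limit of roughly 0.593). A minimal sketch follows; the turbine dimensions and coefficient are illustrative assumptions, not sourced figures.

```python
import math

def wind_power_watts(rho: float, rotor_radius_m: float,
                     wind_speed_ms: float, cp: float) -> float:
    """Power captured by a turbine rotor: P = 0.5 * rho * A * Cp * v^3."""
    area = math.pi * rotor_radius_m ** 2  # swept area of the rotor, m^2
    return 0.5 * rho * area * cp * wind_speed_ms ** 3

# Hypothetical turbine: 40 m blades, 10 m/s wind, Cp = 0.4 (below the Betz limit).
p = wind_power_watts(rho=1.225, rotor_radius_m=40.0, wind_speed_ms=10.0, cp=0.4)
print(f"Captured power = {p / 1e6:.2f} MW")  # about 1.2 MW for these numbers
```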
==== Geothermal ====
Deep within the Earth lies an extremely hot layer of molten rock called magma. The very high temperatures of the magma heat nearby groundwater. Various technologies have been developed to benefit from this heat, such as different types of power plants (dry steam, flash or binary), heat pumps, or wells. These processes of harnessing the heat rely on infrastructure that in one form or another includes a turbine, which is spun by either the hot water or the steam it produces. The spinning turbine, being connected to a generator, produces electricity. A more recent innovation involves the use of shallow closed-loop systems that pump heat to and from structures by taking advantage of the constant temperature of soil around 10 feet (3 m) deep.
==== Hydropower ====
Hydropower draws mechanical energy from rivers, ocean waves and tides. Civil engineering is used to study and build dams, tunnels and waterways and to manage coastal resources through hydrology and geology. A low-speed water turbine spun by flowing water can power an electrical generator to produce electricity.
==== Bioenergy ====
Bioenergy deals with the gathering, processing and use of biomasses grown in biological manufacturing, agriculture and forestry, from which power plants can draw burning fuel. Ethanol, methanol (both controversial) or hydrogen for fuel cells can be obtained from these technologies and used to generate electricity.
==== Enabling technologies ====
Heat pumps and thermal energy storage are classes of technologies that can enable the utilization of renewable energy sources that would otherwise be inaccessible due to a temperature that is too low for utilization or a time lag between when the energy is available and when it is needed. While enhancing the temperature of available renewable thermal energy, heat pumps have the additional property of leveraging electrical power (or in some cases mechanical or thermal power) by using it to extract additional energy from a low-quality source (such as seawater, lake water, the ground, the air, or waste heat from a process).
Thermal storage technologies allow heat or cold to be stored for periods of time ranging from hours or overnight to interseasonal, and can involve storage of sensible energy (i.e., by changing the temperature of a medium) or latent energy (i.e., through phase changes of a medium, such as between water and slush or ice). Short-term thermal storage can be used for peak-shaving in district heating or electrical distribution systems. Kinds of renewable or alternative energy sources that can be enabled include natural energy (e.g., collected via solar-thermal collectors, or dry cooling towers used to collect winter's cold), waste energy (e.g., from HVAC equipment, industrial processes or power plants), or surplus energy (e.g., seasonally from hydropower projects or intermittently from wind farms). The Drake Landing Solar Community (Alberta, Canada) is illustrative: borehole thermal energy storage allows the community to get 97% of its year-round heat from solar collectors on the garage roofs, with most of that heat collected in summer. Types of storage for sensible energy include insulated tanks, borehole clusters in substrates ranging from gravel to bedrock, deep aquifers, or shallow lined pits that are insulated on top. Some types of storage are capable of storing heat or cold between opposing seasons (particularly if very large), and some storage applications require inclusion of a heat pump. Latent heat is typically stored in ice tanks or what are called phase-change materials (PCMs).
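As a rough numerical illustration of the sensible/latent distinction, stored heat can be estimated as Q = mcΔT for a sensible store and Q = mL for a phase change. In the sketch below, the tank size and temperature swing are arbitrary assumptions; the property values for water and ice are standard.

```python
C_WATER = 4186.0       # specific heat of water, J/(kg*K)
L_FUSION_ICE = 334e3   # latent heat of fusion of ice, J/kg
J_PER_MWH = 3.6e9      # joules per megawatt-hour

mass_kg = 10_000.0     # hypothetical 10 m^3 water tank

sensible_j = mass_kg * C_WATER * 30.0  # sensible: heating the tank by 30 K
latent_j = mass_kg * L_FUSION_ICE      # latent: freezing/melting the same mass

print(f"Sensible (30 K swing): {sensible_j / J_PER_MWH:.2f} MWh")  # ~0.35 MWh
print(f"Latent (water <-> ice): {latent_j / J_PER_MWH:.2f} MWh")   # ~0.93 MWh
```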
== See also ==
World energy supply and consumption
Technology
Water-energy nexus
Policy
Energy policy, Energy policy of the United States, Energy policy of China, Energy policy of India, Energy policy of the European Union, Energy policy of the United Kingdom, Energy policy of Russia, Energy policy of Brazil, Energy policy of Canada, Energy policy of the Soviet Union, Energy Industry Liberalization and Privatization (Thailand)
General
Seasonal thermal energy storage (Interseasonal thermal energy storage), Geomagnetically induced current, Energy harvesting, Timeline of sustainable energy research 2020–present
Feedstock
Raw material, Biomaterial, Energy consumption, Materials science, Recycling, Upcycling, Downcycling
Others
Thorium-based nuclear power, List of oil pipelines, List of natural gas pipelines, Ocean thermal energy conversion, Growth of photovoltaics
== References ==
== Sources ==
Armstrong, Robert C., Catherine Wolfram, Robert Gross, Nathan S. Lewis, and M.V. Ramana et al. The Frontiers of Energy, Nature Energy, Vol 1, 11 January 2016.
Serra, J. "Alternative Fuel Resource Development", Clean and Green Fuels Fund, (2006).
Bilgen, S. and K. Kaygusuz, Renewable Energy for a Clean and Sustainable Future, Energy Sources 26, 1119 (2004).
Energy analysis of Power Systems, UIC Nuclear Issues Briefing Paper 57 (2004).
Silvestre B. S., Dalcol P. R. T. (2009). "Geographical proximity and innovation: Evidences from the Campos Basin oil & gas industrial agglomeration — Brazil". Technovation. 29 (8): 546–561. doi:10.1016/j.technovation.2009.01.003.
== Journals ==
Energy Sources, Part A: Recovery, Utilization and Environmental Effects
Energy Sources, Part B: Economics, Planning and Policy
International Journal of Green Energy
== External links ==
Bureau of Land Management 2012 Renewable Energy Priority Projects
Energypedia - a wiki about renewable energies in the context of development cooperation
Hidden Health and Environmental Costs Of Energy Production and Consumption In U.S.
IEA-ECES - International Energy Agency - Energy Conservation through Energy Storage programme.
IEA HPT TCP - International Energy Agency - Technology Collaboration Programme on Heat Pumping Technologies.
IEA-SHC - International Energy Agency - Solar Heating and Cooling programme.
SDH - Solar District Heating Platform. (European Union) | Wikipedia/Energy_production |
In fluid dynamics, pipe network analysis is the analysis of fluid flow through a hydraulic network containing several or many interconnected branches. The aim is to determine the flow rates and pressure drops in the individual sections of the network. This is a common problem in hydraulic design.
== Description ==
To direct water to many users, municipal water supplies often route it through a water supply network. A major part of this network will consist of interconnected pipes. This network creates a special class of problems in hydraulic design, with solution methods typically referred to as pipe network analysis. Water utilities generally make use of specialized software to automatically solve these problems. However, many such problems can also be addressed with simpler methods, like a spreadsheet equipped with a solver, or a modern graphing calculator.
== Deterministic network analysis ==
Once the friction factors of the pipes are obtained (or calculated from pipe friction laws such as the Darcy–Weisbach equation), we can consider how to calculate the flow rates and head losses on the network. Generally the head losses (potential differences) at each node are neglected, and a solution is sought for the steady-state flows on the network, taking into account the pipe specifications (lengths and diameters), pipe friction properties and known flow rates or head losses.
The steady-state flows on the network must satisfy two conditions:
At any junction, the total flow into a junction equals the total flow out of that junction (law of conservation of mass, or continuity law, or Kirchhoff's first law)
Between any two junctions, the head loss is independent of the path taken (law of conservation of energy, or Kirchhoff's second law). This is equivalent mathematically to the statement that on any closed loop in the network, the head loss around the loop must vanish.
If there are sufficient known flow rates, so that the system of equations given by (1) and (2) above is closed (number of unknowns = number of equations), then a deterministic solution can be obtained.
The classical approach for solving these networks is to use the Hardy Cross method. In this formulation, guess values are first assigned to the flows in the network, expressed as volumetric flow rates Q. The initial guesses for the Q values must satisfy the Kirchhoff laws (1): if Q7 enters a junction and Q6 and Q4 leave the same junction, then the initial guess must satisfy Q7 = Q6 + Q4. After the initial guess is made, a loop is considered so that the second condition can be evaluated. Starting from a given node, we work around the loop in a clockwise fashion. We add the head losses according to the Darcy–Weisbach equation for each pipe whose flow Q is in the same direction as the loop (such as Q1), and subtract the head loss where the flow is in the reverse direction (such as Q4). In other words, we add the head losses around the loop in the direction of the loop; depending on whether the flow is with or against the loop, some pipes will have head losses and some will have head gains (negative losses).
To satisfy Kirchhoff's second law (2), the head losses around each loop should sum to zero at the steady-state solution. If the actual sum of head losses around a loop is not zero, then all the flows in the loop are adjusted by the amount given by the following formula, where a positive adjustment is in the clockwise direction.
{\displaystyle \Delta Q=-{\frac {\sum {\text{head loss}}_{c}-\sum {\text{head loss}}_{cc}}{n\cdot \left(\sum {\frac {{\text{head loss}}_{c}}{Q_{c}}}+\sum {\frac {{\text{head loss}}_{cc}}{Q_{cc}}}\right)}},}
where
n is 1.85 for the Hazen–Williams equation and
n is 2 for the Darcy–Weisbach equation.
The clockwise specifier (c) refers only to the flows moving clockwise around the loop, while the counter-clockwise specifier (cc) refers only to the flows moving counter-clockwise.
A single adjustment does not solve the problem, since most networks have several loops. It is valid to apply the adjustment loop by loop, however, because the flow changes do not alter condition 1, so the other loops still satisfy it. The results from the first loop should be used before progressing to the other loops.
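A minimal sketch of this per-loop correction, assuming head losses of the form h = k·|Q|·Q with n = 2 (Darcy–Weisbach); the resistance coefficients and initial flow guesses are invented for illustration, and flows are signed positive when they run clockwise around the loop.

```python
def hardy_cross_correction(loop, n=2.0):
    """One Hardy Cross flow correction for a single loop.

    `loop` is a list of (k, Q) pairs: k is a pipe resistance coefficient and
    Q the current flow guess, signed positive when clockwise. Head loss per
    pipe is h = k * |Q| * Q (n = 2; use n = 1.85 for Hazen-Williams).
    """
    sum_h = sum(k * abs(q) * q for k, q in loop)   # signed head losses
    sum_dh = sum(n * k * abs(q) for k, q in loop)  # n * sum(h/Q), always positive
    return -sum_h / sum_dh                         # clockwise-positive adjustment

# Hypothetical three-pipe loop: two clockwise flows, one counter-clockwise.
loop = [(10.0, 0.6), (25.0, 0.3), (40.0, -0.4)]
for _ in range(20):  # iterate until the corrections become negligible
    dq = hardy_cross_correction(loop)
    loop = [(k, q + dq) for k, q in loop]
print([round(q, 4) for _, q in loop])
```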
An adaptation of this method is needed to account for water reservoirs attached to the network, which are joined in pairs by the use of 'pseudo-loops' in the Hardy Cross scheme. This is discussed further on the Hardy Cross method site.
The modern method is simply to create a set of conditions from the above Kirchhoff laws (junction and head-loss criteria), and then use a root-finding algorithm to find Q values that satisfy all the equations. The literal friction loss equations contain a Q² term, which discards the direction of flow; to preserve changes in direction, create a separate equation for each loop in which the head losses are added up, but replace Q² with |Q|·Q (where |Q| is the absolute value of Q), so that any sign changes are reflected appropriately in the resulting head-loss calculation.
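A sketch of that formulation on a toy network of two parallel pipes sharing a fixed inflow, solved with SciPy's general root finder; the resistance coefficients and inflow are assumptions made for illustration.

```python
from scipy.optimize import fsolve

K1, K2 = 10.0, 40.0  # assumed resistance coefficients of two parallel pipes
Q_IN = 1.0           # assumed total inflow at the shared junction

def equations(q):
    q1, q2 = q
    continuity = q1 + q2 - Q_IN  # condition (1): flow into the junction
    # Condition (2): loop head loss, written with |Q|*Q instead of Q**2
    # so that the sign of each head loss follows the flow direction.
    loop = K1 * abs(q1) * q1 - K2 * abs(q2) * q2
    return [continuity, loop]

q1, q2 = fsolve(equations, x0=[0.5, 0.5])
print(f"Q1 = {q1:.3f}, Q2 = {q2:.3f}")  # more flow takes the lower-resistance pipe
```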
== Probabilistic network analysis ==
In many situations, especially for real water distribution networks in cities (which can extend to thousands or even millions of nodes), the number of known variables (flow rates and/or head losses) required to obtain a deterministic solution will be very large. Many of these variables will not be known, or will involve considerable uncertainty in their specification. Furthermore, in many pipe networks there may be considerable variability in the flows, which can be described by fluctuations about mean flow rates in each pipe. The above deterministic methods are unable to account for these uncertainties, whether due to lack of knowledge or to flow variability.
For these reasons, a probabilistic method for pipe network analysis has recently been developed, based on the maximum entropy method of Jaynes. In this method, a continuous relative entropy function is defined over the unknown parameters. This entropy is then maximized subject to the constraints on the system, including Kirchhoff's laws, pipe friction properties and any specified mean flow rates or head losses, to give a probabilistic statement (probability density function) which describes the system. This can be used to calculate mean values (expectations) of the flow rates, head losses or any other variables of interest in the pipe network. This analysis has been extended using a reduced-parameter entropic formulation, which ensures consistency of the analysis regardless of the graphical representation of the network. A comparison of Bayesian and maximum entropy probabilistic formulations for the analysis of pipe flow networks has also been presented, showing that under certain assumptions (Gaussian priors), the two approaches lead to equivalent predictions of mean flow rates.
Other methods of stochastic optimization of water distribution systems rely on metaheuristic algorithms, such as simulated annealing and genetic algorithms.
== See also ==
== References ==
== Further reading ==
N. Hwang, R. Houghtalen, "Fundamentals of Hydraulic Engineering Systems" Prentice Hall, Upper Saddle River, NJ. 1996.
L.F. Moody, "Friction factors for pipe flow," Trans. ASME, vol. 66, 1944.
C. F. Colebrook, "Turbulent flow in pipes, with particular reference to the transition region between smooth and rough pipe laws," Jour. Inst. Civil Engrs., London (Feb. 1939).
Eusuff, Muzaffar M.; Lansey, Kevin E. (2003). "Optimization of Water Distribution Network Design Using the Shuffled Frog Leaping Algorithm". Journal of Water Resources Planning and Management. 129 (3): 210–225. | Wikipedia/Pipe_network_analysis
Build–operate–transfer (BOT) or build–own–operate–transfer (BOOT) is a form of project delivery method, usually for large-scale infrastructure projects, wherein a private entity receives a concession from the public sector (or the private sector on rare occasions) to finance, design, construct, own, and operate a facility stated in the concession contract. The private entity will have the right to operate it for a set period of time. This enables the project proponent to recover its investment and operating and maintenance expenses in the project.
BOT is usually a model used in public–private partnerships. Due to the long-term nature of the arrangement, the fees are usually raised during the concession period. The rate of increase is often tied to a combination of internal and external variables, allowing the proponent to reach a satisfactory internal rate of return for its investment.
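As a toy illustration of that mechanism, the sketch below computes the internal rate of return of a hypothetical concession in which an up-front construction outlay is recovered through fees escalated each year; all figures and helper functions are invented, not drawn from any actual project.

```python
def npv(rate, cash_flows):
    """Net present value of yearly cash flows, with year 0 first."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=1.0, tol=1e-6):
    """Internal rate of return by bisection (assumes a single sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if npv(mid, cash_flows) > 0.0:
            lo = mid  # NPV still positive: the root lies at a higher rate
        else:
            hi = mid
    return (lo + hi) / 2.0

# Hypothetical 20-year concession: 100M construction cost up front, 9M net
# revenue in year one, fees escalated by 3% per year thereafter.
flows = [-100.0] + [9.0 * 1.03 ** t for t in range(20)]
print(f"IRR = {irr(flows):.2%}")  # roughly 9% for these invented figures
```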
Countries where BOT is prevalent include Thailand, Turkey, Taiwan, Bahrain, Pakistan, Saudi Arabia, Israel, India, Iran, Croatia, Japan, China, Vietnam, Malaysia, Philippines, Egypt, Myanmar and a few US states (California, Florida, Indiana, Texas, and Virginia). However, in some countries, such as Canada, Australia, New Zealand and Nepal, the term used is build–own–operate–transfer (BOOT).
The first BOT was for the China Hotel, built in 1979 by the Hong Kong listed conglomerate Hopewell Holdings Ltd (controlled by Sir Gordon Wu).
== BOT framework ==
BOT finds extensive application in infrastructure projects and public–private partnership. In the BOT framework, a third party, for example the public administration, delegates to a private-sector entity the design and building of infrastructure and the operation and maintenance of these facilities for a certain period. During this period, the private party has the responsibility to raise the finance for the project, is entitled to retain all revenues generated by the project, and is the owner of the facilities in question. The facility will then be transferred to the public administration at the end of the concession agreement, without any remuneration of the private entity involved.
Some or even all of the following different parties could be involved in any BOT project:
The host government: Normally, the government is the initiator of the infrastructure project and decides if the BOT model is appropriate to meet its needs. In addition, the political and economic circumstances are main factors for this decision. The government normally provides support for the project in some form (e.g., provision of the land or changed laws).
The concessionaire: The project sponsors who act as concessionaire create a special purpose entity which is capitalised through their financial contributions.
Lending banks: Most BOT projects are funded to a large extent by commercial debt. The bank will be expected to finance the project on a "non-recourse" basis, meaning that it has recourse only to the special purpose entity and all its assets for the repayment of the debt.
Other lenders: The special purpose entity might have other lenders such as national or regional development banks.
Parties to the project contracts: Because the special purpose entity has only a limited workforce, it will subcontract a third party to perform its obligations under the concession agreement. Additionally, it has to ensure that it has adequate supply contracts in place for the raw materials and other resources necessary for the project.
A BOT project is typically used to develop a discrete asset rather than a whole network and is generally entirely new or greenfield in nature (although refurbishment may be involved). In a BOT project the project company or operator generally obtains its revenues through a fee charged to the utility/government rather than tariffs charged to consumers. A number of projects are called concessions, such as toll road projects, which are new build and have a number of similarities to BOTs.
In general, a project is financially viable for the private entity if the revenues generated by the project cover its cost and provide a sufficient return on investment. On the other hand, the viability of the project for the host government depends on its efficiency in comparison with the economics of financing the project with public funds. Even if the host government could borrow money on better terms than a private company could, other factors could offset this particular advantage, for example the expertise and efficiency that the private entity is expected to bring, as well as the risk transfer. The private entity therefore bears a substantial part of the risk. These are some of the most common types of risk involved:
Political risk: especially in the developing countries because of the possibility of dramatic overnight political change.
Technical risk: construction difficulties, for example unforeseen soil conditions or breakdown of equipment.
Financing risk: foreign exchange rate risk and interest rate fluctuation, market risk (change in the price of raw materials), income risk (over-optimistic cash-flow forecasts), cost overrun risk.
== Alternatives to BOT ==
The scale of investment by the private sector and the type of arrangement mean there is typically no strong incentive for early completion of a project or for delivering a product at a reasonable price. This type of private-sector participation is also known as design-build.
Modified versions of the "turnkey" procurement and BOT "build-operate-transfer" models exist for different types of public-private partnership (PPP) projects, in which the main contractor is appointed to design and construct the works. This contrasts with the traditional procurement route (the build-design model), where the client first appoints consultants to design the development and then a contractor to construct the work.
The private contractor designs and builds a facility for a fixed fee, rate, or total cost, which is one of the key criteria in selecting the winning bid. The contractor assumes the risks involved in the design and construction phases.
Turnkey procurement under a design-build contract means that the design-build team would serve as the owner’s representative to determine the specific needs of the user groups; meet with the vendors to select the best options and pricing; advise the owner on the most logical options; plan and build the spaces to accommodate the function of the project; coordinate purchases and timelines; install the infrastructure; facilitate training of staff to use the equipment; and outline care and maintenance. In addition to being responsible for the design and construction of the work to the employer’s requirements, the contractor is also responsible for operating and maintaining the completed facility. The operation and maintenance period will span decades, during which time the contractor is said to have the "concession," is responsible for the operation of the facility, and benefits from operational income. The facility itself, however, remains the property of the employer.
A DBO (design-build-operate) contract is a project delivery model in which a single contractor is appointed to design and build a project and then to operate it for a period of time.
The common form of such a contract is a PPP (public-private partnership), in which a public client (e.g., a government or public agency) enters into a contract with a private contractor to design, build, and then operate the project, while the client finances the project and retains ownership.
DBFO stands for design-build-finance-operate, which also assigns the private organization responsibility for design, building, financing, and operation. Financing can be comparatively easy to obtain when there is high current demand for a service, such as a new airport in a busy metropolis, since investors are eager to back projects that promise such returns.
BLT stands for build-lease-transfer, in which the public sector partner leases the project from the contractor and also takes responsibility for its operation.
ROT (renovate-operate-transfer) is a procurement method for infrastructure that already exists but is performing substandardly.
When essential services are no longer operating efficiently or effectively, repairs can be costly. When an obsolete facility or amenity (any public service, such as telephone lines) becomes outdated and requires expensive repairs, it can be financed through public-private partnerships between public entities and private contractors that are able to provide renovation services and operate the facility after the repairs have been completed.
== Economic theory ==
In contract theory, several authors have studied the pros and cons of bundling the building and operating stages of infrastructure projects. In particular, Oliver Hart (2003) has used the incomplete contracting approach to investigate whether incentives to make non-contractible investments are smaller or larger when the different stages of the project are combined under one private contractor. Hart (2003) argues that, under bundling, incentives to make cost-reducing investments are larger than under unbundling. However, the incentives to make cost-reducing investments may sometimes be excessive because they lead to overly large reductions in quality, so whether bundling or unbundling is optimal depends on the details of the project. Hart's (2003) work has been extended in many directions. For example, Bennett and Iossa (2006) and Martimort and Pouyet (2008) investigate the interaction of bundling and ownership rights, while Hoppe and Schmitz (2013, 2021) explore the implications of bundling for making innovations.
== See also ==
Adelaide–Darwin railway
Central Texas Turnpike System
Confederation Bridge
Pay on production
Private finance initiative
Privatization
Project finance
Shadow toll
== References == | Wikipedia/Build–operate–transfer |
A project delivery method defines the characteristics of how a construction project is designed and built and the responsibilities of the parties involved in the construction (owner, designer and contractor). Delivery methods are used by a construction manager working as an agent of the owner, or by the owner itself, to carry out a construction project while mitigating the risks to the scope of work, time, budget, quality and safety of the project. These risks range from cost overruns and time delays to conflict among the various parties.
== History ==
=== Trends in delivery methods ===
Though DBB is now used for most private projects and the majority of public projects, it has not historically been the predominant delivery method of choice. The master builders of centuries past acted both as designers and constructors for both public and private clients. In the United States, Zane's Post Road in Ohio and the IRT in New York City were both originally developed under more integrated delivery methods, as were most infrastructure projects until 1933. Integrated Project Delivery offers a new delivery method intended to remove considerable waste from the construction process and improve quality, while returning to the more collaborative methods of the past.
In an effort to assist industry professionals with the selection of appropriate project delivery systems, construction management researchers have prepared a Procurement Method and Contract Selection Model, which can be used for high level decision making for construction projects on a case-by-case basis.
== Types ==
Common project delivery methods include:
=== Design-Bid-Build (DBB) or Design-Award-Build (DAB) ===
In Design-Bid-Build, the owner develops contract documents with an architect or an engineer, consisting of a set of blueprints and a detailed specification. Bids are solicited from contractors based on these documents, and a contract is then awarded to the lowest responsive and responsible bidder. This is the traditional model for public-sector infrastructure projects.
==== DBB with Construction Management (DBB with CM) ====
DBB with Construction Management is a modified version of the Design-Bid-Build approach. With partially completed contract documents, an owner will hire a construction manager to act as an agent. As substantial portions of the documents are completed, the construction manager will solicit bids from suitable subcontractors. This allows construction to proceed more quickly and allows the owner to share some of the risk inherent in the project with the construction manager.
=== Design-Build (DB) or Design-Construct (DC) ===
In Design-Build, an owner develops a conceptual plan for a project, then solicits bids from joint ventures of architects and/or engineers and builders for the design and construction of the project. This is an alternative to the traditional model for public infrastructure projects that does not involve private financing.
==== Design-Build-Operate-Maintain (DBOM) ====
DBOM takes DB one step further by including the operations and maintenance of the completed project in the same original contract.
=== Integrated Project Delivery (IPD) ===
Integrated Project Delivery seeks to involve all participants (people, systems, business structures and practices) through all phases of design, fabrication, and construction, with the goal of improving project efficiency and reducing "waste" in project delivery (i.e. any processes that do not directly add value to the final product). IPD is closely associated with the philosophy of Lean construction.
==== Job Order Contracting (JOC) ====
A form of Integrated Project Delivery (IPD) specifically for repair, renovation, maintenance, sustainability, and "minor" new construction. Each job order contract uses a Unit Price Book for pricing each job via a multi-year umbrella contract.
=== Public-private partnership (PPP, 3P, or P3) ===
A public–private partnership is a cooperative arrangement between one or more public entities (typically the owner) and another (typically private-sector) entity to design, build, finance, and at times operate and maintain the project for a specified period of time on behalf of the owner. At a minimum, public-private partnership refers to the idea of cooperation between the public sector and the private sector.
The following models are usually used for P3 projects, though they are also sometimes used for private sector projects.
==== Build-Finance (BF) ====
The private actor builds the asset and finances the cost during the construction period, afterwards the responsibility is handed over to the public entity. In terms of private-sector risk and involvement, this model is again on the lower end of the spectrum for both measures.
==== Build-Operate-Transfer (BOT) ====
Build-Operate-Transfer represents a complete integration of the project delivery: the same contract governs the design, construction, operations, maintenance, and financing of the project. After some concessionary period, the facility is transferred back to the owner.
==== Build–own–operate–transfer (BOOT) ====
A BOOT structure differs from BOT in that the private entity owns the works. During the concession period, the private company owns and operates the facility with the prime goal of recovering the costs of investment and maintenance while trying to achieve a higher margin on the project. BOOT has been used in projects like highways, roads, mass transit, railway transport and power generation.
==== Build–own–operate (BOO) ====
In a BOO project, ownership of the project usually remains with the project company, for example a mobile phone network. The private company therefore gets the benefit of any residual value of the project. This framework is used when the physical life of the project coincides with the concession period. A BOO scheme involves large amounts of finance and a long payback period. Some examples of BOO projects come from water treatment plants.
==== Build–lease–transfer (BLT) ====
Under BLT, a private entity builds a complete project and leases it to the government. In this way the control over the project is transferred from the project owner to a lessee. In other words, ownership remains with the shareholders, but the asset is leased out for operating purposes. After expiry of the lease, ownership of the asset and operational responsibility are transferred to the government at a previously agreed price.
==== Design-Build-Finance-Maintain (DBFM) ====
"The private sector designs, builds and finances an asset and provides hard facility management or maintenance services under a long-term agreement." The owner (usually the public sector) operates the facility. This model is in the middle of the spectrum for private sector risk and involvement.
==== Design–build–finance–operate-maintain (DBFOM) or Design–build–finance–maintain-operate (DBFMO) ====
Design–build–finance–operate-maintain (DBFOM) also referred to as Design–build–finance–maintain-operate (DBFMO) is a project delivery method very similar to BOOT except that there is no actual ownership transfer. Moreover, the contractor assumes the risk of financing until the end of the contract period. The owner then assumes the responsibility for maintenance and operation. This model is extensively used in specific infrastructure projects such as toll roads. The private construction company is responsible for the design and construction of a piece of infrastructure for the government, which is the true owner. Moreover, the private entity has the responsibility to raise finance during the construction and the exploitation period. Usually, the public sector begins payments to the private sector for use of the asset post-construction. This is the most commonly used model in the EU according to the European Court of Auditors.
==== Design–build–operate–transfer (DBOT) ====
This funding option is common when the client has no knowledge of what the project entails. Hence the project is contracted to a company to design, build, operate, and then transfer it. Examples of such projects are refinery constructions.
==== Design–construct–manage–finance (DCMF) ====
A private entity is entrusted to design, construct, manage, and finance a facility, based on the specifications of the government. Project cash flows result from the government's payment for the rent of the facility. Some examples of the DCMF model are prisons or public hospitals.
== Conceptual differences between delivery methods ==
There are two key variables which account for the bulk of the variation between delivery methods:
The extent of the integration of the various service providers.
The extent to which the owner is directly financing the project.
When the various service providers are segmented, the owner has the most control, but this control is costly and does not give each provider an incentive to optimize its contribution for the next service. When there is tight integration amongst providers, each step of the delivery is undertaken with future activities in mind, resulting in cost savings, but limiting the owner's influence throughout the project.
The owner's direct financing of a project simply means that the owner directly pays the providers for their services. In the case of a facility with a consistent revenue stream, indirect financing becomes possible: rather than be paid by the owner, the providers are paid with the revenue collected from the facility's operation.
Indirect financing risks being mistaken for privatization. Though the providers do have a concession to operate and collect revenue from a facility that they built and financed, the structure itself remains the property of the owner (usually a government agency in the case of public infrastructure).
=== Level of private involvement ===
== References == | Wikipedia/Project_delivery_method |
A designer is a person who plans the form or structure of something before it is made, by preparing drawings or plans. In practice, anyone who creates tangible or intangible objects, products, processes, laws, games, graphics, services, or experiences can be called a designer.
== Overview ==
A designer is someone who conceptualizes and creates new concepts, ideas, or products for consumption by the general public. This differs from an artist, who creates art for a select few to understand or appreciate. However, both domains require some understanding of aesthetics. Historically, the design of clothing, furniture, and other common artifacts was left mostly to tradition or to artisans specializing in making them by hand.
With the increasing complexity of today's industrial society, and because in mass production more time is usually associated with more cost, production methods have become more complex, and with them the ways designs and their production are created. The classical areas are now subdivided into smaller and more specialized domains of design (landscape design, urban design, interior design, industrial design, furniture design, fashion design, and much more) according to the product designed or its means of production. Despite the various specializations within the design industry, all of them share similarities in approach, skills, and methods of working.
Using design methods and design thinking to resolve problems and create new solutions is the most important aspect of being a designer. Part of a designer's job is to get to know the audience they intend to serve.
In education, the methods of teaching or the program and theories followed vary according to schools and field of study. In industry, a design team for large projects is usually composed of a number of different types of designers and specialists. The relationships between team members will vary according to the proposed product, the processes of production or the research followed during the idea development, but normally they give an opportunity to everyone in the team to take a part in the creation process.
== Design professions ==
Different types of designers include:
Animation
Architecture
Communication design
Costume design
Engineering design
Fashion design
Floral design
Furniture design
Game design
Graphic design
Industrial design
Instructional design
Interaction design
Interior design
Jewelry design
Landscape design
Logo design
Lighting design
Packaging design
Product design
Scenic design
Service design
Software design
Sound design
Strategic design
Systems design
Textile design
Urban design
User experience design
User interface design
Web design
== See also ==
Architect
Design
Design engineer
Design firm
Design thinking
Visual arts
== References == | Wikipedia/Designers |
The Federal Law Enforcement Training Centers (FLETC) serves as an interagency law enforcement training body for 105 United States government federal law enforcement agencies. The stated mission of FLETC is to "...train those who protect our homeland". Through the Rural Policing Institute (RPI) and the Office of State and Local Training, it provides tuition-free and low-cost training to state, local, campus and tribal law enforcement agencies.
== History ==
Studies conducted in the late 1960s revealed an urgent need for training by professional instructors using modern training facilities and standardized course content. Congress authorized funds for planning and constructing the Consolidated Federal Law Enforcement Training Center (CFLETC). In 1970, the CFLETC was established as a bureau of the U.S. Department of the Treasury (Treasury Order #217) and began training operations in temporary facilities in Washington, D.C.
The permanent location of the center was originally planned for the Washington, D.C., area. However, a three-year construction delay resulted in Congress requesting that surplus federal installations be surveyed to determine if one could serve as the permanent site. In May 1975, after a review of existing facilities, the former Naval Air Station Glynco was selected. In the summer of 1975, the newly renamed Federal Law Enforcement Training Center (FLETC) relocated from Washington, D.C., and began training in September of that year at Glynco, Georgia. Glynco is the headquarters site and main campus for the FLETC and houses the senior leadership of the organization.
On March 1, 2003, FLETC formally transferred from the Treasury Department to the newly established U.S. Department of Homeland Security (DHS), along with some 22 other federal agencies and entities. The move reflected the centrality of the FLETC's mission in support of the unified homeland security effort.
== Headquarters ==
The FLETC headquarters are at the former Naval Air Station Glynco in the Glynco area of unincorporated Glynn County, Georgia, near the port city of Brunswick, Georgia, and about halfway between Savannah, Georgia, and Jacksonville, Florida. The FLETC Orlando team, located at the Naval Air Warfare Center Training Systems Division in Orlando, Florida, trains with branches of the United States Armed Forces, evaluating new and existing training technologies for their ability to meet law enforcement training needs. The Los Angeles Regional Maritime Law Enforcement Training Center in Los Angeles, California, has partnered with FLETC, the Los Angeles County Sheriff's Department, and state and local agencies to develop comprehensive maritime training. FLETC has oversight and program management responsibilities for the International Law Enforcement Academies (ILEA) in Gaborone, Botswana; San Salvador, El Salvador; and Lima, Peru. It also supports training at ILEAs in Budapest, Hungary and Bangkok, Thailand.
=== Locations ===
Artesia, New Mexico
Charleston, South Carolina
Cheltenham, Maryland
Brunswick, Georgia
== Parent department ==
The FLETC's parent department, the DHS, supervises its administrative and financial activities. As an interagency training organization, FLETC has professionals from diverse backgrounds to serve on its faculty and staff. Approximately one-third of the instructor staff are permanent FLETC employees. The remainder are federal officers and investigators on short-term assignment from their parent organizations. Agencies take part in curriculum review and development conferences and help develop policies and directives.
== Affiliations ==
Partner organizations have input regarding training issues and functional aspects of the Center. The current partner organizations are:
Administrative Office of the United States Courts
U.S. Probation and Pretrial Services System
Amtrak (National Railroad Passenger Corporation)
Office of Inspector General
Police Department
Central Intelligence Agency
The Directorate of Digital Innovation
The Directorate of Analysis
The Directorate of Operations
The Directorate of Support
The Directorate of Science and Technology
Executive Office
Department of Agriculture
Animal Plant Health Inspection Services
Food Safety Inspection Service
Office of the Inspector General
United States Forest Service
Department of Commerce
Bureau of Industry and Security, Office of Export Enforcement
National Institute of Standards and Technology
National Oceanic and Atmospheric Administration
National Marine Fisheries Service
Department of Defense
Department of the Air Force
United States Air Force Office of Special Investigations
Department of the Army
Army Criminal Investigation Division
Army Counterintelligence Command
Department of the Navy
Commander, Navy Installations Command
Naval Criminal Investigative Service
Defense Intelligence Agency
Defense Logistics Agency
National Geospatial-Intelligence Agency
National Security Agency
Office of Inspector General
Defense Criminal Investigative Service
Pentagon Force Protection Agency
Department of Education
Office of the Inspector General
Department of Energy
National Nuclear Security Administration - Office of Secure Transportation
Office of Health, Safety and Security
Office of Inspector General
Department of Health and Human Services
Center for Disease Control and Prevention – Office of Safety, Security and Asset Management
Food and Drug Administration - Office of Criminal Investigations
National Institute of Health
Office of the Inspector General
Department of Homeland Security
Customs and Border Protection
United States Border Patrol
Immigration & Customs Enforcement
Enforcement and Removal Operations
Federal Emergency Management Agency - Office of Security
Federal Protective Service
Homeland Security Investigations
Intelligence Analysis Operations
Office of Inspector General
Office of Professional Responsibility
Transportation Security Administration
Federal Air Marshal Service
United States Secret Service
United States Citizenship and Immigration Services
United States Coast Guard
Coast Guard Investigative Service
Marine Law Enforcement Academy
Department of Housing and Urban Development
Office of the Inspector General
Protective Services Division
Department of Interior
Bureau of Indian Affairs
Bureau of Land Management
Bureau of Reclamation
Office of Inspector General
Office of Law Enforcement and Security
Office of Surface Mining Reclamation and Enforcement
National Park Service
United States Park Police
United States Park Rangers
United States Fish and Wildlife Service
Law Enforcement
Refuge
Department of Justice
Bureau of Alcohol, Tobacco, Firearms and Explosives
Federal Bureau of Prisons
Office of the Inspector General
United States Marshals Service
Department of Labor
Office of Inspector General
Office of Labor-Management Standards
Department of State
Diplomatic Security Service
Office of Inspector General
United States Agency for International Development - Office of Inspector General
Department of Transportation
Office of Inspector General
Federal Aviation Administration
Department of the Treasury
Bureau of Engraving and Printing
Financial Crimes Enforcement Network
Internal Revenue Service, Criminal Investigation
Office of Terrorism and Financial Intelligence - Office of Foreign Assets Control
Office of the Inspector General
Treasury Inspector General for Tax Administration
United States Mint
United States Mint Police
Department of Veterans Affairs
Law Enforcement Training Center
Office of the Inspector General
Environmental Protection Agency
Criminal Investigation Division
Office of the Inspector General
Federal Deposit Insurance Corporation - Office of the Inspector General
Federal Reserve System
General Services Administration - Office of the Inspector General
Government Publishing Office
Office of the Inspector General
Security Services
National Aeronautics and Space Administration - Office of the Inspector General
Nuclear Regulatory Commission
Office of the Inspector General
Office of Personnel Management
Office of the Inspector General
Railroad Retirement Board - Office of the Inspector General
Small Business Administration
Office of the Inspector General
Smithsonian Institution
National Zoological Park Police
Office of Protective Services
Social Security Administration
Office of the Inspector General
Tennessee Valley Authority
Office of the Inspector General
Police Department
United States Capitol Police
United States Postal Service
Office of the Inspector General
United States Supreme Court Police
== See also ==
FBI Academy
== References ==
Attribution
This article incorporates public domain material from websites or documents of the United States Department of Homeland Security.
== Further reading ==
== External links ==
Official website
FLETC Orientation on YouTube | Wikipedia/Federal_Law_Enforcement_Training_Centers |
Interior design is the art and science of enhancing the interior of a building to achieve a healthier and more aesthetically pleasing environment for the people using the space. With a keen eye for detail and a creative flair, an interior designer is someone who plans, researches, coordinates, and manages such enhancement projects. Interior design is a multifaceted profession that includes conceptual development, space planning, site inspections, programming, research, communicating with the stakeholders of a project, construction management, and execution of the design.
== History and current terms ==
In the past, interiors were put together instinctively as a part of the process of building.
The profession of interior design has been a consequence of the development of society and of the complex architecture that has resulted from industrial processes.
The pursuit of effective use of space, user well-being and functional design has contributed to the development of the contemporary interior design profession. The profession of interior design is separate and distinct from the role of interior decorator, a term commonly used in the US; the term is less common in the UK, where the profession of interior design is still unregulated and therefore, strictly speaking, not yet officially a profession.
In ancient India, architects would also function as interior designers. This can be seen from the references of Vishwakarma the architect—one of the gods in Indian mythology. In these architects' design of 17th-century Indian homes, sculptures depicting ancient texts and events are seen inside the palaces, while during the medieval times wall art paintings were a common feature of palace-like mansions in India commonly known as havelis. While most traditional homes have been demolished to make way to modern buildings, there are still around 2000 havelis in the Shekhawati region of Rajasthan that display wall art paintings.
In ancient Egypt, "soul houses" (or models of houses) were placed in tombs as receptacles for food offerings. From these, it is possible to discern details about the interior design of different residences throughout the different Egyptian dynasties, such as changes in ventilation, porticoes, columns, loggias, windows, and doors.
Painting interior walls has existed for at least 5,000 years, with examples found as far north as the Ness of Brodgar, as have templated interiors, as seen in the associated Skara Brae settlement. It was the Greeks, and later the Romans, who in the first millennium BC added co-ordinated, decorative mosaic floors and templated interiors for bath houses, shops, civil offices, castra (forts) and temples, with specialised guilds dedicated to producing interior decoration and formulaic furniture, in buildings constructed to forms defined by Roman architects such as Vitruvius in De architectura, libri decem (The Ten Books on Architecture).
Throughout the 17th and 18th century and into the early 19th century, interior decoration was the concern of the homemaker, or an employed upholsterer or craftsman who would advise on the artistic style for an interior space. Architects would also employ craftsmen or artisans to complete interior design for their buildings.
=== Commercial interior design and management ===
In the mid-to-late 19th century, interior design services expanded greatly, as the middle class in industrial countries grew in size and prosperity and began to desire the domestic trappings of wealth to cement their new status. Large furniture firms began to branch out into general interior design and management, offering full house furnishings in a variety of styles. This business model flourished from the mid-century to 1914, when this role was increasingly usurped by independent, often amateur, designers. This paved the way for the emergence of professional interior design in the mid-20th century.
In the 1850s and 1860s, upholsterers began to expand their business remits. They framed their business more broadly and in artistic terms and began to advertise their furnishings to the public. To meet the growing demand for contract interior work on projects such as offices, hotels, and public buildings, these businesses became much larger and more complex, employing builders, joiners, plasterers, textile designers, artists, and furniture designers, as well as engineers and technicians to fulfil the job. Firms began to publish and circulate catalogs with prints for different lavish styles to attract the attention of the expanding middle classes.
As department stores increased in number and size, retail spaces within shops were furnished in different styles as examples for customers. One particularly effective advertising tool was to set up model rooms at national and international exhibitions in showrooms for the public to see. Some of the pioneering firms in this regard were Waring & Gillow, James Shoolbred, Mintons, and Holland & Sons. These traditional high-quality furniture making firms began to play an important role as advisers to unsure middle class customers on taste and style, and began taking out contracts to design and furnish the interiors of many important buildings in Britain.
This type of firm emerged in America after the Civil War. The Herter Brothers, founded by two German émigré brothers, began as an upholstery warehouse and became one of the first firms of furniture makers and interior decorators. With their own design office and cabinet-making and upholstery workshops, Herter Brothers were prepared to accomplish every aspect of interior furnishing including decorative paneling and mantels, wall and ceiling decoration, patterned floors, and carpets and draperies.
A pivotal figure in popularizing theories of interior design to the middle class was the architect Owen Jones, one of the most influential design theorists of the nineteenth century. Jones' first project was his most important: in 1851, he was responsible not only for the decoration of Joseph Paxton's gigantic Crystal Palace for the Great Exhibition but also for the arrangement of the exhibits within. He chose a controversial palette of red, yellow, and blue for the interior ironwork and, despite initial negative publicity in the newspapers, the scheme was eventually unveiled by Queen Victoria to much critical acclaim. His most significant publication was The Grammar of Ornament (1856), in which Jones formulated 37 key principles of interior design and decoration.
Jones was employed by some of the leading interior design firms of the day; in the 1860s, he worked in collaboration with the London firm Jackson & Graham to produce furniture and other fittings for high-profile clients including art collector Alfred Morrison as well as Ismail Pasha, Khedive of Egypt.
In 1882, the London Directory of the Post Office listed 80 interior decorators. Some of the most distinguished companies of the period were Crace, Waring & Gillow, and Holland & Sons; famous decorators employed by these firms included Thomas Edward Collcutt, Edward William Godwin, Charles Barry, Gottfried Semper, and George Edmund Street.
=== Transition to professional interior design ===
By the turn of the 20th century, amateur advisors and publications were increasingly challenging the monopoly that the large retail companies had on interior design. English feminist author Mary Haweis wrote a series of widely read essays in the 1880s in which she derided the eagerness with which aspiring middle-class people furnished their houses according to the rigid models offered to them by the retailers. She advocated the individual adoption of a particular style, tailor-made to the individual needs and preferences of the customer: "One of my strongest convictions, and one of the first canons of good taste, is that our houses, like the fish's shell and the bird's nest, ought to represent our individual taste and habits."
The move toward decoration as a separate artistic profession, unrelated to the manufacturers and retailers, received an impetus with the 1899 formation of the Institute of British Decorators; with John Dibblee Crace as its president, it represented almost 200 decorators around the country. By 1915, the London Directory listed 127 individuals trading as interior decorators, of which 10 were women. Rhoda Garrett and Agnes Garrett were the first women to train professionally as home decorators in 1874. The importance of their work on design was regarded at the time as on a par with that of William Morris. In 1876, their work – Suggestions for House Decoration in Painting, Woodwork and Furniture – spread their ideas on artistic interior design to a wide middle-class audience.
By 1900, the situation was described by The Illustrated Carpenter and Builder: "Until recently when a man wanted to furnish he would visit all the dealers and select piece by piece of furniture ... Today he sends for a dealer in art furnishings and fittings who surveys all the rooms in the house and he brings his artistic mind to bear on the subject." In America, Candace Wheeler was one of the first female interior designers and helped encourage a new style of American design. She was instrumental in the development of art courses for women in a number of major American cities and was considered a national authority on home design. An important influence on the new profession was The Decoration of Houses, a manual of interior design written by Edith Wharton with architect Ogden Codman in 1897 in America. In the book, the authors denounced Victorian-style interior decoration and interior design, especially those rooms that were decorated with heavy window curtains, Victorian bric-a-brac, and overstuffed furniture. They argued that such rooms emphasized upholstery at the expense of proper space planning and architectural design and were, therefore, uncomfortable and rarely used. The book is considered a seminal work, and its success led to the emergence of professional decorators working in the manner advocated by its authors, most notably Elsie de Wolfe.
Elsie De Wolfe was one of the first interior designers. Rejecting the Victorian style she grew up with, she chose a more vibrant scheme, along with more comfortable furniture in the home. Her designs were light, with fresh colors and delicate Chinoiserie furnishings, as opposed to the Victorian preference of heavy, red drapes and upholstery, dark wood and intensely patterned wallpapers. Her designs were also more practical; she eliminated the clutter that occupied the Victorian home, enabling people to entertain more guests comfortably. In 1905, de Wolfe was commissioned for the interior design of the Colony Club on Madison Avenue; its interiors garnered her recognition almost overnight. She compiled her ideas into her widely read 1913 book, The House in Good Taste.
In England, Syrie Maugham became a legendary interior designer credited with designing the first all-white room. Starting her career in the early 1910s, her international reputation soon grew; she later expanded her business to New York City and Chicago. Born during the Victorian Era, a time characterized by dark colors and small spaces, she instead designed rooms filled with light and furnished in multiple shades of white and mirrored screens. In addition to mirrored screens, her trademark pieces included books covered in white vellum, cutlery with white porcelain handles, console tables with plaster palm-frond, shell, or dolphin bases, upholstered and fringed sleigh beds, fur carpets, dining chairs covered in white leather, and lamps of graduated glass balls and wreaths.
=== Expansion ===
The interior design profession became more established after World War II. From the 1950s onwards, spending on the home increased. Interior design courses were established, requiring the publication of textbooks and reference sources. Historical accounts of interior designers and firms distinct from the decorative arts specialists were made available. Organisations to regulate education, qualifications, standards and practices, etc. were established for the profession.
Interior design was previously seen as playing a secondary role to architecture. It also has many connections to other design disciplines, involving the work of architects, industrial designers, engineers, builders, craftsmen, etc. For these reasons, the governance of interior design standards and qualifications was often incorporated into other professional organisations that involved design. Organisations such as the Chartered Society of Designers, established in the UK in 1986, and the American Designers Institute, founded in 1938, governed various areas of design.
It was not until later that specific representation for the interior design profession was developed. The US National Society of Interior Designers was established in 1957, while in the UK the Interior Decorators and Designers Association was established in 1966. Across Europe, other organisations such as The Finnish Association of Interior Architects (1949) were being established and in 1994 the International Interior Design Association was founded.
Ellen Mazur Thomson, author of Origins of Graphic Design in America (1997), determined that professional status is achieved through education, self-imposed standards and professional gate-keeping organizations. Having achieved this, interior design became an accepted profession.
== Interior decorators and interior designers ==
Interior design is the art and science of understanding people's behavior to create functional spaces, that are aesthetically pleasing, within a building. Decoration is the furnishing or adorning of a space with decorative elements, sometimes complemented by advice and practical assistance. In short, interior designers may decorate, but decorators do not design.
=== Interior designer ===
Interior designer implies that there is more of an emphasis on planning, functional design and the effective use of space, as compared to interior decorating. An interior designer can undertake projects that include arranging the basic layout of spaces within a building as well as projects that require an understanding of technical issues such as window and door positioning, acoustics, and lighting. Although an interior designer may create the layout of a space, they may not alter load-bearing walls without having their designs stamped for approval by a structural engineer. Interior designers often work directly with architects, engineers and contractors.
Interior designers must be highly skilled in order to create interior environments that are functional, safe, and adhere to building codes, regulations and ADA requirements. They go beyond the selection of color palettes and furnishings and apply their knowledge to the development of construction documents, occupancy loads, healthcare regulations and sustainable design principles, as well as the management and coordination of professional services including mechanical, electrical, plumbing, and life safety—all to ensure that people can live, learn or work in a safe environment that is also aesthetically pleasing.
Someone may wish to specialize and develop technical knowledge specific to one area or type of interior design, such as residential design, commercial design, hospitality design, healthcare design, universal design, exhibition design, furniture design, and spatial branding.
Interior design is a creative profession that is relatively new, constantly evolving, and often confusing to the public. It is not always an artistic pursuit and can rely on research from many fields to provide a well-trained understanding of how people are often influenced by their environments.
=== Color in interior design ===
Color is a powerful design tool in decoration, as well as in interior design, which is the art of composing and coordinating colors to create a stylish scheme for the interior architecture of the space.
It can be important to interior designers to acquire a deep experience with colors, understand their psychological effects, and understand the meaning of each color in different locations and situations in order to create suitable combinations for each place. Color is something that an interior designer needs to understand. Color can affect the way that humans think, feel, or look at a space, and it can have a major effect on human behavior across all ages. An interior designer must understand that different colors can easily overstimulate people depending on the environment. Color can also have effects on a room. For example, if someone is claustrophobic, then painting a room in darker colors could make the room feel smaller, and therefore the person could feel trapped.
Combining colors together could result in creating a state of mind as seen by the observer, and could eventually result in positive or negative effects on them. Colors can make a room feel more calm, cheerful, comfortable, stressful, or dramatic. Color combinations can make a tiny room seem larger or smaller. So it falls to the interior designer to choose colors appropriate to how clients want to look at, and feel in, a space.
In 2024, red-colored home accessories were popularized on social media and in several design magazines for their claimed ability to enhance interior design. This was coined the Unexpected Red Theory.
== Lighting ==
Lighting is very important when designing a space, as it affects the way a room is perceived. By combining natural and artificial lighting a designer can enhance the features of a space and make it more pleasing. When placing lighting in a home it is important to know what lighting to put where and how to use it to highlight important places in the room. Lighting can enhance the aesthetic appeal of a place and set the mood for a room. For example, an office should combine overhead lighting, task/desk lighting and natural lighting; making sure there is enough light in a workspace is important so the person using it does not strain their eyesight.
== Specialties ==
=== Residential ===
Residential design is the design of the interior of private residences. As this type of design is specific for individual situations, the needs and wants of the individual are paramount in this area of interior design. The interior designer may work on the project from the initial planning stage or may work on the remodeling of an existing structure. It is often a process that takes months to fine-tune and create a space with the vision of the client.
=== Commercial ===
Commercial design encompasses a wide range of subspecialties.
Retail: includes malls and shopping centers, department stores, specialty stores, visual merchandising, and showrooms.
Visual and spatial branding: The use of space as a medium to express a corporate brand.
Corporate: office design for any kind of business such as banks.
Healthcare: the design of hospitals, assisted living facilities, medical offices, dentist offices, psychiatric facilities, laboratories, medical specialist facilities.
Hospitality and recreation: includes hotels, motels, resorts, cruise ships, cafes, bars, casinos, nightclubs, theaters, music and concert halls, opera houses, sports venues, restaurants, gyms, health clubs and spas, etc.
Institutional: government offices, financial institutions (banks and credit unions), schools and universities, religious facilities, etc.
Industrial facilities: manufacturing and training facilities as well as import and export facilities.
Exhibition: includes museums, galleries, and exhibition halls, especially the design of showrooms and exhibition galleries.
Traffic buildings: includes bus stations, subway stations, airports, piers, etc.
Sports: includes gyms, stadiums, swimming pools, basketball halls, etc.
Teaching in a private institute that offers classes in interior design.
Self-employment.
Employment in private sector firms.
=== Other ===
Other areas of specialization include amusement and theme park design, museum and exhibition design, exhibit design, event design (including ceremonies, weddings, baby and bridal showers, parties, conventions, and concerts), interior and prop styling, craft styling, food styling, product styling, tablescape design, theatre and performance design, stage and set design, scenic design, and production design for film and television. Beyond those, interior designers, particularly those with graduate education, can specialize in healthcare design, gerontological design, educational facility design, and other areas that require specialized knowledge. Some university programs offer graduate studies in these and other areas. For example, both Cornell University and the University of Florida offer interior design graduate programs in environment and behavior studies.
== Profession ==
=== Education ===
There are various paths that one can take to become a professional interior designer. All of these paths involve some form of training. Working with a successful professional designer is an informal method of training and has previously been the most common method of education. In many states, however, this path alone cannot lead to licensing as a professional interior designer. Training through an institution such as a college, art or design school or university is a more formal route to professional practice.
In many countries, several university degree courses are now available, including those on interior architecture, taking three or four years to complete.
A formal education program, particularly one accredited by or developed with a professional organization of interior designers, can provide training that meets a minimum standard of excellence and therefore gives a student an education of a high standard. There are also university graduate and Ph.D. programs available for those seeking further training in a specific design specialization (i.e. gerontological or healthcare design) or those wishing to teach interior design at the university level.
=== Working conditions ===
There are a wide range of working conditions and employment opportunities within interior design. Large and small corporations often hire interior designers as employees on regular working hours. Designers for smaller firms and online renovation platforms usually work on a contract or per-job basis. Self-employed designers, who made up 32% of interior designers in 2020, usually work the most hours. Interior designers often work under stress to meet deadlines, stay on budget, and meet clients' needs and wishes.
In some cases, licensed professionals review the work and sign it before submitting the design for approval by clients or construction permitting. The need for licensed review and signature varies by locality, relevant legislation, and scope of work. Their work can involve significant travel to visit different locations. However, with technology development, the process of contacting clients and communicating design alternatives has become easier and requires less travel.
== Styles ==
=== Art Deco ===
The Art Deco style began in Europe in the early years of the 20th century, with the waning of Art Nouveau. The term "Art Deco" was taken from the Exposition Internationale des Arts Decoratifs et Industriels Modernes, a world's fair held in Paris in 1925. Art Deco rejected many traditional classical influences in favour of more streamlined geometric forms and metallic color. The Art Deco style influenced all areas of design, especially interior design, because it was the first style of interior decoration to spotlight new technologies and materials.
Art Deco style is mainly based on geometric shapes, streamlining, and clean lines. The style offered a sharp, cool look of mechanized living utterly at odds with anything that came before.
Art Deco rejected traditional materials of decoration and interior design, opting instead to use more unusual materials such as chrome, glass, stainless steel, shiny fabrics, mirrors, aluminium, lacquer, inlaid wood, sharkskin, and zebra skin. The use of harder, metallic materials was chosen to celebrate the machine age. These materials reflected the dawning modern age that was ushered in after the end of the First World War. The innovative combinations of these materials created contrasts that were very popular at the time – for example the mixing together of highly polished wood and black lacquer with satin and furs. The barber shop in the Austin Reed store in London was designed by P. J. Westwood. It was soon regarded as the trendiest barber shop in Britain due to its use of metallic materials.
The color themes of Art Deco consisted of metallic color, neutral color, bright color, and black and white. In interior design, cool metallic colors including silver, gold, metallic blue, charcoal grey, and platinum tended to predominate. Serge Chermayeff, a Russian-born British designer made extensive use of cool metallic colors and luxurious surfaces in his room schemes. His 1930 showroom design for a British dressmaking firm had a silver-grey background and black mirrored-glass wall panels.
Black and white was also a very popular color scheme during the 1920s and 1930s. Black and white checkerboard tiles, floors and wallpapers were very trendy at the time. As the style developed, bright vibrant colors became popular as well.
Art Deco furnishings and lighting fixtures had a glossy, luxurious appearance with the use of inlaid wood and reflective finishes. The furniture pieces often had curved edges, geometric shapes, and clean lines. Art Deco lighting fixtures tended to make use of stacked geometric patterns.
=== Modern art ===
Modern design grew out of the decorative arts, mostly from the Art Deco, in the early 20th century. One of the first to introduce this modernist style was Frank Lloyd Wright, whose work did not become hugely popularized until he completed the house called Fallingwater in the 1930s. Modern art reached its peak during the 1950s and '60s, which is why designers and decorators today may refer to modern design as being "mid-century". Modern art does not refer to the era or age of design and is not the same as contemporary design, a term used by interior designers for a shifting group of recent styles and trends.
=== Arab materials ===
"Majlis painting", also called nagash painting, is the decoration of the majlis, or front parlor of traditional Arabic homes, in the Asir province of Saudi Arabia and adjoining parts of Yemen. These wall paintings, an arabesque form of mural or fresco, show various geometric designs in bright colors: "Called 'nagash' in Arabic, the wall paintings were a mark of pride for a woman in her house."
The geometric designs and heavy lines seem to be adapted from the area's textile and weaving patterns. "In contrast with the sobriety of architecture and decoration in the rest of Arabia, exuberant color and ornamentation characterize those of Asir. The painting extends into the house over the walls and doors, up the staircases, and onto the furniture itself. When a house is being painted, women from the community help each other finish the job. The building then displays their shared taste and knowledge. Mothers pass these on to their daughters. This artwork is based on a geometry of straight lines and suggests the patterns common to textile weaving, with solid bands of different colors. Certain motifs reappear, such as the triangular mihrab or 'niche' and the palmette. In the past, paint was produced from mineral and vegetable pigments. Cloves and alfalfa yielded green. Blue came from the indigo plant. Red came from pomegranates and a certain mud. Paintbrushes were created from the tough hair found in a goat's tail. Today, however, women use modern manufactured paint to create new looks, which have become an indicator of social and economic change."
Women in the Asir province often complete the decoration and painting of the house interior. "You could tell a family's wealth by the paintings," Um Abdullah says: "If they didn't have much money, the wife could only paint the motholath, the basic straight, simple lines, in patterns of three to six repetitions in red, green, yellow and brown." When women did not want to paint the walls themselves, they could barter with other women who would do the work. Several Saudi women have become famous as majlis painters, such as Fatima Abou Gahas.
The interior walls of the home are brightly painted by the women, who work in defined patterns with lines, triangles, squares, diagonals and tree-like patterns. "Some of the large triangles represent mountains. Zigzag lines stand for water and also for lightning. Small triangles, especially when the widest area is at the top, are found in pre-Islamic representations of female figures. That the small triangles found in the wall paintings in 'Asir are called banat may be a cultural remnant of a long-forgotten past."
"Courtyards and upper pillared porticoes are principal features of the best Nadjdi architecture, in addition to the fine incised plaster wood (jiss) and painted window shutters, which decorate the reception rooms. Good examples of plasterwork can often be seen in the gaping ruins of torn-down buildings- the effect is light, delicate and airy. It is usually around the majlis, around the coffee hearth and along the walls above where guests sat on rugs, against cushions. Doughty wondered if this "parquetting of jis", this "gypsum fretwork... all adorning and unenclosed" originated from India. However, the Najd fretwork seems very different from that seen in the Eastern Province and Oman, which are linked to Indian traditions, and rather resembles the motifs and patterns found in ancient Mesopotamia. The rosette, the star, the triangle and the stepped pinnacle pattern of dadoes are all ancient patterns, and can be found all over the Middle East of antiquity. Al-Qassim Province seems to be the home of this art, and there it is normally worked in hard white plaster (though what you see is usually begrimed by the smoke of the coffee hearth). In Riyadh, examples can be seen in unadorned clay.
=== Sustainable design ===
Sustainable design is becoming more important today. This style keeps the space functional while prioritizing eco-friendly materials and energy-efficient choices.
== Media popularization ==
Interior design has become the subject of television shows. In the United Kingdom, popular interior design and decorating programs include 60 Minute Makeover (ITV), Changing Rooms (BBC), and Selling Houses (Channel 4). Famous interior designers whose work is featured in these programs include Linda Barker and Laurence Llewelyn-Bowen. In the United States, the TLC Network aired a popular program called Trading Spaces, a show based on the UK program Changing Rooms. In addition, both HGTV and the DIY Network also televise many programs about interior design and decorating, featuring the works of a variety of interior designers, decorators, and home improvement experts in a myriad of projects.
Fictional interior decorators include the Sugarbaker sisters on Designing Women and Grace Adler on Will & Grace. There is also another show called Home MADE, in which two teams redesign two houses and whoever has designed and made the worst room, according to the judges, is eliminated. Another show on the Style Network, hosted by Niecy Nash, is Clean House, where messy homes are redone into themed rooms that the clients would like. Other shows include Design on a Dime, Designed to Sell, and The Decorating Adventures of Ambrose Price. The show Design Star has become more popular through the five seasons that have already aired. The winners of this show end up getting their own TV shows, which include Color Splash hosted by David Bromstad, Myles of Style hosted by Kim Myles, Paint-Over! hosted by Jennifer Bertrand, The Antonio Treatment hosted by Antonio Ballatore, and finally Secrets from a Stylist hosted by Emily Henderson. Bravo also has a variety of shows that explore the lives of interior designers. These include Flipping Out, which explores the life of Jeff Lewis and his team of designers, and Million Dollar Decorators, which explores the lives of interior designers Nathan Turner, Jeffrey Alan Marks, Mary McDonald, Kathryn Ireland, and Martyn Lawrence Bullard.
Interior design has also become the subject of radio shows. In the U.S., popular interior design & lifestyle shows include Martha Stewart Living and Living Large featuring Karen Mills. Famous interior designers whose work is featured on these programs include Bunny Williams, Barbara Barry, and Kathy Ireland, among others.
Many interior design magazines exist to offer advice regarding color palette, furniture, art, and other elements that fall under the umbrella of interior design. These magazines often focus on related subjects to draw a more specific audience. For instance, architecture is a primary aspect of Dwell, while Veranda is well known as a luxury living magazine. Lonny Magazine and the newly relaunched Domino Magazine cater to a young, hip, metropolitan audience, and emphasize accessibility and a do-it-yourself (DIY) approach to interior design.
== Gallery ==
== Notable interior decorators ==
Other early interior decorators:
Sibyl Colefax
Dorothy Draper
Pierre François Léonard Fontaine
Syrie Maugham
Margery Hoffman Smith
Elsie de Wolfe
Arthur Stannard Vernay
Frank Lloyd Wright
Many of the most famous designers and decorators during the 20th century had no formal training. Some examples include Sister Parish, Robert Denning and Vincent Fourcade, Kerry Joyce, Kelly Wearstler, Stéphane Boudin, Georges Geffroy, Emilio Terry, Carlos de Beistegui, Nina Petronzio, Lorenzo Mongiardino, Mary Jean Thompson and David Nightingale Hicks.
Notable interior designers in the world today include Scott Salvator, Troy Adams, Jonathan Adler, Michael S. Smith, Martin Brudnizki, Mary Douglas Drysdale, Kelly Hoppen, Kelly Wearstler, Nina Campbell, David Collins, Nate Berkus, Sandra Espinet, Jo Hamilton and Nicky Haslam.
== See also ==
1960s decor
American Society of Interior Designers
Blueprint
British Institute of Interior Design
Chartered Society of Designers
Environmental psychology
Experiential interior design
Fuzzy architectural spatial analysis
Interior architecture
Interior design psychology
Interior design regulation in the United States
Japanese interior design
Primitive decorating
Wall decals
Window treatment
== References ==
== External links ==
Candace Wheeler: The Art and Enterprise of American Design, 1875–1900, a full text exhibition catalog from The Metropolitan Museum of Art, which includes a great deal of content about early interior design | Wikipedia/Interior_designer |
In queueing theory, a discipline within the mathematical theory of probability, a Jackson network (sometimes Jacksonian network) is a class of queueing network where the equilibrium distribution is particularly simple to compute as the network has a product-form solution. It was the first significant development in the theory of networks of queues, and generalising and applying the ideas of the theorem to search for similar product-form solutions in other networks has been the subject of much research, including ideas used in the development of the Internet. The networks were first identified by James R. Jackson and his paper was re-printed in the journal Management Science’s ‘Ten Most Influential Titles of Management Sciences First Fifty Years.’
Jackson was inspired by the work of Burke and Reich, though Jean Walrand notes "product-form results … [are] a much less immediate result of the output theorem than Jackson himself appeared to believe in his fundamental paper".
An earlier product-form solution was found by R. R. P. Jackson for tandem queues (a finite chain of queues where each customer must visit each queue in order) and cyclic networks (a loop of queues where each customer must visit each queue in order).
A Jackson network consists of a number of nodes, where each node represents a queue in which the service rate can be both node-dependent (different nodes have different service rates) and state-dependent (service rates change depending on queue lengths). Jobs travel among the nodes following a fixed routing matrix. All jobs at each node belong to a single "class" and jobs follow the same service-time distribution and the same routing mechanism. Consequently, there is no notion of priority in serving the jobs: all jobs at each node are served on a first-come, first-served basis.
Jackson networks where a finite population of jobs travel around a closed network also have a product-form solution described by the Gordon–Newell theorem.
== Necessary conditions for a Jackson network ==
A network of m interconnected queues is known as a Jackson network or Jacksonian network if it meets the following conditions:
if the network is open, any external arrivals to node i form a Poisson process,
all service times are exponentially distributed and the service discipline at all queues is first-come, first-served,
a customer completing service at queue i will either move to some new queue j with probability {\displaystyle P_{ij}} or leave the system with probability {\displaystyle 1-\sum _{j=1}^{m}P_{ij}}, which, for an open network, is non-zero for some subset of the queues,
the utilization of all of the queues is less than one.
== Theorem ==
In an open Jackson network of m M/M/1 queues where the utilization {\displaystyle \rho _{i}} is less than 1 at every queue, the equilibrium state probability distribution exists and for state {\displaystyle \scriptstyle {(k_{1},k_{2},\ldots ,k_{m})}} is given by the product of the individual queue equilibrium distributions
{\displaystyle \pi (k_{1},k_{2},\ldots ,k_{m})=\prod _{i=1}^{m}\pi _{i}(k_{i})=\prod _{i=1}^{m}[\rho _{i}^{k_{i}}(1-\rho _{i})].}
The result {\displaystyle \pi (k_{1},k_{2},\ldots ,k_{m})=\prod _{i=1}^{m}\pi _{i}(k_{i})} also holds for M/M/c model stations with ci servers at the {\displaystyle i^{\text{th}}} station, with utilization requirement {\displaystyle \rho _{i}<c_{i}}.
== Definition ==
In an open network, jobs arrive from outside following a Poisson process with rate {\displaystyle \alpha >0}. Each arrival is independently routed to node j with probability {\displaystyle p_{0j}\geq 0} and {\displaystyle \sum _{j=1}^{J}p_{0j}=1}. Upon service completion at node i, a job may go to another node j with probability {\displaystyle p_{ij}} or leave the network with probability {\displaystyle p_{i0}=1-\sum _{j=1}^{J}p_{ij}}.
Hence we have the overall arrival rate to node i, {\displaystyle \lambda _{i}}, including both external arrivals and internal transitions:
{\displaystyle \lambda _{i}=\alpha p_{0i}+\sum _{j=1}^{J}\lambda _{j}p_{ji},i=1,\ldots ,J.\qquad (1)}
(Since the utilisation at each node is less than 1, and we are looking at the equilibrium distribution, i.e. the long-run-average behaviour, the rate of jobs transitioning from j to i is bounded by a fraction of the arrival rate at j and we ignore the service rate {\displaystyle \mu _{j}} in the above.)
Define {\displaystyle a=(\alpha p_{0i})_{i=1}^{J}}, then we can solve {\displaystyle \lambda =(I-P^{T})^{-1}a}.
Jobs also leave each node following a Poisson process; define {\displaystyle \mu _{i}(x_{i})} as the service rate of node i when there are {\displaystyle x_{i}} jobs at node i.
Let {\displaystyle X_{i}(t)} denote the number of jobs at node i at time t, and {\displaystyle \mathbf {X} =(X_{i})_{i=1}^{J}}. Then the equilibrium distribution of {\displaystyle \mathbf {X} }, {\displaystyle \pi (\mathbf {x} )=P(\mathbf {X} =\mathbf {x} )}, is determined by the following system of balance equations:
{\displaystyle {\begin{aligned}&\pi (\mathbf {x} )\sum _{i=1}^{J}[\alpha p_{0i}+\mu _{i}(x_{i})(1-p_{ii})]\\={}&\sum _{i=1}^{J}[\pi (\mathbf {x} -\mathbf {e} _{i})\alpha p_{0i}+\pi (\mathbf {x} +\mathbf {e} _{i})\mu _{i}(x_{i}+1)p_{i0}]+\sum _{i=1}^{J}\sum _{j\neq i}\pi (\mathbf {x} +\mathbf {e} _{i}-\mathbf {e} _{j})\mu _{i}(x_{i}+1)p_{ij}.\qquad (2)\end{aligned}}}
where {\displaystyle \mathbf {e} _{i}} denotes the {\displaystyle i^{\text{th}}} unit vector.
=== Theorem ===
Suppose a vector of independent random variables {\displaystyle (Y_{1},\ldots ,Y_{J})} with each {\displaystyle Y_{i}} having a probability mass function as
{\displaystyle P(Y_{i}=n)=p(Y_{i}=0)\cdot {\frac {\lambda _{i}^{n}}{M_{i}(n)}},\quad (3)}
where {\displaystyle M_{i}(n)=\prod _{j=1}^{n}\mu _{i}(j)}. If {\displaystyle \sum _{n=1}^{\infty }{\frac {\lambda _{i}^{n}}{M_{i}(n)}}<\infty }, i.e.
{\displaystyle P(Y_{i}=0)=\left(1+\sum _{n=1}^{\infty }{\frac {\lambda _{i}^{n}}{M_{i}(n)}}\right)^{-1}}
is well defined, then the equilibrium distribution of the open Jackson network has the following product form:
{\displaystyle \pi (\mathbf {x} )=\prod _{i=1}^{J}P(Y_{i}=x_{i})}
for all {\displaystyle \mathbf {x} \in {\mathcal {Z}}_{+}^{J}}.
This theorem extends the one shown above by allowing a state-dependent service rate at each node. It relates the distribution of {\displaystyle \mathbf {X} } to a vector of independent random variables {\displaystyle \mathbf {Y} }.
=== Example ===
Suppose we have the three-node Jackson network shown in the graph; the coefficients are:
{\displaystyle \alpha =5,\quad p_{01}=p_{02}=0.5,\quad p_{03}=0,}
{\displaystyle P={\begin{bmatrix}0&0.5&0.5\\0&0&0\\0&0&0\end{bmatrix}},\quad \mu ={\begin{bmatrix}\mu _{1}(x_{1})\\\mu _{2}(x_{2})\\\mu _{3}(x_{3})\end{bmatrix}}={\begin{bmatrix}15\\12\\10\end{bmatrix}}{\text{ for all }}x_{i}>0}
Then by the theorem, we can calculate:
{\displaystyle \lambda =(I-P^{T})^{-1}a={\begin{bmatrix}1&0&0\\-0.5&1&0\\-0.5&0&1\end{bmatrix}}^{-1}{\begin{bmatrix}0.5\times 5\\0.5\times 5\\0\end{bmatrix}}={\begin{bmatrix}1&0&0\\0.5&1&0\\0.5&0&1\end{bmatrix}}{\begin{bmatrix}2.5\\2.5\\0\end{bmatrix}}={\begin{bmatrix}2.5\\3.75\\1.25\end{bmatrix}}}
According to the definition of {\displaystyle \mathbf {Y} }, we have:
{\displaystyle P(Y_{1}=0)=\left(\sum _{n=0}^{\infty }\left({\frac {2.5}{15}}\right)^{n}\right)^{-1}={\frac {5}{6}}}
{\displaystyle P(Y_{2}=0)=\left(\sum _{n=0}^{\infty }\left({\frac {3.75}{12}}\right)^{n}\right)^{-1}={\frac {11}{16}}}
{\displaystyle P(Y_{3}=0)=\left(\sum _{n=0}^{\infty }\left({\frac {1.25}{10}}\right)^{n}\right)^{-1}={\frac {7}{8}}}
Hence the probability that there is one job at each node is:
{\displaystyle \pi (1,1,1)={\frac {5}{6}}\cdot {\frac {2.5}{15}}\cdot {\frac {11}{16}}\cdot {\frac {3.75}{12}}\cdot {\frac {7}{8}}\cdot {\frac {1.25}{10}}\approx 0.00326}
Since the service rate here does not depend on state, the {\displaystyle Y_{i}}s simply follow a geometric distribution.
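These numbers are easy to reproduce mechanically. A minimal NumPy sketch (the variable names are ours; the geometric form of each Yi relies on the state-independent service rates noted above):

```python
import numpy as np

# Parameters of the example above.
alpha = 5.0
p0 = np.array([0.5, 0.5, 0.0])            # external routing probabilities
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])            # internal routing matrix
mu = np.array([15.0, 12.0, 10.0])          # state-independent service rates

# Traffic equations: lambda = (I - P^T)^{-1} a with a = alpha * p0.
lam = np.linalg.solve(np.eye(3) - P.T, alpha * p0)
print(lam)                                 # [2.5  3.75 1.25]

# Each Y_i is geometric with parameter rho_i = lambda_i / mu_i,
# so P(Y_i = k) = (1 - rho_i) * rho_i**k.
rho = lam / mu
print(np.prod((1 - rho) * rho))            # pi(1,1,1) ~ 0.00326
```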
== Generalized Jackson network ==
A generalized Jackson network allows renewal arrival processes that need not be Poisson processes, and independent, identically distributed non-exponential service times. In general, this network does not have a product-form stationary distribution, so approximations are sought.
=== Brownian approximation ===
Under some mild conditions the queue-length process {\displaystyle Q(t)} of an open generalized Jackson network can be approximated by a reflected Brownian motion defined as
{\displaystyle \operatorname {RBM} _{Q(0)}(\theta ,\Gamma ;R),}
where {\displaystyle \theta } is the drift of the process, {\displaystyle \Gamma } is the covariance matrix, and {\displaystyle R} is the reflection matrix. This is a second-order approximation obtained by relating the general Jackson network to a homogeneous fluid network and a reflected Brownian motion.
The parameters of the reflected Brownian process are specified as follows:
{\displaystyle \theta =\alpha -(I-P^{T})\mu }
{\displaystyle \Gamma =(\Gamma _{k\ell }){\text{ with }}\Gamma _{k\ell }=\sum _{j=1}^{J}(\lambda _{j}\wedge \mu _{j})[p_{jk}(\delta _{k\ell }-p_{j\ell })+c_{j}^{2}(p_{jk}-\delta _{jk})(p_{j\ell }-\delta _{j\ell })]+\alpha _{k}c_{0,k}^{2}\delta _{k\ell }}
{\displaystyle R=I-P^{T}}
where the symbols are defined as follows: {\displaystyle \delta _{k\ell }} is the Kronecker delta, {\displaystyle \lambda _{j}\wedge \mu _{j}} denotes the minimum of the two rates, {\displaystyle c_{j}^{2}} is the squared coefficient of variation of the service times at node j, and {\displaystyle c_{0,k}^{2}} is the squared coefficient of variation of the external interarrival times at node k.
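A minimal sketch of how these parameters could be computed with NumPy, reusing the rates of the example network above; the squared coefficients of variation c_j² and c_{0,k}² are hypothetical inputs chosen purely for illustration:

```python
import numpy as np

J = 3
alpha = np.array([2.5, 2.5, 0.0])     # external arrival rates alpha_k
mu = np.array([15.0, 12.0, 10.0])     # service rates
P = np.array([[0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])        # routing matrix
lam = np.linalg.solve(np.eye(J) - P.T, alpha)   # traffic equations

c2 = np.array([1.0, 1.0, 1.0])        # assumed squared CVs of service times
c02 = np.array([1.0, 1.0, 1.0])       # assumed squared CVs of external arrivals

theta = alpha - (np.eye(J) - P.T) @ mu          # drift
R = np.eye(J) - P.T                             # reflection matrix

delta = np.eye(J)
Gamma = np.zeros((J, J))
for k in range(J):
    for l in range(J):
        s = 0.0
        for j in range(J):
            s += min(lam[j], mu[j]) * (
                P[j, k] * (delta[k, l] - P[j, l])
                + c2[j] * (P[j, k] - delta[j, k]) * (P[j, l] - delta[j, l]))
        Gamma[k, l] = s + alpha[k] * c02[k] * delta[k, l]

print(theta, Gamma, R, sep="\n")
```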
== See also ==
Gordon–Newell network
BCMP network
G-network
Little's law
== References == | Wikipedia/Jackson_network |
In queueing theory, a discipline within the mathematical theory of probability, Beneš approach or Beneš method is a result for an exact or good approximation to the probability distribution of queue length. It was introduced by Václav E. Beneš in 1963.
The method introduces a quantity referred to as the "virtual waiting time" to define the remaining workload in the queue at any time. This process is a step function which jumps upward with new arrivals to the system and otherwise is linear with negative gradient. By giving a relation for the distribution of unfinished work in terms of the excess work, the difference between arrivals and potential service capacity, it turns a time-dependent virtual waiting time problem into "an integral that, in principle, can be solved."
== References == | Wikipedia/Beneš_method |
In probability theory, a balance equation is an equation that describes the probability flux associated with a Markov chain in and out of states or set of states.
== Global balance ==
The global balance equations (also known as full balance equations) are a set of equations that characterize the equilibrium distribution (or any stationary distribution) of a Markov chain, when such a distribution exists.
For a continuous time Markov chain with state space {\displaystyle {\mathcal {S}}}, transition rate from state {\displaystyle i} to {\displaystyle j} given by {\displaystyle q_{ij}} and equilibrium distribution given by {\displaystyle \pi }, the global balance equations are given by
{\displaystyle \pi _{i}=\sum _{j\in S}\pi _{j}q_{ji},}
or equivalently
{\displaystyle \pi _{i}\sum _{j\in S\setminus \{i\}}q_{ij}=\sum _{j\in S\setminus \{i\}}\pi _{j}q_{ji}.}
for all {\displaystyle i\in S}. Here {\displaystyle \pi _{i}q_{ij}} represents the probability flux from state {\displaystyle i} to state {\displaystyle j}. The left-hand side represents the total flow out of state i into states other than i, while the right-hand side represents the total flow out of all states {\displaystyle j\neq i} into state {\displaystyle i}. In general it is computationally intractable to solve this system of equations for most queueing models.
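For a small finite chain, though, the system can be solved directly. A minimal NumPy sketch for a hypothetical three-state chain (the rate values are illustrative only):

```python
import numpy as np

# Transition rate matrix Q of a hypothetical three-state CTMC:
# off-diagonal entries are the rates q_ij, rows sum to zero.
Q = np.array([[-3.0,  2.0,  1.0],
              [ 1.0, -1.5,  0.5],
              [ 2.0,  1.0, -3.0]])

# Solve pi Q = 0 together with the normalisation sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Check global balance: flow out of state i equals flow into state i.
for i in range(3):
    out_flow = pi[i] * sum(Q[i, j] for j in range(3) if j != i)
    in_flow = sum(pi[j] * Q[j, i] for j in range(3) if j != i)
    assert abs(out_flow - in_flow) < 1e-9
print(pi)
```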
== Detailed balance ==
For a continuous time Markov chain (CTMC) with transition rate matrix {\displaystyle Q}, if {\displaystyle \pi _{i}} can be found such that for every pair of states {\displaystyle i} and {\displaystyle j}
{\displaystyle \pi _{i}q_{ij}=\pi _{j}q_{ji}}
holds, then by summing over {\displaystyle j}, the global balance equations are satisfied and {\displaystyle \pi } is the stationary distribution of the process. If such a solution can be found the resulting equations are usually much easier than directly solving the global balance equations.
A CTMC is reversible if and only if the detailed balance conditions are satisfied for every pair of states {\displaystyle i} and {\displaystyle j}.
A discrete time Markov chain (DTMC) with transition matrix {\displaystyle P} and equilibrium distribution {\displaystyle \pi } is said to be in detailed balance if for all pairs {\displaystyle i} and {\displaystyle j},
{\displaystyle \pi _{i}p_{ij}=\pi _{j}p_{ji}.}
When a solution can be found, as in the case of a CTMC, the computation is usually much quicker than directly solving the global balance equations.
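As an illustration, a birth–death chain always satisfies detailed balance; a minimal sketch with illustrative transition probabilities:

```python
import numpy as np

# A small discrete-time birth-death chain: moves up or down one state only.
P = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

# Verify detailed balance pi_i p_ij = pi_j p_ji for all pairs.
for i in range(3):
    for j in range(3):
        assert abs(pi[i] * P[i, j] - pi[j] * P[j, i]) < 1e-12
print(pi)  # [0.25 0.5 0.25]
```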
== Local balance ==
In some situations, terms on either side of the global balance equations cancel. The global balance equations can then be partitioned to give a set of local balance equations (also known as partial balance equations, independent balance equations or individual balance equations). These balance equations were first considered by Peter Whittle. The resulting equations are somewhere between detailed balance and global balance equations. Any solution {\displaystyle \pi } to the local balance equations is always a solution to the global balance equations (we can recover the global balance equations by summing the relevant local balance equations), but the converse is not always true. Often, constructing local balance equations is equivalent to removing the outer summations in the global balance equations for certain terms.
During the 1980s it was thought local balance was a requirement for a product-form equilibrium distribution, but Gelenbe's G-network model showed this not to be the case.
== Notes == | Wikipedia/Balance_equation |
In queueing theory, a discipline within the mathematical theory of probability, a G-network (generalized queueing network, often called a Gelenbe network) is an open network of G-queues first introduced by Erol Gelenbe as a model for queueing systems with specific control functions, such as traffic re-routing or traffic destruction, as well as a model for neural networks. A G-queue is a network of queues with several types of novel and useful customers:
positive customers, which arrive from other queues or arrive externally as Poisson arrivals, and obey standard service and routing disciplines as in conventional network models,
negative customers, which arrive from another queue, or which arrive externally as Poisson arrivals, and remove (or 'kill') customers in a non-empty queue, representing the need to remove traffic when the network is congested, including the removal of "batches" of customers
"triggers", which arrive from other queues or from outside the network, and which displace customers and move them to other queues
A product-form solution superficially similar to Jackson's theorem exists for the stationary distribution of G-networks, but it requires the solution of a system of surprisingly non-linear equations for the traffic flows, and the model does not obey partial balance. This broke previous assumptions that partial balance was a necessary condition for a product-form solution. A powerful property of G-networks is that they are universal approximators for continuous and bounded functions, so that they can be used to approximate quite general input-output behaviours.
== Definition ==
A network of m interconnected queues is a G-network if
each queue has one server, who serves at rate μi,
external arrivals of positive customers or of triggers or resets form Poisson processes of rate {\displaystyle \scriptstyle {\Lambda _{i}}} for positive customers, while triggers and resets, including negative customers, form a Poisson process of rate {\displaystyle \scriptstyle {\lambda _{i}}},
on completing service a customer moves from queue i to queue j as a positive customer with probability {\displaystyle \scriptstyle {p_{ij}^{+}}}, as a trigger or reset with probability {\displaystyle \scriptstyle {p_{ij}^{-}}} and departs the network with probability {\displaystyle \scriptstyle {d_{i}}},
on arrival to a queue, a positive customer acts as usual and increases the queue length by 1,
on arrival to a queue, the negative customer reduces the length of the queue by some random number (if there is at least one positive customer present at the queue), while a trigger moves a customer probabilistically to another queue and a reset sets the state of the queue to its steady-state if the queue is empty when the reset arrives. All triggers, negative customers and resets disappear after they have taken their action, so that they are in fact "control" signals in the network,
note that normal customers leaving a queue can become triggers or resets and negative customers when they visit the next queue.
A queue in such a network is known as a G-queue.
== Stationary distribution ==
Define the utilization at each node,
{\displaystyle \rho _{i}={\frac {\lambda _{i}^{+}}{\mu _{i}+\lambda _{i}^{-}}}}
where the {\displaystyle \scriptstyle {\lambda _{i}^{+},\lambda _{i}^{-}}} for {\displaystyle \scriptstyle {i=1,\ldots ,m}} satisfy the traffic equations
{\displaystyle \lambda _{i}^{+}=\Lambda _{i}+\sum _{j}\rho _{j}\mu _{j}p_{ji}^{+},\qquad (1)}
{\displaystyle \lambda _{i}^{-}=\lambda _{i}+\sum _{j}\rho _{j}\mu _{j}p_{ji}^{-}.\qquad (2)}
Then writing (n1, ... ,nm) for the state of the network (with queue length ni at node i), if a unique non-negative solution {\displaystyle \scriptstyle {(\lambda _{i}^{+},\lambda _{i}^{-})}} exists to the above equations (1) and (2) such that ρi < 1 for all i, then the stationary probability distribution π exists and is given by
{\displaystyle \pi (n_{1},n_{2},\ldots ,n_{m})=\prod _{i=1}^{m}(1-\rho _{i})\rho _{i}^{n_{i}}.}
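Because equations (1) and (2) are non-linear, they are usually solved numerically, for example by fixed-point iteration. A minimal sketch for a hypothetical two-queue network (all rates and routing probabilities are illustrative, and the iteration is assumed to converge for these values):

```python
# Fixed-point iteration for the G-network traffic equations (1) and (2).
Lambda = [2.0, 1.0]          # external positive customer arrival rates
lam_ext = [0.5, 0.5]         # external trigger/negative arrival rates
mu = [5.0, 4.0]              # service rates
p_plus = [[0.0, 0.4], [0.0, 0.0]]    # positive routing probabilities
p_minus = [[0.0, 0.2], [0.0, 0.0]]   # negative routing probabilities
m = 2

lp = [0.0] * m               # lambda_i^+
ln = [0.0] * m               # lambda_i^-
for _ in range(200):         # iterate until (numerically) converged
    rho = [lp[i] / (mu[i] + ln[i]) for i in range(m)]
    lp = [Lambda[i] + sum(rho[j] * mu[j] * p_plus[j][i] for j in range(m))
          for i in range(m)]
    ln = [lam_ext[i] + sum(rho[j] * mu[j] * p_minus[j][i] for j in range(m))
          for i in range(m)]

rho = [lp[i] / (mu[i] + ln[i]) for i in range(m)]

def pi(n):
    """Product-form stationary probability of state n = (n1, ..., nm)."""
    p = 1.0
    for i in range(m):
        p *= (1 - rho[i]) * rho[i] ** n[i]
    return p

print(rho, pi((1, 1)))
```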
=== Proof ===
It is sufficient to show {\displaystyle \pi } satisfies the global balance equations which, quite differently from Jackson networks, are non-linear. We note that the model also allows for multiple classes.
G-networks have been used in a wide range of applications, including to represent Gene Regulatory Networks, the mix of control and payload in packet networks, neural networks, and the representation of colour images and medical images such as Magnetic Resonance Images.
== Response time distribution ==
The response time is the length of time a customer spends in the system. The response time distribution for a single G-queue is known where customers are served using a FCFS discipline at rate μ, with positive arrivals at rate λ+ and negative arrivals at rate λ− which kill customers from the end of the queue. The Laplace transform of response time distribution in this situation is
{\displaystyle W^{\ast }(s)={\frac {\mu (1-\rho )}{\lambda ^{+}}}{\frac {s+\lambda +\mu (1-\rho )-{\sqrt {[s+\lambda +\mu (1-\rho )]^{2}-4\lambda ^{+}\lambda ^{-}}}}{\lambda ^{-}-\lambda ^{+}-\mu (1-\rho )-s+{\sqrt {[s+\lambda +\mu (1-\rho )]^{2}-4\lambda ^{+}\lambda ^{-}}}}}}
where λ = λ+ + λ− and ρ = λ+/(λ− + μ), requiring ρ < 1 for stability.
The response time for a tandem pair of G-queues (where customers who finish service at the first node immediately move to the second, then leave the network) is also known, and it is thought extensions to larger networks will be intractable.
== References == | Wikipedia/G-networks |
Queueing Systems is a peer-reviewed scientific journal covering queueing theory. It is published by Springer Science+Business Media. The current editor-in-chief is Sergey Foss. According to the Journal Citation Reports, the journal has a 2019 impact factor of 1.114.
== Editors-in-chief ==
N. U. Prabhu was the founding editor-in-chief when the journal was established in 1986 and remained editor until 1995. Richard F. Serfozo was editor from 1996 to 2004, and Onno J. Boxma from 2004 to 2009. Since 2009, the editor has been Sergey Foss.
== Abstracting and indexing ==
Queueing Systems is abstracted and indexed in DBLP, Journal Citation Reports, Mathematical Reviews, Research Papers in Economics, SCImago Journal Rank, Scopus, Science Citation Index, Zentralblatt MATH, among others.
== References ==
== External links ==
Official website | Wikipedia/Queueing_Systems |
In queueing theory, a loss network is a stochastic model of a telephony network in which calls are routed around a network between nodes. The links between nodes have finite capacity and thus some calls arriving may find no route available to their destination. These calls are lost from the network, hence the name loss networks.
The loss network was first studied by Erlang for a single telephone link. Frank Kelly was awarded the Frederick W. Lanchester Prize for his 1991 paper Loss Networks where he demonstrated the behaviour of loss networks can exhibit hysteresis.
== Model ==
=== Fixed routing ===
Consider a network with J links labelled 1, 2, …, J and suppose each link j has Cj circuits. Let R be the set of all possible routes in the network (combinations of links a call might use) and, for each route r, write Ajr for the number of circuits route r uses on link j (A is therefore a J x |R| matrix). Consider the case where all elements of A are either 0 or 1 and for each route r calls requiring use of the route arrive according to a Poisson process of rate vr. When a call arrives, if there is sufficient capacity remaining on all the required links, the call is accepted and occupies the network for an exponentially distributed length of time with parameter 1. If there is insufficient capacity on any individual link to accept the call it is rejected (lost) from the network.
Write nr(t) for the number of calls on route r in progress at time t, n(t) for the vector (nr(t) : r in R) and C = (C1, C2, ... , CJ). Then the continuous-time Markov process n(t) has unique stationary distribution
{\displaystyle \pi (n)=G(C)^{-1}\prod _{r\in R}{\frac {v_{r}^{n_{r}}}{n_{r}!}}{\text{ for }}n\in S(C)}
where
{\displaystyle S(C)=\{n\in \mathbb {Z} _{+}^{R}:An\leq C\}}
and
{\displaystyle G(C)=\left(\sum _{n\in S(C)}\prod _{r\in R}{\frac {v_{r}^{n_{r}}}{n_{r}!}}\right).}
From this result loss probabilities for calls arriving on different routes can be calculated by summing over appropriate states.
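For small networks this summation can be carried out by brute force. A minimal sketch for a hypothetical two-link, three-route network (capacities and arrival rates are illustrative); by PASTA, a route's loss probability is the stationary mass of states in which one more call on that route would exceed some link capacity:

```python
from itertools import product
from math import factorial

C = [3, 3]                          # circuits on links 1 and 2
A = [[1, 0, 1],                     # A[j][r] = circuits route r uses on link j
     [0, 1, 1]]
v = [1.0, 1.0, 0.5]                 # Poisson arrival rate for each route

def weight(n):
    w = 1.0
    for r in range(3):
        w *= v[r] ** n[r] / factorial(n[r])
    return w

def feasible(n):
    return all(sum(A[j][r] * n[r] for r in range(3)) <= C[j] for j in range(2))

states = [n for n in product(range(max(C) + 1), repeat=3) if feasible(n)]
G = sum(weight(n) for n in states)  # normalising constant G(C)

for r in range(3):
    blocked = sum(weight(n) for n in states
                  if not feasible(tuple(n[k] + (k == r) for k in range(3))))
    print(f"route {r}: loss probability {blocked / G:.4f}")
```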
== Computing loss probabilities ==
There are common algorithms for computing the loss probabilities in loss networks:
Erlang fixed-point approximation
Slice method
3-point slice method
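As a sketch of the first of these, the Erlang fixed-point approximation treats links as blocking independently and iterates the Erlang B formula on thinned offered loads; the rendering below is our own illustration on the same hypothetical network as above, not a reference implementation:

```python
from math import factorial

def erlang_b(c, rho):
    # Erlang B loss probability for c circuits with offered load rho.
    return (rho ** c / factorial(c)) / sum(rho ** k / factorial(k)
                                           for k in range(c + 1))

C = [3, 3]                          # same hypothetical network as above
A = [[1, 0, 1], [0, 1, 1]]
v = [1.0, 1.0, 0.5]

E = [0.0, 0.0]                      # per-link blocking probabilities
for _ in range(100):                # fixed-point iteration
    load = []
    for j in range(2):
        # Offered load on link j, thinned by blocking on each route's
        # other links (the independence approximation).
        lj = 0.0
        for r in range(3):
            if A[j][r]:
                thin = 1.0
                for k in range(2):
                    if k != j and A[k][r]:
                        thin *= 1 - E[k]
                lj += v[r] * thin
        load.append(lj)
    E = [erlang_b(C[j], load[j]) for j in range(2)]

# Approximate route loss: 1 - product of (1 - E_j) over the links used.
for r in range(3):
    loss = 1 - (1 - E[0]) ** A[0][r] * (1 - E[1]) ** A[1][r]
    print(f"route {r}: approximate loss {loss:.4f}")
```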
== Notes == | Wikipedia/Loss_network |
In queueing theory, a discipline within the mathematical theory of probability, traffic equations are equations that describe the mean arrival rate of traffic, allowing the arrival rates at individual nodes to be determined. Mitrani notes "if the network is stable, the traffic equations are valid and can be solved." (p. 125)
== Jackson network ==
In a Jackson network, the mean arrival rate {\displaystyle \lambda _{i}} at each node i in the network is given by the sum of external arrivals (that is, arrivals from outside the network directly placed onto node i, if any), and internal arrivals from each of the other nodes on the network. If external arrivals at node i have rate {\displaystyle \gamma _{i}}, and the routing matrix is P, the traffic equations are (for i = 1, 2, ..., m)
{\displaystyle \lambda _{i}=\gamma _{i}+\sum _{j=1}^{m}p_{ji}\lambda _{j}.}
This can be written in matrix form as
{\displaystyle \lambda (I-P)=\gamma \,,}
and there is a unique solution of unknowns {\displaystyle \lambda _{i}} to this equation, so the mean arrival rates at each of the nodes can be determined given knowledge of the external arrival rates {\displaystyle \gamma _{i}} and the matrix P. The matrix I − P is non-singular, as otherwise in the long run the network would become empty.
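A minimal numerical sketch of solving the traffic equations (the rates and routing matrix are illustrative):

```python
import numpy as np

# Routing matrix P (P[i][j] = probability of moving from node i to node j)
P = np.array([[0.0, 0.5, 0.2],
              [0.3, 0.0, 0.3],
              [0.1, 0.2, 0.0]])
gamma = np.array([1.0, 0.5, 0.5])      # external arrival rates

# Solve lambda (I - P) = gamma, i.e. (I - P)^T lambda = gamma
# for the row vector lambda.
lam = np.linalg.solve((np.eye(3) - P).T, gamma)
print(lam)

# Sanity check against the nodewise form of the traffic equations.
for i in range(3):
    rhs = gamma[i] + sum(P[j, i] * lam[j] for j in range(3))
    assert abs(lam[i] - rhs) < 1e-9
```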
== Gordon–Newell network ==
In a Gordon–Newell network there are no external arrivals, so the traffic equations take the form (for i = 1, 2, ..., m)
{\displaystyle \lambda _{i}=\sum _{j=1}^{m}p_{ji}\lambda _{j}.}
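Here λ is determined only up to a multiplicative constant; numerically it can be obtained as a left eigenvector of P, as in this minimal sketch with an illustrative routing matrix:

```python
import numpy as np

# Stochastic routing matrix of a closed network (rows sum to one).
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

# lambda_i = sum_j p_ji lambda_j means lambda is a left eigenvector of P
# with eigenvalue 1, fixed here by normalising the entries to sum to one.
w, v = np.linalg.eig(P.T)
lam = np.real(v[:, np.argmin(np.abs(w - 1))])
lam = lam / lam.sum()
print(lam)
```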
== Notes == | Wikipedia/Traffic_equations |
A network scheduler, also called packet scheduler, queueing discipline (qdisc) or queueing algorithm, is an arbiter on a node in a packet switching communication network. It manages the sequence of network packets in the transmit and receive queues of the protocol stack and network interface controller. There are several network schedulers available for different operating systems that implement many of the existing network scheduling algorithms.
The network scheduler logic decides which network packet to forward next. The network scheduler is associated with a queuing system, storing the network packets temporarily until they are transmitted. Systems may have a single or multiple queues in which case each may hold the packets of one flow, classification, or priority.
In some cases it may not be possible to schedule all transmissions within the constraints of the system. In these cases the network scheduler is responsible for deciding which traffic to forward and what gets dropped.
== Terminology and responsibilities ==
A network scheduler may have responsibility in implementation of specific network traffic control initiatives. Network traffic control is an umbrella term for all measures aimed at reducing network congestion, latency and packet loss. Specifically, active queue management (AQM) is the selective dropping of queued network packets to achieve the larger goal of preventing excessive network congestion. The scheduler must choose which packets to drop. Traffic shaping smooths the bandwidth requirements of traffic flows by delaying transmission packets when they are queued in bursts. The scheduler decides the timing for the transmitted packets. Quality of service (QoS) is the prioritization of traffic based on service class (Differentiated services) or reserved connection (Integrated services).
== Algorithms ==
In the course of time, many network queueing disciplines have been developed. Each of these provides specific reordering or dropping of network packets inside various transmit or receive buffers.
Queuing disciplines are commonly used as attempts to compensate for various networking conditions, like reducing the latency for certain classes of network packets, and are generally used as part of QoS measures.
Classful queueing disciplines allow the creation of classes, which work like branches on a tree. Rules can then be set to filter packets into each class. Each class can itself have another classful or classless queueing discipline assigned to it; classless queueing disciplines do not allow further queueing disciplines to be attached to them.
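To make the tree-and-filter idea concrete, here is a toy sketch of a two-class priority discipline; it is purely illustrative and does not correspond to any real qdisc implementation:

```python
from collections import deque

class ToyClassfulQdisc:
    """Toy two-class priority discipline: filter packets into classes,
    always dequeue from the higher-priority class first."""
    def __init__(self):
        self.classes = {"high": deque(), "low": deque()}

    def classify(self, packet):
        # Filter rule: treat small packets as interactive, high-priority.
        return "high" if packet["size"] <= 128 else "low"

    def enqueue(self, packet):
        self.classes[self.classify(packet)].append(packet)

    def dequeue(self):
        for name in ("high", "low"):       # strict priority order
            if self.classes[name]:
                return self.classes[name].popleft()
        return None

q = ToyClassfulQdisc()
q.enqueue({"size": 1500, "id": 1})
q.enqueue({"size": 64, "id": 2})
print(q.dequeue()["id"])  # 2: the small, high-priority packet goes first
```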
Examples of algorithms suitable for managing network traffic include fair queuing, round-robin, random early detection, and token bucket-based shapers. Several of these have been implemented as Linux kernel modules and are freely available.
== Bufferbloat ==
Bufferbloat is a phenomenon in packet-switched networks in which excess buffering of packets causes high latency and packet delay variation. Bufferbloat can be addressed by a network scheduler that strategically discards packets to avoid an unnecessarily high buffering backlog. Examples include CoDel, FQ-CoDel and random early detection.
== Implementations ==
=== Linux kernel ===
The Linux kernel packet scheduler is an integral part of the Linux kernel's network stack and manages the transmit and receive ring buffers of all NICs.
The packet scheduler is configured using the utility called tc (short for traffic control). As the default queuing discipline, the packet scheduler uses a FIFO implementation called pfifo_fast, although systemd since its version 217 changes the default queuing discipline to fq_codel.
The ifconfig and ip utilities enable system administrators to configure the buffer sizes txqueuelen and rxqueuelen for each device separately in terms of number of Ethernet frames regardless of their size. The Linux kernel's network stack contains several other buffers, which are not managed by the network scheduler.
Berkeley Packet Filter filters can be attached to the packet scheduler's classifiers. The eBPF functionality brought by version 4.1 of the Linux kernel in 2015 extends the classic BPF programmable classifiers to eBPF. These can be compiled using the LLVM eBPF backend and loaded into a running kernel using the tc utility.
=== BSD and OpenBSD ===
ALTQ is the implementation of a network scheduler for BSDs. As of OpenBSD version 5.5 ALTQ was replaced by the HFSC scheduler.
=== Cell-free network scheduling ===
Schedulers in communication networks manage resource allocation, including packet prioritization, timing, and resource distribution. Advanced implementations increasingly leverage artificial intelligence to address the complexities of modern network configurations. For instance, a supervised neural network (NN)-based scheduler has been introduced in cell-free networks to efficiently handle interactions between multiple radio units (RUs) and user equipment (UEs). This approach reduces computational complexity while optimizing latency, throughput, and resource allocation, making it a promising solution for beyond-5G networks.
== See also ==
Queueing theory
Statistical time-division multiplexing
Type of service
== Notes ==
== References == | Wikipedia/Queueing_algorithm |
The Ehrenfest model (or dog–flea model) of diffusion was proposed by Tatiana and Paul Ehrenfest to explain the second law of thermodynamics. The model considers N particles in two containers. Particles independently change container at a rate λ. If X(t) = i is defined to be the number of particles in one container at time t, then it is a birth–death process with transition rates
{\displaystyle q_{i,i-1}=i\lambda } for i = 1, 2, ..., N
{\displaystyle q_{i,i+1}=(N-i)\lambda } for i = 0, 1, ..., N − 1
and equilibrium distribution
{\displaystyle \pi _{i}=2^{-N}{\tbinom {N}{i}}}.
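A short simulation of the embedded jump chain (at each step a uniformly chosen particle switches container, which reproduces the rates above) illustrates convergence to the binomial equilibrium; this is a minimal sketch with illustrative names:

```python
import random
from math import comb

def simulate_ehrenfest(N=100, steps=100_000, seed=1):
    """Simulate the jump chain: at each step a uniformly chosen
    particle switches container. Returns visit counts of X."""
    rng = random.Random(seed)
    i = N                      # start with all particles in container A
    visits = [0] * (N + 1)
    for _ in range(steps):
        # a particle moves A->B with prob i/N, B->A with prob (N-i)/N
        i += -1 if rng.random() < i / N else 1
        visits[i] += 1
    return visits

N, steps = 100, 100_000
visits = simulate_ehrenfest(N, steps)
for i in (40, 50, 60):
    empirical = visits[i] / steps
    pi_i = comb(N, i) / 2**N   # binomial equilibrium distribution
    print(f"i={i}: empirical {empirical:.4f}, pi_i {pi_i:.4f}")
```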
Mark Kac proved in 1947 that if the initial system state is not equilibrium, then the entropy, given by
{\displaystyle H(t)=-\sum _{i}P(X(t)=i)\log \left({\frac {P(X(t)=i)}{\pi _{i}}}\right),}
is monotonically increasing (H-theorem). This is a consequence of the convergence to the equilibrium distribution.
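For small N the monotonicity is easy to check numerically by iterating the one-step transition matrix of the embedded jump chain; this discrete-time check is an illustrative sketch, not Kac's continuous-time argument:

```python
from math import comb, log

N = 10
# One-step transition matrix of the embedded jump chain: from state i
# the process moves down with probability i/N and up with (N - i)/N.
P = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    if i > 0:
        P[i][i - 1] = i / N
    if i < N:
        P[i][i + 1] = (N - i) / N

pi = [comb(N, i) / 2**N for i in range(N + 1)]   # equilibrium
p = [0.0] * (N + 1)
p[N] = 1.0                 # start far from equilibrium: all in one container

def H(p):
    # -sum_i p_i log(p_i / pi_i), with the convention 0 log 0 = 0
    return -sum(p_i * log(p_i / q_i) for p_i, q_i in zip(p, pi) if p_i > 0)

for t in range(6):
    print(t, round(H(p), 4))                      # increases toward 0
    p = [sum(p[i] * P[i][j] for i in range(N + 1)) for j in range(N + 1)]
```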
== Interpretation of results ==
Consider the case in which initially all of the particles are in one of the containers. Over time the number of particles in that container is expected to approach {\displaystyle N/2} and to stabilize near that state, with the containers holding approximately equal numbers of particles. From a mathematical point of view, however, a return to the initial state is possible, and indeed almost sure. By the mean recurrence theorem, even the expected time to return to the initial state is finite, namely {\displaystyle 2^{N}}. Using Stirling's approximation one finds that, starting at equilibrium (an equal number of particles in the containers), the expected time to return to equilibrium is asymptotically equal to {\displaystyle \textstyle {\sqrt {\pi N/2}}}. If we assume that particles change containers at a rate of one per second, then in the particular case of {\displaystyle N=100} particles, starting at equilibrium the return to equilibrium is expected to occur in about {\displaystyle 13} seconds, whereas starting from the configuration with {\displaystyle 100} particles in one container and {\displaystyle 0} in the other, the return to that state is expected to take about {\displaystyle 4\cdot 10^{22}} years. Thus, although recurrence to the initial, highly disproportionate state is theoretically certain, it is far too rare ever to be observed.
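Both quoted figures follow directly from the formulas above; the only assumption in this check is the use of a 365.25-day year:

```python
from math import pi, sqrt

N = 100
print(sqrt(pi * N / 2))              # ~12.5 -> about 13 seconds
seconds = 2.0**N                     # expected return time in seconds
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.1e}")                # ~4.0e+22 years
```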
== Bibliography ==
Paul and Tatjana Ehrenfest: Über zwei bekannte Einwände gegen das Boltzmannsche H-Theorem. Physikalische Zeitschrift, vol. 8 (1907), pp. 311–314.
F.P. Kelly: The Ehrenfest model, in Reversibility and Stochastic Networks. Wiley, Chichester, 1979. ISBN 0-471-27601-4 pp. 17–20.
David O. Siegmund: Ehrenfest model of diffusion (mathematics). Encyclopædia Britannica.
== See also ==
Kac ring
Ornstein–Uhlenbeck process
== References == | Wikipedia/Ehrenfest_model |
In probability theory, the matrix geometric method is a method for the analysis of quasi-birth–death processes, continuous-time Markov chains whose transition rate matrices have a repetitive block structure. The method was developed "largely by Marcel F. Neuts and his students starting around 1975."
== Method description ==
The method requires a transition rate matrix with tridiagonal block structure as follows
{\displaystyle Q={\begin{pmatrix}B_{00}&B_{01}\\B_{10}&A_{1}&A_{2}\\&A_{0}&A_{1}&A_{2}\\&&A_{0}&A_{1}&A_{2}\\&&&A_{0}&A_{1}&A_{2}\\&&&&\ddots &\ddots &\ddots \end{pmatrix}}}
where each of B00, B01, B10, A0, A1 and A2 is a matrix. To compute the stationary distribution π, one writes πQ = 0 and considers the balance equations for the sub-vectors πi:
{\displaystyle {\begin{aligned}\pi _{0}B_{00}+\pi _{1}B_{10}&=0\\\pi _{0}B_{01}+\pi _{1}A_{1}+\pi _{2}A_{0}&=0\\\pi _{1}A_{2}+\pi _{2}A_{1}+\pi _{3}A_{0}&=0\\&\vdots \\\pi _{i-1}A_{2}+\pi _{i}A_{1}+\pi _{i+1}A_{0}&=0\\&\vdots \\\end{aligned}}}
Observe that the relationship {\displaystyle \pi _{i}=\pi _{1}R^{i-1}} holds for i ≥ 1, where R is Neuts' rate matrix, which can be computed numerically. Using this we write
{\displaystyle {\begin{pmatrix}\pi _{0}&\pi _{1}\end{pmatrix}}{\begin{pmatrix}B_{00}&B_{01}\\B_{10}&A_{1}+RA_{0}\end{pmatrix}}={\begin{pmatrix}0&0\end{pmatrix}}}
which can be solved to find π0 and π1, and hence, via πi = π1Ri−1, all the remaining πi (after normalizing π so that its entries sum to one).
== Computation of R ==
The matrix R can be computed using cyclic reduction or logarithmic reduction.
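Substituting πi = π1Ri−1 into the repeating balance equations shows that R is the minimal non-negative solution of the quadratic matrix equation A2 + RA1 + R2A0 = 0. The sketch below uses plain successive substitution rather than the cyclic or logarithmic reduction mentioned above (those converge faster); the example blocks are illustrative 1 × 1 matrices corresponding to an M/M/1 queue:

```python
import numpy as np

def neuts_R(A0, A1, A2, tol=1e-12, max_iter=10_000):
    """Solve A2 + R A1 + R^2 A0 = 0 for the minimal nonnegative R
    by successive substitution: R <- -(A2 + R^2 A0) A1^{-1}."""
    A1_inv = np.linalg.inv(A1)
    R = np.zeros_like(A1)
    for _ in range(max_iter):
        R_next = -(A2 + R @ R @ A0) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    raise RuntimeError("iteration did not converge")

# 1x1 sanity check: an M/M/1 queue with arrival rate 1 and service
# rate 2, for which the minimal solution is R = lam/mu = 0.5.
lam, mu = 1.0, 2.0
A0, A1, A2 = np.array([[mu]]), np.array([[-(lam + mu)]]), np.array([[lam]])
print(neuts_R(A0, A1, A2))
```

Given R, the boundary system displayed above is a finite linear system in (π0, π1) that, together with the normalization of π, determines the whole stationary distribution.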
== Matrix analytic method ==
The matrix analytic method is a more complicated version of the matrix geometric solution method, used to analyse models with block M/G/1 matrices. Such models are harder to analyse because no relationship like πi = π1Ri − 1, as used above, holds.
== External links ==
Performance Modelling and Markov Chains (part 2) by William J. Stewart at 7th International School on Formal Methods for the Design of Computer, Communication and Software Systems: Performance Evaluation
== References == | Wikipedia/Matrix_geometric_method |