A trebuchet () is a type of catapult that uses a rotating arm with a sling attached to the tip to launch a projectile. It was a common powerful siege engine until the advent of gunpowder. The design of a trebuchet allows it to launch projectiles of greater weights and further distances than that of a traditional catapult. There are two main types of trebuchet. The first is the traction trebuchet, or mangonel, which uses manpower to swing the arm. It first appeared in China by the 4th century BC. It spread westward, possibly by the Avars, and was adopted by the Byzantines, Persians, Arabs, and other neighboring peoples by the sixth to seventh centuries AD. The later, and often larger and more powerful, counterweight trebuchet, also known as the counterpoise trebuchet, uses a counterweight to swing the arm. It appeared in both Christian and Muslim lands around the Mediterranean in the 12th century, and was carried back to China by the Mongols in the 13th century. Etymology and terminology The numerous forms of the word that appeared during the 13th century, including trabocco, tribok, tribuclietta, and trubechetum, have obscured the origin of the term. In Arabic the counterweight trebuchet was called manjaniq maghribi or majaniq ifranji. In China it was called the húihúi pào (Muslim trebuchet). The English word trebuchet is first mentioned in the 14th century (13th century in Anglo-Latin) as "medieval stone-throwing engine of war". It is borrowed from (Old) French trebuchet (now trébuchet). The French word is from the verbal root of trebucher (now trébucher) : trebuch- + diminutive noun suffix -et, trebucher (10th century) meant "to overthrow, to bring down", then and now "to stumble", maybe earlier "to rock" or "to tilt". It is a compound of (Old) French tre(s)-, variant form tra- (now tré- / tra-) from Latin trans expressing "displacement" in that case + Old French buc "trunk of the body, bulk", itself from Old Low Franconian *būk- "belly" similar to Old High German buh, German Bauch "belly".
The earliest appearance of the term "trebuchet" in French dates to the late 12th century and the first attestations of trebuchet as a siege weapon are from around the year 1200. The 1174-77 edition of Roman de Renart, an epic about Renard the Fox, describes it as a "trap whose trigger mechanism consists of an assembly of balanced logs" (understood as animal trap by 1375) while the ca. 1200 edition describes it as a "war engine that throws stones to break down walls". The word trabuchellus appeared alongside manganum and prederia in a document in Vicenza on . Trabucha is found a decade later with predariae at the siege of Castelnuovo Bocca d'Adda in an account by Iohannes Codagnellus. It is unclear, however, whether these referred to counterweight trebuchets. Codagnellus did not specify a specific type of engine with the term and even implied that they were "fairly light in subsequent references". Only in the late 1210s do variations of "trebuchet" in sources, described as increasingly powerful machines or utilizing different components, identify more closely with the counterweight trebuchet. Other terms, such as machina maior/magna, might have also referred to counterweight trebuchets. Traction trebuchet and counterweight trebuchet are modern terms (retronyms), not used by contemporary users of the weapons. The term traction trebuchet was created mainly to distinguish this type of weapon from the onager, a torsion powered catapult that is often conflated in contemporary sources with the mangonel, which was used as a generic term for any medieval stone throwing artillery. Both the traction and counterweight trebuchets have been called mangonel at one point or another. Confusion between the onager, mangonel, trebuchet, and other catapult types in contemporary terminology has led some historians today to use the more precise traction trebuchet instead, with counterweight trebuchet used to distinguish what was before called simply a trebuchet. Some modern historians use mangonel to mean exclusively traction trebuchets, while others call traction trebuchets traction mangonels and counterweight trebuchets counterweight mangonels. Basic design
The trebuchet is a compound machine that makes use of the mechanical advantage of a lever to throw a projectile. Trebuchets are typically large constructions, with the length of the beam as much as , with some purported to be even larger. A trebuchet consists primarily of a long beam attached by an axle suspended high above the ground by a stout frame and base, such that the beam can rotate vertically through a wide arc (typically over 180°). A sling is attached to one end of the beam to hold the projectile. The projectile is thrown when the beam is quickly rotated by applying force to the opposite end of the beam. The mechanical advantage is primarily obtained by having the projectile section of the beam much longer than the opposite section where the force is applied – usually four to six times longer. The difference between counterweight and traction trebuchets is the force they use. Counterweight trebuchets use gravity; potential energy is stored by slowly raising an extremely heavy box (typically filled with stones, sand, or lead) attached to the shorter end of the beam (typically on a hinged connection), and releasing it on command. Traction trebuchets use human power; on command, men pull ropes attached to the shorter end of the trebuchet beam. The difficulty of coordinating the pull of many men repeatedly and predictably makes counterweight trebuchets preferable for the larger machines, though they are more complicated to engineer. Later modifications increased the trebuchet's range: a slot beneath the frame let the sling and projectile start underneath the machine, which allowed the sling to be lengthened (extending the range), the trajectory to be altered, or the release point to be changed. Further increasing their complexity is that either winches or treadwheels, aided by block and tackle, are typically required to raise the more massive counterweights. So while counterweight trebuchets require significantly fewer men to operate than traction trebuchets, they require significantly more time to reload. In a long siege, reload time may not be a critical concern.
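The energy bookkeeping behind the counterweight design can be made concrete with a rough calculation. The Python sketch below is only an order-of-magnitude estimate under assumed values: the counterweight mass, projectile mass, drop height, and efficiency factor are all illustrative, not figures from any historical machine.

import math

def launch_speed_estimate(m_counterweight, m_projectile, drop_height,
                          efficiency=0.5, g=9.81):
    """Rough launch-speed estimate from a simple energy balance.

    Only a fraction (`efficiency`) of the counterweight's potential energy
    m * g * h reaches the projectile; the rest goes into the beam, sling,
    and friction. Every value used here is an illustrative assumption.
    """
    energy_to_projectile = efficiency * m_counterweight * g * drop_height
    return math.sqrt(2.0 * energy_to_projectile / m_projectile)

# Example: an assumed 6,000 kg counterweight dropping 5 m, throwing a 100 kg stone.
print(f"estimated launch speed: {launch_speed_estimate(6000, 100, 5):.0f} m/s")

With these assumed numbers the stone leaves the sling at roughly 54 m/s, which illustrates why even a modest counterweight drop, concentrated through the long arm and sling, could throw heavy projectiles useful distances.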
When the trebuchet is operated, the force causes rotational acceleration of the beam around the axle (the fulcrum of the lever). These factors multiply the acceleration transmitted to the throwing portion of the beam and its attached sling. The sling starts rotating with the beam, but rotates farther (typically about 360°) and therefore faster, transmitting this increased speed to the projectile. The length of the sling increases the mechanical advantage, and also changes the trajectory so that, at the time of release from the sling, the projectile is traveling at the desired speed and angle to give it the range to hit the target. Adjusting the sling's release point is the primary means of fine-tuning the range, as the rest of the trebuchet's actions are difficult to adjust after construction. The rotation speed of the throwing beam increases smoothly, starting slow but building up quickly. After the projectile is released, the arm continues to rotate and is allowed to slow down smoothly of its own accord and come to rest at the end of the rotation. This is unlike the violent sudden stop inherent in the action of other catapult designs such as the onager, which must absorb most of the launching energy into their own frames, and must be heavily built and reinforced as a result. This key difference makes the trebuchet much more durable, allowing for larger and more powerful machines. A trebuchet projectile can be almost anything, even debris, rotting carcasses, or incendiaries, but is typically a large stone. Dense stone, or even metal, specially worked to be round and smooth, gives the best range and predictability. When attempting to breach enemy walls, it is important to use materials that will not shatter on impact; projectiles were sometimes brought from distant quarries to get the desired properties.

History

Traction trebuchet

The traction trebuchet, also referred to as a mangonel in some sources, originated in ancient China.
The first recorded use of traction trebuchets was in ancient China. They were probably used by the Mohists as early as the 4th century BC; descriptions can be found in the Mozi (compiled in the 4th century BC). According to the Mozi, the traction trebuchet was high with buried below ground, the attached fulcrum was constructed from the wheels of a cart, the throwing arm was long with three quarters above the pivot and a quarter below, to which the ropes were attached, and the sling long. The ranges given for projectiles are , , and . They were used as defensive weapons stationed on walls and sometimes hurled hollowed-out logs filled with burning charcoal to destroy enemy siege works. By the 1st century AD, commentators were interpreting other passages in texts such as the Zuo zhuan and Classic of Poetry as references to the traction trebuchet: the guai is "a great arm of wood on which a stone is laid, and this by means of a device [ji] is shot off and so strikes down the enemy." The Records of the Grand Historian say that "The flying stones weigh 12 catties and by devices [ji] are shot off 300 paces." Traction trebuchets went into decline during the Han dynasty due to long periods of peace but became a common siege weapon again during the Three Kingdoms period. They were commonly called stone-throwing machines, thunder carriages, and stone carriages in the following centuries. They were used as ship-mounted weapons by 573 for attacking enemy fortifications. It seems that during the early 7th century, improvements were made to traction trebuchets, although it is not stated exactly what they were. According to a stele in Barkul celebrating Tang Taizong's conquest of what is now Ejin Banner, the engineer Jiang Xingben made great advancements on trebuchets that were unknown in ancient times. Jiang Xingben participated in the construction of siege engines for Taizong's campaigns against the Western Regions. In 617 Li Mi (Sui dynasty) constructed 300 trebuchets for his assault on Luoyang, and in 621 Li Shimin did the same at Luoyang; trebuchets remained in use into the Song dynasty, when in 1161 trebuchets operated by Song dynasty soldiers fired bombs of lime and sulphur against the ships of the Jin dynasty navy during the Battle of Caishi.
The traction trebuchet was adopted by various peoples west of China such as the Byzantines, Persians, Arabs, and Avars by the sixth to seventh centuries AD. Some scholars suggest that the Avars carried the traction trebuchet westward while others claim that the Byzantines already possessed knowledge of the traction trebuchet beforehand. Regardless of the vector of transmission, it appeared in the eastern Mediterranean by the late 6th century AD, where it replaced torsion powered siege engines such as the ballista and onager. The rapid displacement of torsion siege engines was probably due to a combination of reasons. The traction trebuchet is simpler in design, has a faster rate of fire, increased accuracy, and comparable range and power. It was probably also safer than the twisted cords of torsion weapons, "whose bundles of taut sinews stored up huge amounts of energy even in resting state and were prone to catastrophic failure when in use." At the same time, the late Roman Empire seems to have fielded "considerably less artillery than its forebears, organised now in separate units, so the weaponry that came into the hands of successor states might have been limited in quantity." Evidence from Gaul and Germania suggests there was substantial loss of skills and techniques in artillery further west. According to the Miracles of Saint Demetrius, probably written around 620 by John, Archbishop of Thessaloniki, the Avaro-Slavs attacked Thessaloniki in 586 with traction trebuchets. The bombardment lasted for hours, but the operators were inaccurate and most of the shots missed their target. When one stone did reach their target, it "demolished the top of the rampart down to the walkway." The Byzantines adopted the traction trebuchet possibly as early as 587, the Persians in the early 7th century, and the Arabs in the second half of the 7th century. In 652, the Arabs used trebuchets at the siege of Dongola in the Sudan. Like the Chinese, by 653, the Arabs also had ship mounted traction trebuchets. The Franks and Saxons adopted the weapon in the 8th century. The Life of Louis the Pious contains the earliest western European reference to mangonels (traction trebuchets) in its account of the siege of Tortosa (808–809). In 1173, the Republic of Pisa tried to capture an island castle with traction trebuchet on galleys. Traction trebuchets were also used in India.
The traction trebuchet was most efficient as an anti-personnel weapon, used in a supportive position alongside archers and slingers. Most accounts of traction trebuchets describe them as light artillery weapons while actual penetration of defenses was the result of mining or siege towers. At the Siege of Kamacha in 766, Byzantine defenders used wooden cover to protect themselves from the enemy artillery while inflicting casualties with their own stone throwers. Michael the Syrian noted that at the siege of Balis in 823 it was the defenders that suffered from bombardment rather than the fortifications. At the siege of Kaysum, Abdallah ibn Tahir al-Khurasani used artillery to damage houses in the town. The Sack of Amorium in 838 saw the use of traction trebuchets to drive away defenders and destroy wooden defenses. At the siege of Marand in 848, traction trebuchets were used, "reportedly killing 100 and wounding 400 on each side during the eight-month siege." During the siege of Baghdad in 865, defensive artillery were responsible for repelling an attack on the city gate while traction trebuchets on boats claimed a hundred of the defenders' lives. Some exceptionally large and powerful traction trebuchets have been described during the 11th century or later. At the Siege of Manzikert (1054), the Seljuks' initial siege artillery was countered by the defenders' own, which shot stones at the besieging machine. In response, the Seljuks constructed another one requiring 400 men to pull and threw stones weighing . A breach was created on the first shot but the machine was burnt down by the defenders. According to Matthew of Edessa, this machine weighed and caused a number of casualties to the city's defenders. Ibn al-Adim describes a traction trebuchet capable of throwing a man in 1089. At the siege of Haizhou in 1161, a traction trebuchet was reported to have had a range of 200 paces (over ). West of China, the traction trebuchet remained the primary siege engine until the 12th century when it was replaced by the counterweight trebuchet. In China the traction trebuchet was the primary siege engine until the counterweight trebuchet was introduced during the Mongol conquest of the Song dynasty in the 13th century. Counterweight trebuchet Origins
There is little to no consensus as to where and when the counterweight trebuchet, which has been described as the "most powerful weapon of the Middle Ages", was first developed. The earliest known description and illustration of a counterweight trebuchet comes from a commentary on the conquests of Saladin by Mardi ibn Ali al-Tarsusi in 1187. However cases for the existence of both European and Muslim counterweight trebuchets prior to 1187 have been made. In 1090, Khalaf ibn Mula'ib threw out a man from the citadel in Salamiya with a machine and in the early 12th century, Muslim siege engines were able to breach crusader fortifications. David Nicolle argues that these events could have only been possible with the use of counterweight trebuchets. Although al-Tarsusi provided the first description and illustration of a counterweight trebuchet, the text implies that the engine was not new and had previously been built. Al-Tarsusi referred to the counterweight trebuchet as the "Persian" trebuchet whereas the "Frankish" trebuchet was a light traction engine. Later during the 13th century, Muslims used manjaniq maghribi (Western trebuchet) and manjaniq ifranji (Frankish trebuchet) to refer to counterweight trebuchets. Paul E. Chevedden suggests that manjaniq maghribi was used to describe hinged counterweight engines in contrast to previous fixed or hanging counterweight trebuchets. Sometimes counterweight trebuchets are separated into two or three different categories based on how their counterweights are attached. These being fixed, hanging, and hinged counterweights. A fixed counterweight is an intrinsic part of the swinging arm and its trajectory is circular. Hanging counterweights hang below the arm and drop vertically. Hinged counterweights are attached to the arm by a swinging joint. Some fixed counterweights also had a hinged component. The type described by al-Tarsusi was a hanging counterweight. Writing in 1280, Giles of Rome claimed that hinged counterweight trebuchets had a greater range than fixed counterweight types.
Chevedden argues that counterweight trebuchets appeared prior to 1187 in Europe, based on what might have been counterweight trebuchets in earlier sources. The 12th-century Byzantine historian Niketas Choniates may have been referring to a counterweight trebuchet when he described one equipped with a windlass, which is only useful to counterweight machines, at the siege of Zevgminon in 1165. However, the source for this was written in the 1180s to 1190s, and Niketas may have been anachronistically placing an engine of his own time into the past. At the siege of Nicaea in 1097 the Byzantine emperor Alexios I Komnenos reportedly invented new pieces of heavy artillery which deviated from the conventional design and made a deep impression on everyone. Illustrations produced later, in 1270, depicted fixed counterweight trebuchets used at the siege. Possible references to counterweight trebuchets also appear for the second siege of Tyre in 1124, where the crusaders reportedly made use of "great trebuchets". However, the sources for this siege, Fulcher of Chartres and William of Tyre, only mention machinae and machinae iaculatoriae, which were later translated as perrieres and mangoniaux in the Estoire d'Eracles. Chevedden argues that, given the references to new and better trebuchets, by the 1120s–30s the counterweight trebuchet was being used in a variety of places by different peoples, such as the crusader states, the Normans of Sicily, and the Seljuks.
The earliest solid reference to a "trebuchet" in European sources dates to the siege of Castelnuovo Bocca d'Adda in 1199. However it is unclear if this referred to counterweight trebuchets since the author did not specify what engine was used and described the machine as fairly light. They may have been used in Germany from around 1205. Only in the late 1210s do references to "trebuchet", describing more powerful engines and different components, more closely align with the features of a counterweight trebuchet. Some of these more powerful engines may have just been traction trebuchets, as one was described being pulled by ten thousand. At the Siege of Toulouse (1217–1218), trabuquets were mentioned to have been deployed, but the siege engine depicted at the tomb of Simon de Montfort, who was killed by artillery at the siege, is a traction trebuchet. Though soon after, clear evidence of counterweight machines appeared. According to the Song of the Albigensian Crusade, the defenders "ran to the ropes and wound the trebuchets", and to shoot the machine, they "then released their ropes." They were used in England at least by 1217 and in Iberia shortly after 1218. By the 1230s the counterweight trebuchet was a common item in siege warfare. Despite the lack of clearly definable terms in the late 12th and early 13th centuries, it is likely that both Muslims and Europeans already had working knowledge of the counterweight trebuchet beforehand. From the First Crusade (1096–1099) onward, there does not appear to be any discernible difference in the technology of siege engines employed by Muslim and Frankish forces, and by the Third Crusade (1189–1192), both sides seemed well acquainted with the enemy's siege weapons, which "appear to have been remarkably similar." China
Counterweight trebuchets do not appear with certainty in Chinese historical records until about 1268. The counterweight trebuchet may, however, have been used as early as 1232 by the Jurchen Jin commander Qiang Shen. Qiang invented a device called the "Arresting Trebuchet" which needed only a few men to work it and could hurl great stones more than a hundred paces, further than even the strongest traction trebuchet. However, no other details on the machine are given. Qiang died the following year and no further references to the Arresting Trebuchet appear. The earliest definite mention of the counterweight trebuchet in China was in 1268, when the Mongols laid siege to Fancheng and Xiangyang. After failing for several years to take the twin cities, in the campaign collectively known as the siege of Fancheng and Xiangyang, the Mongol army brought in two Persian engineers to build hinged counterweight trebuchets. These became known as Huihui trebuchets (回回砲, where "huihui" is a loose slang term referring to any Muslims), or Xiangyang trebuchets (襄陽砲), because they were first encountered in that battle. Ismail and Al-aud-Din travelled to South China from Iraq and built trebuchets for the siege. Chinese and Muslim engineers operated artillery and siege engines for the Mongol armies. By 1283, counterweight trebuchets were also used in Southeast Asia by the Chams against the Yuan dynasty.

Function
While some historians have described the counterweight trebuchet as a type of medieval super weapon, other historians have urged caution in overemphasizing its destructive capability. On the side of the counterweight engine as a medieval military revolution, historians such as Sydney Toy, Paul Chevedden, and Hugh Kennedy consider its power to have caused significant changes in medieval warfare. This line of thought suggests that rams were abandoned due to the effectiveness of the counterweight trebuchet, which was capable of reducing "any fortress to rubble". Accordingly, traditional fortifications became obsolete and had to be improved with new architectural structures to support defensive counterweight trebuchets. In southern France during the Albigensian Crusade, sieges were a last resort and negotiations for surrender were common. In these instances, trebuchets were used to threaten or bombard enemy fortifications and ensure victory. On the side of caution, historians such as John France, Christopher Marshall, and Michael Fulton emphasize the still considerable difficulty of reducing fortifications with siege artillery. Examples of the failure of siege artillery include the lack of evidence that artillery ever threatened the defenses of Kerak Castle between 1170 and 1188. Marshall maintains that "the methods of attack and defence remained largely the same through the thirteenth century as they had been during the twelfth." Reservations on the counterweight trebuchet's destructive capability were expressed by Viollet-le-Duc, who "asserted that even counterweight-powered artillery could do little more than destroy crenellations, clear defenders from parapets and target the machines of the besieged."
In spite of the evidence regarding increasingly powerful counterweight trebuchets during the 13th century, "it remains an important consideration that not one of these appears to have effected a breach that directly led to the fall of a stronghold." In 1220, Al-Mu'azzam Isa laid siege to Atlit with a trabuculus, three petrariae, and four mangonelli but could not penetrate past the outer wall, which was soft but thick. As late as the Siege of Acre (1291), where the Mamluk Sultanate fielded 72 or 92 trebuchets, including 14 or 15 counterweight trebuchets and the remaining traction types, they were never able to fulfill a breaching role. The Mamluks entered the city by sapping the northeast corner of the outer wall. Though stone projectiles of substantial size (~) have been found at Acre, located near the site of the siege and likely used by the Mamluks, surviving walls of a 13th-century Montmusard tower are no more than one meter thick. There is no indication that the thickness of fortress walls increased exponentially rather than a modest increase of between the 12th and 13th century. The Templar of Tyre described the faster firing traction trebuchets as more dangerous to the defenders than the counterweight ones. The Song dynasty described countermeasures against counterweight trebuchets that prevented them from damaging towers and houses: "an extraordinary method was invented of neutralising the effects of the enemy's trebuchets. Ropes of rice straw four inches thick and thirty-four feet long were joined together twenty at a time, draped on to the buildings from top to bottom, and covered with [wet] clay. Then neither the incendiary arrows, nor bombs [huo pao] from trebuchets, nor even stones of a hundred jun caused any damage to the towers and houses."
The counterweight trebuchet did not completely replace the traction trebuchet. Despite its greater range, counterweight trebuchets had to be constructed close to the site of the siege unlike traction trebuchets, which were smaller, lighter, cheaper, and easier to take apart and put back together again where necessary. The superiority of the counterweight trebuchet was not clear cut. Of this, the Hongwu Emperor stated in 1388: "The old type of trebuchet was really more convenient. If you have a hundred of those machines, then when you are ready to march, each wooden pole can be carried by only four men. Then when you reach your destination, you encircle the city, set them up, and start shooting!" The traction trebuchet continued to serve as an anti-personnel weapon. The Norwegian text of 1240, Speculum regale, explicitly states this division of functions. Traction trebuchets were to be used for hitting people in undefended areas. At the Siege of Acre (1291), both traction and counterweight trebuchets were used. The traction trebuchets provided cover fire while the counterweight trebuchets destroyed the city's fortifications. The counterweight-trebuchet could also be used for cover fire and as an anti-personnel weapon. King James I of Aragon employed this as a defensive tactic in many fortified structures and towns which proved effective. Trebuchets could cause mass casualties due to the destruction of structures. During an assault on Muntcada by King James I, a trebuchet was used to target a tower, destroying the structure and causing the consequential deaths of civilians and livestock. But typically the counterweight trebuchet was used against battlements such as parapets, other defensive structures, and the lower section of walls due to its greater accuracy and longer range, which was how it was employed by the Kingdom of Aragon.
There is some evidence that the counterweight trebuchet could be transported. Armies employed a magister tormentorum ('master of trebuchets') for the reconstruction of trebuchets after they were deconstructed for transportation to their destination, whether on carts or by ship. They could also be equipped with their own wheels, as shown in two 17th- and 18th-century Chinese illustrations, which are also the only Chinese depictions of counterweight trebuchets on land. According to Liang Jieming, the "illustration shows ... its throwing arm disassembled, its counterweight locked with supporting braces, and prepped for transport and not in battle deployment." However, according to Joseph Needham, the large tank in the middle was the counterweight, while the bulb at the end of the arm was for adjusting between fixed and swinging counterweights. Both Liang and Needham note that the illustrations are poorly drawn and confusing, leading to mislabeling. The counterweight and traction trebuchets were phased out around the mid-15th century in favor of gunpowder weapons. Decline of military use With the introduction of gunpowder, the trebuchet began to lose its place as the siege engine of choice to the cannon. Trebuchets were still used both at the siege of Burgos (1475–1476) and siege of Rhodes (1480). One of the last recorded military uses was by Hernán Cortés, at the 1521 siege of the Aztec capital Tenochtitlán. Accounts of the attack note that its use was motivated by the limited supply of gunpowder. The attempt was reportedly unsuccessful: the first projectile landed on the trebuchet itself, destroying it. In China, the last time trebuchets were seriously considered for military purposes was in 1480. Not much is heard of them afterwards. In 2024, the Israeli military made at least partial use of trebuchets against Hezbollah objectives in southern Lebanon. Other trebuchets
Hand-trebuchet

The hand-trebuchet () was a staff sling mounted on a pole, using a lever mechanism to propel projectiles. Essentially a one-man traction trebuchet, it was used by troops of the emperor Nikephoros II Phokas around 965 to disrupt enemy formations in the open field. It was also mentioned in the Taktika of general Nikephoros Ouranos (c. 1000), and listed in De obsidione toleranda (author anonymous) as a form of artillery. In China, the hand-trebuchet (shoupao) was invented by Liu Yongxi and presented to the emperor in 1002. It was a pole with a pin at its upper end that acted as a fulcrum for the arm. The pole could be fixed in the ground, and the user could then throw missiles at the enemy from a static position.

Hybrid trebuchet

According to Paul E. Chevedden, a hybrid trebuchet existed that used both counterweight and human propulsion. However, no illustrations or descriptions of the device exist from the time when it was supposed to have been used. The entire argument for the existence of hybrid trebuchets rests on accounts of increasingly effective siege weapons. Peter Purton suggests that this was simply because the machines became larger. The earliest depiction of a hybrid trebuchet is dated to 1462, when trebuchets had already become obsolete due to cannons.

Couillard

The couillard is a smaller version of a counterweight trebuchet with a single frame instead of the usual double "A" frames. The counterweight is split into two halves to avoid hitting the center frame.

Comparison of different artillery weapons (Roman torsion engines, Chinese trebuchets, counterweight trebuchets (estimates), siege crossbows, reconstructed traction trebuchets, reconstructed counterweight trebuchets)

Modern use

Recreation and education

Most trebuchet use in recent centuries has been for recreational or educational, rather than military, purposes. New machines have been constructed and old ones restored by living history enthusiasts for historical re-enactments and other historical celebrations. As their construction is substantially simpler than that of modern weapons, trebuchets also serve as the object of engineering challenges. The methods of trebuchet construction were lost at the beginning of the 16th century. In 1984, the French engineer Renaud Beffeyte made the first modern reconstruction of a trebuchet, based on documents from 1324.
The largest currently functioning trebuchet in the world is the machine at Warwick Castle, England, constructed in 2005. Based on historical designs, it stands tall and throws missiles typically of 36 kg (80 lb) up to . The trebuchet attracted significant attention from news sources when, in 2015, a burning missile fired from the siege engine struck and damaged a Victorian-era boathouse situated nearby on the River Avon, inadvertently demonstrating the weapon's power. It is built on the design of a similar trebuchet at Middelaldercentret in Denmark. In 1989, Middelaldercentret became the first place in the modern era to have a working trebuchet. Trebuchets compete in one of the classifications of machines used to hurl pumpkins at the annual pumpkin chucking contest held in Sussex County, Delaware, U.S. The record-holder in that contest for trebuchets is the Yankee Siege II from New Hampshire, which at the 2013 WCPC Championship tossed a pumpkin 2835.8 ft (864.35 metres). The , trebuchet flings the standard pumpkins specified for all entries in the WCPC competition. A large trebuchet was tested in late 2017 in Belfast as part of the set for the television series Game of Thrones. A large trebuchet based on Edward I's "Warwolf" was constructed for a scene in David Mackenzie's movie Outlaw King (2018) about Robert the Bruce, King of Scots. During the film, it hurls an incendiary projectile at Stirling Castle. It recreates the historical account that the original took some three months to build and that Edward would not let his enemy surrender until he could use it. In recent years several trebuchets have been created that are capable of throwing cars. In the episode "Carnage A Trois" in series 4 of The Grand Tour, the presenters use a trebuchet to allegedly sling a Citroën C3 Pluriel from the White Cliffs of Dover across the English Channel. The Stamford-based YouTube personality and inventor Colin Furze created a high trebuchet capable of throwing a washing machine in December 2020.

Developments
Although rarely used as a weapon today, trebuchets maintain the interest of professional and hobbyist engineers. One modern technological development, especially for the competitive pumpkin-hurling events, is the "floating arms" design. Instead of using the traditional axle fixed to a frame, these devices are mounted on wheels that roll on a track parallel to the ground, with a counterweight that falls directly downward upon release, allowing for greater efficiency by increasing the proportion of energy transferred to the projectile. A more radical design; Jonathan, Orion, and Emmerson Stapleton's "walking arm", described as "a stick falling over with a huge counterweight on top of the stick" debuted in 2016 and in 2018 won both the Grand Champion Best Design and Middleweight Open Division of the 10th annual Vermont Pumpkin Chuckin Festival. Another recent development is the "flywheel trebuchet," in which a flywheel is spun into rapid rotation to build up momentum before release. Uses in activism and insurgency In 2013, during the Syrian civil war, rebels were filmed using a trebuchet in the Battle of Aleppo. The trebuchet was used to project explosives at government troops. In 2014, during the Hrushevskoho street riots in Ukraine, rioters used an improvised trebuchet to throw bricks and Molotov cocktails at the Berkut. Uses in regular armies In 2024, the IDF used a trebuchet to hurl flaming projectiles into Lebanon. The goal was to burn down the thicket that grew alongside the border wall between Israel and Lebanon, so it couldn't be used as cover by Hezbollah troops. The IDF later issued a response to suggest that the trebuchet's use was a "local initiative", rather than a widely-used tool in the Israeli military. Gallery
Simulation
A simulation is an imitative representation of a process or system that could exist in the real world. In this broad sense, simulation can often be used interchangeably with model. Sometimes a clear distinction between the two terms is made, in which simulations require the use of models; the model represents the key characteristics or behaviors of the selected system or process, whereas the simulation represents the evolution of the model over time. Another way to distinguish between the terms is to define simulation as experimentation with the help of a model. This definition includes time-independent simulations. Often, computers are used to execute the simulation. Simulation is used in many contexts, such as simulation of technology for performance tuning or optimizing, safety engineering, testing, training, education, and video games. Simulation is also used with scientific modelling of natural systems or human systems to gain insight into their functioning, as in economics. Simulation can be used to show the eventual real effects of alternative conditions and courses of action. Simulation is also used when the real system cannot be engaged, because it may not be accessible, or it may be dangerous or unacceptable to engage, or it is being designed but not yet built, or it may simply not exist. Key issues in modeling and simulation include the acquisition of valid sources of information about the relevant selection of key characteristics and behaviors used to build the model, the use of simplifying approximations and assumptions within the model, and fidelity and validity of the simulation outcomes. Procedures and protocols for model verification and validation are an ongoing field of academic study, refinement, research and development in simulations technology or practice, particularly in the work of computer simulation. Classification and terminology Historically, simulations used in different fields developed largely independently, but 20th-century studies of systems theory and cybernetics combined with spreading use of computers across all those fields have led to some unification and a more systematic view of the concept. Physical simulation refers to simulation in which physical objects are substituted for the real thing (some circles use the term for computer simulations modelling selected laws of physics, but this article does not). These physical objects are often chosen because they are smaller or cheaper than the actual object or system. Interactive simulation is a special kind of physical simulation, often referred to as a human-in-the-loop simulation, in which physical simulations include human operators, such as in a flight simulator, sailing simulator, or driving simulator. Continuous simulation is a simulation based on continuous-time rather than discrete-time steps, using numerical integration of differential equations.
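As a minimal illustration of the continuous-simulation category just described, the Python sketch below numerically integrates a simple differential equation (Newton's law of cooling) with fixed Euler steps; the coefficient, initial temperature, and time step are arbitrary assumptions chosen only for the example.

def simulate_cooling(t_end=10.0, dt=0.01, temp=90.0, ambient=20.0, k=0.35):
    """Continuous simulation: Euler integration of dT/dt = -k * (T - ambient).

    The model is Newton's law of cooling; all numbers are illustrative.
    """
    t = 0.0
    while t < t_end:
        temp += dt * (-k * (temp - ambient))   # advance the model one small time step
        t += dt
    return temp

print(f"temperature after 10 s: {simulate_cooling():.1f} degrees")

Shrinking the time step makes the simulated trajectory converge toward the exact solution of the underlying differential equation, which is the defining trade-off of continuous simulation.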
Discrete-event simulation studies systems whose states change their values only at discrete times. For example, a simulation of an epidemic could change the number of infected people at time instants when susceptible individuals get infected or when infected individuals recover. Stochastic simulation is a simulation where some variable or process is subject to random variations and is projected using Monte Carlo techniques with pseudo-random numbers. Thus replicated runs with the same boundary conditions will each produce different results within a specific confidence band. Deterministic simulation is a simulation which is not stochastic: the variables are regulated by deterministic algorithms, so replicated runs from the same boundary conditions always produce identical results. Hybrid simulation (or combined simulation) corresponds to a mix between continuous and discrete-event simulation; the differential equations are integrated numerically between two sequential events to reduce the number of discontinuities. A stand-alone simulation is a simulation running on a single workstation by itself. A distributed simulation is one which uses more than one computer simultaneously, to guarantee access from/to different resources (e.g. multiple users operating different systems, or distributed data sets); a classical example is Distributed Interactive Simulation (DIS). Parallel simulation speeds up a simulation's execution by concurrently distributing its workload over multiple processors, as in high-performance computing. Interoperable simulation is where multiple models or simulators (often defined as federates) interoperate locally or distributed over a network; a classical example is the High-Level Architecture. Modeling and simulation as a service is where simulation is accessed as a service over the web. Modeling, interoperable simulation and serious games is where serious game approaches (e.g. game engines and engagement methods) are integrated with interoperable simulation. Simulation fidelity is used to describe the accuracy of a simulation and how closely it imitates its real-life counterpart. Fidelity is broadly classified into one of three categories: low, medium, and high. Specific descriptions of fidelity levels are subject to interpretation, but the following generalizations can be made: Low – the minimum simulation required for a system to accept inputs and provide outputs; Medium – responds automatically to stimuli, with limited accuracy; High – nearly indistinguishable from, or as close as possible to, the real system. A synthetic environment is a computer simulation that can be included in human-in-the-loop simulations.
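A minimal Python sketch can tie the discrete-event and stochastic categories above together: the state (the number of infected people) changes only at discrete event times, each event is drawn at random, and replicated runs with identical boundary conditions give different results. The population size, rates, and time horizon are arbitrary illustrative assumptions, not parameters of any real epidemic.

import random

def epidemic_run(seed, population=1000, initial_infected=5,
                 infection_rate=0.3, recovery_rate=0.1, horizon=100.0):
    """Minimal stochastic, discrete-event (Gillespie-style) epidemic simulation.

    The state changes only at event times: an infection or a recovery.
    All parameter values are illustrative, not taken from real data.
    """
    rng = random.Random(seed)
    susceptible = population - initial_infected
    infected, recovered, t = initial_infected, 0, 0.0
    while infected > 0 and t < horizon:
        inf_rate = infection_rate * susceptible * infected / population
        rec_rate = recovery_rate * infected
        total = inf_rate + rec_rate
        t += rng.expovariate(total)              # time to the next discrete event
        if rng.random() < inf_rate / total:      # the next event is an infection
            susceptible -= 1
            infected += 1
        else:                                    # the next event is a recovery
            infected -= 1
            recovered += 1
    return recovered

# Replicated runs with identical boundary conditions differ because the
# simulation is stochastic; a deterministic simulation would not.
print([epidemic_run(seed) for seed in range(5)])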
Simulation in failure analysis refers to simulation in which we recreate the environment and conditions needed to identify the cause of an equipment failure. This can be the best and fastest method of identifying the failure cause.

Computer simulation

A computer simulation (or "sim") is an attempt to model a real-life or hypothetical situation on a computer so that it can be studied to see how the system works. By changing variables in the simulation, predictions may be made about the behaviour of the system. It is a tool to virtually investigate the behaviour of the system under study. Computer simulation has become a useful part of modeling many natural systems in physics, chemistry and biology, and human systems in economics and social science (e.g., computational sociology), as well as in engineering, to gain insight into the operation of those systems. A good example of the usefulness of using computers to simulate can be found in the field of network traffic simulation. In such simulations, the model's behaviour changes in each run according to the set of initial parameters assumed for the environment. Traditionally, the formal modeling of systems has been via a mathematical model, which attempts to find analytical solutions enabling the prediction of the behaviour of the system from a set of parameters and initial conditions. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed-form analytic solutions are not possible. There are many different types of computer simulation; the common feature they all share is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states would be prohibitive or impossible. Several software packages exist for running computer-based simulation modeling (e.g. Monte Carlo simulation, stochastic modeling, multimethod modeling) that make such modeling almost effortless. Modern usage of the term "computer simulation" may encompass virtually any computer-based representation.

Computer science

In computer science, simulation has some specialized meanings: Alan Turing used the term simulation to refer to what happens when a universal machine executes a state transition table (in modern terminology, a computer runs a program) that describes the state transitions, inputs and outputs of a subject discrete-state machine. The computer simulates the subject machine. Accordingly, in theoretical computer science the term simulation is a relation between state transition systems, useful in the study of operational semantics.
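In Turing's sense just described, the host computer runs a program that is nothing more than the subject machine's state transition table. The Python sketch below does this for a hypothetical two-state machine; the table, state names, and input string are invented purely for illustration.

def simulate_machine(table, start_state, inputs):
    """Simulate a discrete-state machine from its state transition table.

    `table` maps (state, input) -> (next_state, output); executing it is the
    host computer "simulating" the subject machine.
    """
    state, outputs = start_state, []
    for symbol in inputs:
        state, out = table[(state, symbol)]
        outputs.append(out)
    return state, outputs

# Hypothetical subject machine: toggles its state on '1' and reports each step.
table = {
    ("even", "0"): ("even", "stay"),
    ("even", "1"): ("odd", "flip"),
    ("odd", "0"): ("odd", "stay"),
    ("odd", "1"): ("even", "flip"),
}
print(simulate_machine(table, "even", "1011"))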
Less theoretically, an interesting application of computer simulation is to simulate computers using computers. In computer architecture, a type of simulator, typically called an emulator, is often used to execute a program that has to run on some inconvenient type of computer (for example, a newly designed computer that has not yet been built or an obsolete computer that is no longer available), or in a tightly controlled testing environment (see Computer architecture simulator and Platform virtualization). For example, simulators have been used to debug a microprogram or sometimes commercial application programs, before the program is downloaded to the target machine. Since the operation of the computer is simulated, all of the information about the computer's operation is directly available to the programmer, and the speed and execution of the simulation can be varied at will. Simulators may also be used to interpret fault trees, or test VLSI logic designs before they are constructed. Symbolic simulation uses variables to stand for unknown values. In the field of optimization, simulations of physical processes are often used in conjunction with evolutionary computation to optimize control strategies. Simulation in education and training Simulation is extensively used for educational purposes. It is used for cases where it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations they will spend time learning valuable lessons in a "safe" virtual environment yet living a lifelike experience (or at least it is the goal). Often the convenience is to permit mistakes during training for a safety-critical system. Simulations in education are somewhat like training simulations. They focus on specific tasks. The term 'microworld' is used to refer to educational simulations which model some abstract concept rather than simulating a realistic object or environment, or in some cases model a real-world environment in a simplistic way so as to help a learner develop an understanding of the key concepts. Normally, a user can create some sort of construction within the microworld that will behave in a way consistent with the concepts being modeled. Seymour Papert was one of the first to advocate the value of microworlds, and the Logo programming environment developed by Papert is one of the most well-known microworlds. Project management simulation is increasingly used to train students and professionals in the art and science of project management. Using simulation for project management training improves learning retention and enhances the learning process.
Social simulations may be used in social science classrooms to illustrate social and political processes in anthropology, economics, history, political science, or sociology courses, typically at the high school or university level. These may, for example, take the form of civics simulations, in which participants assume roles in a simulated society, or international relations simulations in which participants engage in negotiations, alliance formation, trade, diplomacy, and the use of force. Such simulations might be based on fictitious political systems, or be based on current or historical events. An example of the latter would be Barnard College's Reacting to the Past series of historical educational games. The National Science Foundation has also supported the creation of reacting games that address science and math education. In social media simulations, participants train communication with critics and other stakeholders in a private environment. In recent years, there has been increasing use of social simulations for staff training in aid and development agencies. The Carana simulation, for example, was first developed by the United Nations Development Programme, and is now used in a very revised form by the World Bank for training staff to deal with fragile and conflict-affected countries. Military uses for simulation often involve aircraft or armoured fighting vehicles, but can also target small arms and other weapon systems training. Specifically, virtual firearms ranges have become the norm in most military training processes and there is a significant amount of data to suggest this is a useful tool for armed professionals. Virtual simulation A virtual simulation is a category of simulation that uses simulation equipment to create a simulated world for the user. Virtual simulations allow users to interact with a virtual world. Virtual worlds operate on platforms of integrated software and hardware components. In this manner, the system can accept input from the user (e.g., body tracking, voice/sound recognition, physical controllers) and produce output to the user (e.g., visual display, aural display, haptic display) . Virtual simulations use the aforementioned modes of interaction to produce a sense of immersion for the user. Virtual simulation input hardware There is a wide variety of input hardware available to accept user input for virtual simulations. The following list briefly describes several of them:
Body tracking: The motion capture method is often used to record the user's movements and translate the captured data into inputs for the virtual simulation. For example, if a user physically turns their head, the motion would be captured by the simulation hardware in some way and translated to a corresponding shift in view within the simulation. Capture suits and/or gloves may be used to capture movements of users body parts. The systems may have sensors incorporated inside them to sense movements of different body parts (e.g., fingers). Alternatively, these systems may have exterior tracking devices or marks that can be detected by external ultrasound, optical receivers or electromagnetic sensors. Internal inertial sensors are also available on some systems. The units may transmit data either wirelessly or through cables. Eye trackers can also be used to detect eye movements so that the system can determine precisely where a user is looking at any given instant. Physical controllers: Physical controllers provide input to the simulation only through direct manipulation by the user. In virtual simulations, tactile feedback from physical controllers is highly desirable in a number of simulation environments. Omnidirectional treadmills can be used to capture the users locomotion as they walk or run. High fidelity instrumentation such as instrument panels in virtual aircraft cockpits provides users with actual controls to raise the level of immersion. For example, pilots can use the actual global positioning system controls from the real device in a simulated cockpit to help them practice procedures with the actual device in the context of the integrated cockpit system. Voice/sound recognition: This form of interaction may be used either to interact with agents within the simulation (e.g., virtual people) or to manipulate objects in the simulation (e.g., information). Voice interaction presumably increases the level of immersion for the user. Users may use headsets with boom microphones, lapel microphones or the room may be equipped with strategically located microphones.
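As a toy illustration of the translation step described above (captured motion turned into a corresponding shift in view), the following Python sketch applies a yaw angle, such as might be reported by a head tracker, to a 2D view direction. The vector convention and the 30-degree turn are assumptions made only for this example, not a description of any particular tracking system.

import math

def rotate_view_yaw(view, yaw_deg):
    """Rotate a 2D (x, z) view direction by a yaw angle captured from head tracking."""
    a = math.radians(yaw_deg)
    x, z = view
    return (x * math.cos(a) - z * math.sin(a),
            x * math.sin(a) + z * math.cos(a))

# A 30-degree head turn shifts the rendered view direction accordingly.
print(rotate_view_yaw((0.0, 1.0), 30.0))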
Current research into user input systems Research in future input systems holds a great deal of promise for virtual simulations. Systems such as brain–computer interfaces (BCIs) offer the ability to further increase the level of immersion for virtual simulation users. Lee, Keinrath, Scherer, Bischof, Pfurtscheller proved that naïve subjects could be trained to use a BCI to navigate a virtual apartment with relative ease. Using the BCI, the authors found that subjects were able to freely navigate the virtual environment with relatively minimal effort. It is possible that these types of systems will become standard input modalities in future virtual simulation systems. Virtual simulation output hardware There is a wide variety of output hardware available to deliver a stimulus to users in virtual simulations. The following list briefly describes several of them:
Visual display: Visual displays provide the visual stimulus to the user. Stationary displays can vary from a conventional desktop display to 360-degree wrap-around screens to stereo three-dimensional screens. Conventional desktop displays can vary in size from . Wrap-around screens are typically used in what is known as a cave automatic virtual environment (CAVE). Stereo three-dimensional screens produce three-dimensional images either with or without special glasses, depending on the design. Head-mounted displays (HMDs) have small displays that are mounted on headgear worn by the user. These systems are connected directly into the virtual simulation to provide the user with a more immersive experience. Weight, update rate, and field of view are some of the key variables that differentiate HMDs. Naturally, heavier HMDs are undesirable as they cause fatigue over time. If the update rate is too slow, the system is unable to update the displays fast enough to correspond with a quick head turn by the user. Slower update rates tend to cause simulation sickness and disrupt the sense of immersion. Field of view, or the angular extent of the world that is seen at a given moment, can vary from system to system and has been found to affect the user's sense of immersion. Aural display: Several different types of audio systems exist to help the user hear and localize sounds spatially. Special software can be used to produce 3D audio effects, creating the illusion that sound sources are placed within a defined three-dimensional space around the user. Stationary conventional speaker systems may be used to provide dual or multi-channel surround sound. However, external speakers are not as effective as headphones in producing 3D audio effects. Conventional headphones offer a portable alternative to stationary speakers. They also have the added advantages of masking real-world noise and facilitating more effective 3D audio effects. Haptic display: These displays provide a sense of touch to the user (haptic technology). This type of output is sometimes referred to as force feedback. Tactile displays use different types of actuators such as inflatable bladders, vibrators, low-frequency sub-woofers, pin actuators and/or thermo-actuators to produce sensations for the user. End-effector displays can respond to users' inputs with resistance and force. These systems are often used in medical applications for remote surgeries that employ robotic instruments.
Vestibular display: These displays provide a sense of motion to the user (motion simulator). They often take the form of motion bases for virtual vehicle simulations such as driving simulators or flight simulators. Motion bases are fixed in place but use actuators to move the simulator in ways that can produce the sensations of pitching, yawing, or rolling. The simulators can also move in such a way as to produce a sense of acceleration on all axes (e.g., the motion base can produce the sensation of falling).
Simulation
Wikipedia
106
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Clinical healthcare simulators Clinical healthcare simulators are increasingly being developed and deployed to teach therapeutic and diagnostic procedures, as well as medical concepts and decision making, to personnel in the health professions. Simulators have been developed for training procedures ranging from basics such as blood draw to laparoscopic surgery and trauma care. They are also important in helping to prototype new devices for biomedical engineering problems. Currently, simulators are applied to research and develop tools for new therapies, treatments and early diagnosis in medicine. Many medical simulators involve a computer connected to a plastic simulation of the relevant anatomy. Sophisticated simulators of this type employ a life-size mannequin that responds to injected drugs and can be programmed to create simulations of life-threatening emergencies. In other simulations, visual components of the procedure are reproduced by computer graphics techniques, while touch-based components are reproduced by haptic feedback devices combined with physical simulation routines computed in response to the user's actions. Medical simulations of this sort will often use 3D CT or MRI scans of patient data to enhance realism. Some medical simulations are developed to be widely distributed (such as web-enabled simulations and procedural simulations that can be viewed via standard web browsers) and can be interacted with using standard computer interfaces, such as the keyboard and mouse. Placebo An important medical application of a simulator—although, perhaps, denoting a slightly different meaning of simulator—is the use of a placebo drug, a formulation that simulates the active drug in trials of drug efficacy. Improving patient safety Patient safety is a concern in the medical industry. Patients have been known to suffer injuries and even death due to management error and failure to apply best standards of care and training. According to Building a National Agenda for Simulation-Based Medical Education (Eder-Van Hook, Jackie, 2004), "a health care provider's ability to react prudently in an unexpected situation is one of the most critical factors in creating a positive outcome in medical emergency, regardless of whether it occurs on the battlefield, freeway, or hospital emergency room." Eder-Van Hook (2004) also noted that medical errors kill up to 98,000 people per year, at an estimated cost of $37 to $50 million, with preventable adverse events estimated at $17 to $29 billion per year.
Simulation
Wikipedia
466
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Simulation is being used to study patient safety, as well as to train medical professionals. Studying patient safety and safety interventions in healthcare is challenging because there is a lack of experimental control (i.e., patient complexity, system/process variances) for determining whether an intervention made a meaningful difference (Groves & Manges, 2017). An example of innovative simulation to study patient safety comes from nursing research. Groves et al. (2016) used a high-fidelity simulation to examine nursing safety-oriented behaviors during times such as change-of-shift report. However, the value of simulation interventions in translating to clinical practice is still debatable. As Nishisaki states, "there is good evidence that simulation training improves provider and team self-efficacy and competence on manikins. There is also good evidence that procedural simulation improves actual operational performance in clinical settings." However, there is a need for improved evidence to show that crew resource management training through simulation translates into improved performance. One of the largest challenges is showing that team simulation improves team operational performance at the bedside. Although evidence that simulation-based training actually improves patient outcome has been slow to accrue, today the ability of simulation to provide hands-on experience that translates to the operating room is no longer in doubt. One of the largest factors that might impact the ability of training to affect the work of practitioners at the bedside is the ability to empower frontline staff (Stewart, Manges, Ward, 2015). Another example of an attempt to improve patient safety through the use of simulation training is patient care training delivered just-in-time and/or just-in-place. This training consists of 20 minutes of simulated training just before workers report to shift. One study found that just-in-time training improved the transition to the bedside. The conclusion, as reported in Nishisaki's (2008) work, was that the simulation training improved resident participation in real cases but did not sacrifice the quality of service. It could therefore be hypothesized that, by increasing the number of highly trained residents through the use of simulation training, simulation training does, in fact, increase patient safety. History of simulation in healthcare The first medical simulators were simple models of human patients.
Simulation
Wikipedia
454
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Since antiquity, these representations in clay and stone were used to demonstrate clinical features of disease states and their effects on humans. Models have been found in many cultures and continents. These models have been used in some cultures (e.g., Chinese culture) as a "diagnostic" instrument, allowing women to consult male physicians while maintaining social laws of modesty. Models are used today to help students learn the anatomy of the musculoskeletal system and organ systems. In 2002, the Society for Simulation in Healthcare (SSH) was formed to become a leader in international, interprofessional advancement of the application of medical simulation in healthcare. The need for a "uniform mechanism to educate, evaluate, and certify simulation instructors for the health care profession" was recognized by McGaghie et al. in their critical review of simulation-based medical education research. In 2012 the SSH piloted two new certifications to provide recognition to educators in an effort to meet this need. Type of models Active models Active models that attempt to reproduce living anatomy or physiology are recent developments. The famous "Harvey" mannequin was developed at the University of Miami and is able to recreate many of the physical findings of the cardiology examination, including palpation, auscultation, and electrocardiography. Interactive models More recently, interactive models have been developed that respond to actions taken by a student or physician. Until recently, these simulations were two-dimensional computer programs that acted more like a textbook than a patient. Computer simulations have the advantage of allowing a student to make judgments, and also to make errors. The process of iterative learning through assessment, evaluation, decision making, and error correction creates a much stronger learning environment than passive instruction. Computer simulators Simulators have been proposed as an ideal tool for assessment of students' clinical skills. For patients, "cybertherapy" can be used for sessions simulating traumatic experiences, from fear of heights to social anxiety. Programmed patients and simulated clinical situations, including mock disaster drills, have been used extensively for education and evaluation. These "lifelike" simulations are expensive and lack reproducibility. A fully functional "3Di" simulator would be the most specific tool available for teaching and measurement of clinical skills. Gaming platforms have been applied to create these virtual medical environments as an interactive method for learning and applying information in a clinical context.
Simulation
Wikipedia
483
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Immersive disease state simulations allow a doctor or HCP to experience what a disease actually feels like. Using sensors and transducers, symptomatic effects can be delivered to a participant, allowing them to experience the patient's disease state. Such a simulator meets the goals of an objective and standardized examination for clinical competence. This system is superior to examinations that use "standard patients" because it permits the quantitative measurement of competence, as well as reproducing the same objective findings. Simulation in entertainment Simulation in entertainment encompasses many large and popular industries such as film, television, video games (including serious games) and rides in theme parks. Although modern simulation is thought to have its roots in training and the military, in the 20th century it also became a conduit for enterprises which were more hedonistic in nature. History of visual simulation in film and games Early history (1940s and 1950s) The first simulation game may have been created as early as 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann. This was a straightforward game that simulated a missile being fired at a target. The curve of the missile and its speed could be adjusted using several knobs. In 1958, a computer game called Tennis for Two was created by William Higinbotham; it simulated a tennis game between two players who could both play at the same time using hand controls and was displayed on an oscilloscope. This was one of the first electronic video games to use a graphical display. 1970s and early 1980s Computer-generated imagery was used in film to simulate objects as early as 1972 in A Computer Animated Hand, parts of which were shown on the big screen in the 1976 film Futureworld. This was followed by the "targeting computer" that young Skywalker turns off in the 1977 film Star Wars. The film Tron (1982) was the first film to use computer-generated imagery for more than a couple of minutes. Advances in technology in the 1980s caused 3D simulation to become more widely used and it began to appear in movies and in computer-based games such as Atari's Battlezone (1980) and Acornsoft's Elite (1984), one of the first wire-frame 3D graphics games for home computers.
Simulation
Wikipedia
451
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Pre-virtual cinematography era (early 1980s to 1990s) Advances in technology in the 1980s made computers more affordable and more capable than they were in previous decades, which facilitated the rise of computer gaming. The first video game consoles released in the 1970s and early 1980s fell prey to the industry crash in 1983, but in 1985, Nintendo released the Nintendo Entertainment System (NES), which became one of the best-selling consoles in video game history. In the 1990s, computer games became widely popular with the release of such games as The Sims and Command & Conquer and the still increasing power of desktop computers. Today, computer simulation games such as World of Warcraft are played by millions of people around the world. In 1993, the film Jurassic Park became the first popular film to use computer-generated graphics extensively, integrating the simulated dinosaurs almost seamlessly into live action scenes. This event transformed the film industry; in 1995, the film Toy Story was the first film to use only computer-generated images, and by the new millennium computer-generated graphics were the leading choice for special effects in films. Virtual cinematography (early 2000s–present) The advent of virtual cinematography in the early 2000s has led to an explosion of movies that would have been impossible to shoot without it. Classic examples are the digital look-alikes of Neo, Smith and other characters in the Matrix sequels and the extensive use of physically impossible camera runs in The Lord of the Rings trilogy. The terminal seen in the TV series Pan Am no longer existed during the filming of the series (aired 2011–2012), which was no problem, as it was recreated with virtual cinematography using automated viewpoint finding and matching in conjunction with compositing of real and simulated footage, which has been the bread and butter of movie artists in and around film studios since the early 2000s. Computer-generated imagery is "the application of the field of 3D computer graphics to special effects". This technology is used for visual effects because it is high in quality, controllable, and can create effects that would not be feasible using any other technology, either because of cost, resources or safety. Computer-generated graphics can be seen in many live-action movies today, especially those of the action genre. Further, computer-generated imagery has almost completely supplanted hand-drawn animation in children's movies, which are increasingly computer-generated only. Examples of movies that use computer-generated imagery include Finding Nemo, 300 and Iron Man. Examples of non-film entertainment simulation
Simulation
Wikipedia
501
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Simulation games Simulation games, as opposed to other genres of video and computer games, represent or simulate an environment accurately. Moreover, they represent the interactions between the playable characters and the environment realistically. These kinds of games are usually more complex in terms of gameplay. Simulation games have become incredibly popular among people of all ages. Popular simulation games include SimCity and Tiger Woods PGA Tour. There are also flight simulator and driving simulator games. Theme park rides Simulators have been used for entertainment since the Link Trainer in the 1930s. The first modern simulator ride to open at a theme park was Disney's Star Tours in 1987, soon followed by Universal's The Funtastic World of Hanna-Barbera in 1990, which was the first ride to be done entirely with computer graphics. Simulator rides are the progeny of military training simulators and commercial simulators, but they are different in a fundamental way. While military training simulators react realistically to the input of the trainee in real time, ride simulators only feel like they move realistically and move according to prerecorded motion scripts. One of the first simulator rides, Star Tours, which cost $32 million, used a hydraulically actuated motion-base cabin whose movement was programmed using a joystick. Today's simulator rides, such as The Amazing Adventures of Spider-Man, include elements to increase the amount of immersion experienced by the riders, such as 3D imagery, physical effects (spraying water or producing scents), and movement through an environment. Simulation and manufacturing Manufacturing simulation represents one of the most important applications of simulation. This technique represents a valuable tool used by engineers when evaluating the effect of capital investment in equipment and physical facilities like factory plants, warehouses, and distribution centers. Simulation can be used to predict the performance of an existing or planned system and to compare alternative solutions for a particular design problem. Another important goal of simulation in manufacturing systems is to quantify system performance. Common measures of system performance include the following: throughput under average and peak loads; system cycle time (how long it takes to produce one part); utilization of resources, labor, and machines; bottlenecks and choke points; queuing at work locations; queuing and delays caused by material-handling devices and systems; work-in-process (WIP) storage needs; staffing requirements; effectiveness of scheduling systems; and effectiveness of control systems. More examples of simulation Automobiles
Simulation
Wikipedia
466
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
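As a hedged illustration of how a discrete-event simulation can estimate manufacturing measures such as throughput, cycle time and queuing, the following minimal sketch uses the open-source SimPy library to model a single machining station with random arrivals and processing times. All numbers (arrival rate, processing time, run length) are illustrative assumptions rather than data from the text.

# Minimal discrete-event sketch of a one-machine production line (illustrative only).
# Assumes the SimPy library is installed; all rates and times are made-up example values.
import random
import simpy

ARRIVAL_MEAN = 5.0      # average minutes between part arrivals (assumed)
PROCESS_MEAN = 4.0      # average machining time per part in minutes (assumed)
SIM_TIME = 8 * 60       # simulate one 8-hour shift

cycle_times = []        # time each part spends in the system (queue + machining)

def part(env, name, machine):
    arrived = env.now
    with machine.request() as req:          # queue for the machine
        yield req
        yield env.timeout(random.expovariate(1.0 / PROCESS_MEAN))
    cycle_times.append(env.now - arrived)   # record cycle time at completion

def source(env, machine):
    i = 0
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        i += 1
        env.process(part(env, f"part-{i}", machine))

random.seed(42)
env = simpy.Environment()
machine = simpy.Resource(env, capacity=1)   # one machining station
env.process(source(env, machine))
env.run(until=SIM_TIME)

throughput = len(cycle_times) / (SIM_TIME / 60.0)   # parts per hour
avg_cycle = sum(cycle_times) / len(cycle_times)
print(f"Throughput: {throughput:.1f} parts/hour, average cycle time: {avg_cycle:.1f} min")

Running the same sketch with different capacities or processing times is exactly the kind of what-if comparison the manufacturing passage describes, with the printed indicators standing in for the performance measures listed there.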
An automobile simulator provides an opportunity to reproduce the characteristics of real vehicles in a virtual environment. It replicates the external factors and conditions with which a vehicle interacts, enabling a driver to feel as if they are sitting in the cab of their own vehicle. Scenarios and events are replicated with sufficient realism to ensure that drivers become fully immersed in the experience rather than simply viewing it as an educational exercise. The simulator provides a constructive experience for the novice driver and enables more complex exercises to be undertaken by the more mature driver. For novice drivers, truck simulators provide an opportunity to begin their career by applying best practice. For mature drivers, simulation provides the ability to enhance good driving or to detect poor practice and to suggest the necessary steps for remedial action. For companies, it provides an opportunity to educate staff in the driving skills that achieve reduced maintenance costs, improved productivity and, most importantly, ensure the safety of their actions in all possible situations. Biomechanics A biomechanics simulator is a simulation platform for creating dynamic mechanical models built from combinations of rigid and deformable bodies, joints, constraints, and various force actuators. It is specialized for creating biomechanical models of human anatomical structures, with the intention to study their function and eventually assist in the design and planning of medical treatment. A biomechanics simulator is used to analyze walking dynamics, study sports performance, simulate surgical procedures, analyze joint loads, design medical devices, and animate human and animal movement. A neuromechanical simulator combines biomechanical and biologically realistic neural network simulation, allowing the user to test hypotheses on the neural basis of behavior in a physically accurate 3-D virtual environment. City and urban A city simulator can be a city-building game but can also be a tool used by urban planners to understand how cities are likely to evolve in response to various policy decisions. AnyLogic is an example of a modern, large-scale urban simulator designed for use by urban planners. City simulators are generally agent-based simulations with explicit representations for land use and transportation. UrbanSim and LEAM are examples of large-scale urban simulation models that are used by metropolitan planning agencies and military bases for land use and transportation planning. Christmas
Simulation
Wikipedia
459
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
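To make the idea of an agent-based urban simulation concrete, here is a deliberately toy, hypothetical sketch: household agents on a grid repeatedly relocate to vacant cells that shorten their commute to a single job centre, and the model tracks the average commute over time. The grid size, agent count and utility rule are invented for illustration and do not describe UrbanSim, LEAM, AnyLogic, or any other real model.

# Toy agent-based land-use sketch (illustrative only; not a real urban model).
import random

GRID = 20                      # 20 x 20 grid of cells (assumed)
N_HOUSEHOLDS = 150             # number of household agents (assumed)
JOB_CENTRE = (GRID // 2, GRID // 2)

random.seed(1)
cells = [(x, y) for x in range(GRID) for y in range(GRID)]
homes = set(random.sample(cells, N_HOUSEHOLDS))   # initial household locations

def commute(cell):
    # Manhattan distance to the job centre stands in for commute cost.
    return abs(cell[0] - JOB_CENTRE[0]) + abs(cell[1] - JOB_CENTRE[1])

for step in range(10):                            # ten relocation rounds
    for home in list(homes):
        vacant = [c for c in cells if c not in homes]
        candidate = random.choice(vacant)
        # Relocate only if the candidate cell gives a shorter commute.
        if commute(candidate) < commute(home):
            homes.remove(home)
            homes.add(candidate)
    avg = sum(commute(h) for h in homes) / len(homes)
    print(f"round {step + 1}: average commute = {avg:.2f}")

Real planning models add land prices, transport networks and many agent types, but the structure is the same: agents with simple decision rules, iterated over time, producing aggregate indicators for planners to compare across policy scenarios.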
Several Christmas-themed simulations exist, many of which are centred around Santa Claus. Examples include websites which claim to allow the user to track Santa Claus. Because Santa is a legendary character and not a real, living person, it is impossible to provide actual information on his location, so services such as NORAD Tracks Santa and the Google Santa Tracker (the former of which claims to use radar and other technologies to track Santa) display fake, predetermined location information to users. Other examples are websites that claim to allow the user to email or send messages to Santa Claus. Websites such as emailSanta.com, or Santa's former page on the now-defunct Windows Live Spaces by Microsoft, use automated programs or scripts to generate personalized replies claimed to be from Santa himself based on user input. Classroom of the future The classroom of the future will probably contain several kinds of simulators, in addition to textual and visual learning tools. This will allow students to enter the clinical years better prepared and with a higher skill level. The advanced student or postgraduate will have a more concise and comprehensive method of retraining—or of incorporating new clinical procedures into their skill set—and regulatory bodies and medical institutions will find it easier to assess the proficiency and competency of individuals. The classroom of the future will also form the basis of a clinical skills unit for continuing education of medical personnel; and in the same way that periodic flight training assists airline pilots, this technology will assist practitioners throughout their career. The simulator will be more than a "living" textbook; it will become an integral part of the practice of medicine. The simulator environment will also provide a standard platform for curriculum development in institutions of medical education. Communication satellites
Simulation
Wikipedia
358
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Modern satellite communications systems (SATCOM) are often large and complex, with many interacting parts and elements. In addition, the need for broadband connectivity on a moving vehicle has increased dramatically in the past few years for both commercial and military applications. To accurately predict and deliver high quality of service, SATCOM system designers have to factor in terrain as well as atmospheric and meteorological conditions in their planning. To deal with such complexity, system designers and operators increasingly turn towards computer models of their systems to simulate real-world operating conditions and gain insights into usability and requirements prior to final product sign-off. Modeling improves the understanding of the system by enabling the SATCOM system designer or planner to simulate real-world performance by injecting the models with multiple hypothetical atmospheric and environmental conditions. Simulation is often used in the training of civilian and military personnel. This usually occurs when it is prohibitively expensive or simply too dangerous to allow trainees to use the real equipment in the real world. In such situations, they spend time learning valuable lessons in a "safe" virtual environment while living a lifelike experience (or at least that is the goal). An added convenience is that trainees can be permitted to make mistakes while training for a safety-critical system. Digital lifecycle Simulation solutions are being increasingly integrated with computer-aided solutions and processes (computer-aided design or CAD, computer-aided manufacturing or CAM, computer-aided engineering or CAE, etc.). The use of simulation throughout the product lifecycle, especially at the earlier concept and design stages, has the potential of providing substantial benefits. These benefits range from direct cost issues such as reduced prototyping and shorter time-to-market to better performing products and higher margins. However, for some companies, simulation has not provided the expected benefits. The successful use of simulation, early in the lifecycle, has been largely driven by increased integration of simulation tools with the entire set of CAD, CAM and product-lifecycle management solutions. Simulation solutions can now function across the extended enterprise in a multi-CAD environment, and include solutions for managing simulation data and processes and ensuring that simulation results are made part of the product lifecycle history. Disaster preparedness Simulation training has become a method for preparing people for disasters. Simulations can replicate emergency situations and track how learners respond, thanks to a lifelike experience. Disaster preparedness simulations can involve training on how to handle terrorism attacks, natural disasters, pandemic outbreaks, or other life-threatening emergencies.
Simulation
Wikipedia
496
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
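To illustrate the kind of calculation a SATCOM planning model automates, here is a hedged sketch of a simple downlink budget using the standard free-space path loss formula. The frequency, distance, powers and margins are invented example values, and a real model would add terrain, rain attenuation and other atmospheric effects as described in the satellite passage above.

# Simplified satellite downlink budget (illustrative values; real models add rain/terrain effects).
import math

def fspl_db(distance_km, freq_ghz):
    # Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_GHz) + 92.45
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

eirp_dbw = 52.0          # satellite EIRP (assumed)
gt_db_per_k = 15.0       # ground terminal G/T (assumed)
freq_ghz = 12.0          # Ku-band downlink (assumed)
distance_km = 38_000     # slant range to a geostationary satellite (assumed)
boltzmann_dbw = -228.6   # Boltzmann's constant in dBW/K/Hz

path_loss = fspl_db(distance_km, freq_ghz)
cn0_dbhz = eirp_dbw - path_loss + gt_db_per_k - boltzmann_dbw   # carrier-to-noise density
print(f"Path loss: {path_loss:.1f} dB, C/N0: {cn0_dbhz:.1f} dB-Hz")

A planning tool would repeat this calculation over many terminal locations, look angles and weather states to estimate the availability figures the text alludes to.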
One organization that has used simulation training for disaster preparedness is CADE (Center for Advancement of Distance Education). CADE has used a video game to prepare emergency workers for multiple types of attacks. As reported by News-Medical.Net, "The video game is the first in a series of simulations to address bioterrorism, pandemic flu, smallpox, and other disasters that emergency personnel must prepare for." Developed by a team from the University of Illinois at Chicago (UIC), the game allows learners to practice their emergency skills in a safe, controlled environment. The Emergency Simulation Program (ESP) at the British Columbia Institute of Technology (BCIT), Vancouver, British Columbia, Canada is another example of an organization that uses simulation to train for emergency situations. ESP uses simulation to train on the following situations: forest fire fighting, oil or chemical spill response, earthquake response, law enforcement, municipal firefighting, hazardous material handling, military training, and response to terrorist attacks. One feature of the simulation system is the implementation of a "Dynamic Run-Time Clock," which allows simulations to run in a 'simulated' time frame, "'speeding up' or 'slowing down' time as desired". Additionally, the system allows session recordings, picture-icon based navigation, file storage of individual simulations, multimedia components, and launching of external applications. At the University of Québec in Chicoutimi, a research team at the outdoor research and expertise laboratory (Laboratoire d'Expertise et de Recherche en Plein Air – LERPA) specializes in using wilderness backcountry accident simulations to verify emergency response coordination. Instructionally, the benefits of emergency training through simulations are that learner performance can be tracked through the system. This allows the developer to make adjustments as necessary or alert the educator to topics that may require additional attention. Other advantages are that the learner can be guided or trained on how to respond appropriately before continuing to the next emergency segment—this is an aspect that may not be available in the live environment. Some emergency training simulators also allow for immediate feedback, while other simulations may provide a summary and instruct the learner to engage in the learning topic again.
Simulation
Wikipedia
443
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
In a live emergency situation, emergency responders do not have time to waste. Simulation training in this environment provides an opportunity for learners to gather as much information as they can and practice their knowledge in a safe environment. They can make mistakes without risk of endangering lives and be given the opportunity to correct their errors to prepare for the real-life emergency. Economics Simulations in economics, and especially in macroeconomics, are used to judge the desirability of the effects of proposed policy actions, such as fiscal policy changes or monetary policy changes. A mathematical model of the economy, having been fitted to historical economic data, is used as a proxy for the actual economy; proposed values of government spending, taxation, open market operations, etc. are used as inputs to the simulation of the model, and various variables of interest such as the inflation rate, the unemployment rate, the balance of trade deficit, the government budget deficit, etc. are the outputs of the simulation. The simulated values of these variables of interest are compared for different proposed policy inputs to determine which set of outcomes is most desirable. Engineering, technology, and processes Simulation is an important feature in engineering systems or any system that involves many processes. For example, in electrical engineering, delay lines may be used to simulate propagation delay and phase shift caused by an actual transmission line. Similarly, dummy loads may be used to simulate impedance without simulating propagation, and are used in situations where propagation is unwanted. A simulator may imitate only a few of the operations and functions of the unit it simulates. Contrast with: emulate. Most engineering simulations entail mathematical modeling and computer-assisted investigation. There are many cases, however, where mathematical modeling is not reliable. Simulation of fluid dynamics problems often requires both mathematical and physical simulations. In these cases the physical models require dynamic similitude. Physical and chemical simulations also have direct practical uses, rather than research uses; in chemical engineering, for example, process simulations are used to give the process parameters immediately used for operating chemical plants, such as oil refineries. Simulators are also used for plant operator training. Such a system is called an Operator Training Simulator (OTS) and has been widely adopted by many industries from chemical to oil & gas and to the power industry. This creates a safe and realistic virtual environment to train board operators and engineers. Software such as Mimic can provide high-fidelity dynamic models of nearly all chemical plants for operator training and control system testing.
Simulation
Wikipedia
491
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
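The macroeconomics passage above describes feeding proposed policy inputs into a fitted model and comparing simulated outputs. The sketch below makes that loop concrete with a deliberately toy model: output follows a simple difference equation driven by government spending, and an ad-hoc Phillips-style relation maps the output gap to inflation. The coefficients and policy values are invented for illustration and carry no empirical meaning.

# Toy policy-comparison loop (coefficients invented; not a fitted economic model).
def simulate(gov_spending, periods=20):
    output, inflation = 100.0, 2.0               # initial output level and inflation (%)
    path = []
    for _ in range(periods):
        # Output partially persists and responds to government spending (toy multiplier).
        output = 0.8 * output + 1.5 * gov_spending
        gap = (output - 100.0) / 100.0           # output gap relative to a 100 baseline
        inflation = 0.7 * inflation + 5.0 * gap  # ad-hoc Phillips-style relation
        path.append((output, inflation))
    return path

for g in (13.0, 14.0, 15.0):                     # three candidate spending levels
    out, infl = simulate(g)[-1]
    print(f"spending={g}: final output={out:.1f}, final inflation={infl:.2f}%")

Comparing the printed trajectories across the three spending levels mirrors, in miniature, how policy analysts compare simulated outcomes to decide which proposal looks most desirable.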
Ergonomics Ergonomic simulation involves the analysis of virtual products or manual tasks within a virtual environment. In the engineering process, the aim of ergonomics is to develop and improve the design of products and work environments. Ergonomic simulation utilizes an anthropometric virtual representation of the human, commonly referred to as a mannequin or digital human model (DHM), to mimic the postures, mechanical loads, and performance of a human operator in a simulated environment such as an airplane, automobile, or manufacturing facility. DHMs are recognized as an evolving and valuable tool for performing proactive ergonomics analysis and design. The simulations employ 3D graphics and physics-based models to animate the virtual humans. Ergonomics software uses inverse kinematics (IK) capability for posing the DHMs. Software tools typically calculate biomechanical properties including individual muscle forces, joint forces and moments. Most of these tools employ standard ergonomic evaluation methods such as the NIOSH lifting equation and Rapid Upper Limb Assessment (RULA). Some simulations also analyze physiological measures including metabolism, energy expenditure, and fatigue limits. Cycle time studies, design and process validation, user comfort, reachability, and line of sight are other human factors that may be examined in ergonomic simulation packages. Modeling and simulation of a task can be performed by manually manipulating the virtual human in the simulated environment. Some ergonomics simulation software permits interactive, real-time simulation and evaluation through actual human input via motion capture technologies. However, motion capture for ergonomics requires expensive equipment and the creation of props to represent the environment or product. Some applications of ergonomic simulation include analysis of solid waste collection, disaster management tasks, interactive gaming, automotive assembly lines, virtual prototyping of rehabilitation aids, and aerospace product design. Ford engineers use ergonomics simulation software to perform virtual product design reviews. Using engineering data, the simulations assist evaluation of assembly ergonomics. The company uses Siemens' Jack and Jill ergonomics simulation software to improve worker safety and efficiency without the need to build expensive prototypes. Finance
Simulation
Wikipedia
429
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
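Since the ergonomics passage above mentions the NIOSH lifting equation, the following sketch computes the recommended weight limit (RWL) and lifting index from the standard multiplier formulas in their metric form. The frequency and coupling multipliers are normally read from NIOSH tables, so they are passed in directly here as assumed values; the example task parameters are invented.

# NIOSH revised lifting equation sketch (metric form); FM and CM come from NIOSH tables.
def rwl_kg(H, V, D, A, FM=1.0, CM=1.0):
    """H: horizontal distance (cm), V: vertical height (cm), D: vertical travel (cm),
    A: asymmetry angle (degrees), FM/CM: frequency and coupling multipliers (table values)."""
    LC = 23.0                       # load constant, kg
    HM = min(1.0, 25.0 / H)         # horizontal multiplier
    VM = 1.0 - 0.003 * abs(V - 75)  # vertical multiplier
    DM = 0.82 + 4.5 / D             # distance multiplier
    AM = 1.0 - 0.0032 * A           # asymmetry multiplier
    return LC * HM * VM * DM * AM * FM * CM

# Hypothetical task: load held 30 cm out, lifted from 40 cm to 100 cm, 30-degree twist.
limit = rwl_kg(H=30, V=40, D=60, A=30, FM=0.88, CM=0.95)
load = 12.0                          # actual load being lifted, kg (assumed)
print(f"RWL = {limit:.1f} kg, lifting index = {load / limit:.2f}")  # LI > 1 flags risk

An ergonomics package evaluates expressions like this automatically from the posture of the digital human model, flagging tasks whose lifting index exceeds 1.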
In finance, computer simulations are often used for scenario planning. Risk-adjusted net present value, for example, is computed from well-defined but not always known (or fixed) inputs. By imitating the performance of the project under evaluation, simulation can provide a distribution of NPV over a range of discount rates and other variables. Simulations are also often used to test a financial theory or the ability of a financial model. Simulations are frequently used in financial training to engage participants in experiencing various historical as well as fictional situations. There are stock market simulations, portfolio simulations, risk management simulations or models and forex simulations. Such simulations are typically based on stochastic asset models. Using these simulations in a training program allows for the application of theory to something akin to real life. As with other industries, the use of simulations can be technology or case-study driven. Flight Flight simulation is mainly used to train pilots outside of the aircraft. In comparison to training in flight, simulation-based training allows for practicing maneuvers or situations that may be impractical (or even dangerous) to perform in the aircraft, while keeping the pilot and instructor in a relatively low-risk environment on the ground. For example, electrical system failures, instrument failures, hydraulic system failures, and even flight control failures can be simulated without risk to the crew or equipment. Instructors can also provide students with a higher concentration of training tasks in a given period of time than is usually possible in the aircraft. For example, conducting multiple instrument approaches in the actual aircraft may require significant time spent repositioning the aircraft, while in a simulation, as soon as one approach has been completed, the instructor can immediately reposition the simulated aircraft to a location from which the next approach can begin. Flight simulation also provides an economic advantage over training in an actual aircraft. Once fuel, maintenance, and insurance costs are taken into account, the operating costs of a flight simulation training device (FSTD) are usually substantially lower than the operating costs of the simulated aircraft. For some large transport category airplanes, the operating costs may be several times lower for the FSTD than the actual aircraft. Another advantage is reduced environmental impact, as simulators do not contribute directly to carbon or noise emissions.
Simulation
Wikipedia
446
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
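The finance passage above notes that simulation can turn uncertain inputs into a distribution of net present value. As a hedged sketch of that idea, the following Monte Carlo example draws random annual cash flows and discount rates and reports summary statistics of the resulting NPVs; the distributions and parameters are invented for illustration and do not represent any particular project.

# Monte Carlo NPV sketch (all distributions and parameters are illustrative assumptions).
import random
import statistics

def npv(rate, cash_flows):
    # cash_flows[0] is the time-0 outlay (negative); later entries are yearly inflows.
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(7)
results = []
for _ in range(10_000):
    rate = random.uniform(0.06, 0.12)                                  # uncertain discount rate
    flows = [-1000.0] + [random.gauss(280.0, 60.0) for _ in range(5)]  # five uncertain years
    results.append(npv(rate, flows))

results.sort()
print(f"mean NPV = {statistics.mean(results):.0f}")
print(f"5th-95th percentile = {results[int(0.05 * len(results))]:.0f} "
      f"to {results[int(0.95 * len(results))]:.0f}")
print(f"P(NPV < 0) = {sum(r < 0 for r in results) / len(results):.1%}")

The percentile spread and the probability of a negative NPV are exactly the kind of scenario-planning outputs the passage describes, as opposed to a single point estimate.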
There also exist "engineering flight simulators", which are a key element of the aircraft design process. Many of the benefits that come from a lower number of test flights, such as cost and safety improvements, are described above, but there are some unique advantages as well: having a simulator available allows for a faster design iteration cycle and for using more test equipment than could be fitted into a real aircraft. Marine Bearing resemblance to flight simulators, a marine simulator is meant for training of ship personnel. The most common marine simulators include ship's bridge simulators, engine room simulators, cargo handling simulators, communication / GMDSS simulators, and ROV simulators. Simulators like these are mostly used within maritime colleges, training institutions, and navies. They often consist of a replication of a ship's bridge, with the operating console(s), and a number of screens on which the virtual surroundings are projected. Military Military simulations, also known informally as war games, are models in which theories of warfare can be tested and refined without the need for actual hostilities. They exist in many different forms, with varying degrees of realism. In recent times, their scope has widened to include not only military but also political and social factors (for example, the Nationlab series of strategic exercises in Latin America). While many governments make use of simulation, both individually and collaboratively, little is known about the models' specifics outside professional circles. Network and distributed systems Network and distributed systems have been extensively simulated in order to understand the impact of new protocols and algorithms before their deployment in the actual systems. The simulation can focus on different levels (physical layer, network layer, application layer) and evaluate different metrics (network bandwidth, resource consumption, service time, dropped packets, system availability). Examples of simulation scenarios for network and distributed systems are content delivery networks, smart cities, and the Internet of things. Payment and securities settlement system Simulation techniques have also been applied to payment and securities settlement systems. Among the main users are central banks, who are generally responsible for the oversight of market infrastructure and entitled to contribute to the smooth functioning of the payment systems. Central banks have been using payment system simulations to evaluate things such as the adequacy or sufficiency of the liquidity available (in the form of account balances and intraday credit limits) to participants (mainly banks) to allow efficient settlement of payments. The need for liquidity is also dependent on the availability and the type of netting procedures in the systems, thus some of the studies have a focus on system comparisons.
Simulation
Wikipedia
504
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
Another application is to evaluate risks related to events such as communication network breakdowns or the inability of participants to send payments (e.g. in the case of a possible bank failure). This kind of analysis falls under the concepts of stress testing or scenario analysis. A common way to conduct these simulations is to replicate the settlement logic of the real payment or securities settlement system under analysis and then use real observed payment data. In the case of system comparison or system development, naturally, the other settlement logics also need to be implemented. To perform stress testing and scenario analysis, the observed data needs to be altered, e.g. some payments delayed or removed. To analyze the levels of liquidity, initial liquidity levels are varied. System comparisons (benchmarking) or evaluations of new netting algorithms or rules are performed by running simulations with a fixed set of data and varying only the system setups. Inference is usually done by comparing the benchmark simulation results to the results of altered simulation setups, using indicators such as unsettled transactions or settlement delays. Power systems Project management Project management simulation is simulation used for project management training and analysis. It is often used as a training simulation for project managers. In other cases, it is used for what-if analysis and for supporting decision-making in real projects. Frequently the simulation is conducted using software tools. Robotics A robotics simulator is used to create embedded applications for a specific (or not) robot without being dependent on the 'real' robot. In some cases, these applications can be transferred to the real robot (or rebuilt) without modifications. Robotics simulators allow reproducing situations that cannot be 'created' in the real world because of cost, time, or the 'uniqueness' of a resource. A simulator also allows fast robot prototyping. Many robot simulators feature physics engines to simulate a robot's dynamics. Production Simulation of production systems is used mainly to examine the effect of improvements or investments in a production system. Most often this is done using a static spreadsheet with process times and transportation times. For more sophisticated simulations Discrete Event Simulation (DES) is used, with the advantage of being able to simulate dynamics in the production system. A production system is highly dynamic, depending on variations in manufacturing processes, assembly times, machine set-ups, breaks, breakdowns and small stoppages. There is much software commonly used for discrete event simulation; packages differ in usability and markets but often share the same foundation. Sales process
Simulation
Wikipedia
504
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
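The settlement-system passage above describes replicating settlement logic, varying initial liquidity, and comparing indicators such as unsettled payments. The sketch below applies a deliberately simplified gross-settlement rule (a payment settles only if the sender has sufficient balance, otherwise it queues and is retried) to an invented payment list, and reports the unsettled value for two liquidity levels. Both the data and the settlement rule are illustrative assumptions, not any real system's logic.

# Toy RTGS-style settlement replay (invented data; real studies use observed payment logs).
def settle(payments, initial_liquidity):
    balance = dict(initial_liquidity)        # bank -> available liquidity
    queue = list(payments)                   # (sender, receiver, amount) in arrival order
    for _ in range(len(payments)):           # retry the queue a bounded number of times
        still_queued = []
        for sender, receiver, amount in queue:
            if balance[sender] >= amount:    # settle only with sufficient funds
                balance[sender] -= amount
                balance[receiver] += amount
            else:
                still_queued.append((sender, receiver, amount))
        queue = still_queued
    return sum(amount for _, _, amount in queue)   # value left unsettled

payments = [("A", "B", 60), ("B", "C", 50), ("C", "A", 40), ("A", "C", 30)]
for liquidity in (20, 60):
    unsettled = settle(payments, {"A": liquidity, "B": liquidity, "C": liquidity})
    print(f"initial liquidity {liquidity} per bank -> unsettled value {unsettled}")

Varying the initial liquidity and comparing the unsettled value across runs mirrors, in miniature, the benchmarking and stress-testing procedure the text describes.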
Simulations are useful in modeling the flow of transactions through business processes, such as in the field of sales process engineering, to study and improve the flow of customer orders through various stages of completion (say, from an initial proposal for providing goods/services through order acceptance and installation). Such simulations can help predict how improvements in methods might affect variability, cost, labor time, and the number of transactions at various stages in the process. A full-featured computerized process simulator can be used to depict such models, as can simpler educational demonstrations using spreadsheet software, pennies being transferred between cups based on the roll of a die, or dipping into a tub of colored beads with a scoop. Sports In sports, computer simulations are often done to predict the outcome of events and the performance of individual sportspeople. They attempt to recreate the event through models built from statistics. Advances in technology have allowed anyone with knowledge of programming to run simulations of their models. The simulations are built from a series of mathematical algorithms, or models, and can vary in accuracy. Accuscore, which is licensed by companies such as ESPN, is a well-known simulation program for all major sports. It offers a detailed analysis of games through simulated betting lines, projected point totals and overall probabilities. With the increased interest in fantasy sports, simulation models that predict individual player performance have gained popularity. Companies like What If Sports and StatFox specialize not only in using their simulations for predicting game results, but also in predicting how well individual players will do. Many people use models to determine whom to start in their fantasy leagues. Another way simulations are helping the sports field is in the use of biomechanics. Models are derived, and simulations are run, from data received from sensors attached to athletes and from video equipment. Sports biomechanics aided by simulation models answers questions regarding training techniques, such as the effect of fatigue on throwing performance (height of throw) and biomechanical factors of the upper limbs (reactive strength index; hand contact time). Computer simulations allow their users to take models which before were too complex to run and obtain answers from them. Simulations have proven to provide some of the best insights into both play performance and team predictability. Space shuttle countdown
Simulation
Wikipedia
456
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
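To show the flavour of the statistical game simulations described above, here is a hedged sketch that models each team's score as a Poisson draw around its historical scoring average and estimates outcome probabilities by repeated simulation. The averages are invented placeholders; real services such as Accuscore use far richer player- and situation-level models.

# Monte Carlo game-outcome sketch (team averages are invented placeholders).
import math
import random

def poisson(lam):
    # Knuth's method for a Poisson draw; adequate for small means.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def simulate_match(avg_home, avg_away, n=20_000):
    home_wins = draws = 0
    for _ in range(n):
        h, a = poisson(avg_home), poisson(avg_away)
        if h > a:
            home_wins += 1
        elif h == a:
            draws += 1
    return home_wins / n, draws / n, 1 - home_wins / n - draws / n

random.seed(3)
p_home, p_draw, p_away = simulate_match(avg_home=1.8, avg_away=1.1)
print(f"home win {p_home:.1%}, draw {p_draw:.1%}, away win {p_away:.1%}")

Repeating the draw many times and counting outcomes is the same logic that underlies the simulated betting lines and projected totals mentioned in the text, just with a deliberately tiny model.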
Simulation was used at Kennedy Space Center (KSC) to train and certify Space Shuttle engineers during simulated launch countdown operations. The Space Shuttle engineering community would participate in a launch countdown integrated simulation before each Shuttle flight. This simulation is a virtual simulation where real people interact with simulated Space Shuttle vehicle and Ground Support Equipment (GSE) hardware. The Shuttle Final Countdown Phase Simulation, also known as S0044, involved countdown processes that would integrate many of the Space Shuttle vehicle and GSE systems. Some of the Shuttle systems integrated in the simulation are the main propulsion system, RS-25, solid rocket boosters, ground liquid hydrogen and liquid oxygen, external tank, flight controls, navigation, and avionics. The high-level objectives of the Shuttle Final Countdown Phase Simulation are: To demonstrate firing room final countdown phase operations. To provide training for system engineers in recognizing, reporting and evaluating system problems in a time critical environment. To exercise the launch team's ability to evaluate, prioritize and respond to problems in an integrated manner within a time critical environment. To provide procedures to be used in performing failure/recovery testing of the operations performed in the final countdown phase. The Shuttle Final Countdown Phase Simulation took place at the Kennedy Space Center Launch Control Center firing rooms. The firing room used during the simulation is the same control room where real launch countdown operations are executed. As a result, equipment used for real launch countdown operations is engaged. Command and control computers, application software, engineering plotting and trending tools, launch countdown procedure documents, launch commit criteria documents, hardware requirement documents, and any other items used by the engineering launch countdown teams during real launch countdown operations are used during the simulation. The Space Shuttle vehicle hardware and related GSE hardware is simulated by mathematical models (written in Shuttle Ground Operations Simulator (SGOS) modeling language) that behave and react like real hardware. During the Shuttle Final Countdown Phase Simulation, engineers command and control hardware via real application software executing in the control consoles – just as if they were commanding real vehicle hardware. However, these real software applications do not interface with real Shuttle hardware during simulations. Instead, the applications interface with mathematical model representations of the vehicle and GSE hardware. Consequently, the simulations bypass sensitive and even dangerous mechanisms while providing engineering measurements detailing how the hardware would have reacted. Since these math models interact with the command and control application software, models and simulations are also used to debug and verify the functionality of application software. Satellite navigation
Simulation
Wikipedia
499
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
The only true way to test GNSS receivers (commonly known as Sat-Navs in the commercial world) is by using an RF constellation simulator. A receiver that may, for example, be used on an aircraft can be tested under dynamic conditions without the need to take it on a real flight. The test conditions can be repeated exactly, and there is full control over all the test parameters; this is not possible in the 'real world' using the actual signals. For testing receivers that will use the new Galileo satellite navigation system there is no alternative, as the real signals do not yet exist. Trains Weather Predicting weather conditions by extrapolating/interpolating previous data is one of the real uses of simulation. Most weather forecasts use this kind of information, published by weather bureaus. Such simulations help in predicting and forewarning about extreme weather conditions, such as the path of an active hurricane/cyclone. Numerical weather prediction for forecasting involves complicated numeric computer models that predict weather accurately by taking many parameters into account. Simulation games Strategy games—both traditional and modern—may be viewed as simulations of abstracted decision-making for the purpose of training military and political leaders (see History of Go for an example of such a tradition, or Kriegsspiel for a more recent example). Many other video games are simulators of some kind. Such games can simulate various aspects of reality, from business, to government, to construction, to piloting vehicles (see above). Historical usage Historically, the word had negative connotations; however, the connection between simulation and dissembling later faded out and is now only of linguistic interest.
Simulation
Wikipedia
340
43444
https://en.wikipedia.org/wiki/Simulation
Technology
General
null
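Numerical weather prediction, mentioned above, steps model equations forward in time on a grid. As a drastically simplified, hedged illustration of that idea, the sketch below advances a one-dimensional "temperature" field with a first-order upwind advection scheme; the grid size, wind speed and time step are arbitrary example values chosen to satisfy the stability (CFL) condition, and no real forecast model is this simple.

# 1-D upwind advection toy: a stand-in for the numerical time-stepping used in NWP models.
N = 50                       # number of grid cells (assumed)
dx = 10.0                    # grid spacing, km (assumed)
wind = 20.0                  # constant wind speed, km/h (assumed)
dt = 0.25                    # time step, hours; wind*dt/dx = 0.5 satisfies the CFL condition

# Initial field: a warm "bubble" in the middle of the domain.
temp = [10.0] * N
for i in range(20, 30):
    temp[i] = 25.0

c = wind * dt / dx           # Courant number
for step in range(40):       # advance 10 hours
    new = temp[:]
    for i in range(N):
        upwind = temp[i - 1]             # periodic boundary: index -1 wraps around
        new[i] = temp[i] - c * (temp[i] - upwind)
    temp = new

peak = max(range(N), key=lambda i: temp[i])
print(f"warm anomaly peak has moved to cell {peak} after 10 hours")

Operational models solve far richer equations in three dimensions with data assimilation, but the basic pattern of discretizing the atmosphere and stepping it forward in time is the same.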
Operations research (U.S. Air Force Specialty Code: Operations Analysis), often shortened to the initialism OR, is a branch of applied mathematics that deals with the development and application of analytical methods to improve decision-making. Although the term management science is sometimes used similarly, the two fields differ in their scope and emphasis. Employing techniques from other mathematical sciences, such as modeling, statistics, and optimization, operations research arrives at optimal or near-optimal solutions to decision-making problems. Because of its emphasis on practical applications, operations research has overlapped with many other disciplines, notably industrial engineering. Operations research is often concerned with determining the extreme values of some real-world objective: the maximum (of profit, performance, or yield) or minimum (of loss, risk, or cost). Originating in military efforts before World War II, its techniques have grown to concern problems in a variety of industries. Overview Operations research (OR) encompasses the development and the use of a wide range of problem-solving techniques and methods applied in the pursuit of improved decision-making and efficiency, such as simulation, mathematical optimization, queueing theory and other stochastic-process models, Markov decision processes, econometric methods, data envelopment analysis, ordinal priority approach, neural networks, expert systems, decision analysis, and the analytic hierarchy process. Nearly all of these techniques involve the construction of mathematical models that attempt to describe the system. Because of the computational and statistical nature of most of these fields, OR also has strong ties to computer science and analytics. Operational researchers faced with a new problem must determine which of these techniques are most appropriate given the nature of the system, the goals for improvement, and constraints on time and computing power, or develop a new technique specific to the problem at hand (and, afterwards, to that type of problem). The major sub-disciplines in modern operational research, as identified by the journal Operations Research and The Journal of the Operational Research Society, include (but are not limited to): computing and information technologies; financial engineering; manufacturing, service sciences, and supply chain management; policy modeling and public sector work; revenue management; simulation; stochastic models; and transportation theory. Techniques drawn upon include game theory, linear programming, nonlinear programming, integer programming (notably 0-1 integer linear programming, which is NP-complete in general), dynamic programming (applied in fields such as aerospace engineering and economics), information theory (used in cryptography and quantum computing), and quadratic programming (for problems with quadratic objective functions).
Operations research
Wikipedia
494
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
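Linear programming is listed above among the core OR techniques. As a small, hedged example of the kind of resource-allocation problem it solves, the sketch below maximizes profit over two products subject to machine-hour and labour-hour limits using scipy.optimize.linprog (which minimizes, so the objective is negated). The coefficients are invented for illustration.

# Tiny product-mix LP (invented coefficients), solved with SciPy's linprog.
from scipy.optimize import linprog

# Maximize 40*x1 + 30*x2  ->  minimize -(40*x1 + 30*x2)
c = [-40.0, -30.0]

# Constraints: 2*x1 + 1*x2 <= 100 (machine hours), 1*x1 + 2*x2 <= 80 (labour hours)
A_ub = [[2.0, 1.0],
        [1.0, 2.0]]
b_ub = [100.0, 80.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
x1, x2 = res.x
print(f"optimal mix: x1 = {x1:.1f}, x2 = {x2:.1f}, profit = {-res.fun:.1f}")

For this invented instance the optimum lies where both constraints bind (x1 = 40, x2 = 20, profit 2200), illustrating how an LP finds the extreme value of an objective under resource limits.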
History In the decades after the two world wars, the tools of operations research were more widely applied to problems in business, industry, and society. Since that time, operational research has expanded into a field widely used in industries ranging from petrochemicals to airlines, finance, logistics, and government, moving to a focus on the development of mathematical models that can be used to analyse and optimize sometimes complex systems, and has become an area of active academic and industrial research. Historical origins In the 17th century, mathematicians Blaise Pascal and Christiaan Huygens solved problems involving sometimes complex decisions (the problem of points) by using game-theoretic ideas and expected values; others, such as Pierre de Fermat and Jacob Bernoulli, solved these types of problems using combinatorial reasoning instead. Charles Babbage's research into the cost of transportation and sorting of mail led to England's universal "Penny Post" in 1840, and to studies into the dynamical behaviour of railway vehicles in defence of the GWR's broad gauge. In the early 20th century, the study of inventory management, with the economic order quantity developed by Ford W. Harris in 1913, could be considered the origin of modern operations research. Operational research may have originated in the efforts of military planners during World War I (convoy theory and Lanchester's laws). Percy Bridgman brought operational research to bear on problems in physics in the 1920s and would later attempt to extend these methods to the social sciences. Modern operational research originated at the Bawdsey Research Station in the UK in 1937 as the result of an initiative of the station's superintendent, A. P. Rowe, and Robert Watson-Watt. Rowe conceived the idea as a means to analyse and improve the working of the UK's early-warning radar system, code-named "Chain Home" (CH). Initially, Rowe analysed the operation of the radar equipment and its communication networks, expanding later to include the operating personnel's behaviour. This revealed unappreciated limitations of the CH network and allowed remedial action to be taken.
Operations research
Wikipedia
427
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Scientists in the United Kingdom (including Patrick Blackett (later Lord Blackett OM PRS), Cecil Gordon, Solly Zuckerman (later Baron Zuckerman OM, KCB, FRS), C. H. Waddington, Owen Wansbrough-Jones, Frank Yates, Jacob Bronowski and Freeman Dyson), and in the United States (George Dantzig), looked for ways to make better decisions in such areas as logistics and training schedules. Second World War The modern field of operational research arose during World War II. In the World War II era, operational research was defined as "a scientific method of providing executive departments with a quantitative basis for decisions regarding the operations under their control". Other names for it included operational analysis (UK Ministry of Defence from 1962) and quantitative management. During the Second World War close to 1,000 men and women in Britain were engaged in operational research. About 200 operational research scientists worked for the British Army. Patrick Blackett worked for several different organizations during the war. Early in the war, while working for the Royal Aircraft Establishment (RAE), he set up a team known as the "Circus" which helped to reduce the number of anti-aircraft artillery rounds needed to shoot down an enemy aircraft from an average of over 20,000 at the start of the Battle of Britain to 4,000 in 1941. Blackett then moved from the RAE to the Navy, working first with RAF Coastal Command in 1941 and then, early in 1942, with the Admiralty. Blackett's team at Coastal Command's Operational Research Section (CC-ORS) included two future Nobel prize winners and many other people who went on to be pre-eminent in their fields. They undertook a number of crucial analyses that aided the war effort. Britain introduced the convoy system to reduce shipping losses, but while the principle of using warships to accompany merchant ships was generally accepted, it was unclear whether it was better for convoys to be small or large. Convoys travel at the speed of the slowest member, so small convoys can travel faster. It was also argued that small convoys would be harder for German U-boats to detect. On the other hand, large convoys could deploy more warships against an attacker. Blackett's staff showed that the losses suffered by convoys depended largely on the number of escort vessels present, rather than on the size of the convoy. Their conclusion was that a few large convoys are more defensible than many small ones.
Operations research
Wikipedia
503
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
While performing an analysis of the methods used by RAF Coastal Command to hunt and destroy submarines, one of the analysts asked what colour the aircraft were. As most of them were from Bomber Command they were painted black for night-time operations. At the suggestion of CC-ORS a test was run to see if that was the best colour to camouflage the aircraft for daytime operations in the grey North Atlantic skies. Tests showed that aircraft painted white were on average not spotted until they were 20% closer than those painted black. This change indicated that 30% more submarines would be attacked and sunk for the same number of sightings. As a result of these findings Coastal Command changed their aircraft to using white undersurfaces. Other work by the CC-ORS indicated that on average if the trigger depth of aerial-delivered depth charges were changed from 100 to 25 feet, the kill ratios would go up. The reason was that if a U-boat saw an aircraft only shortly before it arrived over the target then at 100 feet the charges would do no damage (because the U-boat wouldn't have had time to descend as far as 100 feet), and if it saw the aircraft a long way from the target it had time to alter course under water so the chances of it being within the 20-foot kill zone of the charges was small. It was more efficient to attack those submarines close to the surface when the targets' locations were better known than to attempt their destruction at greater depths when their positions could only be guessed. Before the change of settings from 100 to 25 feet, 1% of submerged U-boats were sunk and 14% damaged. After the change, 7% were sunk and 11% damaged; if submarines were caught on the surface but had time to submerge just before being attacked, the numbers rose to 11% sunk and 15% damaged. Blackett observed "there can be few cases where such a great operational gain had been obtained by such a small and simple change of tactics".
Operations research
Wikipedia
404
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Bomber Command's Operational Research Section (BC-ORS) analyzed a report of a survey carried out by RAF Bomber Command. For the survey, Bomber Command inspected all bombers returning from bombing raids over Germany over a particular period. All damage inflicted by German air defenses was noted, and the recommendation was given that armor be added in the most heavily damaged areas. This recommendation was not adopted because the fact that the aircraft were able to return with these areas damaged indicated the areas were not vital, and adding armor to non-vital areas where damage is acceptable reduces aircraft performance. Their suggestion to remove some of the crew, so that an aircraft loss would result in fewer personnel losses, was also rejected by RAF command. Blackett's team made the logical recommendation that the armor be placed in the areas which were completely untouched by damage in the bombers that returned. They reasoned that the survey was biased, since it only included aircraft that returned to Britain. The areas untouched in returning aircraft were probably vital areas, which, if hit, would result in the loss of the aircraft. This story has been disputed, with a similar damage assessment study completed in the US by the Statistical Research Group at Columbia University, the result of work done by Abraham Wald. When Germany organized its air defences into the Kammhuber Line, the British realized that if RAF bombers flew in a bomber stream they could overwhelm the German night fighters, who flew in individual cells directed to their targets by ground controllers. It was then a matter of calculating the statistical loss from collisions against the statistical loss from night fighters to calculate how close the bombers should fly to minimize RAF losses. The "exchange rate" ratio of output to input was a characteristic feature of operational research. By comparing the number of flying hours put in by Allied aircraft to the number of U-boat sightings in a given area, it was possible to redistribute aircraft to more productive patrol areas. Comparison of exchange rates established "effectiveness ratios" useful in planning. The ratio of 60 mines laid per ship sunk was common to several campaigns: German mines in British ports, British mines on German routes, and United States mines on Japanese routes.
Operations research
Wikipedia
441
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Operational research doubled the on-target bomb rate of B-29s bombing Japan from the Mariana Islands by increasing the training ratio from 4 to 10 percent of flying hours; revealed that wolf-packs of three United States submarines were the most effective size for enabling all members of the pack to engage targets discovered on their individual patrol stations; and revealed that glossy enamel paint was more effective camouflage for night fighters than a conventional dull camouflage finish, and that a smooth paint finish increased airspeed by reducing skin friction. On land, the operational research sections of the Army Operational Research Group (AORG) of the Ministry of Supply (MoS) were landed in Normandy in 1944, and they followed British forces in the advance across Europe. They analyzed, among other topics, the effectiveness of artillery, aerial bombing and anti-tank shooting. After World War II In 1947, under the auspices of the British Association, a symposium was organized in Dundee. In his opening address, Watson-Watt offered a definition of the aims of OR: "To examine quantitatively whether the user organization is getting from the operation of its equipment the best attainable contribution to its overall objective." With expanded techniques and growing awareness of the field at the close of the war, operational research was no longer limited to operational questions, but was extended to encompass equipment procurement, training, logistics and infrastructure. Operations research also grew in many areas other than the military once scientists learned to apply its principles to the civilian sector. The simplex algorithm for linear programming was developed in 1947. In the 1950s, the term Operations Research was used to describe heterogeneous mathematical methods such as game theory, dynamic programming, linear programming, warehousing, spare parts theory, queueing theory, simulation and production control, which were used primarily in civilian industry. Scientific societies and journals on the subject of operations research were founded in the 1950s, such as the Operations Research Society of America (ORSA) in 1952 and the Institute of Management Sciences (TIMS) in 1953. Philip Morse, the head of the Weapons Systems Evaluation Group of the Pentagon, became the first president of ORSA and attracted the companies of the military-industrial complex to ORSA, which soon had more than 500 members. In the 1960s, ORSA reached 8,000 members. Consulting companies also founded OR groups. In 1953, Abraham Charnes and William Cooper published the first textbook on linear programming.
Operations research
Wikipedia
482
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
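The simplex algorithm and linear programming mentioned in the entry above can be illustrated with a small, self-contained sketch. The production-planning numbers below are invented for the example, and SciPy's linprog routine is used as the solver (its default method is the HiGHS solver rather than the classical simplex method, but it solves the same kind of problem).

# Minimal linear-programming sketch with hypothetical production-planning data.
# Maximize profit 3x + 5y subject to capacity limits; linprog minimizes, so negate the objective.
from scipy.optimize import linprog

c = [-3.0, -5.0]                      # negated profit per unit of products x and y
A_ub = [[1.0, 0.0],                   # plant-1 hours used per unit of x
        [0.0, 2.0],                   # plant-2 hours used per unit of y
        [3.0, 2.0]]                   # plant-3 hours used per unit of x and y
b_ub = [4.0, 12.0, 18.0]              # hours available at each plant

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                # optimal plan (2, 6) with profit 36

The same kind of model, scaled up to many thousands of variables and constraints, is what the period's textbooks and solvers formalized.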
In the 1950s and 1960s, chairs of operations research were established in the U.S. and United Kingdom (from 1964 in Lancaster) in the management faculties of universities. Further influences from the U.S. on the development of operations research in Western Europe can be traced here. The authoritative OR textbooks from the U.S. were published in Germany in German and in France in French (but not in Italian), such as George Dantzig's "Linear Programming" (1963) and "Introduction to Operations Research" (1957) by C. West Churchman et al. The latter was also published in Spanish in 1973, at the same time opening Operations Research to Latin American readers. NATO gave important impetus to the spread of Operations Research in Western Europe; NATO headquarters (SHAPE) organised four conferences on OR in the 1950s – the one in 1956 with 120 participants – bringing OR to mainland Europe. Within NATO, OR was also known as "Scientific Advisory" (SA) and was grouped together in the Advisory Group for Aeronautical Research and Development (AGARD). SHAPE and AGARD organized an OR conference in April 1957 in Paris. When France withdrew from the NATO military command structure, the transfer of NATO headquarters from France to Belgium led to the institutionalization of OR in Belgium, where Jacques Drèze founded CORE, the Center for Operations Research and Econometrics at the Catholic University of Leuven, in 1966. With the development of computers over the next three decades, Operations Research can now solve problems with hundreds of thousands of variables and constraints. Moreover, the large volumes of data required for such problems can be stored and manipulated very efficiently. Much of operations research (modernly known as 'analytics') relies upon stochastic variables and therefore on access to truly random numbers. Fortunately, the cybernetics field also required the same level of randomness. The development of increasingly better random number generators has been a boon to both disciplines. Modern applications of operations research include city planning, football strategies, emergency planning, optimizing all facets of industry and economy, and, in all likelihood, terrorist attack planning as well as counterterrorist attack planning. More recently, the research approach of operations research, which dates back to the 1950s, has been criticized for producing collections of mathematical models that lack an empirical basis of data collection for applications. How to collect data is not presented in the textbooks. Because of the lack of data, there are also no computer applications in the textbooks.
Operations research
Wikipedia
510
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Problems addressed
Critical path analysis or project planning: identifying those processes in a multiple-dependency project which affect the overall duration of the project
Floorplanning: designing the layout of equipment in a factory or components on a computer chip to reduce manufacturing time (therefore reducing cost)
Network optimization: for instance, setup of telecommunications or power system networks to maintain quality of service during outages
Resource allocation problems
Facility location
Assignment problems: the assignment problem, the generalized assignment problem, the quadratic assignment problem, and the weapon target assignment problem
Bayesian search theory: looking for a target
Optimal search
Routing, such as determining the routes of buses so that as few buses are needed as possible
Supply chain management: managing the flow of raw materials and products based on uncertain demand for the finished products
Project production activities: managing the flow of work activities in a capital project in response to system variability through operations research tools for variability reduction and buffer allocation using a combination of allocation of capacity, inventory and time
Efficient messaging and customer response tactics
Automation: automating or integrating robotic systems in human-driven operations processes
Globalization: globalizing operations processes in order to take advantage of cheaper materials, labor, land or other productivity inputs
Transportation: managing freight transportation and delivery systems (examples: LTL shipping, intermodal freight transport, the travelling salesman problem, the driver scheduling problem)
Scheduling: personnel staffing, manufacturing steps, project tasks, network data traffic (known as queueing models or queueing systems), and sports events and their television coverage
Blending of raw materials in oil refineries
Determining optimal prices, in many retail and B2B settings, within the disciplines of pricing science
Cutting stock problem: cutting small items out of bigger ones
Finding the optimal parameter (weights) setting of an algorithm that generates the realisation of a figured bass in Baroque compositions (classical music) by using weighted local cost and transition cost rules
Operational research is also used extensively in government where evidence-based policy is used.
Management science The field of management science (MS) refers to the use of operations research models in business; Stafford Beer characterized it this way in 1967. Like operational research itself, management science is an interdisciplinary branch of applied mathematics devoted to optimal decision planning, with strong links with economics, business, engineering, and other sciences. It uses various scientific research-based principles, strategies, and analytical methods including mathematical modeling, statistics and numerical algorithms to improve an organization's ability to enact rational and meaningful management decisions by arriving at optimal or near-optimal solutions to sometimes complex decision problems. Management scientists help businesses to achieve their goals using the scientific methods of operational research.
Operations research
Wikipedia
509
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
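Several of the problem classes listed in the entry above have off-the-shelf solvers in common scientific libraries. As one small illustration, the sketch below solves a tiny instance of the assignment problem with SciPy's implementation of the Hungarian algorithm; the cost matrix is made up for the example.

# Minimal assignment-problem sketch: assign 3 workers to 3 jobs at minimum total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],           # cost[i][j] = cost of worker i doing job j (invented data)
                 [2, 0, 5],
                 [3, 2, 2]])

workers, jobs = linear_sum_assignment(cost)                  # optimal worker-to-job pairing
print(list(zip(workers, jobs)), cost[workers, jobs].sum())   # pairs and minimum total cost (5)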
The management scientist's mandate is to use rational, systematic, science-based techniques to inform and improve decisions of all kinds. Of course, the techniques of management science are not restricted to business applications but may be applied to military, medical, public administration, charitable groups, political groups or community groups. Management science is concerned with developing and applying models and concepts that may prove useful in helping to illuminate management issues and solve managerial problems, as well as designing and developing new and better models of organizational excellence.
Related fields Some of the fields that have considerable overlap with Operations Research and Management Science include:
Artificial Intelligence
Business analytics
Computer science
Data mining/Data science/Big data
Decision analysis
Decision intelligence
Engineering
Financial engineering
Forecasting
Game theory
Geography/Geographic information science
Graph theory
Industrial engineering
Inventory control
Logistics
Mathematical modeling
Mathematical optimization
Probability and statistics
Project management
Policy analysis
Queueing theory
Simulation
Social network/Transportation forecasting models
Stochastic processes
Supply chain management
Systems engineering
Applications Applications are abundant, for example in airlines, manufacturing companies, service organizations, military branches, and government. The range of problems and issues to which it has contributed insights and solutions is vast. It includes:
Scheduling (of airlines, trains, buses etc.)
Assignment (assigning crew to flights, trains or buses; employees to projects; commitment and dispatch of power generation facilities)
Facility location (deciding the most appropriate location for new facilities such as warehouses, factories or fire stations)
Hydraulics & Piping Engineering (managing flow of water from reservoirs)
Health Services (information and supply chain management)
Game Theory (identifying, understanding and developing strategies adopted by companies)
Urban Design
Computer Network Engineering (packet routing; timing; analysis)
Telecom & Data Communication Engineering (packet routing; timing; analysis)
Management science is also concerned with so-called soft operational analysis, which covers methods for strategic planning, strategic decision support, and problem structuring. In dealing with these sorts of challenges, mathematical modeling and simulation may not be appropriate or may not suffice. Therefore, during the past 30 years, a number of non-quantified modeling methods have been developed. These include:
stakeholder-based approaches, including metagame analysis and drama theory
morphological analysis and various forms of influence diagrams
cognitive mapping
strategic choice
robustness analysis
Societies and journals
Operations research
Wikipedia
452
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Societies The International Federation of Operational Research Societies (IFORS) is an umbrella organization for operational research societies worldwide, representing approximately 50 national societies including those in the US, UK, France, Germany, Italy, Canada, Australia, New Zealand, Philippines, India, Japan and South Africa. The foundation of IFORS in 1960 was of decisive importance for the institutionalization of Operations Research, and it stimulated the foundation of national OR societies in Austria, Switzerland and Germany. IFORS has held important international conferences every three years since 1957. The constituent members of IFORS form regional groups, such as that in Europe, the Association of European Operational Research Societies (EURO). Other important operational research organizations are the Simulation Interoperability Standards Organization (SISO) and the Interservice/Industry Training, Simulation and Education Conference (I/ITSEC). In 2004, the US-based organization INFORMS began an initiative to market the OR profession better, including a website entitled The Science of Better, which provides an introduction to OR and examples of successful applications of OR to industrial problems. This initiative has been adopted by the Operational Research Society in the UK, including a website entitled Learn About OR. Journals of INFORMS The Institute for Operations Research and the Management Sciences (INFORMS) publishes thirteen scholarly journals about operations research, including the top two journals in their class, according to 2005 Journal Citation Reports. They are:
Decision Analysis
Information Systems Research
INFORMS Journal on Computing
INFORMS Transactions on Education (an open access journal)
Interfaces
Management Science
Manufacturing & Service Operations Management
Marketing Science
Mathematics of Operations Research
Operations Research
Organization Science
Service Science
Transportation Science
Operations research
Wikipedia
315
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
Other journals These are listed in alphabetical order of their titles. 4OR - A Quarterly Journal of Operations Research: jointly published by the Belgian, French and Italian Operations Research Societies (Springer); Decision Sciences: published by Wiley-Blackwell on behalf of the Decision Sciences Institute; European Journal of Operational Research (EJOR): founded in 1975 and presently by far the largest operational research journal in the world, with around 9,000 pages of published papers per year; in 2004, its total number of citations was the second largest amongst Operational Research and Management Science journals; INFOR Journal: published and sponsored by the Canadian Operational Research Society; Journal of Defense Modeling and Simulation (JDMS): Applications, Methodology, Technology: a quarterly journal devoted to advancing the science of modeling and simulation as it relates to the military and defense; Journal of the Operational Research Society (JORS): an official journal of The OR Society; this is the oldest continuously published journal of OR in the world, published by Taylor & Francis; Military Operations Research (MOR): published by the Military Operations Research Society; Omega - The International Journal of Management Science; Operations Research Letters; Opsearch: official journal of the Operational Research Society of India; OR Insight: a quarterly journal of The OR Society published by Palgrave; Pesquisa Operacional: the official journal of the Brazilian Operations Research Society; Production and Operations Management: the official journal of the Production and Operations Management Society; TOP: the official journal of the Spanish Statistics and Operations Research Society.
Operations research
Wikipedia
306
43476
https://en.wikipedia.org/wiki/Operations%20research
Mathematics
Other
null
In probability theory, a probability density function (PDF), density function, or density of an absolutely continuous random variable, is a function whose value at any given sample (or point) in the sample space (the set of possible values taken by the random variable) can be interpreted as providing a relative likelihood that the value of the random variable would be equal to that sample. Probability density is the probability per unit length, in other words, while the absolute likelihood for a continuous random variable to take on any particular value is 0 (since there is an infinite set of possible values to begin with), the value of the PDF at two different samples can be used to infer, in any particular draw of the random variable, how much more likely it is that the random variable would be close to one sample compared to the other sample. More precisely, the PDF is used to specify the probability of the random variable falling within a particular range of values, as opposed to taking on any one value. This probability is given by the integral of this variable's PDF over that range—that is, it is given by the area under the density function but above the horizontal axis and between the lowest and greatest values of the range. The probability density function is nonnegative everywhere, and the area under the entire curve is equal to 1. The terms probability distribution function and probability function have also sometimes been used to denote the probability density function. However, this use is not standard among probabilists and statisticians. In other sources, "probability distribution function" may be used when the probability distribution is defined as a function over general sets of values or it may refer to the cumulative distribution function, or it may be a probability mass function (PMF) rather than the density. "Density function" itself is also used for the probability mass function, leading to further confusion. In general though, the PMF is used in the context of discrete random variables (random variables that take values on a countable set), while the PDF is used in the context of continuous random variables. Example
Probability density function
Wikipedia
419
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
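As a concrete illustration of the ideas in the entry above, the sketch below uses the standard normal distribution from SciPy: the ratio of the density at two points says how much more likely a draw is to fall near one point than the other, while the probability of landing in a range is the integral of the density over that range (obtained here from the cumulative distribution function). The chosen points and interval are arbitrary.

# Density values are not probabilities, but their ratio compares nearby outcomes,
# and integrating the density over a range gives an actual probability.
from scipy.stats import norm

print(norm.pdf(0.0) / norm.pdf(2.0))    # ~7.4: a draw is far more likely to land near 0 than near 2
print(norm.cdf(1.0) - norm.cdf(0.0))    # ~0.3413: probability of falling in the interval [0, 1]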
Suppose bacteria of a certain species typically live 20 to 30 hours. The probability that a bacterium lives exactly 5 hours is equal to zero. A lot of bacteria live for approximately 5 hours, but there is no chance that any given bacterium dies at exactly 5.00... hours. However, the probability that the bacterium dies between 5 hours and 5.01 hours is quantifiable. Suppose the answer is 0.02 (i.e., 2%). Then, the probability that the bacterium dies between 5 hours and 5.001 hours should be about 0.002, since this time interval is one-tenth as long as the previous. The probability that the bacterium dies between 5 hours and 5.0001 hours should be about 0.0002, and so on. In this example, the ratio (probability of dying during an interval) / (duration of the interval) is approximately constant, and equal to 2 per hour (or 2 hour−1). For example, there is 0.02 probability of dying in the 0.01-hour interval between 5 and 5.01 hours, and (0.02 probability / 0.01 hours) = 2 hour−1. This quantity 2 hour−1 is called the probability density for dying at around 5 hours. Therefore, the probability that the bacterium dies at 5 hours can be written as (2 hour−1) dt. This is the probability that the bacterium dies within an infinitesimal window of time around 5 hours, where dt is the duration of this window. For example, the probability that it lives longer than 5 hours, but shorter than (5 hours + 1 nanosecond), is (2 hour−1)×(1 nanosecond) ≈ 6×10−13 (using the unit conversion 3.6×1012 nanoseconds = 1 hour). There is a probability density function f with f(5 hours) = 2 hour−1. The integral of f over any window of time (not only infinitesimal windows but also large windows) is the probability that the bacterium dies in that window. Absolutely continuous univariate distributions A probability density function is most commonly associated with absolutely continuous univariate distributions. A random variable X has density f_X, where f_X is a non-negative Lebesgue-integrable function, if: Pr[a ≤ X ≤ b] = ∫_a^b f_X(x) dx. Hence, if F_X is the cumulative distribution function of X, then: F_X(x) = ∫_−∞^x f_X(u) du, and (if f_X is continuous at x) f_X(x) = d/dx F_X(x). Intuitively, one can think of f_X(x) dx as being the probability of X falling within the infinitesimal interval [x, x + dx].
Probability density function
Wikipedia
500
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
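The approximation used in the bacteria example, P(t ≤ T ≤ t + δ) ≈ f(t)·δ for small δ, can be checked numerically. The lifetime density below is a hypothetical stand-in (an exponential distribution with a 10-hour mean), chosen only so that there is a concrete f to integrate; it is not the density of the example in the text.

# Check that the probability of dying in a short window is close to density * window length.
import numpy as np
from scipy.integrate import quad

rate = 0.1                                   # assumed: exponential lifetime with mean 10 hours
f = lambda t: rate * np.exp(-rate * t)       # density in units of 1/hour

t, delta = 5.0, 0.01
exact, _ = quad(f, t, t + delta)             # integral of the density over [5, 5.01] hours
approx = f(t) * delta                        # density at 5 hours times the window length
print(exact, approx)                         # both about 6.06e-4; nearly identical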
Formal definition (This definition may be extended to any probability distribution using the measure-theoretic definition of probability.) A random variable X with values in a measurable space (usually R^n with the Borel sets as measurable subsets) has as probability distribution the pushforward measure X∗P on that space: the density of X with respect to a reference measure μ is the Radon–Nikodym derivative f = dX∗P / dμ. That is, f is any measurable function with the property that: Pr[X ∈ A] = ∫_A f dμ for any measurable set A. Discussion In the continuous univariate case above, the reference measure is the Lebesgue measure. The probability mass function of a discrete random variable is the density with respect to the counting measure over the sample space (usually the set of integers, or some subset thereof). It is not possible to define a density with reference to an arbitrary measure (e.g. one can not choose the counting measure as a reference for a continuous random variable). Furthermore, when it does exist, the density is almost unique, meaning that any two such densities coincide almost everywhere. Further details Unlike a probability, a probability density function can take on values greater than one; for example, the continuous uniform distribution on the interval [0, 1/2] has probability density f(x) = 2 for 0 ≤ x ≤ 1/2 and f(x) = 0 elsewhere. The standard normal distribution has probability density f(x) = (1/√(2π)) e^(−x²/2). If a random variable X is given and its distribution admits a probability density function f, then the expected value of X (if the expected value exists) can be calculated as E[X] = ∫_−∞^∞ x f(x) dx. Not every probability distribution has a density function: the distributions of discrete random variables do not; nor does the Cantor distribution, even though it has no discrete component, i.e., does not assign positive probability to any individual point. A distribution has a density function if and only if its cumulative distribution function F is absolutely continuous. In this case: F is almost everywhere differentiable, and its derivative can be used as probability density: f(x) = dF(x)/dx. If a probability distribution admits a density, then the probability of every one-point set is zero; the same holds for finite and countable sets. Two probability densities f and g represent the same probability distribution precisely if they differ only on a set of Lebesgue measure zero. In the field of statistical physics, a non-formal reformulation of the relation above between the derivative of the cumulative distribution function and the probability density function is generally used as the definition of the probability density function. This alternate definition is the following: If dt is an infinitely small number, the probability that X is included within the interval (t, t + dt) is equal to f(t) dt, or: Pr(t < X < t + dt) = f(t) dt. Link between discrete and continuous distributions
Probability density function
Wikipedia
501
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
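Two of the points above can be verified directly for the uniform distribution on [0, 1/2], whose density is the constant 2 on that interval: the density exceeds 1 everywhere on its support yet still integrates to 1, and the expected value is the integral of x·f(x).

# The density can be greater than 1 as long as its total integral is 1;
# the mean follows from integrating x * f(x).
from scipy.integrate import quad

f = lambda x: 2.0 if 0.0 <= x <= 0.5 else 0.0    # uniform density on [0, 1/2]

area, _ = quad(f, 0.0, 0.5)                      # total probability
mean, _ = quad(lambda x: x * f(x), 0.0, 0.5)     # E[X]
print(area, mean)                                # 1.0 and 0.25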
It is possible to represent certain discrete random variables as well as random variables involving both a continuous and a discrete part with a generalized probability density function using the Dirac delta function. (This is not possible with a probability density function in the sense defined above; it may be done with a distribution.) For example, consider a binary discrete random variable having the Rademacher distribution—that is, taking −1 or 1 for values, with probability 1/2 each. The density of probability associated with this variable is: f(t) = (δ(t + 1) + δ(t − 1)) / 2. More generally, if a discrete variable can take n different values among real numbers, then the associated probability density function is: f(t) = Σ_{i=1}^{n} p_i δ(t − x_i), where x_1, …, x_n are the discrete values accessible to the variable and p_1, …, p_n are the probabilities associated with these values. This substantially unifies the treatment of discrete and continuous probability distributions. The above expression allows for determining statistical characteristics of such a discrete variable (such as the mean, variance, and kurtosis), starting from the formulas given for a continuous distribution of the probability. Families of densities It is common for probability density functions (and probability mass functions) to be parametrized—that is, to be characterized by unspecified parameters. For example, the normal distribution is parametrized in terms of the mean and the variance, denoted by μ and σ² respectively, giving the family of densities f(x; μ, σ²) = (1/(σ√(2π))) e^(−(x − μ)² / (2σ²)). Different values of the parameters describe different distributions of different random variables on the same sample space (the same set of all possible values of the variable); this sample space is the domain of the family of random variables that this family of distributions describes. A given set of parameters describes a single distribution within the family sharing the functional form of the density. From the perspective of a given distribution, the parameters are constants, and terms in a density function that contain only parameters, but not variables, are part of the normalization factor of a distribution (the multiplicative factor that ensures that the area under the density—the probability of something in the domain occurring—equals 1). This normalization factor is outside the kernel of the distribution. Since the parameters are constants, reparametrizing a density in terms of different parameters to give a characterization of a different random variable in the family means simply substituting the new parameter values into the formula in place of the old ones. Densities associated with multiple variables
Probability density function
Wikipedia
464
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
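The remark above that the delta-function representation lets the usual continuous-case formulas be reused for discrete variables amounts, in practice, to replacing integrals by sums over the point masses. The short sketch below does this for the Rademacher variable mentioned in the entry.

# For a discrete variable with values x_i and probabilities p_i, the delta-function density
# turns E[g(X)] = integral of g(x) f(x) dx into the sum of p_i * g(x_i).
values = [-1.0, 1.0]          # Rademacher distribution
probs = [0.5, 0.5]

mean = sum(p * x for p, x in zip(probs, values))
var = sum(p * (x - mean) ** 2 for p, x in zip(probs, values))
print(mean, var)              # 0.0 and 1.0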
For continuous random variables , it is also possible to define a probability density function associated to the set as a whole, often called joint probability density function. This density function is defined as a function of the variables, such that, for any domain in the -dimensional space of the values of the variables , the probability that a realisation of the set variables falls inside the domain is If is the cumulative distribution function of the vector , then the joint probability density function can be computed as a partial derivative Marginal densities For , let be the probability density function associated with variable alone. This is called the marginal density function, and can be deduced from the probability density associated with the random variables by integrating over all values of the other variables: Independence Continuous random variables admitting a joint density are all independent from each other if Corollary If the joint probability density function of a vector of random variables can be factored into a product of functions of one variable (where each is not necessarily a density) then the variables in the set are all independent from each other, and the marginal probability density function of each of them is given by Example This elementary example illustrates the above definition of multidimensional probability density functions in the simple case of a function of a set of two variables. Let us call a 2-dimensional random vector of coordinates : the probability to obtain in the quarter plane of positive and is Function of random variables and change of variables in the probability density function If the probability density function of a random variable (or vector) is given as , it is possible (but often not necessary; see below) to calculate the probability density function of some variable . This is also called a "change of variable" and is in practice used to generate a random variable of arbitrary shape using a known (for instance, uniform) random number generator. It is tempting to think that in order to find the expected value , one must first find the probability density of the new random variable . However, rather than computing one may find instead The values of the two integrals are the same in all cases in which both and actually have probability density functions. It is not necessary that be a one-to-one function. In some cases the latter integral is computed much more easily than the former. See Law of the unconscious statistician. Scalar to scalar Let be a monotonic function, then the resulting density function is Here denotes the inverse function. This follows from the fact that the probability contained in a differential area must be invariant under change of variables. That is, or
Probability density function
Wikipedia
511
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
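The change-of-variables rule for a monotonic transform described in the entry above, f_Y(y) = f_X(g⁻¹(y)) · |d g⁻¹(y)/dy|, can be checked on a standard textbook case: if U is uniform on (0, 1) and Y = −ln U, the rule yields the exponential density e^(−y). The particular transform is chosen only for convenience.

# Monotonic change of variables: U ~ Uniform(0, 1), Y = -ln(U), so g_inv(y) = exp(-y).
import numpy as np

f_U = lambda u: 1.0 if 0.0 < u < 1.0 else 0.0    # uniform density on (0, 1)
g_inv = lambda y: np.exp(-y)                     # inverse transform

for y in (0.5, 1.0, 2.0):
    density = f_U(g_inv(y)) * abs(-np.exp(-y))   # f_U(g_inv(y)) * |d g_inv / dy|
    print(y, density, np.exp(-y))                # matches the exponential density exp(-y)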
For functions that are not monotonic, the probability density function for is where is the number of solutions in for the equation , and are these solutions. Vector to vector Suppose is an -dimensional random variable with joint density . If , where is a bijective, differentiable function, then has density : with the differential regarded as the Jacobian of the inverse of , evaluated at . For example, in the 2-dimensional case , suppose the transform is given as , with inverses , . The joint distribution for y = (y1, y2) has density Vector to scalar Let be a differentiable function and be a random vector taking values in , be the probability density function of and be the Dirac delta function. It is possible to use the formulas above to determine , the probability density function of , which will be given by This result leads to the law of the unconscious statistician: Proof: Let be a collapsed random variable with probability density function (i.e., a constant equal to zero). Let the random vector and the transform be defined as It is clear that is a bijective mapping, and the Jacobian of is given by: which is an upper triangular matrix with ones on the main diagonal, therefore its determinant is 1. Applying the change of variable theorem from the previous section we obtain that which if marginalized over leads to the desired probability density function. Sums of independent random variables The probability density function of the sum of two independent random variables and , each of which has a probability density function, is the convolution of their separate density functions: It is possible to generalize the previous relation to a sum of N independent random variables, with densities : This can be derived from a two-way change of variables involving and , similarly to the example below for the quotient of independent random variables. Products and quotients of independent random variables Given two independent random variables and , each of which has a probability density function, the density of the product and quotient can be computed by a change of variables. Example: Quotient distribution To compute the quotient of two independent random variables and , define the following transformation: Then, the joint density can be computed by a change of variables from U,V to Y,Z, and can be derived by marginalizing out from the joint density. The inverse transformation is The absolute value of the Jacobian matrix determinant of this transformation is: Thus: And the distribution of can be computed by marginalizing out :
Probability density function
Wikipedia
512
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
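The statement above that the density of a sum of independent random variables is the convolution of their densities can be checked numerically: for two independent Uniform(0, 1) variables, the convolution of the two flat densities gives the triangular density on [0, 2]. The grid spacing below is arbitrary.

# Approximate the convolution of two Uniform(0,1) densities on a grid and compare
# with the exact triangular density of the sum.
import numpy as np

dx = 0.001
grid = np.arange(0.0, 1.0, dx)
f = np.ones_like(grid)                           # Uniform(0, 1) density sampled on the grid

conv = np.convolve(f, f) * dx                    # discrete approximation of the convolution
z = np.arange(len(conv)) * dx
triangle = np.where(z <= 1.0, z, 2.0 - z)        # exact density of the sum
print(np.max(np.abs(conv - triangle)))           # discretization error on the order of 1e-3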
This method crucially requires that the transformation from U,V to Y,Z be bijective. The above transformation meets this because can be mapped directly back to , and for a given the quotient is monotonic. This is similarly the case for the sum , difference and product . Exactly the same method can be used to compute the distribution of other functions of multiple independent random variables. Example: Quotient of two standard normals Given two standard normal variables and , the quotient can be computed as follows. First, the variables have the following density functions: We transform as described above: This leads to: This is the density of a standard Cauchy distribution.
Probability density function
Wikipedia
138
43487
https://en.wikipedia.org/wiki/Probability%20density%20function
Mathematics
Statistics and probability
null
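The conclusion above, that the quotient of two independent standard normal variables follows a standard Cauchy distribution, is easy to sanity-check by simulation: the empirical distribution of simulated ratios should match the Cauchy cumulative distribution function.

# Simulation check: the ratio of two independent standard normals is standard Cauchy.
import numpy as np
from scipy.stats import cauchy

rng = np.random.default_rng(0)
ratio = rng.standard_normal(200_000) / rng.standard_normal(200_000)

for q in (-1.0, 0.0, 1.0):
    print(q, np.mean(ratio <= q), cauchy.cdf(q))   # empirical vs exact CDF: ~0.25, 0.50, 0.75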
A windmill is a structure that converts wind power into rotational energy using vanes called sails or blades, by tradition specifically to mill grain (gristmills), but in some parts of the English-speaking world, the term has also been extended to encompass windpumps, wind turbines, and other applications. The term wind engine is also sometimes used to describe such devices. Windmills were used throughout the high medieval and early modern periods; the horizontal or panemone windmill first appeared in Persia during the 9th century, and the vertical windmill first appeared in northwestern Europe in the 12th century. Regarded as an icon of Dutch culture, there are approximately 1,000 windmills in the Netherlands today. Forerunners Wind-powered machines may have been known earlier, but there is no clear evidence of windmills before the 9th century. Hero of Alexandria (Heron) in first-century Roman Egypt described what appears to be a wind-driven wheel to power a machine. His description of a wind-powered organ is not a practical windmill but was either an early wind-powered toy or a design concept for a wind-powered machine that may or may not have been a working device, as there is ambiguity in the text and issues with the design. Another early example of a wind-driven wheel was the prayer wheel, which is believed to have been first used in Tibet and China, though there is uncertainty over the date of its first appearance, which could have been either , the 7th century, or after the 9th century. One of the earliest recorded working windmill designs found was invented sometime around 700–900 AD in Persia. This design was the panemone, with vertical lightweight wooden sails attached by horizontal struts to a central vertical shaft. It was first built to pump water and subsequently modified to grind grain as well. Horizontal windmills
Windmill
Wikipedia
368
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
The first practical windmills were panemone windmills, using sails that rotated in a horizontal plane, around a vertical axis. Made of six to 12 sails covered in reed matting or cloth material, these windmills were used to grind grain or draw up water. A medieval account reports that windmill technology was used in Persia and the Middle East during the reign of Rashidun caliph Umar ibn al-Khattab (), based on the caliph's conversation with a Persian builder slave. The authenticity of part of the anecdote involving the caliph Umar is questioned because it was recorded only in the 10th century. The Persian geographer Estakhri reported windmills being operated in Khorasan (Eastern Iran and Western Afghanistan) already in the 9th century. Such windmills were in widespread use across the Middle East and Central Asia and later spread to Europe, China, and India from there. By the 11th century, the vertical-axle windmill had reached parts of Southern Europe, including the Iberian Peninsula (via Al-Andalus) and the Aegean Sea (in the Balkans). A similar type of horizontal windmill with rectangular blades, used for irrigation, can also be found in thirteenth-century China (during the Jurchen Jin dynasty in the north), introduced by the travels of Yelü Chucai to Turkestan in 1219. Vertical-axle windmills were built, in small numbers, in Europe during the 18th and nineteenth centuries, for example Fowler's Mill at Battersea in London, and Hooper's Mill at Margate in Kent. These early modern examples seem not to have been directly influenced by the vertical-axle windmills of the medieval period, but to have been independent inventions by 18th-century engineers. Vertical windmills The horizontal-axis or vertical windmill (so called due to the plane of the movement of its sails) is a development of the 12th century, first used in northwestern Europe, in the triangle of northern France, eastern England and Flanders. It is unclear whether the vertical windmill was influenced by the introduction of the horizontal windmill from Persia-Middle East to Southern Europe in the preceding century.
Windmill
Wikipedia
430
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
The earliest certain reference to a windmill in Northern Europe (assumed to have been of the vertical type) dates from 1185, in the former village of Weedley in Yorkshire which was located at the southern tip of the Wold overlooking the Humber Estuary. Several earlier, but less certainly dated, 12th-century European sources referring to windmills have also been found. These earliest mills were used to grind cereals. Post mill The evidence at present is that the earliest type of European windmill was the post mill, so named because of the large upright post on which the mill's main structure (the "body" or "buck") is balanced. By mounting the body this way, the mill can rotate to face the wind direction; an essential requirement for windmills to operate economically in north-western Europe, where wind directions are variable. The body contains all the milling machinery. The first post mills were of the sunken type, where the post was buried in an earth mound to support it. Later, a wooden support was developed called the trestle. This was often covered over or surrounded by a roundhouse to protect the trestle from the weather and to provide storage space. This type of windmill was the most common in Europe until the 19th century when more powerful tower and smock mills replaced them. Hollow-post mill In a hollow-post mill, the post on which the body is mounted is hollowed out, to accommodate the drive shaft. This makes it possible to drive machinery below or outside the body while still being able to rotate the body into the wind. Hollow-post mills driving scoop wheels were used in the Netherlands to drain wetlands since the early 15th century onwards. Tower mill
Windmill
Wikipedia
340
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
By the end of the 13th century, the masonry tower mill, on which only the cap is rotated rather than the whole body of the mill, had been introduced. The spread of tower mills came with a growing economy that called for larger and more stable sources of power, though they were more expensive to build. In contrast to the post mill, only the cap of the tower mill needs to be turned into the wind, so the main structure can be made much taller, allowing the sails to be made longer, which enables them to provide useful work even in low winds. The cap can be turned into the wind either by winches or gearing inside the cap or from a winch on the tail pole outside the mill. A method of keeping the cap and sails into the wind automatically is by using a fantail, a small windmill mounted at right angles to the sails, at the rear of the windmill. These are also fitted to tail poles of post mills and are common in Great Britain and English-speaking countries of the former British Empire, Denmark, and Germany but rare in other places. Around some parts of the Mediterranean Sea, tower mills with fixed caps were built because the wind's direction varied little most of the time. Smock mill The smock mill is a later development of the tower mill, where the masonry tower is replaced by a wooden framework, called the "smock", which is thatched, boarded, or covered by other materials, such as slate, sheet metal, or tar paper. The smock is commonly of octagonal plan, though there are examples with different numbers of sides. Smock windmills were introduced by the Dutch in the 17th century to overcome the limitations of tower windmills, which were expensive to build and could not be erected on wet surfaces. The lower half of the smock windmill was made of brick, while the upper half was made of wood, with a sloping tower shape that added structural strength to the design. This made them lightweight and able to be erected on unstable ground. The smock windmill design included a small turbine in the back that helped the main mill to face the direction of the wind. Mechanics Sails
Windmill
Wikipedia
435
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
Common sails consist of a lattice framework on which the sailcloth is spread. The miller can adjust the amount of cloth spread according to the wind and the power needed. In medieval mills, the sailcloth was wound in and out of a ladder-type arrangement of sails. Later mill sails had a lattice framework over which the sailcloth was spread, while in colder climates, the cloth was replaced by wooden slats, which were easier to handle in freezing conditions. The jib sail is commonly found in Mediterranean countries and consists of a simple triangle of cloth wound round a spar. In all cases, the mill needs to be stopped to adjust the sails. Inventions in Great Britain in the late eighteenth and nineteenth centuries led to sails that automatically adjust to the wind speed without the need for the miller to intervene, culminating in patent sails invented by William Cubitt in 1807. In these sails, the cloth is replaced by a mechanism of connected shutters. In France, Pierre-Théophile Berton invented a system consisting of longitudinal wooden slats connected by a mechanism that lets the miller open them while the mill is turning. In the twentieth century, increased knowledge of aerodynamics from the development of the airplane led to further improvements in efficiency by German engineer Bilau and several Dutch millwrights. The majority of windmills have four sails. Multiple-sailed mills, with five, six, or eight sails, were built in Great Britain (especially in and around the counties of Lincolnshire and Yorkshire), Germany, and less commonly elsewhere. Earlier multiple-sailed mills are found in Spain, Portugal, Greece, parts of Romania, Bulgaria, and Russia. A mill with an even number of sails has the advantage of being able to run with a damaged sail by removing both the damaged sail and the one opposite, which does not unbalance the mill.
Windmill
Wikipedia
367
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
In the Netherlands, the stationary position of the sails, i.e. when the mill is not working, has long been used to give signals. If the blades are stopped in a "+" sign (3-6-9-12 o'clock), the windmill is open for business. When the blades are stopped in an "X" configuration, the windmill is closed or not functional. A slight tilt of the sails (top blade at 1 o'clock) signals joy, such as the birth of a healthy baby. A tilt of the blades to 11-2-5-8 o'clock signals mourning or warning. It was used to signal the local region during Nazi operations in World War II, such as searches for Jews. Across the Netherlands, windmills were placed in mourning positions in honor of the Dutch victims of the 2014 Malaysian Airlines Flight 17 shootdown. Machinery Gears inside a windmill convey power from the rotary motion of the sails to a mechanical device. The sails are carried on the horizontal windshaft. Windshafts can be wholly made of wood, wood with a cast iron pole end (where the sails are mounted), or entirely of cast iron. The brake wheel is fitted onto the windshaft between the front and rear bearings. It has the brake around the outside of the rim and teeth in the side of the rim which drive the horizontal gearwheel, called the wallower, on the top end of the vertical upright shaft. In grist mills, the great spur wheel, lower down the upright shaft, drives one or more stone nuts on the shafts driving each millstone. Post mills sometimes have a head and/or tail wheel driving the stone nuts directly, instead of the spur gear arrangement. Additional gear wheels drive a sack hoist or other machinery. The machinery differs if the windmill is used for applications other than milling grain. A drainage mill uses another set of gear wheels on the bottom end of the upright shaft to drive a scoop wheel or Archimedes' screw. Sawmills use a crankshaft to provide a reciprocating motion to the saws. Windmills have been used to power many other industrial processes, including paper mills and threshing mills, and to process oil seeds, wool, paints, and stone products. Spread and decline
Windmill
Wikipedia
463
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
In the 14th century, windmills became popular in Europe; the total number of wind-powered mills is estimated to have been around 200,000 at the peak in 1850, which is close to half of the some 500,000 water wheels. Windmills were applied in regions where there was too little water, where rivers freeze in winter and in flat lands where the flow of the river was too slow to provide the required power. With the coming of the Industrial Revolution, the importance of wind and water as primary industrial energy sources declined, and they were eventually replaced by steam (in steam mills) and internal combustion engines, although windmills continued to be built in large numbers until late in the nineteenth century. More recently, windmills have been preserved for their historic value, in some cases as static exhibits when the antique machinery is too fragile to be put in motion, and other cases as fully working mills. Of the 10,000 windmills in use in the Netherlands around 1850, about 1,000 are still standing. Most of these are being run by volunteers, though some grist mills are still operating commercially. Many of the drainage mills have been appointed as a backup to the modern pumping stations. The Zaan district has been said to have been the first industrialized region of the world with around 600 operating wind-powered industries by the end of the eighteenth century. Economic fluctuations and the industrial revolution had a much greater impact on these industries than on grain and drainage mills, so only very few are left. Construction of mills spread to the Cape Colony in the seventeenth century. The early tower mills did not survive the gales of the Cape Peninsula, so in 1717 the Heeren XVII sent carpenters, masons, and materials to construct a durable mill. The mill, completed in 1718, became known as the Oude Molen and was located between Pinelands Station and the Black River. Long since demolished, its name lives on as that of a Technical school in Pinelands. By 1863, Cape Town had 11 mills stretching from Paarden Eiland to Mowbray. Specialized windmills Wind turbines
Windmill
Wikipedia
425
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
A wind turbine is a windmill-like structure specifically developed to generate electricity. They can be seen as the next step in the development of the windmill. The first wind turbines were built by the end of the nineteenth century by James Blyth in Scotland (1887), Charles F. Brush in Cleveland, Ohio (1887–1888) and Poul la Cour in Denmark (1890s). La Cour's mill from 1896 later became the local power of the village of Askov. By 1908, there were 72 wind-driven electric generators in Denmark, ranging from 5 to 25 kW. By the 1930s, windmills were widely used to generate electricity on farms in the United States where distribution systems had not yet been installed, built by companies such as Jacobs Wind, Wincharger, Miller Airlite, Universal Aeroelectric, Paris-Dunn, Airline, and Winpower. The Dunlite Corporation produced turbines for similar locations in Australia. Forerunners of modern horizontal-axis utility-scale wind generators were the WIME-3D in service in Balaklava, USSR, from 1931 until 1942, a 100 kW generator on a tower, the Smith–Putnam wind turbine built in 1941 on the mountain known as Grandpa's Knob in Castleton, Vermont, United States, of 1.25 MW, and the NASA wind turbines developed from 1974 through the mid-1980s. The development of these 13 experimental wind turbines pioneered many of the wind turbine design technologies in use today, including steel tube towers, variable-speed generators, composite blade materials, and partial-span pitch control, as well as aerodynamic, structural, and acoustic engineering design capabilities. The modern wind power industry began in 1979 with the serial production of wind turbines by Danish manufacturers Kuriant, Vestas, Nordtank, and Bonus. These early turbines were small by today's standards, with capacities of 20–30 kW each. Since then, commercial turbines have increased greatly in size, with the Enercon E-126 capable of delivering up to 7 MW, while wind turbine production has expanded to many countries. As the 21st century began, rising concerns over energy security, global warming, and eventual fossil fuel depletion led to an expansion of interest in all available forms of renewable energy. Worldwide, many thousands of wind turbines are now operating, with a total nameplate capacity of 591 GW as of 2018. Materials
Windmill
Wikipedia
483
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
In an attempt to make wind turbines more efficient and increase their energy output, they are being built bigger, with taller towers and longer blades, and being increasingly deployed in offshore locations. While such changes increase their power output, they subject the components of the windmills to stronger forces and consequently put them at a greater risk of failure. Taller towers and longer blades suffer from higher fatigue, and offshore windfarms are subject to greater forces due to higher wind speeds and accelerated corrosion due to the proximity to seawater. To ensure a long enough lifetime to make the return on the investment viable, the materials for the components must be chosen appropriately. The blade of a wind turbine consists of 4 main elements: the root, spar, aerodynamic fairing, and surfacing. The fairing is composed of two shells (one on the pressure side, and one on the suction side), connected by one or more webs linking the upper and lower shells. The webs connect to the spar laminates, which are enclosed within the skins (surfacing) of the blade, and together, the system of the webs and spars resist the flapwise loading. Flapwise loading, one of the two different types of loading that blades are subject to, is caused by the wind pressure, and edgewise loading (the second type of loading) is caused by the gravitational force and torque load. The former loading subjects the spar laminate on the pressure (upwind) side of the blade to cyclic tension-tension loading, while the suction (downwind) side of the blade is subject to cyclic compression-compression loading. Edgewise bending subjects the leading edge to a tensile load, and the trailing edge to a compressive load. The remainder of the shell, not supported by the spars or laminated at the leading and trailing edges, is designed as a sandwiched structure, consisting of multiple layers to prevent elastic buckling.
Windmill
Wikipedia
391
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
In addition to meeting the stiffness, strength, and toughness requirements determined by the loading, the blade needs to be lightweight, since the weight of the blade scales with the cube of its radius. To determine which materials fit the criteria described above, a parameter known as the beam merit index is defined: Mb = E^1/2 / rho, where E is Young's modulus and rho is the density. The best blade materials are carbon fiber and glass fiber reinforced polymers (CFRP and GFRP). Currently, GFRP materials are chosen for their lower cost, despite the much greater figure of merit of CFRP. Recycling and waste problems with polymer blades When the Vindeby Offshore Wind Farm in Denmark was decommissioned in 2017, 99% of the non-degradable fiberglass from its 33 wind turbine blades ended up, cut into pieces, at the Rærup Controlled Landfill near Aalborg, and considerably larger quantities of fiberglass followed in 2020, even though landfilling is the least environmentally friendly way of handling this waste. Scrapped wind turbine blades are set to become a huge waste problem both in Denmark and in the countries to which Denmark, to a greater and greater extent, exports its turbines. "The reason why many wings end up in landfill is that they are incredibly difficult to separate from each other, which you will have to do if you hope to be able to recycle the fiberglass", says Lykke Margot Ricard, Associate Professor in Innovation and Technological Foresight and education leader for civil engineering in Product Development and Innovation at the University of Southern Denmark (SDU). According to Dakofa, the Danish Competence Center for Waste and Resources, there is nothing specific in the Danish waste order about how to handle discarded fiberglass. Several scrap dealers tell Ingeniøren that they have handled wind turbine blades (wings) that have been pulverized after being taken to a recycling station. One of them is the recycling company H.J. Hansen, whose product manager said that they have transported approximately half of the blades they have received since 2012 to Reno Nord's landfill in Aalborg. A total of around 1,000 blades have ended up there, he estimates, and today up to 99 percent of the blades the company receives end up in a landfill.
Windmill
Wikipedia
465
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
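The beam merit index Mb = E^1/2 / rho defined in the entry above can be compared for the two blade materials it names. The moduli and densities below are rough, typical values assumed for illustration only, not figures taken from the source; with numbers in this range CFRP scores roughly twice as high as GFRP, which is the "much greater figure of merit" referred to in the text.

# Compare the beam merit index Mb = sqrt(E) / rho for two composite blade materials.
# Material properties are assumed ballpark values, not data from the source.
materials = {
    "CFRP": {"E_Pa": 135e9, "rho_kg_m3": 1600.0},   # carbon-fibre reinforced polymer (assumed)
    "GFRP": {"E_Pa": 40e9,  "rho_kg_m3": 1900.0},   # glass-fibre reinforced polymer (assumed)
}

for name, props in materials.items():
    merit = props["E_Pa"] ** 0.5 / props["rho_kg_m3"]
    print(name, round(merit, 1))                    # CFRP ~230, GFRP ~105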
Since 1996, according to an estimate made by Lykke Margot Ricard (SDU) in 2020, at least 8,810 tonnes of blade scrap have been disposed of in Denmark, and the waste problem will grow significantly in the coming years as more and more wind turbines reach their end of life. According to the SDU lecturer's calculations, the waste sector in Denmark will have to receive 46,400 tonnes of fiberglass from wind turbine blades over the next 20–25 years. Likewise, on the Danish island of Lolland, 250 tonnes of fiberglass from wind turbine waste were deposited in 2020 at a landfill at Gerringe in the middle of the island. In the United States, worn-out wind turbine blades made of fiberglass go to the handful of landfills that accept them (e.g., in Lake Mills, Iowa; Sioux Falls, South Dakota; and Casper, Wyoming). Windpumps Windpumps have been used to pump water since at least the 9th century in what is now Afghanistan, Iran, and Pakistan. The use of windpumps became widespread across the Muslim world and later spread to East Asia (China) and South Asia (India). Windmills were later used extensively in Europe, particularly in the Netherlands and the East Anglia area of Great Britain, from the late Middle Ages onwards, to drain land for agricultural or building purposes.
Windmill
Wikipedia
284
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
The "American windmill", or "wind engine", was invented by Daniel Halladay in 1854 and was used mostly for lifting water from wells. Larger versions were also used for tasks such as sawing wood, chopping hay, and shelling and grinding grain. In early California and some other states, the windmill was part of a self-contained domestic water system which included a hand-dug well and a wooden water tower supporting a redwood tank enclosed by wooden siding known as a tankhouse. During the late 19th century, steel blades and towers replaced wooden construction. At their peak in 1930, an estimated 600,000 units were in use. Firms such as U.S. Wind Engine and Pump Company, Challenge Wind Mill and Feed Mill Company, Appleton Manufacturing Company, Star, Eclipse, Fairbanks-Morse, Dempster Mill Manufacturing Company, and Aermotor became the main suppliers in North and South America. These windpumps are used extensively on farms and ranches in the United States, Canada, Southern Africa, and Australia. They feature a large number of blades, so they turn slowly with considerable torque in low winds and are self-regulating in high winds. A tower-top gearbox and crankshaft convert the rotary motion into reciprocating strokes carried downward through a rod to the pump cylinder below. Such mills pumped water and powered feed mills, sawmills, and agricultural machinery. In Australia, the Griffiths Brothers at Toowoomba manufactured windmills of the American pattern from 1876, with the trade name Southern Cross Windmills in use from 1903. These became an icon of the Australian rural sector by utilizing the water of the Great Artesian Basin. Another well-known maker was Metters Ltd. of Adelaide, Perth and Sydney.
Windmill
Wikipedia
352
43490
https://en.wikipedia.org/wiki/Windmill
Technology
Energy and fuel
null
Schist is a medium-grained metamorphic rock showing pronounced schistosity (named for the rock). This means that the rock is composed of mineral grains easily seen with a low-power hand lens, oriented in such a way that the rock is easily split into thin flakes or plates. This texture reflects a high content of platy minerals, such as mica, talc, chlorite, or graphite. These are often interleaved with more granular minerals, such as feldspar or quartz. Schist typically forms during regional metamorphism accompanying the process of mountain building (orogeny) and usually reflects a medium grade of metamorphism. Schist can form from many different kinds of rocks, including sedimentary rocks such as mudstones and igneous rocks such as tuffs. Schist metamorphosed from mudstone is particularly common and is often very rich in mica (a mica schist). Where the type of the original rock (the protolith) is discernible, the schist is usually given a name reflecting its protolith, such as schistose metasandstone. Otherwise, the names of the constituent minerals will be included in the rock name, such as quartz-feldspar-biotite schist. Schist bedrock can pose a challenge for civil engineering because of its pronounced planes of weakness. Etymology The word schist is derived ultimately from the Greek word σχίζειν (schízein), meaning "to split", which refers to the ease with which schists can be split along the plane in which the platy minerals lie.
Schist
Wikipedia
351
43530
https://en.wikipedia.org/wiki/Schist
Physical sciences
Petrology
null
Definition Before the mid-19th century, the terms slate, shale and schist were not sharply differentiated by those involved with mining. Geologists define schist as medium-grained metamorphic rock that shows well-developed schistosity. Schistosity is a thin layering of the rock produced by metamorphism (a foliation) that permits the rock to be easily split into thin flakes or slabs. The mineral grains in a schist are typically coarse enough to be easily seen with a 10× hand lens. Typically, over half the mineral grains in a schist show a preferred orientation. Schists make up one of the three divisions of metamorphic rock by texture, with the other two divisions being gneiss, which has poorly developed schistosity and thicker layering, and granofels, which has no discernible schistosity. Schists are defined by their texture without reference to their composition, and while most are a result of medium-grade metamorphism, they can vary greatly in mineral makeup. However, schistosity normally develops only when the rock contains abundant platy minerals, such as mica or chlorite. Grains of these minerals are strongly oriented in a preferred direction in schist, often also forming very thin parallel layers. The ease with which the rock splits along the aligned grains accounts for the schistosity. Though not a defining characteristic, schists very often contain porphyroblasts (individual crystals of unusual size) of distinctive minerals, such as garnet, staurolite, kyanite, sillimanite, or cordierite.
Schist
Wikipedia
346
43530
https://en.wikipedia.org/wiki/Schist
Physical sciences
Petrology
null
Because schists are a very large class of metamorphic rock, geologists will formally describe a rock as a schist only when the original type of the rock prior to metamorphism (the protolith) is unknown and its mineral content is not yet determined. Otherwise, the modifier schistose will be applied to a more precise type name, such as schistose semipelite (when the rock is known to contain moderate amounts of mica) or a schistose metasandstone (if the protolith is known to have been a sandstone). If all that is known is that the protolith was a sedimentary rock, the schist will be described as a paraschist, while if the protolith was an igneous rock, the schist will be described as an orthoschist. Mineral qualifiers are important when naming a schist. For example, a quartz-feldspar-biotite schist is a schist of uncertain protolith that contains biotite mica, feldspar, and quartz in order of apparent decreasing abundance. Lineated schist has a strong linear fabric in a rock which otherwise has well-developed schistosity. Formation Schistosity is developed at elevated temperature when the rock is more strongly compressed in one direction than in other directions (nonhydrostatic stress). Nonhydrostatic stress is characteristic of regional metamorphism where mountain building is taking place (an orogenic belt). The schistosity develops perpendicular to the direction of greatest compression, also called the shortening direction, as platy minerals are rotated or recrystallized into parallel layers. While platy or elongated minerals are most obviously reoriented, even quartz or calcite may take up preferred orientations. At the microscopic level, schistosity is divided into internal schistosity, in which inclusions within porphyroblasts take a preferred orientation, and external schistosity, which is the orientation of grains in the surrounding medium-grained rock.
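The naming rule just described, where mineral qualifiers are hyphenated onto "schist" and the example quartz-feldspar-biotite schist contains biotite, feldspar, and quartz in decreasing abundance, can be captured in a tiny helper. The function and the modal percentages below are purely illustrative assumptions, not part of any formal classification code.

```python
# Sketch: build a schist field name from estimated mineral abundances.
# Following the example in the text, the most abundant qualifier sits
# immediately before "schist", so the hyphenated list reads in order of
# increasing abundance. Abundance figures are made-up illustrative values.

def schist_name(mineral_abundances: dict[str, float]) -> str:
    ordered = sorted(mineral_abundances, key=mineral_abundances.get)  # least abundant first
    return "-".join(ordered) + " schist"

sample = {"biotite": 40.0, "feldspar": 30.0, "quartz": 20.0}  # rough modal %
print(schist_name(sample))  # -> "quartz-feldspar-biotite schist"
```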
Schist
Wikipedia
429
43530
https://en.wikipedia.org/wiki/Schist
Physical sciences
Petrology
null
The composition of the rock must permit formation of abundant platy minerals. For example, the clay minerals in mudstone are metamorphosed to mica, producing a mica schist. Early stages of metamorphism convert mudstone to a very fine-grained metamorphic rock called slate, which with further metamorphism becomes fine-grained phyllite. Further recrystallization produces medium-grained mica schist. If the metamorphism proceeds further, the mica schist experiences dehydration reactions that convert platy minerals to granular minerals such as feldspars, decreasing schistosity and turning the rock into a gneiss. Other platy minerals found in schists include chlorite, talc, and graphite. Chlorite schist is typically formed by metamorphism of ultramafic igneous rocks, as is talc schist. Talc schist also forms from metamorphism of talc-bearing carbonate rocks formed by hydrothermal alteration. Graphite schist is uncommon but can form from metamorphism of sedimentary beds containing abundant organic carbon. This may be of algal origin. Graphite schist is known to have experienced greenschist facies metamorphism, for example in the northern Andes. Metamorphism of felsic volcanic rock, such as tuff, can produce quartz-muscovite schist. Engineering considerations In geotechnical engineering, a schistosity plane often forms a discontinuity that may have a large influence on the mechanical behavior (strength, deformation, etc.) of rock masses in, for example, tunnel, foundation, or slope construction. A hazard may exist even in undisturbed terrain. On August 17, 1959, a magnitude 7.2 earthquake destabilized a mountain slope composed of schist near Hebgen Lake, Montana. This caused a massive landslide that killed 26 people camping in the area.
Schist
Wikipedia
415
43530
https://en.wikipedia.org/wiki/Schist
Physical sciences
Petrology
null
Uraninite, also known as pitchblende, is a radioactive, uranium-rich mineral and ore with a chemical composition that is largely UO2 but because of oxidation typically contains variable proportions of U3O8. Radioactive decay of the uranium causes the mineral to contain oxides of lead and trace amounts of helium. It may also contain thorium and rare-earth elements. Overview Uraninite used to be known as pitchblende (from pitch, because of its black color, and blende, from blenden meaning "to deceive", a term used by German miners to denote minerals whose density suggested metal content, but whose exploitation, at the time they were named, was either unknown or not economically feasible). The mineral has been known since at least the 15th century, from silver mines in the Ore Mountains, on the German/Czech border. The type locality is the historic mining and spa town known as Joachimsthal, the modern-day Jáchymov, on the Czech side of the mountains, where F. E. Brückmann described the mineral in 1772. Pitchblende from the Johanngeorgenstadt deposit in Germany was used by M. Klaproth in 1789 to discover the element uranium. All uraninite minerals contain a small amount of radium as a radioactive decay product of uranium. Marie Curie used pitchblende, processing tons of it herself, as the source material for her isolation of radium in 1910. Uraninite also always contains small amounts of the lead isotopes 206Pb and 207Pb, the end products of the decay series of the uranium isotopes 238U and 235U respectively. Small amounts of helium are also present in uraninite as a result of alpha decay. Helium was first found on Earth in cleveite, an impure radioactive variety of uraninite, after having been discovered spectroscopically in the Sun's atmosphere. The extremely rare elements technetium and promethium can be found in uraninite in very small quantities (about 200 pg/kg and 4 fg/kg respectively), produced by the spontaneous fission of uranium-238. Francium can also be found in uraninite at 1 francium atom for every 1 × 10^18 uranium atoms in the ore as a result of the decay of actinium. Occurrence
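The quoted abundance of one francium atom per 1 × 10^18 uranium atoms is easier to picture as an absolute count. The back-of-the-envelope sketch below applies that ratio to a kilogram of uranium; the constants are standard values and the calculation is illustrative only.

```python
# Sketch: expected number of francium atoms accompanying 1 kg of uranium in
# uraninite, using the 1 Fr atom per 1e18 U atoms figure quoted above.

AVOGADRO = 6.022e23        # atoms per mole
U_MOLAR_MASS_G = 238.0     # g/mol, dominated by uranium-238

uranium_mass_g = 1000.0                                # 1 kg of uranium
u_atoms = uranium_mass_g / U_MOLAR_MASS_G * AVOGADRO   # ~2.5e24 atoms
fr_atoms = u_atoms / 1e18                              # apply the quoted ratio

print(f"U atoms in 1 kg:  {u_atoms:.2e}")
print(f"Fr atoms in 1 kg: {fr_atoms:.2e}")             # only a few million atoms
```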
Uraninite
Wikipedia
488
43532
https://en.wikipedia.org/wiki/Uraninite
Physical sciences
Minerals
Earth science
Uraninite is a major ore of uranium. Some of the highest-grade uranium ores in the world were found in the Shinkolobwe mine in the Democratic Republic of the Congo (the initial source for the Manhattan Project) and in the Athabasca Basin in northern Saskatchewan, Canada. Another important source of pitchblende is at Great Bear Lake in the Northwest Territories of Canada, where it is found in large quantities associated with silver. It also occurs in Australia, the Czech Republic, Germany, England, Rwanda, Namibia and South Africa. In the United States, it can be found in the states of Arizona, Colorado, Connecticut, Maine, New Hampshire, New Mexico, North Carolina and Wyoming. The geologist Charles Steen made a fortune on the production of uraninite in his Mi Vida mine in Moab, Utah. Uranium ores from the Ore Mountains (today the border between the Czech Republic and Germany) were an important source of supply for both the wartime German nuclear program (which failed to produce a bomb) and the Soviet nuclear program. Mining for uranium in the Ore Mountains (under the auspices of SDAG Wismut after the war) ceased after the collapse of the German Democratic Republic. Uranium ore is generally processed close to the mine into yellowcake, which is an intermediate step in the processing of uranium.
Uraninite
Wikipedia
271
43532
https://en.wikipedia.org/wiki/Uraninite
Physical sciences
Minerals
Earth science
Hornblende is a complex inosilicate series of minerals. It is not a recognized mineral in its own right, but the name is used as a general or field term to refer to a dark amphibole. Hornblende minerals are common in igneous and metamorphic rocks. Physical properties Hornblende has a hardness of 5–6, a specific gravity of 3.0 to 3.6, and is typically an opaque green, dark green, brown, or black color. It tends to form slender prismatic to bladed crystals, diamond-shaped in cross section, or is present as irregular grains or fibrous masses. Its planes of cleavage intersect at 56° and 124° angles. Hornblende is most often confused with the pyroxene series and biotite mica, which are also dark minerals found in granite and charnockite. Pyroxenes differ in their cleavage planes, which intersect at 87° and 93°. Hornblende is an inosilicate (chain silicate) mineral, built around double chains of silica tetrahedra. These chains extend the length of the crystal and are bonded to their neighbors by additional metal ions to form the complete crystal structure. Compositional variances Hornblende is part of the calcium-amphibole group of amphibole minerals. It is highly variable in composition, and includes at least five solid solution series: magnesiohornblende–ferrohornblende, tschermakite–ferrotschermakite, edenite–ferroedenite, pargasite–ferropargasite, and magnesiohastingsite–hastingsite. In addition, titanium, manganese, or chromium can substitute for some of the cations and oxygen, fluorine, or chlorine for some of the hydroxide (OH). The different chemical types are almost impossible to distinguish even by optical or X-ray methods, and detailed chemical analysis using an electron microprobe is required. There is a solid solution series between hornblende and the closely related amphibole minerals, tremolite–actinolite, at elevated temperature. A miscibility gap exists at lower temperatures, and, as a result, hornblende often contains exsolution lamellae of grunerite. Occurrence
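The cleavage angles given above (56° and 124° for hornblende, 87° and 93° for pyroxenes) are the usual field diagnostic for telling the two apart. The sketch below simply encodes those numbers; the tolerance is an arbitrary assumption, and real identification would rely on more than a single angle.

```python
# Sketch: crude diagnostic separating amphiboles such as hornblende from
# pyroxenes by measured cleavage intersection angle, using the values above.
# The +/- tolerance is an arbitrary assumption.

def classify_by_cleavage(angle_deg: float, tol: float = 5.0) -> str:
    if abs(angle_deg - 56) <= tol or abs(angle_deg - 124) <= tol:
        return "amphibole-like (e.g. hornblende): cleavages near 56/124 degrees"
    if abs(angle_deg - 87) <= tol or abs(angle_deg - 93) <= tol:
        return "pyroxene-like: cleavages near 87/93 degrees"
    return "inconclusive from cleavage angle alone"

for measured in (58.0, 91.0, 70.0):
    print(f"{measured:5.1f} deg -> {classify_by_cleavage(measured)}")
```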
Hornblende
Wikipedia
491
43533
https://en.wikipedia.org/wiki/Hornblende
Physical sciences
Silicate minerals
Earth science
Hornblende is a common constituent of many igneous and metamorphic rocks such as granite, syenite, diorite, gabbro, basalt, andesite, gneiss, and schist. It crystallizes in preference to pyroxene minerals from cooler magma that is richer in silica and water. It is the principal mineral of amphibolites, which form during medium- to high-grade metamorphism of mafic to intermediate igneous rock (igneous rocks with relatively low silica content) in the presence of pore water. Much of the pore water comes from the breakdown of micas or other hydrous minerals. However, hornblende itself breaks down at very high temperatures. Hornblende alters easily to chlorite, biotite, or other mafic minerals. A rare variety of hornblende contains less than 5% of iron oxide, is gray to white in color, and is named edenite from its locality in Edenville, Orange County, New York. Oxyhornblende is a variety in which most of the iron has been oxidized to the ferric state, Fe3+. Charge balance is preserved by the substitution of oxygen ions for hydroxide. Oxyhornblende is also typically enriched in titanium. It is found almost exclusively in volcanic rock and is sometimes called basaltic hornblende. Etymology The word hornblende is derived from the German Horn ('horn') and blenden ('to deceive'), in allusion to its similar appearance to metal-bearing ore minerals.
Hornblende
Wikipedia
322
43533
https://en.wikipedia.org/wiki/Hornblende
Physical sciences
Silicate minerals
Earth science
Basalt is an aphanitic (fine-grained) extrusive igneous rock formed from the rapid cooling of low-viscosity lava rich in magnesium and iron (mafic lava) exposed at or very near the surface of a rocky planet or moon. More than 90% of all volcanic rock on Earth is basalt. Rapid-cooling, fine-grained basalt is chemically equivalent to slow-cooling, coarse-grained gabbro. The eruption of basalt lava is observed by geologists at about 20 volcanoes per year. Basalt is also an important rock type on other planetary bodies in the Solar System. For example, the bulk of the plains of Venus, which cover ~80% of the surface, are basaltic; the lunar maria are plains of flood-basaltic lava flows; and basalt is a common rock on the surface of Mars. Molten basalt lava has a low viscosity due to its relatively low silica content (between 45% and 52%), resulting in rapidly moving lava flows that can spread over great areas before cooling and solidifying. Flood basalts are thick sequences of many such flows that can cover hundreds of thousands of square kilometres and constitute the most voluminous of all volcanic formations. Basaltic magmas within Earth are thought to originate from the upper mantle. The chemistry of basalts thus provides clues to processes deep in Earth's interior. Definition and characteristics Basalt is composed mostly of oxides of silicon, iron, magnesium, potassium, aluminum, titanium, and calcium. Geologists classify igneous rock by its mineral content whenever possible; the relative volume percentages of quartz (crystalline silica (SiO2)), alkali feldspar, plagioclase, and feldspathoid (QAPF) are particularly important. An aphanitic (fine-grained) igneous rock is classified as basalt when its QAPF fraction is composed of less than 10% feldspathoid and less than 20% quartz, and plagioclase makes up at least 65% of its feldspar content. This places basalt in the basalt/andesite field of the QAPF diagram. Basalt is further distinguished from andesite by its silica content of under 52%.
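The mineralogical definition above can be read as a simple screening rule: less than 10% feldspathoid and less than 20% quartz in the QAPF fraction, plagioclase at least 65% of the feldspar, and (to separate basalt from andesite) silica under 52%. The helper below is an illustrative sketch of that rule, not an implementation of the full QAPF diagram.

```python
# Sketch: QAPF-based screen for calling an aphanitic volcanic rock "basalt",
# using the thresholds quoted in the text. Inputs are modal volume percentages
# of the QAPF minerals plus whole-rock silica; the function is illustrative.

def is_basalt_qapf(quartz: float, alkali_feldspar: float, plagioclase: float,
                   feldspathoid: float, silica_wt_pct: float) -> bool:
    qapf_total = quartz + alkali_feldspar + plagioclase + feldspathoid
    if qapf_total == 0:
        return False
    quartz_frac = 100.0 * quartz / qapf_total          # % of the QAPF fraction
    foid_frac = 100.0 * feldspathoid / qapf_total
    feldspar = alkali_feldspar + plagioclase
    plag_of_feldspar = 100.0 * plagioclase / feldspar if feldspar else 0.0
    return (foid_frac < 10.0 and quartz_frac < 20.0 and
            plag_of_feldspar >= 65.0 and silica_wt_pct < 52.0)

# Plagioclase-dominated, quartz-poor volcanic rock with 49 wt% SiO2 -> True
print(is_basalt_qapf(quartz=5, alkali_feldspar=10, plagioclase=75,
                     feldspathoid=2, silica_wt_pct=49))
```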
Basalt
Wikipedia
464
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
It is often not practical to determine the mineral composition of volcanic rocks, due to their very small grain size, in which case geologists instead classify the rocks chemically, with particular emphasis on the total content of alkali metal oxides and silica (TAS); in that context, basalt is defined as volcanic rock with a content of between 45% and 52% silica and no more than 5% alkali metal oxides. This places basalt in the B field of the TAS diagram. Such a composition is described as mafic. Basalt is usually dark grey to black in colour, due to a high content of augite or other dark-coloured pyroxene minerals, but can exhibit a wide range of shading. Some basalts are quite light-coloured due to a high content of plagioclase; these are sometimes described as leucobasalts. It can be difficult to distinguish between lighter-colored basalt and andesite, so field researchers commonly use a rule of thumb for this purpose, classifying it as basalt if it has a color index of 35 or greater. The physical properties of basalt result from its relatively low silica content and typically high iron and magnesium content. The average density of basalt is 2.9 g/cm3, compared, for example, to granite's typical density of 2.7 g/cm3. The viscosity of basaltic magma is relatively low, around 10^4 to 10^5 cP, similar to the viscosity of ketchup, but that is still several orders of magnitude higher than the viscosity of water (about 1 cP). Basalt is often porphyritic, containing larger crystals (phenocrysts) that formed before the extrusion event that brought the magma to the surface, embedded in a finer-grained matrix. These phenocrysts are usually made of augite, olivine, or a calcium-rich plagioclase, which have the highest melting temperatures of any of the minerals that can typically crystallize from the melt, and which are therefore the first to form solid crystals. Basalt often contains vesicles; they are formed when dissolved gases bubble out of the magma as it decompresses during its approach to the surface; the erupted lava then solidifies before the gases can escape. When vesicles make up a substantial fraction of the volume of the rock, the rock is described as scoria.
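When mineral modes cannot be measured, the chemical (TAS) limits quoted above (45–52% silica and no more than 5% alkali metal oxides) and the colour-index rule of thumb take over. The sketch below simply restates those two checks; it is illustrative, not a full TAS classifier.

```python
# Sketch: the chemical (TAS) limits for basalt quoted in the text, plus the
# field rule of thumb that a colour index of 35 or more separates basalt from
# lighter-coloured andesite. Purely illustrative.

def is_basalt_tas(sio2_wt: float, na2o_wt: float, k2o_wt: float) -> bool:
    total_alkalis = na2o_wt + k2o_wt
    return 45.0 <= sio2_wt <= 52.0 and total_alkalis <= 5.0

def field_call(colour_index: float) -> str:
    # colour index = modal percentage of dark (mafic) minerals
    return "basalt" if colour_index >= 35.0 else "andesite or lighter rock"

print(is_basalt_tas(sio2_wt=49.5, na2o_wt=2.6, k2o_wt=0.8))  # True
print(field_call(42.0))                                      # basalt
```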
Basalt
Wikipedia
501
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
The term basalt is at times applied to shallow intrusive rocks with a composition typical of basalt, but rocks of this composition with a phaneritic (coarser) groundmass are more properly referred to either as diabase (also called dolerite) or—when they are more coarse-grained (having crystals over 2 mm across)—as gabbro. Diabase and gabbro are thus the hypabyssal and plutonic equivalents of basalt. During the Hadean, Archean, and early Proterozoic eons of Earth's history, the chemistry of erupted magmas was significantly different from what it is today, due to immature crustal and asthenosphere differentiation. The resulting ultramafic volcanic rocks, with silica (SiO2) contents below 45% and high magnesium oxide (MgO) content, are usually classified as komatiites. Etymology The word "basalt" is ultimately derived from Late Latin basaltes, a misspelling of Latin basanites "very hard stone", which was imported from Ancient Greek βασανίτης (basanítēs), from βάσανος (básanos, "touchstone"). The modern petrological term basalt, describing a particular composition of lava-derived rock, became standard because of its use by Georgius Agricola in 1546, in his work De Natura Fossilium. Agricola applied the term "basalt" to the volcanic black rock beneath the Bishop of Meissen's Stolpen castle, believing it to be the same as the "basaniten" described by Pliny the Elder in AD 77 in Naturalis Historia. Types On Earth, most basalt is formed by decompression melting of the mantle. The high pressure in the upper mantle (due to the weight of the overlying rock) raises the melting point of mantle rock, so that almost all of the upper mantle is solid. However, mantle rock is ductile (the solid rock slowly deforms under high stress). When tectonic forces cause hot mantle rock to creep upwards, pressure on the ascending rock decreases, and this can lower its melting point enough for the rock to partially melt, producing basaltic magma.
Basalt
Wikipedia
441
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Decompression melting can occur in a variety of tectonic settings, including in continental rift zones, at mid-ocean ridges, above geological hotspots, and in back-arc basins. Basalt also forms in subduction zones, where mantle rock rises into a mantle wedge above the descending slab. The slab releases water vapor and other volatiles as it descends, which further lowers the melting point and so increases the amount of melting. Each tectonic setting produces basalt with its own distinctive characteristics.
Basalt
Wikipedia
106
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Tholeiitic basalt, which is relatively rich in iron and poor in alkali metals and aluminium, includes most basalts of the ocean floor, most large oceanic islands, and continental flood basalts such as the Columbia River Plateau. Basalts are sometimes classified based on their titanium (Ti) content into High-Ti and Low-Ti varieties; High-Ti and Low-Ti basalt have been distinguished from each other in the Paraná and Etendeka traps and the Emeishan Traps. Mid-ocean ridge basalt (MORB) is a tholeiitic basalt that has erupted almost exclusively at ocean ridges; it is characteristically low in incompatible elements. Although all MORBs are chemically similar, geologists recognize that they vary significantly in how depleted they are in incompatible elements. When basalts of differing depletion are present in close proximity along mid-ocean ridges, that is seen as evidence for mantle inhomogeneity. Enriched MORB (E-MORB) is defined as MORB that is relatively undepleted in incompatible elements. It was once thought to be mostly located in hot spots along mid-ocean ridges, such as Iceland, but it is now known to be located in many other places along those ridges. Normal MORB (N-MORB) is defined as MORB that has an average amount of incompatible elements. D-MORB, depleted MORB, is defined as MORB that is highly depleted in incompatible elements. Alkali basalt is relatively rich in alkali metals. It is silica-undersaturated and may contain feldspathoids, alkali feldspar, phlogopite, and kaersutite. Augite in alkali basalts is titanium-enriched augite; low-calcium pyroxenes are never present. Alkali basalts are characteristic of continental rifting and hotspot volcanism. High-alumina basalt has greater than 17% alumina (Al2O3) and is intermediate in composition between tholeiitic basalt and alkali basalt. Its relatively alumina-rich composition is defined on the basis of rocks without phenocrysts of plagioclase. High-alumina basalts represent the low-silica end of the calc-alkaline magma series and are characteristic of volcanic arcs above subduction zones. Boninite is a high-magnesium form of basalt that is erupted generally in back-arc basins; it is distinguished by its low titanium content and trace-element composition.
Basalt
Wikipedia
506
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Ocean island basalts include both tholeiites and alkali basalts; the tholeiites predominate early in the eruptive history of the island. These basalts are characterized by elevated concentrations of incompatible elements, which suggests that their source mantle rock has produced little magma in the past (it is undepleted).
Basalt
Wikipedia
68
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Petrology The mineralogy of basalt is characterized by a preponderance of calcic plagioclase feldspar and pyroxene. Olivine can also be a significant constituent. Accessory minerals present in relatively minor amounts include iron oxides and iron-titanium oxides, such as magnetite, ulvöspinel, and ilmenite. Because of the presence of such oxide minerals, basalt can acquire strong magnetic signatures as it cools, and paleomagnetic studies have made extensive use of basalt. In tholeiitic basalt, pyroxene (augite and orthopyroxene or pigeonite) and calcium-rich plagioclase are common phenocryst minerals. Olivine may also be a phenocryst, and when present, may have rims of pigeonite. The groundmass contains interstitial quartz or tridymite or cristobalite. Olivine tholeiitic basalt has augite and orthopyroxene or pigeonite with abundant olivine, but olivine may have rims of pyroxene and is unlikely to be present in the groundmass. Alkali basalts typically have mineral assemblages that lack orthopyroxene but contain olivine. Feldspar phenocrysts typically are labradorite to andesine in composition. Augite is rich in titanium compared to augite in tholeiitic basalt. Minerals such as alkali feldspar, leucite, nepheline, sodalite, phlogopite mica, and apatite may be present in the groundmass. Basalt has high liquidus and solidus temperatures—values at the Earth's surface are near or above 1200 °C (liquidus) and near or below 1000 °C (solidus); these values are higher than those of other common igneous rocks. The majority of tholeiitic basalts are formed at approximately 50–100 km depth within the mantle. Many alkali basalts may be formed at greater depths, perhaps as deep as 150–200 km. The origin of high-alumina basalt continues to be controversial, with disagreement over whether it is a primary melt or derived from other basalt types by fractionation.
Basalt
Wikipedia
467
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Geochemistry Relative to most common igneous rocks, basalt compositions are rich in MgO and CaO and low in SiO2 and the alkali oxides, i.e., Na2O + K2O, consistent with their TAS classification. Basalt contains more silica than picrobasalt and most basanites and tephrites but less than basaltic andesite. Basalt has a lower total content of alkali oxides than trachybasalt and most basanites and tephrites. Basalt generally has a composition of 45–52 wt% SiO2, 2–5 wt% total alkalis, 0.5–2.0 wt% TiO2, 5–14 wt% FeO and 14 wt% or more Al2O3. Contents of CaO are commonly near 10 wt%, those of MgO commonly in the range 5 to 12 wt%. High-alumina basalts have aluminium contents of 17–19 wt% Al2O3; boninites have magnesium (MgO) contents of up to 15 percent. Rare feldspathoid-rich mafic rocks, akin to alkali basalts, may have Na2O + K2O contents of 12% or more. The abundances of the lanthanide or rare-earth elements (REE) can be a useful diagnostic tool to help explain the history of mineral crystallisation as the melt cooled. In particular, the relative abundance of europium compared to the other REE is often markedly higher or lower, and called the europium anomaly. It arises because Eu2+ can substitute for Ca2+ in plagioclase feldspar, unlike any of the other lanthanides, which tend to only form 3+ cations.
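The europium anomaly mentioned above is usually quantified (though the text does not give the formula) by comparing chondrite-normalised Eu with the value interpolated from its neighbours Sm and Gd, Eu/Eu* = Eu_N / sqrt(Sm_N × Gd_N). The sketch below uses that conventional definition with approximate chondrite reference values; the sample concentrations are hypothetical.

```python
# Sketch: conventional europium anomaly, Eu/Eu* = Eu_N / sqrt(Sm_N * Gd_N),
# where the _N values are chondrite-normalised concentrations. The chondrite
# reference values are approximate and the sample is hypothetical.

import math

CHONDRITE_PPM = {"Sm": 0.15, "Eu": 0.056, "Gd": 0.20}   # approximate values

def eu_anomaly(sm_ppm: float, eu_ppm: float, gd_ppm: float) -> float:
    sm_n = sm_ppm / CHONDRITE_PPM["Sm"]
    eu_n = eu_ppm / CHONDRITE_PPM["Eu"]
    gd_n = gd_ppm / CHONDRITE_PPM["Gd"]
    return eu_n / math.sqrt(sm_n * gd_n)

# Hypothetical basalt with a mild positive anomaly, as expected where
# plagioclase (which accepts Eu2+ in place of Ca2+) has accumulated.
print(f"Eu/Eu* = {eu_anomaly(sm_ppm=3.0, eu_ppm=1.3, gd_ppm=4.0):.2f}")
```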
Basalt
Wikipedia
379
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Mid-ocean ridge basalts (MORB) and their intrusive equivalents, gabbros, are the characteristic igneous rocks formed at mid-ocean ridges. They are tholeiitic basalts particularly low in total alkalis and in incompatible trace elements, and they have relatively flat REE patterns normalized to mantle or chondrite values. In contrast, alkali basalts have normalized patterns highly enriched in the light REE, and with greater abundances of the REE and of other incompatible elements. Because MORB basalt is considered a key to understanding plate tectonics, its compositions have been much studied. Although MORB compositions are distinctive relative to average compositions of basalts erupted in other environments, they are not uniform. For instance, compositions change with position along the Mid-Atlantic Ridge, and the compositions also define different ranges in different ocean basins. Mid-ocean ridge basalts have been subdivided into varieties such as normal (NMORB) and those slightly more enriched in incompatible elements (EMORB). Isotope ratios of elements such as strontium, neodymium, lead, hafnium, and osmium in basalts have been much studied to learn about the evolution of the Earth's mantle. Isotopic ratios of noble gases, such as 3He/4He, are also of great value: for instance, ratios for basalts range from 6 to 10 for mid-ocean ridge tholeiitic basalt (normalized to atmospheric values), but to 15–24 and more for ocean-island basalts thought to be derived from mantle plumes. Source rocks for the partial melts that produce basaltic magma probably include both peridotite and pyroxenite. Morphology and textures The shape, structure and texture of a basalt is diagnostic of how and where it erupted—for example, whether into the sea, in an explosive cinder eruption or as creeping pāhoehoe lava flows, the classic image of Hawaiian basalt eruptions. Subaerial eruptions Basalt that erupts under open air (that is, subaerially) forms three distinct types of lava or volcanic deposits: scoria; ash or cinder (breccia); and lava flows. Basalt in the tops of subaerial lava flows and cinder cones will often be highly vesiculated, imparting a lightweight "frothy" texture to the rock. Basaltic cinders are often red, coloured by oxidized iron from weathered iron-rich minerals such as pyroxene.
Basalt
Wikipedia
512
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Aā-type blocky cinder and breccia flows of thick, viscous basaltic lava are common in Hawaii. Pāhoehoe is a highly fluid, hot form of basalt which tends to form thin aprons of molten lava which fill up hollows and sometimes forms lava lakes. Lava tubes are common features of pāhoehoe eruptions. Basaltic tuff or pyroclastic rocks are less common than basaltic lava flows. Usually basalt is too hot and fluid to build up sufficient pressure to form explosive lava eruptions, but occasionally this will happen by trapping of the lava within the volcanic throat and buildup of volcanic gases. Hawaii's Mauna Loa volcano erupted in this way in the 19th century, as did Mount Tarawera, New Zealand in its violent 1886 eruption. Maar volcanoes are typical of small basalt tuffs, formed by explosive eruption of basalt through the crust, forming an apron of mixed basalt and wall rock breccia and a fan of basalt tuff further out from the volcano. Amygdaloidal structure is common in relict vesicles, and beautifully crystallized species of zeolites, quartz or calcite are frequently found. Columnar basalt During the cooling of a thick lava flow, contractional joints or fractures form. If a flow cools relatively rapidly, significant contraction forces build up. While a flow can shrink in the vertical dimension without fracturing, it cannot easily accommodate shrinking in the horizontal direction unless cracks form; the extensive fracture network that develops results in the formation of columns. These structures, or basalt prisms, are predominantly hexagonal in cross-section, but polygons with three to twelve or more sides can be observed. The size of the columns depends loosely on the rate of cooling; very rapid cooling may result in very small (<1 cm diameter) columns, while slow cooling is more likely to produce large columns. Submarine eruptions The character of submarine basalt eruptions is largely determined by depth of water, since increased pressure restricts the release of volatile gases and results in effusive eruptions. It has been estimated that below a certain water depth, explosive activity associated with basaltic magma is suppressed. Above this depth, submarine eruptions are often explosive, tending to produce pyroclastic rock rather than basalt flows. These eruptions, described as Surtseyan, are characterised by large quantities of steam and gas and the creation of large amounts of pumice. Pillow basalts
Basalt
Wikipedia
491
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
When basalt erupts underwater or flows into the sea, contact with the water quenches the surface and the lava forms a distinctive pillow shape, through which the hot lava breaks to form another pillow. This "pillow" texture is very common in underwater basaltic flows and is diagnostic of an underwater eruption environment when found in ancient rocks. Pillows typically consist of a fine-grained core with a glassy crust and have radial jointing. The size of individual pillows varies from 10 cm up to several metres. When pāhoehoe lava enters the sea it usually forms pillow basalts. However, when aā enters the ocean it forms a littoral cone, a small cone-shaped accumulation of tuffaceous debris formed when the blocky aā lava enters the water and explodes from built-up steam. The island of Surtsey in the Atlantic Ocean is a basalt volcano which breached the ocean surface in 1963. The initial phase of Surtsey's eruption was highly explosive, as the magma was quite fluid, causing the rock to be blown apart by the boiling steam to form a tuff and cinder cone. The eruption subsequently shifted to typical pāhoehoe-type behaviour. Volcanic glass may be present, particularly as rinds on rapidly chilled surfaces of lava flows, and is commonly (but not exclusively) associated with underwater eruptions. Pillow basalt is also produced by some subglacial volcanic eruptions. Distribution Earth Basalt is the most common volcanic rock type on Earth, making up over 90% of all volcanic rock on the planet. The crustal portions of oceanic tectonic plates are composed predominantly of basalt, produced from upwelling mantle below the ocean ridges. Basalt is also the principal volcanic rock in many oceanic islands, including the islands of Hawaii, the Faroe Islands, and Réunion. The eruption of basalt lava is observed by geologists at about 20 volcanoes per year. Basalt is the rock most typical of large igneous provinces. These include continental flood basalts, the most voluminous basalts found on land. Examples of continental flood basalts include the Deccan Traps in India, the Chilcotin Group in British Columbia, Canada, the Paraná Traps in Brazil, the Siberian Traps in Russia, the Karoo flood basalt province in South Africa, and the Columbia River Plateau of Washington and Oregon. Basalt is also prevalent across extensive regions of the Eastern Galilee, Golan, and Bashan in Israel and Syria. Basalt also is common around volcanic arcs, especially those on thin crust.
Basalt
Wikipedia
506
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Ancient Precambrian basalts are usually only found in fold and thrust belts, and are often heavily metamorphosed. These are known as greenstone belts, because low-grade metamorphism of basalt produces chlorite, actinolite, epidote and other green minerals. Other bodies in the Solar System As well as forming large parts of the Earth's crust, basalt also occurs in other parts of the Solar System. Basalt commonly erupts on Io (the third largest moon of Jupiter), and has also formed on the Moon, Mars, Venus, and the asteroid Vesta. The Moon The dark areas visible on Earth's moon, the lunar maria, are plains of flood basaltic lava flows. These rocks were sampled both by the crewed American Apollo program and the robotic Soviet Luna program, and are represented among the lunar meteorites. Lunar basalts differ from their Earth counterparts principally in their high iron contents, which typically range from about 17 to 22 wt% FeO. They also possess a wide range of titanium concentrations (present in the mineral ilmenite), ranging from less than 1 wt% TiO2 to about 13 wt%. Traditionally, lunar basalts have been classified according to their titanium content, with classes being named high-Ti, low-Ti, and very-low-Ti. Nevertheless, global geochemical maps of titanium obtained from the Clementine mission demonstrate that the lunar maria possess a continuum of titanium concentrations, and that the highest concentrations are the least abundant. Lunar basalts show exotic textures and mineralogy, particularly shock metamorphism, lack of the oxidation typical of terrestrial basalts, and a complete lack of hydration. Most of the Moon's basalts erupted between about 3 and 3.5 billion years ago, but the oldest samples are 4.2 billion years old, and the youngest flows, based on the age dating method of crater counting, are estimated to have erupted only 1.2 billion years ago.
Basalt
Wikipedia
407
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Venus From 1972 to 1985, five Venera and two VEGA landers successfully reached the surface of Venus and carried out geochemical measurements using X-ray fluorescence and gamma-ray analysis. These returned results consistent with the rock at the landing sites being basalts, including both tholeiitic and highly alkaline basalts. The landers are thought to have landed on plains whose radar signature is that of basaltic lava flows. These constitute about 80% of the surface of Venus. Some locations show high reflectivity consistent with unweathered basalt, indicating basaltic volcanism within the last 2.5 million years. Mars Basalt is also a common rock on the surface of Mars, as determined by data sent back from the planet's surface, and by Martian meteorites. Vesta Analysis of Hubble Space Telescope images of Vesta suggests this asteroid has a basaltic crust covered with a brecciated regolith derived from the crust. Evidence from Earth-based telescopes and the Dawn mission suggests that Vesta is the source of the HED meteorites, which have basaltic characteristics. Vesta is the main contributor to the inventory of basaltic asteroids of the main Asteroid Belt. Io Lava flows represent a major volcanic terrain on Io. Analysis of the Voyager images led scientists to believe that these flows were composed mostly of various compounds of molten sulfur. However, subsequent Earth-based infrared studies and measurements from the Galileo spacecraft indicate that these flows are composed of basaltic lava with mafic to ultramafic compositions. This conclusion is based on temperature measurements of Io's "hotspots", or thermal-emission locations, which suggest temperatures of at least 1,300 K and some as high as 1,600 K. Initial estimates suggesting eruption temperatures approaching 2,000 K have since proven to be overestimates because incorrect thermal models were used to model the temperatures. Alteration of basalt Weathering
Basalt
Wikipedia
390
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Compared to granitic rocks exposed at the Earth's surface, basalt outcrops weather relatively rapidly. This reflects their content of minerals that crystallized at higher temperatures and in an environment poorer in water vapor than granite. These minerals are less stable in the colder, wetter environment at the Earth's surface. The finer grain size of basalt and the volcanic glass sometimes found between the grains also hasten weathering. The high iron content of basalt causes weathered surfaces in humid climates to accumulate a thick crust of hematite or other iron oxides and hydroxides, staining the rock a brown to rust-red colour. Because of the low potassium content of most basalts, weathering converts the basalt to calcium-rich clay (montmorillonite) rather than potassium-rich clay (illite). Further weathering, particularly in tropical climates, converts the montmorillonite to kaolinite or gibbsite. This produces the distinctive tropical soil known as laterite. The ultimate weathering product is bauxite, the principal ore of aluminium. Chemical weathering also releases readily water-soluble cations such as calcium, sodium and magnesium, which give basaltic areas a strong buffer capacity against acidification. Calcium released by basalts binds CO2 from the atmosphere, forming CaCO3 and thus acting as a CO2 trap. Metamorphism Intense heat or great pressure transforms basalt into its metamorphic rock equivalents. Depending on the temperature and pressure of metamorphism, these may include greenschist, amphibolite, or eclogite. Basalts are important rocks within metamorphic regions because they can provide vital information on the conditions of metamorphism that have affected the region. Metamorphosed basalts are important hosts for a variety of hydrothermal ores, including deposits of gold, copper and volcanogenic massive sulfides.
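The geochemistry section earlier puts CaO near 10 wt% in typical basalt, and the passage above notes that calcium released by weathering binds CO2 as CaCO3. Combining the two gives an idealised ceiling on capture per tonne of rock; the sketch below assumes complete carbonation, which real weathering never achieves, so the result is an illustrative upper bound only.

```python
# Sketch: upper-bound CO2 uptake from complete carbonation of the calcium in
# one tonne of basalt, assuming ~10 wt% CaO (the typical figure quoted earlier)
# and the idealised reaction CaO + CO2 -> CaCO3.

CAO_MOLAR_MASS_G = 56.08    # g/mol
CO2_MOLAR_MASS_G = 44.01    # g/mol

basalt_mass_kg = 1000.0     # one tonne of basalt
cao_weight_fraction = 0.10  # ~10 wt% CaO

cao_mass_g = basalt_mass_kg * cao_weight_fraction * 1000.0
moles_ca = cao_mass_g / CAO_MOLAR_MASS_G                 # mol of Ca (one per CaO)
co2_captured_kg = moles_ca * CO2_MOLAR_MASS_G / 1000.0   # one CO2 per Ca

print(f"Idealised CO2 capture: {co2_captured_kg:.0f} kg per tonne of basalt")
```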
Basalt
Wikipedia
374
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Life on basaltic rocks The common corrosion features of underwater volcanic basalt suggest that microbial activity may play a significant role in the chemical exchange between basaltic rocks and seawater. The significant amounts of reduced iron, Fe(II), and manganese, Mn(II), present in basaltic rocks provide potential energy sources for bacteria. Some Fe(II)-oxidizing bacteria cultured from iron-sulfide surfaces are also able to grow with basaltic rock as a source of Fe(II). Fe- and Mn-oxidizing bacteria have been cultured from weathered submarine basalts of Kamaʻehuakanaloa Seamount (formerly Loihi). The impact of bacteria on altering the chemical composition of basaltic glass (and thus, the oceanic crust) and seawater suggests that these interactions may be relevant to hypotheses that link hydrothermal vents to the origin of life. Uses Basalt is used in construction (e.g. as building blocks or in the groundwork), in making cobblestones (from columnar basalt), and in making statues. Heating and extruding basalt yields stone wool, which has the potential to be an excellent thermal insulator. Carbon sequestration in basalt has been studied as a means of removing carbon dioxide, produced by human industrialization, from the atmosphere. Underwater basalt deposits, scattered in seas around the globe, have the added benefit of the water serving as a barrier to the re-release of CO2 into the atmosphere.
Basalt
Wikipedia
301
43534
https://en.wikipedia.org/wiki/Basalt
Physical sciences
Petrology
null
Ruby is a pinkish red to blood-red colored gemstone, a variety of the mineral corundum (aluminium oxide). Ruby is one of the most popular traditional jewelry gems and is very durable. Other varieties of gem-quality corundum are called sapphires. Ruby is one of the traditional cardinal gems, alongside amethyst, sapphire, emerald, and diamond. The word ruby comes from ruber, Latin for red. The color of a ruby is due to the element chromium. Some gemstones that are popularly or historically called rubies, such as the Black Prince's Ruby in the British Imperial State Crown, are actually spinels. These were once known as "Balas rubies". The quality of a ruby is determined by its color, cut, and clarity, which, along with carat weight, affect its value. The brightest and most valuable shade of red, called blood-red or pigeon blood, commands a large premium over other rubies of similar quality. After color follows clarity: similar to diamonds, a clear stone will command a premium, but a ruby without any needle-like rutile inclusions may indicate that the stone has been treated. Ruby is the traditional birthstone for July and is usually pinker than garnet, although some rhodolite garnets have a similar pinkish hue to most rubies. The world's most valuable ruby to be sold at auction is the Estrela de Fura, which sold for US$34.8 million. Physical properties Rubies have a hardness of 9.0 on the Mohs scale of mineral hardness. Among the natural gems, only moissanite and diamond are harder, with diamond having a Mohs hardness of 10.0 and moissanite falling somewhere in between corundum (ruby) and diamond in hardness. Sapphire, ruby, and pure corundum are α-alumina, the most stable form of Al2O3, in which 3 electrons leave each aluminium ion to join the regular octahedral group of six nearby O2− ions; in pure corundum this leaves all of the aluminium ions with a very stable configuration of no unpaired electrons or unfilled energy levels, and the crystal is perfectly colorless and transparent except for flaws.
Ruby
Wikipedia
474
43551
https://en.wikipedia.org/wiki/Ruby
Physical sciences
Mineral gemstones
null
When a chromium atom replaces an occasional aluminium atom, it too loses 3 electrons to become a chromium ion to maintain the charge balance of the Al2O3 crystal. However, the Cr ions are larger and have electron orbitals in different directions than aluminium. The octahedral arrangement of the O ions is distorted, and the energy levels of the different orbitals of those Cr ions are slightly altered because of the directions to the O ions. Those energy differences correspond to absorption in the ultraviolet, violet, and yellow-green regions of the spectrum. If one percent of the aluminium ions are replaced by chromium in ruby, the yellow-green absorption results in a red color for the gem. Additionally, absorption at any of the above wavelengths stimulates fluorescent emission of 694-nanometer-wavelength red light, which adds to its red color and perceived luster. The chromium concentration in artificial rubies can be adjusted (in the crystal growth process) to be ten to twenty times less than in the natural gemstones. Theodore Maiman says that "because of the low chromium level in these crystals they display a lighter red color than gemstone ruby and are referred to as pink ruby." After absorbing short-wavelength light, there is a short interval of time when the crystal lattice of ruby is in an excited state before fluorescence occurs. If 694-nanometer photons pass through the crystal during that time, they can stimulate more fluorescent photons to be emitted in-phase with them, thus strengthening the intensity of that red light. By arranging mirrors or other means to pass emitted light repeatedly through the crystal, a ruby laser in this way produces a very high intensity of coherent red light. All natural rubies have imperfections in them, including color impurities and inclusions of rutile needles known as "silk". Gemologists use these needle inclusions found in natural rubies to distinguish them from synthetics, simulants, or substitutes. Usually, the rough stone is heated before cutting. These days, almost all rubies are treated in some form, with heat treatment being the most common practice. Untreated rubies of high quality command a large premium.
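The 694-nanometre emission described above corresponds to a definite photon energy via E = hc/λ, which is the energy scale of the chromium levels involved in ruby fluorescence and the ruby laser. The short sketch below evaluates it with standard physical constants.

```python
# Sketch: energy of the 694 nm red photons emitted by ruby fluorescence,
# computed from E = h * c / wavelength with standard constants.

PLANCK_J_S = 6.626e-34      # Planck constant, J*s
LIGHT_SPEED_M_S = 2.998e8   # speed of light, m/s
JOULES_PER_EV = 1.602e-19   # electronvolt in joules

wavelength_m = 694e-9
energy_j = PLANCK_J_S * LIGHT_SPEED_M_S / wavelength_m

print(f"Photon energy: {energy_j:.3e} J = {energy_j / JOULES_PER_EV:.2f} eV")
```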
Ruby
Wikipedia
446
43551
https://en.wikipedia.org/wiki/Ruby
Physical sciences
Mineral gemstones
null