Dataset fields (one record per article: id, url, text, source, categories, token_count, subcategories):
id              int64    values 39 to 79M
url             string   lengths 31 to 227
text            string   lengths 6 to 334k
source          string   lengths 1 to 150
categories      list     lengths 1 to 6
token_count     int64    values 3 to 71.8k
subcategories   list     lengths 0 to 30
14,421,816
https://en.wikipedia.org/wiki/Crysis%202
Crysis 2 is a first-person shooter video game developed by Crytek, published by Electronic Arts and released in North America, Australia and Europe in March 2011 for Microsoft Windows, PlayStation 3, and Xbox 360. Officially announced on June 1, 2009, the game is the second main installment of the Crysis series, and a sequel to the 2007 video game Crysis, and its expansion Crysis Warhead. The story was written by Richard Morgan, while Peter Watts was consulted and wrote a novel adaptation of the game. It was the first game to showcase the CryEngine 3 game engine and the first game using the engine to be released on consoles. A sequel, Crysis 3, was released in 2013. A remastered version, titled Crysis 2 Remastered and following in the steps of Crysis Remastered, was released in 2021 for Nintendo Switch, PlayStation 4, Windows, and Xbox One, also bundled as part of the Crysis Remastered Trilogy compilation. Gameplay Crysis 2 is a first-person shooter. The player assumes the role of a Force Recon Marine called Alcatraz. Similar to its predecessor, it provides freedom to customize weaponry and abilities. Crytek wanted to avoid making another game set in a true jungle environment (as were Far Cry and Crysis); New York City has been dubbed an "urban jungle". The urban atmosphere offers new options with relation to progressing and planning attacks. Players are able to navigate between floors and buildings, as well as a destroyed cityscape. Campaign The player assumes the control of a Force Recon Marine named "Alcatraz", who gains ownership of the Nanosuit 2.0 from Army Delta Force officer Laurence "Prophet" Barnes, who returns from the original Crysis. CryNet Systems has been hunting Prophet to retrieve the suit, inadvertently pursuing Alcatraz, believing he is Prophet. The aliens seen in the original game have undergone a major redesigning, abandoning the ancient, tentacled exosuits seen in the first game for high-tech humanoid armored war machines that stalk Alcatraz through the ravaged New York City. Crytek stated prior to release that their intention was to surpass the original game graphically and gameplay-wise while also having lower system requirements and also supporting true stereoscopic 3D. The new Nanosuit supports new and upgraded features. Suit functionality has been streamlined; multiple modes can easily be used simultaneously whenever the user wishes. The first suit's Strength and Speed Modes have been combined into the new Power Mode, the suit binoculars function has been upgraded with an advanced Tactical mode, the Cloaking Device has been modified to allow increased sensory input and silent melee stealth kills and has been renamed to Stealth Mode, while the Armor Mode has been left more or less as is, with the exception of slightly restricted agility and an ever-decreasing energy level. Synopsis Characters and setting Crysis 2 takes place in 2023, three years after the events of the first game, in a war-torn New York City which has since been evacuated due to alien infestation. The game begins with news footage of a large outbreak of the "Manhattan" virus, a gruesome disease that causes complete cellular breakdown; civil unrest; and panic about an alien invasion by the Ceph, the tentacled, squid-like alien race behind the incident of the previous game, Crysis. 
Due to the breakdown in social order within New York City, Manhattan is placed under martial law, and under contract from the US Department of Defense, soldiers from Crynet Enforcement & Local Logistics (or simply "CELL"), a private military contractor run by the Crynet Corporation, police the chaos. Plot On August 23, 2023, a United States Marine Corps Force Recon unit is deployed into New York City to extract former Crynet employee Doctor Nathan Gould, who may have vital information on combating the Ceph, the alien race that is trying to destroy humanity. However, the Ceph sink the submarine transporting the unit, killing everyone but Force Recon Marine "Alcatraz", who is left mortally wounded. Delta Force Major Laurence "Prophet" Barnes saves Alcatraz and kills himself to allow his Nanosuit to assimilate and revive Alcatraz. In a recording, Prophet reveals that he had been infected by the Manhattan virus, and asks Alcatraz to continue his work against the Ceph. Believing Alcatraz is Prophet, Gould contacts Alcatraz and asks him to meet up at Gould's lab. CELL forces, led by Commander Dominic Lockhart, attack Alcatraz, believing him to be Prophet. On his way to Gould's laboratory, Alcatraz collects tissue samples from the Ceph, which cause strange reactions within his Nanosuit. Alcatraz meets with Gould, who becomes aware of Prophet's death, and explains that the suit has been rewriting its own code after absorbing the Ceph's tissue. He speculates that the suit is creating an antibody for the Manhattan virus, and they decide to scan more samples at a Crynet base on Wall Street. The scans are cut short when CELL forces led by Lockhart and Lieutenant Tara Strickland ambush them. As they attempt to transfer Alcatraz to their headquarters, the Ceph attack the CELL personnel. Additionally, a massive alien spire rises from the underground, releasing a spore-based bioweapon that kills most CELL troops in the area. Alcatraz's nanosuit further adapts to the spores, but malfunctions, and is rebooted remotely by Crynet director and Hargreave-Rasch Biotechnologies co-founder Jacob Hargreave. Hargreave contacts Alcatraz, claiming to have knowledge of the Ceph, and to have designed the Nanosuit based on stolen Ceph technology to be used as a defense against the aliens. Hargreave directs Alcatraz to another Ceph spire to conduct an experiment for him. On the way, Hargreave reveals to Alcatraz that the Manhattan virus had been spread by the Ceph, to clear out the entire human population from Earth. The Manhattan virus would cause all infected humans to melt down into a liquefied mass, which could then be stored and disposed of. Upon reaching the alien spire, Alcatraz attempts to interface the Nanosuit's systems with the aliens' technology, but fails. Meanwhile, the US Department of Defense rescinds CELL's authority over Manhattan and deploys Marines in their place under the command of Colonel Sherman Barclay. The American forces order an air strike on the city's flood barrier, in an attempt to drown the aliens out of lower Manhattan. Washed away by the resulting wave of water, Alcatraz is found in Madison Square Park by a squad led by Alcatraz's squadmate Chino, who survived the submarine's destruction. The Marines enlist his aid in evacuating civilians to Grand Central Terminal, the city's primary evacuation point. Hargreave asks Alcatraz to take a detour to the Hargreave-Rasch building, to find a stabilizing agent to facilitate the Nanosuit's analyzing process. 
Ceph interference causes this to fail, with Hargreave telling Alcatraz to help evacuation efforts at Grand Central. At the terminal, Alcatraz is reunited with Gould, who had somehow escaped Strickland. Grand Central is overrun by Ceph forces, but Alcatraz holds them off long enough for the evacuation to succeed, and he escapes the building. Alcatraz is tasked with defending another evacuation point at Times Square, and this time manages to repurpose the alien spores to be lethal to the Ceph, destroying all Ceph in the area. With the evacuation complete, Gould instructs Alcatraz to head to Roosevelt Island, to infiltrate a Crynet complex named "The Prism", where Hargreave resides. Alcatraz foils Lockhart's attempts to ambush him, and kills him in the process. However, he is then betrayed and captured by Hargreave, who wants the Nanosuit for himself, to continue the mission in person. Hargreave attempts to remove the Nanosuit from Alcatraz's body, but the Nanosuit resists its removal, having assimilated with Alcatraz. Alcatraz is then rescued by Strickland, who reveals herself to be an undercover CIA operative responsible for Alcatraz's deployment. Strickland instructs Alcatraz to capture Hargreave. In Hargreave's private office, Alcatraz discovers Hargreave's body in a vegetative state stored in a cryonic chamber. Hargreave reveals to Alcatraz that he had been communicating with him through an advanced computer system, having been injured in an encounter with the Ceph at Tunguska. Hargreave gives Alcatraz a Nanosuit upgrade, allowing it to fully interface with the Ceph, as the Ceph invade the island. Hargreave triggers the self-destruct system of the complex, and orders the remaining CELL forces to aid Alcatraz's exfiltration. Alcatraz escapes the complex, and reunites with Gould, Strickland and Chino on the shores of Manhattan. Alcatraz is notified by Barclay that STRATCOM has just authorized a tactical nuclear strike on Manhattan Island. Thus, Alcatraz has a short period of time to end the conflict with the Ceph before the missile is launched. Alcatraz and his comrades make their way toward the center of the alien infestation, and spot a massive alien "litho-ship" rising out of the ground beneath Central Park. Alcatraz assaults the floating section of Central Park, and makes his way to the alien spire at its center, which serves as a dispersal point for the alien spores. Alcatraz successfully turns the spire's bio-weapon against the Ceph, causing the death of all the Ceph in the city. After some days, the city begins to recover with the help of Gould, Strickland, and the US military. Alcatraz, while unconscious, communicates with Prophet, whose memories, experiences, and personality had been stored in the suit. Prophet tells Alcatraz that, while the mission in New York is a success, the Ceph, who had been present on Earth since prehistoric times, had built constructs globally. The Nanosuit then assimilates Prophet's memories into Alcatraz. The Nanosuit then receives a broadcast from Karl Rasch, the other founder of Hargreave-Rasch Biotechnologies, asking for his name. Alcatraz replies: "They call me Prophet.". Development Crysis 2 was announced at E3 2009 on June 1, 2009, and was in development from 2007. Crysis 2 is the sequel to 2007's Crysis which was lauded for its impressive visuals. German-based studio Crytek, which developed the first game, is the lead developer of the sequel, along with help from Crytek UK, formerly Free Radical. 
It is the first game using the new engine CryEngine 3. The Microsoft Windows version is built on DirectX 9, with an optional DirectX 11 add-on. Crytek looked to surpass the original Crysis, which was still a benchmark of PC graphical performance in 2011. Crysis 2 did not use EA's online pass system. On April 14, 2014, Crytek announced that the multiplayer mode for Microsoft Windows will be unplayable after GameSpy switches off its servers on May 30, 2014. Leaked beta A beta version of the game dated from January was leaked on multiple torrent sites on February 11, 2011. Online reports indicated that the leaked beta is a partially functioning developer's build compiled on January 13, 2011. The leaked version included the entire single-player campaign and multiplayer component, but contained numerous bugs, was plagued by frequent crashes, and was only partially completed with many placeholders and textures missing and was limited to DirectX 9, rather than the DirectX 11 which was expected in the retail game. Crytek released a statement saying they were "deeply disappointed" in piracy, which "continues to damage the PC packaged goods market." Some reviewers remarked that Crytek's statement was strange, considering that no PC demo of the game had been released yet, and moreover, the source of the leaked beta was almost certainly an internal employee (rather than pirates). On February 14, 2011, Crytek released a statement by Cevat Yerli that stated that despite their disappointment caused by the leak incident, Crytek is overwhelmed with the support they have received and they can assure the community that PC gaming is very important to them now and in the future. Crytek producer Nathan Camarillo called the Crysis 2 game leak a "really ugly version" that the studio did not want people to see: In the beginning of 2012, PC Gamer reported that Crysis 2 was the most pirated PC game of 2011 with 3.9 million downloads. Marketing and release Retail versions On August 17, 2010, EA announced that there would be two special editions of the game. The Limited Edition of Crysis 2 is available at the same price as the regular game, but in limited quantities. It comes with bonus experience points to "immediately boost the player up to Rank 5, giving access to all the preset class loadouts", a digital camo weapon skin for the SCAR, the "Hologram Decoy" attachment for the SCAR, and unique in-game platinum dog tags. The Indian Version, on pre-order, also includes the Threat Tracer Suit Module (Early Access), and, on buying from the EA store, a gold dog tag and desert camo for SCAR. The Nano Edition of Crysis 2, which was only available through pre-order, includes the Limited Edition copy of the game in a Steelbook case, an 11" statue of Alcatraz posed on top of a New York City taxi, an art book, and a Nanosuit backpack "modeled after the in-game super suit." The Nanosuit backpack is large enough to accommodate a 17" laptop. As of September 26, 2010, the Nano Edition was made available for pre-order on the EA website for a $149.95 price tag but was sold out before March 2011. After the game's launch, those who preordered the Nano Edition had their shipment delayed. EA gave the affected customers a 75% off coupon and later a free digital copy of Crysis 2 Limited Edition as compensation. In May 2012, Crysis 2: Maximum Edition was released for Microsoft Windows. 
It included the game and previously released DLC, including nine additional multiplayer maps and new game modes for them, two new weapons (FY71 Assault Rifle and M18 Smoke Grenades), the Scar weapon skin, a Scar hologram decoy to attach to the player's weapon, platinum dog tags, and access to bonus XP through custom and preset classes. Also included is the high resolution texture pack. Multiplayer demo EA and Crytek launched a multiplayer demo of Crysis 2 on January 25, 2011. Crytek announced that the demo would only be available until February 4, 2011. The demo was on the Xbox 360, for Gold members to download, although on January 27, Crytek confirmed that there would be a multiplayer demo for Microsoft Windows. The demo featured the maps Skyline and Pier 17, as well as two multiplayer game modes to play: Team Instant Action and Crash Site. Team Instant Action puts two teams against one another in a team deathmatch style, while Crash Site has players defending alien drop pods like control points. Within hours of its release, thousands of complaints were reported after numbers of players were met with disconnects from games, crashing during loading and, oddly, a temperamental incompatibility with the Xbox Wireless WiFi adaptor. Crytek issued a statement telling players it was aware of "technical issues" with the Xbox multiplayer demo of Crysis 2, and managed to fix most of the issues in time for the PC demo. Some bugs still exist as of the current trial build but are mostly if not entirely visual and do not noticeably affect game play. Speaking at an EA event to PlayStation Universe, Crysis 2 producer Nathan Camarillo said that a PlayStation 3 version was possible, also stating there would be no difference in quality between the PlayStation 3 version of Crysis 2 and the Xbox 360 one, which had seen a pre-release demo. Crytek released the first footage of Crysis 2 running on PlayStation 3 on February 24, 2011. The second Crysis 2 multiplayer demo was released on March 1, 2011, on both Microsoft Windows and Xbox 360. Among bug fixes from the first beta, the map 'Pier 17' was added to the Xbox version and extended to the PC version. PC gamers have commented on Xbox 360 remnants in the PC demo version, such as the prompt to "press start to begin" or to "adjust your TV settings" when configuring the game brightness. It has also been reported that the PC version would not be released with support for DirectX 11, which will instead be implemented with a patch "later on". On April 8, 2011, Crytek announced – still without marking delivery date – that a DirectX 11 patch was being worked on. On March 15, 2011, a multiplayer demo was released on the PlayStation Network, featuring both of the maps featured on the Xbox 360 version of the demo, being 'Pier 17' and 'Skyline'. On March 18, it was removed from the Store and the servers were shut down due to server connection delays. Crysis 2 Remastered A remastered version, following in the steps of Crysis Remastered, was announced on May 21, 2021. It was released for the Nintendo Switch, PlayStation 4, Xbox One and Microsoft Windows on October 15, 2021, both as a bundle with Crysis Remastered and Crysis 3 Remastered, titled Crysis Remastered Trilogy, and separately. This version of the game was co-developed with Saber Interactive and is self-published by Crytek. Downloadable content The first post-launch downloadable content (DLC) package, titled Crysis 2: Retaliation, was announced on May 10, 2011. 
Retaliation features four new multiplayer maps - Park Avenue, Transit, Shipyard, and Compound. It was released on May 18, 2011, for the PC, PlayStation 3 and Xbox 360. On June 14, 2011, a second map pack entitled Decimation was released for the Xbox 360 and PC. It included five new maps (5th Avenue, Chasm, Plaza, Prism, and Apartments) as well as two new weapons (FY71 Assault Rifle and the Smoke Grenade). Decimation was released onto the PlayStation 3 platform on June 28 in North America and June 29 in Europe. Soundtrack The Crysis 2 Original Soundtrack was composed by Borislav Slavov and Tilman Sillescu with the help of Hans Zimmer and Lorne Balfe. A new rendition of the song entitled "New York, New York" by B.o.B was used in the launch trailer. There have been four official releases of the soundtrack. Three albums are available in digital form (via iTunes and Amazon): The Original Videogame Soundtrack, released on the game's launch date, with 15 tracks; Be Fast!, released on April 26, 2011, with 16 tracks; and Be the Weapon!, released on June 7, 2011, with 17 tracks. The most complete version, consisting of two CDs and 46 tracks, was released on April 26, 2011, under La-La Land Records. Reception Crysis 2 received positive reviews from critics. The reviewers praised various graphical attributes as well as the empowering Nanosuit, but criticized the linearity of the gameplay in contrast to its open world predecessors, Crysis and Crysis Warhead, as well as Crytek's acclaimed debut title Far Cry. Review aggregator website Metacritic rated the PC version 86/100, the Xbox 360 version 84/100, and the PlayStation 3 version 85/100. One early review of Crysis 2 was published by Official Xbox Magazine, which gave the game a 9/10. According to the magazine, it was the multiplayer that tipped the balance. It describes the online experience as "some of the most exciting, angry and satisfying action you'll ever have". The sheer spectacle of the single player campaign also left OXM impressed, and the magazine said the game's Nanosuit "is both massively empowering and intelligently balanced by the need to manage its energy levels". Gamereactor reviewed all versions simultaneously and awarded the game a 9/10, noting that "its design is an exciting contrast to the jungles of the original, and New York is filled with destroyed landmarks, ruined neighbourhoods and the beauty of disaster that Cevat talked about. The amount of detail is insane, and the effects are incredible." On the other hand, the reviewers criticized the story, noting "the dialogue often feels over the top and the characters feel flat and uninteresting. Crytek have clearly been inspired by Half-Life 2, but they never even come close to Valve's league despite their high ambitions." They concluded that "it would simply be a shame not to call this the best action game so far this year." A review for the PlayStation 3 version only was published by Official PlayStation Magazine, which gave the game an 8/10. OPM calls Crysis 2 "excellent - technically strong, visually outstanding and full of welcome fresh ideas." OPMs main gripes are with the shooter's "bungled" opening section, and their view that it takes several hours of "persisting" through "a thorny, poorly signposted and indifferent shooter" until Crysis 2 really kicks off. "Developer Crytek has a deserved reputation for pushing gaming hardware to the brink, and its debut work on PS3 is first-rate," it says. "It doesn't just look good, it looks different. 
The Manhattan mix of crooked concrete spires and green urban spaces is refreshing after the relentless dark khaki backgrounds of Call of Duty and Medal of Honor." The Telegraph considered that the game heavily borrowed from the Call of Duty shooters, being much more scripted and linear than Crysis, calling the game a "walled in" experience. The Telegraph also criticized the enemies' "utterly atrocious" AI, "problematic" sound, and "uninspiring" multiplayer. GameZone gave the game an 8.5/10, stating "With plenty of in-game collectibles in both the multiplayer and single-player modes, as well as solid multiplayer gameplay options, players will find plenty of bang for their buck, and the stunning power of the CryEngine needs to be seen to be believed." During the 15th Annual Interactive Achievement Awards, the Academy of Interactive Arts & Sciences nominated Crysis 2 for "Outstanding Achievement in Visual Engineering". Unlike the original Crysis, which allowed the user to extensively change various graphical settings, Crysis 2 at launch provided fewer options. However, advanced settings and DirectX 11 features were added in patch 1.8 along with high resolution textures. The high-res texture upgrade can be used in either DX9 mode or DX11 mode (the graphics card must have 768 MB or more video memory), but can only be enabled on 64-bit operating systems. Due to an unresolved bug in Crysis 2, DirectX 11 mode does not function properly on Windows 8. As of June 30, 2011, over 3 million copies of the game have been sold across all platforms, which is fewer copies than the original Crysis sold on PC alone. In April 2012 it was awarded the Deutscher Computerspielpreis in the category Best German Game. Criticism The game was heavily criticized for its misuse of tessellation, which resulted in the game unjustifiably favoring NVIDIA GPUs. Sequel A sequel titled Crysis 3 was released on February 19, 2013, for Microsoft Windows, PlayStation 3, and Xbox 360. Notes References External links 2011 video games Video games about alien invasions Apocalyptic video games Biological weapons in popular culture Fiction about corporate warfare CryEngine games Crytek games Dystopian video games Electronic Arts games First-person shooters German science fiction Inactive multiplayer online games Fiction about memory erasure and alteration Fiction about mind control Multiplayer and single-player video games Nanopunk Fiction about nanotechnology New York City in fiction Nintendo Switch games PlayStation 3 games PlayStation 4 games Science fiction video games Stealth video games Transhumanism in video games Video games about amnesia Video games about death Video games about the United States Marine Corps Video game sequels Video games developed in Germany Video games developed in the United Kingdom Video games scored by Hans Zimmer Video games scored by Lorne Balfe Video games set in New York City Video games set in 2023 Video games with stereoscopic 3D graphics Windows games Xbox 360 games Xbox One games Fiction about invisibility 2
Crysis 2
[ "Materials_science", "Biology" ]
5,137
[ "Fiction about nanotechnology", "Nanotechnology", "Biological weapons in popular culture", "Biological warfare" ]
14,421,821
https://en.wikipedia.org/wiki/Crysis%203
Crysis 3 is a 2013 first-person shooter video game developed by Crytek and published by Electronic Arts. It is the third installment in the Crysis series, and a sequel to the 2011 video game Crysis 2. The multiplayer portion of the game was developed by Crytek UK. Crysis 3's story, which serves to end the Crysis trilogy, revolves around a Nanosuit holder named Prophet and his quest for revenge against the Alpha Ceph, the leader of the Ceph alien race. Gameplay revolves around the use of the Nanosuit, which grants players a variety of abilities such as invisibility. New features introduced in Crysis 3 include a new Nanosuit ability called "Rip & Throw", a compound bow and hacking, which allows players to hack into enemies' equipment, drones, and security defenses. Crysis 3 is set in a post-apocalyptic New York City, in an effort to merge the urban landscape of Crysis 2 and the forest setting of the original Crysis. The game introduces the "Seven Wonders", with each wonder having its own unique landscape and tactical layout. Due to complaints about Crysis 2's linearity, the game's levels were opened up so as to grant players more freedom. The development team also put efforts into creating a more emotional story, and the story's protagonist was inspired by the lead character of District 9. The game was developed by a team of 100 people during its 23-month development cycle. Crytek UK developed the game's multiplayer portion. Officially announced in April 2012, the game was released for PlayStation 3, Windows, and Xbox 360 in February 2013. A Wii U port was in development, but was cancelled because of strained relations between Nintendo and EA. The game received positive reviews upon release. Praise was directed at the weapon selection and customization, menus, level design, visuals and multiplayer, while it was criticized for its story, length, and outdated mechanics in comparison to its predecessors. With a budget of $66 million, the game sold 205,000 copies in its debut month, and became a commercial failure for Electronic Arts. The game was later included in Crysis Trilogy, a compilation released in February 2014. A remastered version, titled Crysis 3 Remastered and following in the steps of Crysis 2 Remastered, was released in 2021 for Nintendo Switch, PlayStation 4, Windows, and Xbox One, and also bundled as part of the Crysis Remastered Trilogy compilation. Gameplay Similar to the earlier games in the Crysis series, Crysis 3 is a first-person shooter. Players take control of Prophet as he progresses through New York City to defeat the Ceph, a technologically advanced alien race. Throughout the game, players can slide, sprint, jump and crouch. When encountering enemies, players can defeat them by using guns or a compound bow, utilizing explosives like grenades and C4, or by performing a melee attack. Performing certain movements takes up energy from the Nanosuit, the armor worn by Prophet. Some abilities are not available for players to perform or utilize if the Nanosuit's energy is too low; they must wait until energy is refilled. As a result, players are tasked with managing the use of the energy. The game's artificial intelligence was updated, allowing enemies to react more quickly to players' attacks. Enemies can take cover when attacked, and can employ strategy to assist and support each other against attacks. The Nanosuit allows players to identify the threat level, and the weapons held by enemies. 
Players can tag enemies and items by enabling the visor, and can spot enemies using Nano-Vision, which detects the heat of both enemies and allies. Levels are more open-ended than in Crysis 2. Players are given more freedom, and can choose a gameplay style based on direct confrontation, or a more discreet and stealthy approach, in order to deal with enemies and to complete their objectives. There is no definite way to beat the game's seven levels. Instead, players can take different routes to reach their objectives. Players can fight against enemies utilizing a wide array of gadgets and weapons, and by using the abilities granted by the Nanosuit. They can utilize an invisibility cloak to evade enemies, avoid detection, or perform silent takedowns. The Nanosuit also has an armor mode, which reduces the amount of damage taken in exchange for slower movement speed. New weapons are introduced in Crysis 3, such as a compound bow. Players can use the bow while in cloak mode. When using other firearms, the cloak is disrupted and can no longer function until it cools down. Arrows can be collected by players after use. Players can hack into enemies' technology, one of the game's new features. In addition, players can hack security codes, weapon boxes, Ceph technology, mines, lasers, and sentry turrets, which can all be used to fight against enemies. Players can also upgrade and customize their weapons and the Nanosuit. They can change the attachments and ammo types for their weapons. For instance, players can change between explosive arrows and electric arrows for their bow. The Nanosuit can be upgraded by collecting different suit upgrade modules scattered across the world. These upgrades can enhance the suit's properties and strengthen or unlock new abilities for players as they progress through the game. Multiplayer Gameplay remains similar when playing the multiplayer mode. Unlike in the single-player campaign, sprinting or boosting armor in the multiplayer modes does not use any Nanosuit energy. There are 8 different modes, with a total of 12 available maps to play on. They are Team Deathmatch, Deathmatch, Crash Site / Spears, Capture the Relay / Extraction, Hunter, Assault, Cell Vs Rebel, Developers Choice, Maximum Team Deathmatch, and Maximum Deathmatch. Scattered throughout each map are special alien weapons with scarce ammo that can be picked up by players. Players also have a new passive ability called Rip and Throw, in which they interact with environmental objects to create obstacles for hostile players and tactical advantages for themselves. This ability to interact with the environment was pushed heavily upon the team by their publisher, EA. A refined kill streak system is introduced in Crysis 3, allowing players to gain rewards by killing hostile players and collecting the dog tags they drop when killed. The perks gained from this vary from map to map. They are Maximum Radar, which reveals enemy players' locations on the mini-map, Swarmer, Gamma Burst, EMP, and Maximum Nanosuit. In addition to traditional multiplayer modes, a new multiplayer mode, the Hunter mode, is featured in the game. It is an asymmetrical multiplayer mode which pits two teams of players, playing as either hunters or troopers from CELL, against each other. 
The three CELL classes are equipped with completely different weapons, and defeated troopers respawn as hunters and have to defeat their former teammates. Hunters are equipped with the nanosuit and infinite cloak, as well as the Hunter Bow, which allows the hunter to fire while they're cloaked. The hunters also have access to a Thermite Arrow tip, and the CELL have various explosives and weapons that change depending on class selected. The PC version of the game can accommodate up to 16 players, while the console versions can only support 12 players. Synopsis Setting Players take on the role of Prophet as he returns to New York City in 2047, 24 years after the events of Crysis 2. He discovers the city has been encased in a giant dome created by the corrupt CELL corporation. The New York City Liberty Dome is a veritable urban rainforest teeming with overgrown trees, dense swamplands, and raging rivers. Within the Liberty Dome, seven distinct and treacherous environments are known as the Seven Wonders. Prophet is said to be on a "revenge mission" after uncovering the truth behind CELL's motives for building the quarantined Nanodome. Plot After the events of Crysis 2, Psycho and Prophet travel the world looking for the Alpha Ceph, the ultimate Ceph leader. Prophet and Psycho finally trace the Alpha Ceph in Russia and imprison it. However, shortly afterwards CELL Corporation, now attempting global domination of land and technology, disables Prophet in Siberia and captures all the Nanosuit soldiers, skinning them of their suits to recover the Ceph genetics stored in them. CELL transfers Prophet to a facility in New York, encased within a giant "Nanodome", to skin him. He is saved by a resistance force, led by Claire Fontanelli and Karl Ernst Rasch, as Prophet is the only Nanosuit holder left who can stop CELL. Psycho, who was saved by Claire after being skinned, explains to Prophet that during his absence, CELL used Ceph technology to generate unlimited energy, and gained a monopoly over the world's power supply. Those who could not pay for energy were enslaved by debt to CELL. The source of CELL's power generation for the entire world, called System X, is located in now abandoned New York. The resistance group wants System X destroyed to free the world from CELL. After Psycho and Prophet disable System X's core, it turns out that it is a system protocol designed to contain the Alpha Ceph while draining energy off of the alien. However, the secondary defense protocol was initiated, causing the power facility to self-destruct. The Alpha Ceph, free from containment, opens a wormhole to the Ceph homeworld. They plan to send the powerful Ceph warrior caste to invade Earth through the wormhole and terraform it at the expense of all local life's extinction. With the Alpha Ceph no longer dormant, the Ceph coordinator reactivates, and a coordinated Ceph attack ensues. After unlocking his potential ability by removing some neural blocks in his suit, Prophet learns that CELL plans to use Archangel, a satellite-based energy distribution device that can draw power from the world's power grid, as a directed energy weapon to destroy the Alpha Ceph. Firing it would cause a chain reaction that would destroy Earth. They shut off the weapon before it has enough energy to fire. Along the way, Psycho discovers Claire was one of the scientists who reluctantly skinned him, causing friction in his previously romantic relationship with her. 
Unfortunately, Karl, who had secretly used Ceph technology to extend his lifespan, is possessed by the Ceph and critically wounds Claire while psychically crippling Prophet. After Psycho shoots him, Karl regains control and sacrifices himself to distract the Alpha Ceph. Prophet, Psycho, and Claire board a VTOL and battle Ceph ships, eventually crashing. Claire dies in the process. Psycho, saddened by her death, laments to Prophet that he is powerless because he no longer possesses a Nanosuit. Upon being told that his lack of a Nanosuit saved him from Karl's control earlier, Psycho, now going by his real name, Michael, finds another VTOL to take Prophet to the Ceph. Michael and Prophet head towards the Alpha Ceph, but are bogged down by the Ceph Master Mind. Prophet finds his way through the Ceph Army hordes and kills the Alpha Ceph, which in turn kills all other Ceph troopers in the area. However, they do not have enough time to destroy the Ceph wormhole structure, and the beam powering the wormhole pulls Prophet into space. Now in orbit around Earth, Prophet sees a massive Ceph warship coming through the wormhole. Recalling Archangel's power, Prophet wakes up, hacks into the satellite, and uses it to destroy the warship, collapsing the wormhole and preventing the Ceph warrior caste from invading Earth. Prophet is pushed back to Earth. He crash-lands in the water near the Lingshan Islands, where the events of Crysis took place 27 years earlier. When Prophet wakes the next morning, he is in an abandoned hut in Lingshan. A television playing in the background informs him that CELL's assets were frozen by Senator Strickland as the corporation is under investigation. As the neural blocks are removed from the Nanosuit, the suit's outer layer is changed to reform Prophet's former physical body, resurrecting him. He walks out onto the beach and relinquishes his past by throwing his tags into the water. He decides to go by his actual name, Laurence Barnes, from then on. The game ends with Prophet walking back to the shed and activating his stealth ability. In a post-credit scene, two CELL soldiers are shot by Michael after they escort three board members into a bunker. Michael announces he would like to make a complaint to the board members regarding his treatment at one of their "hospitals". Development Crysis 3's development began in February 2011, two months before the release of Crysis 2. The development of the game's campaign was handled by Crytek; its multiplayer was handled by Crytek UK. Its development budget was significantly smaller than that of Crysis 2, as resources and manpower were relocated to develop Homefront 2 and Ryse: Son of Rome. As a result, only about 100 people worked on the game during its 23-month development. The game's budget was about $66 million. Crysis 2 was criticised for abandoning the island setting of the original Crysis. The studio decided to recreate New York City and set the game in a post-apocalyptic environment. To do this, the company introduced a concept called "Nanodome", a dome set up by CELL to isolate New York City, which had fallen to ruins over the years. Plants have grown significantly, leading to an environment that is a mix between an urban environment and the jungle setting of the past two games. The decision was made to create this environment as the team wanted to stay away from the typical urban battlefields shown in other games of the same genre. 
Instead of having a rainforest in the abandoned wasteland, the Nanodome is used to fix the shape of the forest, and Crytek hoped that it would define the atmosphere and the narrative of the game. The game's main protagonist is Prophet, a nanosuit holder who had previously appeared in Crysis and Crysis 2. He returns in Crysis 3 as Crytek considered him the most complex character, with the longest heritage in the series. When designing him, the team took inspiration from the protagonist of District 9. Psycho from the original Crysis and Crysis Warhead returns as Prophet's companion. The game revolves around Prophet attempting to redeem himself by taking revenge after finding his former squad members are dead. The story explores the themes of redemption and revenge as well as the relationship between humans and technology. Mike Read, the game's producer, summed up the game by calling it "human". Unlike Crysis 2's protagonist Alcatraz, who does not speak throughout the game, Prophet is voiced in Crysis 3 in an effort to deliver a stronger emotional connection. The company made use of performance capture to record actors' performances, body movements, and facial expressions. According to Read, this helps the company to create a more affecting and emotional story. This was not done in the previous games due to technological limitations. Unlike the original Crysis, which gives players more freedom to explore, Crysis 2 was criticized for being too linear. As a result, in developing the game's campaign, Crytek attempted to integrate the two major aspects of the previous installments: the freedom given in the original Crysis and the linearity of Crysis 2. Despite not being as open as the first half of Crysis, maps are significantly larger in this game than in previous installments. Several linear segments were preserved. Crytek referred to the game's maps as "action bubbles", which do not confine players to a small space but allow them to move around freely. Linearity was preserved as the developers considered that such segments could help players experience "epic" moments and "massive Michael Bay" moments. Crytek hoped that by opening up levels they could give players a sense of control, allowing them to plan and execute strategy. The game's seven levels were developed simultaneously, and the focus phases for gameplay, art, and optimization of a level each lasted about one to two weeks. Another theme featured in the game is the "hunt", and, as a result, many weapons were built around the concept. The studio wanted to stay away from typical weapon design and did not want weapons to serve simply to differentiate Crysis 3 from Crysis 2. The team aimed to use weapons to create narrative. The game features a compound bow, which is reminiscent of a weapon a hunter often uses. The gameplay is built on three pillars: access, adapt, and attack. Players are often tasked to detect, spot, and learn their enemies' behaviors and patterns before attacking. With the bow, players are encouraged to play the game stealthily, extending the game's combat. A new feature called "hacking" was developed. According to Crytek, hacking plays a huge role throughout the game. The game's combat was also made more fast-paced than that of its predecessors. One of the major goals in developing Crysis 3 was to "push graphics", as the company considered that graphics can effectively assist and drive gameplay and create immersion for the player. The game is powered by CryEngine 3, Crytek's proprietary engine. 
It utilizes some of the newest features of CryEngine 3, such as volumetric fog shadows, improved dynamic vegetation, dynamic caustics, improved area lighting and diffuse shadows. After Crysis 2 received some criticism from PC gamers because of the design sacrifices made due to the limitations of the older console hardware, Crytek responded that the PC version of Crysis 3 would figuratively "melt down PCs" due to its high system requirements. The PC version of the game required a DirectX 11 compatible video card and operating system. Similar to Crysis 2, the game is a multi-platform title, and Crytek considered the development of the game's console version a huge obstacle; they had to "rip the engine to pieces" to get the game running on consoles. The multiplayer portion of the game was created by Crytek UK. It was designed to improve the efficiency of the Nanosuit in an online environment. In an effort to create memorable maps, the team designed routes that can only be discovered after a player's first playthrough. Crytek UK hoped that this approach would help players to become more immersed in the game's universe. The Hunter mode, introduced in this game, originated from TimeSplitters' Gladiator mode. The decision to make sprint energy separate from the Nanosuit energy was due to the development team's desire to create larger levels and help players navigate the map faster. The game's music was composed by Borislav Slavov, who had previously led the soundtrack development of Crysis 2. New music was composed for the game, while some other themes from the past installments were rearranged. The theme of the game's music was changed to fit the game's post-apocalyptic setting. The game's music is dynamic and is designed to reflect players' gameplay style. As a result, when players use a radical approach to complete missions, more exciting background music is played. In contrast, when players are playing stealthily, the background music is relatively calmer and quieter. Marketing and release In November 2010, Nathan Camarillo, an executive producer from Crytek, revealed that the Crysis series could potentially be a very long-running franchise, as the company considered the series' universe easy for players to get into and become invested in. He added: "As the franchise grows down the line, there's no reason it can't be as big [as Call of Duty]". The story elements of the game had already been planned in January 2011. Despite that, Cevat Yerli, Crytek's CEO, claimed that if Crysis 2 was not a successful title, Crytek would not develop its sequel. In March 2012, Crytek teased an "absolutely fantastic" project and announced that a full reveal of the game would be held in April 2012. The game was accidentally revealed by EA on its web store on 11 April 2012. It was removed immediately from the store, but the title was later officially announced on 16 April 2012. Filmmaker Albert Hughes was commissioned to produce The 7 Wonders of Crysis 3, a series of six short stylized videos, each of which features a different aspect of the game. A PC-only closed alpha version of the multiplayer was released to selected Origin users on 31 October 2012. The test began on 2 November and ended on 9 November 2012. A public multiplayer beta containing two maps ("Museum" and "Airport") and two game modes ("Crash Site" and "Hunter") was available for the Xbox 360 and PlayStation 3 console platforms, as well as for the PC through Origin. 
The beta was made available on 29 January 2013 and ended on 12 February 2013. Crytek and EA announced that 3 million people participated in the beta. Crysis 3 was released on 19 February 2013 in the US and 21 February 2013 in the UK for PlayStation 3, Windows, and Xbox 360. The Wii U version was cancelled after relations between Nintendo and EA became troubled. The Hunter Edition, which features exclusive in-game items and early access to the compound bow in the multiplayer portion of the game, was released alongside the game. Players who had pre-ordered the game could also get the original Crysis for free. On 4 March 2015, the game was made available for Android via Nvidia Shield. The Crysis Trilogy bundle was released on 20 February 2014, consisting of the Deluxe Edition of the original Crysis along with the other games in the series. On 30 May 2013, Electronic Arts announced The Lost Island downloadable content (DLC). The multiplayer-only DLC includes two weapons, four maps, and two competitive multiplayer modes called "Frenzy" and "Possession". The downloadable content was released worldwide on 4 July 2013 for PlayStation 3, Windows, and Xbox 360. EA announced that it would shut down the game's servers on September 7, 2023. Crysis 3 Remastered A remastered version, following in the steps of Crysis Remastered, was announced on May 21, 2021. It was released for the Nintendo Switch, PlayStation 4, Windows, and Xbox One on October 15, 2021, both as a bundle with Crysis Remastered and Crysis 2 Remastered, titled Crysis Remastered Trilogy, and separately. This version of the game was co-developed with Saber Interactive and is self-published by Crytek. This version of the game does not feature any multiplayer modes. Reception Critical reception Crysis 3 received generally positive reviews from critics. Review aggregator website Metacritic rated the Xbox 360 version 76/100, the PlayStation 3 version 77/100 and the PC version 76/100. The visuals and graphics of the game were widely praised by reviewers. Christian Donlan of Eurogamer praised the game's stable frame rate. Furthermore, he considered the game's environmental design "artful". Matthew Rorie of GameSpy thought that the game was visually stunning. He applauded the team at Crytek for creating an environment that is "both inhospitable and queerly beautiful". Matt Bertz of Game Informer praised the visuals powered by CryEngine, and considered the game one of the best-looking games ever created. He especially praised its realistic environments, water effects, and character facial animation. Kevin VanOrd of GameSpot also praised the mix of the decayed urban environment and the rainforest, saying that it made the game striking to look at. The game's design was praised by various reviewers. Donlan considered the game's support of stealth a welcome addition, despite calling the game's last level a forgettable experience. Rorie praised the game's map design; he opined that the opened-up levels encourage exploration. Bertz considered that the game's world successfully captured a balance between the settings of its predecessors, and that the larger levels allowed players to deploy strategy before performing attacks. He added that some of the best missions were featured in the later stages of the game. Tristan Ogilvie of IGN thought that the controls were almost perfect, though he criticized the clumsy handling in several segments that require players to control vehicles. VanOrd criticized the game for being too easy. 
The game's online multiplayer received positive reviews from critics. Josh Harmon of Electronic Gaming Monthly thought that the game's multiplayer was better than the campaign, and that it made the overall experience more enjoyable. Donlan praised the Hunter mode; he believed that it had delivered a tense experience. Bertz echoed similar thoughts, but felt that the mode's appeal was not as good as typical modes like Domination. He also criticized the multiplayer's respawn system and terrain design. Lorenzo Veloria of GamesRadar thought that some of the game modes were unique and entertaining, despite noting some technical issues. Michael Rougeau of Complex criticized the Hunter mode, calling it "unbalanced". He also criticized the game for lacking a co-operative multiplayer mode. David Hinkle of Joystiq also noted some design errors in the Hunter mode. The story was received less favorably by critics than the game's other aspects. Harmon thought that several emotional segments of the game failed to deliver, and criticized the forgettable storyline and plot twists. Despite that, he praised the game's finale and considered that it brought proper closure to the Crysis trilogy. In contrast, Donlan commented that it was not quite the conclusion the series deserved. Rorie thought that the story was more mature than those of its predecessors, despite having a relatively weak start and a short length of about six hours. Bertz opined that the story was the most cohesive of the titles in the series. Veloria, however, criticized the game's narrative; he added that it was uninspiring due to the lack of character development and interesting dialogue. In contrast, Ogilvie thought that the game's dialogue and voice-acting were excellent, citing the game's relatable characters, something its predecessors failed to achieve. He considered the game's storytelling a massive improvement for the series. Many reviewers considered Crysis 3 an evolution of the series rather than a groundbreaking revolution. Rorie criticized the game for being unambitious, saying that despite its overall refinements, it had not strayed far enough from its predecessors. He concluded that Crysis 3 did not achieve the revolution brought by the original Crysis. Veloria thought that the title failed to bring any new element to the genre, but that the overall experience delivered by the game was still satisfying. Jose Otero of 1Up.com thought the game was fun despite its lack of original ideas, stating: "If you go in understanding that Crysis 3 delivers blockbuster entertainment and multiplayer that iterates on Call of Duty's perks system, you'll be fine. But if you want Crysis to stake a claim all its own, you might be disappointed." Evan Lahti of PC Gamer commented that the game did not surprise players, and that the title felt like Crysis 2: Episode 2 instead of a proper sequel. Sales During its debut release week and the next, Crysis 3 was the best-selling retail game in the UK, closely followed by Metal Gear Rising: Revengeance. It sold 205,000 copies in 12 days in North America during its debut month. The title, along with Dead Space 3, another EA title that was released in the same month, failed to meet the company's sales expectations. Cevat Yerli, Crytek's CEO, was also disappointed by the sales of Crysis 3. Nevertheless, he considered Crysis 3 the best game the studio had produced so far. Sequel A sequel under the working title Crysis 4 is currently in development. 
Notes References External links 2013 video games Video games about alien invasions Asymmetrical multiplayer video games Cancelled Wii U games Fiction about corporate warfare CryEngine games Crytek games Dystopian video games Electronic Arts games Fiction about invisibility First-person shooters German science fiction Hacking video games Hive minds in fiction Fiction about mind control Multiplayer and single-player video games Nanopunk Fiction about nanotechnology Nintendo Switch games PlayStation 3 games PlayStation 4 games Post-apocalyptic video games Fiction about resurrection Fiction about sacrifices Science fiction video games Stealth video games Transhumanism in video games Video game sequels Video games about revenge Video games developed in Germany Video games developed in the United Kingdom Video games featuring black protagonists Video games set in New York City Video games set in the 2040s Video games with stereoscopic 3D graphics Windows games Fiction about wormholes Xbox 360 games Xbox One games 3
Crysis 3
[ "Physics", "Materials_science" ]
5,930
[ "Fiction about nanotechnology", "Asymmetry", "Nanotechnology", "Asymmetrical multiplayer video games", "Symmetry" ]
14,422,528
https://en.wikipedia.org/wiki/Clumping%20factor%20A
Clumping factor A, or ClfA, is a virulence factor from Staphylococcus aureus (S. aureus) that binds to fibrinogen. ClfA also has been shown to bind to complement regulator I protein. It is responsible for the clumping of blood plasma observed when adding S. aureus to human plasma. Clumping factor can be detected by the slide test. See also Tefibazumab Coagulase References Staphylococcaceae Bacterial proteins Virulence factors
Clumping factor A
[ "Chemistry" ]
112
[ "Biochemistry stubs", "Protein stubs" ]
14,422,554
https://en.wikipedia.org/wiki/Isoenthalpic%E2%80%93isobaric%20ensemble
The isoenthalpic–isobaric ensemble (constant enthalpy and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant enthalpy and constant applied pressure. It is also called the NpH-ensemble, where the number of particles N is also kept constant. It was developed by physicist H. C. Andersen in 1980. The ensemble adds another degree of freedom, which represents the variable volume of a system to which the coordinates of all particles are relative. The volume becomes a dynamical variable with potential energy PV and kinetic energy ½M(dV/dt)², where P is the applied pressure and M is a fictitious mass associated with the volume. The enthalpy is a conserved quantity. Using the isoenthalpic–isobaric ensemble of a Lennard-Jones fluid, it was shown that the Joule–Thomson coefficient and inversion curve can be computed directly from a single molecular dynamics simulation. A complete vapor-compression refrigeration cycle and a vapor–liquid coexistence curve, as well as a reasonable estimate of the supercritical point, can also be simulated with this approach. NPH simulation can be carried out using GROMACS and LAMMPS. References Statistical ensembles
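For readers who want the conserved quantity written out, here is a minimal sketch of Andersen's extended-system formulation as a LaTeX display equation; the symbols Q (the fictitious "piston mass") and p_V (the volume's conjugate momentum) are notational choices for this sketch, not taken from the article.

```latex
% Andersen's extended-system formulation of the NPH (isoenthalpic-isobaric) ensemble.
% The volume V is a dynamical variable with conjugate momentum p_V and a fictitious
% piston mass Q, which is a free parameter of the method, not a physical mass.
H_{\mathrm{NPH}}
  = \sum_{i=1}^{N} \frac{\mathbf{p}_i^{2}}{2 m_i} + U(\mathbf{r}^{N})  % particle kinetic + potential energy = E
  + P_{\mathrm{ext}}\, V                                               % potential energy associated with V
  + \frac{p_V^{2}}{2Q}                                                 % kinetic energy associated with V
```

Since the last term is normally small, the conserved quantity is effectively the enthalpy H = E + P_ext V, which is why a single trajectory samples states of (nearly) fixed enthalpy at the set pressure.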
Isoenthalpic–isobaric ensemble
[ "Physics" ]
222
[ "Statistical mechanics stubs", "Statistical ensembles", "Statistical mechanics" ]
14,422,703
https://en.wikipedia.org/wiki/Tutton%27s%20salt
Tutton's salts are a family of salts with the formula M2M'(SO4)2(H2O)6 (sulfates) or M2M'(SeO4)2(H2O)6 (selenates). These materials are double salts, which means that they contain two different cations, M+ and M'2+, crystallized in the same regular ionic lattice. The univalent cation can be potassium, rubidium, caesium, ammonium (NH4), deuterated ammonium (ND4) or thallium. Sodium or lithium ions are too small. The divalent cation can be magnesium, vanadium, chromium, manganese, iron, cobalt, nickel, copper, zinc or cadmium. In addition to sulfate and selenate, the divalent anion can be chromate (CrO42−), tetrafluoroberyllate (BeF42−), hydrogenphosphate (HPO42−) or monofluorophosphate (PO3F2−). Tutton's salts crystallize in the monoclinic space group P21/a. The robustness of the structure is the result of the complementary hydrogen-bonding between the tetrahedral anions and cations as well as their interactions with the metal aquo complex [M'(H2O)6]2+. Examples and related compounds Perhaps the best-known is Mohr's salt, ferrous ammonium sulfate (NH4)2Fe(SO4)2(H2O)6. Other examples include the vanadous Tutton salt (NH4)2V(SO4)2(H2O)6 and the chromous Tutton salt (NH4)2Cr(SO4)2(H2O)6. In solids and solutions, the M'2+ ion exists as a metal aquo complex [M'(H2O)6]2+. Related to the Tutton's salts are the alums, which are also double salts but with the formula MM'(SO4)2(H2O)12. The Tutton's salts were once termed "false alums". History Tutton salts are sometimes called Schönites after the naturally occurring mineral called Schönite (K2Mg(SO4)2(H2O)6). They are named for Alfred Edwin Howard Tutton, who identified and characterised a large range of these salts around 1900. Such salts were of historical importance because they were obtainable in high purity and served as reliable reagents and spectroscopic standards. Table of salts Organic salts Some organic bases can also form salts that crystallise like Tutton's salts. References Sulfates
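As a quick arithmetic check on the M2M'(SO4)2(H2O)6 formula above, here is a minimal Python sketch (the atomic-weight table and helper function are illustrative, not from the article) that tallies the molar mass of Mohr's salt:

```python
# Molar mass of a Tutton's salt M2M'(SO4)2(H2O)6, illustrated with Mohr's salt
# (NH4)2Fe(SO4)2(H2O)6. Atomic weights are standard values in g/mol.
ATOMIC_WEIGHT = {"H": 1.008, "N": 14.007, "O": 15.999, "S": 32.06, "Fe": 55.845}

def molar_mass(composition):
    """Sum atomic weights over a dict of element -> number of atoms."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in composition.items())

# (NH4)2 Fe (SO4)2 (H2O)6  ->  N2 H20 Fe1 S2 O14
mohrs_salt = {"N": 2, "H": 2 * 4 + 6 * 2, "Fe": 1, "S": 2, "O": 2 * 4 + 6}
print(f"Mohr's salt: {molar_mass(mohrs_salt):.1f} g/mol")  # about 392.1 g/mol
```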
Tutton's salt
[ "Chemistry" ]
586
[ "Sulfates", "Double salts", "Salts" ]
14,423,069
https://en.wikipedia.org/wiki/Netgear%20SC101
The SC101 was a home computer networking storage product manufactured and distributed by Netgear under the Storage Central brand from around 2005 through 2010. The devices shared data stored on one or two internal disks via Ethernet links. Description The two models in the Storage Central line were the Netgear SC101 and SC101T. The original SC101 model could hold one or two disks (sold separately) using Parallel ATA (known as "IDE" at the time) and had a 100 Mbit/sec Ethernet over twisted pair interface. The later Netgear SC101T model could hold one or two Serial ATA disks and had a Gigabit Ethernet interface. The ZSAN technology was licensed in 2005 from Zetera Corporation. Reviews praised the low price and ease of installation, but noted limited software support and passive cooling. At least one reviewer encountered an incompatible disk drive. By January 2010 the Storage Central series was replaced by Netgear storage products using the ReadyNAS name. Software The SC101 provided a block-level storage area network (SAN) interface, as opposed to file-level network-attached storage (NAS). Thus, like any SAN device, specific drivers and software must be installed on any client PC wishing to access the device. Only the Microsoft Windows family of operating systems were supported. Linux drivers There was discussion of a driver for Linux in 2008. An open source driver for Linux on Google Code used the network block device technology, but because this is a block level device, the OS is responsible for creating a filesystem. Consequently, a filesystem created by Linux will not be compatible with one created by Windows. However, a 2006 post on kerneltrap.org suggested it may be possible to use NTFS-3g on Linux. If possible, this would allow access from both Windows and Linux machines, at the expense of losing features that the proprietary file system offers, such as sharing the device access across multiple machines, as well as mirroring support. References Further reading External links Netgear SC101 Support Page OpenWrtDocs hardware internal information on the Netgear SC101 OpenWrt Wiki Table of Hardware: Netgear SC101 SC101 Home servers Storage area networks
Netgear SC101
[ "Technology" ]
460
[ "Netgear", "Wireless networking" ]
14,423,377
https://en.wikipedia.org/wiki/Binary%20cyclic%20group
In mathematics, the binary cyclic group of the n-gon is the cyclic group of order 2n, C2n, thought of as an extension of the cyclic group Cn of order n by a cyclic group of order 2. Coxeter writes the binary cyclic group with angle-brackets, ⟨n⟩, and the index 2 subgroup as (n) or [n]+. It is the binary polyhedral group corresponding to the cyclic group. In terms of binary polyhedral groups, the binary cyclic group is the preimage of the cyclic group of rotations (Cn < SO(3)) under the 2:1 covering homomorphism Spin(3) → SO(3) of the special orthogonal group by the spin group. As a subgroup of the spin group, the binary cyclic group can be described concretely as a discrete subgroup of the unit quaternions, under the isomorphism Spin(3) ≅ Sp(1), where Sp(1) is the multiplicative group of unit quaternions. (For a description of this homomorphism see the article on quaternions and spatial rotations.) Presentation The binary cyclic group can be defined as the set of 2nth roots of unity—that is, the set {e^(iπk/n) : k = 0, 1, ..., 2n − 1}—using multiplication as the group operation. See also binary dihedral group, ⟨2,2,n⟩, order 4n binary tetrahedral group, ⟨2,3,3⟩, order 24 binary octahedral group, ⟨2,3,4⟩, order 48 binary icosahedral group, ⟨2,3,5⟩, order 120 References Cyclic
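To make the presentation as 2nth roots of unity concrete, the short Python sketch below builds these roots as complex numbers for a chosen n and checks numerically that they close under multiplication and form a group of order 2n with an index-2 rotation subgroup. This is only an illustrative check of the abstract description, working in the complex plane (which embeds in the unit quaternions) rather than with quaternions themselves.

```python
import cmath

def binary_cyclic_elements(n):
    """Return the 2n-th roots of unity, the elements of the order-2n cyclic group."""
    return [cmath.exp(1j * cmath.pi * k / n) for k in range(2 * n)]

def close_enough(z, w, tol=1e-9):
    return abs(z - w) < tol

n = 4
elements = binary_cyclic_elements(n)
assert len(elements) == 2 * n  # the group has order 2n

# Closure: the product of any two elements is again (numerically) in the set.
for z in elements:
    for w in elements:
        assert any(close_enough(z * w, u) for u in elements)

# The index-2 subgroup of n-th roots of unity corresponds to the rotations of the n-gon.
rotations = elements[::2]
assert len(rotations) == n
print(f"order {len(elements)} group with an index-2 rotation subgroup of order {len(rotations)}")
```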
Binary cyclic group
[ "Physics" ]
303
[ "Binary polyhedral groups", "Symmetry", "Rotational symmetry" ]
14,423,438
https://en.wikipedia.org/wiki/Povl%20Ahm
Povl Ahm (26 September 1926 – 15 May 2005) was a structural engineer and former chairman of Ove Arup & Partners. Life Born in Aarhus, Denmark, Ahm attended the Polyteknisk Læreanstalt in Copenhagen, from where he graduated in 1949. Ahm married Birgit Moller in 1953, with whom he had two sons, Carsten Ahm and Peter Ahm. He was a keen sportsman, and a good footballer. He played for the London amateur team Corinthian-Casuals and played in the 1956 Amateur Cup Final at Wembley Stadium. He died of cancer on 15 May 2005. Career He joined the firm Ove Arup and Partners in London in 1952, where he worked on Coventry Cathedral with Basil Spence. In his own words: "It was an architectural concept showing clearly the ecclesiastical functions but without any clear definition of structural concept, for so far no engineer had been involved in the design." Ahm was given great responsibility on this project, working directly with Ove Arup. He also worked on early conceptual design schemes for the Sydney Opera House, and worked on other projects, including Smithfield Market, London and Centre Pompidou, Paris – some of Ove Arup & Partners' most prestigious projects. The architect of Sydney Opera House, Jørn Utzon, later went on to design a house for Ahm in Hertfordshire - a project which avoided the many problems of Sydney Opera House. In 1957 Ahm was made an associate partner of Ove Arup & Partners, and in 1965 he was made a full partner, becoming a director of the firm after its ownership was rearranged in 1977 (the firm was now owned in trust for the staff). By winning the competition to design the Gateshead Viaduct in 1965, Ahm started the firm's new transport group, specialising in bridges. From 1989 to 1992 he was chairman of the firm. He was made a Fellow of the Royal Academy of Engineering in 1981. Ahm was an active member of the Institution of Civil Engineers, acting as a Council Member twice, and becoming Vice Chairman of Registered Engineers for Disaster Relief from 1989 to 1993. From 1992 to 1996 he was chairman of the Association of Consulting Engineers. Notable projects Coventry Cathedral, St Catherine's College, Oxford, 1960 University of Sussex, 1962 44 West Common Way (Ahm House), Harpenden, Hertfordshire, 1963 Gateshead Viaduct, 1965 Centre Pompidou, 1974 British Embassy in Rome, 1975 Danish Embassy in London, 1978 Awards Ahm was awarded the ICE's first gold medal in 1993; the same year he received a CBE for services to engineering. He received an honorary doctorate from University of Warwick in 1994. References Danish civil engineers Corinthian-Casuals F.C. players Structural engineers 1926 births 2005 deaths Fellows of the Royal Academy of Engineering Men's association football players not categorized by position Danish men's footballers Footballers from Aarhus 20th-century Danish engineers 20th-century Danish sportsmen Expatriate men's footballers in England
Povl Ahm
[ "Engineering" ]
627
[ "Structural engineering", "Structural engineers" ]
14,423,824
https://en.wikipedia.org/wiki/Chemical%20tests%20in%20mushroom%20identification
Chemical tests in mushroom identification are methods that aid in determining the variety of some fungi. The most useful tests are Melzer's reagent and potassium hydroxide. Ammonia Household ammonia can be used. A couple of drops are placed on the flesh. For example, Boletus spadiceus gives a fleeting blue to blue-green reaction. Iron salts Iron salts are used commonly in Russula and Bolete identification. It is best to dissolve the salts in water (typically a 10% solution) and then apply to the flesh, but it is sometimes possible to apply the dry salts directly to see a color change. For example, the white flesh of Boletus chrysenteron stains lemon-yellow or olive. Three results are expected with the iron salts tests: no change indicates a negative reaction; a color change to olive, green or blackish green; or a color change to reddish-pink. Meixner test for amatoxins The Meixner test (also known as the Wieland test) uses concentrated hydrochloric acid and newspaper to test for the deadly amatoxins found in some species of Amanita, Lepiota, and Galerina. The test yields false positives for some compounds, such as psilocin. Melzer's reagent Melzer's reagent can be used to test whether spores are amyloid, nonamyloid, or dextrinoid. Spores that stain bluish-gray to bluish-black are amyloid Spores that stain brown to reddish-brown are dextrinoid This test is normally performed on white spored mushrooms. If the spores are not light colored, a change will not be readily apparent. It is easiest to see the color change under a microscope, but it is possible to see it with the naked eye with a good spore print. Paradimethylaminobenzaldehyde In the genus Lyophyllum the lamellae usually turn blue with the application of para-Dimethylaminobenzaldehyde (PDAB or pDAB). Phenol A 2–3% aqueous solution of phenol gives a color change in some species when applied to the cap or stem. Potassium hydroxide A 3–10% solution of potassium hydroxide (KOH) gives a color change in some species of mushrooms: In Agaricus, some species such as A. xanthodermus turn yellow with KOH, many have no reaction, and A. subrutilescens turns green. Distinctive change occurs for some species of Cortinarius and Boletes Schaeffer reaction Developed by Julius Schäffer to help with the identification of Agaricus species. A positive reaction of Schaeffer's test, which uses the reaction of aniline and nitric acid on the surface of the mushroom, is indicated by an orange to red color; it is characteristic of species in the section Flavescentes. The compounds responsible for the reaction were named schaefferal A and B to honor Schäffer. Two intersecting lines are drawn on the surface of the cap, the first with aniline or aniline water, the second with an aqueous solution of 65% nitric acid. The test is considered positive when a bright orange color forms where the lines cross. Agaricus placomyces and Agaricus xanthodermus produce false negative reactions. Sometimes referred to as "Schaeffer's reaction", "Schaeffer's cross reaction" or "Schaeffer's test". Aniline + acid(s) Kerrigan's 2016 Agaricus of North America P45: (Referring to Schaffer's reaction) "In fact I recommend switching to the following modified test. Frank (1988) developed an alternative formulation in which aniline oil is combined with glacial acetic acid (GAA, essentially distilled vinegar) in a 50:50 solution. GAA is a much safer, less reactive acid. This single combined reagent is relatively stable over time. A single spot or line applied to the pileus (or other surface). 
In my experience the newer formulation works as well as Schaffer's while being safer and more convenient." Sulfo-vanillin Made from sulfuric acid (H2SO4) and vanillin (vanilla). Used in Russula and Panaeolus identification. References Arora, David "Mushrooms Demystified" 2nd Edition, Ten Speed Press, Berkeley, 1986 Jordan, Michael "The Encyclopedia of Fungi of Britain and Europe" Frances Lincoln 2004 Kuo, Michael "100 Edible Mushrooms", University of Michigan Press, Ann Arbor 2007 Largent, David L., Baroni, Timothy J. "How to Identify Mushrooms to Genus VI: Modern Genera" Mad River Press 1988 External links MushroomExpert.com Mycology Mushroom identification
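The reagent reactions described above are essentially lookup rules (reagent plus observed colour change implies an interpretation), so they can be encoded as a small table. The Python sketch below does this for the three iron-salt outcomes mentioned earlier, purely as an illustration of how a field key might organise them; the wording of the interpretations follows the text and is not a substitute for a proper identification guide.

```python
# Illustrative lookup of the three iron-salt (FeSO4) outcomes described above.
IRON_SALT_REACTIONS = {
    "no change": "negative reaction",
    "olive/green/blackish green": "positive: greenish reaction",
    "reddish-pink": "positive: reddish-pink reaction",
}

def interpret_iron_salts(observed_colour):
    """Map an observed colour change to the interpretation listed in the text."""
    return IRON_SALT_REACTIONS.get(observed_colour, "unlisted result - re-test or consult a key")

print(interpret_iron_salts("reddish-pink"))
print(interpret_iron_salts("bright blue"))
```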
Chemical tests in mushroom identification
[ "Chemistry", "Biology" ]
1,012
[ "Mycology", "Chemical tests" ]
14,424,101
https://en.wikipedia.org/wiki/Caterpillar%20345C%20L
The Caterpillar 345C L is a large hydraulic excavator manufactured by Caterpillar Inc. The 345C L, with 345 hp (257 kW) of net flywheel power, is classified as a large excavator by Caterpillar. In Caterpillar's naming conventions, the last two digits indicate the excavator's weight in metric tonnes. The 345C L is not named after its horsepower. Rather, it is a coincidence that both use the number 345. Caterpillar currently produces the 300 series, including the 345C L. Specifications Engine Engine Model: Caterpillar C13 ACERT Net Flywheel Power: 345 hp (257 kW) Net Power (ISO 9249): 345 hp (257 kW) Net Power (SAE J1349): 349 hp (257 kW) Net Power (EEC 80/1269): 345 hp (257 kW) Cylinders: 6 Weights Operating Weight: 99,150 lb (44,970 kg) Operating Weight (long undercarriage): 99,150 lb (44,970 kg) Operating specifications Max Reach at Ground Level: 42.5 ft (13.0 m) Max Digging Depth: 29.3 ft (8.9 m) Max Bucket Capacity: 5 yd³ (3.8 m³) Nominal bucket weight: 3,880 lb (1,760 kg) Bucket digging force (Normal): 39,300 lbf (175 kN) In the media "Episode 678: Auction Fever", Planet Money, January 22, 2016. References Caterpillar Inc. vehicles Tracked vehicles Excavators
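A quick way to sanity-check the paired imperial/metric figures in the specification list is to convert one into the other. The Python sketch below does this for the flywheel power and the operating weight using standard conversion factors; nothing here comes from Caterpillar documentation.

```python
# Cross-check the quoted spec pairs with standard conversion factors.
HP_TO_KW = 0.7457      # 1 mechanical horsepower ≈ 0.7457 kW
LB_TO_KG = 0.45359237  # 1 pound = 0.45359237 kg

print(f"345 hp ≈ {345 * HP_TO_KW:.0f} kW")        # ≈ 257 kW, matching the listed figure
print(f"99,150 lb ≈ {99150 * LB_TO_KG:,.0f} kg")  # ≈ 44,974 kg, i.e. about 45 metric tonnes
# The ~45 t operating weight is consistent with the "45" in the model name.
```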
Caterpillar 345C L
[ "Engineering" ]
338
[ "Engineering vehicles", "Caterpillar Inc. vehicles" ]
14,424,151
https://en.wikipedia.org/wiki/Book%20of%20Negroes
The Book of Negroes is a document created by Brigadier General Samuel Birch, under the direction of Sir Guy Carleton, that records names and descriptions of 3,000 Black Loyalists, enslaved Africans who escaped to the British lines during the American Revolution and were evacuated to points in Nova Scotia as free people of colour. Background The first African person in Nova Scotia arrived with the founding of Port Royal in 1605. African people were then brought as slaves to Nova Scotia during the founding of Louisbourg and Halifax. The first major migration of African people to Nova Scotia happened during the American Revolution. Enslaved Africans in America who escaped to the British during the American Revolutionary War formed the first large settlement of Black Nova Scotians and Black Canadians. Other Black Loyalists were transported to settlements in several islands in the West Indies and some to London. Recorded in 1783, this 150-page document is the only one to have recorded Black Canadians in such a large and detailed scope. Contents The document contains records on 3,000 Africans; the former slaves recorded in the Book of Negroes were evacuated to British North America, where they were settled in the newly established Birchtown and other places in the colony. Under the Treaty of Paris (1783), the United States argued for the return of all property, including slaves. The British refused to return the slaves, to whom they had promised freedom during the war for joining their cause. The detailed records were created to document the freed people whom the British resettled in Nova Scotia, along with other Loyalists. The book was assembled by Samuel Birch, the namesake of Birchtown, Nova Scotia, under the direction of Sir Guy Carleton. Some freedmen later migrated from Nova Scotia to Sierra Leone, where they formed the original settlers of Freetown, under the auspices of the Sierra Leone Company. They are among the ancestors of the Sierra Leone Creole ethnic group. Notable people recorded in the Book of Negroes include Boston King, Henry Washington, Moses Wilkinson and Cato Perkins. As the Book of Negroes was recorded separately by American and British officers, there are two versions of the document. The British version is held in The National Archives in Kew, London; the American version is held by the National Archives and Records Administration in Washington, D.C. It was published under the title The Black Loyalist Directory: African Americans in Exile After the American Revolution (1996), edited by Graham Russell Hodges, Susan Hawkes Cook, and Alan Edward Brown. Representation in other media The Canadian novelist Lawrence Hill wrote The Book of Negroes (2007, published in the United States as Someone Knows My Name), a novel inspired by the roughly 3,000 formerly enslaved Black Loyalists recorded in the historical document. Many of them had been promised free land by the British and were resettled at Birchtown, the Nova Scotia town named after Brigadier General Samuel Birch, who compiled the Book of Negroes. Conditions in Nova Scotia were harsh: during the Shelburne riots of 1784, disbanded white Loyalist soldiers attacked Black settlers over several days, resentful that land and work were going to people willing to work for lower pay. In 1792 about 1,200 Black Loyalists, many of them Birchtown residents, chose to emigrate to Sierra Leone in West Africa, where Lieutenant John Clarkson led the expedition that established Freetown as a settlement for freed people. Hill sets his novel against this history.
He features Aminata Diallo, a young African woman captured as a child; she is literate and acts as a scribe to record the information about the former slaves. The book won the top 2008 Commonwealth Writers' Prize. Canadian director Clement Virgo adapted the book into a six-hour television mini-series of the same title. The series premiered on CBC in Canada on 7 January 2015 and on BET in the United States on 16 February 2015 and starred Aunjanue Ellis, Lyriq Bent, Cuba Gooding Jr. and Louis Gossett Jr. See also Black Nova Scotians Rough Crossings (subtitle: Britain, the Slaves and the American Revolution), a history book and television series by Simon Schama. Notes External links "The Book of Negroes", African Nova Scotians: in the Age of Slavery and Abolition, Nova Scotia Archives "Book of Negroes", Remembering Black Loyalists, Black Communities in Nova Scotia, 2001, Noval Scotia Museum Cassandra Pybus, Epic Journeys of Freedom: Runaway Slaves of the American Revolution and Their Global Quest for Liberty (Boston: Beacon Press, 2006). Black Loyalists: Our History, Our People, Canadian Digital Collections, website includes link to Book of Negroes African Nova Scotians in the Age of Slavery and Abolition (Digitized version of the British copy) Inspection Roll of Negroes Book No. 2 (Digitized version of the American copy.) Carleton Papers – Book of Negroes, 1783 (Library and Archives Canada). Searchable database of the Book of Negroes. Hodges, Graham R. The Black Loyalist Directory: African Americans in Exile After the American Revolution. New York: Garland Pub. in association with the New England Historic Genealogical Society, 1996. Print version of the American copy. 1783 documents African-American documents African-American slave records African-American genealogy Black Loyalists History of Black people in Canada Sierra Leone Creole history books Krio genealogy Collection of the National Archives (United Kingdom) Fugitive American slaves American expatriates in Canada
Book of Negroes
[ "Biology" ]
1,075
[ "Phylogenetics", "Genealogy" ]
14,424,426
https://en.wikipedia.org/wiki/Cloud%20iridescence
Cloud iridescence or irisation is a colorful optical phenomenon that occurs in a cloud and appears in the general proximity of the Sun or Moon. The colors resemble those seen in soap bubbles and oil on a water surface. It is a type of photometeor. This fairly common phenomenon is most often observed in altocumulus, cirrocumulus, lenticular, and cirrus clouds. They sometimes appear as bands parallel to the edge of the clouds. Iridescence is also seen in the much rarer polar stratospheric clouds, also called nacreous clouds. The colors are usually pastel, but can be very vivid or mingled together, sometimes similar to mother-of-pearl. When appearing near the Sun, the effect can be difficult to spot as it is drowned in the Sun's glare. This may be overcome by shielding the sunlight with one's hand or hiding it behind a tree or building. Other aids are dark glasses, or observing the sky reflected in a convex mirror or in a pool of water. Etymology Irisations are named after the Greek goddess Iris, goddess of rainbows and messenger of Zeus and Hera to the mortals below. Mechanism Iridescent clouds are a diffraction phenomenon caused by small water droplets or small ice crystals individually scattering light. Larger ice crystals do not produce iridescence, but can cause halos, a different phenomenon. Irisation is caused by very uniform water droplets diffracting light (within 10 degrees from the Sun) and by first order interference effects (beyond about 10 degrees from the Sun). It can extend up to 40 degrees from the Sun. If parts of clouds contain small water droplets or ice crystals of similar size, their cumulative effect is seen as colors. The cloud must be optically thin, so that most rays encounter only a single droplet. Iridescence is therefore mostly seen at cloud edges or in semi-transparent clouds, while newly forming clouds produce the brightest and most colorful iridescence. When the particles in a thin cloud are very similar in size over a large extent, the iridescence takes on the structured form of a corona, a bright circular disk around the Sun or Moon surrounded by one or more colored rings. Gallery See also Polar stratospheric cloud Circumhorizontal arc Noctilucent cloud References External links Iridescent cloud gallery – Atmospheric Optics site On the Cause of Iridescence in Clouds – Scientific American Supplement Cloud types Atmospheric optical phenomena
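The dependence of the colours on droplet size can be illustrated with the circular-aperture diffraction approximation commonly used for coronae: for light of wavelength λ diffracted by droplets of diameter d, the first minimum lies at roughly sin θ ≈ 1.22 λ/d. The Python sketch below uses that textbook formula (it is not taken from the article above) to show how small, uniform droplets spread different colours over visibly different angles.

```python
import math

def first_minimum_deg(wavelength_nm, droplet_diameter_um):
    """Angular radius (degrees) of the first diffraction minimum, sin(theta) ~ 1.22*lambda/d."""
    wavelength_m = wavelength_nm * 1e-9
    diameter_m = droplet_diameter_um * 1e-6
    return math.degrees(math.asin(1.22 * wavelength_m / diameter_m))

for d_um in (10, 20):  # typical small cloud-droplet diameters, in micrometres
    blue = first_minimum_deg(450, d_um)
    red = first_minimum_deg(650, d_um)
    print(f"d = {d_um:2d} um: blue ring at ~{blue:.2f} deg, red ring at ~{red:.2f} deg")
# Smaller droplets throw the rings to larger angles, and the blue/red separation
# is what produces the visible colour bands near the cloud edge.
```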
Cloud iridescence
[ "Physics" ]
509
[ "Optical phenomena", "Physical phenomena", "Atmospheric optical phenomena", "Earth phenomena" ]
14,425,296
https://en.wikipedia.org/wiki/Hume-Rothery%20rules
Hume-Rothery rules, named after William Hume-Rothery, are a set of basic rules that describe the conditions under which an element could dissolve in a metal, forming a solid solution. There are two sets of rules; one refers to substitutional solid solutions, and the other refers to interstitial solid solutions. Substitutional solid solution rules For substitutional solid solutions, the Hume-Rothery rules are as follows: The atomic radius of the solute and solvent atoms must differ by no more than 15%: |r(solute) − r(solvent)| / r(solvent) × 100% ≤ 15%. The crystal structures of solute and solvent must be similar. Complete solubility occurs when the solvent and solute have the same valency. A metal is more likely to dissolve a metal of higher valency than vice versa. The solute and solvent should have similar electronegativity. If the electronegativity difference is too great, the metals tend to form intermetallic compounds instead of solid solutions. Interstitial solid solution rules For interstitial solid solutions, the Hume-Rothery rules are: Solute atoms should have a radius smaller than 59% of the radius of the solvent atoms. The solute and solvent should have similar electronegativity. Valency factor: two elements should have the same valence. The greater the difference in valence between solute and solvent atoms, the lower the solubility. Solid solution rules for multicomponent systems Fundamentally, the Hume-Rothery rules are restricted to binary systems that form either substitutional or interstitial solid solutions. However, this restriction limits their use in assessing advanced alloys, which are commonly multicomponent systems. Free energy diagrams (or phase diagrams) offer in-depth knowledge of equilibrium constraints in complex systems. In essence, the Hume-Rothery rules (and Pauling's rules) are based on geometrical constraints. Recent refinements of the Hume-Rothery rules follow the same approach, recasting them as a critical contact criterion that can be described with Voronoi diagrams. This could ease the theoretical generation of phase diagrams for multicomponent systems. For alloys containing transition metal elements there is a difficulty in interpreting the Hume-Rothery electron concentration rule, as the e/a values (number of itinerant electrons per atom) for transition metals have been quite controversial for a long time, and no satisfactory solutions have yet emerged. See also CALPHAD Enthalpy of mixing Gibbs energy Phase diagram References Further reading Eponymous chemical rules Materials science Rules
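Because the substitutional rules are simple threshold tests, they lend themselves to a small screening function. The Python sketch below encodes the four substitutional criteria as stated above; the element data in the example dictionaries are approximate handbook values supplied only for illustration, and the electronegativity cutoff of 0.4 is a common rule of thumb rather than part of Hume-Rothery's original formulation.

```python
def hume_rothery_substitutional(solvent, solute, max_radius_diff=0.15, max_en_diff=0.4):
    """Screen a solvent/solute pair against the substitutional Hume-Rothery rules.

    Each element is a dict with keys: radius (pm), structure, valence, en (electronegativity).
    Returns a dict of individual pass/fail results.
    """
    radius_diff = abs(solute["radius"] - solvent["radius"]) / solvent["radius"]
    return {
        "size factor <= 15%": radius_diff <= max_radius_diff,
        "same crystal structure": solute["structure"] == solvent["structure"],
        "same valence": solute["valence"] == solvent["valence"],
        "similar electronegativity": abs(solute["en"] - solvent["en"]) <= max_en_diff,
    }

# Approximate illustrative data (atomic radius in pm, Pauling electronegativity).
cu = {"radius": 128, "structure": "fcc", "valence": 1, "en": 1.90}
ni = {"radius": 125, "structure": "fcc", "valence": 2, "en": 1.91}

for criterion, ok in hume_rothery_substitutional(cu, ni).items():
    print(f"{criterion}: {'pass' if ok else 'fail'}")
```

Cu–Ni, the textbook example of complete solid solubility, passes the size, structure, and electronegativity tests here; which valence to assign each metal is itself a judgment call, which illustrates why the rules are screening guidelines rather than strict laws.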
Hume-Rothery rules
[ "Physics", "Materials_science", "Engineering" ]
505
[ "Applied and interdisciplinary physics", "Materials science", "nan" ]
156,533
https://en.wikipedia.org/wiki/Chebyshev%27s%20inequality
In probability theory, Chebyshev's inequality (also called the Bienaymé–Chebyshev inequality) provides an upper bound on the probability of deviation of a random variable (with finite variance) from its mean. More specifically, the probability that a random variable deviates from its mean by more than kσ is at most 1/k², where k is any positive constant and σ is the standard deviation (the square root of the variance). The rule is often called Chebyshev's theorem, about the range of standard deviations around the mean, in statistics. The inequality has great utility because it can be applied to any probability distribution in which the mean and variance are defined. For example, it can be used to prove the weak law of large numbers. Its practical usage is similar to the 68–95–99.7 rule, which applies only to normal distributions. Chebyshev's inequality is more general, stating that a minimum of just 75% of values must lie within two standard deviations of the mean and 88.89% within three standard deviations for a broad range of different probability distributions. The term Chebyshev's inequality may also refer to Markov's inequality, especially in the context of analysis. They are closely related, and some authors refer to Markov's inequality as "Chebyshev's First Inequality," and the similar one referred to on this page as "Chebyshev's Second Inequality." Chebyshev's inequality is tight in the sense that for each chosen positive constant, there exists a random variable such that the inequality is in fact an equality. History The theorem is named after Russian mathematician Pafnuty Chebyshev, although it was first formulated by his friend and colleague Irénée-Jules Bienaymé. The theorem was first proved by Bienaymé in 1853 and more generally proved by Chebyshev in 1867. His student Andrey Markov provided another proof in his 1884 Ph.D. thesis. Statement Chebyshev's inequality is usually stated for random variables, but can be generalized to a statement about measure spaces. Probabilistic statement Let X (integrable) be a random variable with finite non-zero variance σ² (and thus finite expected value μ). Then for any real number k > 0, Pr(|X − μ| ≥ kσ) ≤ 1/k². Only the case k > 1 is useful. When k ≤ 1 the right-hand side 1/k² is at least 1 and the inequality is trivial, as all probabilities are ≤ 1. As an example, using k = √2 shows that the probability that values lie outside the interval (μ − √2σ, μ + √2σ) does not exceed 1/2. Equivalently, it implies that the probability of values lying within the interval (i.e. its "coverage") is at least 1/2. Because it can be applied to completely arbitrary distributions provided they have a known finite mean and variance, the inequality generally gives a poor bound compared to what might be deduced if more aspects are known about the distribution involved. Measure-theoretic statement Let (X, Σ, μ) be a measure space, and let f be an extended real-valued measurable function defined on X. Then for any real number t > 0 and 0 < p < ∞, μ({x ∈ X : |f(x)| ≥ t}) ≤ (1/t^p) ∫ |f|^p dμ. More generally, if g is an extended real-valued measurable function, nonnegative and nondecreasing, with g(t) > 0, then: μ({x ∈ X : f(x) ≥ t}) ≤ (1/g(t)) ∫ (g ∘ f) dμ. This statement follows from the Markov inequality, μ({x ∈ X : |h(x)| ≥ s}) ≤ (1/s) ∫ |h| dμ, with h = g ∘ f and s = g(t), since in this case {x ∈ X : f(x) ≥ t} ⊆ {x ∈ X : g(f(x)) ≥ g(t)}. The previous statement then follows by defining g(s) as s^p for s ≥ 0 and 0 otherwise, and applying it to |f|. Example Suppose we randomly select a journal article from a source with an average of 1000 words per article, with a standard deviation of 200 words. We can then infer that the probability that it has between 600 and 1400 words (i.e.
within two standard deviations of the mean) must be at least 75%, because there is no more than a 1/k² = 1/4 chance of being outside that range, by Chebyshev's inequality. But if we additionally know that the distribution is normal, we can say there is a 75% chance the word count is between 770 and 1230 (which is an even tighter bound). Sharpness of bounds As shown in the example above, the theorem typically provides rather loose bounds. However, these bounds cannot in general (remaining true for arbitrary distributions) be improved upon. The bounds are sharp for the following example: for any k ≥ 1, let X take the value −1 with probability 1/(2k²), the value 0 with probability 1 − 1/k², and the value +1 with probability 1/(2k²). For this distribution, the mean μ = 0 and the variance σ² = 1/(2k²) + 0 + 1/(2k²) = 1/k², so the standard deviation σ = 1/k and Pr(|X − μ| ≥ kσ) = Pr(|X| ≥ 1) = 1/k², attaining the bound; Chebyshev's inequality is an equality for precisely those distributions which are affine transformations of this example. Proof Markov's inequality states that for any real-valued random variable Y and any positive number a, we have Pr(|Y| ≥ a) ≤ E(|Y|)/a. One way to prove Chebyshev's inequality is to apply Markov's inequality to the random variable Y = (X − μ)² with a = (kσ)²: Pr(|X − μ| ≥ kσ) = Pr((X − μ)² ≥ k²σ²) ≤ E[(X − μ)²]/(k²σ²) = σ²/(k²σ²) = 1/k². It can also be proved directly using conditional expectation: σ² = E[(X − μ)²] = E[(X − μ)² | |X − μ| ≥ kσ] Pr(|X − μ| ≥ kσ) + E[(X − μ)² | |X − μ| < kσ] Pr(|X − μ| < kσ) ≥ (kσ)² Pr(|X − μ| ≥ kσ) + 0. Chebyshev's inequality then follows by dividing by k²σ². This proof also shows why the bounds are quite loose in typical cases: the conditional expectation on the event where |X − μ| < kσ is thrown away, and the lower bound of k²σ² on the event |X − μ| ≥ kσ can be quite poor. Chebyshev's inequality can also be obtained directly from a simple comparison of areas, starting from the representation of an expected value as the difference of two improper Riemann integrals (last formula in the definition of expected value for arbitrary real-valued random variables). Extensions Several extensions of Chebyshev's inequality have been developed. Selberg's inequality Selberg derived a generalization to arbitrary intervals. Suppose X is a random variable with mean μ and variance σ². Selberg's inequality states that if , When , this reduces to Chebyshev's inequality. These are known to be the best possible bounds. Finite-dimensional vector Chebyshev's inequality naturally extends to the multivariate setting, where one has n random variables with mean and variance σi². Then the following inequality holds. This is known as the Birnbaum–Raymond–Zuckerman inequality after the authors who proved it for two dimensions. This result can be rewritten in terms of vectors with mean , standard deviation σ = (σ1, σ2, ...), in the Euclidean norm . One can also get a similar infinite-dimensional Chebyshev's inequality. A second related inequality has also been derived by Chen. Let be the dimension of the stochastic vector and let be the mean of . Let be the covariance matrix and . Then where YT is the transpose of . The inequality can be written in terms of the Mahalanobis distance as where the Mahalanobis distance based on S is defined by Navarro proved that these bounds are sharp, that is, they are the best possible bounds for those regions when we just know the mean and the covariance matrix of X. Stellato et al. showed that this multivariate version of the Chebyshev inequality can be easily derived analytically as a special case of Vandenberghe et al. where the bound is computed by solving a semidefinite program (SDP). Known correlation If the variables are independent this inequality can be sharpened. Berge derived an inequality for two correlated variables . Let be the correlation coefficient between X1 and X2 and let σi² be the variance of .
Then This result can be sharpened to having different bounds for the two random variables and having asymmetric bounds, as in Selberg's inequality. Olkin and Pratt derived an inequality for correlated variables. where the sum is taken over the n variables and where is the correlation between and . Olkin and Pratt's inequality was subsequently generalised by Godwin. Higher moments Mitzenmacher and Upfal note that by applying Markov's inequality to the nonnegative variable , one can get a family of tail bounds For n = 2 we obtain Chebyshev's inequality. For k ≥ 1, n > 4 and assuming that the nth moment exists, this bound is tighter than Chebyshev's inequality. This strategy, called the method of moments, is often used to prove tail bounds. Exponential moment A related inequality sometimes known as the exponential Chebyshev's inequality is the inequality Let be the cumulant generating function, Taking the Legendre–Fenchel transformation of and using the exponential Chebyshev's inequality we have This inequality may be used to obtain exponential inequalities for unbounded variables. Bounded variables If P(x) has finite support based on the interval , let where |x| is the absolute value of . If the mean of P(x) is zero then for all The second of these inequalities with is the Chebyshev bound. The first provides a lower bound for the value of P(x). Finite samples Univariate case Saw et al extended Chebyshev's inequality to cases where the population mean and variance are not known and may not exist, but the sample mean and sample standard deviation from N samples are to be employed to bound the expected value of a new drawing from the same distribution. The following simpler version of this inequality is given by Kabán. where X is a random variable which we have sampled N times, m is the sample mean, k is a constant and s is the sample standard deviation. This inequality holds even when the population moments do not exist, and when the sample is only weakly exchangeably distributed; this criterion is met for randomised sampling. A table of values for the Saw–Yang–Mo inequality for finite sample sizes (N < 100) has been determined by Konijn. The table allows the calculation of various confidence intervals for the mean, based on multiples, C, of the standard error of the mean as calculated from the sample. For example, Konijn shows that for N = 59, the 95 percent confidence interval for the mean m is where (this is 2.28 times larger than the value found on the assumption of normality showing the loss on precision resulting from ignorance of the precise nature of the distribution). An equivalent inequality can be derived in terms of the sample mean instead, A table of values for the Saw–Yang–Mo inequality for finite sample sizes (N < 100) has been determined by Konijn. For fixed N and large m the Saw–Yang–Mo inequality is approximately Beasley et al have suggested a modification of this inequality In empirical testing this modification is conservative but appears to have low statistical power. Its theoretical basis currently remains unexplored. Dependence on sample size The bounds these inequalities give on a finite sample are less tight than those the Chebyshev inequality gives for a distribution. To illustrate this let the sample size N = 100 and let k = 3. Chebyshev's inequality states that at most approximately 11.11% of the distribution will lie at least three standard deviations away from the mean. 
Kabán's version of the inequality for a finite sample states that at most approximately 12.05% of the sample lies outside these limits. The dependence of the confidence intervals on sample size is further illustrated below. For N = 10, the 95% confidence interval is approximately ±13.5789 standard deviations. For N = 100 the 95% confidence interval is approximately ±4.9595 standard deviations; the 99% confidence interval is approximately ±140.0 standard deviations. For N = 500 the 95% confidence interval is approximately ±4.5574 standard deviations; the 99% confidence interval is approximately ±11.1620 standard deviations. For N = 1000 the 95% and 99% confidence intervals are approximately ±4.5141 and approximately ±10.5330 standard deviations respectively. The Chebyshev inequality for the distribution gives 95% and 99% confidence intervals of approximately ±4.472 standard deviations and ±10 standard deviations respectively. Samuelson's inequality Although Chebyshev's inequality is the best possible bound for an arbitrary distribution, this is not necessarily true for finite samples. Samuelson's inequality states that all values of a sample must lie within sample standard deviations of the mean. By comparison, Chebyshev's inequality states that all but a 1/N fraction of the sample will lie within standard deviations of the mean. Since there are N samples, this means that no samples will lie outside standard deviations of the mean, which is worse than Samuelson's inequality. However, the benefit of Chebyshev's inequality is that it can be applied more generally to get confidence bounds for ranges of standard deviations that do not depend on the number of samples. Semivariances An alternative method of obtaining sharper bounds is through the use of semivariances (partial variances). The upper (σ+2) and lower (σ−2) semivariances are defined as where m is the arithmetic mean of the sample and n is the number of elements in the sample. The variance of the sample is the sum of the two semivariances: In terms of the lower semivariance Chebyshev's inequality can be written Putting Chebyshev's inequality can now be written A similar result can also be derived for the upper semivariance. If we put Chebyshev's inequality can be written Because σu2 ≤ σ2, use of the semivariance sharpens the original inequality. If the distribution is known to be symmetric, then and This result agrees with that derived using standardised variables. Note The inequality with the lower semivariance has been found to be of use in estimating downside risk in finance and agriculture. Multivariate case Stellato et al. simplified the notation and extended the empirical Chebyshev inequality from Saw et al. to the multivariate case. Let be a random variable and let . We draw iid samples of denoted as . Based on the first samples, we define the empirical mean as and the unbiased empirical covariance as . If is nonsingular, then for all then Remarks In the univariate case, i.e. , this inequality corresponds to the one from Saw et al. Moreover, the right-hand side can be simplified by upper bounding the floor function by its argument As , the right-hand side tends to which corresponds to the multivariate Chebyshev inequality over ellipsoids shaped according to and centered in . Sharpened bounds Chebyshev's inequality is important because of its applicability to any distribution. 
As a result of its generality it may not (and usually does not) provide as sharp a bound as alternative methods that can be used if the distribution of the random variable is known. To improve the sharpness of the bounds provided by Chebyshev's inequality a number of methods have been developed; for a review see eg. Cantelli's inequality Cantelli's inequality due to Francesco Paolo Cantelli states that for a real random variable (X) with mean (μ) and variance (σ2) where a ≥ 0. This inequality can be used to prove a one tailed variant of Chebyshev's inequality with k > 0 The bound on the one tailed variant is known to be sharp. To see this consider the random variable X that takes the values with probability with probability Then E(X) = 0 and E(X2) = σ2 and P(X < 1) = 1 / (1 + σ2). An application: distance between the mean and the median The one-sided variant can be used to prove the proposition that for probability distributions having an expected value and a median, the mean and the median can never differ from each other by more than one standard deviation. To express this in symbols let μ, ν, and σ be respectively the mean, the median, and the standard deviation. Then There is no need to assume that the variance is finite because this inequality is trivially true if the variance is infinite. The proof is as follows. Setting k = 1 in the statement for the one-sided inequality gives: Changing the sign of X and of μ, we get As the median is by definition any real number m that satisfies the inequalities this implies that the median lies within one standard deviation of the mean. A proof using Jensen's inequality also exists. Bhattacharyya's inequality Bhattacharyya extended Cantelli's inequality using the third and fourth moments of the distribution. Let and be the variance. Let and . If then The necessity of may require to be reasonably large. In the case this simplifies to Since for close to 1, this bound improves slightly over Cantelli's bound as . wins a factor 2 over Chebyshev's inequality. Gauss's inequality In 1823 Gauss showed that for a distribution with a unique mode at zero, Vysochanskij–Petunin inequality The Vysochanskij–Petunin inequality generalizes Gauss's inequality, which only holds for deviation from the mode of a unimodal distribution, to deviation from the mean, or more generally, any center. If X is a unimodal distribution with mean μ and variance σ2, then the inequality states that For symmetrical unimodal distributions, the median and the mode are equal, so both the Vysochanskij–Petunin inequality and Gauss's inequality apply to the same center. Further, for symmetrical distributions, one-sided bounds can be obtained by noticing that The additional fraction of present in these tail bounds lead to better confidence intervals than Chebyshev's inequality. For example, for any symmetrical unimodal distribution, the Vysochanskij–Petunin inequality states that 4/(9 × 3^2) = 4/81 ≈ 4.9% of the distribution lies outside 3 standard deviations of the mode. Bounds for specific distributions DasGupta has shown that if the distribution is known to be normal From DasGupta's inequality it follows that for a normal distribution at least 95% lies within approximately 2.582 standard deviations of the mean. This is less sharp than the true figure (approximately 1.96 standard deviations of the mean). DasGupta has determined a set of best possible bounds for a normal distribution for this inequality. Steliga and Szynal have extended these bounds to the Pareto distribution. 
Grechuk et al. developed a general method for deriving the best possible bounds in Chebyshev's inequality for any family of distributions, and any deviation risk measure in place of standard deviation. In particular, they derived Chebyshev inequality for distributions with log-concave densities. Related inequalities Several other related inequalities are also known. Paley–Zygmund inequality The Paley–Zygmund inequality gives a lower bound on tail probabilities, as opposed to Chebyshev's inequality which gives an upper bound. Applying it to the square of a random variable, we get Haldane's transformation One use of Chebyshev's inequality in applications is to create confidence intervals for variates with an unknown distribution. Haldane noted, using an equation derived by Kendall, that if a variate (x) has a zero mean, unit variance and both finite skewness (γ) and kurtosis (κ) then the variate can be converted to a normally distributed standard score (z): This transformation may be useful as an alternative to Chebyshev's inequality or as an adjunct to it for deriving confidence intervals for variates with unknown distributions. While this transformation may be useful for moderately skewed and/or kurtotic distributions, it performs poorly when the distribution is markedly skewed and/or kurtotic. He, Zhang and Zhang's inequality For any collection of non-negative independent random variables with expectation 1 Integral Chebyshev inequality There is a second (less well known) inequality also named after Chebyshev If f, g : [a, b] → R are two monotonic functions of the same monotonicity, then If f and g are of opposite monotonicity, then the above inequality works in the reverse way. This inequality is related to Jensen's inequality, Kantorovich's inequality, the Hermite–Hadamard inequality and Walter's conjecture. Other inequalities There are also a number of other inequalities associated with Chebyshev: Chebyshev's sum inequality Chebyshev–Markov–Stieltjes inequalities Notes The Environmental Protection Agency has suggested best practices for the use of Chebyshev's inequality for estimating confidence intervals. See also Multidimensional Chebyshev's inequality Concentration inequality – a summary of tail-bounds on random variables. Cornish–Fisher expansion Eaton's inequality Kolmogorov's inequality Proof of the weak law of large numbers using Chebyshev's inequality Le Cam's theorem Paley–Zygmund inequality Vysochanskiï–Petunin inequality — a stronger result applicable to unimodal probability distributions Lenglart's inequality References Further reading A. Papoulis (1991), Probability, Random Variables, and Stochastic Processes, 3rd ed. McGraw–Hill. . pp. 113–114. G. Grimmett and D. Stirzaker (2001), Probability and Random Processes, 3rd ed. Oxford. . Section 7.3. External links Formal proof in the Mizar system. Articles containing proofs Probabilistic inequalities Statistical inequalities
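A quick numerical experiment makes the looseness of the bound discussed above tangible: for a well-behaved distribution the observed tail fraction is far below 1/k², while for the extremal three-point distribution it matches the bound. The Python sketch below runs such a comparison with the standard library only; the sample sizes and the choice of an exponential distribution are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(0)

def tail_fraction(samples, k):
    """Fraction of samples at least k standard deviations from the sample mean."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)
    return sum(abs(x - mu) >= k * sigma for x in samples) / len(samples)

k = 2.0
chebyshev_bound = 1 / k**2  # 0.25

# A smooth distribution: the observed tail is far below the bound.
expo = [random.expovariate(1.0) for _ in range(100_000)]
print(f"exponential: tail = {tail_fraction(expo, k):.4f}  (bound {chebyshev_bound:.4f})")

# The extremal three-point distribution on {-1, 0, +1} that attains the bound.
p = 1 / (2 * k**2)
three_point = random.choices([-1.0, 0.0, 1.0], weights=[p, 1 - 2 * p, p], k=100_000)
print(f"three-point: tail = {tail_fraction(three_point, k):.4f}  (bound {chebyshev_bound:.4f})")
```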
Chebyshev's inequality
[ "Mathematics" ]
4,503
[ "Theorems in statistics", "Statistical inequalities", "Theorems in probability theory", "Probabilistic inequalities", "Inequalities (mathematics)", "Articles containing proofs" ]
156,549
https://en.wikipedia.org/wiki/Audio%20mixing
Audio mixing is the process by which multiple sounds are combined into one or more audio channels. In the process, a source's volume level, frequency content, dynamics, and panoramic position are manipulated or enhanced. This practical, aesthetic, or otherwise creative treatment is done in order to produce a finished version that is appealing to listeners. Audio mixing is practiced for music, film, television and live sound. The process is generally carried out by a mixing engineer operating a mixing console or digital audio workstation. Recorded music Before the introduction of multitrack recording, all the sounds and effects that were to be part of a recording were mixed together at one time during a live performance. If the sound blend was not satisfactory, or if one musician made a mistake, the selection had to be performed over until the desired balance and performance was obtained. However, with the introduction of multitrack recording, the production phase of a modern recording has radically changed into one that generally involves three stages: recording, overdubbing, and mixdown. Film and television During production dialogue recording of actors is done by a person variously known as location sound mixer, production sound or some similar designation. That person is a department head with a crew consisting of a boom operator and sometimes a cable person. Audio mixing for film and television is a process during the post-production stage of a moving image program by which a multitude of recorded sounds are combined. In the editing process, the source's signal level, frequency content, dynamics, and panoramic position are commonly manipulated and effects added. In video production, this is called sweetening. The process takes place on a mixing stage, typically in a studio or purpose-built theater, once the picture elements are edited into a final version. Normally the engineers will mix four main audio elements called stems: speech (dialogue, ADR, voice-overs, etc.), ambience (or atmosphere), sound effects, and music. As multi machine synchronization became available, filmmakers were able to split elements into multiple reels. With the advent of digital workstations and growing complexity, track counts in excess of 100 became common. Dialogue intelligibility Since the 2010s, critics and members of the audience have reported that dialogue in films tends to be increasingly more difficult to understand than in older films, to the point where viewers need to rely on subtitles to understand what is being said. 
Ben Pearson of SlashFilm attributed this to a combination of factors, only some of which can be addressed through audio mixing: Unintelligibility as a stylistic choice by filmmakers, particularly by Christopher Nolan and those influenced by him Soft, under one's breath delivery of lines by actors, a practice particularly popular among younger actors, as opposed to the theatrical clarity of delivery previously used Low priority of sound recording on set, with priority given to the visual aspects of production Increased technological possibilities, including in post-production, no longer compel filmmakers to obtain an optimal recording on set The film crew's familiarity with the dialogue can lead them to overestimate its intelligibility Theaters play films at a lower than recommended volume to avoid excessive loudness complaints from the audience Different standards of compression and volume balance applied by the various streaming platforms Inadequate audio remixing for films played in a home theater setting or on mobile devices, where the audio playback capabilities of the various setups strongly differ from each other and from cinema settings Live sound Live sound mixing is the process of electrically blending together multiple sound sources at a live event using a mixing console. Sounds used include those from instruments, voices, and pre-recorded material. Individual sources may be equalised and routed to effect processors to ultimately be amplified and reproduced via loudspeakers. The live sound engineer balances the various audio sources in a way that best suits the needs of the event. References Further reading Rose, Jay, Producing Great Sound for Film and Video. Focal Press, fourth edition 2014 Book info. Audio engineering Film sound production Film post-production technology
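The core mixdown operations described earlier in the article (setting each source's level and panoramic position, then summing the sources into output channels) can be sketched with a few lines of arithmetic. The Python below is a toy constant-power pan-and-sum of two mono sources into a stereo pair; it is only a conceptual illustration, not the signal flow of any real console or digital audio workstation, and the test signals are made up.

```python
import math

def pan_and_sum(sources, pans, gains):
    """Mix mono sources into stereo using constant-power panning.

    pans: -1.0 (hard left) .. +1.0 (hard right); gains: linear level multipliers.
    """
    length = max(len(s) for s in sources)
    left = [0.0] * length
    right = [0.0] * length
    for source, pan, gain in zip(sources, pans, gains):
        angle = (pan + 1) * math.pi / 4          # maps pan to 0 .. pi/2
        l_gain, r_gain = math.cos(angle), math.sin(angle)
        for i, sample in enumerate(source):
            left[i] += gain * l_gain * sample
            right[i] += gain * r_gain * sample
    return left, right

# Two made-up short "signals": a low tone and a quieter higher tone, panned apart.
tone_a = [math.sin(2 * math.pi * i / 8) for i in range(8)]
tone_b = [0.5 * math.sin(2 * math.pi * i / 4) for i in range(8)]
L, R = pan_and_sum([tone_a, tone_b], pans=[-0.5, +0.5], gains=[1.0, 0.8])
print([round(x, 3) for x in L])
print([round(x, 3) for x in R])
```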
Audio mixing
[ "Engineering" ]
813
[ "Electrical engineering", "Audio engineering" ]
156,700
https://en.wikipedia.org/wiki/Communication%20channel
A communication channel refers either to a physical transmission medium such as a wire, or to a logical connection over a multiplexed medium such as a radio channel in telecommunications and computer networking. A channel is used for information transfer of, for example, a digital bit stream, from one or several senders to one or several receivers. A channel has a certain capacity for transmitting information, often measured by its bandwidth in Hz or its data rate in bits per second. Communicating an information signal across distance requires some form of pathway or medium. These pathways, called communication channels, use two types of media: Transmission line-based telecommunications cable (e.g. twisted-pair, coaxial, and fiber-optic cable) and broadcast (e.g. microwave, satellite, radio, and infrared). In information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a communication channel, which can be sent to (written) and received from (reading) and allows communication of an information signal across time. Examples Examples of communications channels include: A connection between initiating and terminating communication endpoints of a telecommunication circuit. A single path provided by a transmission medium via either physical separation, such as by multipair cable or separation, such as by frequency-division or time-division multiplexing. A path for conveying electrical or electromagnetic signals, usually distinguished from other parallel paths. A data storage device which can communicate a message over time. The portion of a storage medium, such as a track or band, that is accessible to a given reading or writing station or head. A buffer from which messages can be put and got. In a communications system, the physical or logical link that connects a data source to a data sink. A specific radio frequency, pair or band of frequencies, usually named with a letter, number, or codeword, and often allocated by international agreement, for example: Marine VHF radio uses some 88 channels in the VHF band for two-way FM voice communication. Channel 16, for example, is 156.800 MHz. In the US, seven additional channels, WX1 - WX7, are allocated for weather broadcasts. Television channels such as North American TV Channel 2 at 55.25 MHz, Channel 13 at 211.25 MHz. Each channel is 6 MHz wide. This was based on the bandwidth required by analog television signals. Since 2006, television broadcasting has switched to digital modulation (digital television) which uses image compression to transmit a television signal in a much smaller bandwidth, so each of these physical channels has been divided into multiple virtual channels each carrying a DTV channel. Original Wi-Fi uses 13 channels in the ISM bands from 2412 MHz to 2484 MHz in 5 MHz steps. The radio channel between an amateur radio repeater and an amateur radio operator uses two frequencies often 600 kHz (0.6 MHz) apart. For example, a repeater that transmits on 146.94 MHz typically listens for a ham transmitting on 146.34 MHz. All of these communication channels share the property that they transfer information. The information is carried through the channel by a signal. Channel models Mathematical models of the channel can be made to describe how the input (the transmitted signal) is mapped to the output (the received signal). There exist many types and uses of channel models specific to the field of communication. 
In particular, separate models are formulated to describe each layer of a communication system. A channel can be modeled physically by trying to calculate the physical processes which modify the transmitted signal. For example, in wireless communications, the channel can be modeled by calculating the reflection from every object in the environment. A sequence of random numbers might also be added to simulate external interference or electronic noise in the receiver. Statistically, a communication channel is usually modeled as a tuple consisting of an input alphabet, an output alphabet, and for each pair (i, o) of input and output elements, a transition probability p(i, o). Semantically, the transition probability is the probability that the symbol o is received given that i was transmitted over the channel. Statistical and physical modeling can be combined. For example, in wireless communications the channel is often modeled by a random attenuation (known as fading) of the transmitted signal, followed by additive noise. The attenuation term is a simplification of the underlying physical processes and captures the change in signal power over the course of the transmission. The noise in the model captures external interference or electronic noise in the receiver. If the attenuation term is complex it also describes the relative time a signal takes to get through the channel. The statistical properties of the attenuation in the model are determined by previous measurements or physical simulations. Communication channels are also studied in discrete-alphabet modulation schemes. The mathematical model consists of a transition probability that specifies an output distribution for each possible sequence of channel inputs. In information theory, it is common to start with memoryless channels in which the output probability distribution only depends on the current channel input. A channel model may either be digital or analog. Digital channel models In a digital channel model, the transmitted message is modeled as a digital signal at a certain protocol layer. Underlying protocol layers are replaced by a simplified model. The model may reflect channel performance measures such as bit rate, bit errors, delay, delay variation, etc. Examples of digital channel models include: Binary symmetric channel (BSC), a discrete memoryless channel with a certain bit error probability Binary asymmetric channel (BAC), similar to BSC but the probability of a flip from 0 to 1 and vice-versa is unequal Binary bursty bit error channel model, a channel with memory Binary erasure channel (BEC), a discrete channel with a certain bit error detection (erasure) probability Packet erasure channel, where packets are lost with a certain packet loss probability or packet error rate Arbitrarily varying channel (AVC), where the behavior and state of the channel can change randomly Analog channel models In an analog channel model, the transmitted message is modeled as an analog signal. The model can be a linear or non-linear, time-continuous or time-discrete (sampled), memoryless or dynamic (resulting in burst errors), time-invariant or time-variant (also resulting in burst errors), baseband, passband (RF signal model), real-valued or complex-valued signal model. 
The model may reflect the following channel impairments: Noise model, for example Additive white Gaussian noise (AWGN) channel, a linear continuous memoryless model Phase noise model Interference model, for example crosstalk (co-channel interference) and intersymbol interference (ISI) Distortion model, for example a non-linear channel model causing intermodulation distortion (IMD) Frequency response model, including attenuation and phase-shift Group delay model Modelling of underlying physical layer transmission techniques, for example a complex-valued equivalent baseband model of modulation and frequency response Radio frequency propagation model, for example Log-distance path loss model Fading model, for example Rayleigh fading, Ricean fading, log-normal shadow fading and frequency selective (dispersive) fading Doppler shift model, which combined with fading results in a time-variant system Ray tracing models, which attempt to model the signal propagation and distortions for specified transmitter-receiver geometries, terrain types, and antennas Propagation graph, models signal dispersion by representing the radio propagation environment by a graph. Mobility models, which also causes a time-variant system Types Digital (discrete) or analog (continuous) channel Transmission medium, for example a fiber-optic cable Multiplexed channel Computer network virtual channel Simplex communication, duplex communication or half-duplex communication channel Return channel Uplink or downlink (upstream or downstream channel) Broadcast channel, unicast channel or multicast channel Channel performance measures These are examples of commonly used channel capacity and performance measures: Spectral bandwidth in Hertz Symbol rate in baud, symbols/s Digital bandwidth in bit/s measures: gross bit rate (signalling rate), net bit rate (information rate), channel capacity, and maximum throughput Channel utilization Spectral efficiency Signal-to-noise ratio in decibel measures: signal-to-interference ratio, Eb/N0 Bit error rate (BER), packet error rate (PER) Latency in seconds: propagation time, transmission time, round-trip delay, end-to-end delay Packet delay variation Eye pattern Multi-terminal channels, with application to cellular systems In networks, as opposed to point-to-point communication, the communication media can be shared between multiple communication endpoints (terminals). Depending on the type of communication, different terminals can cooperate or interfere with each other. In general, any complex multi-terminal network can be considered as a combination of simplified multi-terminal channels. The following channels are the principal multi-terminal channels first introduced in the field of information theory: A point-to-multipoint channel, also known as broadcasting medium (not to be confused with broadcasting channel): In this channel, a single sender transmits multiple messages to different destination nodes. All wireless channels except directional links can be considered as broadcasting media, but may not always provide broadcasting service. The downlink of a cellular system can be considered as a point-to-multipoint channel, if only one cell is considered and inter-cell co-channel interference is neglected. However, the communication service of a phone call is unicasting. Multiple access channel: In this channel, multiple senders transmit multiple possible different messages over a shared physical medium to one or several destination nodes. 
This requires a channel access scheme, including a media access control (MAC) protocol combined with a multiplexing scheme. This channel model has applications in the uplink of cellular networks. Relay channel: In this channel, one or several intermediate nodes (called relay, repeater or gap filler nodes) cooperate with a sender to send the message to an ultimate destination node. Interference channel: In this channel, two different senders transmit their data to different destination nodes. Hence, the different senders can have a possible crosstalk or co-channel interference on the signal of each other. The inter-cell interference in cellular wireless communications is an example of an interference channel. In spread-spectrum systems like 3G, interference also occurs inside the cell if non-orthogonal codes are used. A unicast channel is a channel that provides a unicast service, i.e. that sends data addressed to one specific user. An established phone call is an example. A broadcast channel is a channel that provides a broadcasting service, i.e. that sends data addressed to all users in the network. Cellular network examples are the paging service as well as the Multimedia Broadcast Multicast Service. A multicast channel is a channel where data is addressed to a group of subscribing users. LTE examples are the physical multicast channel (PMCH) and multicast broadcast single frequency network (MBSFN). References C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal, vol. 27, pp. 379–423 and 623–656, (July and October, 1948) Information theory Telecommunication theory Television terminology
Communication channel
[ "Mathematics", "Technology", "Engineering" ]
2,313
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory" ]
156,706
https://en.wikipedia.org/wiki/Effective%20mass%20%28solid-state%20physics%29
In solid state physics, a particle's effective mass (often denoted ) is the mass that it seems to have when responding to forces, or the mass that it seems to have when interacting with other identical particles in a thermal distribution. One of the results from the band theory of solids is that the movement of particles in a periodic potential, over long distances larger than the lattice spacing, can be very different from their motion in a vacuum. The effective mass is a quantity that is used to simplify band structures by modeling the behavior of a free particle with that mass. For some purposes and some materials, the effective mass can be considered to be a simple constant of a material. In general, however, the value of effective mass depends on the purpose for which it is used, and can vary depending on a number of factors. For electrons or electron holes in a solid, the effective mass is usually stated as a factor multiplying the rest mass of an electron, me (9.11 × 10−31 kg). This factor is usually in the range 0.01 to 10, but can be lower or higher—for example, reaching 1,000 in exotic heavy fermion materials, or anywhere from zero to infinity (depending on definition) in graphene. As it simplifies the more general band theory, the electronic effective mass can be seen as an important basic parameter that influences measurable properties of a solid, including everything from the efficiency of a solar cell to the speed of an integrated circuit. Simple case: parabolic, isotropic dispersion relation At the highest energies of the valence band in many semiconductors (Ge, Si, GaAs, ...), and the lowest energies of the conduction band in some semiconductors (GaAs, ...), the band structure can be locally approximated as where is the energy of an electron at wavevector in that band, is a constant giving the edge of energy of that band, and is a constant (the effective mass). It can be shown that the electrons placed in these bands behave as free electrons except with a different mass, as long as their energy stays within the range of validity of the approximation above. As a result, the electron mass in models such as the Drude model must be replaced with the effective mass. One remarkable property is that the effective mass can become negative, when the band curves downwards away from a maximum. As a result of the negative mass, the electrons respond to electric and magnetic forces by gaining velocity in the opposite direction compared to normal; even though these electrons have negative charge, they move in trajectories as if they had positive charge (and positive mass). This explains the existence of valence-band holes, the positive-charge, positive-mass quasiparticles that can be found in semiconductors. In any case, if the band structure has the simple parabolic form described above, then the value of effective mass is unambiguous. Unfortunately, this parabolic form is not valid for describing most materials. In such complex materials there is no single definition of "effective mass" but instead multiple definitions, each suited to a particular purpose. The rest of the article describes these effective masses in detail. Intermediate case: parabolic, anisotropic dispersion relation In some important semiconductors (notably, silicon) the lowest energies of the conduction band are not symmetrical, as the constant-energy surfaces are now ellipsoids, rather than the spheres in the isotropic case. 
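Before turning to the anisotropic case, a quick numerical illustration of the simple parabolic band above: with E(k) ≈ E0 + ħ²k²/(2m*), the effective mass is read off from the band's curvature, m* = ħ²/(∂²E/∂k²). The sketch below assumes an arbitrary illustrative mass of 0.2 me, samples such a band, and recovers the mass from a finite-difference second derivative, which is essentially how a constant effective mass is extracted from a computed band structure near a band edge.

```python
import numpy as np

hbar = 1.054571817e-34   # reduced Planck constant, J*s
m_e  = 9.1093837015e-31  # electron rest mass, kg

m_star_true = 0.2 * m_e                      # assumed effective mass for the example
k = np.linspace(-1e9, 1e9, 2001)             # wavevector samples, 1/m
E = hbar**2 * k**2 / (2 * m_star_true)       # parabolic band with the band edge E0 taken as 0

# Curvature of E(k) via finite differences; m* = hbar^2 / (d^2 E / d k^2)
d2E_dk2 = np.gradient(np.gradient(E, k), k)
m_star_est = hbar**2 / d2E_dk2[len(k) // 2]
print(f"recovered effective mass: {m_star_est / m_e:.3f} m_e")   # ~0.200
```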
Each conduction band minimum can be approximated only by where , , and axes are aligned to the principal axes of the ellipsoids, and , and are the inertial effective masses along these different axes. The offsets , , and reflect that the conduction band minimum is no longer centered at zero wavevector. (These effective masses correspond to the principal components of the inertial effective mass tensor, described later.) In this case, the electron motion is no longer directly comparable to a free electron; the speed of an electron will depend on its direction, and it will accelerate to a different degree depending on the direction of the force. Still, in crystals such as silicon the overall properties such as conductivity appear to be isotropic. This is because there are multiple valleys (conduction-band minima), each with effective masses rearranged along different axes. The valleys collectively act together to give an isotropic conductivity. It is possible to average the different axes' effective masses together in some way, to regain the free electron picture. However, the averaging method turns out to depend on the purpose: General case In general the dispersion relation cannot be approximated as parabolic, and in such cases the effective mass should be precisely defined if it is to be used at all. Here a commonly stated definition of effective mass is the inertial effective mass tensor defined below; however, in general it is a matrix-valued function of the wavevector, and even more complex than the band structure. Other effective masses are more relevant to directly measurable phenomena. Inertial effective mass tensor A classical particle under the influence of a force accelerates according to Newton's second law, , or alternatively, the momentum changes according to . This intuitive principle appears identically in semiclassical approximations derived from band structure when interband transitions can be ignored for sufficiently weak external fields. The force gives a rate of change in crystal momentum : where is the reduced Planck constant. Acceleration for a wave-like particle becomes the rate of change in group velocity: where is the del operator in reciprocal space. The last step follows from using the chain rule for a total derivative for a quantity with indirect dependencies, because the direct result of the force is the change in given above, which indirectly results in a change in . Combining these two equations yields using the dot product rule with a uniform force (). is the Hessian matrix of in reciprocal space. We see that the equivalent of the Newtonian reciprocal inertial mass for a free particle defined by has become a tensor quantity whose elements are This tensor allows the acceleration and force to be in different directions, and for the magnitude of the acceleration to depend on the direction of the force. For parabolic bands, the off-diagonal elements of are zero, and the diagonal elements are constants For isotropic bands the diagonal elements must all be equal and the off-diagonal elements must all be equal. For parabolic isotropic bands, , where is a scalar effective mass and is the identity. In general, the elements of are functions of . The inverse, , is known as the effective mass tensor. Note that it is not always possible to invert For bands with linear dispersion such as with photons or electrons in graphene, the group velocity is fixed, i.e. 
electrons travelling parallel to the force direction cannot be accelerated and the diagonal elements of $M^{-1}$ are obviously zero. However, electrons travelling with a component perpendicular to the force can be accelerated in the direction of the force, and the off-diagonal elements of $M^{-1}$ are non-zero. In fact the off-diagonal elements scale inversely with $k$, i.e. they diverge (become infinite) for small $k$. This is why the electrons in graphene are sometimes said to have infinite mass (due to the zeros on the diagonal of $M^{-1}$) and sometimes said to be massless (due to the divergence on the off-diagonals). Cyclotron effective mass Classically, a charged particle in a magnetic field moves in a helix along the magnetic field axis. The period T of its motion depends on its mass m and charge e, $T = \frac{2\pi m}{eB}$, where B is the magnetic flux density. For particles in asymmetrical band structures, the particle no longer moves exactly in a helix; however, its motion transverse to the magnetic field still follows a closed loop (not necessarily a circle). Moreover, the time to complete one of these loops still varies inversely with magnetic field, and so it is possible to define a cyclotron effective mass from the measured period, using the above equation. The semiclassical motion of the particle can be described by a closed loop in k-space. Throughout this loop, the particle maintains a constant energy, as well as a constant momentum along the magnetic field axis. By defining $A$ to be the area enclosed by this loop in k-space (this area depends on the energy $E$, the direction of the magnetic field, and the on-axis wavevector $k_\parallel$), then it can be shown that the cyclotron effective mass depends on the band structure via the derivative of this area with respect to energy: $m^*_c = \frac{\hbar^2}{2\pi}\frac{\partial A(E, k_\parallel)}{\partial E}$. Typically, experiments that measure cyclotron motion (cyclotron resonance, De Haas–Van Alphen effect, etc.) are restricted to only probe motion for energies near the Fermi level. In two-dimensional electron gases, the cyclotron effective mass is defined only for one magnetic field direction (perpendicular) and the out-of-plane wavevector drops out. The cyclotron effective mass therefore is only a function of energy, and it turns out to be exactly related to the density of states at that energy via the relation $g(E) = \frac{g_v m^*_c}{\pi \hbar^2}$, where $g_v$ is the valley degeneracy. Such a simple relationship does not apply in three-dimensional materials. Density of states effective masses (lightly doped semiconductors) In semiconductors with low levels of doping, the electron concentration in the conduction band is in general given by $n = N_c \exp\left(-\frac{E_c - E_F}{k_B T}\right)$, where $E_F$ is the Fermi level, $E_c$ is the minimum energy of the conduction band, and $N_c$ is a concentration coefficient that depends on temperature. The above relationship for $n$ can be shown to apply for any conduction band shape (including non-parabolic, asymmetric bands), provided the doping is weak ($E_c - E_F \gg k_B T$); this is a consequence of Fermi–Dirac statistics limiting towards Maxwell–Boltzmann statistics. The concept of effective mass is useful to model the temperature dependence of $N_c$, thereby allowing the above relationship to be used over a range of temperatures. In an idealized three-dimensional material with a parabolic band, the concentration coefficient is given by $N_c = 2\left(\frac{2\pi m_e^* k_B T}{h^2}\right)^{3/2}$. In semiconductors with non-simple band structures, this relationship is used to define an effective mass, known as the density of states effective mass of electrons. The name "density of states effective mass" is used since the above expression for $N_c$ is derived via the density of states for a parabolic band.
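As a worked example of the expressions above, the sketch below evaluates the concentration coefficient N_c = 2(2π m* k_B T / h²)^{3/2} and the non-degenerate carrier concentration n = N_c·exp(−(E_c − E_F)/k_B T). The density-of-states effective mass of roughly 1.08 me is a commonly quoted figure for electrons in silicon, and the 0.25 eV separation between the Fermi level and the conduction band edge is an arbitrary assumption for illustration.

```python
import math

k_B = 1.380649e-23       # Boltzmann constant, J/K
h   = 6.62607015e-34     # Planck constant, J*s
m_e = 9.1093837015e-31   # electron rest mass, kg
q   = 1.602176634e-19    # J per eV

m_dos = 1.08 * m_e       # assumed density-of-states effective mass (silicon-like)
T = 300.0                # temperature, K
Ec_minus_Ef = 0.25 * q   # assumed E_c - E_F, in joules

Nc = 2.0 * (2.0 * math.pi * m_dos * k_B * T / h**2) ** 1.5   # effective density of states, m^-3
n  = Nc * math.exp(-Ec_minus_Ef / (k_B * T))                 # Boltzmann (non-degenerate) limit
print(f"N_c ≈ {Nc:.2e} m^-3, n ≈ {n:.2e} m^-3")
```

With these inputs N_c comes out near 3 × 10^25 m^-3 (about 3 × 10^19 cm^-3), the familiar order of magnitude for silicon at room temperature.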
In practice, the effective mass extracted in this way is not quite constant in temperature ( does not exactly vary as ). In silicon, for example, this effective mass varies by a few percent between absolute zero and room temperature because the band structure itself slightly changes in shape. These band structure distortions are a result of changes in electron–phonon interaction energies, with the lattice's thermal expansion playing a minor role. Similarly, the number of holes in the valence band, and the density of states effective mass of holes are defined by: where is the maximum energy of the valence band. Practically, this effective mass tends to vary greatly between absolute zero and room temperature in many materials (e.g., a factor of two in silicon), as there are multiple valence bands with distinct and significantly non-parabolic character, all peaking near the same energy. Determination Experimental Traditionally effective masses were measured using cyclotron resonance, a method in which microwave absorption of a semiconductor immersed in a magnetic field goes through a sharp peak when the microwave frequency equals the cyclotron frequency . In recent years effective masses have more commonly been determined through measurement of band structures using techniques such as angle-resolved photoemission spectroscopy (ARPES) or, most directly, the de Haas–van Alphen effect. Effective masses can also be estimated using the coefficient γ of the linear term in the low-temperature electronic specific heat at constant volume . The specific heat depends on the effective mass through the density of states at the Fermi level and as such is a measure of degeneracy as well as band curvature. Very large estimates of carrier mass from specific heat measurements have given rise to the concept of heavy fermion materials. Since carrier mobility depends on the ratio of carrier collision lifetime to effective mass, masses can in principle be determined from transport measurements, but this method is not practical since carrier collision probabilities are typically not known a priori. The optical Hall effect is an emerging technique for measuring the free charge carrier density, effective mass and mobility parameters in semiconductors. The optical Hall effect measures the analogue of the quasi-static electric-field-induced electrical Hall effect at optical frequencies in conductive and complex layered materials. The optical Hall effect also permits characterization of the anisotropy (tensor character) of the effective mass and mobility parameters. Theoretical A variety of theoretical methods including density functional theory, k·p perturbation theory, and others are used to supplement and support the various experimental measurements described in the previous section, including interpreting, fitting, and extrapolating these measurements. Some of these theoretical methods can also be used for predictions of effective mass in the absence of any experimental data, for example to study materials that have not yet been created in the laboratory. Significance The effective mass is used in transport calculations, such as transport of electrons under the influence of fields or carrier gradients, but it also is used to calculate the carrier density and density of states in semiconductors. These masses are related but, as explained in the previous sections, are not the same because the weightings of various directions and wavevectors are different. 
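The point that different purposes weight directions differently can be made concrete for a silicon-like ellipsoidal valley: the density-of-states average of the principal masses is their geometric mean (the full density-of-states mass additionally folds in the number of equivalent valleys), while the conductivity average weights the reciprocal masses. The longitudinal and transverse values below (0.98 me and 0.19 me) are commonly quoted for the silicon conduction band, but the sketch is illustrative rather than authoritative.

```python
m_l, m_t = 0.98, 0.19    # longitudinal / transverse masses in units of m_e (silicon-like values)

# Density-of-states average for one valley: geometric mean of the three principal masses
m_dos_valley = (m_l * m_t * m_t) ** (1.0 / 3.0)

# Conductivity (transport) average: weights the reciprocal masses
m_cond = 3.0 / (1.0 / m_l + 2.0 / m_t)

print(f"density-of-states mass ≈ {m_dos_valley:.2f} m_e")   # ~0.33 m_e
print(f"conductivity mass      ≈ {m_cond:.2f} m_e")         # ~0.26 m_e
```

Even for the same valley the two averages differ noticeably, which is exactly the kind of discrepancy referred to here.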
These differences are important, for example in thermoelectric materials, where high conductivity, generally associated with light mass, is desired at the same time as high Seebeck coefficient, generally associated with heavy mass. Methods for assessing the electronic structures of different materials in this context have been developed. Certain group III–V compounds such as gallium arsenide (GaAs) and indium antimonide (InSb) have far smaller effective masses than tetrahedral group IV materials like silicon and germanium. In the simplest Drude picture of electronic transport, the maximum obtainable charge carrier velocity is inversely proportional to the effective mass: , where with being the electronic charge. The ultimate speed of integrated circuits depends on the carrier velocity, so the low effective mass is the fundamental reason that GaAs and its derivatives are used instead of Si in high-bandwidth applications like cellular telephony. In April 2017, researchers at Washington State University claimed to have created a fluid with negative effective mass inside a Bose–Einstein condensate, by engineering the dispersion relation. See also Models of solids and crystals: Tight-binding model Free electron model Nearly free electron model Footnotes References This book contains an exhaustive but accessible discussion of the topic with extensive comparison between calculations and experiment. S. Pekar, The method of effective electron mass in crystals, Zh. Eksp. Teor. Fiz. 16, 933 (1946). External links NSM archive Condensed matter physics Mass
Effective mass (solid-state physics)
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
3,074
[ "Scalar physical quantities", "Physical quantities", "Quantity", "Mass", "Phases of matter", "Materials science", "Size", "Condensed matter physics", "Wikipedia categories named after physical quantities", "Matter" ]
156,720
https://en.wikipedia.org/wiki/Distributed%20Proofreaders
Distributed Proofreaders (commonly abbreviated as DP or PGDP) is a web-based project that supports the development of e-texts for Project Gutenberg by allowing many people to work together in proofreading drafts of e-texts for errors. To date, the site has digitized 48,000 titles. History Distributed Proofreaders was founded by Charles Franks in 2000 as an independent site to assist Project Gutenberg. Distributed Proofreaders became an official Project Gutenberg site in 2002. On 8 November 2002, Distributed Proofreaders was slashdotted, and more than 4,000 new members joined in one day, causing an influx of new proofreaders and software developers, which helped to increase the quantity and quality of e-text production. In July 2015, the 30,000th Distributed Proofreaders-produced e-text was posted to Project Gutenberg. DP-contributed e-texts came to comprise more than half of the works in Project Gutenberg. On 31 July 2006, the Distributed Proofreaders Foundation was formed to provide Distributed Proofreaders with its own legal entity and not-for-profit status. IRS approval of section 501(c)(3) status was granted retroactive to 7 April 2006. Proofreading process Public domain works, typically books with expired copyright, are scanned by volunteers or sourced from digitization projects, and the images are run through optical character recognition (OCR) software. Since OCR software is far from perfect, many errors often appear in the resulting text. To correct them, pages are made available to volunteers via the Internet; the original page image and the recognized text appear side by side. This process thereby distributes the time-consuming error-correction process, akin to distributed computing. Each page is proofread and formatted several times, and then a post-processor combines the pages and prepares the text for uploading to Project Gutenberg. Besides custom software created to support the project, DP also runs a forum and a wiki for project coordinators and participants. Related projects DP Europe In January 2004, Distributed Proofreaders Europe started, hosted by Project Rastko, Serbia. This site had the ability to process text in Unicode UTF-8 encoding. Books proofread centered on European culture, with a considerable proportion of non-English texts including Hebrew, Arabic, Urdu, and many others. In total, DP Europe produced 787 e-texts, the last of these in November 2011. The original DP is sometimes referred to as "DP International" by members of DP Europe. However, DP servers are located in the United States, and therefore works must be cleared by Project Gutenberg as being in the public domain according to U.S. copyright law before they can be proofread and eventually published at DP. DP Canada In December 2007, Distributed Proofreaders Canada launched to support the production of e-books for Project Gutenberg Canada and take advantage of shorter Canadian copyright terms. Although it was established by members of the original Distributed Proofreaders site, it is a separate entity. All its projects are posted to Faded Page, their book archive website. In addition, it supplies books to Project Gutenberg Canada (which launched on Canada Day 2007) and (where copyright laws are compatible) to the original Project Gutenberg. In addition to preserving Canadiana, DP Canada is notable because it is the first major effort to take advantage of Canada's copyright laws, which may allow more works to be preserved. Unlike copyright law in some other countries, Canada has a "life plus 50" copyright term.
This means that works by authors who died more than fifty years ago may be preserved in Canada, whereas in other parts of the world those works may not be distributed because they are still under copyright. Notable authors whose works may be preserved in Canada but not in other parts of the world include Clark Ashton Smith, Dashiell Hammett, Ernest Hemingway, Carl Jung, A. A. Milne, Dorothy Sayers, Nevil Shute, Walter de la Mare, Sheila Kaye-Smith and Amy Carmichael. Milestones 10,000th E-book On 9 March 2007, Distributed Proofreaders announced the completion of more than 10,000 titles. In celebration, a collection of fifteen titles was published: Slave Narratives, Oklahoma (A Folk History of Slavery in the United States From Interviews with Former Slaves) by the U.S. Work Projects Administration (English) Eighth annual report of the Bureau of ethnology. (1891 N 08 / 1886–1887) edited by John Wesley Powell (English) R. Caldecott's First Collection of Pictures and Songs by Randolph Caldecott [Illustrator] (English) Como atravessei Àfrica (Volume II) by Serpa Pinto (Portuguese) Triplanetary by E. E. "Doc" Smith (English) Heidi by Johanna Spyri (English) Heimatlos by Johanna Spyri (German) October 27, 1920 issue of Punch (English) Sylva, or, A Discourse of Forest-Trees by John Evelyn (English) Encyclopedia of Needlework by Therese de Dillmont (English) The annals of the Cakchiquels by Francisco Ernantez Arana (fl. 1582), translated and edited by Daniel G. Brinton (1837–1899) (English with Central American Indian) The Shanty Book, Part I, Sailor Shanties (1921) by Richard Runciman Terry (1864–1938) (English) Le marchand de Venise by William Shakespeare, translated by François Guizot (French) Agriculture for beginners, Rev. ed. by Charles William Burkett (English) Species Plantarum (Part 1) by Carl Linnaeus (Carl von Linné) (Latin) 20,000th E-book On April 10, 2011, the 20,000th book milestone was celebrated as a group release of bilingual books: The Renaissance in Italy–Italian Literature, Vol 1, John Addington Symonds (English with Italian) Märchen und Erzählungen für Anfänger; erster Teil, H. A. Guerber (German with English) Gedichte und Sprüche, Walther von der Vogelweide (Middle High German (–1500) with German) Studien und Plaudereien im Vaterland, Sigmon Martin Stern (German with English) Caos del Triperuno, Teofilo Folengo (Italian with Latin) Niederländische Volkslieder, Hoffmann von Fallersleben (German with Dutch) A "San Francisco", Salvatore Di Giacomo (Italian with Neapolitan) O' voto, Salvatore Di Giacomo (Italian with Neapolitan) De Latino sine Flexione & Principio de Permanentia, Giuseppe Peano (1858–1932) (Latin with Latino sine Flexione) Cappiddazzu paga tuttu—Nino Martoglio, Luigi Pirandello (Italian with Sicilian) The International Auxiliary Language Esperanto, George Cox (English with Esperanto) Lusitania: canti popolari portoghesi, Ettore Toci (Italian with French) 30,000th E-book On 7 July 2015, the 30,000th book milestone was celebrated with a group of thirty texts. One was numbered 30,000: Graded literature readers - Fourth book, editors: Harry Pratt Judson and Ida C. Bender, 1900 See also List of digital library projects Wikisource References External links Collaborative projects Crowdsourcing Distributed computing projects Human-based computation Internet properties established in 2000 Mass digitization Proofreading
Distributed Proofreaders
[ "Technology", "Engineering" ]
1,570
[ "Information systems", "Human-based computation", "Distributed computing projects", "Information technology projects" ]
156,766
https://en.wikipedia.org/wiki/Professional%20development
Professional development, also known as professional education, is learning that leads to or emphasizes education in a specific professional career field or builds practical job applicable skills emphasizing praxis in addition to the transferable skills and theoretical academic knowledge found in traditional liberal arts and pure sciences education. It is used to earn or maintain professional credentials such as professional certifications or academic degrees through formal coursework at institutions known as professional schools, or attending conferences and informal learning opportunities to strengthen or gain new skills. Professional education has been described as intensive and collaborative, ideally incorporating an evaluative stage. There is a variety of approaches to professional development or professional education, including consultation, coaching, communities of practice, lesson study, case study, capstone project, mentoring, reflective supervision and technical assistance. Participants A wide variety of people, such as teachers, military officers and non-commissioned officers, health care professionals, architects, lawyers, accountants and engineers engage in professional development. Individuals may participate in professional development because of an interest in lifelong learning, a sense of moral obligation, to maintain and improve professional competence, to enhance career progression, to keep abreast of new technology and practices, or to comply with professional regulatory requirements. In the training of school staff in the United States, "[t]he need for professional development ... came to the forefront in the 1960s". Many American states have professional development requirements for school teachers. For example, Arkansas teachers must complete 60 hours of documented professional development activities annually. Professional development credits are named differently from state to state. For example, teachers in Indiana are required to earn 90 Continuing Renewal Units (CRUs) per year; in Massachusetts, teachers need 150 Professional Development Points (PDPs); and in Georgia, teachers must earn 10 Professional Learning Units (PLUs). American and Canadian nurses, as well as those in the United Kingdom, have to participate in formal and informal professional development (earning credit based on attendance of education that has been accredited by a regulatory agency) in order to maintain professional registration. Approaches In a broad sense, professional development may include formal types of vocational education, typically post-secondary or poly-technical training leading to qualification or credential required to obtain or retain employment. Professional development may also come in the form of pre-service or in-service professional development programs. These programs may be formal, or informal, group or individualized. Individuals may pursue professional development independently, or programs may be offered by human resource departments. Professional development on the job may develop or enhance process skills, sometimes referred to as leadership skills, as well as task skills. Some examples for process skills are 'effectiveness skills', 'team functioning skills', and 'systems thinking skills'. 
Professional development opportunities can range from a single workshop to a semester-long academic course, to services offered by a medley of different professional development providers and varying widely with respect to the philosophy, content, and format of the learning experiences. Some examples of approaches to professional development include: Case Study Method – The case method is a teaching approach that consists in presenting the students with a case, putting them in the role of a decision maker facing a problem – See Case method. Consultation – to assist an individual or group of individuals to clarify and address immediate concerns by following a systematic problem-solving process. Coaching – to enhance a person's competencies in a specific skill area by providing a process of observation, reflection, and action. Communities of Practice – to improve professional practice by engaging in shared inquiry and learning with people who have a common goal Lesson Study – to solve practical dilemmas related to intervention or instruction through participation with other professionals in systematically examining practice Mentoring – to promote an individual's awareness and refinement of his or her own professional development by providing and recommending structured opportunities for reflection and observation Reflective Supervision – to support, develop, and ultimately evaluate the performance of employees through a process of inquiry that encourages their understanding and articulation of the rationale for their own practices Technical Assistance – to assist individuals and their organization to improve by offering resources and information, supporting networking and change efforts. The World Bank's 2019 World Development Report on the future of work argues that professional development opportunities for those both in and out of work, such as flexible learning opportunities at universities and adult learning programs, enable labor markets to adjust to the future of work. Initial Initial professional development (IPD) is defined as "a period of development during which an individual acquires a level of competence necessary in order to operate as an autonomous professional". Professional associations may recognise the successful completion of IPD by the award of chartered or similar status. Examples of professional bodies that require IPD prior to the award of professional status are the Institute of Mathematics and its Applications, the Institution of Structural Engineers, and the Institution of Occupational Safety and Health. Continuing Continuing professional development (CPD) or continuing professional education (CPE) is continuing education to maintain knowledge and skills. Most professions have CPD obligations. Examples are the Royal Institution of Chartered Surveyors, American Academy of Financial Management, safety professionals with the International Institute of Risk & Safety Management (IIRSM) or the Institution of Occupational Safety and Health (IOSH), and medical and legal professionals, who are subject to continuing medical education or continuing legal education requirements, which vary by jurisdiction. CPD authorities in the United Kingdom include the CPD Standards Office who work in partnership with the CPD Institute, and also the CPD Certification Service. For example, CPD by the Institute of Highway Engineers is approved by the CPD Standards Office, and CPD by the Chartered Institution of Highways and Transportation is approved by the CPD Certification Service. 
A systematic review published in 2019 by the Campbell Collaboration found little evidence of the effectiveness of continuing professional development (CPD). See also References External links Personal development Vocational education Professional ethics
Professional development
[ "Biology" ]
1,181
[ "Personal development", "Behavior", "Human behavior" ]
156,773
https://en.wikipedia.org/wiki/List%20of%20digital%20library%20projects
This is a list of digital library projects. See also Bibliographic database List of academic databases and search engines List of online databases List of online encyclopedias List of open-access journals List of search engines References Digital library projects
List of digital library projects
[ "Technology" ]
57
[ "Computing-related lists", "Internet-related lists" ]
156,787
https://en.wikipedia.org/wiki/Desalination
Desalination is a process that removes mineral components from saline water. More generally, desalination is the removal of salts and minerals from a substance. One example is soil desalination. This is important for agriculture. It is possible to desalinate saltwater, especially sea water, to produce water for human consumption or irrigation. The by-product of the desalination process is brine. Many seagoing ships and submarines use desalination. Modern interest in desalination mostly focuses on cost-effective provision of fresh water for human use. Along with recycled wastewater, it is one of the few water resources independent of rainfall. Due to its energy consumption, desalinating sea water is generally more costly than fresh water from surface water or groundwater, water recycling and water conservation; however, these alternatives are not always available and depletion of reserves is a critical problem worldwide. Desalination processes are using either thermal methods (in the case of distillation) or membrane-based methods (e.g. in the case of reverse osmosis). An estimate in 2018 found that "18,426 desalination plants are in operation in over 150 countries. They produce 87 million cubic meters of clean water each day and supply over 300 million people." The energy intensity has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20–30 kWh/m3 in 1970. Nevertheless, desalination represented about 25% of the energy consumed by the water sector in 2016. History Ancient Greek philosopher Aristotle observed in his work Meteorology that "salt water, when it turns into vapour, becomes sweet and the vapour does not form salt water again when it condenses", and that a fine wax vessel would hold potable water after being submerged long enough in seawater, having acted as a membrane to filter the salt. At the same time the desalination of seawater was recorded in China. Both the Classic of Mountains and Water Seas in the Period of the Warring States and the Theory of the Same Year in the Eastern Han Dynasty mentioned that people found that the bamboo mats used for steaming rice would form a thin outer layer after long use. The as-formed thin film had adsorption and ion exchange functions, which could adsorb salt. Numerous examples of experimentation in desalination appeared throughout Antiquity and the Middle Ages, but desalination became feasible on a large scale only in the modern era. A good example of this experimentation comes from Leonardo da Vinci (Florence, 1452), who realized that distilled water could be made cheaply in large quantities by adapting a still to a cookstove. During the Middle Ages elsewhere in Central Europe, work continued on distillation refinements, although not necessarily directed towards desalination. The first major land-based desalination plant may have been installed under emergency conditions on an island off the coast of Tunisia in 1560. It is believed that a garrison of 700 Spanish soldiers was besieged by the Turkish army and that, during the siege, the captain in charge fabricated a still capable of producing 40 barrels of fresh water per day, though details of the device have not been reported. Before the Industrial Revolution, desalination was primarily of concern to oceangoing ships, which otherwise needed to keep on board supplies of fresh water. Sir Richard Hawkins (1562–1622), who made extensive travels in the South Seas, reported that he had been able to supply his men with fresh water by means of shipboard distillation. 
Additionally, during the early 1600s, several prominent figures of the era such as Francis Bacon and Walter Raleigh published reports on desalination. These reports and others, set the climate for the first patent dispute concerning desalination apparatus. The two first patents regarding water desalination were approved in 1675 and 1683 (patents No. 184 and No. 226, published by William Walcot and Robert Fitzgerald (and others), respectively). Nevertheless, neither of the two inventions entered service as a consequence of scale-up difficulties. No significant improvements to the basic seawater distillation process were made during the 150 years from the mid-1600s until 1800. When the frigate Protector was sold to Denmark in the 1780s (as the ship Hussaren) its still was studied and recorded in great detail. In the United States, Thomas Jefferson catalogued heat-based methods going back to the 1500s, and formulated practical advice that was publicized to all U.S. ships on the reverse side of sailing clearance permits. Beginning about 1800, things started changing as a consequence of the appearance of the steam engine and the so-called age of steam. Knowledge of the thermodynamics of steam processes and the need for a pure water source for its use in boilers generated a positive effect regarding distilling systems. Additionally, the spread of European colonialism induced a need for freshwater in remote parts of the world, thus creating the appropriate climate for water desalination. In parallel with the development and improvement of systems using steam (multiple-effect evaporators), these type of devices quickly demonstrated their desalination potential. In 1852, Alphonse René le Mire de Normandy was issued a British patent for a vertical tube seawater distilling unit that, thanks to its simplicity of design and ease of construction, gained popularity for shipboard use. Land-based units did not significantly appear until the latter half of the nineteenth century. In the 1860s, the US Army purchased three Normandy evaporators, each rated at 7000 gallons/day and installed them on the islands of Key West and Dry Tortugas. Another land-based plant was installed at Suakin during the 1880s that provided freshwater to the British troops there. It consisted of six-effect distillers with a capacity of 350 tons/day. After World War II, many technologies were developed or improved such as Multi Effect Flash desalination (MEF) and Multi Stage Flash desalination (MSF). Another notable technology is freeze-thaw desalination. Freeze-thaw desalination, (cryo-desalination or FD), excludes dissolved minerals from saline water through crystallization. The Office of Saline Water was created in the United States Department of the Interior in 1955 in accordance with the Saline Water Conversion Act of 1952. This act was motivated by a water shortage in California and inland western United States. The Department of the Interior allocated resources including research grants, expert personnel, patent data, and land for experiments to further advancements. The results of these efforts included the construction of over 200 electrodialysis and distillation plants globally, reverse osmosis (RO) research, and international cooperation (for example, the First International Water Desalination Symposium and Exposition in 1965). The Office of Saline Water merged into the Office of Water Resources Research in 1974. 
The first industrial desalination plant in the United States opened in Freeport, Texas in 1961 after a decade of regional drought. By the late 1960s and the early 1970s, RO started to show promising results to replace traditional thermal desalination units. Research took place at state universities in California, at the Dow Chemical Company and DuPont. Many studies focused on ways to optimize desalination systems. The first commercial RO plant, the Coalinga desalination plant, was inaugurated in California in 1965 for brackish water. Dr. Sidney Loeb, in conjunction with staff at UCLA, designed a large pilot plant to gather data on RO; it was successful enough to provide freshwater to the residents of Coalinga. This was a milestone in desalination technology, as it proved the feasibility of RO and its advantages compared to existing technologies (efficiency, no phase change required, ambient temperature operation, scalability, and ease of standardization). A few years later, in 1975, the first sea water reverse osmosis desalination plant came into operation. As of 2000, more than 2,000 plants were in operation. The largest are in Saudi Arabia, Israel, and the UAE, and the biggest plant, with a volume of 1,401,000 m3/d, is in Saudi Arabia (Ras Al Khair). As of 2021, 22,000 plants were in operation. In 2024, the Catalan government installed a floating offshore plant near the port of Barcelona and purchased 12 mobile desalination units for the northern region of the Costa Brava to combat the severe drought. In 2012, costs averaged $0.75 per cubic meter. By 2022, that had declined (before inflation) to $0.41. Desalinated supplies are growing at a 10%+ compound rate, doubling in abundance every seven years. Applications There are now about 21,000 desalination plants in operation around the globe. The biggest ones are in the United Arab Emirates, Saudi Arabia, and Israel. The world's largest desalination plant is located in Saudi Arabia (Ras Al-Khair Power and Desalination Plant) with a capacity of 1,401,000 cubic meters per day. Desalination is currently expensive compared to most alternative sources of water, and only a very small fraction of total human use is satisfied by desalination. It is usually only economically practical for high-valued uses (such as household and industrial uses) in arid areas. However, there is growth in desalination for agricultural use and highly populated areas such as Singapore or California. The most extensive use is in the Persian Gulf. While noting costs are falling, and generally positive about the technology for affluent areas in proximity to oceans, a 2005 study argued, "Desalinated water may be a solution for some water-stress regions, but not for places that are poor, deep in the interior of a continent, or at high elevation. Unfortunately, that includes some of the places with the biggest water problems", and, "Indeed, one needs to lift the water by 2000 m, or transport it over more than 1600 km to get transport costs equal to the desalination costs." Thus, it may be more economical to transport fresh water from somewhere else than to desalinate it. In places far from the sea, like New Delhi, or in high places, like Mexico City, transport costs could match desalination costs. Desalinated water is also expensive in places that are both somewhat far from the sea and somewhat high, such as Riyadh and Harare. By contrast, in other locations transport costs are much less, such as Beijing, Bangkok, Zaragoza, Phoenix, and, of course, coastal cities like Tripoli.
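The lifting comparison quoted above can be checked with simple arithmetic: raising a cubic metre of water to a height h requires roughly ρgh of energy, so a 2000 m lift is of the same order as the electrical energy a modern seawater plant spends on desalination. The pump efficiency and the ~3 kWh/m³ desalination figure used below are round-number assumptions for illustration only.

```python
rho = 1000.0      # density of water, kg/m^3
g   = 9.81        # gravitational acceleration, m/s^2
h   = 2000.0      # lift height from the quoted comparison, m
pump_efficiency = 0.75   # assumed overall pumping efficiency

lift_energy_J   = rho * g * h / pump_efficiency   # J per m^3 of water lifted
lift_energy_kwh = lift_energy_J / 3.6e6           # convert J to kWh

desalination_kwh = 3.0   # assumed modern seawater RO energy use, kWh per m^3
print(f"lifting 1 m^3 by {h:.0f} m ≈ {lift_energy_kwh:.1f} kWh "
      f"vs ≈ {desalination_kwh:.1f} kWh to desalinate it")
```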
After desalination at Jubail, Saudi Arabia, water is pumped 320 km inland to Riyadh. For coastal cities, desalination is increasingly viewed as a competitive choice. In 2023, Israel was using desalination to replenish the Sea of Galilee's water supply. Not everyone is convinced that desalination is or will be economically viable or environmentally sustainable for the foreseeable future. Debbie Cook wrote in 2011 that desalination plants can be energy intensive and costly. Therefore, water-stressed regions might do better to focus on conservation or other water supply solutions than invest in desalination plants. Technologies Desalination is an artificial process by which saline water (generally sea water) is converted to fresh water. The most common desalination processes are distillation and reverse osmosis. There are several methods. Each has advantages and disadvantages but all are useful. The methods can be divided into membrane-based (e.g., reverse osmosis) and thermal-based (e.g., multistage flash distillation) methods. The traditional process of desalination is distillation (i.e., boiling and re-condensation of seawater to leave salt and impurities behind). There are currently two technologies with a large majority of the world's desalination capacity: multi-stage flash distillation and reverse osmosis. Distillation Solar distillation Solar distillation mimics the natural water cycle, in which the sun heats sea water enough for evaporation to occur. After evaporation, the water vapor is condensed onto a cool surface. There are two types of solar desalination. The first type uses photovoltaic cells to convert solar energy to electrical energy to power desalination. The second type converts solar energy to heat, and is known as solar thermal powered desalination. Natural evaporation Water can evaporate through several other physical effects besides solar irradiation. These effects have been included in a multidisciplinary desalination methodology in the IBTS Greenhouse. The IBTS is an industrial desalination (power)plant on one side and a greenhouse operating with the natural water cycle (scaled down 1:10) on the other side. The various processes of evaporation and condensation are hosted in low-tech utilities, partly underground and the architectural shape of the building itself. This integrated biotectural system is most suitable for large scale desert greening as it has a km2 footprint for the water distillation and the same for landscape transformation in desert greening, respectively the regeneration of natural fresh water cycles. Vacuum distillation In vacuum distillation atmospheric pressure is reduced, thus lowering the temperature required to evaporate the water. Liquids boil when the vapor pressure equals the ambient pressure and vapor pressure increases with temperature. Effectively, liquids boil at a lower temperature, when the ambient atmospheric pressure is less than usual atmospheric pressure. Thus, because of the reduced pressure, low-temperature "waste" heat from electrical power generation or industrial processes can be employed. Multi-stage flash distillation Water is evaporated and separated from sea water through multi-stage flash distillation, which is a series of flash evaporations. Each subsequent flash process uses energy released from the condensation of the water vapor from the previous step. Multiple-effect distillation Multiple-effect distillation (MED) works through a series of steps called "effects". 
Incoming water is sprayed onto pipes which are then heated to generate steam. The steam is then used to heat the next batch of incoming sea water. To increase efficiency, the steam used to heat the sea water can be taken from nearby power plants. Although this method is the most thermodynamically efficient among methods powered by heat, a few limitations exist such as a max temperature and max number of effects. Vapor-compression distillation Vapor-compression evaporation involves using either a mechanical compressor or a jet stream to compress the vapor present above the liquid. The compressed vapor is then used to provide the heat needed for the evaporation of the rest of the sea water. Since this system only requires power, it is more cost effective if kept at a small scale. Membrane distillation Membrane distillation uses a temperature difference across a membrane to evaporate vapor from a brine solution and condense pure water on the colder side. The design of the membrane can have a significant effect on efficiency and durability. A study found that a membrane created via co-axial electrospinning of PVDF-HFP and silica aerogel was able to filter 99.99% of salt after continuous 30-day usage. Osmosis Reverse osmosis The leading process for desalination in terms of installed capacity and yearly growth is reverse osmosis (RO). The RO membrane processes use semipermeable membranes and applied pressure (on the membrane feed side) to preferentially induce water permeation through the membrane while rejecting salts. Reverse osmosis plant membrane systems typically use less energy than thermal desalination processes. Energy cost in desalination processes varies considerably depending on water salinity, plant size and process type. At present the cost of seawater desalination, for example, is higher than traditional water sources, but it is expected that costs will continue to decrease with technology improvements that include, but are not limited to, improved efficiency, reduction in plant footprint, improvements to plant operation and optimization, more effective feed pretreatment, and lower cost energy sources. Reverse osmosis uses a thin-film composite membrane, which comprises an ultra-thin, aromatic polyamide thin-film. This polyamide film gives the membrane its transport properties, whereas the remainder of the thin-film composite membrane provides mechanical support. The polyamide film is a dense, void-free polymer with a high surface area, allowing for its high water permeability. A recent study has found that the water permeability is primarily governed by the internal nanoscale mass distribution of the polyamide active layer. The reverse osmosis process requires maintenance. Various factors interfere with efficiency: ionic contamination (calcium, magnesium etc.); dissolved organic carbon (DOC); bacteria; viruses; colloids and insoluble particulates; biofouling and scaling. In extreme cases, the RO membranes are destroyed. To mitigate damage, various pretreatment stages are introduced. Anti-scaling inhibitors include acids and other agents such as the organic polymers polyacrylamide and polymaleic acid, phosphonates and polyphosphates. Inhibitors for fouling are biocides (as oxidants against bacteria and viruses), such as chlorine, ozone, sodium or calcium hypochlorite. At regular intervals, depending on the membrane contamination; fluctuating seawater conditions; or when prompted by monitoring processes, the membranes need to be cleaned, known as emergency or shock-flushing. 
Flushing is done with inhibitors in a fresh water solution and the system must go offline. This procedure is environmentally risky, since contaminated water is diverted into the ocean without treatment. Sensitive marine habitats can be irreversibly damaged. Off-grid solar-powered desalination units use solar energy to fill a buffer tank on a hill with seawater. The reverse osmosis process receives its pressurized seawater feed in non-sunlight hours by gravity, resulting in sustainable drinking water production without the need for fossil fuels, an electricity grid or batteries. Nano-tubes are also used for the same function (i.e., Reverse Osmosis). Forward osmosis Forward osmosis uses a semi-permeable membrane to effect separation of water from dissolved solutes. The driving force for this separation is an osmotic pressure gradient, such as a "draw" solution of high concentration. Freeze–thaw Freeze–thaw desalination (or freezing desalination) uses freezing to remove fresh water from salt water. Salt water is sprayed during freezing conditions into a pad where an ice-pile builds up. When seasonal conditions warm, naturally desalinated melt water is recovered. This technique relies on extended periods of natural sub-freezing conditions. A different freeze–thaw method, not weather dependent and invented by Alexander Zarchin, freezes seawater in a vacuum. Under vacuum conditions the ice, desalinated, is melted and diverted for collection and the salt is collected. Electrodialysis Electrodialysis uses electric potential to move the salts through pairs of charged membranes, which trap salt in alternating channels. Several variances of electrodialysis exist such as conventional electrodialysis, electrodialysis reversal. Electrodialysis can simultaneously remove salt and carbonic acid from seawater. Preliminary estimates suggest that the cost of such carbon removal can be paid for in large part if not entirely from the sale of the desalinated water produced as a byproduct. Microbial desalination Microbial desalination cells are biological electrochemical systems that implements the use of electro-active bacteria to power desalination of water in situ, resourcing the natural anode and cathode gradient of the electro-active bacteria and thus creating an internal supercapacitor. Wave-powered desalination Wave powered desalination systems generally convert mechanical wave motion directly to hydraulic power for reverse osmosis. Such systems aim to maximize efficiency and reduce costs by avoiding conversion to electricity, minimizing excess pressurization above the osmotic pressure, and innovating on hydraulic and wave power components. One such approach is desalinating using submerged buoys, a wave power approach done by CETO and Oneka. Wave-powered desalination plants began operating by CETO on Garden Island in Western Australia in 2013 and in Perth in 2015 , and Oneka has installations in Chile, Florida, California, and the Caribbean. Wind-powered desalination Wind energy can also be coupled to desalination. Similar to wave power, a direct conversion of mechanical energy to hydraulic power can reduce components and losses in powering reverse osmosis. Wind power has also been considered for coupling with thermal desalination technologies. Other techniques In a April 2024, researchers from the Australian National University published experimental results of a novel technique for desalination. 
This technique, thermodiffusive desalination, passes saline water through a channel with a temperature gradient. Species migrate under this temperature gradient in a process known as thermodiffusion. Researchers then separated the water into fractions. After multiple passes through the channel, the researchers were able to achieve a NaCl concentration drop of 25,000 ppm with a recovery rate of 10% of the original water volume. Design aspects Energy consumption The desalination process's energy consumption depends on the water's salinity. Brackish water desalination requires less energy than seawater desalination. The energy intensity of seawater desalination has improved: It is now about 3 kWh/m3 (in 2018), down by a factor of 10 from 20–30 kWh/m3 in 1970. This is similar to the energy consumption of other freshwater supplies transported over large distances, but much higher than local fresh water supplies that use 0.2 kWh/m3 or less. A minimum energy consumption for seawater desalination of around 1 kWh/m3 has been determined, excluding prefiltering and intake/outfall pumping. Under 2 kWh/m3 has been achieved with reverse osmosis membrane technology, leaving limited scope for further energy reductions, as the reverse osmosis energy consumption in the 1970s was 16 kWh/m3. Supplying all US domestic water by desalination would increase domestic energy consumption by around 10%, about the amount of energy used by domestic refrigerators. Domestic consumption is a relatively small fraction of the total water usage. Given the energy-intensive nature of desalination and the associated economic and environmental costs, desalination is generally considered a last resort after water conservation. But this is changing as prices continue to fall. Cogeneration Cogeneration is generating useful heat energy and electricity from a single process. Cogeneration can provide usable heat for desalination in an integrated, or "dual-purpose", facility where a power plant provides the energy for desalination. Alternatively, the facility's energy production may be dedicated to the production of potable water (a stand-alone facility), or excess energy may be produced and incorporated into the energy grid. Cogeneration takes various forms, and theoretically any form of energy production could be used. However, the majority of current and planned cogeneration desalination plants use either fossil fuels or nuclear power as their source of energy. Most plants are located in the Middle East or North Africa, which use their petroleum resources to offset limited water resources. The advantage of dual-purpose facilities is that they can be more efficient in energy consumption, thus making desalination more viable. The current trend in dual-purpose facilities is hybrid configurations, in which the permeate from reverse osmosis desalination is mixed with distillate from thermal desalination. Basically, two or more desalination processes are combined along with power production. Such facilities have been implemented in Saudi Arabia at Jeddah and Yanbu. A typical supercarrier in the US military is capable of using nuclear power to desalinate large quantities of water per day.
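The "around 1 kWh/m3" minimum mentioned in the energy consumption discussion above can be traced to the osmotic pressure of seawater: at vanishing recovery, the least work needed to extract a cubic metre of fresh water is roughly the osmotic pressure times that volume. The sketch below estimates it with the van 't Hoff relation, treating seawater as about 35 g/L of fully dissociated NaCl, which is a deliberate simplification.

```python
R = 8.314            # gas constant, J/(mol*K)
T = 298.0            # temperature, K
i = 2                # van 't Hoff factor for NaCl (idealized full dissociation)
salinity_g_per_L = 35.0
molar_mass_NaCl  = 58.44                        # g/mol
C = salinity_g_per_L / molar_mass_NaCl * 1000   # mol/m^3

pi = i * C * R * T                    # osmotic pressure, Pa (~30 bar)
min_energy_kwh_per_m3 = pi / 3.6e6    # 1 Pa = 1 J/m^3; convert J to kWh

print(f"osmotic pressure ≈ {pi/1e5:.0f} bar, "
      f"thermodynamic minimum ≈ {min_energy_kwh_per_m3:.2f} kWh per m^3 of permeate")
```

This comes out near 0.8 kWh/m3; real plants recover a sizeable fraction of the feed, which raises the minimum somewhat, and pretreatment and pumping add more, which is why practical figures sit at 2–3 kWh/m3.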
Alternatives to desalination Increased water conservation and efficiency remain the most cost-effective approaches in areas with a large potential to improve the efficiency of water use practices. Wastewater reclamation provides multiple benefits over desalination of saline water, although it typically uses desalination membranes. Urban runoff and storm water capture also provide benefits in treating, restoring and recharging groundwater. A proposed alternative to desalination in the American Southwest is the commercial importation of bulk water from water-rich areas either by oil tankers converted to water carriers, or pipelines. The idea is politically unpopular in Canada, where governments imposed trade barriers to bulk water exports as a result of a North American Free Trade Agreement (NAFTA) claim. The California Department of Water Resources and the California State Water Resources Control Board submitted a report to the state legislature recommending that urban water suppliers achieve an indoor water use efficiency standard of per capita per day by 2023, declining to per day by 2025, and by 2030 and beyond. Costs Factors that determine the costs for desalination include capacity and type of facility, location, feed water, labor, energy, financing, and concentrate disposal. Costs of desalinating sea water (infrastructure, energy, and maintenance) are generally higher than fresh water from rivers or groundwater, water recycling, and water conservation, but alternatives are only sometimes available. Desalination costs in 2013 ranged from US$0.45 to US$1.00/m3. More than half of the cost comes directly from energy costs, and since energy prices are very volatile, actual costs can vary substantially. The cost of untreated fresh water in the developing world can reach US$5/cubic metre. Since 1975, desalination technology has seen significant advancements, decreasing the average cost of producing one cubic meter of freshwater from seawater from $1.10 in 2000 to approximately $0.50 today. Improved desalination efficiency is a primary factor contributing to this reduction. Energy consumption remains a significant cost component, accounting for up to half the total cost of the desalination process. Desalination can substantially increase energy intensity, particularly for regions with limited energy resources. For instance, in the island nation of Cyprus, desalination accounts for approximately 5% of the country's total power consumption. The global desalination market was valued at $20 billion in 2023. With growing populations in arid coastal regions, this market is projected to double by 2032. In 2023, global desalination capacity reached 99 million cubic meters per day, a significant increase from 27 million cubic meters per day in 2003. Desalination stills control pressure, temperature and brine concentrations to optimize efficiency. Nuclear-powered desalination might be economical on a large scale. In 2014, the Israeli facilities of Hadera, Palmahim, Ashkelon, and Sorek were desalinizing water for less than US$0.40 per cubic meter. As of 2006, Singapore was desalinating water for US$0.49 per cubic meter. Environmental concerns Intake In the United States, cooling water intake structures are regulated by the Environmental Protection Agency (EPA). These structures can have the same impacts on the environment as desalination facility intakes. According to EPA, water intake structures cause adverse environmental impact by sucking fish and shellfish or their eggs into an industrial system. 
There, the organisms may be killed or injured by heat, physical stress, or chemicals. Larger organisms may be killed or injured when they become trapped against screens at the front of an intake structure. Alternative intake types that mitigate these impacts include beach wells, but they require more energy and entail higher costs. The Kwinana Desalination Plant opened in the Australian city of Perth in 2007. Water there and at Queensland's Gold Coast Desalination Plant and Sydney's Kurnell Desalination Plant is withdrawn at , which is slow enough to let fish escape. The plant provides nearly of clean water per day. Outflow Desalination processes produce large quantities of brine, possibly at above ambient temperature, and containing residues of pretreatment and cleaning chemicals, their reaction byproducts and heavy metals due to corrosion (especially in thermal-based plants). Chemical pretreatment and cleaning are a necessity in most desalination plants, which typically include prevention of biofouling, scaling, foaming and corrosion in thermal plants, and of biofouling, suspended solids and scale deposits in membrane plants. To limit the environmental impact of returning the brine to the ocean, it can be diluted with another stream of water entering the ocean, such as the outfall of a wastewater treatment or power plant. With medium to large power and desalination plants, the power plant's cooling water flow is likely to be several times larger than that of the desalination plant, reducing the salinity of the combination. Another method to dilute the brine is to mix it via a diffuser in a mixing zone. For example, once a pipeline containing the brine reaches the sea floor, it can split into many branches, each releasing brine gradually through small holes along its length. Mixing can be combined with power plant or wastewater plant dilution. Furthermore, zero liquid discharge systems can be adopted to treat brine before disposal. Another possibility is making the desalination plant movable, thus avoiding a build-up of brine in a single location as the plant continues to produce it. Some such movable (ship-connected) desalination plants have been constructed. Brine is denser than seawater and therefore sinks to the ocean bottom and can damage the ecosystem. Brine plumes have been seen to diminish over time to a diluted concentration, to where there was little to no effect on the surrounding environment. However, studies have shown that the dilution can be misleading due to the depth at which it occurred. If the dilution was observed during the summer season, there is a possibility that a seasonal thermocline event prevented the concentrated brine from sinking to the sea floor. In that case the brine has the potential to disrupt not the sea floor ecosystem but the waters above it. Brine dispersal from the desalination plants has been seen to travel several kilometers away, meaning that it has the potential to cause harm to ecosystems far from the plants. Careful reintroduction with appropriate measures and environmental studies can minimize this problem. Energy use The energy demand for desalination in the Middle East, driven by severe water scarcity, is expected to double by 2030. Currently, this process primarily uses fossil fuels, comprising over 95% of its energy source. In 2023, desalination consumed nearly half of the residential sector's energy in the region.
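As a rough illustration of the brine-dilution approach described in the Outflow discussion above, the sketch below applies a simple salinity mass balance to a concentrate stream blended with a larger cooling-water stream before discharge. All flow rates and salinities are hypothetical round numbers chosen only for the example.

```python
# Minimal mass-balance sketch of brine dilution with a cooling-water stream.
# Values are illustrative, not measurements from any particular plant.

def mixed_salinity(q_brine, s_brine, q_dilution, s_dilution):
    """Salinity (g/L) of the combined discharge, by simple mass balance."""
    return (q_brine * s_brine + q_dilution * s_dilution) / (q_brine + q_dilution)

seawater = 35.0   # g/L, typical open-ocean salinity (assumed)
brine = 65.0      # g/L, roughly what a mid-recovery RO plant might reject (assumed)
q_brine = 1.0     # concentrate flow, arbitrary units

for ratio in (1, 3, 5, 10):  # cooling-water flow as a multiple of the brine flow
    s = mixed_salinity(q_brine, brine, ratio * q_brine, seawater)
    print(f"dilution ratio {ratio:>2}:1 -> discharge salinity {s:5.1f} g/L")
```

With a cooling-water flow several times the concentrate flow, as the text describes, the discharge salinity approaches that of the receiving seawater.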
Other issues Due to the nature of the process, there is a need to place the plants on approximately 25 acres of land on or near the shoreline. In the case of a plant built inland, pipes have to be laid into the ground to allow for easy intake and outtake. However, once the pipes are laid into the ground, they have a possibility of leaking into and contaminating nearby aquifers. Aside from environmental risks, the noise generated by certain types of desalination plants can be loud. Health aspects Iodine deficiency Desalination removes iodine from water and could increase the risk of iodine deficiency disorders. Israeli researchers claimed a possible link between seawater desalination and iodine deficiency, finding iodine deficits among adults exposed to iodine-poor water concurrently with an increasing proportion of their area's drinking water coming from seawater reverse osmosis (SWRO). They later found probable iodine deficiency disorders in a population reliant on desalinated seawater. A possible link between heavy use of desalinated water and national iodine deficiency was suggested by Israeli researchers. They found a high burden of iodine deficiency in the general population of Israel: 62% of school-age children and 85% of pregnant women fall below the WHO's adequacy range. They also pointed to the national reliance on iodine-depleted desalinated water, the absence of a universal salt iodization program and reports of increased use of thyroid medication in Israel as possible reasons that the population's iodine intake is low. In the year that the survey was conducted, the amount of water produced from the desalination plants constituted about 50% of the quantity of fresh water supplied for all needs and about 80% of the water supplied for domestic and industrial needs in Israel. Experimental techniques Other desalination techniques include: Waste heat Thermally-driven desalination technologies are frequently suggested for use with low-temperature waste heat sources, as the low temperatures are not useful for process heat needed in many industrial processes, but ideal for the lower temperatures needed for desalination. In fact, such pairing with waste heat can even improve the electrical process: Diesel generators commonly provide electricity in remote areas. About 40–50% of the energy output is low-grade heat that leaves the engine via the exhaust. Connecting a thermal desalination technology such as a membrane distillation system to the diesel engine exhaust repurposes this low-grade heat for desalination. The system actively cools the diesel generator, improving its efficiency and increasing its electricity output. This results in an energy-neutral desalination solution. An example plant was commissioned by Dutch company Aquaver in March 2014 for Gulhi, Maldives. Low-temperature thermal Originally stemming from ocean thermal energy conversion research, low-temperature thermal desalination (LTTD) takes advantage of water boiling at low pressure, even at ambient temperature. The system uses pumps to create a low-pressure, low-temperature environment in which water boils at a temperature gradient of between two volumes of water. Cool ocean water is supplied from depths of up to . This water is pumped through coils to condense the water vapor. The resulting condensate is purified water. LTTD may take advantage of the temperature gradient available at power plants, where large quantities of warm wastewater are discharged from the plant, reducing the energy input needed to create a temperature gradient.
Experiments were conducted in the US and Japan to test the approach. In Japan, a spray-flash evaporation system was tested by Saga University. In Hawaii, the National Energy Laboratory tested an open-cycle OTEC plant with fresh water and power production using a temperature difference of between surface water and water at a depth of around . LTTD was studied by India's National Institute of Ocean Technology (NIOT) in 2004. Their first LTTD plant opened in 2005 at Kavaratti in the Lakshadweep islands. The plant's capacity is /day, at a capital cost of INR 50 million (€922,000). The plant uses deep water at a temperature of . In 2007, NIOT opened an experimental, floating LTTD plant off the coast of Chennai, with a capacity of /day. A smaller plant was established in 2009 at the North Chennai Thermal Power Station to prove the LTTD application where power plant cooling water is available. Thermoionic process In October 2009, Saltworks Technologies announced a process that uses solar or other thermal heat to drive an ionic current that removes all sodium and chloride ions from the water using ion-exchange membranes. Evaporation and condensation for crops The Seawater greenhouse uses natural evaporation and condensation processes inside a greenhouse powered by solar energy to grow crops in arid coastal land. Ion concentration polarisation (ICP) In 2022, using a technique that combined multiple stages of ion concentration polarisation with a single stage of electrodialysis, researchers from MIT managed to create a filterless portable desalination unit, capable of removing both dissolved salts and suspended solids. Designed for use by non-experts in remote areas or during natural disasters, as well as in military operations, the prototype is the size of a suitcase, measuring 42 × 33.5 × 19 cm and weighing 9.25 kg. The process is fully automated, notifying the user when the water is safe to drink, and can be controlled by a single button or smartphone app. As it does not require a high-pressure pump, the process is highly energy efficient, consuming only 20 watt-hours per liter of drinking water produced, making it capable of being powered by common portable solar panels. Using a filterless design at low pressures or replaceable filters significantly reduces maintenance requirements, while the device itself is self-cleaning. However, the device is limited to producing 0.33 liters of drinking water per minute. There are also concerns that fouling will impact the long-term reliability, especially in water with high turbidity. The researchers are working to increase the efficiency and production rate with the intent to commercialise the product in the future; however, a significant limitation is the reliance on expensive materials in the current design. Other approaches Adsorption-based desalination (AD) relies on the moisture absorption properties of certain materials such as silica gel. Forward osmosis One process was commercialized by Modern Water PLC using forward osmosis, with a number of plants reported to be in operation. Hydrogel based desalination The idea of the method is that when a hydrogel is put into contact with an aqueous salt solution, it swells, absorbing a solution whose ion composition differs from the original one. This solution can be easily squeezed out from the gel by means of a sieve or microfiltration membrane.
Compression of the gel in a closed system leads to a change in salt concentration, whereas compression in an open system, while the gel is exchanging ions with the bulk, leads to a change in the number of ions. The sequence of compression and swelling under open- and closed-system conditions mimics the reverse Carnot cycle of a refrigerator. The only difference is that instead of heat, this cycle transfers salt ions from a bulk of low salinity to a bulk of high salinity. Similarly to the Carnot cycle, this cycle is fully reversible, so it can in principle work with ideal thermodynamic efficiency. Because the method does not use osmotic membranes, it can compete with the reverse osmosis method. In addition, unlike reverse osmosis, the approach is not sensitive to the quality of the feed water or its seasonal changes, and allows the production of water of any desired concentration. Small-scale solar The United States, France and the United Arab Emirates are working to develop practical solar desalination. AquaDania's WaterStillar has been installed at Dahab, Egypt, and in Playa del Carmen, Mexico. In this approach, a solar thermal collector measuring two square metres can distill from 40 to 60 litres per day from any local water source – five times more than conventional stills. It eliminates the need for plastic PET bottles or energy-consuming water transport. In Central California, a startup company WaterFX is developing a solar-powered method of desalination that can enable the use of local water, including runoff water that can be treated and used again. Salty groundwater in the region would be treated to become freshwater, and in areas near the ocean, seawater could be treated. Passarell The Passarell process uses reduced atmospheric pressure rather than heat to drive evaporative desalination. The pure water vapor generated by distillation is then compressed and condensed using an advanced compressor. The compression process improves distillation efficiency by creating the reduced pressure in the evaporation chamber. The compressor centrifuges the pure water vapor after it is drawn through a demister (removing residual impurities), causing it to compress against tubes in the collection chamber. The compression of the vapor increases its temperature. The heat is transferred to the input water falling in the tubes, vaporizing the water in the tubes. Water vapor condenses on the outside of the tubes as product water. By combining several physical processes, Passarell enables most of the system's energy to be recycled through its evaporation, demisting, vapor compression, condensation, and water movement processes. Geothermal Geothermal energy can drive desalination. In most locations, geothermal desalination beats using scarce groundwater or surface water, environmentally and economically. Nanotechnology Nanotube membranes of higher permeability than the current generation of membranes may lead to an eventual reduction in the footprint of RO desalination plants. It has also been suggested that the use of such membranes will lead to a reduction in the energy needed for desalination. Hermetic, sulphonated nano-composite membranes have been shown to be capable of removing various contaminants to the parts per billion level, and to have little or no susceptibility to high salt concentration levels. Biomimesis Biomimetic membranes are another approach.
Electrochemical In 2008, Siemens Water Technologies announced technology that applied electric fields to desalinate one cubic meter of water while using only a purported 1.5 kWh of energy. If accurate, this process would consume one-half the energy of other processes. As of 2012, a demonstration plant was operating in Singapore. Researchers at the University of Texas at Austin and the University of Marburg are developing more efficient methods of electrochemically mediated seawater desalination. Electrokinetic shocks A process employing electrokinetic shock waves can be used to accomplish membraneless desalination at ambient temperature and pressure. In this process, anions and cations in salt water are exchanged for carbonate anions and calcium cations, respectively, using electrokinetic shock waves. Calcium and carbonate ions react to form calcium carbonate, which precipitates, leaving fresh water. The theoretical energy efficiency of this method is on par with electrodialysis and reverse osmosis. Temperature swing solvent extraction Temperature Swing Solvent Extraction (TSSE) uses a solvent instead of a membrane or high temperatures. Solvent extraction is a common technique in chemical engineering. It can be activated by low-grade heat (less than ), which may not require active heating. In a study, TSSE removed up to 98.4 percent of the salt in brine. A solvent whose solubility varies with temperature is added to saltwater. At room temperature the solvent draws water molecules away from the salt. The water-laden solvent is then heated, causing the solvent to release the now salt-free water. It can desalinate extremely salty brine up to seven times as salty as the ocean. For comparison, the current methods can only handle brine twice as salty. Wave energy A small-scale offshore system uses wave energy to desalinate 30–50 m3/day. The system operates with no external power, and is constructed of recycled plastic bottles. Plants Trade Arabia claims Saudi Arabia to be producing 7.9 million cubic meters of desalinated water daily, or 22% of the world total as of year-end 2021. Perth began operating a reverse osmosis seawater desalination plant in 2006. The Perth desalination plant is powered partially by renewable energy from the Emu Downs Wind Farm. A desalination plant now operates in Sydney, and the Wonthaggi desalination plant was under construction in Wonthaggi, Victoria. A wind farm at Bungendore in New South Wales was purpose-built to generate enough renewable energy to offset the Sydney plant's energy use, mitigating concerns about harmful greenhouse gas emissions. A January 17, 2008, article in The Wall Street Journal stated, "In November, Connecticut-based Poseidon Resources Corp. won a key regulatory approval to build the $300 million water-desalination plant in Carlsbad, north of San Diego. The facility would produce 190,000 cubic metres of drinking water per day, enough to supply about 100,000 homes." As of June 2012, the cost for the desalinated water had risen to $2,329 per acre-foot. Each $1,000 per acre-foot works out to $3.06 for 1,000 gallons, or $0.81 per cubic meter. As new technological innovations continue to reduce the capital cost of desalination, more countries are building desalination plants as a small element in addressing their water scarcity problems. Israel desalinizes water for a cost of 53 cents per cubic meter. Singapore desalinizes water for 49 cents per cubic meter and also treats sewage with reverse osmosis for industrial and potable use (NEWater).
China and India, the world's two most populous countries, are turning to desalination to provide a small part of their water needs. In 2007 Pakistan announced plans to use desalination. All Australian capital cities (except Canberra, Darwin, Northern Territory and Hobart) are either in the process of building desalination plants, or are already using them. In late 2011, Melbourne will begin using Australia's largest desalination plant, the Wonthaggi desalination plant, to raise low reservoir levels. In 2007 Bermuda signed a contract to purchase a desalination plant. Before 2015, the largest desalination plant in the United States was at Tampa Bay, Florida, which began desalinizing 25 million gallons (95000 m3) of water per day in December 2007. In the United States, the cost of desalination is $3.06 for 1,000 gallons, or 81 cents per cubic meter. In the United States, California, Arizona, Texas, and Florida use desalination for a very small part of their water supply. Since 2015, the Claude "Bud" Lewis Carlsbad Desalination Plant has been producing 50 million gallons of drinking water daily. After being desalinized at Jubail, Saudi Arabia, water is pumped inland through a pipeline to the capital city of Riyadh. As of 2008, "World-wide, 13,080 desalination plants produce more than 12 billion gallons of water a day, according to the International Desalination Association." An estimate in 2009 found that the worldwide desalinated water supply would triple between 2008 and 2020. One of the world's largest desalination hubs is the Jebel Ali Power Generation and Water Production Complex in the United Arab Emirates. It is a site featuring multiple plants using different desalination technologies and is capable of producing 2.2 million cubic meters of water per day. A typical aircraft carrier in the U.S. military uses nuclear power to desalinize of water per day. In nature Evaporation of water over the oceans in the water cycle is a natural desalination process. The formation of sea ice produces ice with little salt, much lower than in seawater. Seabirds distill seawater using countercurrent exchange in a gland with a rete mirabile. The gland secretes highly concentrated brine stored near the nostrils above the beak. The bird then "sneezes" the brine out. As freshwater is not usually available in their environments, some seabirds, such as pelicans, petrels, albatrosses, gulls and terns, possess this gland, which allows them to drink the salty water from their environments while they are far from land. Mangrove trees grow in seawater; they secrete salt by trapping it in parts of the root, which are then eaten by animals (usually crabs). Additional salt is removed by storing it in leaves that fall off. Some types of mangroves have glands on their leaves, which work in a similar way to the seabird desalination gland. Salt is extracted to the leaf exterior as small crystals, which then fall off the leaf. Willow trees and reeds absorb salt and other contaminants, effectively desalinating the water. This is used in artificial constructed wetlands for treating sewage. Society and culture Despite the issues associated with desalination processes, public support for its development can be very high. One survey of a Southern California community found 71.9% of respondents in support of desalination plant development in their community.
In many cases, high freshwater scarcity corresponds to higher public support for desalination development, whereas areas with low water scarcity tend to have less public support for its development. See also Metal–organic framework Atmospheric water generator Dewvaporation Flexible barge Peak water Pumpable ice technology Soil desalination model Soil salinity Soil salinity and groundwater model References External links International Desalination Association European Desalination Society Working principles in desalination systems Classification of Desalination Technologies (CDT) SOLAR TOWER Project – Clean Electricity Generation for Desalination. Desalination bibliography Library of Congress Encyclopedia of Desalination and Water Resources Environmental issues with water Filters Fresh water Water supply Water desalination Water treatment
Desalination
[ "Chemistry", "Engineering", "Environmental_science" ]
10,098
[ "Hydrology", "Water desalination", "Water treatment", "Chemical equipment", "Fresh water", "Filters", "Water pollution", "Filtration", "Environmental engineering", "Water technology", "Water supply" ]
156,817
https://en.wikipedia.org/wiki/Gridiron%20pendulum
A gridiron pendulum was a temperature-compensated clock pendulum invented by British clockmaker John Harrison around 1726. It was used in precision clocks. In ordinary clock pendulums, the pendulum rod expands and contracts with changes in temperature. The period of the pendulum's swing depends on its length, so a pendulum clock's rate varied with changes in ambient temperature, causing inaccurate timekeeping. The gridiron pendulum consists of alternating parallel rods of two metals with different thermal expansion coefficients, such as steel and brass. The rods are connected by a frame in such a way that their different thermal expansions (or contractions) compensate for each other, so that the overall length of the pendulum, and thus its period, stays constant with temperature. The gridiron pendulum was used during the Industrial Revolution period in pendulum clocks, particularly precision regulator clocks employed as time standards in factories, laboratories, office buildings, railroad stations and post offices to schedule work and set other clocks. The gridiron became so associated with accurate timekeeping that by the turn of the 20th century many clocks had pendulums with decorative fake gridirons, which had no temperature compensating qualities. How it works The gridiron pendulum is constructed so the high thermal expansion (zinc or brass) rods make the pendulum shorter when they expand, while the low expansion steel rods make the pendulum longer. By using the correct ratio of lengths, the greater expansion of the zinc or brass rods exactly compensates for the greater length of the low expansion steel rods, and the pendulum stays the same length with temperature changes. The simplest form of gridiron pendulum, introduced as an improvement to Harrison's around 1750 by John Smeaton, consists of five rods, three of steel and two of zinc. A central steel rod runs up from the bob to the suspension pivot. At that point a cross-piece (middle bridge) extends from the central rod and connects to two zinc rods, one on each side of the central rod, which reach down to, and are fixed to, the bottom bridge just above the bob. The bottom bridge clears the central rod and connects to two further steel rods which run back up to the top bridge attached to the suspension. As the steel rods expand in heat, the bottom bridge drops relative to the suspension, and the bob drops relative to the middle bridge. However, the middle bridge rises relative to the bottom one because the greater expansion of the zinc rods pushes the middle bridge, and therefore the bob, upward to match the combined drop caused by the expanding steel. In simple terms, the upward expansion of the zinc counteracts the combined downward expansion of the steel (which has a greater total length). The rod lengths are calculated so that the effective length of the zinc rods multiplied by zinc's thermal expansion coefficient equals the effective length of the steel rods multiplied by steel's expansion coefficient, thereby keeping the pendulum the same length. Harrison's original pendulum used brass rods (pure zinc not being available then); this required more rods because brass does not expand as much as zinc does. Instead of one high expansion rod on each side, two are needed on each side, requiring a total of 9 rods, five steel and four brass. The exact degree of compensation can be adjusted by having a section of the central rod which is partly brass and partly steel.
These overlap (like a sandwich) and are joined by a pin which passes through both metals. A number of holes for the pin are made in both parts, and moving the pin up or down the rod changes how much of the combined rod is brass and how much is steel. In the late 19th century the Dent company developed a tubular version of the zinc gridiron in which the four outer rods were replaced by two concentric tubes which were linked by a tubular nut that could be screwed up and down to alter the degree of compensation. In the 1730s clockmaker John Ellicott designed a version that only required three rods, two brass and one steel, in which the brass rods, as they expanded with increasing temperature, pressed against levers which lifted the bob. The Ellicott pendulum did not see much use. Disadvantages Scientists in the 1800s found that the gridiron pendulum had disadvantages that made it unsuitable for the highest-precision clocks. The friction of the rods sliding in the holes in the frame caused the rods to adjust to temperature changes in a series of tiny jumps, rather than with a smooth motion. This caused the rate of the pendulum, and therefore the clock, to change suddenly with each jump. Later it was found that zinc is not very stable dimensionally; it is subject to creep. Therefore, another type of temperature-compensated pendulum, the mercury pendulum invented in 1721 by George Graham, was used in the highest-precision clocks. By 1900, the highest-precision astronomical regulator clocks used pendulum rods of low thermal expansion materials such as invar and fused quartz. Mathematical analysis Temperature error All substances expand with an increase in temperature θ, so uncompensated pendulum rods get longer with a temperature increase, causing the clock to slow down, and get shorter with a temperature decrease, causing the clock to speed up. The amount depends on the linear coefficient of thermal expansion (CTE) of the material they are composed of. CTE is usually given in parts per million (ppm) per degree Celsius. If a rod has a length L₀ at some standard temperature θ₀, the length of the rod as a function of temperature θ is L(θ) = L₀(1 + α(θ − θ₀)). If Δθ = θ − θ₀ and ΔL = L(θ) − L₀, the expansion or contraction of a rod of length L with a coefficient of expansion α caused by a temperature change Δθ is ΔL = αLΔθ         (1) The period of oscillation of the pendulum T (the time interval for a right swing and a left swing) is T = 2π√(L/g)         (2) A change in length ΔL due to a temperature change will cause a change in the period ΔT. Since the expansion coefficient α is so small, the length changes due to temperature are very small, parts per million, so ΔL ≪ L and the change in period can be approximated to first order as a linear function ΔT = (dT/dL)ΔL = (T/2)(ΔL/L). Substituting equation (1), the change in the pendulum's period caused by a change in temperature is ΔT/T = (1/2)αΔθ. So the fractional change in an uncompensated pendulum's period is equal to one-half the coefficient of expansion times the change in temperature. Steel has a CTE of 11.5 × 10⁻⁶ per °C so a pendulum with a steel rod will have a thermal error rate of 5.7 parts per million or 0.5 seconds per day per degree Celsius (about 0.3 seconds per day per degree Fahrenheit). Before 1900 most buildings were unheated, so clocks in temperate climates like Europe and North America would experience a summer/winter temperature variation of around resulting in an error rate of 6.8 seconds per day.
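As a quick numerical check of the error-rate figures above, the relation ΔT/T = (1/2)αΔθ can be turned into seconds of error per day; the coefficients below are the ones quoted in the text.

```python
# Daily timing error of an uncompensated pendulum rod, from
# delta_T / T = 0.5 * alpha * delta_theta applied over one day.

SECONDS_PER_DAY = 86_400
ALPHA_STEEL = 11.5e-6   # per deg C, from the text
ALPHA_WOOD = 4.9e-6     # per deg C, from the text

def daily_error_seconds(alpha, delta_theta_c):
    """Seconds gained or lost per day for a temperature change delta_theta_c (deg C)."""
    return SECONDS_PER_DAY * 0.5 * alpha * delta_theta_c

print(daily_error_seconds(ALPHA_STEEL, 1.0))  # ~0.50 s/day per deg C, as stated above
print(daily_error_seconds(ALPHA_WOOD, 1.0))   # ~0.21 s/day per deg C, as stated below
```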
Wood has a smaller CTE of 4.9 × 10⁻⁶ per °C, thus a pendulum with a wood rod will have a smaller thermal error of 0.21 sec per day per °C, so wood pendulum rods were often used in quality domestic clocks. The wood had to be varnished to protect it from the atmosphere as humidity could also cause changes in length. Compensation A gridiron pendulum is symmetrical, with two identical linkages of suspension rods, one on each side, suspending the bob from the pivot. Within each suspension chain, the total change in length of the pendulum is equal to the sum of the changes of the rods that make it up. It is designed so that with an increase in temperature the high expansion rods on each side push the pendulum bob up, in the opposite direction to the low expansion rods which push it down, so the net change in length is the difference between these changes. From (1) the change in length of a gridiron pendulum with a temperature change Δθ is ΔL = (α_low·L_low − α_high·L_high)Δθ, where L_low is the sum of the lengths of all the low expansion (steel) rods and L_high is the sum of the lengths of the high expansion rods in the suspension chain from the bob to the pivot. The condition for zero length change with temperature is α_low·L_low = α_high·L_high         (3) In other words, the ratio of the total rod lengths must be equal to the inverse ratio of the thermal expansion coefficients of the two metals: L_low/L_high = α_high/α_low. In order to calculate the length of the individual rods, this equation is solved along with equation (2), which gives the total pendulum length L needed for the correct period, using the fact that the net length of the suspension chain is L_low − L_high = L. Most of the precision pendulum clocks with gridirons used a 'seconds pendulum', in which the period was two seconds. The length of the seconds pendulum was . In an ordinary uncompensated pendulum, which has most of its mass in the bob, the center of oscillation is near the center of the bob, so it was usually accurate enough to make the length from the pivot to the center of the bob 0.9936 m and then correct the clock's period with the adjustment nut. But in a gridiron pendulum, the gridiron constitutes a significant part of the mass of the pendulum. This changes the moment of inertia so the center of oscillation is somewhat higher, above the bob in the gridiron. Therefore the total length of the pendulum must be somewhat longer to give the correct period. This factor is hard to calculate accurately. Another minor factor is that if the pendulum bob is supported at bottom by a nut on the pendulum rod, as is typical, the rise in center of gravity due to thermal expansion of the bob has to be taken into account. Clockmakers of the 19th century usually used recommended lengths for gridiron rods that had been found by master clockmakers by trial and error. Five rod gridiron In the 5 rod gridiron, there is one high expansion rod on each side, of length L₂, flanked by two low expansion rods with lengths L₁ and L₃, one from the pivot to support the bottom of L₂, the other going from the top of L₂ down to support the bob. So from equation (3) the condition for compensation is α_low(L₁ + L₃) = α_high·L₂. Since, to fit in the frame, the high expansion rod must be equal to or shorter than each of the low expansion rods (L₂ ≤ L₁ and L₂ ≤ L₃), the geometrical condition for construction of the gridiron is L₁ + L₃ ≥ 2L₂. Therefore the 5 rod gridiron can only be made with metals whose expansion coefficients have a ratio greater than or equal to two: α_high/α_low ≥ 2. Zinc has a CTE of α_high = 26.2 ppm per °C, versus the steel value of α_low = 11.5 ppm per °C, so the ratio α_high/α_low = 2.28. Thus the zinc/steel combination can be used in 5 rod pendulums.
The compensation condition for a zinc/steel gridiron is therefore L_low = 2.28 L_high. Nine rod gridiron To allow the use of metals with a lower ratio of expansion coefficients, such as brass and steel, a greater proportion of the suspension length must be the high expansion metal, so a construction with more high expansion rods must be used. In the 9 rod gridiron, there are two high expansion rods on each side, of lengths L₂ and L₄, flanked by three low expansion rods with lengths L₁, L₃ and L₅. So from equation (3) the condition for compensation is α_low(L₁ + L₃ + L₅) = α_high(L₂ + L₄). Since, to fit in the frame, each of the two high expansion rods must be as short as or shorter than each of the low expansion rods, the geometrical condition for construction is L₁ + L₃ + L₅ ≥ (3/2)(L₂ + L₄). Therefore the 9 rod gridiron can be made with metals with a ratio of thermal expansion coefficients exceeding 1.5: α_high/α_low ≥ 1.5. Brass has a CTE of around α_high = 19.3 ppm per °C, giving a ratio of α_high/α_low = 1.68. So while brass/steel cannot be used in 5 rod gridirons, it can be used in the 9 rod version. So the compensation condition for a brass/steel gridiron using brass with the above CTE is L_low = 1.68 L_high. References Further reading Pendulums Timekeeping components
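To make the compensation conditions above concrete, a small sketch can solve equation (3) together with the net-length requirement L_low − L_high = L for the total high- and low-expansion rod lengths in one suspension chain. The 0.9936 m figure is the approximate seconds-pendulum length quoted above; the correction for the gridiron's own moment of inertia discussed in the text is ignored, so the results are only illustrative.

```python
# Solve the two design relations from the text for total rod lengths:
#   (3)  alpha_low * L_low = alpha_high * L_high   (zero net thermal expansion)
#        L_low - L_high    = L                      (net pendulum length)
# The moment-of-inertia correction mentioned in the article is neglected.

def rod_lengths(alpha_low, alpha_high, net_length=0.9936):
    ratio = alpha_high / alpha_low      # = L_low / L_high from equation (3)
    l_high = net_length / (ratio - 1.0)
    l_low = ratio * l_high
    return l_low, l_high

for name, a_low, a_high in [("zinc/steel", 11.5e-6, 26.2e-6),
                            ("brass/steel", 11.5e-6, 19.3e-6)]:
    l_low, l_high = rod_lengths(a_low, a_high)
    metal = name.split("/")[0]
    print(f"{name}: total steel {l_low:.3f} m, total {metal} {l_high:.3f} m")
```

For zinc/steel this gives roughly 1.77 m of steel and 0.78 m of zinc per chain; for brass/steel, roughly 2.46 m of steel and 1.47 m of brass, which is why the brass design needs the longer 9 rod layout.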
Gridiron pendulum
[ "Technology" ]
2,372
[ "Timekeeping components", "Components" ]
156,859
https://en.wikipedia.org/wiki/Comparison%20of%20analog%20and%20digital%20recording
Sound can be recorded and stored and played using either digital or analog techniques. Both techniques introduce errors and distortions in the sound, and these methods can be systematically compared. Musicians and listeners have argued over the superiority of digital versus analog sound recordings. Arguments for analog systems include the absence of fundamental error mechanisms which are present in digital audio systems, including aliasing and associated anti-aliasing filter implementation, jitter and quantization noise. Advocates of digital point to the high levels of performance possible with digital audio, including excellent linearity in the audible band and low levels of noise and distortion. Two prominent differences in performance between the two methods are the bandwidth and the signal-to-noise ratio (S/N ratio). The bandwidth of the digital system is determined, according to the Nyquist frequency, by the sample rate used. The bandwidth of an analog system is dependent on the physical and electronic capabilities of the analog circuits. The S/N ratio of a digital system may be limited by the bit depth of the digitization process, but the electronic implementation of conversion circuits introduces additional noise. In an analog system, other natural analog noise sources exist, such as flicker noise and imperfections in the recording medium. Other performance differences are specific to the systems under comparison, such as the ability for more transparent filtering algorithms in digital systems and the harmonic saturation and speed variations of analog systems. Dynamic range The dynamic range of an audio system is a measure of the difference between the smallest and largest amplitude values that can be represented in a medium. Digital and analog differ in both the methods of transfer and storage, as well as the behavior exhibited by the systems due to these methods. The dynamic range capability of digital audio systems far exceeds that of analog audio systems. Consumer analog cassette tapes have a dynamic range of between 50 and 75 dB. Analog FM broadcasts rarely have a dynamic range exceeding 50 dB. Analog studio master tapes can have a dynamic range of up to 77 dB. An LP made out of perfect vinyl would have a theoretical dynamic range of 70 dB, though measurements indicate actual performance in the 60 to 70 dB range. Compare this to digital recording. Typically, a 16-bit digital recording has a dynamic range of between 90 and 95 dB. The benefits of using digital recorders with greater than 16-bit accuracy can be applied to the 16 bits of audio CD. Meridian Audio founder John Robert Stuart stresses that with the correct dither, the resolution of a digital system is theoretically infinite, and that it is possible, for example, to resolve sounds at −110 dB (below digital full-scale) in a well-designed 16-bit channel. Overload conditions There are some differences in the behaviour of analog and digital systems when high level signals are present, where there is the possibility that such signals could push the system into overload. With high level signals, analog magnetic tape approaches saturation, and high frequency response drops in proportion to low frequency response. While undesirable, the audible effect of this can be reasonably unobjectionable. 
In contrast, digital PCM recorders show non-benign behaviour in overload; samples that exceed the peak quantization level are simply truncated, clipping the waveform squarely, which introduces distortion in the form of large quantities of higher-frequency harmonics. In principle, PCM digital systems have the lowest level of nonlinear distortion at full signal amplitude. The opposite is usually true of analog systems, where distortion tends to increase at high signal levels. A study by Manson (1980) considered the requirements of a digital audio system for high quality broadcasting. It concluded that a 16-bit system would be sufficient, but noted the small reserve the system provided in ordinary operating conditions. For this reason, it was suggested that a fast-acting signal limiter or 'soft clipper' be used to prevent the system from becoming overloaded. With many recordings, high level distortions at signal peaks may be audibly masked by the original signal, thus large amounts of distortion may be acceptable at peak signal levels. The difference between analog and digital systems is the form of high-level signal error. Some early analog-to-digital converters displayed non-benign behaviour when in overload, where the overloading signals were 'wrapped' from positive to negative full-scale. Modern converter designs based on sigma-delta modulation may become unstable in overload conditions. It is usually a design goal of digital systems to limit high-level signals to prevent overload. To prevent overload, a modern digital system may compress input signals so that digital full-scale cannot be reached Physical degradation Unlike analog duplication, digital copies are exact replicas that can be duplicated indefinitely and without generation loss, in principle. Error correction allows digital formats to tolerate significant media deterioration though digital media is not immune to data loss. Consumer CD-R compact discs have a limited and variable lifespan due to both inherent and manufacturing quality issues. With vinyl records, there will be some loss in fidelity on each playing of the disc. This is due to the wear of the stylus in contact with the record surface. Magnetic tapes, both analog and digital, wear from friction between the tape and the heads, guides, and other parts of the tape transport as the tape slides over them. The brown residue deposited on swabs during cleaning of a tape machine's tape path is actually particles of magnetic coating shed from tapes. Sticky-shed syndrome is a prevalent problem with older tapes. Tapes can also suffer creasing, stretching, and frilling of the edges of the plastic tape base, particularly from low-quality or out-of-alignment tape decks. When a CD is played, there is no physical contact involved as the data is read optically using a laser beam. Therefore, no such media deterioration takes place, and the CD will, with proper care, sound exactly the same every time it is played (discounting aging of the player and CD itself); however, this is a benefit of the optical system, not of digital recording, and the Laserdisc format enjoys the same non-contact benefit with analog optical signals. CDs suffer from disc rot and slowly degrade with time, even if they are stored properly and not played. M-DISC, a recordable optical technology which markets itself as remaining readable for 1,000 years, is available in certain markets, but as of late 2020 has never been sold in the CD-R format. 
(Sound could, however, be stored on an M-DISC DVD-R using the DVD-Audio format.) Noise For electronic audio signals, sources of noise include mechanical, electrical and thermal noise in the recording and playback cycle. The amount of noise that a piece of audio equipment adds to the original signal can be quantified. Mathematically, this can be expressed by means of the signal-to-noise ratio (SNR or S/N ratio). Sometimes the maximum possible dynamic range of the system is quoted instead. With digital systems, the quality of reproduction depends on the analog-to-digital and digital-to-analog conversion steps, and does not depend on the quality of the recording medium, provided it is adequate to retain the digital values without error. Digital media capable of bit-perfect storage and retrieval have been commonplace for some time, since they were generally developed for software storage which has no tolerance for error. The process of analog-to-digital conversion will, according to theory, always introduce quantization distortion. This distortion can be rendered as uncorrelated quantization noise through the use of dither. The magnitude of this noise or distortion is determined by the number of quantization levels. In binary systems this is determined by, and typically stated in terms of, the number of bits. Each additional bit adds approximately 6 dB in possible SNR (e.g. 24 × 6 = 144 dB for 24-bit and 120 dB for 20-bit quantization). The 16-bit digital system of Red Book audio CD has 2¹⁶ = 65,536 possible signal amplitudes, theoretically allowing for an SNR of 98 dB. Rumble Rumble is a form of noise caused by imperfections in the bearings of turntables. The platter tends to have a slight amount of motion besides the desired rotation and the turntable surface also moves up, down and side-to-side slightly. This additional motion is added to the desired signal as noise, usually of very low frequencies, creating a rumbling sound during quiet passages. Very inexpensive turntables sometimes used ball bearings, which are very likely to generate audible amounts of rumble. More expensive turntables tend to use massive sleeve bearings, which are much less likely to generate offensive amounts of rumble. Increased turntable mass also tends to lead to reduced rumble. A good turntable should have rumble at least 60 dB below the specified output level from the pick-up. Because they have no moving parts in the signal path, digital systems are not subject to rumble. Wow and flutter Wow and flutter are changes in frequency of an analog device and are the result of mechanical imperfections. Wow is a form of flutter that occurs at a slower rate. Wow and flutter are most noticeable on signals which contain pure tones. For LP records, the quality of the turntable will have a large effect on the level of wow and flutter. A good turntable will have wow and flutter values of less than 0.05%, which is the speed variation from the mean value. Wow and flutter can also be present in the recording, as a result of the imperfect operation of the recorder. Owing to their use of precision crystal oscillators for their timebase, digital systems are not subject to wow and flutter. Frequency response For digital systems, the upper limit of the frequency response is determined by the sampling frequency. The choice of sampling frequency in a digital system is based on the Nyquist–Shannon sampling theorem.
This states that a sampled signal can be reproduced exactly as long as it is sampled at a frequency greater than twice the bandwidth of the signal, the Nyquist frequency. Therefore, a sampling frequency of 40 kHz is mathematically sufficient to capture all the information contained in a signal having frequency components less than or equal to 20 kHz. The sampling theorem also requires that frequency content above the Nyquist frequency be removed from the signal before sampling it. This is accomplished using anti-aliasing filters which require a transition band to sufficiently reduce aliasing. The bandwidth provided by the 44,100 Hz sampling frequency used by the standard for audio CDs is sufficiently wide to cover the entire human hearing range, which roughly extends from 20 Hz to 20 kHz. Professional digital recorders may record higher frequencies, while some consumer and telecommunications systems record a more restricted frequency range. Some analog tape manufacturers specify frequency responses up to 20 kHz, but these measurements may have been made at lower signal levels. Compact Cassettes may have a response extending up to 15 kHz at full (0 dB) recording level. At lower levels (−10 dB), cassettes are typically limited to 20 kHz due to self-erasure of the tape media. The frequency response for a conventional LP player might be 20 Hz to 20 kHz, ±3 dB. The low-frequency response of vinyl records is restricted by rumble noise (described above), as well as the physical and electrical characteristics of the entire pickup arm and transducer assembly. The high-frequency response of vinyl depends on the cartridge. CD4 records contained frequencies up to 50 kHz. Frequencies of up to 122 kHz have been experimentally cut on LP records. Aliasing Digital systems require that all high-frequency signal content above the Nyquist frequency must be removed prior to sampling, which, if not done, will result in these ultrasonic frequencies "folding over" into frequencies in the audible range, producing a kind of distortion called aliasing. Aliasing is prevented in digital systems by an anti-aliasing filter. However, designing an analog filter that precisely removes all frequency content exactly above or below a certain cutoff frequency, is impractical. Instead, a sample rate is usually chosen which is above the Nyquist requirement. This solution is called oversampling, and allows a less aggressive and lower-cost anti-aliasing filter to be used. Early digital systems may have suffered from a number of signal degradations related to the use of analog anti-aliasing filters, e.g., time dispersion, nonlinear distortion, ripple, temperature dependence of filters etc. Using an oversampling design and delta-sigma modulation, a less aggressive analog anti-aliasing filter can be supplemented by a digital filter. This approach has several advantages since the digital filter can be made to have a near-ideal frequency domain transfer function, with low in-band ripple, and no aging or thermal drift. However, the digital anti-aliasing filter may introduce degradations due to time domain response particularly at lower sample rates. Analog systems are not subject to a Nyquist limit or aliasing and thus do not require anti-aliasing filters or any of the design considerations associated with them. Instead, the limits of analog storage formats are determined by the physical properties of their construction. Sampling rates CD quality audio is sampled at 44,100 Hz (Nyquist frequency = 22.05 kHz) and at 16 bits. 
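The folding behaviour behind aliasing can be made concrete with the CD sampling rate just mentioned: after sampling, a tone above the Nyquist frequency is indistinguishable from a tone reflected back into the audible band. A minimal numpy sketch (the 25 kHz test frequency is an arbitrary choice for illustration):

```python
import numpy as np

fs = 44_100                      # Hz, CD sampling rate
n = np.arange(1024)              # sample indices
f_ultrasonic = 25_000            # Hz, above the 22.05 kHz Nyquist frequency
f_alias = fs - f_ultrasonic      # 19,100 Hz, the frequency it folds back to

x_true = np.cos(2 * np.pi * f_ultrasonic * n / fs)
x_alias = np.cos(2 * np.pi * f_alias * n / fs)

# True: the two sampled waveforms are identical (to within rounding),
# which is why ultrasonic content must be filtered out before sampling.
print(np.allclose(x_true, x_alias))
```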
Sampling the waveform at higher frequencies and allowing for a greater number of bits per sample allows noise and distortion to be reduced further. DAT can sample audio at up to 48 kHz, while DVD-Audio can be 96 or 192 kHz and up to 24 bits resolution. With any of these sampling rates, signal information is captured above what is generally considered to be the human hearing frequency range. The higher sample rates impose less restrictions on anti-aliasing filter implementation which can result in both lower complexity and less signal distortion. Work done in 1981 by Muraoka et al. showed that music signals with frequency components above 20 kHz were only distinguished from those without by a few of the 176 test subjects. A perceptual study by Nishiguchi et al. (2004) concluded that "no significant difference was found between sounds with and without very high frequency components among the sound stimuli and the subjects... however, [Nishiguchi et al] can still neither confirm nor deny the possibility that some subjects could discriminate between musical sounds with and without very high frequency components." In blind listening tests conducted by Bob Katz in 1996, recounted in his book Mastering Audio: The Art and the Science, subjects using the same high-sample-rate reproduction equipment could not discern any audible difference between program material identically filtered to remove frequencies above 20 kHz versus 40 kHz. This demonstrates that presence or absence of ultrasonic content does not explain aural variation between sample rates. He posits that variation is due largely to performance of the band-limiting filters in converters. These results suggest that the main benefit to using higher sample rates is that it pushes consequential phase distortion from the band-limiting filters out of the audible range and that, under ideal conditions, higher sample rates may not be necessary. Dunn (1998) examined the performance of digital converters to see if these differences in performance could be explained by the band-limiting filters used in converters and looking for the artifacts they introduce. Quantization A signal is recorded digitally by an analog-to-digital converter, which measures the amplitude of an analog signal at regular intervals specified by the sampling rate, and then stores these sampled numbers in computer hardware. Numbers on computers represent a finite set of discrete values, which means that if an analog signal is digitally sampled using native methods (without dither), the amplitude of the audio signal will simply be rounded to the nearest representation. This process is called quantization, and these small errors in the measurements are manifested aurally as low level noise or distortion. This form of distortion, sometimes called granular or quantization distortion, has been pointed to as a fault of some digital systems and recordings particularly some early digital recordings, where the digital release was said to be inferior to the analog version. However, "if the quantisation is performed using the right dither, then the only consequence of the digitisation is effectively the addition of a white, uncorrelated, benign, random noise floor. The level of the noise depends on the number of the bits in the channel." The range of possible values that can be represented numerically by a sample is determined by the number of binary digits used. This is called the resolution, and is usually referred to as the bit depth in the context of PCM audio. 
The quantization noise level is directly determined by this number, decreasing exponentially (linearly in dB units) as the resolution increases. With an adequate bit depth, random noise from other sources will dominate and completely mask the quantization noise. The Redbook CD standard uses 16 bits, which keeps the quantization noise 96 dB below maximum amplitude, far below a discernible level with almost any source material. The addition of effective dither means that, "in practical terms, the resolution is limited by our ability to resolve sounds in noise. ... We have no problem measuring (and hearing) signals of –110dB in a well-designed 16- bit channel." DVD-Audio and most modern professional recording equipment allows for samples of 24 bits. Analog systems do not necessarily have discrete digital levels in which the signal is encoded. Consequently, the accuracy to which the original signal can be preserved is instead limited by the intrinsic noise-floor and maximum signal level of the media and the playback equipment. Quantization in analog media Since analog media is composed of molecules, the smallest microscopic structure represents the smallest quantization unit of the recorded signal. Natural dithering processes, like random thermal movements of molecules, the nonzero size of the reading instrument, and other averaging effects, make the practical limit larger than that of the smallest molecular structural feature. A theoretical LP composed of perfect diamond, with a groove size of 8 micron and a feature size of 0.5 nanometer, has a quantization that is similar to a 16-bit digital sample. Dither as a solution It is possible to make quantization noise audibly benign by applying dither. To do this, noise is added to the original signal before quantization. Optimal use of dither has the effect of making quantization error independent of the signal, and allows signal information to be retained below the least significant bit of the digital system. Dither algorithms also commonly have an option to employ some kind of noise shaping, which pushes the frequency of much of the dither noise to areas that are less audible to human ears, lowering the level of the noise floor apparent to the listener. Dither is commonly applied during mastering before final bit depth reduction, and also at various stages of DSP. Timing jitter One aspect that may degrade the performance of a digital system is jitter. This is the phenomenon of variations in time from what should be the correct spacing of discrete samples according to the sample rate. This can be due to timing inaccuracies of the digital clock. Ideally, a digital clock should produce a timing pulse at exactly regular intervals. Other sources of jitter within digital electronic circuits are data-induced jitter, where one part of the digital stream affects a subsequent part as it flows through the system, and power supply induced jitter, where noise from the power supply causes irregularities in the timing of signals in the circuits it powers. The accuracy of a digital system is dependent on the sampled amplitude values, but it is also dependent on the temporal regularity of these values. The analog versions of this temporal dependence are known as pitch error and wow-and-flutter. Periodic jitter produces modulation noise and can be thought of as being the equivalent of analog flutter. Random jitter alters the noise floor of the digital system. The sensitivity of the converter to jitter depends on the design of the converter. 
It has been shown that a random jitter of 5 ns may be significant for 16 bit digital systems. In 1998, Benjamin and Gannon researched the audibility of jitter using listening tests. They found that the lowest level of jitter to be audible was around 10 ns (rms). This was on a 17 kHz sine wave test signal. With music, no listeners found jitter audible at levels lower than 20 ns. A paper by Ashihara et al. (2005) attempted to determine the detection thresholds for random jitter in music signals. Their method involved ABX listening tests. When discussing their results, the authors commented that: So far, actual jitter in consumer products seems to be too small to be detected at least for reproduction of music signals. It is not clear, however, if detection thresholds obtained in the present study would really represent the limit of auditory resolution or it would be limited by resolution of equipment. Distortions due to very small jitter may be smaller than distortions due to non-linear characteristics of loudspeakers. Ashihara and Kiryu [8] evaluated linearity of loudspeaker and headphones. According to their observation, headphones seem to be more preferable to produce sufficient sound pressure at the ear drums with smaller distortions than loudspeakers. Signal processing After initial recording, it is common for the audio signal to be altered in some way, such as with the use of compression, equalization, delays and reverb. With analog, this comes in the form of outboard hardware components, and with digital, the same is typically accomplished with plug-ins in a digital audio workstation (DAW). A comparison of analog and digital filtering shows technical advantages to both methods. Digital filters are more precise and flexible. Analog filters are simpler, can be more efficient and do not introduce latency. Analog hardware When altering a signal with a filter, the output signal may differ in time from the signal at the input, which is measured as its phase response. All analog equalizers exhibit this behavior, with the amount of phase shift differing in some pattern, and centered around the band that is being adjusted. Although this effect alters the signal in a way other than a strict change in frequency response, it is usually not objectionable to listeners. Digital filters Because the variables involved can be precisely specified in the calculations, digital filters can be made to objectively perform better than analog components. Other processing such as delay and mixing can be done exactly. Digital filters are also more versatile. For example, the linear phase equalizer does not introduce frequency-dependent phase shift. This filter may be implemented digitally using a finite impulse response filter but has no practical implementation using analog components. A practical advantage of digital processing is the more convenient recall of settings. Plug-in parameters can be stored on the computer, whereas parameter details on an analog unit must be written down or otherwise recorded if the unit needs to be reused. This can be cumbersome when entire mixes must be recalled manually using an analog console and outboard gear. When working digitally, all parameters can simply be stored in a DAW project file and recalled instantly. Most modern professional DAWs also process plug-ins in real time, which means that processing can be largely non-destructive until final mix-down. Analog modeling Many plug-ins exist now that incorporate analog modeling. 
There are audio engineers that endorse them and feel that they compare equally in sound to the analog processes that they imitate. Analog modeling carries some benefits over its analog counterparts, such as the ability to remove noise from the algorithms and to make the parameters more flexible. On the other hand, other engineers also feel that the modeling is still inferior to the genuine outboard components and still prefer to mix "outside the box". Sound quality Subjective evaluation Subjective evaluation attempts to measure how well an audio component performs according to the human ear. The most common form of subjective test is a listening test, where the audio component is simply used in the context for which it was designed. This test is popular with hi-fi reviewers, where the component is used for a length of time by the reviewer who then will describe the performance in subjective terms. Common descriptions include whether the component has a bright or warm sound, or how well the component manages to present a spatial image. Another type of subjective test is done under more controlled conditions and attempts to remove possible bias from listening tests. These sorts of tests are done with the component hidden from the listener, and are called blind tests. To prevent possible bias from the person running the test, the blind test may be done so that this person is also unaware of the component under test. This type of test is called a double-blind test. This sort of test is often used to evaluate the performance of lossy audio compression. Critics of double-blind tests see them as not allowing the listener to feel fully relaxed when evaluating the system component, and argue that listeners therefore cannot judge differences between components as well as they can in sighted (non-blind) tests. Those who employ the double-blind testing method may try to reduce listener stress by allowing a certain amount of time for listener training. Early digital recordings Early digital audio machines had disappointing results, with digital converters introducing errors that the ear could detect. Record companies released their first LPs based on digital audio masters in the late 1970s. CDs became available in the early 1980s. At this time analog sound reproduction was a mature technology. There was a mixed critical response to early digital recordings released on CD. Compared to vinyl records, it was noticed that CD was far more revealing of the acoustics and ambient background noise of the recording environment. For this reason, recording techniques developed for analog disc, e.g., microphone placement, needed to be adapted to suit the new digital format. Some analog recordings were remastered for digital formats. Analog recordings made in natural concert hall acoustics tended to benefit from remastering. The remastering process was occasionally criticised for being poorly handled. When the original analog recording was fairly bright, remastering sometimes resulted in an unnatural treble emphasis. Super Audio CD and DVD-Audio The Super Audio CD (SACD) format was created by Sony and Philips, who were also the developers of the earlier standard audio CD format. SACD uses Direct Stream Digital (DSD) based on delta-sigma modulation. Using this technique, the audio data is stored as a sequence of fixed-amplitude (i.e. 1-bit) values at a sample rate of 2.8224 MHz, which is 64 times the 44.1 kHz sample rate used by CD. 
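To make the pulse-density idea concrete, here is a minimal first-order delta-sigma modulator sketch in Python. It is an illustration only: real DSD encoders use much higher-order modulators with noise shaping, and the oversampling factor, test frequency, and crude moving-average decoder below are assumptions chosen for brevity.

```python
import numpy as np

def delta_sigma_1bit(x):
    """First-order delta-sigma modulator: encode samples in [-1, 1] as a
    +/-1 bit stream whose short-term pulse density tracks the input."""
    integrator = 0.0
    feedback = 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - feedback               # accumulate the quantization error
        bits[i] = 1.0 if integrator >= 0.0 else -1.0  # 1-bit quantizer
        feedback = bits[i]
    return bits

# Hypothetical test signal: a 1 kHz sine, heavily oversampled (stand-in for DSD's 64 x 44.1 kHz).
fs = 64 * 44_100
t = np.arange(fs // 100) / fs          # 10 ms of signal
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)
bits = delta_sigma_1bit(x)

# A simple moving average (a crude stand-in for the analog low-pass filter)
# recovers an approximation of the original waveform from the bit stream.
recovered = np.convolve(bits, np.ones(64) / 64, mode="same")
print("max |recovered - x| =", float(np.max(np.abs(recovered - x))))
```

Averaging the ±1 stream over a short window recovers an approximation of the input, which is essentially what the analog low-pass filter described next does.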
At any point in time, the amplitude of the original analog signal is represented by the density of 1's or 0's in the data stream. This digital data stream can therefore be converted to analog by passing it through an analog low-pass filter. The DVD-Audio format uses standard, linear PCM at variable sampling rates and bit depths, which at the very least match and usually greatly surpass those of standard CD audio (16 bits, 44.1 kHz). In the popular Hi-Fi press, it had been suggested that linear PCM "creates [a] stress reaction in people", and that DSD "is the only digital recording system that does not [...] have these effects". This claim appears to originate from a 1980 article by Dr John Diamond. The core of the claim that PCM recordings (the only digital recording technique available at the time) created a stress reaction rested on using the pseudoscientific technique of applied kinesiology, for example by Dr Diamond at an AES 66th Convention (1980) presentation with the same title. Diamond had previously used a similar technique to demonstrate that rock music (as opposed to classical) was bad for your health due to the presence of the "stopped anapestic beat". Diamond's claims regarding digital audio were taken up by Mark Levinson, who asserted that while PCM recordings resulted in a stress reaction, DSD recordings did not. However, a double-blind subjective test between high resolution linear PCM (DVD-Audio) and DSD did not reveal a statistically significant difference. Listeners involved in this test noted their great difficulty in hearing any difference between the two formats. Analog preference The vinyl revival is in part because of analog audio's imperfection, which adds "warmth". Some listeners prefer such audio over that of a CD. Founder and editor Harry Pearson of The Absolute Sound magazine says that "LPs are decisively more musical. CDs drain the soul from music. The emotional involvement disappears". Dub producer Adrian Sherwood has similar feelings about the analog cassette tape, which he prefers because of its "warmer" sound. Those who favor the digital format point to the results of blind tests, which demonstrate the high performance possible with digital recorders. The assertion is that the "analog sound" is more a product of analog format inaccuracies than anything else. One of the first and largest supporters of digital audio was the classical conductor Herbert von Karajan, who said that digital recording was "definitely superior to any other form of recording we know". He also pioneered the unsuccessful Digital Compact Cassette and conducted the first recording ever to be commercially released on CD: Richard Strauss's Eine Alpensinfonie. The perception of analog audio being demonstrably superior was also called into question by music analysts following revelations that audiophile label Mobile Fidelity Sound Lab had been covertly using Direct Stream Digital files to produce vinyl releases marketed as coming from analog master tapes, with lawyer and audiophile Randy Braun stating that "These people who claim they have golden ears and can hear the difference between analog and digital, well, it turns out you couldn't." Hybrid systems While the words analog audio usually imply that the sound is described using a continuous signal approach, and the words digital audio imply a discrete approach, there are methods of encoding audio that fall somewhere between the two. Indeed, all analog systems show discrete (quantized) behaviour at the microscopic scale. 
While vinyl records and common compact cassettes are analog media and use quasi-linear physical encoding methods (e.g. spiral groove depth, tape magnetic field strength) without noticeable quantization or aliasing, there are analog non-linear systems that exhibit effects similar to those encountered on digital ones, such as aliasing and "hard" dynamic floors (e.g. frequency-modulated hi-fi audio on videotapes, PWM encoded signals). See also Audiophile Audio system measurements History of sound recording References Bibliography Pohlmann, K. (2005). Principles of Digital Audio 5th edn, McGraw-Hill Comp. External links Digital audio recording Sound recording Analog and digital recording
Comparison of analog and digital recording
[ "Technology" ]
6,270
[ "nan" ]
156,932
https://en.wikipedia.org/wiki/Peristalsis
Peristalsis is a type of intestinal motility, characterized by radially symmetrical contraction and relaxation of muscles that propagate in a wave down a tube, in an anterograde direction. Peristalsis is the progression of coordinated contractions of involuntary circular muscles, which is preceded by a simultaneous contraction of the longitudinal muscle and relaxation of the circular muscle in the lining of the gut. In much of a digestive tract, such as the human gastrointestinal tract, smooth muscle tissue contracts in sequence to produce a peristaltic wave, which propels a ball of food (called a bolus before being transformed into chyme in the stomach) along the tract. The peristaltic movement comprises relaxation of circular smooth muscles, then their contraction behind the chewed material to keep it from moving backward, then longitudinal contraction to push it forward. Earthworms use a similar mechanism to drive their locomotion, and some modern machinery imitates this design. The word comes from Neo-Latin and is derived from the Greek peristellein, "to wrap around," from peri-, "around" + stellein, "draw in, bring together; set in order". Human physiology Peristalsis is generally directed caudally, that is, towards the anus. This sense of direction might be attributable to the polarisation of the myenteric plexus. Because of the reliance of the peristaltic reflex on the myenteric plexus, it is also referred to as the myenteric reflex. Mechanism of the peristaltic reflex The food bolus causes a stretch of the gut smooth muscle that causes serotonin to be secreted to sensory neurons, which then become activated. These sensory neurons, in turn, activate neurons of the myenteric plexus, which then proceed to split into two cholinergic pathways: a retrograde and an anterograde. Activated neurons of the retrograde pathway release substance P and acetylcholine to contract the smooth muscle behind the bolus. The activated neurons of the anterograde pathway instead release nitric oxide and vasoactive intestinal polypeptide to relax the smooth muscle caudal to the bolus. This allows the food bolus to effectively be pushed forward along the digestive tract. Esophagus After food is chewed into a bolus, it is swallowed and moved through the esophagus. Smooth muscles contract behind the bolus to prevent it from being squeezed back into the mouth. Then rhythmic, unidirectional waves of contractions work to rapidly force the food into the stomach. The migrating motor complex (MMC) helps trigger peristaltic waves. This process works in one direction only, and its sole esophageal function is to move food from the mouth into the stomach (the MMC also functions to clear out remaining food in the stomach to the small bowel, and remaining particles in the small bowel into the colon). In the esophagus, two types of peristalsis occur: First, there is a primary peristaltic wave, which occurs when the bolus enters the esophagus during swallowing. The primary peristaltic wave forces the bolus down the esophagus and into the stomach in a wave lasting about 8–9 seconds. The wave travels down to the stomach even if the bolus of food descends at a greater rate than the wave itself, and continues even if for some reason the bolus gets stuck further up the esophagus. 
If the bolus gets stuck or moves slower than the primary peristaltic wave (as can happen when it is poorly lubricated), then stretch receptors in the esophageal lining are stimulated and a local reflex response causes a secondary peristaltic wave around the bolus, forcing it further down the esophagus, and these secondary waves continue indefinitely until the bolus enters the stomach. The process of peristalsis is controlled by the medulla oblongata. Esophageal peristalsis is typically assessed by performing an esophageal motility study. A third type of peristalsis, tertiary peristalsis, is dysfunctional and involves irregular, diffuse, simultaneous contractions. These contractions are suspect in esophageal dysmotility and present on a barium swallow as a "corkscrew esophagus". During vomiting, the propulsion of food up the esophagus and out the mouth comes from the contraction of the abdominal muscles; peristalsis does not reverse in the esophagus. Stomach When a peristaltic wave reaches at the end of the esophagus, the cardiac sphincter (gastroesophageal sphincter) opens, allowing the passage of bolus into the stomach. The gastroesophageal sphincter normally remains closed and does not allow the stomach's food contents to move back. The churning movements of the stomach's thick muscular wall blend the food thoroughly with the acidic gastric juice, producing a mixture called the chyme. The muscularis layer of the stomach is thickest and maximum peristalsis occurs here. After short intervals, the pyloric sphincter keeps on opening and closing so the chyme is fed into the intestine in installments. Small intestine Once processed and digested by the stomach, the semifluid chyme is passed through the pyloric sphincter into the small intestine. Once past the stomach, a typical peristaltic wave lasts only a few seconds, traveling at only a few centimeters per second. Its primary purpose is to mix the chyme in the intestine rather than to move it forward in the intestine. Through this process of mixing and continued digestion and absorption of nutrients, the chyme gradually works its way through the small intestine to the large intestine. In contrast to peristalsis, segmentation contractions result in that churning and mixing without pushing materials further down the digestive tract. Large intestine Although the large intestine has peristalsis of the type that the small intestine uses, it is not the primary propulsion. Instead, general contractions called mass action contractions occur one to three times per day in the large intestine, propelling the chyme (now feces) toward the rectum. Mass movements often tend to be triggered by meals, as the presence of chyme in the stomach and duodenum prompts them (gastrocolic reflex). Minimum peristalsis is found in the rectum part of the large intestine as a result of the thinnest muscularis layer. Lymph The human lymphatic system has no central pump. Instead, lymph circulates through peristalsis in the lymph capillaries as well as valves in the capillaries, compression during contraction of adjacent skeletal muscle, and arterial pulsation. Sperm During ejaculation, the smooth muscle in the walls of the vasa deferentia contract reflexively in peristalsis, propelling sperm from the testicles to the urethra. Earthworms The earthworm is a limbless annelid worm with a hydrostatic skeleton that moves by peristalsis. Its hydrostatic skeleton consists of a fluid-filled body cavity surrounded by an extensible body wall. 
The worm moves by radially constricting the anterior portion of its body, increasing length via hydrostatic pressure. This constricted region propagates posteriorly along the worm's body. As a result, each segment is extended forward, then relaxes and re-contacts the substrate, with hair-like setae preventing backward slipping. Various other invertebrates, such as caterpillars and millipedes, also move by peristalsis. Machinery A peristaltic pump is a positive-displacement pump in which a motor pinches advancing portions of a flexible tube to propel a fluid within the tube. The pump isolates the fluid from the machinery, which is important if the fluid is abrasive or must remain sterile. Robots have been designed that use peristalsis to achieve locomotion, as the earthworm uses it. Related terms Aperistalsis refers to a lack of propulsion. It can result from achalasia of the smooth muscle involved. Basal electrical rhythm is a slow wave of electrical activity that can initiate a contraction. Catastalsis is a related intestinal muscle process. Ileus is a disruption of the normal propulsive ability of the gastrointestinal tract caused by the failure of peristalsis. Retroperistalsis, the reverse of peristalsis Segmentation contractions are another type of intestinal motility. Intestinal desmosis, the atrophy of the tendinous plexus layer, may cause disturbed gut motility. References External links Interactive 3D display of swallow waves at menne-biomed.de Overview at colostate.edu Digestive system
Peristalsis
[ "Biology" ]
1,902
[ "Digestive system", "Organ systems" ]
156,940
https://en.wikipedia.org/wiki/Electrophysiology
Electrophysiology (from Greek ēlektron, "amber" [see the etymology of "electron"]; physis, "nature, origin"; and -logia) is the branch of physiology that studies the electrical properties of biological cells and tissues. It involves measurements of voltage changes or electric current or manipulations on a wide variety of scales from single ion channel proteins to whole organs like the heart. In neuroscience, it includes measurements of the electrical activity of neurons, and, in particular, action potential activity. Recordings of large-scale electric signals from the nervous system, such as electroencephalography, may also be referred to as electrophysiological recordings. They are useful for electrodiagnosis and monitoring. Definition and scope Classical electrophysiological techniques Principle and mechanisms Electrophysiology is the branch of physiology that pertains broadly to the flow of ions (ion current) in biological tissues and, in particular, to the electrical recording techniques that enable the measurement of this flow. Classical electrophysiology techniques involve placing electrodes into various preparations of biological tissue. The principal types of electrodes are: Simple solid conductors, such as discs and needles (singles or arrays, often insulated except for the tip), Tracings on printed circuit boards or flexible polymers, also insulated except for the tip, and Hollow, often elongated or 'pulled', tubes filled with an electrolyte, such as glass pipettes filled with potassium chloride solution or another electrolyte solution. The principal preparations include: living organisms (for example, in insects), excised tissue (acute or cultured), dissociated cells from excised tissue (acute or cultured), artificially grown cells or tissues, or hybrids of the above. Neuronal electrophysiology is the study of the electrical properties of biological cells and tissues within the nervous system. With neuronal electrophysiology, doctors and specialists can investigate how neuronal disorders arise by looking at an individual's brain activity, such as which portions of the brain are active during particular situations. If an electrode is small enough (micrometers) in diameter, then the electrophysiologist may choose to insert the tip into a single cell. Such a configuration allows direct observation and recording of the intracellular electrical activity of a single cell. However, this invasive setup reduces the life of the cell and causes a leak of substances across the cell membrane. Intracellular activity may also be observed using a specially formed (hollow) glass pipette containing an electrolyte. In this technique, the microscopic pipette tip is pressed against the cell membrane, to which it tightly adheres by an interaction between glass and lipids of the cell membrane. The electrolyte within the pipette may be brought into fluid continuity with the cytoplasm by delivering a pulse of negative pressure to the pipette in order to rupture the small patch of membrane encircled by the pipette rim (whole-cell recording). Alternatively, ionic continuity may be established by "perforating" the patch by allowing exogenous pore-forming agents within the electrolyte to insert themselves into the membrane patch (perforated patch recording). Finally, the patch may be left intact (patch recording). The electrophysiologist may choose not to insert the tip into a single cell. Instead, the electrode tip may be left in continuity with the extracellular space. 
If the tip is small enough, such a configuration may allow indirect observation and recording of action potentials from a single cell, termed single-unit recording. Depending on the preparation and precise placement, an extracellular configuration may pick up the activity of several nearby cells simultaneously, termed multi-unit recording. As electrode size increases, the resolving power decreases. Larger electrodes are sensitive only to the net activity of many cells, termed local field potentials. Still larger electrodes, such as uninsulated needles and surface electrodes used by clinical and surgical neurophysiologists, are sensitive only to certain types of synchronous activity within populations of cells numbering in the millions. Other classical electrophysiological techniques include single channel recording and amperometry. Electrographic modalities by body part Electrophysiological recording in general is sometimes called electrography (from electro- + -graphy, "electrical recording"), with the record thus produced being an electrogram. However, the word electrography has other senses (including electrophotography), and the specific types of electrophysiological recording are usually called by specific names, constructed on the pattern of electro- + [body part combining form] + -graphy (abbreviation ExG). Relatedly, the word electrogram (not being needed for those other senses) often carries the specific meaning of intracardiac electrogram, which is like an electrocardiogram but with some invasive leads (inside the heart) rather than only noninvasive leads (on the skin). Electrophysiological recording for clinical diagnostic purposes is included within the category of electrodiagnostic testing. The various "ExG" modes are as follows: Optical electrophysiological techniques Optical electrophysiological techniques were created by scientists and engineers to overcome one of the main limitations of classical techniques. Classical techniques allow observation of electrical activity at approximately a single point within a volume of tissue. Classical techniques singularize a distributed phenomenon. Interest in the spatial distribution of bioelectric activity prompted development of molecules capable of emitting light in response to their electrical or chemical environment. Examples are voltage sensitive dyes and fluorescing proteins. After introducing one or more such compounds into tissue via perfusion, injection or gene expression, the 1 or 2-dimensional distribution of electrical activity may be observed and recorded. Intracellular recording Intracellular recording involves measuring voltage and/or current across the membrane of a cell. To make an intracellular recording, the tip of a fine (sharp) microelectrode must be inserted inside the cell, so that the membrane potential can be measured. Typically, the resting membrane potential of a healthy cell will be -60 to -80 mV, and during an action potential the membrane potential might reach +40 mV. In 1963, Alan Lloyd Hodgkin and Andrew Fielding Huxley won the Nobel Prize in Physiology or Medicine for their contribution to understanding the mechanisms underlying the generation of action potentials in neurons. Their experiments involved intracellular recordings from the giant axon of Atlantic squid (Loligo pealei), and were among the first applications of the "voltage clamp" technique. 
Today, most microelectrodes used for intracellular recording are glass micropipettes, with a tip diameter of < 1 micrometre, and a resistance of several megohms. The micropipettes are filled with a solution that has a similar ionic composition to the intracellular fluid of the cell. A chlorided silver wire inserted into the pipette connects the electrolyte electrically to the amplifier and signal processing circuit. The voltage measured by the electrode is compared to the voltage of a reference electrode, usually a silver chloride-coated silver wire in contact with the extracellular fluid around the cell. In general, the smaller the electrode tip, the higher its electrical resistance. So an electrode is a compromise between size (small enough to penetrate a single cell with minimum damage to the cell) and resistance (low enough so that small neuronal signals can be discerned from thermal noise in the electrode tip). Maintaining healthy brain slices is pivotal for successful electrophysiological recordings. The preparation of these slices is commonly achieved with tools such as the Compresstome vibratome, ensuring optimal conditions for accurate and reliable recordings. Nevertheless, even with the highest standards of tissue handling, slice preparation induces rapid and robust phenotype changes of the brain's major immune cells, microglia, which must be taken into consideration when using this model. Voltage clamp The voltage clamp technique allows an experimenter to "clamp" the cell potential at a chosen value. This makes it possible to measure how much ionic current crosses a cell's membrane at any given voltage. This is important because many of the ion channels in the membrane of a neuron are voltage-gated ion channels, which open only when the membrane voltage is within a certain range. Voltage clamp measurements of current are made possible by the near-simultaneous digital subtraction of transient capacitive currents that pass as the recording electrode and cell membrane are charged to alter the cell's potential. Current clamp The current clamp technique records the membrane potential by injecting current into a cell through the recording electrode. Unlike in the voltage clamp mode, where the membrane potential is held at a level determined by the experimenter, in "current clamp" mode the membrane potential is free to vary, and the amplifier records whatever voltage the cell generates on its own or as a result of stimulation. This technique is used to study how a cell responds when electric current enters a cell; this is important for instance for understanding how neurons respond to neurotransmitters that act by opening membrane ion channels. Most current-clamp amplifiers provide little or no amplification of the voltage changes recorded from the cell. The "amplifier" is actually an electrometer, sometimes referred to as a "unity gain amplifier"; its main purpose is to reduce the electrical load on the small signals (in the mV range) produced by cells so that they can be accurately recorded by low-impedance electronics. The amplifier increases the current behind the signal while decreasing the resistance over which that current passes. Consider this example based on Ohm's law: A voltage of 10 mV is generated by passing 10 nanoamperes of current across 1 MΩ of resistance. The electrometer changes this "high impedance signal" to a "low impedance signal" by using a voltage follower circuit. 
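Spelled out, the arithmetic of that Ohm's-law example is simply

$$V = IR = (10\times10^{-9}\,\mathrm{A})\times(1\times10^{6}\,\Omega) = 1\times10^{-2}\,\mathrm{V} = 10\ \mathrm{mV}.$$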
A voltage follower reads the voltage on the input (caused by a small current through a big resistor). It then instructs a parallel circuit that has a large current source behind it (the electrical mains) and adjusts the resistance of that parallel circuit to give the same output voltage, but across a lower resistance. Patch-clamp recording This technique was developed by Erwin Neher and Bert Sakmann who received the Nobel Prize in 1991. Conventional intracellular recording involves impaling a cell with a fine electrode; patch-clamp recording takes a different approach. A patch-clamp microelectrode is a micropipette with a relatively large tip diameter. The microelectrode is placed next to a cell, and gentle suction is applied through the microelectrode to draw a piece of the cell membrane (the 'patch') into the microelectrode tip; the glass tip forms a high resistance 'seal' with the cell membrane. This configuration is the "cell-attached" mode, and it can be used for studying the activity of the ion channels that are present in the patch of membrane. If more suction is now applied, the small patch of membrane in the electrode tip can be displaced, leaving the electrode sealed to the rest of the cell. This "whole-cell" mode allows very stable intracellular recording. A disadvantage (compared to conventional intracellular recording with sharp electrodes) is that the intracellular fluid of the cell mixes with the solution inside the recording electrode, and so some important components of the intracellular fluid can be diluted. A variant of this technique, the "perforated patch" technique, tries to minimize these problems. Instead of applying suction to displace the membrane patch from the electrode tip, it is also possible to make small holes on the patch with pore-forming agents so that large molecules such as proteins can stay inside the cell and ions can pass through the holes freely. Also the patch of membrane can be pulled away from the rest of the cell. This approach enables the membrane properties of the patch to be analyzed pharmacologically. Patch-clamp may also be combined with RNA sequencing in a technique known as patch-seq by extracting the cellular contents following recording in order to characterize the electrophysiological properties relationship to gene expression and cell-type. Sharp electrode recording In situations where one wants to record the potential inside the cell membrane with minimal effect on the ionic constitution of the intracellular fluid a sharp electrode can be used. These micropipettes (electrodes) are again like those for patch clamp pulled from glass capillaries, but the pore is much smaller so that there is very little ion exchange between the intracellular fluid and the electrolyte in the pipette. The electrical resistance of the micropipette electrode is reduced by filling with 2-4M KCl, rather than a salt concentration which mimics the intracellular ionic concentrations as used in patch clamping. Often the tip of the electrode is filled with various kinds of dyes like Lucifer yellow to fill the cells recorded from, for later confirmation of their morphology under a microscope. The dyes are injected by applying a positive or negative, DC or pulsed voltage to the electrodes depending on the polarity of the dye. Extracellular recording Single-unit recording An electrode introduced into the brain of a living animal will detect electrical activity that is generated by the neurons adjacent to the electrode tip. 
If the electrode is a microelectrode, with a tip size of about 1 micrometre, the electrode will usually detect the activity of at most one neuron. Recording in this way is in general called "single-unit" recording. The action potentials recorded are very much like the action potentials that are recorded intracellularly, but the signals are very much smaller (typically about 1 mV). Most recordings of the activity of single neurons in anesthetized and conscious animals are made in this way. Recordings of single neurons in living animals have provided important insights into how the brain processes information. For example, David Hubel and Torsten Wiesel recorded the activity of single neurons in the primary visual cortex of the anesthetized cat, and showed how single neurons in this area respond to very specific features of a visual stimulus. Hubel and Wiesel were awarded the Nobel Prize in Physiology or Medicine in 1981. To prepare the brain for such electrode insertion, delicate slicing devices like the compresstome vibratome, leica vibratome, microtome are often employed. These instruments aid in obtaining precise, thin brain sections necessary for electrode placement, enabling neuroscientists to target specific brain regions for recording. Multi-unit recording If the electrode tip is slightly larger, then the electrode might record the activity generated by several neurons. This type of recording is often called "multi-unit recording", and is often used in conscious animals to record changes in the activity in a discrete brain area during normal activity. Recordings from one or more such electrodes that are closely spaced can be used to identify the number of cells around it as well as which of the spikes come from which cell. This process is called spike sorting and is suitable in areas where there are identified types of cells with well defined spike characteristics. If the electrode tip is bigger still, in general the activity of individual neurons cannot be distinguished but the electrode will still be able to record a field potential generated by the activity of many cells. Field potentials Extracellular field potentials are local current sinks or sources that are generated by the collective activity of many cells. Usually, a field potential is generated by the simultaneous activation of many neurons by synaptic transmission. The diagram to the right shows hippocampal synaptic field potentials. At the right, the lower trace shows a negative wave that corresponds to a current sink caused by positive charges entering cells through postsynaptic glutamate receptors, while the upper trace shows a positive wave that is generated by the current that leaves the cell (at the cell body) to complete the circuit. For more information, see local field potential. Amperometry Amperometry uses a carbon electrode to record changes in the chemical composition of the oxidized components of a biological solution. Oxidation and reduction is accomplished by changing the voltage at the active surface of the recording electrode in a process known as "scanning". Because certain brain chemicals lose or gain electrons at characteristic voltages, individual species can be identified. Amperometry has been used for studying exocytosis in the nervous and endocrine systems. Many monoamine neurotransmitters; e.g., norepinephrine (noradrenalin), dopamine, and serotonin (5-HT) are oxidizable. 
The method can also be used with cells that do not secrete oxidizable neurotransmitters by "loading" them with 5-HT or dopamine. Planar patch clamp Planar patch clamp is a novel method developed for high throughput electrophysiology. Instead of positioning a pipette on an adherent cell, cell suspension is pipetted on a chip containing a microstructured aperture. A single cell is then positioned on the hole by suction and a tight connection (Gigaseal) is formed. The planar geometry offers a variety of advantages compared to the classical experiment: It allows for integration of microfluidics, which enables automatic compound application for ion channel screening. The system is accessible for optical or scanning probe techniques. Perfusion of the intracellular side can be performed. Other methods Solid-supported membrane (SSM)-based With this electrophysiological approach, proteoliposomes, membrane vesicles, or membrane fragments containing the channel or transporter of interest are adsorbed to a lipid monolayer painted over a functionalized electrode. This electrode consists of a glass support, a chromium layer, a gold layer, and an octadecyl mercaptane monolayer. Because the painted membrane is supported by the electrode, it is called a solid-supported membrane. Mechanical perturbations, which usually destroy a biological lipid membrane, do not influence the life-time of an SSM. The capacitive electrode (composed of the SSM and the absorbed vesicles) is so mechanically stable that solutions may be rapidly exchanged at its surface. This property allows the application of rapid substrate/ligand concentration jumps to investigate the electrogenic activity of the protein of interest, measured via capacitive coupling between the vesicles and the electrode. Bioelectric recognition assay (BERA) The bioelectric recognition assay (BERA) is a novel method for determination of various chemical and biological molecules by measuring changes in the membrane potential of cells immobilized in a gel matrix. Apart from the increased stability of the electrode-cell interface, immobilization preserves the viability and physiological functions of the cells. BERA is used primarily in biosensor applications in order to assay analytes that can interact with the immobilized cells by changing the cell membrane potential. In this way, when a positive sample is added to the sensor, a characteristic, "signature-like" change in electrical potential occurs. BERA is the core technology behind the recently launched pan-European FOODSCAN project, about pesticide and food risk assessment in Europe. BERA has been used for the detection of human viruses (hepatitis B and C viruses and herpes viruses), veterinary disease agents (foot and mouth disease virus, prions, and blue tongue virus), and plant viruses (tobacco and cucumber viruses) in a specific, rapid (1–2 minutes), reproducible, and cost-efficient fashion. The method has also been used for the detection of environmental toxins, such as pesticides and mycotoxins in food, and 2,4,6-trichloroanisole in cork and wine, as well as the determination of very low concentrations of the superoxide anion in clinical samples. A BERA sensor has two parts: The consumable biorecognition elements The electronic read-out device with embedded artificial intelligence. A recent advance is the development of a technique called molecular identification through membrane engineering (MIME). 
This technique allows for building cells with defined specificity for virtually any molecule of interest, by embedding thousands of artificial receptors into the cell membrane. Computational electrophysiology While not strictly constituting an experimental measurement, methods have been developed to examine the conductive properties of proteins and biomembranes in silico. These are mainly molecular dynamics simulations in which a model system like a lipid bilayer is subjected to an externally applied voltage. Studies using these setups have been able to study dynamical phenomena like electroporation of membranes and ion translocation by channels. The benefit of such methods is the high level of detail of the active conduction mechanism, given by the inherently high resolution and data density that atomistic simulation affords. There are significant drawbacks, given by the uncertainty of the legitimacy of the model and the computational cost of modeling systems that are large enough and over sufficient timescales to be considered reproducing the macroscopic properties of the systems themselves. While atomistic simulations may access timescales close to, or into the microsecond domain, this is still several orders of magnitude lower than even the resolution of experimental methods such as patch-clamping. Clinical electrophysiology Clinical electrophysiology is the study of how electrophysiological principles and technologies can be applied to human health. For example, clinical cardiac electrophysiology is the study of the electrical properties which govern heart rhythm and activity. Cardiac electrophysiology can be used to observe and treat disorders such as arrhythmia (irregular heartbeat). For example, a doctor may insert a catheter containing an electrode into the heart to record the heart muscle's electrical activity. Another example of clinical electrophysiology is clinical neurophysiology. In this medical specialty, doctors measure the electrical properties of the brain, spinal cord, and nerves. Scientists such as Duchenne de Boulogne (1806–1875) and Nathaniel A. Buchwald (1924–2006) are considered to have greatly advanced the field of neurophysiology, enabling its clinical applications. Clinical reporting guidelines Minimum Information (MI) standards or reporting guidelines specify the minimum amount of meta data (information) and data required to meet a specific aim or aims in a clinical study. The "Minimum Information about a Neuroscience investigation" (MINI) family of reporting guideline documents aims to provide a consistent set of guidelines in order to report an electrophysiology experiment. In practice a MINI module comprises a checklist of information that should be provided (for example about the protocols employed) when a data set is described for publication. See also References External links Book chapter on Planar Patch Clamp Ion channels Neuroimaging Neurophysiology Biophysics
Electrophysiology
[ "Physics", "Chemistry", "Biology" ]
4,734
[ "Neurochemistry", "Applied and interdisciplinary physics", "Biophysics", "Ion channels" ]
156,952
https://en.wikipedia.org/wiki/Butanone
Butanone, also known as methyl ethyl ketone (MEK) or ethyl methyl ketone, is an organic compound with the formula CH3C(O)CH2CH3. This colorless liquid ketone has a sharp, sweet odor reminiscent of acetone. It is produced industrially on a large scale, but occurs in nature only in trace amounts. It is partially soluble in water, and is commonly used as an industrial solvent. It is an isomer of another solvent, tetrahydrofuran. Production Butanone may be produced by oxidation of 2-butanol. The dehydrogenation of 2-butanol is catalysed by copper, zinc, or bronze: CH3CH(OH)CH2CH3 → CH3C(O)CH2CH3 + H2 This is used to produce approximately 700 million kilograms yearly. Other syntheses that have been examined but not implemented include Wacker oxidation of 2-butene and oxidation of isobutylbenzene, which is analogous to the industrial production of acetone. The cumene process can be modified to produce phenol and a mixture of acetone and butanone instead of only the phenol and acetone of the original process. Both liquid-phase oxidation of heavy naphtha and the Fischer–Tropsch reaction produce mixed oxygenate streams, from which 2-butanone is extracted by fractionation. Applications Solvent Butanone is an effective and common solvent and is used in processes involving gums, resins, cellulose acetate and nitrocellulose coatings and in vinyl films. For this reason it finds use in the manufacture of plastics and textiles, in the production of paraffin wax, and in household products such as lacquers, varnishes, paint remover, denatured alcohol (as a denaturing agent), glues, and cleaning agents. It is a prime component of plumbers' priming fluid, used to clean PVC materials. It has similar solvent properties to acetone but boils at a higher temperature and has a significantly slower evaporation rate. Unlike acetone, it forms an azeotrope with water, making it useful for azeotropic distillation of moisture in certain applications. Butanone is also used in dry erase markers as the solvent of the erasable dye. The hydroxylamine derivative of butanone is methyl ethyl ketone oxime (MEKO), which also finds use in paints and varnishes as an anti-skinning agent. Plastic welding As butanone dissolves polystyrene and many other plastics, it is sold as "model cement" for use in connecting parts of scale model kits. Though often considered an adhesive, it functions as a welding agent in this context. Other uses Butanone is the precursor to methyl ethyl ketone peroxide, which is a catalyst for some polymerization reactions such as crosslinking of unsaturated polyester resins. Dimethylglyoxime can be prepared from butanone first by reaction with ethyl nitrite to give diacetyl monoxime, followed by conversion to the dioxime. In the peroxide process for producing hydrazine, the starting chemical, ammonia, is bonded to butanone, oxidized by hydrogen peroxide, and then bonded to another ammonia molecule. In the final step of the process, hydrolysis produces the desired product, hydrazine, and regenerates the butanone. Me(Et)C=NN=C(Et)Me + 2 H2O → 2 Me(Et)C=O + N2H4 Safety Flammability Butanone can react with most oxidizing materials and can produce fires. It is moderately explosive, requiring only a small flame or spark to cause a vigorous reaction. The vapor is heavier than air, so it can accumulate at low points. Its vapor is explosive in air at concentrations between 1.4 and 11.4%. Concentrations in the air high enough to be flammable are intolerable to humans due to the irritating nature of the vapor. 
Butanone fires should be extinguished with carbon dioxide, dry agents, or alcohol-resistant foam. The ignition of butanone vapor was the proximate cause of the 2007 Xcel Energy Cabin Creek fire, resulting in the deaths of five workers in a hydroelectric penstock. After the incident, the U.S. Chemical Safety and Hazard Investigation Board specifically noted the danger posed by butanone in confined spaces, and suggested 1,1,1-trichloroethane or limonene as safer alternatives. Health effects Butanone is a constituent of tobacco smoke. It is an irritant to the eyes and nose of humans. Serious animal health effects have been seen only at very high levels. There are no long-term studies of animals breathing or drinking it, and no carcinogenicity studies in animals exposed by those routes. There is some evidence that butanone can potentiate the toxicity of other solvents, in contrast to the calculation of mixed solvent exposures by simply adding exposures. The United States Environmental Protection Agency (EPA) has listed butanone as a toxic chemical. There are reports of neuropsychological effects. It is rapidly absorbed through undamaged skin and lungs. It contributes to the formation of ground-level ozone, which is toxic in low concentrations. Regulation Butanone is listed as a Table II precursor under the United Nations Convention Against Illicit Traffic in Narcotic Drugs and Psychotropic Substances. Emission of butanone was regulated in the US as a hazardous air pollutant, because it is a volatile organic compound contributing to the formation of tropospheric (ground-level) ozone. In 2005, the US Environmental Protection Agency removed butanone from the list of hazardous air pollutants (HAPs). See also Butyraldehyde Butane n-Butanol 2-Butanol Notes References External links International Chemical Safety Card 0179 National Pollutant Inventory: Methyl Ethyl Ketone Fact Sheet NIOSH Pocket Guide to Chemical Hazards US EPA Datasheet Alkanones Ketone solvents Pollutants Commodity chemicals Sweet-smelling chemicals
Butanone
[ "Chemistry" ]
1,301
[ "Commodity chemicals", "Products of chemical industry" ]
156,959
https://en.wikipedia.org/wiki/Transmembrane%20domain
A transmembrane domain (TMD, TM domain) is a membrane-spanning protein domain. TMDs may consist of one or several alpha-helices or a transmembrane beta barrel. Because the interior of the lipid bilayer is hydrophobic, the amino acid residues in TMDs are often hydrophobic, although proteins such as membrane pumps and ion channels can contain polar residues. TMDs vary greatly in size and hydrophobicity; they may adopt organelle-specific properties. Functions of transmembrane domains Transmembrane domains are known to perform a variety of functions. These include: Anchoring transmembrane proteins to the membrane. Facilitating the transport of molecules such as ions and proteins across biological membranes; usually hydrophilic residues and binding sites in the TMDs help in this process. Signal transduction across the membrane; many transmembrane proteins, such as G protein-coupled receptors, receive extracellular signals. TMDs then propagate those signals across the membrane to induce an intracellular effect. Assisting in vesicle fusion; the function of TMDs is not well understood, but they have been shown to be critical for the fusion reaction, possibly as a result of TMDs affecting the tension of the lipid bilayer. Mediating transport and sorting of transmembrane proteins; TMDs have been shown to work in tandem with cytosolic sorting signals, with length and hydrophobicity being the main determinants in TMD sorting. Longer and more hydrophobic TMDs aid in sorting proteins to the cell membrane, whereas shorter and less hydrophobic TMDs are used to retain proteins in the endoplasmic reticulum and the Golgi apparatus. The exact mechanism of this process is still unknown. Identification of transmembrane helices Transmembrane helices are visible in structures of membrane proteins determined by X-ray diffraction. They may also be predicted on the basis of hydrophobicity scales. Because the interior of the bilayer and the interiors of most proteins of known structure are hydrophobic, it is presumed to be a requirement of the amino acids that span a membrane that they be hydrophobic as well. However, membrane pumps and ion channels also contain numerous charged and polar residues within the generally non-polar transmembrane segments. Using "hydrophobicity analysis" to predict transmembrane helices enables a prediction in turn of the "transmembrane topology" of a protein; i.e. prediction of what parts of it protrude into the cell, what parts protrude out, and how many times the protein chain crosses the membrane (a minimal sketch of such an analysis appears at the end of this article). Transmembrane helices can also be identified in silico using bioinformatic tools such as TMHMM. The role of membrane protein biogenesis and quality control factors Since protein translation occurs in the cytosol (an aqueous environment), factors that recognize TMDs and protect them in this hostile environment are required. Additional factors that allow the TMD to be incorporated into the target membrane (i.e. endoplasmic reticulum or other organelles) are also required. Factors also detect TMD misfolding within the membrane and perform quality control functions. These factors must be able to recognize a highly variable set of TMDs and can be segregated into those active in the cytosol or active in the membrane. Cytosolic recognition factors Cytosolic recognition factors are thought to use two distinct strategies. In the co-translational strategy the recognition and shielding are coupled to protein synthesis. 
Genome-wide association studies indicate that the majority of membrane proteins targeting the endoplasmic reticulum are handled by the signal recognition particle, which is bound to the ribosomal exit tunnel and initiates recognition and shielding as the protein is translated. The second strategy involves tail-anchored proteins, defined by a single TMD located close to the carboxyl terminus of the membrane protein. Once translation is completed, the tail-anchored TMD remains in the ribosomal exit tunnel, and an ATPase mediates targeting to the endoplasmic reticulum. Examples of shuttling factors include TRC40 in higher eukaryotes and Get3 in yeast. Furthermore, general TMD-binding factors protect against aggregation and other disrupting interactions. SGTA and calmodulin are two well-known general TMD-binding factors. Quality control of membrane proteins involves TMD-binding factors that are linked to the ubiquitin–proteasome system. Membrane recognition factors Once transported, factors assist with insertion of the TMD across the hydrophilic phosphate "head group" layer of the phospholipid membrane. Quality control factors must be able to discern function and topology, as well as facilitate extraction to the cytosol. The signal recognition particle transports membrane proteins to the Sec translocation channel, positioning the ribosome exit tunnel proximal to the translocon central pore and minimizing exposure of the TMD to the cytosol. Insertases can also mediate TMD insertion into the lipid bilayer. Insertases include the bacterial YidC, mitochondrial Oxa1, and chloroplast Alb3, all of which are evolutionarily related. The conserved Hrd1 and Derlin enzyme families are examples of membrane-bound quality control factors. Examples Tetraspanins have 4 conserved transmembrane domains. Mildew locus o (mlo) proteins have 7 conserved transmembrane domains that form alpha helices. References Transmembrane proteins Protein structural motifs
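To make the hydrophobicity-analysis idea discussed above concrete, here is a minimal sliding-window hydropathy sketch in Python. It uses the published Kyte-Doolittle scale; the 19-residue window, the 1.6 threshold and the example sequence are illustrative assumptions, and this is not the TMHMM algorithm, which is a hidden-Markov-model predictor.

```python
# Kyte-Doolittle hydropathy values (standard published scale).
KD = {
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5,
    "Q": -3.5, "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5,
    "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8, "P": -1.6,
    "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2,
}

def hydropathy_windows(seq, window=19, threshold=1.6):
    """Return (start, mean hydropathy) for windows whose average exceeds
    the threshold; such windows flag candidate transmembrane helices."""
    hits = []
    for start in range(len(seq) - window + 1):
        mean = sum(KD[aa] for aa in seq[start:start + window]) / window
        if mean >= threshold:
            hits.append((start, round(mean, 2)))
    return hits

# Hypothetical example: a hydrophobic 19-residue stretch flanked by polar residues.
example = "MKTSDRRE" + "LLVALLAIVFGVILLAIGV" + "KRDDEQSN"
print(hydropathy_windows(example))
```

Stretches of consecutive high-hydropathy windows correspond to predicted membrane-spanning helices, and counting and ordering them is what underlies simple topology prediction.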
Transmembrane domain
[ "Biology" ]
1,155
[ "Protein structural motifs", "Protein classification" ]
156,962
https://en.wikipedia.org/wiki/X-ray%20scattering%20techniques
X-ray scattering techniques are a family of non-destructive analytical techniques which reveal information about the crystal structure, chemical composition, and physical properties of materials and thin films. These techniques are based on observing the scattered intensity of an X-ray beam hitting a sample as a function of incident and scattered angle, polarization, and wavelength or energy. Note that X-ray diffraction is sometimes considered a sub-set of X-ray scattering, where the scattering is elastic and the scattering object is crystalline, so that the resulting pattern contains sharp spots analyzed by X-ray crystallography. However, both scattering and diffraction are related general phenomena and the distinction has not always existed. Thus Guinier's classic text from 1963 is titled "X-ray diffraction in Crystals, Imperfect Crystals and Amorphous Bodies", so 'diffraction' was clearly not restricted to crystals at that time. Scattering techniques Elastic scattering X-ray diffraction, sometimes called Wide-angle X-ray diffraction (WAXD) Small-angle X-ray scattering (SAXS) probes structure in the nanometer to micrometer range by measuring scattering intensity at scattering angles 2θ close to 0°. X-ray reflectivity is an analytical technique for determining thickness, roughness, and density of single layer and multilayer thin films. Wide-angle X-ray scattering (WAXS), a technique concentrating on scattering angles 2θ larger than 5°. Inelastic X-ray scattering (IXS) In IXS the energy and angle of inelastically scattered X-rays are monitored, giving the dynamic structure factor S(q, ω). From this, many properties of materials can be obtained, the specific property depending on the scale of the energy transfer. Inelastically scattered X-rays have intermediate phases and so in principle are not useful for X-ray crystallography. In practice X-rays with small energy transfers are included with the diffraction spots due to elastic scattering, and X-rays with large energy transfers contribute to the background noise in the diffraction pattern. See also Anomalous scattering Anomalous X-ray scattering Backscatter Materials science Metallurgy Mineralogy Rachinger correction Structure determination Ultrafast x-ray X-rays X-ray generator References External links Learning Crystallography International Union of Crystallography IUCr Crystallography Online The International Centre for Diffraction Data (ICDD) The British Crystallographic Association Introduction to X-ray Diffraction at University of California, Santa Barbara Laboratory techniques in condensed matter physics X-ray crystallography Materials science X-ray scattering
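A standard elastic-scattering relation helps explain why the small-angle techniques listed above probe large structures; this is general textbook material rather than anything specific to one instrument. The magnitude of the scattering vector is

$$q = \frac{4\pi\sin\theta}{\lambda},$$

and features of real-space size $d$ scatter predominantly around $q \approx 2\pi/d$, so small scattering angles $2\theta$ correspond to large distances (nanometres to micrometres for SAXS), while wide angles probe interatomic spacings.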
X-ray scattering techniques
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
555
[ "Applied and interdisciplinary physics", "X-ray scattering", "Materials science", "Laboratory techniques in condensed matter physics", "Crystallography", "Scattering", "Condensed matter physics", "nan", "X-ray crystallography" ]
156,964
https://en.wikipedia.org/wiki/MicroRNA
Micro ribonucleic acids (microRNAs, miRNAs, μRNAs) are small, single-stranded, non-coding RNA molecules containing 21–23 nucleotides. Found in plants, animals, and even some viruses, miRNAs are involved in RNA silencing and post-transcriptional regulation of gene expression. miRNAs base-pair to complementary sequences in messenger RNA (mRNA) molecules, then silence said mRNA molecules by one or more of the following processes: Cleaving the mRNA strand into two pieces. Destabilizing the mRNA by shortening its poly(A) tail. Reducing translation of the mRNA into proteins. In cells of humans and other animals, miRNAs primarily act by destabilizing the mRNA. miRNAs resemble the small interfering RNAs (siRNAs) of the RNA interference (RNAi) pathway, except miRNAs derive from regions of RNA transcripts that fold back on themselves to form short hairpins, whereas siRNAs derive from longer regions of double-stranded RNA. The human genome may encode over 1900 miRNAs; however, only about 500 human miRNAs represent bona fide miRNAs in the manually curated miRNA gene database MirGeneDB. miRNAs are abundant in many mammalian cell types. They appear to target about 60% of the genes of humans and other mammals. Many miRNAs are evolutionarily conserved, which implies that they have important biological functions. For example, 90 families of miRNAs have been conserved since at least the common ancestor of mammals and fish, and most of these conserved miRNAs have important functions, as shown by studies in which genes for one or more members of a family have been knocked out in mice. In 2024, American scientists Victor Ambros and Gary Ruvkun were awarded the Nobel Prize in Physiology or Medicine for their work on the discovery of miRNA and its role in post-transcriptional gene regulation. History The first miRNA was discovered in the early 1990s. However, miRNAs were not recognized as a distinct class of biological regulators until the early 2000s. Research revealed different sets of miRNAs expressed in different cell types and tissues and multiple roles for miRNAs in plant and animal development and in many other biological processes. Aberrant miRNA expression is implicated in disease states. MiRNA-based therapies are under investigation. The first miRNA was discovered in 1993 by a group led by Victor Ambros and including Lee and Feinbaum. However, additional insight into its mode of action required simultaneously published work by Gary Ruvkun's team, including Wightman and Ha. These groups published back-to-back papers on the lin-4 gene, which was known to control the timing of C. elegans larval development by repressing the lin-14 gene. When Lee et al. isolated the lin-4 miRNA, they found that instead of producing an mRNA encoding a protein, it produced short non-coding RNAs, one of which was a ~22-nucleotide RNA that contained sequences partially complementary to multiple sequences in the 3' UTR of the lin-14 mRNA. This complementarity was proposed to inhibit the translation of the lin-14 mRNA into the LIN-14 protein. At the time, the lin-4 small RNA was thought to be a nematode idiosyncrasy. In 2000, a second small RNA was characterized: let-7 RNA, which represses lin-41 to promote a later developmental transition in C. elegans. The let-7 RNA was found to be conserved in many species, leading to the suggestion that let-7 RNA and additional "small temporal RNAs" might regulate the timing of development in diverse animals, including humans. 
A year later, the lin-4 and let-7 RNAs were found to be part of a large class of small RNAs present in C. elegans, Drosophila and human cells. The many RNAs of this class resembled the lin-4 and let-7 RNAs, except their expression patterns were usually inconsistent with a role in regulating the timing of development. This suggested that most might function in other types of regulatory pathways. At this point, researchers started using the term "microRNA" to refer to this class of small regulatory RNAs. The first human disease associated with deregulation of miRNAs was chronic lymphocytic leukemia. In this disorder, the miRNAs have a dual role working as both tumor suppressors and oncogenes. Nomenclature Under a standard nomenclature system, names are assigned to experimentally confirmed miRNAs before publication. The prefix "miR" is followed by a dash and a number, the latter often indicating order of naming. For example, miR-124 was named and likely discovered prior to miR-456. A capitalized "miR-" refers to the mature form of the miRNA, while the uncapitalized "mir-" refers to the pre-miRNA and the pri-miRNA. The genes encoding miRNAs are also named using the same three-letter prefix according to the conventions of the organism gene nomenclature. For example, the official miRNA gene names in some organisms are mir-1 in C. elegans and Drosophila, Mir1 in Rattus norvegicus and MIR25 in human. miRNAs with nearly identical sequences except for one or two nucleotides are annotated with an additional lower case letter. For example, miR-124a is closely related to miR-124b. Pre-miRNAs, pri-miRNAs and genes that lead to 100% identical mature miRNAs but that are located at different places in the genome are indicated with an additional dash-number suffix. For example, the pre-miRNAs hsa-mir-194-1 and hsa-mir-194-2 lead to an identical mature miRNA (hsa-miR-194) but are from genes located in different genome regions. Species of origin is designated with a three-letter prefix, e.g., hsa-miR-124 is a human (Homo sapiens) miRNA and oar-miR-124 is a sheep (Ovis aries) miRNA. Other common prefixes include "v" for viral (miRNA encoded by a viral genome) and "d" for Drosophila miRNA (a fruit fly commonly studied in genetic research). When two mature microRNAs originate from opposite arms of the same pre-miRNA and are found in roughly similar amounts, they are denoted with a -3p or -5p suffix. (In the past, this distinction was also made with "s" (sense) and "as" (antisense)). However, the mature microRNA found from one arm of the hairpin is usually much more abundant than that found from the other arm, in which case an asterisk following the name indicates the mature species found at low levels from the opposite arm of a hairpin. For example, miR-124 and miR-124* share a pre-miRNA hairpin, but much more miR-124 is found in the cell. Targets Plant miRNAs usually have near-perfect pairing with their mRNA targets, which induces gene repression through cleavage of the target transcripts. In contrast, animal miRNAs are able to recognize their target mRNAs by using as few as 6–8 nucleotides (the seed region) at the 5' end of the miRNA, which is not enough pairing to induce cleavage of the target mRNAs. Combinatorial regulation is a feature of miRNA regulation in animals. A given miRNA may have hundreds of different mRNA targets, and a given target might be regulated by multiple miRNAs. 
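The naming conventions described under Nomenclature above can be summarized programmatically. The following Python sketch parses names such as hsa-miR-124, hsa-mir-194-2 and mmu-miR-181a-5p into their components; the regular expression and field names are illustrative simplifications of the convention (they do not cover every historical form, such as the older asterisk notation), not an official parser from miRBase or any other database.

```python
import re

# Illustrative pattern for names like "hsa-miR-124", "hsa-mir-194-2", "mmu-miR-181a-5p".
MIRNA_NAME = re.compile(
    r"^(?P<species>[a-z]{3,4})-"      # species prefix, e.g. hsa (human), oar (sheep)
    r"(?P<form>miR|mir)-"             # miR = mature form, mir = precursor/gene
    r"(?P<number>\d+)"                # numbering, roughly in order of naming
    r"(?P<letter>[a-z]?)"             # lettered paralogs, e.g. miR-124a vs miR-124b
    r"(?:-(?P<copy>\d+))?"            # genomic copy, e.g. mir-194-1 vs mir-194-2
    r"(?:-(?P<arm>[35]p))?$"          # arm of the hairpin, -5p or -3p
)

def parse_mirna(name: str) -> dict:
    m = MIRNA_NAME.match(name)
    if not m:
        raise ValueError(f"not a recognized miRNA name: {name}")
    fields = m.groupdict()
    fields["mature"] = fields["form"] == "miR"
    return fields

print(parse_mirna("hsa-miR-124"))      # mature human miR-124
print(parse_mirna("hsa-mir-194-2"))    # second genomic copy of the precursor
print(parse_mirna("mmu-miR-181a-5p"))  # 5' arm of a lettered mouse paralog
```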
Estimates of the average number of unique messenger RNAs that are targets for repression by a typical miRNA vary, depending on the estimation method, but multiple approaches show that mammalian miRNAs can have many unique targets. For example, an analysis of the miRNAs highly conserved in vertebrates shows that each has, on average, roughly 400 conserved targets. Likewise, experiments show that a single miRNA species can reduce the stability of hundreds of unique messenger RNAs. Other experiments show that a single miRNA species may repress the production of hundreds of proteins, but that this repression often is relatively mild (much less than 2-fold). Biogenesis As many as 40% of miRNA genes may lie in the introns or even exons of other genes. These are usually, though not exclusively, found in a sense orientation, and thus usually are regulated together with their host genes. The DNA template is not the final word on mature miRNA production: 6% of human miRNAs show RNA editing (IsomiRs), the site-specific modification of RNA sequences to yield products different from those encoded by their DNA. This increases the diversity and scope of miRNA action beyond that implicated from the genome alone. Transcription miRNA genes are usually transcribed by RNA polymerase II (Pol II). The polymerase often binds to a promoter found near the DNA sequence, encoding what will become the hairpin loop of the pre-miRNA. The resulting transcript is capped with a specially modified nucleotide at the 5' end, polyadenylated with multiple adenosines (a poly(A) tail), and spliced. Animal miRNAs are initially transcribed as part of one arm of an ~80 nucleotide RNA stem-loop that in turn forms part of a several hundred nucleotide-long miRNA precursor termed a pri-miRNA. When a stem-loop precursor is found in the 3' UTR, a transcript may serve as a pri-miRNA and an mRNA. RNA polymerase III (Pol III) transcribes some miRNAs, especially those with upstream Alu sequences, transfer RNAs (tRNAs), and mammalian wide interspersed repeat (MWIR) promoter units. Nuclear processing A single pri-miRNA may contain from one to six miRNA precursors. These hairpin loop structures are composed of about 70 nucleotides each. Each hairpin is flanked by sequences necessary for efficient processing. The double-stranded RNA (dsRNA) structure of the hairpins in a pri-miRNA is recognized by a nuclear protein known as DiGeorge Syndrome Critical Region 8 (DGCR8 or "Pasha" in invertebrates), named for its association with DiGeorge Syndrome. DGCR8 associates with the enzyme Drosha, a protein that cuts RNA, to form the Microprocessor complex. In this complex, DGCR8 orients the catalytic RNase III domain of Drosha to liberate hairpins from pri-miRNAs by cleaving RNA about eleven nucleotides from the hairpin base (one helical dsRNA turn into the stem). The resulting product has a two-nucleotide overhang at its 3' end; it has 3' hydroxyl and 5' phosphate groups. It is often termed a pre-miRNA (precursor miRNA). Sequence motifs downstream of the pre-miRNA that are important for efficient processing have been identified. Pre-miRNAs that are spliced directly out of introns, bypassing the Microprocessor complex, are known as "mirtrons." Mirtrons have been found in Drosophila, C. elegans, and mammals. As many as 16% of pre-miRNAs may be altered through nuclear RNA editing. Most commonly, enzymes known as adenosine deaminases acting on RNA (ADARs) catalyze adenosine to inosine (A to I) transitions. 
RNA editing can halt nuclear processing (for example, of pri-miR-142, leading to degradation by the ribonuclease Tudor-SN) and alter downstream processes including cytoplasmic miRNA processing and target specificity (e.g., by changing the seed region of miR-376 in the central nervous system). Nuclear export Pre-miRNA hairpins are exported from the nucleus in a process involving the nucleocytoplasmic shuttler Exportin-5. This protein, a member of the karyopherin family, recognizes a two-nucleotide overhang left by the RNase III enzyme Drosha at the 3' end of the pre-miRNA hairpin. Exportin-5-mediated transport to the cytoplasm is energy-dependent, using guanosine triphosphate (GTP) bound to the Ran protein. Cytoplasmic processing In the cytoplasm, the pre-miRNA hairpin is cleaved by the RNase III enzyme Dicer. This endoribonuclease interacts with the 5' and 3' ends of the hairpin and cuts away the loop joining the 3' and 5' arms, yielding an imperfect miRNA:miRNA* duplex about 22 nucleotides in length. Overall hairpin length and loop size influence the efficiency of Dicer processing. The imperfect nature of the miRNA:miRNA* pairing also affects cleavage. Some of the G-rich pre-miRNAs can potentially adopt the G-quadruplex structure as an alternative to the canonical stem-loop structure. For example, human pre-miRNA 92b adopts a G-quadruplex structure which is resistant to Dicer-mediated cleavage in the cytoplasm. Although either strand of the duplex may potentially act as a functional miRNA, only one strand is usually incorporated into the RNA-induced silencing complex (RISC) where the miRNA and its mRNA target interact. While the majority of miRNAs are located within the cell, some miRNAs, commonly known as circulating miRNAs or extracellular miRNAs, have also been found in the extracellular environment, including various biological fluids and cell culture media. Biogenesis in plants miRNA biogenesis in plants differs from animal biogenesis mainly in the steps of nuclear processing and export. Instead of being cleaved by two different enzymes, once inside and once outside the nucleus, both cleavages of the plant miRNA are performed by a Dicer homolog, called Dicer-like1 (DL1). DL1 is expressed only in the nucleus of plant cells, which indicates that both reactions take place inside the nucleus. Before plant miRNA:miRNA* duplexes are transported out of the nucleus, their 3' overhangs are methylated by an RNA methyltransferase protein called Hua-Enhancer1 (HEN1). The duplex is then transported out of the nucleus to the cytoplasm by a protein called Hasty (HST), an Exportin 5 homolog, where it disassembles and the mature miRNA is incorporated into the RISC. RNA-induced silencing complex The mature miRNA is part of an active RNA-induced silencing complex (RISC) containing Dicer and many associated proteins. RISC is also known as a microRNA ribonucleoprotein complex (miRNP); a RISC with incorporated miRNA is sometimes referred to as a "miRISC." Dicer processing of the pre-miRNA is thought to be coupled with unwinding of the duplex. Generally, only one strand is incorporated into the miRISC, selected on the basis of its thermodynamic instability and weaker base-pairing on the 5' end relative to the other strand. The position of the stem-loop may also influence strand choice. The other strand, called the passenger strand due to its lower levels in the steady state, is denoted with an asterisk (*) and is normally degraded. 
In some cases, both strands of the duplex are viable and become functional miRNAs that target different mRNA populations. Members of the Argonaute (Ago) protein family are central to RISC function. Argonautes are needed for miRNA-induced silencing and contain two conserved RNA binding domains: a PAZ domain that can bind the single stranded 3' end of the mature miRNA and a PIWI domain that structurally resembles ribonuclease-H and functions to interact with the 5' end of the guide strand. They bind the mature miRNA and orient it for interaction with a target mRNA. Some argonautes, for example human Ago2, cleave target transcripts directly; argonautes may also recruit additional proteins to achieve translational repression. The human genome encodes eight argonaute proteins divided by sequence similarities into two families: AGO (with four members present in all mammalian cells and called EIF2C/hAgo in humans), and PIWI (found in the germline and hematopoietic stem cells). Additional RISC components include TRBP [human immunodeficiency virus (HIV) transactivating response RNA (TAR) binding protein], PACT (protein activator of the interferon-induced protein kinase), the SMN complex, fragile X mental retardation protein (FMRP), Tudor staphylococcal nuclease-domain-containing protein (Tudor-SN), the putative DNA helicase MOV10, and the RNA recognition motif containing protein TNRC6B. Mode of silencing and regulatory loops Gene silencing may occur either via mRNA degradation or by preventing mRNA from being translated. For example, miR-16 contains a sequence complementary to the AU-rich element found in the 3'UTR of many unstable mRNAs, such as TNF alpha or GM-CSF. It has been demonstrated that given complete complementarity between the miRNA and target mRNA sequence, Ago2 can cleave the mRNA and lead to direct mRNA degradation. In the absence of complementarity, silencing is achieved by preventing translation. The relation of miRNA and its target mRNA can be based on the simple negative regulation of a target mRNA, but it seems that a common scenario is the use of a "coherent feed-forward loop", "mutual negative feedback loop" (also termed double negative loop) and "positive feedback/feed-forward loop". Some miRNAs work as buffers of random gene expression changes arising due to stochastic events in transcription, translation and protein stability. Such regulation is typically achieved by virtue of negative feedback loops or incoherent feed-forward loops that uncouple protein output from mRNA transcription. Turnover Turnover of mature miRNA is needed for rapid changes in miRNA expression profiles. During miRNA maturation in the cytoplasm, uptake by the Argonaute protein is thought to stabilize the guide strand, while the opposite (* or "passenger") strand is preferentially destroyed. In what has been called a "Use it or lose it" strategy, Argonaute may preferentially retain miRNAs with many targets over miRNAs with few or no targets, leading to degradation of the non-targeting molecules. Decay of mature miRNAs in Caenorhabditis elegans is mediated by the 5'-to-3' exoribonuclease XRN2, also known as Rat1p. In plants, SDN (small RNA degrading nuclease) family members degrade miRNAs in the opposite (3'-to-5') direction. Similar enzymes are encoded in animal genomes, but their roles have not been described. Several miRNA modifications affect miRNA stability. 
As indicated by work in the model organism Arabidopsis thaliana (thale cress), mature plant miRNAs appear to be stabilized by the addition of methyl moieties at the 3' end. The 2'-O-conjugated methyl groups block the addition of uracil (U) residues by uridyltransferase enzymes, a modification that may be associated with miRNA degradation. However, uridylation may also protect some miRNAs; the consequences of this modification are incompletely understood. Uridylation of some animal miRNAs has been reported. Both plant and animal miRNAs may be altered by addition of adenine (A) residues to the 3' end of the miRNA. An extra A added to the end of mammalian miR-122, a liver-enriched miRNA important in hepatitis C, stabilizes the molecule, and plant miRNAs ending with an adenine residue have slower decay rates. Cellular functions The function of miRNAs appears to be in gene regulation. For that purpose, a miRNA is complementary to a part of one or more messenger RNAs (mRNAs). Animal miRNAs are usually complementary to a site in the 3' UTR whereas plant miRNAs are usually complementary to coding regions of mRNAs. Perfect or near perfect base pairing with the target RNA promotes cleavage of the RNA. This is the primary mode of plant miRNAs. In animals the match-ups are imperfect. For partially complementary microRNAs to recognise their targets, nucleotides 2–7 of the miRNA (its 'seed region') must be perfectly complementary. Animal miRNAs inhibit protein translation of the target mRNA (this is present but less common in plants). Partially complementary microRNAs can also speed up deadenylation, causing mRNAs to be degraded sooner. While degradation of miRNA-targeted mRNA is well documented, whether or not translational repression is accomplished through mRNA degradation, translational inhibition, or a combination of the two is hotly debated. Recent work on miR-430 in zebrafish, as well as on bantam-miRNA and miR-9 in Drosophila cultured cells, shows that translational repression is caused by the disruption of translation initiation, independent of mRNA deadenylation. miRNAs occasionally also cause histone modification and DNA methylation of promoter sites, which affects the expression of target genes. Nine mechanisms of miRNA action are described and assembled in a unified mathematical model: Cap-40S initiation inhibition; 60S Ribosomal unit joining inhibition; Elongation inhibition; Ribosome drop-off (premature termination); Co-translational nascent protein degradation; Sequestration in P-bodies; mRNA decay (destabilisation); mRNA cleavage; Transcriptional inhibition through microRNA-mediated chromatin reorganization followed by gene silencing. It is often impossible to discern these mechanisms using experimental data about stationary reaction rates. Nevertheless, they are differentiated in dynamics and have different kinetic signatures. Unlike plant microRNAs, the animal microRNAs target diverse genes. However, genes involved in functions common to all cells, such as gene expression, have relatively fewer microRNA target sites and seem to be under selection to avoid targeting by microRNAs. There is a strong correlation between ITPR gene regulation and mir-92 and mir-19. dsRNA can also activate gene expression, a mechanism that has been termed "small RNA-induced gene activation" or RNAa. dsRNAs targeting gene promoters can induce potent transcriptional activation of associated genes. 
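The seed-pairing rule mentioned above (perfect complementarity of miRNA nucleotides 2–7 in animals) can be illustrated with a minimal scan of a 3' UTR for seed-match sites. This is only a toy sketch with invented sequences; real target-prediction tools additionally weigh site context, conservation, and pairing thermodynamics.

```python
# Minimal illustration of seed matching: find 3' UTR positions that are
# perfectly complementary to nucleotides 2-7 (the "seed") of a miRNA.
# Sequences below are made up for illustration only.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_match_sites(mirna: str, utr: str) -> list[int]:
    seed = mirna[1:7]                      # nucleotides 2-7 of the mature miRNA
    # The target site is the reverse complement of the seed (RNA alphabet).
    site = "".join(COMPLEMENT[nt] for nt in reversed(seed))
    return [i for i in range(len(utr) - len(site) + 1) if utr[i:i + len(site)] == site]

mirna = "UAAGGCACGCGGUGAAUGCCA"             # hypothetical mature miRNA (5'->3')
utr = "AGCUAGUGCCUUAACGUGCCUUCCAUGA"        # hypothetical 3' UTR fragment (5'->3')
print(seed_match_sites(mirna, utr))         # positions of candidate seed-match sites
```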
This was demonstrated in human cells using synthetic dsRNAs termed small activating RNAs (saRNAs), but has also been demonstrated for endogenous microRNA. Interactions between microRNAs and complementary sequences on genes and even pseudogenes that share sequence homology are thought to be a back channel of communication regulating expression levels between paralogous genes (genes having a similar structure indicating divergence from a common ancestral gene). Given the name "competing endogenous RNAs" (ceRNAs), these microRNAs bind to "microRNA response elements" on genes and pseudogenes and may provide another explanation for the persistence of non-coding DNA. miRNAs are also found as extracellular circulating miRNAs. Circulating miRNAs are released into body fluids including blood and cerebrospinal fluid and have the potential to be available as biomarkers in a number of diseases. Some studies show that the mRNA cargo of exosomes may have a role in implantation; it can impair the adhesion between trophoblast and endometrium or support that adhesion by downregulating or upregulating the expression of genes involved in adhesion/invasion. Moreover, miRNAs such as miR-183/96/182 seem to play a key role in circadian rhythm. Evolution miRNAs are well conserved in both plants and animals, and are thought to be a vital and evolutionarily ancient component of gene regulation. While core components of the microRNA pathway are conserved between plants and animals, miRNA repertoires in the two kingdoms appear to have emerged independently with different primary modes of action. microRNAs are useful phylogenetic markers because of their apparently low rate of evolution. microRNAs' origin as a regulatory mechanism developed from previous RNAi machinery that was initially used as a defense against exogenous genetic material such as viruses. Their origin may have permitted the development of morphological innovation, and by making gene expression more specific and 'fine-tunable', permitted the genesis of complex organs and perhaps, ultimately, complex life. Rapid bursts of morphological innovation are generally associated with a high rate of microRNA accumulation. New microRNAs are created in multiple ways. Novel microRNAs can originate from the random formation of hairpins in "non-coding" sections of DNA (i.e. introns or intergenic regions), but also by the duplication and modification of existing microRNAs. microRNAs can also form from inverted duplications of protein-coding sequences, which allows for the creation of a foldback hairpin structure. The rate of evolution (i.e. nucleotide substitution) in recently originated microRNAs is comparable to that elsewhere in the non-coding DNA, implying evolution by neutral drift; however, older microRNAs have a much lower rate of change (often less than one substitution per hundred million years), suggesting that once a microRNA gains a function, it undergoes purifying selection. Individual regions within an miRNA gene face different evolutionary pressures, where regions that are vital for processing and function have higher levels of conservation. At this point, a microRNA is rarely lost from an animal's genome, although newer microRNAs (thus presumably non-functional) are frequently lost. In Arabidopsis thaliana, the net flux of miRNA genes has been predicted to be between 1.2 and 3.3 genes per million years. 
This makes them a valuable phylogenetic marker, and they are being looked upon as a possible solution to outstanding phylogenetic problems such as the relationships of arthropods. On the other hand, in multiple cases microRNAs correlate poorly with phylogeny, and it is possible that their phylogenetic concordance largely reflects a limited sampling of microRNAs. microRNAs feature in the genomes of most eukaryotic organisms, from the brown algae to the animals. However, the difference in how these microRNAs function and the way they are processed suggests that microRNAs arose independently in plants and animals. Focusing on the animals, the genome of Mnemiopsis leidyi appears to lack recognizable microRNAs, as well as the nuclear proteins Drosha and Pasha, which are critical to canonical microRNA biogenesis. It is the only animal thus far reported to be missing Drosha. MicroRNAs play a vital role in the regulation of gene expression in all non-ctenophore animals investigated thus far except for Trichoplax adhaerens, the first known member of the phylum Placozoa. Across all species, in excess of 5000 different miRNAs had been identified by March 2010. Whilst short RNA sequences (50 to hundreds of base pairs) of a broadly comparable function occur in bacteria, bacteria lack true microRNAs. Experimental detection and manipulation While researchers focused on miRNA expression in physiological and pathological processes, various technical variables related to microRNA isolation emerged. The stability of stored miRNA samples has been questioned. microRNAs degrade much more easily than mRNAs, partly due to their length, but also because of ubiquitously present RNases. This makes it necessary to cool samples on ice and use RNase-free equipment. microRNA expression can be quantified in a two-step polymerase chain reaction process of modified RT-PCR followed by quantitative PCR. Variations of this method achieve absolute or relative quantification. miRNAs can also be hybridized to microarrays, slides or chips with probes to hundreds or thousands of miRNA targets, so that relative levels of miRNAs can be determined in different samples. microRNAs can be both discovered and profiled by high-throughput sequencing methods (microRNA sequencing). The activity of an miRNA can be experimentally inhibited using a locked nucleic acid (LNA) oligo, a Morpholino oligo or a 2'-O-methyl RNA oligo. A specific miRNA can be silenced by a complementary antagomir. microRNA maturation can be inhibited at several points by steric-blocking oligos. The miRNA target site of an mRNA transcript can also be blocked by a steric-blocking oligo. For the "in situ" detection of miRNA, LNA or Morpholino probes can be used. The locked conformation of LNA results in enhanced hybridization properties and increases sensitivity and selectivity, making it ideal for detection of short miRNA. High-throughput quantification of miRNAs is error prone because of the larger variance (compared with mRNAs) that comes with methodological problems. mRNA expression is therefore often analyzed to check for miRNA effects in their levels. Databases can be used to pair mRNA and miRNA data and predict miRNA targets based on their base sequence. While this is usually done after miRNAs of interest have been detected (e.g. because of high expression levels), ideas for analysis tools that integrate mRNA and miRNA expression information have been proposed. 
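For the relative quantification step of the two-step RT-PCR workflow described above, a common analysis is the comparative Ct (2^−ΔΔCt) method, in which the miRNA of interest is normalized to a stable reference RNA and then compared between a sample and a control. The sketch below assumes idealized 100% amplification efficiency, and the Ct values and reference name are invented for illustration.

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_control: float, ct_ref_control: float) -> float:
    """Comparative Ct (2^-ddCt) method, assuming ~100% amplification efficiency.

    Each Ct is first normalized to a stable reference RNA, then the
    treated/diseased sample is compared with the control sample.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: the miRNA of interest vs. a small-RNA reference.
fc = fold_change_ddct(ct_target_sample=24.1, ct_ref_sample=18.0,
                      ct_target_control=26.5, ct_ref_control=18.2)
print(f"fold change ~ {fc:.2f}")   # >1 means higher expression than the control
```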
Human and animal diseases Just as miRNA is involved in the normal functioning of eukaryotic cells, so has dysregulation of miRNA been associated with disease. A manually curated, publicly available database, miR2Disease, documents known relationships between miRNA dysregulation and human disease. Inherited diseases A mutation in the seed region of miR-96 causes hereditary progressive hearing loss. A mutation in the seed region of miR-184 causes hereditary keratoconus with anterior polar cataract. Deletion of the miR-17~92 cluster causes skeletal and growth defects. Cancer The first human disease known to be associated with miRNA deregulation was chronic lymphocytic leukemia. Many other miRNAs also have links with cancer and accordingly are sometimes referred to as "oncomirs". In malignant B cells miRNAs participate in pathways fundamental to B cell development like B-cell receptor (BCR) signalling, B-cell migration/adhesion, cell-cell interactions in immune niches and the production and class-switching of immunoglobulins. MiRNAs influence B cell maturation, generation of pre-, marginal zone, follicular, B1, plasma and memory B cells. Another role for miRNA in cancers is to use their expression level for prognosis. In NSCLC samples, low miR-324a levels may serve as an indicator of poor survival. Either high miR-185 or low miR-133b levels may correlate with metastasis and poor survival in colorectal cancer. Furthermore, specific miRNAs may be associated with certain histological subtypes of colorectal cancer. For instance, expression levels of miR-205 and miR-373 have been shown to be increased in mucinous colorectal cancers and mucin-producing Ulcerative Colitis-associated colon cancers, but not in sporadic colonic adenocarcinomas that lack mucinous components. In vitro studies suggested that miR-205 and miR-373 may functionally induce different features of mucinous-associated neoplastic progression in intestinal epithelial cells. Hepatocellular carcinoma cell proliferation may arise from miR-21 interaction with MAP2K3, a tumor suppressor gene. Optimal treatment for cancer involves accurately identifying patients for risk-stratified therapy. Those with a rapid response to initial treatment may benefit from truncated treatment regimens, showing the value of accurate disease response measures. Cell-free circulating miRNAs (cimiRNAs) are highly stable in blood, are overexpressed in cancer and are quantifiable within the diagnostic laboratory. In classical Hodgkin lymphoma, plasma miR-21, miR-494, and miR-1973 are promising disease response biomarkers. Circulating miRNAs have the potential to assist clinical decision making and aid interpretation of positron emission tomography combined with computerized tomography. These measurements can be performed at each consultation to assess disease response and detect relapse. MicroRNAs have the potential to be used as tools or targets for treatment of different cancers. The specific microRNA miR-506 has been found to work as a tumor antagonist in several studies. A significant number of cervical cancer samples were found to have downregulated expression of miR-506. Additionally, miR-506 works to promote apoptosis of cervical cancer cells through its direct target, the hedgehog pathway transcription factor Gli3. DNA repair and cancer Many miRNAs can directly target and inhibit cell cycle genes to control cell proliferation. A new strategy for tumor treatment is to inhibit tumor cell proliferation by repairing the defective miRNA pathway in tumors. 
Cancer is caused by the accumulation of mutations from either DNA damage or uncorrected errors in DNA replication. Defects in DNA repair cause the accumulation of mutations, which can lead to cancer. Several genes involved in DNA repair are regulated by microRNAs. Germline mutations in DNA repair genes cause only 2–5% of colon cancer cases. However, altered expression of microRNAs, causing DNA repair deficiencies, is frequently associated with cancers and may be an important causal factor. Among 68 sporadic colon cancers with reduced expression of the DNA mismatch repair protein MLH1, most were found to be deficient due to epigenetic methylation of the CpG island of the MLH1 gene. However, up to 15% of MLH1 deficiencies in sporadic colon cancers appeared to be due to over-expression of the microRNA miR-155, which represses MLH1 expression. In 29–66% of glioblastomas, DNA repair is deficient due to epigenetic methylation of the MGMT gene, which reduces protein expression of MGMT. However, for 28% of glioblastomas, the MGMT protein is deficient, but the MGMT promoter is not methylated. In glioblastomas without methylated MGMT promoters, the level of microRNA miR-181d is inversely correlated with protein expression of MGMT and the direct target of miR-181d is the MGMT mRNA 3'UTR (the three prime untranslated region of MGMT mRNA). Thus, in 28% of glioblastomas, increased expression of miR-181d and reduced expression of the DNA repair enzyme MGMT may be a causal factor. HMGA proteins (HMGA1a, HMGA1b and HMGA2) are implicated in cancer, and expression of these proteins is regulated by microRNAs. HMGA expression is almost undetectable in differentiated adult tissues, but is elevated in many cancers. HMGA proteins are polypeptides of ~100 amino acid residues characterized by a modular sequence organization. These proteins have three highly positively charged regions, termed AT hooks, that bind the minor groove of AT-rich DNA stretches in specific regions of DNA. Human neoplasias, including thyroid, prostatic, cervical, colorectal, pancreatic and ovarian carcinomas, show a strong increase of HMGA1a and HMGA1b proteins. Transgenic mice with HMGA1 targeted to lymphoid cells develop aggressive lymphoma, showing that high HMGA1 expression is associated with cancers and that HMGA1 can act as an oncogene. HMGA2 protein specifically targets the promoter of ERCC1, thus reducing expression of this DNA repair gene. ERCC1 protein expression was deficient in 100% of 47 evaluated colon cancers (though the extent to which HMGA2 was involved is not known). Single nucleotide polymorphisms (SNPs) can alter the binding of miRNAs on 3'UTRs, for example in the case of hsa-mir-181a and hsa-mir-181b on the CDON tumor suppressor gene. Heart disease The global role of miRNA function in the heart has been addressed by conditionally inhibiting miRNA maturation in the murine heart. This revealed that miRNAs play an essential role during its development. miRNA expression profiling studies demonstrate that expression levels of specific miRNAs change in diseased human hearts, pointing to their involvement in cardiomyopathies. Furthermore, animal studies on specific miRNAs identified distinct roles for miRNAs both during heart development and under pathological conditions, including the regulation of key factors important for cardiogenesis, the hypertrophic growth response and cardiac conductance. Another role for miRNA in cardiovascular diseases is to use their expression levels for diagnosis, prognosis or risk stratification. 
miRNAs in animal models have also been linked to cholesterol metabolism and regulation. miRNA-712 Murine microRNA-712 is a potential biomarker (i.e. predictor) for atherosclerosis, a cardiovascular disease of the arterial wall associated with lipid retention and inflammation. Non-laminar blood flow also correlates with development of atherosclerosis as mechanosensors of endothelial cells respond to the shear force of disturbed flow (d-flow). A number of pro-atherogenic genes including matrix metalloproteinases (MMPs) are upregulated by d-flow, mediating pro-inflammatory and pro-angiogenic signals. These findings were observed in ligated carotid arteries of mice to mimic the effects of d-flow. Within 24 hours, pre-existing immature miR-712 formed mature miR-712, suggesting that miR-712 is flow-sensitive. Coinciding with these results, miR-712 is also upregulated in endothelial cells exposed to naturally occurring d-flow in the greater curvature of the aortic arch. Origin The pre-miRNA sequence of miR-712 is generated from the murine ribosomal RN45s gene at the internal transcribed spacer region 2 (ITS2). XRN1 is an exonuclease that degrades the ITS2 region during processing of RN45s. Reduction of XRN1 under d-flow conditions therefore leads to the accumulation of miR-712. Mechanism MiR-712 targets tissue inhibitor of metalloproteinases 3 (TIMP3). TIMPs normally regulate activity of matrix metalloproteinases (MMPs) which degrade the extracellular matrix (ECM). Arterial ECM is mainly composed of collagen and elastin fibers, providing the structural support and recoil properties of arteries. These fibers play a critical role in regulation of vascular inflammation and permeability, which are important in the development of atherosclerosis. Expressed by endothelial cells, TIMP3 is the only ECM-bound TIMP. A decrease in TIMP3 expression results in an increase of ECM degradation in the presence of d-flow. Consistent with these findings, inhibition of pre-miR-712 increases expression of TIMP3 in cells, even when exposed to turbulent flow. TIMP3 also decreases the expression of TNFα (a pro-inflammatory regulator) during turbulent flow. Activity of TNFα in turbulent flow was measured by the expression of TNFα-converting enzyme (TACE) in blood. TNFα decreased if miR-712 was inhibited or TIMP3 overexpressed, suggesting that miR-712 and TIMP3 regulate TACE activity in turbulent flow conditions. Anti-miR-712 effectively suppresses d-flow-induced miR-712 expression and increases TIMP3 expression. Anti-miR-712 also inhibits vascular hyperpermeability, thereby significantly reducing atherosclerosis lesion development and immune cell infiltration. Human homolog microRNA-205 The human homolog of miR-712 was found on the RN45s homolog gene, which maintains similar miRNAs to mice. MiR-205 of humans shares similar sequences with miR-712 of mice and is conserved across most vertebrates. MiR-205 and miR-712 also share more than 50% of the cell signaling targets, including TIMP3. When tested, d-flow decreased the expression of XRN1 in human endothelial cells as it did in those of mice, indicating a potentially common role of XRN1 in humans. Kidney disease Targeted deletion of Dicer in the FoxD1-derived renal progenitor cells in a murine model resulted in a complex renal phenotype including expansion of nephron progenitors, fewer renin cells, smooth muscle arterioles, progressive mesangial loss and glomerular aneurysms. 
High-throughput whole-transcriptome profiling of the FoxD1-Dicer knockout mouse model revealed ectopic upregulation of the pro-apoptotic gene Bcl2L11 (Bim) and dysregulation of the p53 pathway, with an increase in p53 effector genes including Bax, Trp53inp1, Jun, Cdkn1a, Mmp2, and Arid3a. p53 protein levels remained unchanged, suggesting that FoxD1 stromal miRNAs directly repress p53-effector genes. Using a lineage tracing approach followed by fluorescence-activated cell sorting, miRNA profiling of the FoxD1-derived cells not only comprehensively defined the transcriptional landscape of miRNAs that are critical for vascular development, but also identified key miRNAs that are likely to modulate the renal phenotype in its absence. These miRNAs include miRs-10a, 18a, 19b, 24, 30c, 92a, 106a, 130a, 152, 181a, 214, 222, 302a, 370, and 381 that regulate Bcl2L11 (Bim) and miRs-15b, 18a, 21, 30c, 92a, 106a, 125b-5p, 145, 214, 222, 296-5p and 302a that regulate p53-effector genes. Consistent with the profiling results, ectopic apoptosis was observed in the cellular derivatives of the FoxD1-derived progenitor lineage, reiterating the importance of renal stromal miRNAs in cellular homeostasis. Nervous system MiRNAs are crucial for the healthy development and function of the nervous system. Previous studies demonstrate that miRNAs can regulate neuronal differentiation and maturation at various stages. MiRNAs also play important roles in synaptic development (such as dendritogenesis or spine morphogenesis) and synaptic plasticity (contributing to learning and memory). Elimination of miRNA formation in mice by experimental silencing of Dicer has led to pathological outcomes, such as reduced neuronal size, motor abnormalities (when silenced in striatal neurons), and neurodegeneration (when silenced in forebrain neurons). Altered miRNA expression has been found in neurodegenerative diseases (such as Alzheimer's disease, Parkinson's disease, and Huntington's disease) as well as many psychiatric disorders (including epilepsy, schizophrenia, major depression, bipolar disorder, and anxiety disorders). Stroke According to the Centers for Disease Control and Prevention, stroke is one of the leading causes of death and long-term disability in America. 87% of cases are ischemic strokes, which result from blockage of an artery carrying oxygen-rich blood to the brain. The obstruction of the blood flow means the brain cannot receive necessary nutrients, such as oxygen and glucose, or remove wastes, such as carbon dioxide. miRNAs play a role in post-transcriptional gene silencing by targeting genes involved in the pathogenesis of cerebral ischemia, such as those in the inflammatory, angiogenic, and apoptotic pathways. Alcoholism The vital role of miRNAs in gene expression is significant to addiction, specifically alcoholism. Chronic alcohol abuse results in persistent changes in brain function mediated in part by alterations in gene expression. Global regulation of many downstream genes by miRNAs appears significant for the reorganization of synaptic connections and the long-term neural adaptations underlying the behavioral change from alcohol consumption to withdrawal and/or dependence. Up to 35 different miRNAs have been found to be altered in the alcoholic post-mortem brain, all of which target genes involved in the regulation of the cell cycle, apoptosis, cell adhesion, nervous system development and cell signaling. 
Altered miRNA levels were found in the medial prefrontal cortex of alcohol-dependent mice, suggesting the role of miRNA in orchestrating translational imbalances and the creation of differentially expressed proteins within an area of the brain where complex cognitive behavior and decision making likely originate. miRNAs can be either upregulated or downregulated in response to chronic alcohol use. miR-206 expression increased in the prefrontal cortex of alcohol-dependent rats, targeting brain-derived neurotrophic factor (BDNF) and ultimately reducing its expression. BDNF plays a critical role in the formation and maturation of new neurons and synapses, suggesting a possible implication in synapse growth/synaptic plasticity in alcohol abusers. miR-155, important in regulating alcohol-induced neuroinflammation responses, was found to be upregulated, suggesting the role of microglia and inflammatory cytokines in alcohol pathophysiology. Downregulation of miR-382 was found in the nucleus accumbens, a structure in the basal forebrain significant in regulating feelings of reward that power motivational habits. miR-382 is the target for the dopamine receptor D1 (DRD1), and its overexpression results in the upregulation of DRD1 and delta fosB, a transcription factor that activates a series of transcription events in the nucleus accumbens that ultimately result in addictive behaviors. Alternatively, overexpressing miR-382 resulted in attenuated drinking and the inhibition of DRD1 and delta fosB upregulation in rat models of alcoholism, demonstrating the possibility of using miRNA-targeted pharmaceuticals in treatments. Obesity miRNAs play crucial roles in the regulation of stem cell progenitors differentiating into adipocytes. The role of pluripotent stem cells in adipogenesis was examined in the immortalized human bone marrow-derived stromal cell line hMSC-Tert20. Decreased expression of miR-155, miR-221, and miR-222 has been found during the adipogenic programming of both immortalized and primary hMSCs, suggesting that they act as negative regulators of differentiation. Conversely, ectopic expression of the miRNAs 155, 221, and 222 significantly inhibited adipogenesis and repressed induction of the master regulators PPARγ and CCAAT/enhancer-binding protein alpha (CEBPA). This paves the way for possible genetic obesity treatments. Another class of miRNAs that regulates insulin resistance, obesity, and diabetes is the let-7 family. Let-7 accumulates in human tissues during the course of aging. When let-7 was ectopically overexpressed to mimic accelerated aging, mice became insulin-resistant, and thus more prone to high fat diet-induced obesity and diabetes. In contrast, when let-7 was inhibited by injections of let-7-specific antagomirs, mice became more insulin-sensitive and remarkably resistant to high fat diet-induced obesity and diabetes. Not only could let-7 inhibition prevent obesity and diabetes, it could also reverse and cure the condition. These experimental findings suggest that let-7 inhibition could represent a new therapy for obesity and type 2 diabetes. Hemostasis miRNAs also play crucial roles in the regulation of complex enzymatic cascades including the hemostatic blood coagulation system. Large-scale studies of functional miRNA targeting have recently uncovered rational therapeutic targets in the hemostatic system. 
They have been directly linked to calcium homeostasis in the endoplasmic reticulum, which is critical in cell differentiation in early development. Plants miRNAs are considered to be key regulators of many developmental, homeostatic, and immune processes in plants. Their roles in plant development include shoot apical meristem development, leaf growth, flower formation, seed production, and root expansion. In addition, they play a complex role in responses to various abiotic stresses, including heat stress, low-temperature stress, drought stress, light stress, and gamma radiation exposure. Viruses Viral microRNAs play an important role in the regulation of gene expression of viral and/or host genes to benefit the virus. Hence, miRNAs play a key role in host–virus interactions and pathogenesis of viral diseases. The expression of transcription activators by human herpesvirus-6 DNA is believed to be regulated by viral miRNA. Target prediction miRNAs can bind to target messenger RNA (mRNA) transcripts of protein-coding genes and negatively control their translation or cause mRNA degradation. It is of key importance to identify the miRNA targets accurately. A comparison of the predictive performance of eighteen in silico algorithms is available. Large-scale studies of functional miRNA targeting suggest that many functional miRNAs can be missed by target prediction algorithms. See also Anti-miRNA oligonucleotides C19MC miRNA cluster Gene expression List of miRNA gene prediction tools List of miRNA target prediction tools MicroDNA MicroRNA Biosensors MiRNEST MIR222 miR-324-5p Mir-M7 microRNA precursor family RNA interference Small interfering RNA Small nucleolar RNA-derived microRNA References Further reading miRNA definition and classification: Science review of small RNA: Discovery of lin-4, the first miRNA to be discovered: External links The miRBase database miRTarBase, the experimentally validated microRNA-target interactions database. semirna, Web application to search for microRNAs in a plant genome. ONCO.IO: Integrative resource for microRNA and transcription factors analysis in cancer. MirOB : MicroRNA targets database and data analysis and dataviz tool. ChIPBase database: An open access database for decoding the transcription factors that were involved in or affected the transcription of microRNAs from ChIP-seq data. An animated video of the microRNA biogenesis process. miRNA modulation reagents to enable up-regulation or suppression of endogenous mature microRNA function Gene expression RNA Non-coding RNA
MicroRNA
[ "Chemistry", "Biology" ]
10,648
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry" ]
156,968
https://en.wikipedia.org/wiki/Integral%20membrane%20protein
An integral, or intrinsic, membrane protein (IMP) is a type of membrane protein that is permanently attached to the biological membrane. All transmembrane proteins can be classified as IMPs, but not all IMPs are transmembrane proteins. IMPs comprise a significant fraction of the proteins encoded in an organism's genome. Proteins that cross the membrane are surrounded by annular lipids, which are defined as lipids that are in direct contact with a membrane protein. Such proteins can only be separated from the membranes by using detergents, nonpolar solvents, or sometimes denaturing agents. Proteins that adhere only temporarily to cellular membranes are known as peripheral membrane proteins. These proteins can either associate with integral membrane proteins, or independently insert in the lipid bilayer in several ways. Structure Three-dimensional structures of ~160 different integral membrane proteins have been determined at atomic resolution by X-ray crystallography or nuclear magnetic resonance spectroscopy. They are challenging subjects for study owing to the difficulties associated with extraction and crystallization. In addition, structures of many water-soluble protein domains of IMPs are available in the Protein Data Bank. Their membrane-anchoring α-helices have been removed to facilitate the extraction and crystallization. Search integral membrane proteins in the PDB (based on gene ontology classification) IMPs can be divided into two groups: Integral polytopic proteins (Transmembrane proteins) Integral monotopic proteins Integral polytopic protein The most common type of IMP is the transmembrane protein, which spans the entire biological membrane. Single-pass membrane proteins cross the membrane only once, while multi-pass membrane proteins weave in and out, crossing the membrane several times. Single-pass membrane proteins can be categorized as Type I, which are positioned such that their carboxyl-terminus is towards the cytosol, or Type II, which have their amino-terminus towards the cytosol. Type III proteins have multiple transmembrane domains in a single polypeptide, while type IV consists of several different polypeptides assembled together in a channel through the membrane. Type V proteins are anchored to the lipid bilayer through covalently linked lipids. Finally, Type VI proteins have both transmembrane domains and lipid anchors. Integral monotopic proteins Integral monotopic proteins are permanently attached to the cell membrane from one side. Three-dimensional structures of the following integral monotopic proteins have been determined: prostaglandin H2 synthases 1 and 2 (cyclooxygenases) lanosterol synthase and squalene-hopene cyclase microsomal prostaglandin E synthase carnitine O-palmitoyltransferase 2 Phosphoglycosyl transferase C There are also structures of integral monotopic domains of transmembrane proteins: monoamine oxidases A and B fatty acid amide hydrolase mammalian cytochrome P450 oxidases corticosteroid 11-beta-dehydrogenases Extraction Many challenges facing the study of integral membrane proteins are attributed to the extraction of those proteins from the phospholipid bilayer. Since integral proteins span the width of the phospholipid bilayer, their extraction involves disrupting the phospholipids surrounding them without causing any damage that would interrupt the function or structure of the proteins. 
Several successful methods are available for performing the extraction, including the use of "detergents, low ionic salt (salting out), shearing force, and rapid pressure change". Determination of protein structure The Protein Structure Initiative (PSI), funded by the U.S. National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health (NIH), has among its aims to determine three-dimensional protein structures and to develop techniques for use in structural biology, including for membrane proteins. Homology modeling can be used to construct an atomic-resolution model of the "target" integral protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein. This procedure has been extensively used for G protein-coupled receptors (GPCRs) and their ligand complexes. Function IMPs include transporters, linkers, channels, receptors, enzymes, structural membrane-anchoring domains, proteins involved in accumulation and transduction of energy, and proteins responsible for cell adhesion. Classification of transporters can be found in the Transporter Classification Database. An example of the relationship between an IMP (in this case the bacterial phototrapping pigment bacteriorhodopsin) and the membrane formed by the phospholipid bilayer is described below. In this case the integral membrane protein spans the phospholipid bilayer seven times. The parts of the protein that are embedded in the hydrophobic regions of the bilayer are alpha helical and composed of predominantly hydrophobic amino acids. The C terminal end of the protein is in the cytosol while the N terminal region is on the outside of the cell. A membrane that contains this particular protein is able to function in photosynthesis. Examples Examples of integral membrane proteins: Insulin receptor Some types of cell adhesion proteins or cell adhesion molecules (CAMs) such as integrins, cadherins, NCAMs, or selectins Some types of receptor proteins Glycophorin Rhodopsin Band 3 CD36 Glucose Permease Ion channels and Gates Gap junction Proteins G protein coupled receptors (e.g., Beta-adrenergic receptor) Seipin Photosystem I See also Membrane protein Transmembrane protein Peripheral membrane protein Annular lipid shell Hydrophilicity plot Inner nuclear membrane protein References Protein structure
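Because the membrane-spanning stretches described above are alpha helical and predominantly hydrophobic, candidate transmembrane segments are often flagged in sequence analysis with a sliding-window hydropathy scan (compare the hydrophilicity plot listed under See also). The sketch below uses the Kyte-Doolittle scale with a 19-residue window and a conventional threshold; the toy sequence is invented, and this is a simplified illustration rather than a production prediction tool.

```python
# Sliding-window Kyte-Doolittle hydropathy scan: stretches of ~19 residues with a
# high average hydropathy are candidate membrane-spanning alpha helices.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5, "E": -3.5,
      "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9, "M": 1.9, "F": 2.8,
      "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9, "Y": -1.3, "V": 4.2}

def hydropathy_profile(seq: str, window: int = 19) -> list[float]:
    scores = [KD[aa] for aa in seq]
    return [sum(scores[i:i + window]) / window for i in range(len(seq) - window + 1)]

def candidate_tm_windows(seq: str, window: int = 19, threshold: float = 1.6) -> list[int]:
    """Start positions (0-based) of windows whose mean hydropathy exceeds the threshold."""
    return [i for i, s in enumerate(hydropathy_profile(seq, window)) if s > threshold]

# Invented toy sequence: a hydrophilic stretch, a hydrophobic stretch, a hydrophilic tail.
toy = "MKDEQRNSTDE" + "LLIVFAVLLIVAGFLILLV" + "KRDEQNSTKDE"
print(candidate_tm_windows(toy))
```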
Integral membrane protein
[ "Chemistry" ]
1,190
[ "Protein structure", "Structural biology" ]
156,998
https://en.wikipedia.org/wiki/Action%20potential
An action potential occurs when the membrane potential of a specific cell rapidly rises and falls. This depolarization then causes adjacent locations to similarly depolarize. Action potentials occur in several types of excitable cells, which include animal cells like neurons and muscle cells, as well as some plant cells. Certain endocrine cells, such as pancreatic beta cells, and certain cells of the anterior pituitary gland are also excitable cells. In neurons, action potentials play a central role in cell–cell communication by providing for (or, in the case of saltatory conduction, assisting) the propagation of signals along the neuron's axon toward synaptic boutons situated at the ends of an axon; these signals can then connect with other neurons at synapses, or with motor cells or glands. In other types of cells, their main function is to activate intracellular processes. In muscle cells, for example, an action potential is the first step in the chain of events leading to contraction. In beta cells of the pancreas, they provoke release of insulin. Action potentials in neurons are also known as "nerve impulses" or "spikes", and the temporal sequence of action potentials generated by a neuron is called its "spike train". A neuron that emits an action potential, or nerve impulse, is often said to "fire". Action potentials are generated by special types of voltage-gated ion channels embedded in a cell's plasma membrane. These channels are shut when the membrane potential is near the (negative) resting potential of the cell, but they rapidly begin to open if the membrane potential increases to a precisely defined threshold voltage, depolarising the transmembrane potential. When the channels open, they allow an inward flow of sodium ions, which changes the electrochemical gradient, which in turn produces a further rise in the membrane potential towards zero. This then causes more channels to open, producing a greater electric current across the cell membrane and so on. The process proceeds explosively until all of the available ion channels are open, resulting in a large upswing in the membrane potential. The rapid influx of sodium ions causes the polarity of the plasma membrane to reverse, and the ion channels then rapidly inactivate. As the sodium channels close, sodium ions can no longer enter the neuron, and they are then actively transported back out of the plasma membrane. Potassium channels are then activated, and there is an outward current of potassium ions, returning the electrochemical gradient to the resting state. After an action potential has occurred, there is a transient negative shift, called the afterhyperpolarization. In animal cells, there are two primary types of action potentials. One type is generated by voltage-gated sodium channels, the other by voltage-gated calcium channels. Sodium-based action potentials usually last for under one millisecond, but calcium-based action potentials may last for 100 milliseconds or longer. In some types of neurons, slow calcium spikes provide the driving force for a long burst of rapidly emitted sodium spikes. In cardiac muscle cells, on the other hand, an initial fast sodium spike provides a "primer" to provoke the rapid onset of a calcium spike, which then produces muscle contraction. Overview Nearly all cell membranes in animals, plants and fungi maintain a voltage difference between the exterior and interior of the cell, called the membrane potential. A typical voltage across an animal cell membrane is −70 mV. 
This means that the interior of the cell has a negative voltage relative to the exterior. In most types of cells, the membrane potential usually stays fairly constant. Some types of cells, however, are electrically active in the sense that their voltages fluctuate over time. In some types of electrically active cells, including neurons and muscle cells, the voltage fluctuations frequently take the form of a rapid upward (positive) spike followed by a rapid fall. These up-and-down cycles are known as action potentials. In some types of neurons, the entire up-and-down cycle takes place in a few thousandths of a second. In muscle cells, a typical action potential lasts about a fifth of a second. In plant cells, an action potential may last three seconds or more. The electrical properties of a cell are determined by the structure of its membrane. A cell membrane consists of a lipid bilayer of molecules in which larger protein molecules are embedded. The lipid bilayer is highly resistant to movement of electrically charged ions, so it functions as an insulator. The large membrane-embedded proteins, in contrast, provide channels through which ions can pass across the membrane. Action potentials are driven by channel proteins whose configuration switches between closed and open states as a function of the voltage difference between the interior and exterior of the cell. These voltage-sensitive proteins are known as voltage-gated ion channels. Process in a typical neuron All cells in animal body tissues are electrically polarized – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarization results from a complex interplay between protein structures embedded in the membrane called ion pumps and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Recent studies have shown that the most excitable part of a neuron is the part after the axon hillock (the point where the axon leaves the cell body), which is called the axonal initial segment, but the axon and cell body are also excitable in most cases. Each excitable patch of membrane has two important levels of membrane potential: the resting potential, which is the value the membrane potential maintains as long as nothing perturbs the cell, and a higher value called the threshold potential. At the axon hillock of a typical neuron, the resting potential is around –70 millivolts (mV) and the threshold potential is around –55 mV. Synaptic inputs to a neuron cause the membrane to depolarize or hyperpolarize; that is, they cause the membrane potential to rise or fall. Action potentials are triggered when enough depolarization accumulates to bring the membrane potential up to threshold. When an action potential is triggered, the membrane potential abruptly shoots upward and then equally abruptly shoots back downward, often ending below the resting level, where it remains for some period of time. The shape of the action potential is stereotyped; this means that the rise and fall usually have approximately the same amplitude and time course for all action potentials in a given cell. (Exceptions are discussed later in the article). 
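Since a spike train is, in practice, just the list of times at which the membrane potential crosses a threshold, the notion can be made concrete with a few lines of code. The following Python sketch is illustrative only: the −55 mV threshold, the sampling step, the toy voltage trace, and the function names are assumptions chosen to match the rounded figures quoted above, not recorded data.

```python
# A minimal sketch (illustrative assumptions, not recorded data) of reducing a
# sampled membrane-potential trace to a "spike train" and a mean firing rate.

def detect_spikes(vm_trace, dt_ms, threshold_mv=-55.0):
    """Return the times (ms) of upward threshold crossings in a sampled Vm trace."""
    spike_times = []
    for i in range(1, len(vm_trace)):
        if vm_trace[i - 1] < threshold_mv <= vm_trace[i]:
            spike_times.append(i * dt_ms)
    return spike_times

def firing_rate_hz(spike_times_ms, duration_ms):
    """Mean firing rate over the whole trace, in spikes per second."""
    return 1000.0 * len(spike_times_ms) / duration_ms

if __name__ == "__main__":
    dt = 0.1                       # sampling step, ms
    n = 1000                       # 100 ms of "recording"
    vm = [-70.0] * n               # resting potential, mV
    # Toy trace: a brief stereotyped spike beginning every 20 ms.
    spike_shape = [-55.0, 0.0, 30.0, -10.0, -60.0, -75.0, -72.0]   # ~0.7 ms event
    for i0 in range(100, n, 200):  # sample indices 100, 300, 500, 700, 900
        vm[i0:i0 + len(spike_shape)] = spike_shape
    spikes = detect_spikes(vm, dt)
    print(spikes)                              # [10.0, 30.0, 50.0, 70.0, 90.0]
    print(firing_rate_hz(spikes, n * dt))      # 50.0 (spikes per second)
```

Real analyses of extracellular recordings are considerably more involved; this sketch only makes the notions of a "spike train" and a "firing rate" concrete.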
In most neurons, the entire process takes place in about a thousandth of a second. Many types of neurons emit action potentials constantly at rates of up to 10–100 per second. However, some types are much quieter, and may go for minutes or longer without emitting any action potentials. Biophysical basis Action potentials result from the presence in a cell's membrane of special types of voltage-gated ion channels. A voltage-gated ion channel is a transmembrane protein that has three key properties: It is capable of assuming more than one conformation. At least one of the conformations creates a channel through the membrane that is permeable to specific types of ions. The transition between conformations is influenced by the membrane potential. Thus, a voltage-gated ion channel tends to be open for some values of the membrane potential, and closed for others. In most cases, however, the relationship between membrane potential and channel state is probabilistic and involves a time delay. Ion channels switch between conformations at unpredictable times: The membrane potential determines the rate of transitions and the probability per unit time of each type of transition. Voltage-gated ion channels are capable of producing action potentials because they can give rise to positive feedback loops: The membrane potential controls the state of the ion channels, but the state of the ion channels controls the membrane potential. Thus, in some situations, a rise in the membrane potential can cause ion channels to open, thereby causing a further rise in the membrane potential. An action potential occurs when this positive feedback cycle (Hodgkin cycle) proceeds explosively. The time and amplitude trajectory of the action potential are determined by the biophysical properties of the voltage-gated ion channels that produce it. Several types of channels capable of producing the positive feedback necessary to generate an action potential do exist. Voltage-gated sodium channels are responsible for the fast action potentials involved in nerve conduction. Slower action potentials in muscle cells and some types of neurons are generated by voltage-gated calcium channels. Each of these types comes in multiple variants, with different voltage sensitivity and different temporal dynamics. The most intensively studied type of voltage-dependent ion channels comprises the sodium channels involved in fast nerve conduction. These are sometimes known as Hodgkin-Huxley sodium channels because they were first characterized by Alan Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the biophysics of the action potential, but can more conveniently be referred to as NaV channels. (The "V" stands for "voltage".) An NaV channel has three possible states, known as deactivated, activated, and inactivated. The channel is permeable only to sodium ions when it is in the activated state. When the membrane potential is low, the channel spends most of its time in the deactivated (closed) state. If the membrane potential is raised above a certain level, the channel shows increased probability of transitioning to the activated (open) state. The higher the membrane potential the greater the probability of activation. Once a channel has activated, it will eventually transition to the inactivated (closed) state. It tends then to stay inactivated for some time, but, if the membrane potential becomes low again, the channel will eventually transition back to the deactivated state. 
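The probabilistic, voltage-dependent gating just described can be pictured with a toy simulation. The sketch below treats each NaV channel as a three-state Markov chain (deactivated, activated, inactivated); the per-millisecond transition probabilities are invented illustrative values, not measured Hodgkin–Huxley rate constants, and the 1 ms time step and voltage protocol are likewise assumptions.

```python
import random

# A toy Markov-chain sketch (illustrative values only) of the three NaV channel
# states described above: deactivated (closed), activated (open), inactivated
# (closed). Real kinetics are voltage-dependent in a more complicated way.

def transition_rates(state, vm_mv):
    """Per-millisecond transition probabilities out of `state` at voltage vm_mv."""
    if state == "deactivated":
        # Opening becomes much more likely as the membrane depolarizes.
        p_open = min(0.9, max(0.0, (vm_mv + 55.0) / 50.0))
        return {"activated": p_open}
    if state == "activated":
        # Open channels inactivate quickly regardless of voltage.
        return {"inactivated": 0.5}
    # Inactivated channels recover (to deactivated) only at low voltages.
    p_recover = 0.2 if vm_mv < -60.0 else 0.01
    return {"deactivated": p_recover}

def step(state, vm_mv):
    """Advance one channel by one 1-ms time step."""
    r = random.random()
    cumulative = 0.0
    for target, p in transition_rates(state, vm_mv).items():
        cumulative += p
        if r < cumulative:
            return target
    return state

if __name__ == "__main__":
    random.seed(1)
    n_channels = 1000
    states = ["deactivated"] * n_channels
    # Hold the membrane depolarized for 5 ms, then return it to rest for 10 ms.
    voltage = [0.0] * 5 + [-70.0] * 10
    for t, vm in enumerate(voltage):
        states = [step(s, vm) for s in states]
        frac_open = states.count("activated") / n_channels
        print(f"t={t+1:2d} ms  Vm={vm:6.1f} mV  open fraction={frac_open:.2f}")
```

Run over a population of channels, this sketch should reproduce the qualitative picture in the text: a depolarizing step produces a brief surge of open channels that then inactivate, and recovery occurs only after the voltage is returned to a low value.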
During an action potential, most channels of this type go through a cycle deactivated→activated→inactivated→deactivated. This is only the population average behavior, however – an individual channel can in principle make any transition at any time. However, the likelihood of a channel's transitioning from the inactivated state directly to the activated state is very low: A channel in the inactivated state is refractory until it has transitioned back to the deactivated state. The outcome of all this is that the kinetics of the NaV channels are governed by a transition matrix whose rates are voltage-dependent in a complicated way. Since these channels themselves play a major role in determining the voltage, the global dynamics of the system can be quite difficult to work out. Hodgkin and Huxley approached the problem by developing a set of differential equations for the parameters that govern the ion channel states, known as the Hodgkin-Huxley equations. These equations have been extensively modified by later research, but form the starting point for most theoretical studies of action potential biophysics. As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell fires, producing an action potential. The frequency at which a neuron elicits action potentials is often referred to as a firing rate or neural firing rate. Currents produced by the opening of voltage-gated channels in the course of an action potential are typically significantly larger than the initial stimulating current. Thus, the amplitude, duration, and shape of the action potential are determined largely by the properties of the excitable membrane and not the amplitude or duration of the stimulus. This all-or-nothing property of the action potential sets it apart from graded potentials such as receptor potentials, electrotonic potentials, subthreshold membrane potential oscillations, and synaptic potentials, which scale with the magnitude of the stimulus. A variety of action potential types exist in many cell types and cell compartments as determined by the types of voltage-gated channels, leak channels, channel distributions, ionic concentrations, membrane capacitance, temperature, and other factors. The principal ions involved in an action potential are sodium and potassium cations; sodium ions enter the cell, and potassium ions leave, restoring equilibrium. Relatively few ions need to cross the membrane for the membrane voltage to change drastically. 
The ions exchanged during an action potential, therefore, make a negligible change in the interior and exterior ionic concentrations. The few ions that do cross are pumped out again by the continuous action of the sodium–potassium pump, which, with other ion transporters, maintains the normal ratio of ion concentrations across the membrane. Calcium cations and chloride anions are involved in a few types of action potentials, such as the cardiac action potential and the action potential in the single-cell alga Acetabularia, respectively. Although action potentials are generated locally on patches of excitable membrane, the resulting currents can trigger action potentials on neighboring stretches of membrane, precipitating a domino-like propagation. In contrast to passive spread of electric potentials (electrotonic potential), action potentials are generated anew along excitable stretches of membrane and propagate without decay. Myelinated sections of axons are not excitable and do not produce action potentials; in these sections the signal is propagated passively as an electrotonic potential. Regularly spaced unmyelinated patches, called the nodes of Ranvier, generate action potentials to boost the signal. Known as saltatory conduction, this type of signal propagation provides a favorable tradeoff of signal velocity and axon diameter. Depolarization of axon terminals, in general, triggers the release of neurotransmitter into the synaptic cleft. In addition, backpropagating action potentials have been recorded in the dendrites of pyramidal neurons, which are ubiquitous in the neocortex. These are thought to have a role in spike-timing-dependent plasticity. In the Hodgkin–Huxley membrane capacitance model, the speed of transmission of an action potential was undefined, and it was assumed that adjacent areas became depolarized due to released ion interference with neighboring channels. Measurements of ion diffusion and radii have since shown this not to be possible. Moreover, contradictory measurements of entropy changes and timing disputed the capacitance model as acting alone. Alternatively, Gilbert Ling's adsorption hypothesis posits that the membrane potential and action potential of a living cell are due to the adsorption of mobile ions onto adsorption sites of cells. Maturation of the electrical properties of the action potential A neuron's ability to generate and propagate an action potential changes during development. How much the membrane potential of a neuron changes as the result of a current impulse is a function of the membrane input resistance. As a cell grows, more channels are added to the membrane, causing a decrease in input resistance. A mature neuron also undergoes shorter changes in membrane potential in response to synaptic currents. Neurons from a ferret lateral geniculate nucleus have a longer time constant and larger voltage deflection at P0 than they do at P30. One consequence of the decreasing action potential duration is that the fidelity of the signal can be preserved in response to high frequency stimulation. Immature neurons are more prone to synaptic depression than potentiation after high frequency stimulation. In the early development of many organisms, the action potential is actually initially carried by calcium current rather than sodium current. The opening and closing kinetics of calcium channels during development are slower than those of the voltage-gated sodium channels that will carry the action potential in the mature neurons.
The longer opening times for the calcium channels can lead to action potentials that are considerably slower than those of mature neurons. Xenopus neurons initially have action potentials that take 60–90 ms. During development, this time decreases to 1 ms. There are two reasons for this drastic decrease. First, the inward current becomes primarily carried by sodium channels. Second, the delayed rectifier, a potassium channel current, increases to 3.5 times its initial strength. In order for the transition from a calcium-dependent action potential to a sodium-dependent action potential to proceed new channels must be added to the membrane. If Xenopus neurons are grown in an environment with RNA synthesis or protein synthesis inhibitors that transition is prevented. Even the electrical activity of the cell itself may play a role in channel expression. If action potentials in Xenopus myocytes are blocked, the typical increase in sodium and potassium current density is prevented or delayed. This maturation of electrical properties is seen across species. Xenopus sodium and potassium currents increase drastically after a neuron goes through its final phase of mitosis. The sodium current density of rat cortical neurons increases by 600% within the first two postnatal weeks. Neurotransmission Anatomy of a neuron Several types of cells support an action potential, such as plant cells, muscle cells, and the specialized cells of the heart (in which occurs the cardiac action potential). However, the main excitable cell is the neuron, which also has the simplest mechanism for the action potential. Neurons are electrically excitable cells composed, in general, of one or more dendrites, a single soma, a single axon and one or more axon terminals. Dendrites are cellular projections whose primary function is to receive synaptic signals. Their protrusions, known as dendritic spines, are designed to capture the neurotransmitters released by the presynaptic neuron. They have a high concentration of ligand-gated ion channels. These spines have a thin neck connecting a bulbous protrusion to the dendrite. This ensures that changes occurring inside the spine are less likely to affect the neighboring spines. The dendritic spine can, with rare exception (see LTP), act as an independent unit. The dendrites extend from the soma, which houses the nucleus, and many of the "normal" eukaryotic organelles. Unlike the spines, the surface of the soma is populated by voltage activated ion channels. These channels help transmit the signals generated by the dendrites. Emerging out from the soma is the axon hillock. This region is characterized by having a very high concentration of voltage-activated sodium channels. In general, it is considered to be the spike initiation zone for action potentials, i.e. the trigger zone. Multiple signals generated at the spines, and transmitted by the soma all converge here. Immediately after the axon hillock is the axon. This is a thin tubular protrusion traveling away from the soma. The axon is insulated by a myelin sheath. Myelin is composed of either Schwann cells (in the peripheral nervous system) or oligodendrocytes (in the central nervous system), both of which are types of glial cells. Although glial cells are not involved with the transmission of electrical signals, they communicate and provide important biochemical support to neurons. 
To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that prevents ions from entering or escaping the axon. This insulation prevents significant signal decay as well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which have no insulation. These nodes of Ranvier can be considered to be "mini axon hillocks", as their purpose is to boost the signal in order to prevent significant signal decay. At the furthest end, the axon loses its insulation and begins to branch into several axon terminals. These presynaptic terminals, or synaptic boutons, are a specialized area within the axon of the presynaptic cell that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles. Initiation Before considering the propagation of action potentials along axons and their termination at the synaptic knobs, it is helpful to consider the methods by which action potentials can be initiated at the axon hillock. The basic requirement is that the membrane voltage at the hillock be raised above the threshold for firing. There are several ways in which this depolarization can occur. Dynamics Action potentials are most commonly initiated by excitatory postsynaptic potentials from a presynaptic neuron. Typically, neurotransmitter molecules are released by the presynaptic neuron. These neurotransmitters then bind to receptors on the postsynaptic cell. This binding opens various types of ion channels. This opening has the further effect of changing the local permeability of the cell membrane and, thus, the membrane potential. If the binding increases the voltage (depolarizes the membrane), the synapse is excitatory. If, however, the binding decreases the voltage (hyperpolarizes the membrane), it is inhibitory. Whether the voltage is increased or decreased, the change propagates passively to nearby regions of the membrane (as described by the cable equation and its refinements). Typically, the voltage stimulus decays exponentially with the distance from the synapse and with time from the binding of the neurotransmitter. Some fraction of an excitatory voltage may reach the axon hillock and may (in rare cases) depolarize the membrane enough to provoke a new action potential. More typically, the excitatory potentials from several synapses must work together at nearly the same time to provoke a new action potential. Their joint efforts can be thwarted, however, by the counteracting inhibitory postsynaptic potentials. Neurotransmission can also occur through electrical synapses. Due to the direct connection between excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated transmission. Rectifying channels ensure that action potentials move only in one direction through an electrical synapse. Electrical synapses are found in all nervous systems, including the human brain, although they are a distinct minority. "All-or-none" principle The amplitude of an action potential is often thought to be independent of the amount of current that produced it. In other words, larger currents do not create larger action potentials. Therefore, action potentials are said to be all-or-none signals, since either they occur fully or they do not occur at all. 
This is in contrast to receptor potentials, whose amplitudes are dependent on the intensity of a stimulus. In both cases, the frequency of action potentials is correlated with the intensity of a stimulus. Although the classical view of the action potential as a stereotyped, uniform signal dominated the field of neuroscience for many decades, newer evidence suggests that action potentials are more complex events that can transmit information not only through their amplitude but also through their duration and phase, sometimes even over distances originally not thought to be possible. Sensory neurons In sensory neurons, an external signal such as pressure, temperature, light, or sound is coupled with the opening and closing of ion channels, which in turn alter the ionic permeabilities of the membrane and its voltage. These voltage changes can again be excitatory (depolarizing) or inhibitory (hyperpolarizing) and, in some sensory neurons, their combined effects can depolarize the axon hillock enough to provoke action potentials. Some examples in humans include the olfactory receptor neuron and Meissner's corpuscle, which are critical for the sense of smell and touch, respectively. However, not all sensory neurons convert their external signals into action potentials; some do not even have an axon. Instead, they may convert the signal into the release of a neurotransmitter, or into continuous graded potentials, either of which may stimulate subsequent neuron(s) into firing an action potential. For illustration, in the human ear, hair cells convert the incoming sound into the opening and closing of mechanically gated ion channels, which may cause neurotransmitter molecules to be released. In a similar manner, in the human retina, the initial photoreceptor cells and the next layer of cells (comprising bipolar cells and horizontal cells) do not produce action potentials; only some amacrine cells and the third layer, the ganglion cells, produce action potentials, which then travel up the optic nerve. Pacemaker potentials In sensory neurons, action potentials result from an external stimulus. However, some excitable cells require no such stimulus to fire: They spontaneously depolarize their axon hillock and fire action potentials at a regular rate, like an internal clock. The voltage traces of such cells are known as pacemaker potentials. The cardiac pacemaker cells of the sinoatrial node in the heart provide a good example. Although such pacemaker potentials have a natural rhythm, that rhythm can be adjusted by external stimuli; for instance, heart rate can be altered by pharmaceuticals as well as signals from the sympathetic and parasympathetic nerves. The external stimuli do not cause the cell's repetitive firing, but merely alter its timing. In some cases, the regulation of frequency can be more complex, leading to patterns of action potentials, such as bursting. Phases The course of the action potential can be divided into five parts: the rising phase, the peak phase, the falling phase, the undershoot phase, and the refractory period. During the rising phase, the membrane potential depolarizes (becomes more positive). The point at which depolarization stops is called the peak phase. At this stage, the membrane potential reaches a maximum. Subsequent to this, there is a falling phase. During this stage, the membrane potential becomes more negative, returning towards resting potential.
The undershoot, or afterhyperpolarization, phase is the period during which the membrane potential temporarily becomes more negatively charged than when at rest (hyperpolarized). Finally, the time during which a subsequent action potential is impossible or difficult to fire is called the refractory period, which may overlap with the other phases. The course of the action potential is determined by two coupled effects. First, voltage-sensitive ion channels open and close in response to changes in the membrane voltage Vm. This changes the membrane's permeability to those ions. Second, according to the Goldman equation, this change in permeability changes the equilibrium potential Em, and, thus, the membrane voltage Vm. Thus, the membrane potential affects the permeability, which then further affects the membrane potential. This sets up the possibility for positive feedback, which is a key part of the rising phase of the action potential. A complicating factor is that a single ion channel may have multiple internal "gates" that respond to changes in Vm in opposite ways, or at different rates. For example, although raising Vm opens most gates in the voltage-sensitive sodium channel, it also closes the channel's "inactivation gate", albeit more slowly. Hence, when Vm is raised suddenly, the sodium channels open initially, but then close due to the slower inactivation. The voltages and currents of the action potential in all of its phases were modeled accurately by Alan Lloyd Hodgkin and Andrew Huxley in 1952, for which they were awarded the Nobel Prize in Physiology or Medicine in 1963. However, their model considers only two types of voltage-sensitive ion channels, and makes several assumptions about them, e.g., that their internal gates open and close independently of one another. In reality, there are many types of ion channels, and they do not always open and close independently. Stimulation and rising phase A typical action potential begins at the axon hillock with a sufficiently strong depolarization, e.g., a stimulus that increases Vm. This depolarization is often caused by the injection of extra sodium cations into the cell; these cations can come from a wide variety of sources, such as chemical synapses, sensory neurons or pacemaker potentials. For a neuron at rest, there is a high concentration of sodium and chloride ions in the extracellular fluid compared to the intracellular fluid, while there is a high concentration of potassium ions in the intracellular fluid compared to the extracellular fluid. The difference in concentrations, which causes ions to move from a high to a low concentration, and electrostatic effects (attraction of opposite charges) are responsible for the movement of ions in and out of the neuron. The inside of a neuron has a negative charge, relative to the cell exterior, from the movement of K+ out of the cell. The neuron membrane is more permeable to K+ than to other ions, allowing this ion to selectively move out of the cell, down its concentration gradient. This concentration gradient along with potassium leak channels present on the membrane of the neuron causes an efflux of potassium ions making the resting potential close to EK ≈ –75 mV. Since Na+ ions are in higher concentrations outside of the cell, the concentration and voltage differences both drive them into the cell when Na+ channels open. Depolarization opens both the sodium and potassium channels in the membrane, allowing the ions to flow into and out of the axon, respectively. 
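The potassium equilibrium value quoted above (EK ≈ −75 mV), and the corresponding sodium equilibrium potential that drives the rising phase, follow from the Nernst equation, E_ion = (RT/zF)·ln([ion]out/[ion]in). The sketch below is a minimal illustration assuming typical textbook-style concentrations rather than values given in this article; with slightly different assumed concentrations the computed numbers shift by several millivolts, which is why quoted figures vary between sources.

```python
import math

# A short sketch of the Nernst equation, E_ion = (RT/zF) * ln([ion]_out/[ion]_in),
# evaluated with illustrative textbook-style concentrations (mM). The numbers are
# assumptions for demonstration, not values taken from the article.

R = 8.314       # gas constant, J/(mol*K)
F = 96485.0     # Faraday constant, C/mol
T = 310.0       # body temperature, K (~37 degrees C)

def nernst_mv(conc_out_mm, conc_in_mm, valence):
    """Equilibrium potential for one ion species, in millivolts."""
    return 1000.0 * (R * T) / (valence * F) * math.log(conc_out_mm / conc_in_mm)

if __name__ == "__main__":
    print(f"E_K  = {nernst_mv(  5.0, 140.0, +1):6.1f} mV")   # roughly -89 mV
    print(f"E_Na = {nernst_mv(145.0,  15.0, +1):6.1f} mV")   # roughly +61 mV
    print(f"E_Cl = {nernst_mv(110.0,  10.0, -1):6.1f} mV")   # roughly -64 mV
```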
If the depolarization is small (say, increasing Vm from −70 mV to −60 mV), the outward potassium current overwhelms the inward sodium current and the membrane repolarizes back to its normal resting potential around −70 mV. However, if the depolarization is large enough, the inward sodium current increases more than the outward potassium current and a runaway condition (positive feedback) results: the more inward current there is, the more Vm increases, which in turn further increases the inward current. A sufficiently strong depolarization (increase in Vm) causes the voltage-sensitive sodium channels to open; the increasing permeability to sodium drives Vm closer to the sodium equilibrium voltage ENa≈ +55 mV. The increasing voltage in turn causes even more sodium channels to open, which pushes Vm still further towards ENa. This positive feedback continues until the sodium channels are fully open and Vm is close to ENa. The sharp rise in Vm and sodium permeability correspond to the rising phase of the action potential. The critical threshold voltage for this runaway condition is usually around −45 mV, but it depends on the recent activity of the axon. A cell that has just fired an action potential cannot fire another one immediately, since the Na+ channels have not recovered from the inactivated state. The period during which no new action potential can be fired is called the absolute refractory period. At longer times, after some but not all of the ion channels have recovered, the axon can be stimulated to produce another action potential, but with a higher threshold, requiring a much stronger depolarization, e.g., to −30 mV. The period during which action potentials are unusually difficult to evoke is called the relative refractory period. Peak phase The positive feedback of the rising phase slows and comes to a halt as the sodium ion channels become maximally open. At the peak of the action potential, the sodium permeability is maximized and the membrane voltage Vm is nearly equal to the sodium equilibrium voltage ENa. However, the same raised voltage that opened the sodium channels initially also slowly shuts them off, by closing their pores; the sodium channels become inactivated. This lowers the membrane's permeability to sodium relative to potassium, driving the membrane voltage back towards the resting value. At the same time, the raised voltage opens voltage-sensitive potassium channels; the increase in the membrane's potassium permeability drives Vm towards EK. Combined, these changes in sodium and potassium permeability cause Vm to drop quickly, repolarizing the membrane and producing the "falling phase" of the action potential. Afterhyperpolarization The depolarized voltage opens additional voltage-dependent potassium channels, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The intracellular concentration of potassium ions is transiently unusually low, making the membrane voltage Vm even closer to the potassium equilibrium voltage EK. The membrane potential goes below the resting membrane potential. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization, that persists until the membrane potassium permeability returns to its usual value, restoring the membrane potential to the resting state. 
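The swing of Vm between a value near EK at rest and a value near ENa at the peak, described above, is captured by the Goldman (GHK) voltage equation mentioned earlier, which weights each ion's contribution by the membrane's relative permeability to it. The sketch below is illustrative only: the permeability ratios and concentrations are assumptions chosen to show the qualitative swing, not measured values.

```python
import math

# A sketch of the Goldman (GHK) voltage equation mentioned above:
#   Vm = (RT/F) * ln( (pK*[K]o + pNa*[Na]o + pCl*[Cl]i) /
#                     (pK*[K]i + pNa*[Na]i + pCl*[Cl]o) )
# The relative permeabilities and concentrations below are illustrative
# assumptions, chosen only to show how raising sodium permeability swings
# the predicted potential from near E_K toward E_Na.

R, F, T = 8.314, 96485.0, 310.0
K_OUT, K_IN   = 5.0, 140.0      # mM
NA_OUT, NA_IN = 145.0, 15.0     # mM
CL_OUT, CL_IN = 110.0, 10.0     # mM

def goldman_mv(p_k, p_na, p_cl):
    """Predicted membrane potential (mV) for a given set of relative permeabilities."""
    num = p_k * K_OUT + p_na * NA_OUT + p_cl * CL_IN
    den = p_k * K_IN  + p_na * NA_IN  + p_cl * CL_OUT
    return 1000.0 * (R * T / F) * math.log(num / den)

if __name__ == "__main__":
    # Rest: membrane mostly permeable to K+, so Vm sits near E_K.
    print(f"rest  (pNa/pK = 0.05): {goldman_mv(1.0, 0.05, 0.45):6.1f} mV")
    # Peak of the spike: Na+ permeability dominates, so Vm approaches E_Na.
    print(f"spike (pNa/pK = 20):   {goldman_mv(1.0, 20.0, 0.45):6.1f} mV")
```

In the same spirit, making the potassium term dominate even more strongly than at rest pushes the predicted potential below the resting value, which parallels the afterhyperpolarization described above.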
Refractory period Each action potential is followed by a refractory period, which can be divided into an absolute refractory period, during which it is impossible to evoke another action potential, and then a relative refractory period, during which a stronger-than-usual stimulus is required. These two refractory periods are caused by changes in the state of sodium and potassium channel molecules. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential—this gives rise to the absolute refractory period. Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable. The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons. At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential. Propagation The action potential generated at the axon hillock propagates as a wave along the axon. The currents flowing inwards at a point on the axon during an action potential spread out along the axon, and depolarize the adjacent sections of its membrane. If sufficiently strong, this depolarization provokes a similar action potential at the neighboring membrane patches. This basic mechanism was demonstrated by Alan Lloyd Hodgkin in 1937. After crushing or cooling nerve segments and thus blocking the action potentials, he showed that an action potential arriving on one side of the block could provoke another action potential on the other, provided that the blocked segment was sufficiently short. Once an action potential has occurred at a patch of membrane, the membrane patch needs time to recover before it can fire again. At the molecular level, this absolute refractory period corresponds to the time required for the voltage-activated sodium channels to recover from inactivation, i.e., to return to their closed state. There are many types of voltage-activated potassium channels in neurons. Some of them inactivate quickly (A-type currents) and some inactivate slowly or do not inactivate at all; this variability guarantees that there will always be an available source of current for repolarization, even if some of the potassium channels are inactivated because of a preceding depolarization. On the other hand, all neuronal voltage-activated sodium channels inactivate within several milliseconds during strong depolarization, thus making a subsequent depolarization impossible until a substantial fraction of sodium channels have returned to their closed state. Although it limits the frequency of firing, the absolute refractory period ensures that the action potential moves in only one direction along an axon. The currents flowing in due to an action potential spread out in both directions along the axon.
However, only the unfired part of the axon can respond with an action potential; the part that has just fired is unresponsive until the action potential is safely out of range and cannot restimulate that part. In the usual orthodromic conduction, the action potential propagates from the axon hillock towards the synaptic knobs (the axonal termini); propagation in the opposite direction—known as antidromic conduction—is very rare. However, if a laboratory axon is stimulated in its middle, both halves of the axon are "fresh", i.e., unfired; then two action potentials will be generated, one traveling towards the axon hillock and the other traveling towards the synaptic knobs. Myelin and saltatory conduction In order to enable fast and efficient transduction of electrical signals in the nervous system, certain neuronal axons are covered with myelin sheaths. Myelin is a multilamellar membrane that enwraps the axon in segments separated by intervals known as nodes of Ranvier. It is produced by specialized cells: Schwann cells exclusively in the peripheral nervous system, and oligodendrocytes exclusively in the central nervous system. The myelin sheath reduces membrane capacitance and increases membrane resistance in the inter-node intervals, thus allowing a fast, saltatory movement of action potentials from node to node. Myelination is found mainly in vertebrates, but an analogous system has been discovered in a few invertebrates, such as some species of shrimp. Not all neurons in vertebrates are myelinated; for example, axons of the neurons comprising the autonomic nervous system are not, in general, myelinated. Myelin prevents ions from entering or leaving the axon along myelinated segments. As a general rule, myelination increases the conduction velocity of action potentials and makes them more energy-efficient. Whether saltatory or not, the mean conduction velocity of an action potential ranges from 1 meter per second (m/s) to over 100 m/s, and, in general, increases with axonal diameter. Action potentials cannot propagate through the membrane in myelinated segments of the axon. Instead, the current is carried by the cytoplasm, which is sufficient to depolarize the first or second subsequent node of Ranvier. Thus, the ionic current from an action potential at one node of Ranvier provokes another action potential at the next node; this apparent "hopping" of the action potential from node to node is known as saltatory conduction. Although the mechanism of saltatory conduction was suggested in 1925 by Ralph Lillie, the first experimental evidence for saltatory conduction came from Ichiji Tasaki and Taiji Takeuchi and from Andrew Huxley and Robert Stämpfli. By contrast, in unmyelinated axons, the action potential provokes another in the membrane immediately adjacent, and moves continuously down the axon like a wave. Myelin has two important advantages: fast conduction speed and energy efficiency. For axons larger than a minimum diameter (roughly 1 micrometre), myelination increases the conduction velocity of an action potential, typically tenfold. Conversely, for a given conduction velocity, myelinated fibers are smaller than their unmyelinated counterparts. For example, action potentials move at roughly the same speed (25 m/s) in a myelinated frog axon and an unmyelinated squid giant axon, but the frog axon has a roughly 30-fold smaller diameter and 1000-fold smaller cross-sectional area.
Also, since the ionic currents are confined to the nodes of Ranvier, far fewer ions "leak" across the membrane, saving metabolic energy. This saving is a significant selective advantage, since the human nervous system uses approximately 20% of the body's metabolic energy. The length of axons' myelinated segments is important to the success of saltatory conduction. They should be as long as possible to maximize the speed of conduction, but not so long that the arriving signal is too weak to provoke an action potential at the next node of Ranvier. In nature, myelinated segments are generally long enough for the passively propagated signal to travel for at least two nodes while retaining enough amplitude to fire an action potential at the second or third node. Thus, the safety factor of saltatory conduction is high, allowing transmission to bypass nodes in case of injury. However, action potentials may end prematurely in certain places where the safety factor is low, even in unmyelinated neurons; a common example is the branch point of an axon, where it divides into two axons. Some diseases degrade myelin and impair saltatory conduction, reducing the conduction velocity of action potentials. The most well-known of these is multiple sclerosis, in which the breakdown of myelin impairs coordinated movement. Cable theory The flow of currents within an axon can be described quantitatively by cable theory and its elaborations, such as the compartmental model. Cable theory was developed in 1855 by Lord Kelvin to model the transatlantic telegraph cable and was shown to be relevant to neurons by Hodgkin and Rushton in 1946. In simple cable theory, the neuron is treated as an electrically passive, perfectly cylindrical transmission cable, which can be described by the partial differential equation

\[
\tau \frac{\partial V}{\partial t} = \lambda^{2} \frac{\partial^{2} V}{\partial x^{2}} - V,
\]

where V(x, t) is the voltage across the membrane at a time t and a position x along the length of the neuron, and where λ and τ are the characteristic length and time scales on which those voltages decay in response to a stimulus. These scales can be determined from the resistances and capacitances per unit length: in the simplest form of the theory, τ = rm cm and λ = √(rm / ri), where rm and cm are the membrane resistance and capacitance per unit length and ri is the intracellular (axial) resistance per unit length. These time and length scales can be used to understand the dependence of the conduction velocity on the diameter of the neuron in unmyelinated fibers. For example, the time scale τ increases with both the membrane resistance rm and capacitance cm. As the capacitance increases, more charge must be transferred to produce a given transmembrane voltage (by the equation Q = CV); as the resistance increases, less charge is transferred per unit time, making the equilibration slower. In a similar manner, if the internal resistance per unit length ri is lower in one axon than in another (e.g., because the radius of the former is larger), the spatial decay length λ becomes longer and the conduction velocity of an action potential should increase. If the transmembrane resistance rm is increased, that lowers the average "leakage" current across the membrane, likewise causing λ to become longer, increasing the conduction velocity. Termination Chemical synapses In general, action potentials that reach the synaptic knobs cause a neurotransmitter to be released into the synaptic cleft. Neurotransmitters are small molecules that may open ion channels in the postsynaptic cell; most axons have the same neurotransmitter at all of their termini.
The arrival of the action potential opens voltage-sensitive calcium channels in the presynaptic membrane; the influx of calcium causes vesicles filled with neurotransmitter to migrate to the cell's surface and release their contents into the synaptic cleft. This complex process is inhibited by the neurotoxins tetanospasmin and botulinum toxin, which are responsible for tetanus and botulism, respectively. Electrical synapses Some synapses dispense with the "middleman" of the neurotransmitter, and connect the presynaptic and postsynaptic cells together. When an action potential reaches such a synapse, the ionic currents flowing into the presynaptic cell can cross the barrier of the two cell membranes and enter the postsynaptic cell through pores known as connexons. Thus, the ionic currents of the presynaptic action potential can directly stimulate the postsynaptic cell. Electrical synapses allow for faster transmission because they do not require the slow diffusion of neurotransmitters across the synaptic cleft. Hence, electrical synapses are used whenever fast response and coordination of timing are crucial, as in escape reflexes, the retina of vertebrates, and the heart. Neuromuscular junctions A special case of a chemical synapse is the neuromuscular junction, in which the axon of a motor neuron terminates on a muscle fiber. In such cases, the released neurotransmitter is acetylcholine, which binds to the acetylcholine receptor, an integral membrane protein in the membrane (the sarcolemma) of the muscle fiber. However, the acetylcholine does not remain bound; rather, it dissociates and is hydrolyzed by the enzyme, acetylcholinesterase, located in the synapse. This enzyme quickly reduces the stimulus to the muscle, which allows the degree and timing of muscular contraction to be regulated delicately. Some poisons inactivate acetylcholinesterase to prevent this control, such as the nerve agents sarin and tabun, and the insecticides diazinon and malathion. Other cell types Cardiac action potentials The cardiac action potential differs from the neuronal action potential by having an extended plateau, in which the membrane is held at a high voltage for a few hundred milliseconds prior to being repolarized by the potassium current as usual. This plateau is due to the action of slower calcium channels opening and holding the membrane voltage near their equilibrium potential even after the sodium channels have inactivated. The cardiac action potential plays an important role in coordinating the contraction of the heart. The cardiac cells of the sinoatrial node provide the pacemaker potential that synchronizes the heart. The action potentials of those cells propagate to and through the atrioventricular node (AV node), which is normally the only conduction pathway between the atria and the ventricles. Action potentials from the AV node travel through the bundle of His and thence to the Purkinje fibers. Conversely, anomalies in the cardiac action potential—whether due to a congenital mutation or injury—can lead to human pathologies, especially arrhythmias. Several anti-arrhythmia drugs act on the cardiac action potential, such as quinidine, lidocaine, beta blockers, and verapamil. Muscular action potentials The action potential in a normal skeletal muscle cell is similar to the action potential in neurons. 
Action potentials result from the depolarization of the cell membrane (the sarcolemma), which opens voltage-sensitive sodium channels; these become inactivated and the membrane is repolarized through the outward current of potassium ions. The resting potential prior to the action potential is typically −90 mV, somewhat more negative than that of typical neurons. The muscle action potential lasts roughly 2–4 ms, the absolute refractory period is roughly 1–3 ms, and the conduction velocity along the muscle is roughly 5 m/s. The action potential releases calcium ions that free up the tropomyosin and allow the muscle to contract. Muscle action potentials are provoked by the arrival of a pre-synaptic neuronal action potential at the neuromuscular junction, which is a common target for neurotoxins. Plant action potentials Plant and fungal cells are also electrically excitable. The fundamental difference from animal action potentials is that the depolarization in plant cells is not accomplished by an uptake of positive sodium ions, but by release of negative chloride ions. In 1906, J. C. Bose published the first measurements of action potentials in plants, which had previously been discovered by Burdon-Sanderson and Darwin. An increase in cytoplasmic calcium ions may be the cause of anion release into the cell. This makes calcium a precursor to ion movements, such as the influx of negative chloride ions and efflux of positive potassium ions, as seen in barley leaves. The initial influx of calcium ions also causes a small cellular depolarization, causing the voltage-gated ion channels to open and allowing full depolarization to be propagated by chloride ions. Some plants (e.g. Dionaea muscipula) use sodium-gated channels to operate plant movements and "count" stimulation events to determine if a threshold for movement is met. Dionaea muscipula, also known as the Venus flytrap, is found in subtropical wetlands in North and South Carolina. When soil nutrients are poor, the flytrap relies on a diet of insects and other small animals. Despite research on the plant, the molecular basis of the Venus flytrap's behavior, and of carnivorous plants in general, remains poorly understood. However, plenty of research has been done on action potentials and how they affect movement and timing within the Venus flytrap. To start, the resting membrane potential of the Venus flytrap (−120 mV) is lower than that of animal cells (usually −90 mV to −40 mV). The lower resting potential makes it easier to activate an action potential. Thus, when an insect lands on the trap of the plant, it triggers a hair-like mechanoreceptor. This receptor then activates an action potential that lasts around 1.5 ms. This causes an influx of positive calcium ions into the cell, slightly depolarizing it. However, the flytrap does not close after one trigger. Instead, it requires the activation of two or more hairs. If only one hair is triggered, the plant disregards the activation as a false positive. Further, the second hair must be activated within a certain time interval (0.75–40 s) for it to register with the first activation. Thus, calcium begins to build up after the first trigger and then slowly falls. When the second action potential is fired within the time interval, it reaches the calcium threshold to depolarize the cell, closing the trap on the prey within a fraction of a second. Together with the subsequent release of positive potassium ions, the action potential in plants involves an osmotic loss of salt (KCl).
The animal action potential, by contrast, is osmotically neutral because equal amounts of entering sodium and leaving potassium cancel each other osmotically. The interaction of electrical and osmotic relations in plant cells appears to have arisen from an osmotic function of electrical excitability in a common unicellular ancestor of plants and animals under changing salinity conditions. Further, the present function of rapid signal transmission is seen as a newer accomplishment of metazoan cells in a more stable osmotic environment. It is likely that the familiar signaling function of action potentials in some vascular plants (e.g. Mimosa pudica) arose independently from that in metazoan excitable cells. Unlike the rising phase and peak, the falling phase and after-hyperpolarization seem to depend primarily on cations that are not calcium. To initiate repolarization, the cell requires movement of potassium out of the cell through passive transport across the membrane. This differs from neurons because the movement of potassium does not dominate the decrease in membrane potential. To fully repolarize, a plant cell requires energy in the form of ATP to assist in the release of hydrogen from the cell, using a transporter called the proton ATPase. Taxonomic distribution and evolutionary advantages Action potentials are found throughout multicellular organisms, including plants, invertebrates such as insects, and vertebrates such as reptiles and mammals. Sponges seem to be the main phylum of multicellular eukaryotes that does not transmit action potentials, although some studies have suggested that these organisms have a form of electrical signaling, too. The resting potential, as well as the size and duration of the action potential, have not varied much with evolution, although the conduction velocity does vary dramatically with axonal diameter and myelination. Given its conservation throughout evolution, the action potential seems to confer evolutionary advantages. One function of action potentials is rapid, long-range signaling within the organism; the conduction velocity can exceed 110 m/s, which is one-third the speed of sound. For comparison, a hormone molecule carried in the bloodstream moves at roughly 8 m/s in large arteries. Part of this function is the tight coordination of mechanical events, such as the contraction of the heart. A second function is the computation associated with its generation. Being an all-or-none signal that does not decay with transmission distance, the action potential has similar advantages to digital electronics. The integration of various dendritic signals at the axon hillock and its thresholding to form a complex train of action potentials is another form of computation, one that has been exploited biologically to form central pattern generators and mimicked in artificial neural networks. The common prokaryotic/eukaryotic ancestor, which lived perhaps four billion years ago, is believed to have had voltage-gated channels. This functionality was likely, at some later point, repurposed to provide a communication mechanism. Even modern single-celled bacteria can utilize action potentials to communicate with other bacteria in the same biofilm. Experimental methods The study of action potentials has required the development of new experimental methods.
The initial work, prior to 1955, was carried out primarily by Alan Lloyd Hodgkin and Andrew Fielding Huxley, who were, along with John Carew Eccles, awarded the 1963 Nobel Prize in Physiology or Medicine for their contribution to the description of the ionic basis of nerve conduction. It focused on three goals: isolating signals from single neurons or axons, developing fast, sensitive electronics, and shrinking electrodes enough that the voltage inside a single cell could be recorded. The first problem was solved by studying the giant axons found in the neurons of the squid (Loligo forbesii and Doryteuthis pealeii, at the time classified as Loligo pealeii). These axons are so large in diameter (roughly 1 mm, or 100-fold larger than a typical neuron) that they can be seen with the naked eye, making them easy to extract and manipulate. However, they are not representative of all excitable cells, and numerous other systems with action potentials have been studied. The second problem was addressed with the crucial development of the voltage clamp, which permitted experimenters to study the ionic currents underlying an action potential in isolation, and eliminated a key source of electronic noise, the current IC associated with the capacitance C of the membrane. Since the current equals C times the rate of change of the transmembrane voltage Vm, the solution was to design a circuit that kept Vm fixed (zero rate of change) regardless of the currents flowing across the membrane. Thus, the current required to keep Vm at a fixed value is a direct reflection of the current flowing through the membrane. Other electronic advances included the use of Faraday cages and electronics with high input impedance, so that the measurement itself did not affect the voltage being measured. The third problem, that of obtaining electrodes small enough to record voltages within a single axon without perturbing it, was solved in 1949 with the invention of the glass micropipette electrode, which was quickly adopted by other researchers. Refinements of this method are able to produce electrode tips that are as fine as 100 Å (10 nm), which also confers high input impedance. Action potentials may also be recorded with small metal electrodes placed just next to a neuron, with neurochips containing EOSFETs, or optically with dyes that are sensitive to Ca2+ or to voltage. While glass micropipette electrodes measure the sum of the currents passing through many ion channels, studying the electrical properties of a single ion channel became possible in the 1970s with the development of the patch clamp by Erwin Neher and Bert Sakmann. For this discovery, they were awarded the Nobel Prize in Physiology or Medicine in 1991. Patch-clamping verified that ionic channels have discrete states of conductance, such as open, closed and inactivated. Optical imaging technologies have been developed in recent years to measure action potentials, either via simultaneous multisite recordings or with ultra-high spatial resolution. Using voltage-sensitive dyes, action potentials have been optically recorded from a tiny patch of cardiomyocyte membrane. Neurotoxins Several neurotoxins, both natural and synthetic, function by blocking the action potential. Tetrodotoxin from the pufferfish and saxitoxin from the Gonyaulax (the dinoflagellate genus responsible for "red tides") block action potentials by inhibiting the voltage-sensitive sodium channel; similarly, dendrotoxin from the black mamba snake inhibits the voltage-sensitive potassium channel.
Such inhibitors of ion channels serve an important research purpose, by allowing scientists to "turn off" specific channels at will, thus isolating the other channels' contributions; they can also be useful in purifying ion channels by affinity chromatography or in assaying their concentration. However, such inhibitors also make effective neurotoxins, and have been considered for use as chemical weapons. Neurotoxins aimed at the ion channels of insects have been effective insecticides; one example is the synthetic permethrin, which prolongs the activation of the sodium channels involved in action potentials. The ion channels of insects are sufficiently different from their human counterparts that there are few side effects in humans. History The role of electricity in the nervous systems of animals was first observed in dissected frogs by Luigi Galvani, who studied it from 1791 to 1797. Galvani's results inspired Alessandro Volta to develop the Voltaic pile—the earliest-known electric battery—with which he studied animal electricity (such as electric eels) and the physiological responses to applied direct-current voltages. In the 19th century, scientists studied the propagation of electrical signals in whole nerves (i.e., bundles of neurons) and demonstrated that nervous tissue was made up of cells, instead of an interconnected network of tubes (a reticulum). Carlo Matteucci followed up Galvani's studies and demonstrated that injured nerves and muscles in frogs could produce direct current. Matteucci's work inspired the German physiologist Emil du Bois-Reymond, who discovered in 1843 that stimulating these muscle and nerve preparations produced a notable diminution in their resting currents, making him the first researcher to identify the electrical nature of the action potential. The conduction velocity of action potentials was then measured in 1850 by du Bois-Reymond's friend, Hermann von Helmholtz. Progress in electrophysiology stagnated thereafter due to the limitations of chemical theory and experimental practice. To establish that nervous tissue is made up of discrete cells, the Spanish physician Santiago Ramón y Cajal and his students used a stain developed by Camillo Golgi to reveal the myriad shapes of neurons, which they rendered painstakingly. For their discoveries, Golgi and Ramón y Cajal were awarded the 1906 Nobel Prize in Physiology or Medicine. Their work resolved a long-standing controversy in the neuroanatomy of the 19th century; Golgi himself had argued for the network model of the nervous system. The 20th century saw significant breakthroughs in electrophysiology. In 1902 and again in 1912, Julius Bernstein advanced the hypothesis that the action potential resulted from a change in the permeability of the axonal membrane to ions. Bernstein's hypothesis was confirmed by Ken Cole and Howard Curtis, who showed that membrane conductance increases during an action potential. In 1907, Louis Lapicque suggested that the action potential was generated as a threshold was crossed, which would later be shown to be a product of the dynamical systems of ionic conductances. In 1949, Alan Hodgkin and Bernard Katz refined Bernstein's hypothesis by considering that the axonal membrane might have different permeabilities to different ions; in particular, they demonstrated the crucial role of the sodium permeability for the action potential. They made the first actual recording of the electrical changes across the neuronal membrane that mediate the action potential.
This line of research culminated in the five 1952 papers of Hodgkin, Katz and Andrew Huxley, in which they applied the voltage clamp technique to determine the dependence of the axonal membrane's permeabilities to sodium and potassium ions on voltage and time, from which they were able to reconstruct the action potential quantitatively. Hodgkin and Huxley correlated the properties of their mathematical model with discrete ion channels that could exist in several different states, including "open", "closed", and "inactivated". Their hypotheses were confirmed in the mid-1970s and 1980s by Erwin Neher and Bert Sakmann, who developed the technique of patch clamping to examine the conductance states of individual ion channels. In the 21st century, researchers are beginning to understand the structural basis for these conductance states and for the selectivity of channels for their species of ion, through atomic-resolution crystal structures, fluorescence distance measurements and cryo-electron microscopy studies. Julius Bernstein was also the first to introduce the Nernst equation for resting potential across the membrane; this was generalized by David E. Goldman to the eponymous Goldman equation in 1943. The sodium–potassium pump was identified in 1957 and its properties gradually elucidated, culminating in the determination of its atomic-resolution structure by X-ray crystallography. The crystal structures of related ionic pumps have also been solved, giving a broader view of how these molecular machines work. Quantitative models Mathematical and computational models are essential for understanding the action potential, and offer predictions that may be tested against experimental data, providing a stringent test of a theory. The most important and accurate of the early neural models is the Hodgkin–Huxley model, which describes the action potential by a coupled set of four ordinary differential equations (ODEs). Although the Hodgkin–Huxley model is itself a simplification of the realistic nervous membrane as it exists in nature, its complexity has inspired several even-more-simplified models, such as the Morris–Lecar model and the FitzHugh–Nagumo model, both of which have only two coupled ODEs. The properties of the Hodgkin–Huxley and FitzHugh–Nagumo models and their relatives, such as the Bonhoeffer–Van der Pol model, have been well studied within mathematics, computation and electronics. However, the simple models of generator potential and action potential fail to accurately reproduce the near-threshold neural spike rate and spike shape, specifically for mechanoreceptors like the Pacinian corpuscle. More modern research has focused on larger and more integrated systems; by joining action-potential models with models of other parts of the nervous system (such as dendrites and synapses), researchers can study neural computation and simple reflexes, such as escape reflexes and others controlled by central pattern generators.
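To make the two-ODE simplification concrete, here is a minimal simulation sketch of the FitzHugh–Nagumo model using forward-Euler integration. It is an illustrative example only: Python with NumPy, the parameter values (a = 0.7, b = 0.8, eps = 0.08) and the constant external current of 0.5 are assumed choices, not values taken from this article.

```python
import numpy as np

def fitzhugh_nagumo(i_ext=0.5, a=0.7, b=0.8, eps=0.08, dt=0.01, t_max=200.0):
    """Forward-Euler integration of the two coupled FitzHugh-Nagumo ODEs:
        dv/dt = v - v**3 / 3 - w + i_ext   (fast, voltage-like variable)
        dw/dt = eps * (v + a - b * w)      (slow recovery variable)
    """
    steps = int(t_max / dt)
    v, w = -1.0, 1.0                      # arbitrary initial state
    trace = np.empty(steps)
    for k in range(steps):
        dv = v - v ** 3 / 3.0 - w + i_ext
        dw = eps * (v + a - b * w)
        v += dt * dv
        w += dt * dw
        trace[k] = v
    return trace

v = fitzhugh_nagumo()
# Count upward crossings of v = 1.0 as a crude proxy for spike-like excursions.
print(int(np.sum((v[1:] > 1.0) & (v[:-1] <= 1.0))))
```

With these assumed settings the fast variable repeatedly produces spike-like excursions, which is the qualitative behaviour the simplified two-variable models are intended to capture; a full Hodgkin–Huxley simulation would integrate four coupled equations instead of two.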
See also Anode break excitation Bioelectricity Biological neuron model Bursting Central pattern generator Chronaxie Frog battery Law of specific nerve energies Neural accommodation Single-unit recording Soliton model in neuroscience Notes References Footnotes Journal articles Books Web pages Further reading External links Ionic flow in action potentials at Blackwell Publishing Action potential propagation in myelinated and unmyelinated axons at Blackwell Publishing Generation of AP in cardiac cells and generation of AP in neuron cells Resting membrane potential from Life: The Science of Biology, by WK Purves, D Sadava, GH Orians, and HC Heller, 8th edition, New York: WH Freeman, . Ionic motion and the Goldman voltage for arbitrary ionic concentrations at The University of Arizona A cartoon illustrating the action potential Action potential propagation Production of the action potential: voltage and current clamping simulations Open-source software to simulate neuronal and cardiac action potentials at SourceForge.net Introduction to the Action Potential, Neuroscience Online (electronic neuroscience textbook by UT Houston Medical School) Khan Academy: Electrotonic and action potential Capacitors Neural coding Electrophysiology Electrochemistry Computational neuroscience Cellular neuroscience Cellular processes Membrane biology Plant intelligence Action potentials
Action potential
[ "Physics", "Chemistry", "Biology" ]
13,629
[ "Physical quantities", "Plants", "Membrane biology", "Plant intelligence", "Capacitors", "Electrochemistry", "Cellular processes", "Molecular biology", "Capacitance" ]
157,055
https://en.wikipedia.org/wiki/Law%20of%20large%20numbers
In probability theory, the law of large numbers (LLN) is a mathematical law that states that the average of the results obtained from a large number of independent random samples converges to the true value, if it exists. More formally, the LLN states that given a sample of independent and identically distributed values, the sample mean converges to the true mean. The LLN is important because it guarantees stable long-term results for the averages of some random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. Importantly, the law applies (as the name indicates) only when a large number of observations are considered. There is no principle that a small number of observations will coincide with the expected value or that a streak of one value will immediately be "balanced" by the others (see the gambler's fallacy). The LLN only applies to the average of the results obtained from repeated trials and claims that this average converges to the expected value; it does not claim that the sum of n results gets close to the expected value times n as n increases. Throughout its history, many mathematicians have refined this law. Today, the LLN is used in many fields including statistics, probability theory, economics, and insurance. Examples For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, or 6, each with equal probability. Therefore, the expected value of the roll is: According to the law of large numbers, if a large number of six-sided dice are rolled, the average of their values (sometimes called the sample mean) will approach 3.5, with the precision increasing as more dice are rolled. It follows from the law of large numbers that the empirical probability of success in a series of Bernoulli trials will converge to the theoretical probability. For a Bernoulli random variable, the expected value is the theoretical probability of success, and the average of n such variables (assuming they are independent and identically distributed (i.i.d.)) is precisely the relative frequency. For example, a fair coin toss is a Bernoulli trial. When a fair coin is flipped once, the theoretical probability that the outcome will be heads is equal to . Therefore, according to the law of large numbers, the proportion of heads in a "large" number of coin flips "should be" roughly . In particular, the proportion of heads after n flips will almost surely converge to as n approaches infinity. Although the proportion of heads (and tails) approaches , almost surely the absolute difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected difference grows, but at a slower rate than the number of flips. Another good example of the LLN is the Monte Carlo method. These methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The larger the number of repetitions, the better the approximation tends to be. The reason that this method is important is mainly that, sometimes, it is difficult or impossible to use other approaches. 
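The dice example lends itself to a quick simulation. The sketch below is only illustrative (Python with NumPy is assumed, and the seed and sample sizes are arbitrary); it shows the sample mean of fair six-sided dice drifting toward the expected value of 3.5 as the number of rolls grows.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sample mean of n fair six-sided dice; by the LLN this approaches 3.5.
for n in (10, 1_000, 100_000):
    rolls = rng.integers(1, 7, size=n)   # draws from {1, ..., 6}
    print(f"n = {n:>6}: sample mean = {rolls.mean():.3f}")
```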
Limitation The average of the results obtained from a large number of trials may fail to converge in some cases. For instance, the average of n results taken from the Cauchy distribution or some Pareto distributions (α<1) will not converge as n becomes larger; the reason is heavy tails. The Cauchy distribution and the Pareto distribution represent two cases: the Cauchy distribution does not have an expectation, whereas the expectation of the Pareto distribution (α<1) is infinite. One way to generate the Cauchy-distributed example is where the random numbers equal the tangent of an angle uniformly distributed between −90° and +90°. The median is zero, but the expected value does not exist, and indeed the average of n such variables has the same distribution as one such variable. It does not converge in probability toward zero (or any other value) as n goes to infinity. And if the trials embed a selection bias, typical in human economic/rational behaviour, the law of large numbers does not help in solving the bias. Even if the number of trials is increased, the selection bias remains. History The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S. D. Poisson further described it under the name "la loi des grands nombres" ("the law of large numbers"). Thereafter, it was known under both names, but the "law of large numbers" is most frequently used. After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin. Markov showed that the law can apply to a random variable that does not have a finite variance under some other weaker assumption, and Khinchin showed in 1929 that if the series consists of independent identically distributed random variables, it suffices that the expected value exists for the weak law of large numbers to be true. These further studies have given rise to two prominent forms of the LLN. One is called the "weak" law and the other the "strong" law, in reference to two different modes of convergence of the cumulative sample means to the expected value; in particular, as explained below, the strong form implies the weak. Forms There are two different versions of the law of large numbers that are described below. They are called the strong law of large numbers and the weak law of large numbers. Stated for the case where X1, X2, ... is an infinite sequence of independent and identically distributed (i.i.d.) Lebesgue integrable random variables with expected value E(X1) = E(X2) = ... = μ, both versions of the law state that the sample average converges to the expected value: (Lebesgue integrability of Xj means that the expected value E(Xj) exists according to Lebesgue integration and is finite. It does not mean that the associated probability measure is absolutely continuous with respect to Lebesgue measure.)
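Before the two forms are stated in detail, a short simulation sketch contrasts the i.i.d. finite-mean case just described with the heavy-tailed Cauchy case from the Limitation section above. Python with NumPy is assumed, and the Cauchy draws are generated exactly as described there, as tangents of uniformly distributed angles; the seed and sample sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
size = 1_000_000

# Cauchy samples: tan(U) with U uniform on (-90 deg, +90 deg), as described above.
cauchy = np.tan(rng.uniform(-np.pi / 2, np.pi / 2, size=size))
normal = rng.normal(0.0, 1.0, size=size)   # finite mean, so the LLN applies

for n in (100, 10_000, 1_000_000):
    print(f"n = {n:>9}: normal mean = {normal[:n].mean():+.4f}, "
          f"Cauchy mean = {cauchy[:n].mean():+.4f}")
```

The running mean of the normal draws settles near zero, while the Cauchy running mean keeps jumping however large n becomes, matching the limitation described above.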
Introductory probability texts often additionally assume identical finite variance (for all ) and no correlation between random variables. In that case, the variance of the average of n random variables is which can be used to shorten and simplify the proofs. This assumption of finite variance is not necessary. Large or infinite variance will make the convergence slower, but the LLN holds anyway. Mutual independence of the random variables can be replaced by pairwise independence or exchangeability in both versions of the law. The difference between the strong and the weak version is concerned with the mode of convergence being asserted. For interpretation of these modes, see Convergence of random variables. Weak law The weak law of large numbers (also called Khinchin's law) states that given a collection of independent and identically distributed (iid) samples from a random variable with finite mean, the sample mean converges in probability to the expected value That is, for any positive number ε, Interpreting this result, the weak law states that for any nonzero margin specified (ε), no matter how small, with a sufficiently large sample there will be a very high probability that the average of the observations will be close to the expected value; that is, within the margin. As mentioned earlier, the weak law applies in the case of i.i.d. random variables, but it also applies in some other cases. For example, the variance may be different for each random variable in the series, keeping the expected value constant. If the variances are bounded, then the law applies, as shown by Chebyshev as early as 1867. (If the expected values change during the series, then we can simply apply the law to the average deviation from the respective expected values. The law then states that this converges in probability to zero.) In fact, Chebyshev's proof works so long as the variance of the average of the first n values goes to zero as n goes to infinity. As an example, assume that each random variable in the series follows a Gaussian distribution (normal distribution) with mean zero, but with variance equal to , which is not bounded. At each stage, the average will be normally distributed (as the average of a set of normally distributed variables). The variance of the sum is equal to the sum of the variances, which is asymptotic to . The variance of the average is therefore asymptotic to and goes to zero. There are also examples of the weak law applying even though the expected value does not exist. Strong law The strong law of large numbers (also called Kolmogorov's law) states that the sample average converges almost surely to the expected value That is, What this means is that, as the number of trials n goes to infinity, the probability that the average of the observations converges to the expected value, is equal to one. The modern proof of the strong law is more complex than that of the weak law, and relies on passing to an appropriate subsequence. The strong law of large numbers can itself be seen as a special case of the pointwise ergodic theorem. This view justifies the intuitive interpretation of the expected value (for Lebesgue integration only) of a random variable when sampled repeatedly as the "long-term average". Law 3 is called the strong law because random variables which converge strongly (almost surely) are guaranteed to converge weakly (in probability). 
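The variance argument above, that the variance of the average of n draws is the individual variance divided by n, is easy to check numerically. The sketch below is a minimal, assumed example (Python with NumPy, an arbitrary variance of 4 and arbitrary sample counts), not part of the original discussion.

```python
import numpy as np

rng = np.random.default_rng(2)
sigma2 = 4.0    # assumed variance of each individual draw

for n in (10, 100, 1_000):
    # 20,000 independent sample averages, each built from n i.i.d. draws
    averages = rng.normal(0.0, np.sqrt(sigma2), size=(20_000, n)).mean(axis=1)
    print(f"n = {n:>5}: empirical Var(average) = {averages.var():.4f}, "
          f"sigma^2 / n = {sigma2 / n:.4f}")
```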
However the weak law is known to hold in certain conditions where the strong law does not hold and then the convergence is only weak (in probability). See differences between the weak law and the strong law. The strong law applies to independent identically distributed random variables having an expected value (like the weak law). This was proved by Kolmogorov in 1930. It can also apply in other cases. Kolmogorov also showed, in 1933, that if the variables are independent and identically distributed, then for the average to converge almost surely on something (this can be considered another statement of the strong law), it is necessary that they have an expected value (and then of course the average will converge almost surely on that). If the summands are independent but not identically distributed, then provided that each Xk has a finite second moment and This statement is known as Kolmogorov's strong law, see e.g. . Differences between the weak law and the strong law The weak law states that for a specified large n, the average is likely to be near μ. Thus, it leaves open the possibility that happens an infinite number of times, although at infrequent intervals. (Not necessarily for all n). The strong law shows that this almost surely will not occur. It does not imply that with probability 1, we have that for any the inequality holds for all large enough n, since the convergence is not necessarily uniform on the set where it holds. The strong law does not hold in the following cases, but the weak law does. Uniform laws of large numbers There are extensions of the law of large numbers to collections of estimators, where the convergence is uniform over the collection; thus the name uniform law of large numbers. Suppose f(x,θ) is some function defined for θ ∈ Θ, and continuous in θ. Then for any fixed θ, the sequence {f(X1,θ), f(X2,θ), ...} will be a sequence of independent and identically distributed random variables, such that the sample mean of this sequence converges in probability to E[f(X,θ)]. This is the pointwise (in θ) convergence. A particular example of a uniform law of large numbers states the conditions under which the convergence happens uniformly in θ. If Θ is compact, f(x,θ) is continuous at each θ ∈ Θ for almost all xs, and measurable function of x at each θ. there exists a dominating function d(x) such that E[d(X)] < ∞, and Then E[f(X,θ)] is continuous in θ, and This result is useful to derive consistency of a large class of estimators (see Extremum estimator). Borel's law of large numbers Borel's law of large numbers, named after Émile Borel, states that if an experiment is repeated a large number of times, independently under identical conditions, then the proportion of times that any specified event is expected to occur approximately equals the probability of the event's occurrence on any particular trial; the larger the number of repetitions, the better the approximation tends to be. More precisely, if E denotes the event in question, p its probability of occurrence, and Nn(E) the number of times E occurs in the first n trials, then with probability one, This theorem makes rigorous the intuitive notion of probability as the expected long-run relative frequency of an event's occurrence. It is a special case of any of several more general laws of large numbers in probability theory. Chebyshev's inequality. Let X be a random variable with finite expected value μ and finite non-zero variance σ2. Then for any real number , Proof of the weak law Given X1, X2, ... 
an infinite sequence of i.i.d. random variables with finite expected value , we are interested in the convergence of the sample average The weak law of large numbers states: Proof using Chebyshev's inequality assuming finite variance This proof uses the assumption of finite variance (for all ). The independence of the random variables implies no correlation between them, and we have that The common mean μ of the sequence is the mean of the sample average: Using Chebyshev's inequality on results in This may be used to obtain the following: As n approaches infinity, the expression approaches 1. And by definition of convergence in probability, we have obtained Proof using convergence of characteristic functions By Taylor's theorem for complex functions, the characteristic function of any random variable, X, with finite mean μ, can be written as All X1, X2, ... have the same characteristic function, so we will simply denote this φX. Among the basic properties of characteristic functions there are if X and Y are independent. These rules can be used to calculate the characteristic function of in terms of φX: The limit eitμ is the characteristic function of the constant random variable μ, and hence by the Lévy continuity theorem, converges in distribution to μ: μ is a constant, which implies that convergence in distribution to μ and convergence in probability to μ are equivalent (see Convergence of random variables.) Therefore, This shows that the sample mean converges in probability to the derivative of the characteristic function at the origin, as long as the latter exists. Proof of the strong law We give a relatively simple proof of the strong law under the assumptions that the are iid, , , and . Let us first note that without loss of generality we can assume that by centering. In this case, the strong law says that or It is equivalent to show that Note that and thus to prove the strong law we need to show that for every , we have Define the events , and if we can show that then the Borel-Cantelli Lemma implies the result. So let us estimate . We compute We first claim that every term of the form where all subscripts are distinct, must have zero expectation. This is because by independence, and the last term is zero --- and similarly for the other terms. Therefore the only terms in the sum with nonzero expectation are and . Since the are identically distributed, all of these are the same, and moreover . There are terms of the form and terms of the form , and so Note that the right-hand side is a quadratic polynomial in , and as such there exists a such that for sufficiently large. By Markov, for sufficiently large, and therefore this series is summable. Since this holds for any , we have established the Strong LLN. Another proof was given by Etemadi. For a proof without the added assumption of a finite fourth moment, see Section 22 of Billingsley. Consequences The law of large numbers provides an expectation of an unknown distribution from a realization of the sequence, but also any feature of the probability distribution. By applying Borel's law of large numbers, one could easily obtain the probability mass function. For each event in the objective probability mass function, one could approximate the probability of the event's occurrence with the proportion of times that any specified event occurs. The larger the number of repetitions, the better the approximation. As for the continuous case: , for small positive h. 
Thus, for large n: With this method, one can cover the whole x-axis with a grid (with grid size 2h) and obtain a bar graph which is called a histogram. Applications One application of the LLN is an important method of approximation known as the Monte Carlo method, which uses a random sampling of numbers to approximate numerical results. The algorithm to compute an integral of f(x) on an interval [a,b] is as follows: Simulate uniform random variables X1, X2, ..., Xn which can be done using a software, and use a random number table that gives U1, U2, ..., Un independent and identically distributed (i.i.d.) random variables on [0,1]. Then let Xi = a+(b - a)Ui for i= 1, 2, ..., n. Then X1, X2, ..., Xn are independent and identically distributed uniform random variables on [a, b]. Evaluate f(X1), f(X2), ..., f(Xn) Take the average of f(X1), f(X2), ..., f(Xn) by computing and then by the Strong Law of Large Numbers, this converges to = = We can find the integral of on [-1,2]. Using traditional methods to compute this integral is very difficult, so the Monte Carlo method can be used here. Using the above algorithm, we get = 0.905 when n=25 and = 1.028 when n=250 We observe that as n increases, the numerical value also increases. When we get the actual results for the integral we get = 1.000194 When the LLN was used, the approximation of the integral was closer to its true value, and thus more accurate. Another example is the integration of f(x) = on [0,1]. Using the Monte Carlo method and the LLN, we can see that as the number of samples increases, the numerical value gets closer to 0.4180233. See also Asymptotic equipartition property Central limit theorem Infinite monkey theorem Keynes' Treatise on Probability Law of averages Law of the iterated logarithm Law of truly large numbers Lindy effect Regression toward the mean Sortition Strong law of small numbers Notes References External links Animations for the Law of Large Numbers by Yihui Xie using the R package animation Apple CEO Tim Cook said something that would make statisticians cringe. "We don't believe in such laws as laws of large numbers. This is sort of, uh, old dogma, I think, that was cooked up by somebody [..]" said Tim Cook and while: "However, the law of large numbers has nothing to do with large companies, large revenues, or large growth rates. The law of large numbers is a fundamental concept in probability theory and statistics, tying together theoretical probabilities that we can calculate to the actual outcomes of experiments that we empirically perform. explained Business Insider Probability theorems Mathematical proofs Asymptotic theory (statistics) Theorems in statistics Large numbers
Law of large numbers
[ "Mathematics" ]
4,358
[ "Mathematical theorems", "Theorems in statistics", "Mathematical objects", "Theorems in probability theory", "Large numbers", "nan", "Mathematical problems", "Numbers" ]
157,057
https://en.wikipedia.org/wiki/Correlation
In statistics, correlation or dependence is any statistical relationship, whether causal or not, between two random variables or bivariate data. Although in the broadest sense, "correlation" may indicate any type of association, in statistics it usually refers to the degree to which a pair of variables are linearly related. Familiar examples of dependent phenomena include the correlation between the height of parents and their offspring, and the correlation between the price of a good and the quantity the consumers are willing to purchase, as it is depicted in the demand curve. Correlations are useful because they can indicate a predictive relationship that can be exploited in practice. For example, an electrical utility may produce less power on a mild day based on the correlation between electricity demand and weather. In this example, there is a causal relationship, because extreme weather causes people to use more electricity for heating or cooling. However, in general, the presence of a correlation is not sufficient to infer the presence of a causal relationship (i.e., correlation does not imply causation). Formally, random variables are dependent if they do not satisfy a mathematical property of probabilistic independence. In informal parlance, correlation is synonymous with dependence. However, when used in a technical sense, correlation refers to any of several specific types of mathematical relationship between the conditional expectation of one variable given the other is not constant as the conditioning variable changes; broadly correlation in this specific sense is used when is related to in some manner (such as linearly, monotonically, or perhaps according to some particular functional form such as logarithmic). Essentially, correlation is the measure of how two or more variables are related to one another. There are several correlation coefficients, often denoted or , measuring the degree of correlation. The most common of these is the Pearson correlation coefficient, which is sensitive only to a linear relationship between two variables (which may be present even when one variable is a nonlinear function of the other). Other correlation coefficients – such as Spearman's rank correlation coefficient – have been developed to be more robust than Pearson's, that is, more sensitive to nonlinear relationships. Mutual information can also be applied to measure dependence between two variables. Pearson's product-moment coefficient The most familiar measure of dependence between two quantities is the Pearson product-moment correlation coefficient (PPMCC), or "Pearson's correlation coefficient", commonly called simply "the correlation coefficient". It is obtained by taking the ratio of the covariance of the two variables in question of our numerical dataset, normalized to the square root of their variances. Mathematically, one simply divides the covariance of the two variables by the product of their standard deviations. Karl Pearson developed the coefficient from a similar but slightly different idea by Francis Galton. A Pearson product-moment correlation coefficient attempts to establish a line of best fit through a dataset of two variables by essentially laying out the expected values and the resulting Pearson's correlation coefficient indicates how far away the actual dataset is from the expected values. 
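As a small worked sketch of the recipe just described, dividing the covariance by the product of the standard deviations, here is an assumed example in Python with NumPy using made-up data; the numbers are purely illustrative.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 5.0])

# Pearson r = cov(x, y) / (std(x) * std(y)); ddof=1 gives the sample versions.
r = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

print(r)                        # about 0.775 for these made-up values
print(np.corrcoef(x, y)[0, 1])  # NumPy's built-in routine gives the same result
```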
Depending on the sign of our Pearson's correlation coefficient, we can end up with either a negative or positive correlation if there is any sort of relationship between the variables of our data set. The population correlation coefficient between two random variables and with expected values and and standard deviations and is defined as: where is the expected value operator, means covariance, and is a widely used alternative notation for the correlation coefficient. The Pearson correlation is defined only if both standard deviations are finite and positive. An alternative formula purely in terms of moments is: Correlation and independence It is a corollary of the Cauchy–Schwarz inequality that the absolute value of the Pearson correlation coefficient is not bigger than 1. Therefore, the value of a correlation coefficient ranges between −1 and +1. The correlation coefficient is +1 in the case of a perfect direct (increasing) linear relationship (correlation), −1 in the case of a perfect inverse (decreasing) linear relationship (anti-correlation), and some value in the open interval in all other cases, indicating the degree of linear dependence between the variables. As it approaches zero there is less of a relationship (closer to uncorrelated). The closer the coefficient is to either −1 or 1, the stronger the correlation between the variables. If the variables are independent, Pearson's correlation coefficient is 0. However, because the correlation coefficient detects only linear dependencies between two variables, the converse is not necessarily true. A correlation coefficient of 0 does not imply that the variables are independent. For example, suppose the random variable is symmetrically distributed about zero, and . Then is completely determined by , so that and are perfectly dependent, but their correlation is zero; they are uncorrelated. However, in the special case when and are jointly normal, uncorrelatedness is equivalent to independence. Even though uncorrelated data does not necessarily imply independence, one can check if random variables are independent if their mutual information is 0. Sample correlation coefficient Given a series of measurements of the pair indexed by , the sample correlation coefficient can be used to estimate the population Pearson correlation between and . The sample correlation coefficient is defined as where and are the sample means of and , and and are the corrected sample standard deviations of and . Equivalent expressions for are where and are the uncorrected sample standard deviations of and . If and are results of measurements that contain measurement error, the realistic limits on the correlation coefficient are not −1 to +1 but a smaller range. For the case of a linear model with a single independent variable, the coefficient of determination (R squared) is the square of , Pearson's product-moment coefficient. Example Consider the joint probability distribution of and given in the table below. {| class="wikitable" style="text-align:center;" |+ ! !−1 !0 !1 |- !0 |0 | |0 |- !1 | |0 | |} For this joint distribution, the marginal distributions are: This yields the following expectations and variances: Therefore: Rank correlation coefficients Rank correlation coefficients, such as Spearman's rank correlation coefficient and Kendall's rank correlation coefficient (τ) measure the extent to which, as one variable increases, the other variable tends to increase, without requiring that increase to be represented by a linear relationship. 
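As a quick illustration of that point before the discussion continues, the sketch below (an assumed example in Python with NumPy and SciPy, using an invented monotone but non-linear data set) shows the rank-based coefficients reporting a perfect association while Pearson's coefficient stays below 1.

```python
import numpy as np
from scipy import stats

x = np.arange(1.0, 11.0)
y = x ** 3                        # monotone in x, but strongly non-linear

print(stats.pearsonr(x, y)[0])    # about 0.93: linearity is violated
print(stats.spearmanr(x, y)[0])   # 1.0: ranks agree perfectly
print(stats.kendalltau(x, y)[0])  # 1.0: every pair is concordant
```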
If, as the one variable increases, the other decreases, the rank correlation coefficients will be negative. It is common to regard these rank correlation coefficients as alternatives to Pearson's coefficient, used either to reduce the amount of calculation or to make the coefficient less sensitive to non-normality in distributions. However, this view has little mathematical basis, as rank correlation coefficients measure a different type of relationship than the Pearson product-moment correlation coefficient, and are best seen as measures of a different type of association, rather than as an alternative measure of the population correlation coefficient. To illustrate the nature of rank correlation, and its difference from linear correlation, consider the following four pairs of numbers: (0, 1), (10, 100), (101, 500), (102, 2000). As we go from each pair to the next pair increases, and so does . This relationship is perfect, in the sense that an increase in is always accompanied by an increase in . This means that we have a perfect rank correlation, and both Spearman's and Kendall's correlation coefficients are 1, whereas in this example the Pearson product-moment correlation coefficient is 0.7544, indicating that the points are far from lying on a straight line. In the same way, if always decreases when increases, the rank correlation coefficients will be −1, while the Pearson product-moment correlation coefficient may or may not be close to −1, depending on how close the points are to a straight line. Although in the extreme cases of perfect rank correlation the two coefficients are both equal (being both +1 or both −1), this is not generally the case, and so values of the two coefficients cannot meaningfully be compared. For example, for the three pairs (1, 1) (2, 3) (3, 2) Spearman's coefficient is 1/2, while Kendall's coefficient is 1/3. Other measures of dependence among random variables The information given by a correlation coefficient is not enough to define the dependence structure between random variables. The correlation coefficient completely defines the dependence structure only in very particular cases, for example when the distribution is a multivariate normal distribution. In the case of elliptical distributions it characterizes the (hyper-)ellipses of equal density; however, it does not completely characterize the dependence structure (for example, a multivariate t-distribution's degrees of freedom determine the level of tail dependence). For continuous variables, multiple alternative measures of dependence were introduced to address the deficiency of Pearson's correlation that it can be zero for dependent random variables (see the references therein for an overview). They all share the important property that a value of zero implies independence. This led some authors to recommend their routine usage, particularly of distance correlation. Another alternative measure is the Randomized Dependence Coefficient. The RDC is a computationally efficient, copula-based measure of dependence between multivariate random variables and is invariant with respect to non-linear scalings of random variables. One important disadvantage of the alternative, more general measures is that, when used to test whether two variables are associated, they tend to have lower power compared to Pearson's correlation when the data follow a multivariate normal distribution. This is an implication of the No free lunch theorem.
To detect all kinds of relationships, these measures have to sacrifice power on other relationships, particularly for the important special case of a linear relationship with Gaussian marginals, for which Pearson's correlation is optimal. Another problem concerns interpretation. While Pearson's correlation can be interpreted for all values, the alternative measures can generally only be interpreted meaningfully at the extremes. For two binary variables, the odds ratio measures their dependence, and takes values in the non-negative numbers, possibly including infinity: [0, +∞]. Related statistics such as Yule's Y and Yule's Q normalize this to the correlation-like range [−1, 1]. The odds ratio is generalized by the logistic model to model cases where the dependent variables are discrete and there may be one or more independent variables. The correlation ratio, entropy-based mutual information, total correlation, dual total correlation and polychoric correlation are all also capable of detecting more general dependencies, as is consideration of the copula between them, while the coefficient of determination generalizes the correlation coefficient to multiple regression. Sensitivity to the data distribution The degree of dependence between variables and does not depend on the scale on which the variables are expressed. That is, if we are analyzing the relationship between and , most correlation measures are unaffected by transforming to and to , where a, b, c, and d are constants (b and d being positive). This is true of some correlation statistics as well as their population analogues. Some correlation statistics, such as the rank correlation coefficient, are also invariant to monotone transformations of the marginal distributions of and/or . Most correlation measures are sensitive to the manner in which and are sampled. Dependencies tend to be stronger if viewed over a wider range of values. Thus, if we consider the correlation coefficient between the heights of fathers and their sons over all adult males, and compare it to the same correlation coefficient calculated when the fathers are selected to be between 165 cm and 170 cm in height, the correlation will be weaker in the latter case. Several techniques have been developed that attempt to correct for range restriction in one or both variables, and are commonly used in meta-analysis; the most common are Thorndike's case II and case III equations. Various correlation measures in use may be undefined for certain joint distributions of and . For example, the Pearson correlation coefficient is defined in terms of moments, and hence will be undefined if the moments are undefined. Measures of dependence based on quantiles are always defined. Sample-based statistics intended to estimate population measures of dependence may or may not have desirable statistical properties such as being unbiased, or asymptotically consistent, based on the spatial structure of the population from which the data were sampled. Sensitivity to the data distribution can be used to advantage. For example, scaled correlation is designed to use the sensitivity to the range in order to pick out correlations between fast components of time series. By reducing the range of values in a controlled manner, the correlations on long time scales are filtered out and only the correlations on short time scales are revealed. Correlation matrices The correlation matrix of random variables is the matrix whose entry is Thus the diagonal entries are all identically one.
If the measures of correlation used are product-moment coefficients, the correlation matrix is the same as the covariance matrix of the standardized random variables for . This applies both to the matrix of population correlations (in which case is the population standard deviation), and to the matrix of sample correlations (in which case denotes the sample standard deviation). Consequently, each is necessarily a positive-semidefinite matrix. Moreover, the correlation matrix is strictly positive definite if no variable can have all its values exactly generated as a linear function of the values of the others. The correlation matrix is symmetric because the correlation between and is the same as the correlation between and . A correlation matrix appears, for example, in one formula for the coefficient of multiple determination, a measure of goodness of fit in multiple regression. In statistical modelling, correlation matrices representing the relationships between variables are categorized into different correlation structures, which are distinguished by factors such as the number of parameters required to estimate them. For example, in an exchangeable correlation matrix, all pairs of variables are modeled as having the same correlation, so all non-diagonal elements of the matrix are equal to each other. On the other hand, an autoregressive matrix is often used when variables represent a time series, since correlations are likely to be greater when measurements are closer in time. Other examples include independent, unstructured, M-dependent, and Toeplitz. In exploratory data analysis, the iconography of correlations consists of replacing a correlation matrix by a diagram where the "remarkable" correlations are represented by a solid line (positive correlation), or a dotted line (negative correlation). Nearest valid correlation matrix In some applications (e.g., building data models from only partially observed data) one wants to find the "nearest" correlation matrix to an "approximate" correlation matrix (e.g., a matrix which typically lacks positive semi-definiteness due to the way it has been computed). In 2002, Higham formalized the notion of nearness using the Frobenius norm and provided a method for computing the nearest correlation matrix using Dykstra's projection algorithm, of which an implementation is available as an online Web API. This sparked interest in the subject, with new theoretical (e.g., computing the nearest correlation matrix with factor structure) and numerical (e.g., the use of Newton's method for computing the nearest correlation matrix) results obtained in the subsequent years. Uncorrelatedness and independence of stochastic processes Similarly for two stochastic processes and : If they are independent, then they are uncorrelated. The opposite of this statement might not be true. Even if two variables are uncorrelated, they might not be independent of each other. Common misconceptions Correlation and causality The conventional dictum that "correlation does not imply causation" means that correlation cannot be used by itself to infer a causal relationship between the variables. This dictum should not be taken to mean that correlations cannot indicate the potential existence of causal relations. However, the causes underlying the correlation, if any, may be indirect and unknown, and high correlations also overlap with identity relations (tautologies), where no causal process exists.
Consequently, a correlation between two variables is not a sufficient condition to establish a causal relationship (in either direction). A correlation between age and height in children is fairly causally transparent, but a correlation between mood and health in people is less so. Does improved mood lead to improved health, or does good health lead to good mood, or both? Or does some other factor underlie both? In other words, a correlation can be taken as evidence for a possible causal relationship, but cannot indicate what the causal relationship, if any, might be. Simple linear correlations The Pearson correlation coefficient indicates the strength of a linear relationship between two variables, but its value generally does not completely characterize their relationship. In particular, if the conditional mean of given , denoted , is not linear in , the correlation coefficient will not fully determine the form of . The adjacent image shows scatter plots of Anscombe's quartet, a set of four different pairs of variables created by Francis Anscombe. The four variables have the same mean (7.5), variance (4.12), correlation (0.816) and regression line (). However, as can be seen on the plots, the distribution of the variables is very different. The first one (top left) seems to be distributed normally, and corresponds to what one would expect when considering two variables correlated and following the assumption of normality. The second one (top right) is not distributed normally; while an obvious relationship between the two variables can be observed, it is not linear. In this case the Pearson correlation coefficient does not indicate that there is an exact functional relationship: only the extent to which that relationship can be approximated by a linear relationship. In the third case (bottom left), the linear relationship is perfect, except for one outlier which exerts enough influence to lower the correlation coefficient from 1 to 0.816. Finally, the fourth example (bottom right) shows another example when one outlier is enough to produce a high correlation coefficient, even though the relationship between the two variables is not linear. These examples indicate that the correlation coefficient, as a summary statistic, cannot replace visual examination of the data. The examples are sometimes said to demonstrate that the Pearson correlation assumes that the data follow a normal distribution, but this is only partially correct. The Pearson correlation can be accurately calculated for any distribution that has a finite covariance matrix, which includes most distributions encountered in practice. However, the Pearson correlation coefficient (taken together with the sample mean and variance) is only a sufficient statistic if the data is drawn from a multivariate normal distribution. As a result, the Pearson correlation coefficient fully characterizes the relationship between variables if and only if the data are drawn from a multivariate normal distribution. Bivariate normal distribution If a pair of random variables follows a bivariate normal distribution, the conditional mean is a linear function of , and the conditional mean is a linear function of The correlation coefficient between and and the marginal means and variances of and determine this linear relationship: where and are the expected values of and respectively, and and are the standard deviations of and respectively. 
The empirical correlation is an estimate of the correlation coefficient A distribution estimate for is given by where is the Gaussian hypergeometric function. This density is both a Bayesian posterior density and an exact optimal confidence distribution density. See also Autocorrelation Canonical correlation Coefficient of determination Cointegration Concordance correlation coefficient Cophenetic correlation Correlation disattenuation Correlation function Correlation gap Covariance Covariance and correlation Cross-correlation Ecological correlation Fraction of variance unexplained Genetic correlation Goodman and Kruskal's lambda Iconography of correlations Illusory correlation Interclass correlation Intraclass correlation Lift (data mining) Mean dependence Modifiable areal unit problem Multiple correlation Point-biserial correlation coefficient Quadrant count ratio Spurious correlation Statistical correlation ratio Subindependence References Further reading External links MathWorld page on the (cross-)correlation coefficient/s of a sample Compute significance between two correlations, for the comparison of two correlation values. Proof that the Sample Bivariate Correlation has limits plus or minus 1 Interactive Flash simulation on the correlation of two normally distributed variables by Juha Puranen. Correlation analysis. Biomedical Statistics R-Psychologist Correlation visualization of correlation between two numeric variables Covariance and correlation Dimensionless numbers
Correlation
[ "Mathematics" ]
4,190
[ "Dimensionless numbers", "Mathematical objects", "Numbers" ]
157,093
https://en.wikipedia.org/wiki/Dot%20product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates. In modern geometry, Euclidean spaces are often defined by using vector spaces. In this case, the dot product is used for defining lengths (the length of a vector is the square root of the dot product of the vector by itself) and angles (the cosine of the angle between two vectors is the quotient of their dot product by the product of their lengths). The name "dot product" is derived from the dot operator " · " that is often used to designate this operation; the alternative name "scalar product" emphasizes that the result is a scalar, rather than a vector (as with the vector product in three-dimensional space). Definition The dot product may be defined algebraically or geometrically. The geometric definition is based on the notions of angle and distance (magnitude) of vectors. The equivalence of these two definitions relies on having a Cartesian coordinate system for Euclidean space. In modern presentations of Euclidean geometry, the points of space are defined in terms of their Cartesian coordinates, and Euclidean space itself is commonly identified with the real coordinate space . In such a presentation, the notions of length and angle are defined by means of the dot product. The length of a vector is defined as the square root of the dot product of the vector by itself, and the cosine of the (non oriented) angle between two vectors of length one is defined as their dot product. So the equivalence of the two definitions of the dot product is a part of the equivalence of the classical and the modern formulations of Euclidean geometry. Coordinate definition The dot product of two vectors and specified with respect to an orthonormal basis, is defined as: where denotes summation and is the dimension of the vector space. For instance, in three-dimensional space, the dot product of vectors and is: Likewise, the dot product of the vector with itself is: If vectors are identified with column vectors, the dot product can also be written as a matrix product where denotes the transpose of . Expressing the above example in this way, a 1 × 3 matrix (row vector) is multiplied by a 3 × 1 matrix (column vector) to get a 1 × 1 matrix that is identified with its unique entry: Geometric definition In Euclidean space, a Euclidean vector is a geometric object that possesses both a magnitude and a direction. A vector can be pictured as an arrow. Its magnitude is its length, and its direction is the direction to which the arrow points. The magnitude of a vector is denoted by . The dot product of two Euclidean vectors and is defined by where is the angle between and . 
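A tiny numeric sketch, assuming Python with NumPy and made-up vectors, shows the coordinate (sum-of-products) definition and the geometric |a||b|cos θ definition agreeing; the 45-degree angle is chosen by construction.

```python
import numpy as np

a = np.array([2.0, 0.0])
b = np.array([3.0, 3.0])     # forms a 45-degree angle with a by construction

# Coordinate (algebraic) definition: sum of products of corresponding entries.
algebraic = float(np.sum(a * b))                     # 2*3 + 0*3 = 6

# Geometric definition: |a| |b| cos(theta) with theta = 45 degrees.
geometric = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(np.radians(45.0))

print(algebraic, geometric, float(a @ b))            # all approximately 6.0
```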
In particular, if the vectors and are orthogonal (i.e., their angle is or ), then , which implies that At the other extreme, if they are codirectional, then the angle between them is zero with and This implies that the dot product of a vector with itself is which gives the formula for the Euclidean length of the vector. Scalar projection and first properties The scalar projection (or scalar component) of a Euclidean vector in the direction of a Euclidean vector is given by where is the angle between and . In terms of the geometric definition of the dot product, this can be rewritten as where is the unit vector in the direction of . The dot product is thus characterized geometrically by The dot product, defined in this manner, is homogeneous under scaling in each variable, meaning that for any scalar , It also satisfies the distributive law, meaning that These properties may be summarized by saying that the dot product is a bilinear form. Moreover, this bilinear form is positive definite, which means that is never negative, and is zero if and only if , the zero vector. Equivalence of the definitions If are the standard basis vectors in , then we may write The vectors are an orthonormal basis, which means that they have unit length and are at right angles to each other. Since these vectors have unit length, and since they form right angles with each other, if , Thus in general, we can say that: where is the Kronecker delta. Also, by the geometric definition, for any vector and a vector , we note that where is the component of vector in the direction of . The last step in the equality can be seen from the figure. Now applying the distributivity of the geometric version of the dot product gives which is precisely the algebraic definition of the dot product. So the geometric dot product equals the algebraic dot product. Properties The dot product fulfills the following properties if , , and are real vectors and , , and are scalars. Commutative which follows from the definition ( is the angle between and ): The commutative property can also be easily proven with the algebraic definition, and in more general spaces (where the notion of angle might not be geometrically intuitive but an analogous product can be defined) the angle between two vectors can be defined as Bilinear (additive, distributive and scalar-multiplicative in both arguments) Not associative Because the dot product is not defined between a scalar and a vector associativity is meaningless. However, bilinearity implies This property is sometimes called the "associative law for scalar and dot product", and one may say that "the dot product is associative with respect to scalar multiplication". Orthogonal Two non-zero vectors and are orthogonal if and only if . No cancellation Unlike multiplication of ordinary numbers, where if , then always equals unless is zero, the dot product does not obey the cancellation law: If and , then we can write: by the distributive law; the result above says this just means that is perpendicular to , which still allows , and therefore allows . Product rule If and are vector-valued differentiable functions, then the derivative (denoted by a prime ) of is given by the rule Application to the law of cosines Given two vectors and separated by angle (see the upper image), they form a triangle with a third side . Let , and denote the lengths of , , and , respectively. The dot product of this with itself is: which is the law of cosines. 
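As a quick numeric check of the law-of-cosines identity just derived, the assumed sketch below (Python with NumPy, made-up vectors) compares the squared length of the third side c = a - b with a·a + b·b - 2 a·b.

```python
import numpy as np

a = np.array([3.0, 0.0])
b = np.array([1.0, 2.0])
c = a - b                                  # third side of the triangle

lhs = float(c @ c)                         # |c|^2 computed directly
rhs = float(a @ a + b @ b - 2 * (a @ b))   # law of cosines, since a.b = |a||b|cos(theta)

print(lhs, rhs)                            # both equal 8.0
```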
Triple product There are two ternary operations involving dot product and cross product. The scalar triple product of three vectors is defined as Its value is the determinant of the matrix whose columns are the Cartesian coordinates of the three vectors. It is the signed volume of the parallelepiped defined by the three vectors, and is isomorphic to the three-dimensional special case of the exterior product of three vectors. The vector triple product is defined by This identity, also known as Lagrange's formula, may be remembered as "ACB minus ABC", keeping in mind which vectors are dotted together. This formula has applications in simplifying vector calculations in physics. Physics In physics, the dot product takes two vectors and returns a scalar quantity. It is also known as the "scalar product". The dot product of two vectors can be defined as the product of the magnitudes of the two vectors and the cosine of the angle between the two vectors. Thus, Alternatively, it is defined as the product of the projection of the first vector onto the second vector and the magnitude of the second vector. For example: Mechanical work is the dot product of force and displacement vectors, Power is the dot product of force and velocity. Generalizations Complex vectors For vectors with complex entries, using the given definition of the dot product would lead to quite different properties. For instance, the dot product of a vector with itself could be zero without the vector being the zero vector (e.g. this would happen with the vector This in turn would have consequences for notions like length and angle. Properties such as the positive-definite norm can be salvaged at the cost of giving up the symmetric and bilinear properties of the dot product, through the alternative definition where is the complex conjugate of . When vectors are represented by column vectors, the dot product can be expressed as a matrix product involving a conjugate transpose, denoted with the superscript H: In the case of vectors with real components, this definition is the same as in the real case. The dot product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, the complex dot product is sesquilinear rather than bilinear, as it is conjugate linear and not linear in . The dot product is not symmetric, since The angle between two complex vectors is then given by The complex dot product leads to the notions of Hermitian forms and general inner product spaces, which are widely used in mathematics and physics. The self dot product of a complex vector , involving the conjugate transpose of a row vector, is also known as the norm squared, , after the Euclidean norm; it is a vector generalization of the absolute square of a complex scalar (see also: Squared Euclidean distance). Inner product The inner product generalizes the dot product to abstract vector spaces over a field of scalars, being either the field of real numbers or the field of complex numbers . It is usually denoted using angular brackets by . The inner product of two vectors over the field of complex numbers is, in general, a complex number, and is sesquilinear instead of bilinear. An inner product space is a normed vector space, and the inner product of a vector with itself is real and positive-definite. Functions The dot product is defined for vectors that have a finite number of entries. 
Functions
The dot product is defined for vectors that have a finite number of entries. Thus these vectors can be regarded as discrete functions: a length-n vector u is, then, a function with domain {k ∈ N : 1 ≤ k ≤ n}, and u_i is a notation for the image of i by the function/vector u.
This notion can be generalized to square-integrable functions: just as the inner product on vectors uses a sum over corresponding components, the inner product on functions is defined as an integral over some measure space (X, A, μ): ⟨u, v⟩ = ∫_X u v dμ. For example, if f and g are continuous functions over a compact subset K of R^n with the standard Lebesgue measure, the above definition becomes: ⟨f, g⟩ = ∫_K f(x) g(x) dx. Generalized further to complex continuous functions ψ and χ, by analogy with the complex inner product above, this gives: ⟨ψ, χ⟩ = ∫_K ψ(x)* χ(x) dx.
Weight function
Inner products can have a weight function (i.e., a function which weights each term of the inner product with a value). Explicitly, the inner product of functions u(x) and v(x) with respect to the weight function r(x) > 0 is ⟨u, v⟩ = ∫_a^b r(x) u(x) v(x) dx.
Dyadics and matrices
A double-dot product for matrices is the Frobenius inner product, which is analogous to the dot product on vectors. It is defined as the sum of the products of the corresponding components of two matrices A and B of the same size: A : B = Σ_i Σ_j (A_ij)* B_ij = tr(A^H B). And for real matrices, A : B = Σ_i Σ_j A_ij B_ij = tr(A^T B). Writing a matrix as a dyadic, we can define a different double-dot product (see the article on dyadics); however it is not an inner product.
Tensors
The inner product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2; see Tensor contraction for details.
Computation
Algorithms
The straightforward algorithm for calculating a floating-point dot product of vectors can suffer from catastrophic cancellation. To avoid this, approaches such as the Kahan summation algorithm are used.
Libraries
A dot product function is included in:
BLAS level 1 real SDOT, DDOT; complex CDOTU, CDOTC, ZDOTU, ZDOTC
Fortran as dot_product(A, B) or sum(A * B)
Julia as A' * B or, using the standard library LinearAlgebra, as dot(A, B)
R (programming language) as sum(A * B) for vectors or, more generally for matrices, as A %*% B
Matlab as A' * B or dot(A, B) or sum(A .* B)
Python (package NumPy) as np.dot(A, B) or np.inner(A, B) or A @ B
GNU Octave as sum(A .* B), and similar code as Matlab
Intel oneAPI Math Kernel Library real p?dot; complex p?dotc
See also
Cauchy–Schwarz inequality
Cross product
Dot product representation of a graph
Euclidean norm, the square-root of the self dot product
Matrix multiplication
Metric tensor
Multiplication of vectors
Outer product
Notes
References
External links
Explanation of dot product including with complex vectors
"Dot Product" by Bruce Torrence, Wolfram Demonstrations Project, 2007.
Articles containing proofs Bilinear forms Operations on vectors Analytic geometry Tensors Scalars
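As a concrete illustration of the compensated-summation idea mentioned under Algorithms above, here is a minimal sketch in plain Python (the vectors are randomly generated placeholders, and math.fsum is used only as an accuracy reference):

```python
import math
import random

def kahan_dot(xs, ys):
    """Accumulate a dot product with Kahan (compensated) summation."""
    total = 0.0
    comp = 0.0                              # running compensation for lost low-order bits
    for x, y in zip(xs, ys):
        term = x * y - comp
        tentative = total + term
        comp = (tentative - total) - term   # recover what the addition rounded away
        total = tentative
    return total

xs = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
ys = [random.uniform(-1.0, 1.0) for _ in range(100_000)]
reference = math.fsum(x * y for x, y in zip(xs, ys))        # correctly rounded sum of the rounded products
print(abs(sum(x * y for x, y in zip(xs, ys)) - reference))  # error of the naive running sum
print(abs(kahan_dot(xs, ys) - reference))                   # usually, though not always, smaller
```

Neither version addresses rounding inside the individual products; the compensation only tames the error accumulated by the running sum.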
Dot product
[ "Mathematics", "Engineering" ]
2,627
[ "Articles containing proofs", "Tensors" ]
157,105
https://en.wikipedia.org/wiki/VxWorks
VxWorks is a real-time operating system (or RTOS) developed as proprietary software by Wind River Systems, a subsidiary of Aptiv. First released in 1987, VxWorks is designed for use in embedded systems requiring real-time, deterministic performance and in many cases, safety and security certification for industries such as aerospace, defense, medical devices, industrial equipment, robotics, energy, transportation, network infrastructure, automotive, and consumer electronics. VxWorks supports AMD/Intel architecture, POWER architecture, ARM architectures, and RISC-V. The RTOS can be used in multicore asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), and mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64-bit processors. VxWorks comes with the kernel, middleware, board support packages, Wind River Workbench development suite, complementary third-party software and hardware. In its latest release, VxWorks 7, the RTOS has been re-engineered for modularity and upgradeability so the OS kernel is separate from middleware, applications, and other packages. Scalability, security, safety, connectivity, and graphics have been improved to address Internet of Things (IOT) needs. History VxWorks started in the late 1980s as a set of enhancements to a simple RTOS called VRTX sold by Ready Systems (becoming a Mentor Graphics product in 1995). Wind River acquired rights to distribute VRTX and significantly enhanced it by adding, among other things, a file system and an integrated development environment. In 1987, anticipating the termination of its reseller contract by Ready Systems, Wind River proceeded to develop its own kernel to replace VRTX within VxWorks. Published in 2003 with a Wind River copyright, "Real-Time Concepts for Embedded Systems" describes the development environment, runtime setting, and system call families of the RTOS. Written by Wind River employees with a foreword by Jerry Fiddler, chairman, and co-founder of Wind River, the textbook is an excellent tutorial on the RTOS. (It does not, however, replace Wind River documentation as might be needed by practicing engineers.) Some key milestones for VxWorks include: 1980s: VxWorks adds support for 32-bit processors. 1990s: VxWorks 5 becomes the first RTOS with a networking stack. 2000s: VxWorks 6 supports SMP and adds derivative industry-specific platforms. 2010s: VxWorks adds support for 64-bit processing and introduces VxWorks 7 for IoT in 2016. 2020s: VxWorks continues to update and add support, including the ability to power the Mars 2020 lander. Platform overview VxWorks supports Intel architecture, Power architecture, and ARM architectures. The RTOS can be used in multi-core asymmetric multiprocessing (AMP), symmetric multiprocessing (SMP), mixed modes and multi-OS (via Type 1 hypervisor) designs on 32- and 64- bit processors. The VxWorks consists of a set of runtime components and development tools. The run time components are an operating system (UP and SMP; 32- and 64-bit), software for applications support (file system, core network stack, USB stack, and inter-process communications), and hardware support (architecture adapter, processor support library, device driver library, and board support packages). VxWorks core development tools are compilers such as Diab, GNU, and Intel C++ Compiler (ICC) and its build and configuration tools. 
The system also includes productivity tools such as its Workbench development suite and Intel tools and development support tools for asset tracking and host support. The platform is a modular, vendor-neutral, open system that supports a range of third-party software and hardware. The OS kernel is separate from middleware, applications, and other packages, which enables easier bug fixes and testing of new features. An implementation of a layered source build system allows multiple versions of any stack to be installed at the same time so developers can select which version of any feature set should go into the VxWorks kernel libraries. Optional advanced add-ons for VxWorks provide additional capabilities, including the following: Advanced security features to safeguard devices and data residing in and traveling across the Internet of Things (IoT) Advanced safety partitioning to enable reliable application consolidation Real-time advanced visual edge analytics allow autonomous responses on VxWorks-based devices in real-time without latency Optimized embedded Java runtime engine enabling the deployment of Java applications Virtualization capability with a real-time embedded, Type 1 hypervisor Features Core features of the OS include: Multitasking kernel with preemptive and round-robin scheduling and fast interrupt response Native 64-bit operating system (only one 64-bit architecture supported: x86-64). Data model: LP64 User-mode applications ("Real-Time Processes", or RTP) isolated from other user-mode applications as well as the kernel via memory protection mechanisms SMP, AMP and mixed mode multiprocessing support Error handling framework Bluetooth, USB, CAN protocols, Firewire IEEE 1394, BLE, L2CAP, Continua stack, health device profile Binary, counting, and mutual exclusion semaphores with priority inheritance Local and distributed message queues POSIX PSE52 certified conformity in user-mode execution environment File systems: High Reliability File System (HRFS), FAT-based file system (DOSFS), Network File System (NFS), and TFFS Dual-mode IPv6 networking stack with IPv6 Ready Logo certification Memory protection including real-time processes (RTPs), error detection and reporting, and IPC Multi-OS messaging using TIPC and Wind River multi-OS IPC Symbolic debugging In March 2014 Wind River introduced VxWorks 7, emphasizing scalability, security, safety, connectivity, graphics, and virtualization. The following lists some of the release 7 updates. More information can be found on the Wind Rivers VxWorks website. 
Modular, componentized architecture using a layered build system with the ability to update each layer of code independently VxWorks microkernel (a full RTOS that can be as small as 20 KB) Security features such as digitally-signed modules (X.509), encryption, password management, ability to add/delete users at runtime SHA-256 hashing algorithm as the default password hashing algorithm Human machine interface with Vector Graphics, and Tilcon user interface (UI) Graphical user interface (GUI): OpenVG stack, Open GL, Tilcon UI, Frame Buffer Driver, EV Dev Interface Updated configuration interfaces for VxWorks Source Build VSB projects and VxWorks Image Projects Single authentication control used for Telnet, SSH, FTP, and rlogin daemons Connectivity with Bluetooth and SocketCAN protocol stacks Inclusion of MIPC File System (MFS) and MIPC Network Device (MND) Networking features with 64-bit support including Wind River MACsec, Wind River's implementation of IEEE 802.1A, Point-to-Point Protocol (PPP) over L2TP, PPP over virtual local area network (VLAN) and Diameter secure key storage New Wind River Workbench 4 for VxWorks 7 integrated development environment with new system analysis tools Wind River Diab Compiler 5.9.4; Wind River GNU Compiler 4.8; Intel C++ Compiler 14 and Intel Integrated Performance Primitives (IPP) 8 Hardware support VxWorks has been ported to a number of platforms. This includes the Intel x86 family (including the Intel Quark SoC), MIPS, PowerPC (and BAE RAD), Freescale ColdFire, Intel i960, SPARC, Fujitsu FR-V, SH-4 and the closely related family of ARM, StrongARM and xScale CPUs. VxWorks provides a standard board support package (BSP) interface between all its supported hardware and the OS. Wind River's BSP developer kit provides a common application programming interface (API) and a stable environment for real-time operating system development. VxWorks is supported by popular SSL/TLS libraries such as wolfSSL. Development environment As is common in embedded system development, cross-compiling is used with VxWorks. Development is done on a "host" system where an integrated development environment (IDE), including the editor, compiler toolchain, debugger, and emulator can be used. Software is then compiled to run on the "target" system. This allows the developer to work with powerful development tools while targeting more limited hardware. VxWorks uses the following host environments and target hardware architectures: Supported target architectures and processor families VxWorks supports a range of target architectures including ARM, Intel, Power architecture, RISC-V architecture and more. For the latest target architecture processors and board support packages, refer to the VxWorks Marketplace or via citation. The Eclipse-based Workbench IDE that comes with VxWorks is used to configure, analyze, optimize, and debug a VxWorks-based system under development. The Tornado IDE was used for VxWorks 5.x and was replaced by the Eclipse-based Workbench IDE for VxWorks 6.x. and later. Workbench is also the IDE for the Wind River Linux, On-Chip Debugging, and Wind River Diab Compiler product lines. VxWorks 7 uses Wind River Workbench 4 which updates to the Eclipse 4 base provides full third party plug-in support and usability improvements. Wind River Simics is a standalone simulation tool compatible with VxWorks. It simulates the full target system (hardware and software) to create a shared platform for software development. 
Multiple developers can share a complete virtual system and its entire state, including execution history. Simics enables early and continuous system integration and faster prototyping by utilizing virtual prototypes instead of physical prototypes. Notable uses VxWorks is used by products across a wide range of market areas: aerospace and defense, automotive, industrial such as robots, consumer electronics, medical area and networking. Several notable products also use VxWorks as the onboard operating system. Aerospace and defense Spacecraft The Mars 2020 rover The Mars Reconnaissance Orbiter The Mars Science Laboratory, also known as the Curiosity rover NASA Mars rovers (Sojourner, Spirit, Opportunity) The Deep Space Program Science Experiment (DSPSE) also known as Clementine (spacecraft) Clementine launched in 1994 running VxWorks 5.1 on a MIPS-based CPU responsible for the Star Tracker and image processing algorithms. The use of a commercial RTOS on board a spacecraft was considered experimental at the time Phoenix Mars lander The Deep Impact space probe The Mars Pathfinder mission NASA's Juno space probe sent to Jupiter Aircraft AgustaWestland Project Zero Northrop Grumman X-47B Unmanned Combat Air System Airbus A400M Airlifter BAE Systems Tornado Advanced Radar Display Information System (TARDIS) used in the Tornado GR4 aircraft for the U.K. Royal Air Force Lockheed Martin RQ-170 Sentinel UAV Boeing 787 Space telescopes Fermi Gamma-ray Space Telescope(FGST) James Webb Space Telescope Others European Geostationary Navigation Overlay System (EGNOS) TacNet Tracker, Sandia National Laboratory’s rugged handheld communication device BAE Systems SCC500TM series of infrared camera cores Barco CDMS-3000 next generation control display and management system Automotive Toshiba TMPV75 Series image recognition SoCs for advanced driver assistance systems (ADAS) Bosch Motor Sports race car telemetry system Hyundai Mobis IVI system Magneti Marelli's telemetry logger and GENIVI-compliant infotainment system BMW iDrive 2.0 (2003-2008) Siemens VDO automotive navigation systems Most of Renault Trucks T, K and C trucks' electronic control units. 
European Volkswagen RNS 510 navigation systems Consumer electronics TPLink RE190 Wireless repeater Apple Airport Extreme AMX NetLinx Controllers (NI-xx00/x00) Brother printers Drobo data storage robot Honda robot ASIMO Linksys WRT54G wireless routers (versions 5.0 and later) MacroSystem Casablanca-2 digital video editor (Avio, Kron, Prestige, Claro, Renommee, Solitaire) Motorola's DCT2500 interactive digital set-top box Mobile Technika MobbyTalk and MobbyTalk253 phones ReplayTV home digital video recorder Industrial Industrial robots ABB industrial robots The C5G robotic project by Comau KUKA industrial robots Stäubli industrial robots Yaskawa Electric Corporation's industrial robots Comau Robotics SMART5 industrial robot Test and Measurement Teledyne LeCroy WaveRunner LT, WaveRunner2LT and WavePro 900 oscilloscope series Some Tektronix TDS, DPO, and MSO series oscilloscopes Hexagon Metrology GLOBAL Silver coordinate measuring machine (CMM) Transportation FITSCO Automatic Train Protection (ATP)system Bombardier HMI410 Train Information System Controllers Bachmann M1 Controller System Invensys Foxboro PAC System National Instruments CompactRIO 901x, 902x 907x controllers Emerson distributed control system controllers AMX controls system devices The Experimental Physics and Industrial Control System (EPICS) Bosch Rexroth Industrial Tightening Control Systems MCE iBox elevator controller Rockwell Automation PLCs - ControlLogix, CompactLogix, Assorted Communication Cards, and Servo Drives Schneider Electric Industrial Controller B&R Automation Runtime Storage systems External RAID controllers designed by the LSI Corporation/Engenio prior to 2011, now designed by NetApp. And used in RDAC class arrays as NetApp E/EF Series and OEM arrays Fujitsu ETERNUS DX Sx family of unified data storage arrays Imaging Toshiba eBridge based range of photocopiers Others GrandMA Full-Size and Light Console by MA Lighting Medical Varian Medical Systems Truebeam - a radiotherapy device for treating cancer Olympus Corporation's surgical generator BD Biosciences FACSCount HIV/AIDS Monitoring System Fedegari Autoclavi S.p.A. 
Thema4 process controller Sirona Dental Systems: CEREC extraoral X-ray CAD/CAM systems General Electric Healthcare: CT and MRI scanners Carl Zeiss Meditec: Humphrey Field Analyzer HFA-II Series Philips MRI scanners and C-arm Radiology Equipment Networking and communication infrastructure Arkoon Network Security appliances Ubee Interactive's AirWalk EdgePoint Kontron's ACTA processor boards QQTechnologies's QQSG A significant portion of Huawei's telecoms equipment uses VxWorks BroadLight’s GPON/PON products Shiron Satellite Communications’ InterSKY Sky Pilot's SkyGateway, SkyExtender and SkyControl EtherRaptor-1010 by Raptor Network Technology CPG-3000 and CPX-5000 routers from Siemens Nokia Solutions and Networks FlexiPacket series microwave engineering product Acme Packet Net-Net series of Session Border Controllers Alcatel-Lucent IP Touch 40x8 IP Deskphones Avaya ERS 8600 Avaya IP400 Office Cisco CSS platform Cisco ONS platform Ciena Common Photonic Layer Dell PowerConnect switches that are 'powered by' Broadcom, except latest PCT8100 which runs on Linux platform Ericsson SmartEdge routers (SEOS 11 run NetBSD 3.0 and VxWorks for Broadcom BCM1480 version 5.5.1 kernel version 2.6) Hewlett Packard HP 9000 Superdome Guardian Service Processor Hirschmann EAGLE20 Industrial Firewall HughesNet/Direcway satellite internet modems Mitel Networks' MiVoice Business (formerly Mitel Communications Director (MCD)), 3300 ICP Media Gateways and SX-200 and SX-200 ICP Motorola Solutions MCD5000 IP Deskset System Motorola SB5100 cable modem Motorola Cable Headend Equipment including SEM, NC, OM and other lines Nortel CS1000 PBX (formerly Nortel Meridian 1 (Option 11C, Option 61C, Option 81C) Nortel Passport Radware OnDemand Switches Samsung DCS and OfficeServ series PBX SonicWALL firewalls Thuraya SO-2510 satellite phone and ThurayaModule Radvision 3G communications equipment 3com NBX phone systems Zhone Technologies access systems Oracle EAGLE STP system TCP vulnerability and CVE patches As of July 2019, a paper published by Armis exposed 11 critical vulnerabilities, including remote code execution, denial of service, information leaks, and logical flaws impacting more than two billion devices using the VxWorks RTOS. The vulnerability allows attackers to tunnel into an internal network using the vulnerability and hack into printers, laptops, and any other connected devices. The vulnerability can bypass firewalls as well. The system is in use by quite a few mission-critical products, many of which could not be easily patched. References External links ARM operating systems Embedded operating systems Intel software MIPS operating systems PowerPC operating systems Real-time operating systems Robot operating systems IA-32 operating systems X86-64 operating systems Monolithic kernels
VxWorks
[ "Technology" ]
3,624
[ "Real-time computing", "Real-time operating systems" ]
157,115
https://en.wikipedia.org/wiki/Cut%2C%20copy%2C%20and%20paste
Cut, copy, and paste are essential commands of modern human–computer interaction and user interface design. They offer an interprocess communication technique for transferring data through a computer's user interface. The cut command removes the selected data from its original position, and the copy command creates a duplicate; in both cases the selected data is kept in temporary storage called the clipboard. Clipboard data is later inserted wherever a paste command is issued. The data remains available to any application supporting the feature, thus allowing easy data transfer between applications. The command names are an interface metaphor based on the physical procedure used in manuscript print editing to create a page layout, like with paper. The commands were pioneered into computing by Xerox PARC in 1974, popularized by Apple Computer in the 1983 Lisa workstation and the 1984 Macintosh computer, and in a few home computer applications such the 1984 word processor Cut & Paste. This interaction technique has close associations with related techniques in graphical user interfaces (GUIs) that use pointing devices such as a computer mouse (by drag and drop, for example). Typically, clipboard support is provided by an operating system as part of its GUI and widget toolkit. The capability to replicate information with ease, changing it between contexts and applications, involves privacy concerns because of the risks of disclosure when handling sensitive information. Terms like cloning, copy forward, carry forward, or re-use refer to the dissemination of such information through documents, and may be subject to regulation by administrative bodies. History Origins The term "cut and paste" comes from the traditional practice in manuscript editing, whereby people cut paragraphs from a page with scissors and paste them onto another page. This practice remained standard into the 1980s. Stationery stores sold "editing scissors" with blades long enough to cut an 8½"-wide page. The advent of photocopiers made the practice easier and more flexible. The act of copying or transferring text from one part of a computer-based document ("buffer") to a different location within the same or different computer-based document was a part of the earliest on-line computer editors. As soon as computer data entry moved from punch-cards to online files (in the mid/late 1960s) there were "commands" for accomplishing this operation. This mechanism was often used to transfer frequently-used commands or text snippets from additional buffers into the document, as was the case with the QED text editor. Early methods The earliest editors (designed for teleprinter terminals) provided keyboard commands to delineate a contiguous region of text, then delete or move it. Since moving a region of text requires first removing it from its initial location and then inserting it into its new location, various schemes had to be invented to allow for this multi-step process to be specified by the user. Often this was done with a "move" command, but some text editors required that the text be first put into some temporary location for later retrieval/placement. In 1983, the Apple Lisa became the first text editing system to call that temporary location "the clipboard". Earlier control schemes such as NLS used a verb—object command structure, where the command name was provided first and the object to be copied or moved was second. 
The inversion from verb—object to object—verb on which copy and paste are based, where the user selects the object to be operated on before initiating the operation, was an innovation crucial for the success of the desktop metaphor as it allowed copy and move operations based on direct manipulation.
Popularization
Inspired by early line and character editors, such as Pentti Kanerva's TV-Edit, that broke a move or copy operation into two steps—between which the user could invoke a preparatory action such as navigation—Lawrence G. "Larry" Tesler proposed the names "cut" and "copy" for the first step and "paste" for the second step. Beginning in 1974, he and colleagues at Xerox PARC implemented several text editors that used cut/copy-and-paste commands to move and copy text.
Apple Computer popularized this paradigm with its Lisa (1983) and Macintosh (1984) operating systems and applications. The functions were mapped to key combinations using the Command (⌘) key as a special modifier, which is held down while also pressing X for cut, C for copy, or V for paste. These few keyboard shortcuts allow the user to perform all the basic editing operations, and the keys are clustered at the left end of the bottom row of the standard QWERTY keyboard. These are the standard shortcuts:
Control-Z (or ⌘ Z) to undo
Control-X (or ⌘ X) to cut
Control-C (or ⌘ C) to copy
Control-V (or ⌘ V) to paste
The IBM Common User Access (CUA) standard also uses combinations of the Insert, Del, Shift and Control keys. Early versions of Windows used the IBM standard. Microsoft later also adopted the Apple key combinations with the introduction of Windows, using the Control key as the modifier key. For users migrating to Windows from DOS this was a big change, as DOS users used the "COPY" and "MOVE" commands. Similar patterns of key combinations, later borrowed by others, are widely available in most GUI applications.
The cut, copy, and paste commands as originally implemented at PARC used a distinctive workflow: With two windows on the same screen, the user could use the mouse to pick a point at which to make an insertion in one window (or a segment of text to replace). Then, by holding shift and selecting the copy source elsewhere on the same screen, the copy would be made as soon as the shift was released. Similarly, holding shift and control would copy and cut (delete) the source. This workflow required many fewer keystrokes/mouse clicks than the current multi-step workflows, and did not require an explicit copy buffer. It was dropped, one presumes, because the original Apple and IBM GUIs were not high enough density to permit multiple windows, as were the PARC machines, and so multiple simultaneous windows were rarely used.
Cut and paste
Computer-based editing can involve very frequent use of cut-and-paste operations. Most software suppliers provide several methods for performing such tasks, and this can involve (for example) key combinations, pulldown menus, pop-up menus, or toolbar buttons. The user selects or "highlights" the text or file for moving by some method, typically by dragging over the text or file name with the pointing device or holding down the Shift key while using the arrow keys to move the text cursor. The user performs a "cut" operation via key combination (⌘ X for Macintosh users), menu, or other means. Visibly, "cut" text immediately disappears from its location. "Cut" files typically change color to indicate that they will be moved. Conceptually, the text has now moved to a location often called the clipboard.
The clipboard typically remains invisible. On most systems only one clipboard location exists, hence another cut or copy operation overwrites the previously stored information. Many UNIX text-editors provide multiple clipboard entries, as do some Macintosh programs such as Clipboard Master, and Windows clipboard-manager programs such as the one in Microsoft Office. The user selects a location for insertion by some method, typically by clicking at the desired insertion point. A paste operation takes place which visibly inserts the clipboard text at the insertion point. (The paste operation does not typically destroy the clipboard text: it remains available in the clipboard and the user can insert additional copies at other points). Whereas cut-and-paste often takes place with a mouse-equivalent in Windows-like GUI environments, it may also occur entirely from the keyboard, especially in UNIX text editors, such as Pico or vi. Cutting and pasting without a mouse can involve a selection (for which is pressed in most graphical systems) or the entire current line, but it may also involve text after the cursor until the end of the line and other more sophisticated operations. The clipboard usually stays invisible, because the operations of cutting and pasting, while actually independent, usually take place in quick succession, and the user (usually) needs no assistance in understanding the operation or maintaining mental context. Some application programs provide a means of viewing, or sometimes even editing, the data on the clipboard. Copy and paste The term "copy-and-paste" refers to the popular, simple method of reproducing text or other data from a source to a destination. It differs from cut and paste in that the original source text or data does not get deleted or removed. The popularity of this method stems from its simplicity and the ease with which users can move data between various applications visually – without resorting to permanent storage. Use in healthcare documentation and electronic health records are sensitive, with potential for the introduction of medical errors, information overload, and fraud. See also Clipboard Control key Copypasta Copy & paste programming Copy Cursor Drag and drop Photomontage Publishing Interchange Language Simultaneous editing X Window selection Transposable element — Cut, copy, and paste in the genome. References External links 2. Peer-to-Peer Communication by Means of Selections in the ICCCM A personal history of modeless text editing and cut/copy-paste by Larry Tesler (pdf) User interface techniques Data management Clipboard (computing) Copying
Cut, copy, and paste
[ "Technology" ]
1,927
[ "Data management", "Data" ]
157,139
https://en.wikipedia.org/wiki/Multiple%20instruction%2C%20multiple%20data
In computing, multiple instruction, multiple data (MIMD) is a technique employed to achieve parallelism. Machines using MIMD have a number of processor cores that function asynchronously and independently. At any time, different processors may be executing different instructions on different pieces of data. MIMD architectures may be used in a number of application areas such as computer-aided design/computer-aided manufacturing, simulation, modeling, and as communication switches. MIMD machines can be of either shared memory or distributed memory categories. These classifications are based on how MIMD processors access memory. Shared memory machines may be of the bus-based, extended, or hierarchical type. Distributed memory machines may have hypercube or mesh interconnection schemes. Examples An example of MIMD system is Intel Xeon Phi, descended from Larrabee microarchitecture. These processors have multiple processing cores (up to 61 as of 2015) that can execute different instructions on different data. Most parallel computers, as of 2013, are MIMD systems. Shared memory model In shared memory model the processors are all connected to a "globally available" memory, via either software or hardware means. The operating system usually maintains its memory coherence. From a programmer's point of view, this memory model is better understood than the distributed memory model. Another advantage is that memory coherence is managed by the operating system and not the written program. Two known disadvantages are: scalability beyond thirty-two processors is difficult, and the shared memory model is less flexible than the distributed memory model. There are many examples of shared memory (multiprocessors): UMA (uniform memory access), COMA (cache-only memory access). Bus-based MIMD machines with shared memory have processors which share a common, central memory. In the simplest form, all processors are attached to a bus which connects them to memory. This means that every machine with shared memory shares a specific CM, common bus system for all the clients. For example, if we consider a bus with clients A, B, C connected on one side and P, Q, R connected on the opposite side, any one of the clients will communicate with the other by means of the bus interface between them. Hierarchical MIMD machines with hierarchical shared memory use a hierarchy of buses (as, for example, in a "fat tree") to give processors access to each other's memory. Processors on different boards may communicate through inter-nodal buses. Buses support communication between boards. With this type of architecture, the machine may support over nine thousand processors. Distributed memory In distributed memory MIMD (multiple instruction, multiple data) machines, each processor has its own individual memory location. Each processor has no direct knowledge about other processor's memory. For data to be shared, it must be passed from one processor to another as a message. Since there is no shared memory, contention is not as great a problem with these machines. It is not economically feasible to connect a large number of processors directly to each other. A way to avoid this multitude of direct connections is to connect each processor to just a few others. This type of design can be inefficient because of the added time required to pass a message from one processor to another along the message path. The amount of time required for processors to perform simple message routing can be substantial. 
Systems were designed to reduce this time loss and hypercube and mesh are among two of the popular interconnection schemes. Examples of distributed memory (multiple computers) include MPP (massively parallel processors), COW (clusters of workstations) and NUMA (non-uniform memory access). The former is complex and expensive: Many super-computers coupled by broad-band networks. Examples include hypercube and mesh interconnections. COW is the "home-made" version for a fraction of the price. Hypercube interconnection network In an MIMD distributed memory machine with a hypercube system interconnection network containing four processors, a processor and a memory module are placed at each vertex of a square. The diameter of the system is the minimum number of steps it takes for one processor to send a message to the processor that is the farthest away. So, for example, the diameter of a 2-cube is 2. In a hypercube system with eight processors and each processor and memory module being placed in the vertex of a cube, the diameter is 3. In general, a system that contains 2^N processors with each processor directly connected to N other processors, the diameter of the system is N. One disadvantage of a hypercube system is that it must be configured in powers of two, so a machine must be built that could potentially have many more processors than is really needed for the application. Mesh interconnection network In an MIMD distributed memory machine with a mesh interconnection network, processors are placed in a two-dimensional grid. Each processor is connected to its four immediate neighbors. Wrap around connections may be provided at the edges of the mesh. One advantage of the mesh interconnection network over the hypercube is that the mesh system need not be configured in powers of two. A disadvantage is that the diameter of the mesh network is greater than the hypercube for systems with more than four processors. See also SMP NUMA Torus interconnect Flynn's taxonomy SPMD Superscalar Very long instruction word References Flynn's taxonomy Parallel computing Mimd de:Flynnsche Klassifikation#MIMD (Multiple Instruction, Multiple Data)
Multiple instruction, multiple data
[ "Technology" ]
1,151
[ "Classes of computers", "Computers", "Computer systems" ]
157,174
https://en.wikipedia.org/wiki/Swithun
Swithun (or Swithin; ; ; died 863) was an Anglo-Saxon bishop of Winchester and subsequently patron saint of Winchester Cathedral. His historical importance as bishop is overshadowed by his reputation for posthumous miracle-working. According to tradition, if it rains on Saint Swithun's bridge (Winchester) on his feast day (15 July) it will continue for forty days. Biography St. Swithun was Bishop of Winchester from his consecration on 30 October 852 until his death on 2 July 863. However, he is scarcely mentioned in any document of his own time. His death is entered in the Canterbury manuscript of the Anglo-Saxon Chronicle (MS F) under the year 861. He is recorded as a witness to nine charters, the earliest of which (S 308) is dated 854. More than a hundred years later, when Dunstan and Æthelwold of Winchester were inaugurating their church reform, Swithun was adopted as patron of the restored church at Winchester, formerly dedicated to St. Peter and St. Paul. His body was transferred from its almost forgotten grave to Æthelwold's new basilica on 15 July 971; according to contemporary writers, numerous miracles preceded and followed the move. In legend The revival of Swithun's fame gave rise to a mass of legendary literature. The so-called Vita S. Swithuni of Lantfred and Wulfstan, written about 1000, hardly contains any biographical fact; all that has in later years passed for authentic detail of Swithun's life is extracted from a late eleventh-century hagiography ascribed to Goscelin of St. Bertin's, a monk who came over to England with Hermann, bishop of Salisbury from 1058 to 1078. According to this writer Saint Swithun was born in the reign of Egbert of Wessex, and was ordained priest by Helmstan, bishop of Winchester (838-c. 852). His fame reached the king's ears, and he appointed him tutor of his son, Æthelwulf (alias Adulphus), and considered him one of his chief friends. However, Michael Lapidge describes the work as "pure fiction" and shows that the attribution to Goscelin is false. Under Æthelwulf, Swithun was appointed bishop of Winchester, to which see he was consecrated by Archbishop Ceolnoth. In his new office he was known for his piety and his zeal in building new churches or restoring old ones. At his request Æthelwulf gave the tenth of his royal lands to the Church. Swithun made his diocesan journeys on foot; when he gave a banquet he invited the poor and not the rich. William of Malmesbury adds that, if Bishop Ealhstan of Sherborne was Æthelwulf's minister for temporal matters, Swithun was the minister for spiritual matters. Swithun's best-known miracle was his restoration on a bridge of a basket of eggs that workmen had maliciously broken. Of stories connected with Swithun the two most famous are those of the Winchester egg-woman and Queen Emma's ordeal. The former is to be found in the hagiography attributed to Goscelin, the latter in Thomas Rudborne's Historia major (15th century), a work which is also responsible for the story that Swithun accompanied Alfred on his visit to Rome in the 850s. He died on 2 July 862. On his deathbed Swithun begged that he should be buried outside the north wall of his cathedral where passers-by should pass over his grave and raindrops from the eaves drop upon it. Veneration Swithun's feast day in England is on 15 July and in Norway (and formerly in medieval Wales) on 2 July. He is also listed on 2 July in the Roman Martyrology. He was moved from his grave to an indoor shrine in the Old Minster at Winchester in 971. 
His body was probably later split between a number of smaller shrines. His head was certainly detached and, in the Middle Ages, taken to Canterbury Cathedral. Peterborough Abbey had an arm. His main shrine was transferred into the new Norman cathedral at Winchester in 1093. He was installed on a 'feretory platform' above and behind the high altar. The retrochoir was built in the early 13th century to accommodate the huge numbers of pilgrims wishing to visit his shrine and enter the 'holy hole' beneath him. His empty tomb in the ruins of the Old Minster was also popular with visitors. The shrine was only moved into the retrochoir itself in 1476. It was demolished in 1538 during the English Reformation. A modern representation of it now stands on the site. The shrine of Swithun at Winchester was supposedly a site of numerous miracles in the Middle Ages. Æthelwold of Winchester ordered that all monks were to stop whatever they were doing and head to the church to praise God every time that a miracle happened. A story exists that the monks at some point got so fed up with this, because they sometimes had to wake up and go to the church three or four times each night, that they decided to stop going. St. Swithun then appeared in a dream to someone (possibly two people) and warned them that if they stopped going to the church, then miracles would cease. This person (or persons) then warned the monks about the dream they had, and the monks then caved in and decided to go to the church each time a miracle happened again. Swithun is remembered in the Church of England with a Lesser Festival on 15 July. Patronage Swithun is regarded as one of the saints to whom one should pray in the event of drought. Legacy There are in excess of forty churches dedicated to St Swithun, which can be found throughout the south of England, especially in Hampshire – see this list. An example is St Swithun's, Headbourne Worthy, to the north of Winchester. This church is surrounded on three sides by a brook that flows from a spring in the village; the lych gate on the south side is also a bridge over the brook, which is unusual. Other churches dedicated to St Swithun can be found at Walcot, Lincoln, Worcester, Cheswardine, Shropshire and western Norway, where Stavanger Cathedral is dedicated to him. He is also commemorated at St Swithin's Lane in the City of London (site of the former church of St Swithin, London Stone, demolished after wartime damage in 1962), St Swithun's School for girls in Winchester and St Swithun's quadrangle in Magdalen College, Oxford. In Stavanger, Norway, several schools and institutions are named “St Svithun” after him. Proverb The name of Swithun is best known today for a British weather lore proverb, which says that if it rains on St. Swithun's day, 15 July, it will rain for forty days. A Buckinghamshire variation has: Swithun was initially buried outdoors, rather than in his cathedral, apparently at his own request. William of Malmesbury recorded that the bishop left instructions that his body should be buried outside the church, ubi et pedibus praetereuntium et stillicidiis ex alto rorantibus esset obnoxius [where it might be subject to the feet of passers-by and to the raindrops pouring from on high], which has been taken as indicating that the legend was already well known in the 12th century. 
In 971 it was decided to move his body to a new indoor shrine, and one theory traces the origin of the legend to a heavy shower by which, on the day of the move, the saint marked his displeasure towards those who were removing his remains. This story, however, cannot be traced further back than the 17th or 18th century. Also, it is at variance with the 10th century writers, who all agreed that the move took place in accordance with the saint's desire expressed in a vision. James Raine suggested that the legend was derived from the tremendous downpour of rain that occurred, according to the Durham chroniclers, on St. Swithun's Day, 1315. John Earle suggests that the legend comes from a pagan or possibly prehistoric day of augury. In France, St. Medard (8 June), Urban of Langres, and St. Gervase and St. Protais (19 June) are credited with an influence on the weather almost identical with that attributed to St. Swithun in England. In Flanders, there is St. Godelieve (6 July) and in Germany the Seven Sleepers' Day (27 June). In Russia it is the day of Sampson the Hospitable (27 June old style). There is a scientific basis to the weather pattern behind the legend of St. Swithun's day. Around the middle of July, the jet stream settles into a pattern which, in the majority of years, holds reasonably steady until the end of August. When the jet stream lies north of the British Isles then continental high pressure is able to move in; when it lies across or south of the British Isles, Arctic air and Atlantic weather systems predominate. The most false that the prediction has been, according to the Guinness Book of Records, was in 1924 when 13.5 hours of sunshine in London were followed by 30 of the next 40 days being wet, and in 1913 when a 15-hour rainstorm was followed by 30 dry days out of 40. See also Saint Swithun in popular culture References Notes Bibliography Andrew Godsell "Saint Swithin and the Rain" in "Legends of British History" (2008). Further reading Aelfric, and Geoffrey Ivor Needham. Lives of Three English Saints. N.Y.: Appleton-Century-Crofts, 1966. Series: Methuen's old English library. 119 pages. OCLC: 422028061. Blakely, Ruth Margaret. St. Swithun of Winchester: An Investigation into the Literature Relating to His Life, Legends and Cult. Thesis (FLA) -- Library Association 1981, n.d. OCLC: 557018780. Bussby, Frederick. Saint Swithun: Patron Saint of Winchester. Winchester: Friends of Winchester Cathedral, 1971. OCLC: 7477761. Davidson, George, and John Faed. Legend of St. Swithin: A Rhyme for Rainy Weather. London: Hamilton, Adams, 1861. OCLC: 16140471. Deshman, Robert, "Saint Swithun in Early Medieval Art," in Idem, Eye and Mind: Collected Essays in Anglo-Saxon and Early Medieval Art Edited by Adam Cohen (Kalamazoo, Michigan: Medieval Institute Publications, Western Michigan University, 2010) (Publications of the Richard Rawlinson Center). Fridegodus, A. Campbell, Eddius Stephanus, Wulfstan, and Lamfridus. Frithegodi monachi Breviloquium vitae Beati Wilfredi, et Wulfstani cantoris Narratio metrica de Sancto Swithuno. Turici: In Aedibus Thesauri Mundi, 1950. 183 pages. Notes: Fridegodus' work is a versification of the Vita Sancti Wilfredi I, usually attributed to Eddi. Wulfstan's work is a versification of Lamfridus' Miracula Sancti Swithuni. OCLC: 62612752. Swithun, and John Earle. Facsimile of Some Leaves in Saxon Handwriting on St. Swithun, Copied by Photozincography, with Literal Translation and Notes. 1861. 20 pages. OCLC: 863315099. 
Wolstanus Wintonensis, Michael Huber, and Lamfridus. S. Swithinus, miracula metrica, I. Text; beitrag zur altenglischen geschichte und literatur. Landshut: J. Thomann'sche buch-u. kunstdruckerei, 1905. 105 pages. Notes: Programm—Humanistisches Gymnasium Metten. A versification of Lantfred's work. OCLC: 669193. Yorke, Barbara. "Swithun [St Swithun] (d. 863)." Oxford Dictionary of National Biography. Oxford University Press, 2004. External links Guardian netnotes on St. Swithin's Day BBC "Landward" feature on St. Swithin's Day 800s births 863 deaths 9th-century English bishops 9th-century Christian saints Bishops of Winchester Burials at Winchester Cathedral English legendary characters History of Winchester Weather lore West Saxon saints Anglican saints Swithun Year of birth uncertain
Swithun
[ "Physics" ]
2,685
[ "Weather", "Physical phenomena", "Weather lore" ]
157,175
https://en.wikipedia.org/wiki/Ramsey%20theory
Ramsey theory, named after the British mathematician and philosopher Frank P. Ramsey, is a branch of the mathematical field of combinatorics that focuses on the appearance of order in a substructure given a structure of a known size. Problems in Ramsey theory typically ask a question of the form: "how big must some structure be to guarantee that a particular property holds?" Examples A typical result in Ramsey theory starts with some mathematical structure that is then cut into pieces. How big must the original structure be in order to ensure that at least one of the pieces has a given interesting property? This idea can be defined as partition regularity. For example, consider a complete graph of order n; that is, there are n vertices and each vertex is connected to every other vertex by an edge. A complete graph of order 3 is called a triangle. Now colour each edge either red or blue. How large must n be in order to ensure that there is either a blue triangle or a red triangle? It turns out that the answer is 6. See the article on Ramsey's theorem for a rigorous proof. Another way to express this result is as follows: at any party with at least six people, there are three people who are all either mutual acquaintances (each one knows the other two) or mutual strangers (none of them knows either of the other two). See theorem on friends and strangers. This also is a special case of Ramsey's theorem, which says that for any given integer c, any given integers n1,...,nc, there is a number, R(n1,...,nc), such that if the edges of a complete graph of order R(n1,...,nc) are coloured with c different colours, then for some i between 1 and c, it must contain a complete subgraph of order ni whose edges are all colour i. The special case above has c = 2 and n1 = n2 = 3. Results Two key theorems of Ramsey theory are: Van der Waerden's theorem: For any given c and n, there is a number V, such that if V consecutive numbers are coloured with c different colours, then it must contain an arithmetic progression of length n whose elements are all the same colour. Hales–Jewett theorem: For any given n and c, there is a number H such that if the cells of an H-dimensional n×n×n×...×n cube are coloured with c colours, there must be one row, column, etc. of length n all of whose cells are the same colour. That is: a multi-player n-in-a-row tic-tac-toe cannot end in a draw, no matter how large n is, and no matter how many people are playing, if you play on a board with sufficiently many dimensions. The Hales–Jewett theorem implies Van der Waerden's theorem. A theorem similar to van der Waerden's theorem is Schur's theorem: for any given c there is a number N such that if the numbers 1, 2, ..., N are coloured with c different colours, then there must be a pair of integers x, y such that x, y, and x+y are all the same colour. Many generalizations of this theorem exist, including Rado's theorem, Rado–Folkman–Sanders theorem, Hindman's theorem, and the Milliken–Taylor theorem. A classic reference for these and many other results in Ramsey theory is Graham, Rothschild, Spencer and Solymosi, updated and expanded in 2015 to its first new edition in 25 years. Results in Ramsey theory typically have two primary characteristics. Firstly, they are unconstructive: they may show that some structure exists, but they give no process for finding this structure (other than brute-force search). For instance, the pigeonhole principle is of this form. 
Secondly, while Ramsey theory results do say that sufficiently large objects must necessarily contain a given structure, often the proof of these results requires these objects to be enormously large – bounds that grow exponentially, or even as fast as the Ackermann function are not uncommon. In some small niche cases, upper and lower bounds are improved, but not in general. In many cases these bounds are artifacts of the proof, and it is not known whether they can be substantially improved. In other cases it is known that any bound must be extraordinarily large, sometimes even greater than any primitive recursive function; see the Paris–Harrington theorem for an example. Graham's number, one of the largest numbers ever used in serious mathematical proof, is an upper bound for a problem related to Ramsey theory. Another large example is the Boolean Pythagorean triples problem. Theorems in Ramsey theory are generally one of the following two types. Many such theorems, which are modeled after Ramsey's theorem itself, assert that in every partition of a large structured object, one of the classes necessarily contains its own structured object, but gives no information about which class this is. In other cases, the reason behind a Ramsey-type result is that the largest partition class always contains the desired substructure. The results of this latter kind are called either density results or Turán-type result, after Turán's theorem. Notable examples include Szemerédi's theorem, which is such a strengthening of van der Waerden's theorem, and the density version of the Hales-Jewett theorem. See also Ergodic Ramsey theory Extremal graph theory Goodstein's theorem Bartel Leendert van der Waerden Discrepancy theory References Further reading . (behind a paywall). . . Matthew Katz and Jan Reimann An Introduction to Ramsey Theory: Fast Functions, Infinity, and Metamathematics Student Mathematical Library Volume: 87; 2018; 207 pp;
Ramsey theory
[ "Mathematics" ]
1,227
[ "Ramsey theory", "Combinatorics" ]
157,178
https://en.wikipedia.org/wiki/Van%20der%20Waerden%27s%20theorem
Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. Van der Waerden's theorem states that for any given positive integers r and k, there is some number N such that if the integers {1, 2, ..., N} are colored, each with one of r different colors, then there are at least k integers in arithmetic progression whose elements are of the same color. The least such N is the Van der Waerden number W(r, k), named after the Dutch mathematician B. L. van der Waerden. This was conjectured by Pierre Joseph Henry Baudet in 1921. Van der Waerden heard of it in 1926 and published his proof in 1927, titled Beweis einer Baudetschen Vermutung [Proof of Baudet's conjecture].
Example
For example, when r = 2, you have two colors, say red and blue. W(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this (1, 4, 5 and 8 blue; 2, 3, 6 and 7 red), and no three integers of the same color form an arithmetic progression. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression. In fact, there is no way of coloring 1 through 9 without creating such a progression (it can be proved by considering examples). Therefore, W(2, 3) is 9.
Open problem
It is an open problem to determine the values of W(r, k) for most values of r and k. The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color.
For r = 3 and k = 3, the bound given by the theorem is 7(2·3^7 + 1)(2·3^(7(2·3^7 + 1)) + 1), or approximately 4.22·10^14616. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. (And it is possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3.)
An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$1000 for showing W(2, k) < 2^(k^2). In addition, he offered a US$250 prize for a proof of his conjecture involving more general off-diagonal van der Waerden numbers, stating W(2; 3, k) ≤ k^O(1), while mentioning numerical evidence suggests W(2; 3, k) = k^(2 + o(1)). Ben Green disproved this latter conjecture and proved super-polynomial counterexamples to W(2; 3, k) < k^r for any r. The best upper bound currently known is due to Timothy Gowers, who establishes W(r, k) ≤ 2^(2^(r^(2^(2^(k + 9))))) by first establishing a similar result for Szemerédi's theorem, which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales–Jewett theorem, which is another strengthening of Van der Waerden's theorem. The best lower bound currently known for W(2, k) is that, for all positive ε, we have W(2, k) > 2^k/k^ε for all sufficiently large k.
Proof of Van der Waerden's theorem (in a special case)
The following proof is due to Ron Graham, B.L. Rothschild, and Joel Spencer. Khinchin gives a fairly simple proof of the theorem without estimating W(r, k).
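Because only 2^8 = 256 and 2^9 = 512 colorings are involved, both claims in the example above can be checked exhaustively before working through the proof. Here is a minimal sketch in plain Python (standard library only; the helper name is illustrative):

```python
from itertools import product

def has_mono_3ap(coloring):
    """coloring[i] is the color of the integer i + 1; return True if some
    three equally spaced integers all received the same color."""
    n = len(coloring)
    for a in range(1, n + 1):
        for d in range(1, (n - a) // 2 + 1):
            if coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]:
                return True
    return False

# Some 2-coloring of {1, ..., 8} avoids a monochromatic progression of length 3...
assert any(not has_mono_3ap(c) for c in product("RB", repeat=8))
# ...but every 2-coloring of {1, ..., 9} contains one, so W(2, 3) = 9.
assert all(has_mono_3ap(c) for c in product("RB", repeat=9))
```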
Proof in the case of W(2, 3)
We will prove the special case mentioned above, that W(2, 3) ≤ 325. Let c(n) be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color.
Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ..., {321, ..., 325}; thus each block is of the form {5b + 1, ..., 5b + 5} for some b in {0, ..., 64}. Since each integer is colored either red or blue, each block is colored in one of 32 different ways. By the pigeonhole principle, there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers b1 and b2, both in {0, ..., 32}, such that c(5b1 + k) = c(5b2 + k) for all k in {1, ..., 5}. Among the three integers 5b1 + 1, 5b1 + 2, 5b1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5b1 + a1 and 5b1 + a2, where the ai are in {1, 2, 3} and a1 < a2. Suppose (without loss of generality) that these two integers are both red. (If they are both blue, just exchange 'red' and 'blue' in what follows.)
Let a3 = 2a2 − a1. If 5b1 + a3 is red, then we have found our arithmetic progression: the 5b1 + ai are all red. Otherwise, 5b1 + a3 is blue. Since a3 ≤ 5, 5b1 + a3 is in the b1 block, and since the b2 block is colored identically, 5b2 + a3 is also blue. Now let b3 = 2b2 − b1. Then b3 ≤ 64. Consider the integer 5b3 + a3, which must be ≤ 325. What color is it? If it is red, then 5b1 + a1, 5b2 + a2, and 5b3 + a3 form a red arithmetic progression. But if it is blue, then 5b1 + a3, 5b2 + a3, and 5b3 + a3 form a blue arithmetic progression. Either way, we are done.
Proof in the case of W(3, 3)
A similar argument can be advanced to show that W(3, 3) ≤ 7(2·3^7 + 1)(2·3^(7(2·3^7 + 1)) + 1). One begins by dividing the integers into 2·3^(7(2·3^7 + 1)) + 1 groups of 7(2·3^7 + 1) integers each; of the first 3^(7(2·3^7 + 1)) + 1 groups, two must be colored identically. Divide each of these two groups into 2·3^7 + 1 subgroups of 7 integers each; of the first 3^7 + 1 subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red; this implies either a red progression or an element of a different color, say blue, in the same subgroup. Since we have two identically-colored subgroups, there is a third subgroup, still in the same group, that contains an element which, if either red or blue, would complete a red or blue progression, by a construction analogous to the one for W(2, 3). Suppose that this element is green. Since there is a group that is colored identically, it must contain copies of the red, blue, and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression.
Proof in general case
The proof for W(2, 3) depends essentially on proving that W(32, 2) ≤ 33. We divide the integers {1, ..., 325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be the same color, and there is a block colored the opposite way. Similarly, the proof for W(3, 3) depends on proving that W(3^(7(2·3^7 + 1)), 2) ≤ 3^(7(2·3^7 + 1)) + 1. By a double induction on the number of colors and the length of the progression, the theorem is proved in general.
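The W(2, 3) argument above is entirely constructive, so it can be transcribed directly into code. The following is a small sketch in plain Python (function and variable names are illustrative) that, given any 2-coloring c of {1, ..., 325}, returns a monochromatic 3-term arithmetic progression by exactly the steps just described; a randomized check is attached at the end:

```python
import random

def mono_3ap_from_325(c):
    """Given a 2-coloring c(n) of {1, ..., 325}, return a monochromatic
    3-term arithmetic progression, following the block argument above."""
    block = lambda b: tuple(c(5 * b + k) for k in range(1, 6))  # block b is {5b+1, ..., 5b+5}
    seen = {}
    for b in range(33):                      # pigeonhole: 33 blocks, only 32 block colorings
        if block(b) in seen:
            b1, b2 = seen[block(b)], b
            break
        seen[block(b)] = b
    for a1, a2 in ((1, 2), (1, 3), (2, 3)):  # two of the first three cells of block b1 share a color
        if c(5 * b1 + a1) == c(5 * b1 + a2):
            break
    a3 = 2 * a2 - a1                         # a3 <= 5, so it stays inside the block
    if c(5 * b1 + a3) == c(5 * b1 + a1):
        return (5 * b1 + a1, 5 * b1 + a2, 5 * b1 + a3)
    b3 = 2 * b2 - b1                         # b3 <= 64, so 5*b3 + a3 <= 325
    if c(5 * b3 + a3) == c(5 * b1 + a1):
        return (5 * b1 + a1, 5 * b2 + a2, 5 * b3 + a3)
    return (5 * b1 + a3, 5 * b2 + a3, 5 * b3 + a3)

for _ in range(1000):                        # sanity check on arbitrary random colorings
    bits = [random.randrange(2) for _ in range(326)]
    x, y, z = mono_3ap_from_325(lambda n: bits[n])
    assert y - x == z - y > 0 and bits[x] == bits[y] == bits[z]
```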
Proof A D-dimensional arithmetic progression (AP) consists of numbers of the form a + i1 s1 + i2 s2 + ... + iD sD, where a is the basepoint, the s's are positive step-sizes, and the i's range from 0 to L − 1. A D-dimensional AP is homogeneous for some coloring when it is all the same color. A D-dimensional arithmetic progression with benefits is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the i's can be equal to L. The sides you tack on are ones where the first k i's are equal to L, and the remaining i's are less than L. The boundaries of a D-dimensional AP with benefits are these additional arithmetic progressions of dimension D − 1, D − 2, ..., down to 0. The 0-dimensional arithmetic progression is the single point at index value (L, L, ..., L). A D-dimensional AP with benefits is homogeneous when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color. Next define the quantity MinN(L, D, N) to be the least integer so that any assignment of N colors to an interval of length MinN(L, D, N) or more necessarily contains a homogeneous D-dimensional arithmetic progression with benefits. The goal is to bound the size of MinN(L, 1, N). Note that MinN(L, 1, N) is an upper bound for Van der Waerden's number. There are two induction steps, as follows. Base case: MinN(1, D, N) = 1, i.e. if you want a length 1 homogeneous D-dimensional arithmetic sequence, with or without benefits, you have nothing to do. So this forms the base of the induction. The Van der Waerden theorem itself is the assertion that MinN(L, 1, N) is finite, and it follows from the base case and the induction steps. Ergodic theory Furstenberg and Weiss proved an equivalent form of the theorem in 1978, using ergodic theory: a multiple recurrence theorem in topological dynamics. The proof of that recurrence theorem is delicate, and the reader is referred to the original paper. With this recurrence theorem, the van der Waerden theorem can be proved in the ergodic-theoretic style. See also Van der Waerden numbers for all known values for W(n, r) and the best known bounds for unknown values. Van der Waerden game – a game where the player picks integers from the set 1, 2, ..., N, and tries to collect an arithmetic progression of length n. Hales–Jewett theorem Rado's theorem Szemerédi's theorem Bartel Leendert van der Waerden Notes References A. Y. Khinchin, Three Pearls of Number Theory (second edition originally published in Russian in 1948) External links Articles containing proofs Ramsey theory Theorems in discrete mathematics
Van der Waerden's theorem
[ "Mathematics" ]
2,432
[ "Discrete mathematics", "Mathematical theorems", "Combinatorics", "Theorems in discrete mathematics", "Articles containing proofs", "Mathematical problems", "Ramsey theory" ]
157,302
https://en.wikipedia.org/wiki/Pepper%20spray
Pepper spray, oleoresin capsicum spray, OC spray, capsaicin spray, mace, or capsicum spray is a lachrymator (tear gas) product containing the compound capsaicin as the active ingredient that irritates the eyes to cause burning and pain sensations, as well as temporary blindness. Its inflammatory effects cause the eyes to close, temporarily taking away vision. This temporary blindness allows officers to more easily restrain subjects and permits people in danger to use pepper spray in self-defense for an opportunity to escape. It also causes temporary discomfort and burning of the lungs, which causes shortness of breath. Pepper spray is used as a less lethal weapon in policing, riot control, crowd control, and self-defense, including defense against dogs and bears. Pepper spray was engineered originally for defense against bears, mountain lions, wolves and other dangerous predators, and is often referred to colloquially as bear spray. Kamran Loghman, the person who developed it for use in riot control, wrote the guide for police departments on how it should be used. It was successfully adopted, except for improper usages such as when police sprayed peaceful protestors at University of California, Davis in 2011. Loghman commented, "I have never seen such an inappropriate and improper use of chemical agents", prompting court rulings completely barring its use on docile persons. Components The active ingredient in pepper spray is capsaicin, which is derived from the fruit of plants in the genus Capsicum, including chilis, in the form of oleoresin capsicum (OC). Extraction of OC from peppers requires capsicum to be finely ground, from which capsaicin is then extracted using an organic solvent such as ethanol. The solvent is then evaporated, and the remaining waxlike resin is the oleoresin capsicum. An emulsifier such as propylene glycol is used to suspend OC in water, and the suspension is then pressurized to make an aerosol pepper spray. Other sprays may use an alcohol (such as isopropyl alcohol) base for a more penetrating product, but a risk of fire is present if combined with a taser. Determining the strength of pepper sprays made by different manufacturers can be confusing and difficult. Statements a company makes about its product strength are not regulated. The US federal government uses CRC (capsaicin and related capsaicinoids) content for regulation. CRC is the pain-producing component of the OC that produces the burning sensation. Personal pepper sprays can range from a low of 0.18% to a high of 3%. Most law enforcement pepper sprays use between 1.3% and 2%. The federal government of the United States has determined that bear attack deterrent sprays must contain at least 1.0% and not more than 2% CRC. Because the six different types of capsaicinoids under the CRC heading have different levels of potency (up to 2× on the SHU scale), the measurement does not fully represent the strength. Manufacturers do not state which particular types of capsaicinoids are used. Using the OC concentration is unreliable because the concentration of CRC (and the potency of these compounds) can vary. Some manufacturers may show a very high percentage of OC, but the resin itself may not be spicy enough. Higher OC content only reliably implies a higher oil content, which may be undesirable as the hydrophobic oil is less able to soak into and penetrate skin. Solutions of more than 5% OC may not spray properly. Scoville heat units (SHU) is a common indication of pepper spiciness.
It does take into account the different potency of CRC compounds, but it cannot be reliably used in pepper spray because it measures the strength of the dry product, i.e. the OC resin and not what comes in the aerosol spray. As the resin is always diluted to make it spray-able, the SHU rating is not useful on its own. Counterparts There are several counterparts of pepper spray developed and legal to possess in some countries. In the United Kingdom, desmethyldihydrocapsaicin (known also as PAVA spray) is used by police officers. As a Section 5 weapon, it is not generally permitted to the public. Pelargonic acid morpholide (MPK) is widely used as a self-defense chemical agent spray in Russia, though its effectiveness compared to natural pepper spray is unclear. In China, Ministry of Public Security police units and security guards use tear gas ejectors with OC, CS or CN gases. These are defined as a "restricted" weapon that only police officers, as well as approved security, can use. Types Aerosol compound Cone pattern dispersion - wide pattern, don't have to aim precisely. It can be blown back by wind and if used inside a building, will eventually make room temporarily uninhabitable. Fog pattern dispersion (fogger) Stream pattern dispersion Grenade Gel compound: has greater accuracy and a reduced risk of blowback and area cross-contamination as the carrying gel does not disperse over a large area. The gel compound also adheres to the target making it more difficult to remove. Foam compound Effects Pepper spray is an inflammatory agent. It inflames the mucous membranes in the eyes, nose, throat and lungs. It causes immediate closing of the eyes, difficulty breathing, runny nose, and coughing. The duration of its effects depends on the strength of the spray; the average full effect lasts from 20 to 90 minutes, but eye irritation and redness can last for up to 24 hours. The Journal of Investigative Ophthalmology and Visual Science published a study that concluded that single exposure of the eye to OC is harmless, but repeated exposure can result in long-lasting changes in corneal sensitivity. They found no lasting decrease in visual acuity. The European Parliament Scientific and Technological Options Assessment (STOA) published in 1998 "An Appraisal of Technologies of Political Control" The STOA appraisal states: "Past experience has shown that to rely on manufacturers unsubstantiated claims about the absence of hazards is unwise. In the US, companies making crowd control weapons, (e.g. pepper-gas manufacturer Zarc International), have put their technical data in the public domain without loss of profitability." and "Research on chemical irritants should be published in open scientific journals before authorization for any usage is permitted and that the safety criteria for such chemicals should be treated as if they were drugs rather than riot control agents;" For those taking drugs, or those subjected to restraining techniques that restrict the breathing passages, there is a risk of death. In 1995, the Los Angeles Times reported at least 61 deaths associated with police use of pepper spray since 1990 in the USA. The American Civil Liberties Union (ACLU) documented 27 people in police custody who died after exposure to pepper spray in California since 1993. However, the ACLU report counts all deaths occurring within hours of exposure to pepper spray regardless of prior interaction, taser use, or if drugs are involved. 
In all 27 cases listed by the ACLU, the coroners' report listed other factors as the primary cause of death; in a few cases the use of pepper spray may have been a contributing factor. The US Army performed studies in 1993 at Aberdeen Proving Ground, and a UNC study in 2000 stated that the compound in peppers, capsaicin, is mildly mutagenic, and 10% of mice exposed to it developed cancer. Although the studies also found many beneficial effects of capsaicin, the Occupational Safety and Health Administration released statements declaring that exposure of employees to OC is an unnecessary health risk. As of 1999, it was in use by more than 2,000 public safety agencies. The head of the FBI's Less-Than-Lethal Weapons Program at the time of the 1991 study, Special Agent Thomas W. W. Ward, was fired by the FBI and was sentenced to two months in prison for receiving payments from a pepper-gas manufacturer while conducting and authoring the FBI study that eventually approved pepper spray for FBI use. Prosecutors said that from December 1989 through 1990, Ward received about $5,000 a month for a total of $57,500, from Luckey Police Products, a Fort Lauderdale, Florida-based company that was a major producer and supplier of pepper spray. The payments were made through a Florida company owned by Ward's wife. Direct close-range spray can cause more serious eye irritation by attacking the cornea with a concentrated stream of liquid (the so-called "hydraulic needle" effect). Some brands have addressed this problem by means of an elliptically cone-shaped spray pattern. Pepper spray has been associated with positional asphyxiation of individuals in police custody. There is much debate over the actual cause of death in these cases. There have been few controlled clinical studies of the human health effects of pepper spray marketed for police use, and those studies are contradictory. Some studies have found no harmful effects beyond the effects described above. Due to these studies and deaths, many law enforcement agencies have moved to include policies and training to prevent positional deaths. However, there are some scientific studies that argue the positional asphyxiation claim is a myth, provided that no pinpoint pressure is applied to the person. The study by two universities stressed that no pressure should be applied to the neck area. They concluded that the person's own weight is not enough to stop a person's breathing when the rest of their body is supported. Acute response For individuals not previously exposed to OC effects, the general feelings after being sprayed can be best likened to being "set alight". The initial reaction, should the spray be directed at the face, is the involuntary closing of the eyes, an instant sensation of the restriction of the airways and the general feeling of sudden and intense searing pain about the face, nose, and throat. This is due to irritation of mucous membranes. Many people experience fear and are disoriented due to the sudden restriction of vision, even though it is temporary. There is associated shortness of breath, although studies performed with asthmatics have not produced any asthma attacks in those individuals, and monitoring is still needed for the individuals after exposure. Police are trained to repeatedly instruct targets to breathe normally if they complain of difficulty, as the shock of the exposure can generate considerable panic as opposed to actual physical symptoms. Treatment Capsaicin is not soluble in water, and even large volumes of water will not wash it off, only dilute it.
In general, victims are encouraged to blink vigorously in order to encourage tears, which will help flush the irritant from the eyes. A study of five often-recommended treatments for skin pain (Maalox, 2% lidocaine gel, baby shampoo, milk, or water) concluded that: "...there was no significant difference in pain relief provided by five different treatment regimens. Time after exposure appeared to be the best predictor for a decrease in pain...". Many ambulance services and emergency departments carry saline to remove the spray. Some of the OC and CS will remain in the respiratory system, but a recovery of vision and the coordination of the eyes can be expected within 7 to 15 minutes. Some "triple-action" pepper sprays also contain "tear gas" (CS gas), which can be neutralized with sodium metabisulfite (Campden tablets), though it is not for use on a person, only for area clean up. Use Pepper spray typically comes in canisters, which are often small enough to be carried or concealed in a pocket or purse. Pepper spray can also be purchased concealed in items such as rings. There are also pepper spray projectiles available, which can be fired from a paintball gun or similar platform. It has been used for years against demonstrators and aggressive animals like bears. There are also many types such as foam, gel, foggers, and spray. Oleoresin capsicum Oleoresin capsicum, also known as capsicum oleoresin, is also used in food and medicine. In food, it serves as a concentrated and predictable source of spiciness. The food industry has accordingly changed to prefer a combination of milder and more predictable strains of jalapeno and OC for flavoring. In medicine, OC is used in a number of products for external use. OC used for food is generally rated between 80 000 and 500 000 SHU, roughly equivalent to 0.6-3.9% capsaicin. Paprika oleoresin is a different extract, containing very little heat and mostly used for coloring. Legality Pepper spray is banned for use in war by Article I.5 of the Chemical Weapons Convention, which bans the use of all riot control agents in warfare whether lethal or less-than-lethal. Depending on the location, it may be legal to use for self-defense. Africa Nigeria: Assistant Police Commissioner stated that pepper sprays are illegal for civilians to possess. South Africa: Pepper sprays are legal to own by civilians for self defense. Asia Bangladesh: Bengal Police started using pepper spray to control opposition movement. China: Forbidden for civilians, it is used only by law enforcement agencies. Underground trade leads to some civilian self-defense use. Hong Kong: Forbidden for civilians, it is legal to possess and use only by the members of Disciplined Services when on duty. Such devices are classified as "arms" under the "Laws of Hong Kong". Chap 238 Firearms and Ammunition Ordinance. Without a valid license from the Hong Kong Police Force, it is a crime to possess and can result in a fine of $100,000 and imprisonment for up to 14 years. India: Legal They are sold via government-approved companies after performing a background verification. Indonesia: It is legal, but there are restrictions on its sale and possession. Iran: Forbidden for civilians, it is used only by the police. Israel: OC and CS spray cans may be purchased by any member of the public without restriction and carried in public. In the 1980s, a firearms license was required for doing so, but these sprays have since been deregulated. 
Japan: There are no laws against possession or use, but using it could result in imprisonment, depending on the damage caused to the target. Malaysia: Use and possession of pepper spray for self-defense are legal. Mongolia: Possession and use for self-defense are legal, and it is freely available in stores. Pakistan: Possession and use for self-defense is legal, and it is available at physical and online stores. Philippines: Possession and use for self-defense is legal, and it is freely available in stores. Saudi Arabia: Use and possession of pepper spray for self-defense are legal. It is an offense to use pepper spray on anyone for reasons other than self-defense. Singapore: Travellers are prohibited from bringing pepper spray into the country, and it is illegal for the public to possess it. South Korea: Pepper sprays containing OC are legal. A permit is required to distribute, own, or carry pepper sprays containing pre-compressed gas or explosive propellant. Pepper sprays without any pre-compressed gas or explosive propellant are unrestricted. Thailand: Use for self-defense is legal, and it is freely available in stores. Possession in a public place can be punished by confiscation and a fine. Taiwan: Legal for self-defense, it is available in some shops. It is an offense to use pepper spray on anyone for reasons other than self-defense. Vietnam: Forbidden for civilians and used only by the police. Europe Austria: Pepper spray is classified as a self-defense device; it may be owned and carried by adults without registration or permission. Justified use against humans as self-defense is allowed. Belgium: Pepper spray is classified as a prohibited weapon. It is illegal for anyone other than police officers, police agents (assistant police officers), security officers of public transport companies, soldiers and customs officers to possess or carry a capsicum spray. Carrying is also authorised after obtaining permission from the Minister of Internal Affairs. Czech Republic: Possession and carrying are legal. Police also encourage vulnerable groups like pensioners, children, and women to carry pepper spray. Carrying at public demonstrations and into court buildings is illegal (pepper spray as well as other weapons may be left with an armed guard upon entry to a courthouse). Denmark: Pepper spray is generally illegal to own. Finland: Possession of pepper spray requires a license. Licenses are issued for defensive purposes and to individuals working jobs where such a device is needed, such as the private security sector. France: It is legal for anyone over the age of 18 to buy pepper spray in an armory or military surplus store. It is classified as a Category D Weapon in French law and if the aerosol contains more than , it is classed as an offensive weapon; possession in a public place can be punished by confiscation and a fine. However, if it contains less than , while still a Category 6 Weapon, it is not classed as a punishable offense for the purposes of the Weapons law. If checked by police, it will be confiscated and a verbal warning might be issued. Germany: Pepper sprays labeled for the purpose of defense against animals may be owned and carried by all citizens regardless of age. Such sprays are not legally considered weapons (§ 1). Carrying it at (or on the way to and from) demonstrations may still be punished. Sprays that are not labelled "animal-defence spray" or do not bear the test mark of the MPA (material testing institute) are classified as prohibited weapons.
Justified use against humans as self-defense is allowed. CS sprays bearing a test mark of the MPA may be owned and carried by anyone over the age of 14. Greece: Such items are illegal. They will be confiscated and possession may result in detention and arrest. Hungary: Such items are reserved for law enforcement (including civilian members of the auxiliary police). Civilians may carry canisters filled with maximum of any other lachrymatory agent. However, there is no restriction for pepper gas pistol cartridges. Iceland: Possession of pepper spray is illegal for private citizens. Police officers and customs officers carry it. Coast guardsmen as well as prison officers have access to it. Members of the riot police use larger pepper-spray canisters than what is used by a normal police officer. Ireland: Possession of this spray by persons other the Garda Síochána (national police) is an offence under the Firearms and Offensive Weapons Act. Italy: Any citizen over 16 years of age without a criminal record could possess, carry and purchase any OC-based compounds and personal defence devices that respond to the following criteria: Containing a payload not exceeding , with a percentage of Oleoresin Capsicum not exceeding 10% and a maximum concentration of capsaicin and capsaicinoid substances not exceeding 2,5%; Containing no flammable, corrosive, toxic or carcinogenic substances, and no other aggressive chemical compound than OC itself; Being sealed when sold and featuring a safety device against accidental discharge; Featuring a range not exceeding . Latvia: Pepper spray is classified as a self-defense device. It can be bought and carried by anyone over 16 years of age. Pepper spray handguns can be bought and carried without any license by anyone over 18. Lithuania: Classified as D category weapon, but can be bought and carried by anyone over 18 years of age (without registration nor permission). Issued as auxiliary service device to police. Police also encourages vulnerable groups like pensioners or women to carry one. Montenegro: It is legal for civilians over the age of 16 to buy, own and carry pepper spray but it is illegal to carry it in a way that it is shown to other people in public spaces or disturb people with it in any way. You are allowed to use it as a self-defense tool if needed. Netherlands: It is illegal for civilians to own and carry pepper spray. Only police officers trained in the specific use of pepper spray are allowed to carry and use it against civilians and animals. Norway: It is illegal for civilians. Police officers are allowed to carry pepper spray as part of their standard equipment. Poland: Called precisely in Polish Penal Code "a hand-held disabling gas thrower", sprays are not considered a weapon. They can be carried by anyone without further registration or permission. Portugal: Civilians who do not have criminal records are allowed to get police permits to purchase from gun shops, carry, and use OC sprays with a maximum concentration of 5%. Romania: Pepper spray is banned at sporting and cultural events, public transportation and entertainment locations (according to Penal Code 2012, art 372, (1), c). Russia: It is classified as a self-defense weapon and can be carried by anyone over 18. Use against humans is legal. OC is not the only legal agent used. CS, CR, PAM (МПК), and (rarely) CN are also legal and highly popular. Serbia: Pepper spray is legal under the new law as of 2016 and can be carried by anyone over the age of 16. 
Use against humans in self-defence is legal. Slovakia: It is classified as a self-defense weapon. It is available to anyone over 18. The police recommend its use. Spain: Approved pepper spray made with 5% CS is available to anyone older than 18 years. OC pepper spray, recently adopted for some civilian use (e.g., one of , with no registration DGSP-07-22-SDP, is approved by the Ministry of Health and Consumption). Sweden: Requires weapons licence, essentially always illegal to carry in public or private. Issued as supplementary service weapon to police. Switzerland: Pepper spray in Switzerland is subject to the Chemicals Legislation. It may only be distributed to buyers above 18 years of age and against ID evidence. Self-service is not permitted and the customer ought to be made aware of safe storage, use and disposal. The vendor needs to possess the "Know-how for the distribution of particularly hazardous chemicals". Potential mailing has to be shipped by registered courier with the remark "to addressee only". The products must be classified and labeled at least an irritant (Xi;R36/37). Regulations for aerosol packages need to be observed. Sprays with greenhouse relevant propellants such as R134a (1,1,1,2-Tetrafluorethan) are banned. Spray products for self-defense with irritants such as CA, CS, CN, CR are considered as weapons in terms of the gun control law. The weapon purchase permit, as well as the weapon carrier permit, are required for the purchase of such weapons. In 2009, the Swiss Army introduced for the military personnel the irritant atomizer 2000 (RSG-2000) and is introduced during watch functions. The military bearer permit is granted after passing the half-day training. Ukraine: Called legally "Tearing and irritating aerosols (gas canisters)", sprays are not considered a weapon and can be carried by anyone over 18 without further registration or permission. It is classified as a self-defense device. United Kingdom: Pepper spray is illegal under Section 5(1)(b) of the Firearms Act 1968: "A person commits an offence if [...] he has in his possession [...] any weapon of whatever description designed or adapted for the discharge of any noxious liquid, gas or other thing." Police officers are exempt from this law and permitted to carry pepper spray as part of their standard equipment. North America Canada Pepper spray designed to be used against people is considered a prohibited weapon in Canada. The definition under regulation states "any device designed to be used for the purpose of injuring, immobilizing or otherwise incapacitating any person by the discharge therefrom of (a) tear gas, Mace or other gas, or (b) any liquid, spray, powder or other substance that is capable of injuring, immobilizing or otherwise incapacitating any person" is a prohibited weapon. Only law enforcement officers may legally carry or possess pepper spray labeled for use on persons. Any similar canister with the labels reading "dog spray" or "bear spray" is regulated under the Pest Control Products Act—while legal to be carried by anyone, it is against the law if its use causes "a risk of imminent death or serious bodily harm to another person" or harming the environment and carries a penalty up to a fine of $500,000 and jail time of maximum 3 years. Carrying bear spray in public, without justification, may also lead to charges under the Criminal Code. United States It is a federal offense to carry/ship pepper spray on a commercial airliner or possess it in the secure area of an airport. 
State law and local ordinances regarding possession and use vary across the country. Pepper spray up to 4 oz. is permitted in checked baggage. When pepper spray is used in the workplace, OSHA requires that a pepper spray Safety Data Sheet (SDS) be available to all employees. Pepper spray can be legally purchased and carried in all 50 states and the District of Columbia. Some states regulate the maximum allowed strength of the pepper spray, age restrictions, content and use. California: As of January 1, 1996, and as a result of Assembly Bill 830 (Speier), the pepper spray and Mace programs are now deregulated. Consumers are no longer required to have training, and a certificate is not required to purchase or possess these items. Pepper spray and Mace are available through gun shops, sporting goods stores, and other business outlets. California Penal Code Sections 12400–12460 govern pepper spray use in California. Container holding the defense spray must contain no more than net weight of aerosol spray. Certain individuals are still prohibited from possessing pepper spray, including minors under the age of 16, convicted felons, individuals convicted of certain drug offenses, individuals convicted of assault, and individuals convicted of misusing pepper spray. Massachusetts: Before July 1, 2014, residents could purchase defense sprays only from licensed Firearms Dealers in that state, and had to hold a valid Firearms Identification Card (FID) or License to Carry Firearms (LTC) to purchase or to possess them outside of their own private property. New legislation allows residents to purchase pepper spray without a Firearms Identification Card starting July 1. Florida: Any pepper spray containing no more than of chemical can be carried in public openly or concealed without a permit. Furthermore, any such pepper spray is classified as "self-defense chemical spray" and therefore not considered a weapon under Florida law. Michigan: Allows "reasonable use" of spray containing not more than 18% oleoresin capsicum to protect "a person or property under circumstances that would justify the person's use of physical force". It is illegal to distribute a "self-defense spray" to a person under 18 years of age. New Jersey: Non-felons over the age of 18 can possess a small amount of pepper spray, with no more than three-quarters of an ounce of chemical substance. New York: Can be legally possessed by any person age 18 or over. Restricted to no more than 0.67% capsaicin content. It must be purchased in person (i.e., cannot be purchased by mail-order or internet sale) either at a pharmacy or from a licensed firearm retailer (NY Penal Law 265.20 14) and the seller must keep a record of purchases. The use of pepper spray to prevent a public official from performing his/her official duties is a class-E felony. Texas law makes it legal for an individual to possess a small, commercially sold container of pepper spray for personal self-defense. However, Texas law otherwise makes it illegal to carry a "Chemical dispensing device". Virginia: Code of Virginia § 18.2-312. Illegal use of tear gas, phosgene, and other gases.
"If any person maliciously releases or cause or procure to be released in any private home, place of business or place of public gathering any tear gas, mustard gas, phosgene gas or other noxious or nauseating gases or mixtures of chemicals designed to, and capable of, producing vile or injurious or nauseating odors or gases, and bodily injury results to any person from such gas or odor, the offending person shall be guilty of a Class 3 felony. If such act be done unlawfully, but not maliciously, the offending person shall be guilty of a Class 6 felony. Nothing herein contained shall prevent the use of tear gas or other gases by police officers or other peace officers in the proper performance of their duties, or by any person or persons in the protection of the person, life or property." Washington: Persons over 18 may carry personal-protection spray devices. Persons over age 14 may carry personal-protection spray devices with their legal guardian's consent. Wisconsin: Tear gas is not permissible. By regulation, OC products with a maximum OC concentration of 10% and weight range of oleoresin of capsicum and inert ingredients of are authorized. Further, the product cannot be camouflaged and must have a safety feature designed to prevent accidental discharge. The units may not have an effective range of over and must have an effective range of . In addition there are certain labeling and packaging requirements, it must not be sold to anyone under 18 and the phone number of the manufacturer has to be on the label. The units must also be sold in sealed tamper-proof packages. South America Brazil: Classified as a weapon by Federal Act n° 3665/2000 (Regulation for Fiscalization of Controlled Products). Only law enforcement officers and private security agents with a recognized Less Lethal Weapons training certificate can carry it. Colombia: Can be sold without any kind of restriction to anyone older than 14 years. Use has not been inducted on the law enforcement officer's arsenal. Australia Australian Capital Territory: Pepper spray is a "prohibited weapon", making it an offence to possess or use it. New South Wales: Possession of pepper spray by unauthorized persons is illegal, under schedule 1 of the Weapons Prohibition Act 1998, being classified as a "prohibited weapon". Northern Territory: Prescribed by regulation to be a prohibited weapon under the Weapons Control Act. This legislation makes it an offense for someone without a permit, normally anyone who is not an officer of Police/Correctional Services/Customs/Defence, to carry a prohibited weapon. Tasmania: Possession of pepper spray by unauthorized persons is illegal, under an amendment of the Police Offences Act 1935, being classified as an "offensive weapon". Likewise, possession of knives, batons, and any other instrument that may be considered, "Offensive Weapons" if they are possessed by an individual, in a Public Place, "Without lawful excuse", leading to confusion within the police force over what constitutes "lawful excuse". Self-defense as a lawful excuse to carry such items varies from one officer to the next. Pepper spray is commercially available without a license. Authority to possess and use Oleoresin Capsicum devices remains with Tasmania Police Officers (As part of general-issue operational equipment), and Tasmanian Justice Department (H.M. Prisons) Officers. South Australia: in South Australia, possession of pepper spray without lawful excuse is illegal. 
Western Australia: The possession of pepper spray by individuals for self-defense, subject to a "reasonable excuse" test, has been legal in Western Australia following the landmark Supreme Court decision in Hall v Collins [2003] WASCA 74 (4 April 2003). Victoria: Schedule 3 of the Control of Weapons Regulations 2011 designates "an article designed or adapted to discharge oleoresin capsicum spray" as a prohibited weapon. Queensland: in Queensland, pepper spray is considered an offensive weapon and can not be used for self-defence. New Zealand Classed as a restricted weapon. A permit is required to obtain or carry pepper spray. Front-line police officers have routinely carried pepper spray since 1997. New Zealand Prison Service made OC spray available for use in approved situations in 2013. New Zealand Defence Force Military Police are permitted to carry OC spray under a special agreement due to the nature of their duties. The Scoville ratings of these sprays are 500,000 (Sabre MK9 HVS unit) and 2,000,000 (Sabre Cell Buster fog delivery). This was as a result of excessive staff assaults and a two-year trial in ten prisons throughout the country. Civilian use advocates In June 2002, West Australian resident Rob Hall was convicted for using a canister of pepper spray to break up an altercation between two guests at his home in Midland. He was sentenced to a good behavior bond and granted a spent conviction order, which he appealed to the Supreme Court. Justice Christine Wheeler ruled in his favor, thereby legalizing pepper spray in the state on a case-by-case basis for those who are able to show a reasonable excuse. On 14 March 2012, a person dressed entirely in black entered the public gallery of the New South Wales Legislative Council and launched a paper plane into the air in the form of a petition to Police Minister Mike Gallacher calling on the government to allow civilians to carry capsicum spray. See also Mace (spray) Offensive weapon Defensive weapon Bear spray Notes References External links Are guns more effective than pepper spray in an Alaska bear attack? Chemical weapons Lachrymatory agents Riot control agents Self-defense
Pepper spray
[ "Chemistry", "Biology" ]
6,894
[ "Chemical accident", "Chemical weapons", "Lachrymatory agents", "Riot control agents", "Biochemistry" ]
157,550
https://en.wikipedia.org/wiki/Karl%20Schwarzschild
Karl Schwarzschild (; 9 October 1873 – 11 May 1916) was a German physicist and astronomer. Schwarzschild provided the first exact solution to the Einstein field equations of general relativity, for the limited case of a single spherical non-rotating mass, which he accomplished in 1915, the same year that Einstein first introduced general relativity. The Schwarzschild solution, which makes use of Schwarzschild coordinates and the Schwarzschild metric, leads to a derivation of the Schwarzschild radius, which is the size of the event horizon of a non-rotating black hole. Schwarzschild accomplished this while serving in the German army during World War I. He died the following year from the autoimmune disease pemphigus, which he developed while at the Russian front. Various forms of the disease particularly affect people of Ashkenazi Jewish origin. Asteroid 837 Schwarzschilda is named in his honour, as is the large crater Schwarzschild, on the far side of the Moon. Life Karl Schwarzschild was born on 9 October 1873 in Frankfurt on Main, the eldest of six boys and one girl, to Jewish parents. His father was active in the business community of the city, and the family had ancestors in Frankfurt from the sixteenth century onwards. The family owned two fabric stores in Frankfurt. His brother Alfred became a painter. The young Schwarzschild attended a Jewish primary school until 11 years of age and then the Lessing-Gymnasium (secondary school). He received an all-encompassing education, including subjects like Latin, Ancient Greek, music and art, but developed a special interest in astronomy early on. In fact he was something of a child prodigy, having two papers on binary orbits (celestial mechanics) published before the age of sixteen. After graduation in 1890, he attended the University of Strasbourg to study astronomy. After two years he transferred to the Ludwig Maximilian University of Munich where he obtained his doctorate in 1896 for a work on Henri Poincaré's theories. From 1897, he worked as assistant at the Kuffner Observatory in Vienna. His work here concentrated on the photometry of star clusters and laid the foundations for a formula linking the intensity of the starlight, exposure time, and the resulting contrast on a photographic plate. An integral part of that theory is the Schwarzschild exponent (astrophotography). In 1899, he returned to Munich to complete his Habilitation. From 1901 until 1909, he was a professor at the prestigious Göttingen Observatory within the University of Göttingen, where he had the opportunity to work with some significant figures, including David Hilbert and Hermann Minkowski. Schwarzschild became the director of the observatory. He married Else Rosenbach, a great-granddaughter of Friedrich Wöhler and daughter of a professor of surgery at Göttingen, in 1909. Later that year they moved to Potsdam, where he took up the post of director of the Astrophysical Observatory. This was then the most prestigious post available for an astronomer in Germany. From 1912, Schwarzschild was a member of the Prussian Academy of Sciences. At the outbreak of World War I in 1914, Schwarzschild volunteered for service in the German army despite being over 40 years old. He served on both the western and eastern fronts, specifically helping with ballistic calculations and rising to the rank of second lieutenant in the artillery. While serving on the front in Russia in 1915, he began to suffer from pemphigus, a rare and painful autoimmune skin-disease. 
Nevertheless, he managed to write three outstanding papers, two on the theory of relativity and one on quantum theory. His papers on relativity produced the first exact solutions to the Einstein field equations, and a minor modification of these results gives the well-known solution that now bears his name: the Schwarzschild metric. In March 1916, Schwarzschild left military service because of his illness and returned to Göttingen. Two months later, on May 11, 1916, his struggle with pemphigus may have led to his death at the age of 42. He rests in his family grave at the Stadtfriedhof Göttingen. With his wife Else he had three children: Agathe Thornton (1910–2006) emigrated to Great Britain in 1933. In 1946, she moved to New Zealand, where she became a classics professor at the University of Otago in Dunedin. Martin Schwarzschild (1912–1997) became a professor of astronomy at Princeton University. Alfred Schwarzschild (1914–1944) remained in Nazi Germany and was murdered during the Holocaust. Work Thousands of dissertations, articles, and books have since been devoted to the study of Schwarzschild's solutions to the Einstein field equations. However, although his best known work lies in the area of general relativity, his research interests were extremely broad, including work in celestial mechanics, observational stellar photometry, quantum mechanics, instrumental astronomy, stellar structure, stellar statistics, Halley's comet, and spectroscopy. Some of his particular achievements include measurements of variable stars, using photography, and the improvement of optical systems, through the perturbative investigation of geometrical aberrations. Physics of photography While at Vienna in 1897, Schwarzschild developed a formula, now known as the Schwarzschild law, to calculate the optical density of photographic material. It involved an exponent now known as the Schwarzschild exponent, which is the p in the formula i = f(I·t^p), where i is the optical density of the exposed photographic emulsion, a function of I, the intensity of the source being observed, and t, the exposure time, with p a constant. This formula was important for enabling more accurate photographic measurements of the intensities of faint astronomical sources; a brief numerical illustration of the law is given below. Electrodynamics According to Wolfgang Pauli, Schwarzschild is the first to introduce the correct Lagrangian formalism of the electromagnetic field, written in terms of the electric and applied magnetic fields, the vector potential, and the electric potential. He also introduced a field-free variational formulation of electrodynamics (also known as "action at a distance" or "direct interparticle action"), based only on the world lines of the particles and the arc elements along those world lines. Two points on two world lines contribute to the Lagrangian (are coupled) only if they are at zero Minkowskian distance, that is, connected by a light ray. The idea was further developed by Hugo Tetrode and Adriaan Fokker in the 1920s and John Archibald Wheeler and Richard Feynman in the 1940s, and constitutes an alternative but equivalent formulation of electrodynamics. Relativity Einstein himself was pleasantly surprised to learn that the field equations admitted exact solutions, because of their prima facie complexity, and because he himself had produced only an approximate solution. Einstein's approximate solution was given in his famous 1915 article on the advance of the perihelion of Mercury.
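The following minimal sketch (an illustrative addition, not drawn from the article or from Schwarzschild's papers) shows how the Schwarzschild law of photography mentioned above is used in practice: to hold the photographic density fixed when the source intensity changes, the exposure time must be scaled using the exponent p rather than simple reciprocity. The value p = 0.86 and the sample numbers are assumptions for illustration only; p varies with the emulsion.

```python
# Schwarzschild law: photographic density depends on I * t**p (p < 1 for long
# exposures), not on the plain product I * t.  To keep the density constant
# when the source intensity changes, solve I1 * t1**p == I2 * t2**p for t2.
def equivalent_exposure_time(t1, i1, i2, p=0.86):
    """Exposure time at intensity i2 giving the same density as (i1, t1).
    The default p = 0.86 is an assumed, emulsion-dependent value."""
    return t1 * (i1 / i2) ** (1.0 / p)

# A source 100 times fainter needs considerably more than 100 times
# the exposure time once reciprocity failure is taken into account:
print(equivalent_exposure_time(t1=1.0, i1=1.0, i2=0.01))  # about 211 seconds
```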
In that article, Einstein used rectangular coordinates to approximate the gravitational field around a spherically symmetric, non-rotating, non-charged mass. Schwarzschild, in contrast, chose a more elegant "polar-like" coordinate system and was able to produce an exact solution which he first set down in a letter to Einstein of 22 December 1915, written while he was serving in the war stationed on the Russian front. He concluded the letter by writing: "As you see, the war is kindly disposed toward me, allowing me, despite fierce gunfire at a decidedly terrestrial distance, to take this walk into this your land of ideas." In 1916, Einstein wrote back to Schwarzschild praising the result. Schwarzschild's second paper, which gives what is now known as the "Inner Schwarzschild solution" (in German: "innere Schwarzschild-Lösung"), is valid within a sphere of homogeneously and isotropically distributed molecules within a shell of radius r = R. It is applicable to solids; incompressible fluids; the sun and stars viewed as a quasi-isotropic heated gas; and any homogeneously and isotropically distributed gas. Schwarzschild's first (spherically symmetric) solution contains a coordinate singularity on a surface that is now named after him. In Schwarzschild coordinates, this singularity lies on the sphere of points at a particular radius, called the Schwarzschild radius: R_s = 2GM/c^2, where G is the gravitational constant, M is the mass of the central body, and c is the speed of light in vacuum. In cases where the radius of the central body is less than the Schwarzschild radius, R_s represents the radius within which all massive bodies, and even photons, must inevitably fall into the central body (ignoring quantum tunnelling effects near the boundary). When the mass density of this central body exceeds a particular limit, it triggers a gravitational collapse which, if it occurs with spherical symmetry, produces what is known as a Schwarzschild black hole. This occurs, for example, when the mass of a neutron star exceeds the Tolman–Oppenheimer–Volkoff limit (about three solar masses). Cultural references Karl Schwarzschild appears as a character in the science fiction short story "Schwarzschild Radius" (1987) by Connie Willis. Karl Schwarzschild appears as a fictionalized character in the story "Schwarzschild's Singularity" in the collection "When We Cease to Understand the World" (2020) by Benjamín Labatut. Works The entire scientific estate of Karl Schwarzschild is stored in a special collection of the Lower Saxony National- and University Library of Göttingen. Relativity Über das Gravitationsfeld eines Massenpunktes nach der Einstein'schen Theorie. Reimer, Berlin 1916, S. 189 ff. (Sitzungsberichte der Königlich-Preussischen Akademie der Wissenschaften; 1916) Über das Gravitationsfeld einer Kugel aus inkompressibler Flüssigkeit. Reimer, Berlin 1916, S. 424-434 (Sitzungsberichte der Königlich-Preussischen Akademie der Wissenschaften; 1916) Other papers Untersuchungen zur geometrischen Optik I. Einleitung in die Fehlertheorie optischer Instrumente auf Grund des Eikonalbegriffs, 1906, Abhandlungen der Gesellschaft der Wissenschaften in Göttingen, Band 4, Nummero 1, S. 1-31 Untersuchungen zur geometrischen Optik II. Theorie der Spiegelteleskope, 1906, Abhandlungen der Gesellschaft der Wissenschaften in Göttingen, Band 4, Nummero 2, S. 1-28 Untersuchungen zur geometrischen Optik III. Über die astrophotographischen Objektive, 1906, Abhandlungen der Gesellschaft der Wissenschaften in Göttingen, Band 4, Nummero 3, S.
1-54 Über Differenzformeln zur Durchrechnung optischer Systeme, 1907, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 551-570 Aktinometrie der Sterne der B. D. bis zur Größe 7.5 in der Zone 0° bis +20° Deklination. Teil A. Unter Mitwirkung von Br. Meyermann, A. Kohlschütter und O. Birck, 1910, Abhandlungen der Gesellschaft der Wissenschaften in Göttingen, Band 6, Numero 6, S. 1-117 Über das Gleichgewicht der Sonnenatmosphäre, 1906, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 41-53 Die Beugung und Polarisation des Lichts durch einen Spalt. I., 1902, Mathematische Annalen, Band 55, S. 177-247 Zur Elektrodynamik. I. Zwei Formen des Princips der Action in der Elektronentheorie, 1903, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 126-131 Zur Elektrodynamik. II. Die elementare elektrodynamische Kraft, 1903, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 132-141 Zur Elektrodynamik. III. Ueber die Bewegung des Elektrons, 1903, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 245-278 Ueber die Eigenbewegungen der Fixsterne, 1907, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 614-632 Ueber die Bestimmung von Vertex und Apex nach der Ellipsoidhypothese aus einer geringeren Anzahl beobachteter Eigenbewegungen, 1908, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 191-200 K. Schwarzschild, E. Kron: Ueber die Helligkeitsverteilung im Schweif des Halley´schen Kometen, 1911, Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, S. 197-208 Die naturwissenschaftlichen Ergebnisse und Ziele der neueren Mechanik., 1904, Jahresbericht der Deutschen Mathematiker-Vereinigung, Band 13, S. 145-156 Über die astronomische Ausbildung der Lehramtskandidaten., 1907, Jahresbericht der Deutschen Mathematiker-Vereinigung, Band 16, S. 519-522 English translations On the Gravitational Field of a Point-Mass, According to Einstein's Theory, The Abraham Zelmanov Journal, 2008, Volume 1, P. 10-19 On the Gravitational Field of a Sphere of Incompressible Liquid, According to Einstein's Theory, The Abraham Zelmanov Journal, 2008, Volume 1, P. 20-32 On the Permissible Numerical Value of the Curvature of Space, The Abraham Zelmanov Journal, Volume 1, 2008, pp. 64-73 See also List of things named after Karl Schwarzschild References External links Roberto B. Salgado The Light Cone: The Schwarzschild Black Hole Obituary in the Astrophysical Journal, written by Ejnar Hertzsprung Biography of Karl Schwarzschild by Indranu Suhendro, The Abraham Zelmanov Journal, 2008, Volume 1. 1873 births 1916 deaths Jewish astronomers 19th-century German astronomers German relativity theorists German Ashkenazi Jews Jewish German physicists Ludwig Maximilian University of Munich alumni Members of the Prussian Academy of Sciences Scientists from Frankfurt People from Hesse-Nassau University of Strasbourg alumni Academic staff of the University of Göttingen German Jewish military personnel of World War I 20th-century German astronomers Deaths from autoimmune disease
Karl Schwarzschild
[ "Astronomy" ]
3,149
[ "Astronomers", "Jewish astronomers" ]
157,592
https://en.wikipedia.org/wiki/Saturnalia
Saturnalia is an ancient Roman festival and holiday in honour of the god Saturn, held on 17 December in the Julian calendar and later expanded with festivities until 19 December. By the 1st century BC, the celebration had been extended until 23 December, for a total of seven days of festivities. The holiday was celebrated with a sacrifice at the Temple of Saturn, in the Roman Forum, and a public banquet, followed by private gift-giving, continual partying, and a carnival atmosphere that overturned Roman social norms: gambling was permitted, and masters provided table service for their slaves, as it was seen as a time of liberty for both slaves and freedmen alike. A common custom was the election of a "King of the Saturnalia", who gave orders to people, which had to be followed, and who presided over the merrymaking. The gifts exchanged were usually gag gifts or small figurines made of wax or pottery known as sigillaria. The poet Catullus called it "the best of days". Saturnalia was the Roman equivalent to the earlier Greek holiday of Kronia, which was celebrated during the Attic month of Hekatombaion in late midsummer. It held theological importance for some Romans, who saw it as a restoration of the ancient Golden Age, when the world was ruled by Saturn. The Neoplatonist philosopher Porphyry interpreted the freedom associated with Saturnalia as symbolizing the "freeing of souls into immortality". Saturnalia may have influenced some of the customs associated with later celebrations in western Europe occurring in midwinter, particularly traditions associated with Christmas, the Feast of the Holy Innocents, and Epiphany. In particular, the historical western European Christmas custom of electing a "Lord of Misrule" may have its roots in Saturnalia celebrations. Origins In Roman mythology, Saturn was an agricultural deity who was said to have reigned over the world in the Golden Age, when humans enjoyed the spontaneous bounty of the earth without labour in a state of innocence. The revelries of Saturnalia were supposed to reflect the conditions of the lost mythical age. The Greek equivalent was the Kronia, which was celebrated on the twelfth day of the month of Hekatombaion, which occurred from around mid-July to mid-August on the Attic calendar. The Greek writer Athenaeus cites numerous other examples of similar festivals celebrated throughout the Greco-Roman world, including the Cretan festival of Hermaia in honor of Hermes, an unnamed festival from Troezen in honor of Poseidon, the Thessalian festival of Peloria in honor of Zeus Pelorios, and an unnamed festival from Babylon. He also mentions that the custom of masters dining with their slaves was associated with the Athenian festival of Anthesteria and the Spartan festival of Hyacinthia. The Argive festival of Hybristica, though not directly related to the Saturnalia, involved a similar reversal of roles in which women would dress as men and men would dress as women. The ancient Roman historian Justinus credits Saturn with being a historical king of the pre-Roman inhabitants of Italy. Although probably the best-known Roman holiday, Saturnalia as a whole is not described from beginning to end in any single ancient source. Modern understanding of the festival is pieced together from several accounts dealing with various aspects. The Saturnalia was the dramatic setting of the multivolume work of that name by Macrobius, a Latin writer from late antiquity who is the major source for information about the holiday.
Macrobius describes the reign of Justinus's "king Saturn" as "a time of great happiness, both on account of the universal plenty that prevailed and because as yet there was no division into bond and free – as one may gather from the complete license enjoyed by slaves at the Saturnalia." In Lucian's Saturnalia it is Chronos himself who proclaims a "festive season, when 'tis lawful to be drunken, and slaves have license to revile their lords". In one of the interpretations in Macrobius's work, Saturnalia is a festival of light leading to the winter solstice, with the abundant presence of candles symbolizing the quest for knowledge and truth. The renewal of light and the coming of the new year was celebrated in the later Roman Empire at the Dies Natalis Solis Invicti, the "Birthday of the Unconquerable Sun", on 25 December. The popularity of Saturnalia continued into the 3rd and 4th centuries CE, and as the Roman Empire came under Christian rule, many of its customs were recast into or at least influenced the seasonal celebrations surrounding Christmas and the New Year. Historical context Saturnalia underwent a major reform in 217 BC, after the Battle of Lake Trasimene, when the Romans suffered one of their most crushing defeats by Carthage during the Second Punic War. Until that time, they had celebrated the holiday according to Roman custom (more Romano). It was after a consultation of the Sibylline Books that they adopted "Greek rite", introducing sacrifices carried out in the Greek manner, the public banquet, and the continual shouts of io Saturnalia that became characteristic of the celebration. Cato the Elder (234–149 BC) remembered a time before the so-called "Greek" elements had been added to the Roman Saturnalia. It was not unusual for the Romans to offer cult (cultus) to the deities of other nations in the hope of redirecting their favour (see evocatio), and the Second Punic War in particular created pressures on Roman society that led to a number of religious innovations and reforms. Robert E.A. Palmer has argued that the introduction of new rites at this time was in part an effort to appease Ba'al Hammon, the Carthaginian god who was regarded as the counterpart of the Roman Saturn and Greek Cronus. The table service that masters offered their slaves thus would have extended to Carthaginian or African war captives. Public religious observance Rite at the temple of Saturn The statue of Saturn at his main temple normally had its feet bound in wool, which was removed for the holiday as an act of liberation. The official rituals were carried out according to "Greek rite" (ritus graecus). The sacrifice was officiated by a priest, whose head was uncovered; in Roman rite, priests sacrificed capite velato, with head covered by a special fold of the toga. This procedure is usually explained by Saturn's assimilation with his Greek counterpart Cronus, since the Romans often adopted and reinterpreted Greek myths, iconography, and even religious practices for their own deities, but the uncovering of the priest's head may also be one of the Saturnalian reversals, the opposite of what was normal. Following the sacrifice the Roman Senate arranged a lectisternium, a ritual of Greek origin that typically involved placing a deity's image on a sumptuous couch, as if he were present and actively participating in the festivities. A public banquet followed (convivium publicum). The day was supposed to be a holiday from all forms of work. Schools were closed, and exercise regimens were suspended. 
Courts were not in session, so no justice was administered, and no declaration of war could be made. After the public rituals, observances continued at home. On 18 and 19 December, which were also holidays from public business, families conducted domestic rituals. They bathed early, and those with means sacrificed a suckling pig, a traditional offering to an earth deity. Human offerings Saturn also had a less benevolent aspect. One of his consorts was Lua, sometimes called Lua Saturni ("Saturn's Lua") and identified with Lua Mater, "Mother Destruction", a goddess in whose honor the weapons of enemies killed in war were burned, perhaps in expiation. Saturn's chthonic nature connected him to the underworld and its ruler Dīs Pater, the Roman equivalent of Greek Plouton (Pluto in Latin) who was also a god of hidden wealth. In sources of the third century AD and later, Saturn is recorded as receiving dead gladiators as offerings (munera) during or near the Saturnalia. These gladiatorial events, ten days in all throughout December, were presented mainly by the quaestors and sponsored with funds from the treasury of Saturn. The practice of gladiator munera was criticized by Christian apologists as a form of human sacrifice. Although there is no evidence of this practice during the Republic, the offering of gladiators led to later theories that the primeval Saturn had demanded human victims. Macrobius says that Dīs Pater was placated with human heads and Saturn with sacrificial victims consisting of men (virorum victimis). In mythic lore, during the visit of Hercules to Italy, the civilizing demigod insisted that the practice be halted and the ritual reinterpreted. Instead of heads to Dīs Pater, the Romans were to offer effigies or masks (oscilla); a mask appears in the representation of Saturnalia in the Calendar of Filocalus. Since the Greek word phota meant both 'man' and 'lights', candles were a substitute offering to Saturn for the light of life. The figurines that were exchanged as gifts (sigillaria) may also have represented token substitutes. Private festivities Role reversal Saturnalia was characterized by role reversals and behavioral license. Slaves were treated to a banquet of the kind usually enjoyed by their masters. Ancient sources differ on the circumstances: some suggest that master and slave dined together, while others indicate that the slaves feasted first, or that the masters actually served the food. The practice might have varied over time. Saturnalian license also permitted slaves to disrespect their masters without the threat of a punishment. It was a time for free speech: the Augustan poet Horace calls it "December liberty". In two satires set during the Saturnalia, Horace has a slave offer sharp criticism to his master. Everyone knew, however, that the leveling of the social hierarchy was temporary and had limits; no social norms were ultimately threatened, because the holiday would end. The toga, the characteristic garment of the male Roman citizen, was set aside in favor of the Greek synthesis, colourful "dinner clothes" otherwise considered in poor taste for daytime wear. Romans of citizen status normally went about bare-headed, but for the Saturnalia donned the pilleus, the conical felt cap that was the usual mark of a freedman. Slaves, who ordinarily were not entitled to wear the pilleus, wore it as well, so that everyone was "pilleated" without distinction. 
The participation of freeborn Roman women is implied by sources that name gifts for women, but their presence at banquets may have depended on the custom of their time; from the late Republic onward, women mingled socially with men more freely than they had in earlier times. Female entertainers were certainly present at some otherwise all-male gatherings. Role-playing was implicit in the Saturnalia's status reversals, and there are hints of mask-wearing or "guising". No theatrical events are mentioned in connection with the festivities, but the classicist Erich Segal saw Roman comedy, with its cast of impudent, free-wheeling slaves and libertine seniors, as imbued with the Saturnalian spirit. Gambling Gambling and dice-playing, normally prohibited or at least frowned upon, were permitted for all, even slaves. Coins and nuts were the stakes. On the Calendar of Philocalus, the Saturnalia is represented by a man wearing a fur-trimmed coat next to a table with dice, and a caption reading: "Now you have license, slave, to game with your master." Rampant overeating and drunkenness became the rule, and a sober person the exception. Seneca looked forward to the holiday, if somewhat tentatively, in a letter to a friend: "It is now the month of December, when the greatest part of the city is in a bustle. Loose reins are given to public dissipation; everywhere you may hear the sound of great preparations, as if there were some real difference between the days devoted to Saturn and those for transacting business. ... Were you here, I would willingly confer with you as to the plan of our conduct; whether we should live in our usual way, or, to avoid singularity, both take a better supper and throw off the toga." Some Romans found it all a bit much. Pliny describes a secluded suite of rooms in his Laurentine villa, which he used as a retreat: "... especially during the Saturnalia when the rest of the house is noisy with the licence of the holiday and festive cries. This way I don't hamper the games of my people and they don't hinder my work or studies." Gift-giving The Sigillaria on 19 December was a day of gift-giving. Because gifts of value would mark social status contrary to the spirit of the season, these were often the pottery or wax figurines called sigillaria made specially for the day, candles, or "gag gifts", of which Augustus was particularly fond. Children received toys as gifts. In his many poems about the Saturnalia, Martial names both expensive and quite cheap gifts, including writing tablets, dice, knucklebones, moneyboxes, combs, toothpicks, a hat, a hunting knife, an axe, various lamps, balls, perfumes, pipes, a pig, a sausage, a parrot, tables, cups, spoons, items of clothing, statues, masks, books, and pets. Gifts might be as costly as a slave or exotic animal, but Martial suggests that token gifts of low intrinsic value inversely measure the high quality of a friendship. Patrons or "bosses" might pass along a gratuity (sigillaricium) to their poorer clients or dependents to help them buy gifts. Some emperors were noted for their devoted observance of the Sigillaria. In a practice that might be compared to modern greeting cards, verses sometimes accompanied the gifts. Martial has a collection of poems written as if to be attached to gifts. Catullus received a book of bad poems by "the worst poet of all time" as a joke from a friend. Gift-giving was not confined to the day of the Sigillaria. 
In some households, guests and family members received gifts after the feast in which slaves had shared. King of the Saturnalia Imperial sources refer to a Saturnalicius princeps ("Ruler of the Saturnalia"), who ruled as master of ceremonies for the proceedings. He was appointed by lot, and has been compared to the medieval Lord of Misrule at the Feast of Fools. His capricious commands, such as "Sing naked!" or "Throw him into cold water!", had to be obeyed by the other guests at the convivium: he creates and (mis)rules a chaotic and absurd world. The future emperor Nero is recorded as playing the role in his youth. Since this figure does not appear in accounts from the Republican period, the princeps of the Saturnalia may have developed as a satiric response to the new era of rule by a princeps, the title assumed by the first emperor Augustus to avoid the hated connotations of the word "king" (rex). Art and literature under Augustus celebrated his reign as a new Golden Age, but the Saturnalia makes a mockery of a world in which law is determined by one man and the traditional social and political networks are reduced to the power of the emperor over his subjects. In a poem about a lavish Saturnalia under Domitian, Statius makes it clear that the emperor, like Jupiter, still reigns during the temporary return of Saturn. Io Saturnalia The phrase io Saturnalia was the characteristic shout or salutation of the festival, originally commencing after the public banquet on the single day of 17 December. The interjection io (Greek ἰώ, ǐō) is pronounced either with two syllables (a short i and a long o) or as a single syllable (with the i becoming the Latin consonantal j and pronounced yō). It was a strongly emotive ritual exclamation or invocation, used for instance in announcing triumph or celebrating Bacchus, but also to punctuate a joke. On the calendar As an observance of state religion, Saturnalia was supposed to have been held "... quarto decimo Kalendarum Ianuariarum", on the fourteenth day before the Kalends of the pre-Julian, twenty-nine day December, on the oldest Roman religious calendar, which the Romans believed to have been established by the legendary founder Romulus and his successor Numa Pompilius. It was a dies festus, a legal holiday when no public business could be conducted. The day marked the dedication anniversary (dies natalis) of the Temple to Saturn in the Roman Forum in 497 BC. When Julius Caesar had the calendar reformed because it had fallen out of synchronization with the solar year, two days were added to the month, and the date of Saturnalia then changed, still falling on the 17 December, but with this now being the sixteenth day before the Kalends, as per the Roman reckoning of dates of this time. It was felt, thus, that the original day had thus been moved by two days, and so Saturnalia was celebrated under Augustus as a three-day official holiday encompassing both dates. By the late Republic, the private festivities of Saturnalia had expanded to seven days, but during the Imperial period contracted variously to three to five days. Caligula extended official observances to five. The date 17 December was the first day of the astrological sign Capricorn, the house of Saturn, the planet named for the god. 
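As a side note on the Roman reckoning described above, the following minimal Python sketch (not part of the source text; added purely for illustration) works through the inclusive "backward" count that makes 17 December the fourteenth day before the Kalends of January in the 29-day pre-Julian December and the sixteenth day once Caesar's reform lengthened the month by two days.

```python
# Illustrative sketch of Roman inclusive day counting, assuming the month lengths
# stated above: 29 days for pre-Julian December, 31 days after Caesar's reform.
# Both the starting day and the Kalends (the 1st of the next month) are counted.

def days_before_kalends(day_of_month: int, days_in_month: int) -> int:
    """Inclusive count from a given day up to and including the next Kalends."""
    return (days_in_month - day_of_month) + 2  # +1 for the day itself, +1 for the Kalends

print(days_before_kalends(17, 29))  # pre-Julian December -> 14 (a.d. XIV Kal. Ian.)
print(days_before_kalends(17, 31))  # Julian December     -> 16 (a.d. XVI Kal. Ian.)
```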
Its proximity to the winter solstice (21 to 23 December on the Julian calendar) was endowed with various meanings by both ancient and modern scholars: for instance, the widespread use of wax candles (cerei, singular cereus) could refer to "the returning power of the sun's light after the solstice". Ancient theological and philosophical views Roman The Saturnalia reflects the contradictory nature of the deity Saturn himself: "There are joyful and utopian aspects of careless well-being side by side with disquieting elements of threat and danger." As a deity of agricultural bounty, Saturn embodied prosperity and wealth in general. The name of his consort Ops meant "wealth, resources". Her festival, Opalia, was celebrated on 19 December. The Temple of Saturn housed the state treasury (aerarium Saturni) and was the administrative headquarters of the quaestors, the public officials whose duties included oversight of the mint. It was among the oldest cult sites in Rome, and had been the location of "a very ancient" altar (ara) even before the building of the first temple in 497 BC. The Romans regarded Saturn as the original and autochthonous ruler of the Capitolium, and the first king of Latium or even the whole of Italy. At the same time, there was a tradition that Saturn had been an immigrant deity, received by Janus after he was usurped by his son Jupiter (Zeus) and expelled from Greece. His contradictions—a foreigner with one of Rome's oldest sanctuaries, and a god of liberation who is kept in fetters most of the year—indicate Saturn's capacity for obliterating social distinctions. Roman mythology of the Golden Age of Saturn's reign differed from the Greek tradition. He arrived in Italy "dethroned and fugitive", but brought agriculture and civilization and became a king. As the Augustan poet Virgil described it: "[H]e gathered together the unruly race [of fauns and nymphs] scattered over mountain heights, and gave them laws .... Under his reign were the golden ages men tell of: in such perfect peace he ruled the nations." The third century Neoplatonic philosopher Porphyry took an allegorical view of the Saturnalia. He saw the festival's theme of liberation and dissolution as representing the "freeing of souls into immortality"—an interpretation that Mithraists may also have followed, since they included many slaves and freedmen. According to Porphyry, the Saturnalia occurred near the winter solstice because the sun enters Capricorn, the astrological house of Saturn, at that time. In the Saturnalia of Macrobius, the proximity of the Saturnalia to the winter solstice leads to an exposition of solar monotheism, the belief that the Sun (see Sol Invictus) ultimately encompasses all divinities as one. Jewish M. Avodah Zarah lists Saturnalia as a "festival of the gentiles," along with the Calends of January and Kratesis. B. Avodah Zarah records that Ḥanan b. Rava said, "Kalends is held during the eight days after the [winter] solstice and Saturnura begins eight days before the [winter] solstice". Ḥananel b. Ḥushiel, followed by Rashi, claims: "Eight days before the solstice -- their festival was for all eight days," which slightly overstates the Saturnalia's historical six-day length, possibly to associate the holiday with Hanukkah. In the Jerusalem Talmud, Avodah Zarah claims the etymology of Saturnalia is שנאה טמונה śinʾâ ṭǝmûnâ "hidden hatred," and refers to the hatred Esau, whom the Rabbis believed had fathered Rome, harbored for Jacob. 
The Babylonian Talmud's Avodah Zarah ascribes the origins of Saturnalia (and Kalends) to Adam, who saw that the days were getting shorter and thought it was punishment for his sin. In the Babylonian Avodah Zarah this etiology is attributed to the tannaim, but the story is suspiciously similar to the etiology of Kalends attributed by the Jerusalem Avodah Zarah to Abba Arikha. Influence Unlike several Roman religious festivals which were particular to cult sites in the city, the prolonged seasonal celebration of Saturnalia at home could be held anywhere in the Empire. Saturnalia continued as a secular celebration long after it was removed from the official calendar. As William Warde Fowler notes: "[Saturnalia] has left its traces and found its parallels in great numbers of medieval and modern customs, occurring about the time of the winter solstice." The date of Jesus's birth is unknown. A spurious correspondence between Cyril of Jerusalem and Pope Julius I (337–352), quoted by John of Nikiu in the 9th century, is sometimes given as a source for a claim that, in the fourth century AD, Pope Julius I decreed that the birth of Jesus be celebrated on 25 December. Some speculate that the date was chosen to create a Christian replacement or alternative to Saturnalia and the birthday festival of Sol Invictus, held on 25 December. Around AD 200, Tertullian had berated Christians for continuing to celebrate the pagan Saturnalia festival. The Church may have hoped to attract more converts to Christianity by allowing them to continue to celebrate on the same day. The Church may have also been influenced by the idea that Jesus was conceived and died on the same date; Jesus died during Passover and, in the third century AD, Passover was celebrated on 25 March. The Church may have calculated Jesus's birthday as nine months later, on 25 December. The correspondence itself, however, is not genuine. As a result of the close proximity of dates, many Christians in western Europe continued to celebrate traditional Saturnalia customs in association with Christmas and the surrounding holidays. Like Saturnalia, Christmas during the Middle Ages was a time of ruckus, drinking, gambling, and overeating. The tradition of the Saturnalicius princeps was particularly influential. In medieval France and Switzerland, a boy would be elected "bishop for a day" on 28 December (the Feast of the Holy Innocents) and would issue decrees much like the Saturnalicius princeps. The boy bishop's tenure ended at vespers that evening. This custom was common across western Europe, but varied considerably by region; in some places, the boy bishop's orders could become quite rowdy and unrestrained, but, in others, his power was only ceremonial. In some parts of France, during the boy bishop's tenure, the actual clergy would wear masks or dress in women's clothing, a reversal of roles in line with the traditional character of Saturnalia. During the late medieval period and early Renaissance, many towns in England elected a "Lord of Misrule" at Christmas time to preside over the Feast of Fools. This custom was sometimes associated with Twelfth Night or Epiphany. A common tradition in western Europe was to drop a bean, coin, or other small token into a cake or pudding; whoever found the object would become the "King (or Queen) of the Bean". During the Protestant Reformation, reformers sought to revise or even completely abolish such practices, which they regarded as "popish"; these efforts were largely successful. 
The Puritans banned the "Lord of Misrule" in England and the custom was largely forgotten shortly thereafter, though the bean in the pudding survived as a tradition of a small gift to the one finding a single almond hidden in the traditional Christmas porridge in Scandinavia. Nonetheless, in the middle of the nineteenth century, some of the old ceremonies, such as gift-giving, were revived in English-speaking countries as part of a widespread "Christmas revival". During this revival, authors such as Charles Dickens sought to reform the "conscience of Christmas" and turn the formerly riotous holiday into a family-friendly occasion. Vestiges of the Saturnalia festivities may still be preserved in some of the traditions now associated with Christmas. The custom of gift-giving at Christmas time resembles the Roman tradition of giving sigillaria and the lighting of Advent candles resembles the Roman tradition of lighting torches and wax tapers. Likewise, Saturnalia and Christmas both share associations with eating, drinking, singing, and dancing. See also Brumalia Yule Bacchanalia References Bibliography Ancient sources Horace Satire 2.7.4 Justinus Epitome of Pompeius Trogus Macrobius Saturnalia Pliny the Younger Letters Modern secondary sources External links Saturnalia – World History Encyclopedia Saturnalia, A longer article by James Grout Saturn (mythology) Ancient Roman festivals December observances Winter festivals Religious festivals in Italy Winter solstice
Saturnalia
[ "Astronomy" ]
5,547
[ "Astronomical events", "Winter solstice" ]
157,606
https://en.wikipedia.org/wiki/Glass%20fiber
Glass fiber (or glass fibre) is a material consisting of numerous extremely fine fibers of glass. Glassmakers throughout history have experimented with glass fibers, but mass manufacture of glass fiber was only made possible with the invention of finer machine tooling. In 1893, Edward Drummond Libbey exhibited a dress at the World's Columbian Exposition incorporating glass fibers with the diameter and texture of silk fibers. Glass fibers can also occur naturally, as Pele's hair. Glass wool, which is one product called "fiberglass" today, was invented some time between 1932 and 1933 by Games Slayter of Owens-Illinois, as a material to be used as thermal building insulation. It is marketed under the trade name Fiberglas, which has become a genericized trademark. Glass fiber, when used as a thermal insulating material, is specially manufactured with a bonding agent to trap many small air cells, resulting in the characteristically air-filled low-density "glass wool" family of products. Glass fiber has roughly comparable mechanical properties to other fibers such as polymers and carbon fiber. Although not as rigid as carbon fiber, it is much cheaper and significantly less brittle when used in composites. Glass fiber reinforced composites are used in marine industry and piping industries because of good environmental resistance, better damage tolerance for impact loading, high specific strength and stiffness. Fiber formation Glass fiber is formed when thin strands of silica-based or other formulation glass are extruded into many fibers with small diameters suitable for textile processing. The technique of heating and drawing glass into fine fibers has been known for millennia, and was practiced in Egypt and Venice. Before the recent use of these fibers for textile applications, all glass fiber had been manufactured as staple (that is, clusters of short lengths of fiber). The modern method for producing glass wool is the invention of Games Slayter working at the Owens-Illinois Glass Company (Toledo, Ohio). He first applied for a patent for a new process to make glass wool in 1933. The first commercial production of glass fiber was in 1936. In 1938 Owens-Illinois Glass Company and Corning Glass Works joined to form the Owens-Corning Fiberglas Corporation. When the two companies joined to produce and promote glass fiber, they introduced continuous filament glass fibers. Owens-Corning is still the major glass-fiber producer in the market today. The most common type of glass fiber used in fiberglass is E-glass, which is alumino-borosilicate glass with less than 1% w/w alkali oxides, mainly used for glass-reinforced plastics. Other types of glass used are A-glass (Alkali-lime glass with little or no boron oxide), E-CR-glass (Electrical/Chemical Resistance; alumino-lime silicate with less than 1% w/w alkali oxides, with high acid resistance), C-glass (alkali-lime glass with high boron oxide content, used for glass staple fibers and insulation), D-glass (borosilicate glass, named for its low dielectric constant), R-glass (alumino silicate glass without MgO and CaO with high mechanical requirements as reinforcement), and S-glass (alumino silicate glass without CaO but with high MgO content with high tensile strength). Pure silica (silicon dioxide), when cooled as fused quartz into a glass with no true melting point, can be used as a glass fiber for fiberglass, but has the drawback that it must be worked at very high temperatures. 
In order to lower the necessary work temperature, other materials are introduced as "fluxing agents" (i.e., components to lower the melting point). Ordinary A-glass ("A" for "alkali-lime") or soda lime glass, crushed and ready to be remelted, as so-called cullet glass, was the first type of glass used for fiberglass. E-glass ("E" because of initial electrical application) is alkali free, and was the first glass formulation used for continuous filament formation. It now makes up most of the fiberglass production in the world, and also is the single largest consumer of boron minerals globally. It is susceptible to chloride ion attack and is a poor choice for marine applications. S-glass ("S" for "Strength") is used when high tensile strength (modulus) is important, and is thus important in composites for building and aircraft construction. The same substance is known as R-glass ("R" for "reinforcement") in Europe. C-glass ("C" for "chemical resistance") and T-glass ("T" is for "thermal insulator" – a North American variant of C-glass) are resistant to chemical attack; both are often found in insulation-grades of blown fiberglass. Chemistry The basis of textile-grade glass fibers is silica, SiO2. In its pure form it exists as a polymer, (SiO2)n. It has no true melting point but softens up to 1200 °C, where it starts to degrade. At 1713 °C, most of the molecules can move about freely. If the glass is extruded and cooled quickly at this temperature, it will be unable to form an ordered structure. In the polymer it forms SiO4 groups which are configured as a tetrahedron with the silicon atom at the center, and four oxygen atoms at the corners. These atoms then form a network bonded at the corners by sharing the oxygen atoms. The vitreous and crystalline states of silica (glass and quartz) have similar energy levels on a molecular basis, also implying that the glassy form is extremely stable. In order to induce crystallization, it must be heated to temperatures above 1200 °C for long periods of time. Although pure silica is a perfectly viable glass and glass fiber, it must be worked with at very high temperatures, which is a drawback unless its specific chemical properties are needed. It is usual to introduce impurities into the glass in the form of other materials to lower its working temperature. These materials also impart various other properties to the glass that may be beneficial in different applications. The first type of glass used for fiber was soda lime glass or A-glass ("A" for the alkali it contains). It is not very resistant to alkali. A newer, alkali-free (<2%) type, E-glass, is an alumino-borosilicate glass. C-glass was developed to resist attack from chemicals, mostly acids that destroy E-glass. T-glass is a North American variant of C-glass. AR-glass is alkali-resistant glass. Most glass fibers have limited solubility in water but are very dependent on pH. Chloride ions will also attack and dissolve E-glass surfaces. E-glass does not actually melt, but softens instead, the softening point being "the temperature at which a 0.55–0.77 mm diameter fiber 235 mm long, elongates under its own weight at 1 mm/min when suspended vertically and heated at the rate of 5 °C per minute". The strain point is reached when the glass has a viscosity of 10^14.5 poise. The annealing point, which is the temperature where the internal stresses are reduced to an acceptable commercial limit in 15 minutes, is marked by a viscosity of 10^13 poise. 
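For readers more used to SI units, the two viscosity reference points quoted above can be restated in pascal-seconds (1 poise = 0.1 Pa·s). The short Python sketch below is not from the source; it simply performs that conversion and illustrates how far apart the annealing and strain points sit on a logarithmic viscosity scale.

```python
import math

# Viscosity reference points for E-glass quoted above, in poise (P).
# Conversion to SI: 1 P = 0.1 Pa*s. The softening point, by contrast, is defined
# by the fibre-elongation test quoted in the text rather than by a fixed viscosity.
reference_points_poise = {
    "annealing point": 10 ** 13.0,   # internal stresses relax to an acceptable limit in ~15 min
    "strain point": 10 ** 14.5,      # below this the glass is effectively rigid
}

for name, eta_p in reference_points_poise.items():
    eta_pa_s = eta_p * 0.1
    print(f"{name}: 10^{math.log10(eta_p):.1f} P = 10^{math.log10(eta_pa_s):.1f} Pa*s")
```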
Properties Thermal Fabrics of woven glass fibers are useful thermal insulators because of their high ratio of surface area to weight. However, the increased surface area makes them much more susceptible to chemical attack. By trapping air within them, blocks of glass fiber make good thermal insulation, with a thermal conductivity of the order of 0.05 W/(m·K). Selected properties Mechanical properties The strength of glass is usually tested and reported for "virgin" or pristine fibers—those that have just been manufactured. The freshest, thinnest fibers are the strongest because the thinner fibers are more ductile. The more the surface is scratched, the less the resulting tenacity. Because glass has an amorphous structure, its properties are the same along the fiber and across the fiber. Humidity is an important factor in the tensile strength. Moisture is easily adsorbed and can worsen microscopic cracks and surface defects, and lessen tenacity. In contrast to carbon fiber, glass can undergo more elongation before it breaks. Thinner filaments can bend further before they break. The viscosity of the molten glass is very important for manufacturing success. During drawing, the process where the hot glass is pulled to reduce the diameter of the fiber, the viscosity must be relatively low. If it is too high, the fiber will break during drawing. However, if it is too low, the glass will form droplets instead of being drawn out into a fiber. Manufacturing processes Melting There are two main types of glass fiber manufacture and two main types of glass fiber product. First, fiber is made either from a direct melt process or a marble remelt process. Both start with the raw materials in solid form. The materials are mixed together and melted in a furnace. Then, for the marble process, the molten material is sheared and rolled into marbles which are cooled and packaged. The marbles are taken to the fiber manufacturing facility where they are inserted into a can and remelted. The molten glass is extruded to the bushing to be formed into fiber. In the direct melt process, the molten glass in the furnace goes directly to the bushing for formation. Formation The bushing plate is the most important part of the machinery for making the fiber. This is a small metal furnace containing nozzles for the fiber to be formed through. It is almost always made of platinum alloyed with rhodium for durability. Platinum is used because the glass melt has a natural affinity for wetting it. When bushings were first used they were pure platinum, and the glass wetted the bushing so easily that it ran under the plate after exiting the nozzle and accumulated on the underside. Also, due to its cost and the tendency to wear, the platinum was alloyed with rhodium. In the direct melt process, the bushing serves as a collector for the molten glass. It is heated slightly to keep the glass at the correct temperature for fiber formation. In the marble melt process, the bushing acts more like a furnace as it melts more of the material. Bushings are the major expense in fiber glass production. The nozzle design is also critical. The number of nozzles ranges from 200 to 4000 in multiples of 200. The important part of the nozzle in continuous filament manufacture is the thickness of its walls in the exit region. It was found that inserting a counterbore here reduced wetting. Today, the nozzles are designed to have a minimum thickness at the exit. As glass flows through the nozzle, it forms a drop which is suspended from the end. 
As it falls, it leaves a thread attached by the meniscus to the nozzle as long as the viscosity is in the correct range for fiber formation. The smaller the annular ring of the nozzle and the thinner the wall at exit, the faster the drop will form and fall away, and the lower its tendency to wet the vertical part of the nozzle. The surface tension of the glass is what influences the formation of the meniscus. For E-glass it should be around 400 mN/m. The attenuation (drawing) speed is important in the nozzle design. Although slowing this speed down can make coarser fiber, it is uneconomic to run at speeds for which the nozzles were not designed. Continuous filament process In the continuous filament process, after the fiber is drawn, a size is applied. This size helps protect the fiber as it is wound onto a bobbin. The particular size applied relates to end-use. While some sizes are processing aids, others make the fiber have an affinity for a certain resin, if the fiber is to be used in a composite. Size is usually added at 0.5–2.0% by weight. Winding then takes place at around 1 km/min. Staple fiber process For staple fiber production, there are a number of ways to manufacture the fiber. The glass can be blown or blasted with heat or steam after exiting the formation machine. Usually these fibers are made into some sort of mat. The most common process used is the rotary process. Here, the glass enters a rotating spinner, and due to centrifugal force is thrown out horizontally. The air jets push it down vertically, and binder is applied. Then the mat is vacuumed to a screen and the binder is cured in the oven. Safety Glass fiber has increased in popularity since the discovery that asbestos causes cancer and its subsequent removal from most products. Following this increase in popularity, the safety of glass fiber has also been called into question. Research shows that the composition of glass fiber can cause similar toxicity as asbestos since both are silicate fibers. Studies on rats conducted during the 1970s found that fibrous glass of less than 3 μm in diameter and greater than 20 μm in length is a "potent carcinogen". Likewise, the International Agency for Research on Cancer found it "may reasonably be anticipated to be a carcinogen" in 1990. The American Conference of Governmental Industrial Hygienists, on the other hand, says that there is insufficient evidence, and that glass fiber is in group A4: "Not classifiable as a human carcinogen". The North American Insulation Manufacturers Association (NAIMA) claims that glass fiber is fundamentally different from asbestos, since it is man-made instead of naturally occurring. They claim that glass fiber "dissolves in the lungs", while asbestos remains in the body for life. Although both glass fiber and asbestos are made from silica filaments, NAIMA claims that asbestos is more dangerous because of its crystalline structure, which causes it to cleave into smaller, more dangerous pieces, citing the U.S. Department of Health and Human Services: A 1998 study using rats found that the biopersistence of synthetic fibers after one year was 0.04–13%, but 27% for amosite asbestos. Fibers that persisted longer were found to be more carcinogenic. Glass-reinforced plastic (fiberglass) Glass-reinforced plastic (GRP) is a composite material or fiber-reinforced plastic made of a plastic reinforced by fine glass fibers. The glass can be in the form of a chopped strand mat (CSM) or a woven fabric. 
As with many other composite materials (such as reinforced concrete), the two materials act together, each overcoming the deficits of the other. Whereas the plastic resins are strong in compressive loading and relatively weak in tensile strength, the glass fibers are very strong in tension but tend not to resist compression. By combining the two materials, GRP becomes a material that resists both compressive and tensile forces well. The two materials may be used uniformly or the glass may be specifically placed in those portions of the structure that will experience tensile loads. Uses Uses for regular glass fiber include mats and fabrics for thermal insulation, electrical insulation, sound insulation, high-strength fabrics or heat- and corrosion-resistant fabrics. It is also used to reinforce various materials, such as tent poles, pole vault poles, arrows, bows and crossbows, translucent roofing panels, automobile bodies, hockey sticks, surfboards, boat hulls, and paper honeycomb. It has been used for medical purposes in casts. Glass fiber is extensively used for making FRP tanks and vessels. Open-weave glass fiber grids are used to reinforce asphalt pavement. Non-woven glass fiber/polymer blend mats are used saturated with asphalt emulsion and overlaid with asphalt, producing a waterproof, crack-resistant membrane. Use of glass-fiber reinforced polymer rebar instead of steel rebar shows promise in areas where avoidance of steel corrosion is desired. Potential uses Glass fiber has recently seen use in biomedical applications in the assistance of joint replacement where the electric field orientation of short phosphate glass fibers can improve osteogenic qualities through the proliferation of osteoblasts and with improved surface chemistry. Another potential use is within electronic applications as sodium based glass fibers assist or replace lithium in lithium-ion batteries due to its improved electronic properties. Role of recycling in glass fiber manufacturing Manufacturers of glass-fiber insulation can use recycled glass. Recycled glass fiber contains up to 40% recycled glass. See also Basalt fiber Carbon fiber BS4994 Composite materials Fiberglass Fiberglass molding Filament tape Gelcoat Glass cloth Glass fiber reinforced concrete (GFRC or GRC) Glass microsphere Glass Poling Glass wool Optical fiber Pele's hair, naturally occurring glass fibre. Quartz fiber Notes and references External links CDC – Fibrous Glass – NIOSH Workplace Safety and Health Topic Fiberglass and health International Geosynthetics Society, information on geotextiles and geosynthetics in general. Composite materials Glass types Glass production Building insulation materials Synthetic fibers 1938 introductions
Glass fiber
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
3,560
[ "Glass engineering and science", "Synthetic fibers", "Synthetic materials", "Glass production", "Composite materials", "Materials", "Matter" ]
157,616
https://en.wikipedia.org/wiki/Composite%20material
A composite or composite material (also composition material) is a material which is produced from two or more constituent materials. These constituent materials have notably dissimilar chemical or physical properties and are merged to create a material with properties unlike the individual elements. Within the finished structure, the individual elements remain separate and distinct, distinguishing composites from mixtures and solid solutions. Composite materials with more than one distinct layer are called composite laminates. Typical engineered composite materials are made up of a binding agent forming the matrix and a filler material (particulates or fibres) giving substance, e.g.: Concrete, reinforced concrete and masonry with cement, lime or mortar (which is itself a composite material) as a binder Composite wood such as glulam and plywood with wood glue as a binder Reinforced plastics, such as fiberglass and fibre-reinforced polymer with resin or thermoplastics as a binder Ceramic matrix composites (composite ceramic and metal matrices) Metal matrix composites advanced composite materials, often first developed for spacecraft and aircraft applications. Composite materials can be less expensive, lighter, stronger or more durable than common materials. Some are inspired by biological structures found in plants and animals. Robotic materials are composites that include sensing, actuation, computation, and communication components. Composite materials are used for construction and technical structures such as boat hulls, swimming pool panels, racing car bodies, shower stalls, bathtubs, storage tanks, imitation granite, and cultured marble sinks and countertops. They are also being increasingly used in general automotive applications. History The earliest composite materials were made from straw and mud combined to form bricks for building construction. Ancient brick-making was documented by Egyptian tomb paintings. Wattle and daub might be the oldest composite materials, at over 6000 years old. Woody plants, both true wood from trees and such plants as palms and bamboo, yield natural composites that were used prehistorically by humankind and are still used widely in construction and scaffolding. Plywood, 3400 BC, by the Ancient Mesopotamians; gluing wood at different angles gives better properties than natural wood. Cartonnage, layers of linen or papyrus soaked in plaster dates to the First Intermediate Period of Egypt c. 2181–2055 BC and was used for death masks. Cob mud bricks, or mud walls (using mud (clay) with straw or gravel as a binder), have been used for thousands of years. Concrete was described by Vitruvius, who, writing around 25 BC in his Ten Books on Architecture, distinguished types of aggregate appropriate for the preparation of lime mortars. For structural mortars, he recommended pozzolana, which were volcanic sands from the sandlike beds of Pozzuoli, brownish-yellow-gray in colour near Naples and reddish-brown at Rome. Vitruvius specifies a ratio of 1 part lime to 3 parts pozzolana for cements used in buildings and a 1:2 ratio of lime to pulvis Puteolanus for underwater work, essentially the same ratio mixed today for concrete used at sea. Natural cement-stones, after burning, produced cements used in concretes from post-Roman times into the 20th century, with some properties superior to manufactured Portland cement. Papier-mâché, a composite of paper and glue, has been used for hundreds of years. 
The first artificial fibre reinforced plastic was a combination of fiber glass and bakelite, produced in 1935 by Al Simison and Arthur D Little at the Owens Corning Company. One of the most common and familiar composites is fibreglass, in which small glass fibres are embedded within a polymeric material (normally an epoxy or polyester). The glass fibre is relatively strong and stiff (but also brittle), whereas the polymer is ductile (but also weak and flexible). Thus the resulting fibreglass is relatively stiff, strong, flexible, and ductile. Composite bow Leather cannon, wooden cannon Examples Composite materials Concrete is the most common artificial composite material of all. About 7.5 billion cubic metres of concrete are made each year. Concrete typically consists of loose stones (construction aggregate) held with a matrix of cement. Concrete is an inexpensive material that resists large compressive forces but is susceptible to tensile loading. To give concrete the ability to resist being stretched, steel bars, which can resist high stretching (tensile) forces, are often added to concrete to form reinforced concrete. Fibre-reinforced polymers include carbon-fiber-reinforced polymers and glass-reinforced plastic. If classified by matrix then there are thermoplastic composites, short fibre thermoplastics, long fibre thermoplastics or long-fiber-reinforced thermoplastics. There are numerous thermoset composites, including paper composite panels. Many advanced thermoset polymer matrix systems incorporate aramid fibre and carbon fibre in an epoxy resin matrix. Shape-memory polymer composites are high-performance composites, formulated using fibre or fabric reinforcements and shape-memory polymer resin as the matrix. Since a shape-memory polymer resin is used as the matrix, these composites have the ability to be easily manipulated into various configurations when they are heated above their activation temperatures and will exhibit high strength and stiffness at lower temperatures. They can also be reheated and reshaped repeatedly without losing their material properties. These composites are ideal for applications such as lightweight, rigid, deployable structures; rapid manufacturing; and dynamic reinforcement. High strain composites are another type of high-performance composites that are designed to perform in a high deformation setting and are often used in deployable systems where structural flexing is advantageous. Although high strain composites exhibit many similarities to shape-memory polymers, their performance is generally dependent on the fibre layout as opposed to the resin content of the matrix. Composites can also use metal fibres reinforcing other metals, as in metal matrix composites (MMC) or ceramic matrix composites (CMC), which include bone (hydroxyapatite reinforced with collagen fibres), cermet (ceramic and metal), and concrete. Ceramic matrix composites are built primarily for fracture toughness, not for strength. Another class of composite materials involves woven fabric composites consisting of longitudinal and transverse laced yarns. Woven fabric composites are flexible, as they are in the form of a fabric. Organic matrix/ceramic aggregate composites include asphalt concrete, polymer concrete, mastic asphalt, mastic roller hybrid, dental composite, syntactic foam, and mother of pearl. Chobham armour is a special type of composite armour used in military applications. 
Additionally, thermoplastic composite materials can be formulated with specific metal powders resulting in materials with a density range from 2 g/cm3 to 11 g/cm3 (same density as lead). The most common name for this type of material is "high gravity compound" (HGC), although "lead replacement" is also used. These materials can be used in place of traditional materials such as aluminium, stainless steel, brass, bronze, copper, lead, and even tungsten in weighting, balancing (for example, modifying the centre of gravity of a tennis racquet), vibration damping, and radiation shielding applications. High density composites are an economically viable option when certain materials are deemed hazardous and are banned (such as lead) or when secondary operations costs (such as machining, finishing, or coating) are a factor. There have been several studies indicating that interleaving stiff and brittle epoxy-based carbon-fiber-reinforced polymer laminates with flexible thermoplastic laminates can help to make highly toughened composites that show improved impact resistance. Another interesting aspect of such interleaved composites is that they are able to have shape memory behaviour without needing any shape-memory polymers or shape-memory alloys e.g. balsa plies interleaved with hot glue, aluminium plies interleaved with acrylic polymers or PVC and carbon-fiber-reinforced polymer laminates interleaved with polystyrene. A sandwich-structured composite is a special class of composite material that is fabricated by attaching two thin but stiff skins to a lightweight but thick core. The core material is normally low strength material, but its higher thickness provides the sandwich composite with high bending stiffness with overall low density. Wood is a naturally occurring composite comprising cellulose fibres in a lignin and hemicellulose matrix. Engineered wood includes a wide variety of different products such as wood fibre board, plywood, oriented strand board, wood plastic composite (recycled wood fibre in polyethylene matrix), Pykrete (sawdust in ice matrix), plastic-impregnated or laminated paper or textiles, Arborite, Formica (plastic), and Micarta. Other engineered laminate composites, such as Mallite, use a central core of end grain balsa wood, bonded to surface skins of light alloy or GRP. These generate low-weight, high rigidity materials. Particulate composites have particle as filler material dispersed in matrix, which may be nonmetal, such as glass, epoxy. Automobile tire is an example of particulate composite. Advanced diamond-like carbon (DLC) coated polymer composites have been reported where the coating increases the surface hydrophobicity, hardness and wear resistance. Ferromagnetic composites, including those with a polymer matrix consisting, for example, of nanocrystalline filler of Fe-based powders and polymers matrix. Amorphous and nanocrystalline powders obtained, for example, from metallic glasses can be used. Their use makes it possible to obtain ferromagnetic nanocomposites with controlled magnetic properties. Products Fibre-reinforced composite materials have gained popularity (despite their generally high cost) in high-performance products that need to be lightweight, yet strong enough to take harsh loading conditions such as aerospace components (tails, wings, fuselages, propellers), boat and scull hulls, bicycle frames, and racing car bodies. Other uses include fishing rods, storage tanks, swimming pool panels, and baseball bats. 
The Boeing 787 and Airbus A350 structures including the wings and fuselage are composed largely of composites. Composite materials are also becoming more common in the realm of orthopedic surgery, and it is the most common hockey stick material. Carbon composite is a key material in today's launch vehicles and heat shields for the re-entry phase of spacecraft. It is widely used in solar panel substrates, antenna reflectors and yokes of spacecraft. It is also used in payload adapters, inter-stage structures and heat shields of launch vehicles. Furthermore, disk brake systems of airplanes and racing cars are using carbon/carbon material, and the composite material with carbon fibres and silicon carbide matrix has been introduced in luxury vehicles and sports cars. In 2006, a fibre-reinforced composite pool panel was introduced for in-ground swimming pools, residential as well as commercial, as a non-corrosive alternative to galvanized steel. In 2007, an all-composite military Humvee was introduced by TPI Composites Inc and Armor Holdings Inc, the first all-composite military vehicle. By using composites the vehicle is lighter, allowing higher payloads. In 2008, carbon fibre and DuPont Kevlar (five times stronger than steel) were combined with enhanced thermoset resins to make military transit cases by ECS Composites creating 30-percent lighter cases with high strength. Pipes and fittings for various purpose like transportation of potable water, fire-fighting, irrigation, seawater, desalinated water, chemical and industrial waste, and sewage are now manufactured in glass reinforced plastics. Composite materials used in tensile structures for facade application provides the advantage of being translucent. The woven base cloth combined with the appropriate coating allows better light transmission. This provides a very comfortable level of illumination compared to the full brightness of outside. The wings of wind turbines, in growing sizes in the order of 50 m length are fabricated in composites since several years. Two-lower-leg-amputees run on carbon-composite spring-like artificial feet as quick as non-amputee athletes. High-pressure gas cylinders typically about 7–9 litre volume x 300 bar pressure for firemen are nowadays constructed from carbon composite. Type-4-cylinders include metal only as boss that carries the thread to screw in the valve. On 5 September 2019, HMD Global unveiled the Nokia 6.2 and Nokia 7.2 which are claimed to be using polymer composite for the frames. Overview Composite materials are created from individual materials. These individual materials are known as constituent materials, and there are two main categories of it. One is the matrix (binder) and the other reinforcement. A portion of each kind is needed at least. The reinforcement receives support from the matrix as the matrix surrounds the reinforcement and maintains its relative positions. The properties of the matrix are improved as the reinforcements impart their exceptional physical and mechanical properties. The mechanical properties become unavailable from the individual constituent materials by synergism. At the same time, the designer of the product or structure receives options to choose an optimum combination from the variety of matrix and strengthening materials. To shape the engineered composites, it must be formed. The reinforcement is placed onto the mould surface or into the mould cavity. Before or after this, the matrix can be introduced to the reinforcement. 
The matrix undergoes a melding event which sets the part shape necessarily. This melding event can happen in several ways, depending upon the matrix nature, such as solidification from the melted state for a thermoplastic polymer matrix composite or chemical polymerization for a thermoset polymer matrix. According to the requirements of end-item design, various methods of moulding can be used. The natures of the chosen matrix and reinforcement are the key factors influencing the methodology. The gross quantity of material to be made is another main factor. To support high capital investments for rapid and automated manufacturing technology, vast quantities can be used. Cheaper capital investments but higher labour and tooling expenses at a correspondingly slower rate assists the small production quantities. Many commercially produced composites use a polymer matrix material often called a resin solution. There are many different polymers available depending upon the starting raw ingredients. There are several broad categories, each with numerous variations. The most common are known as polyester, vinyl ester, epoxy, phenolic, polyimide, polyamide, polypropylene, PEEK, and others. The reinforcement materials are often fibres but also commonly ground minerals. The various methods described below have been developed to reduce the resin content of the final product, or the fibre content is increased. As a rule of thumb, lay up results in a product containing 60% resin and 40% fibre, whereas vacuum infusion gives a final product with 40% resin and 60% fibre content. The strength of the product is greatly dependent on this ratio. Martin Hubbe and Lucian A Lucia consider wood to be a natural composite of cellulose fibres in a matrix of lignin. Cores in composites Several layup designs of composite also involve a co-curing or post-curing of the prepreg with many other media, such as foam or honeycomb. Generally, this is known as a sandwich structure. This is a more general layup for the production of cowlings, doors, radomes or non-structural parts. Open- and closed-cell-structured foams like polyvinyl chloride, polyurethane, polyethylene, or polystyrene foams, balsa wood, syntactic foams, and honeycombs are generally utilized core materials. Open- and closed-cell metal foam can also be utilized as core materials. Recently, 3D graphene structures ( also called graphene foam) have also been employed as core structures. A recent review by Khurram and Xu et al., have provided the summary of the state-of-the-art techniques for fabrication of the 3D structure of graphene, and the examples of the use of these foam like structures as a core for their respective polymer composites. Semi-crystalline polymers Although the two phases are chemically equivalent, semi-crystalline polymers can be described both quantitatively and qualitatively as composite materials. The crystalline portion has a higher elastic modulus and provides reinforcement for the less stiff, amorphous phase. Polymeric materials can range from 0% to 100% crystallinity aka volume fraction depending on molecular structure and thermal history. Different processing techniques can be employed to vary the percent crystallinity in these materials and thus the mechanical properties of these materials as described in the physical properties section. This effect is seen in a variety of places from industrial plastics like polyethylene shopping bags to spiders which can produce silks with different mechanical properties. 
In many cases these materials act like particle composites with randomly dispersed crystals known as spherulites. However they can also be engineered to be anisotropic and act more like fiber reinforced composites. In the case of spider silk, the properties of the material can even be dependent on the size of the crystals, independent of the volume fraction. Ironically, single component polymeric materials are some of the most easily tunable composite materials known. Methods of fabrication Normally, the fabrication of composite includes wetting, mixing or saturating the reinforcement with the matrix. The matrix is then induced to bind together (with heat or a chemical reaction) into a rigid structure. Usually, the operation is done in an open or closed forming mould. However, the order and ways of introducing the constituents alters considerably. Composites fabrication is achieved by a wide variety of methods, including advanced fibre placement (automated fibre placement), fibreglass spray lay-up process, filament winding, lanxide process, tailored fibre placement, tufting, and z-pinning. Overview of mould The reinforcing and matrix materials are merged, compacted, and cured (processed) within a mould to undergo a melding event. The part shape is fundamentally set after the melding event. However, under particular process conditions, it can deform. The melding event for a thermoset polymer matrix material is a curing reaction that is caused by the possibility of extra heat or chemical reactivity such as an organic peroxide. The melding event for a thermoplastic polymeric matrix material is a solidification from the melted state. The melding event for a metal matrix material such as titanium foil is a fusing at high pressure and a temperature near the melting point. It is suitable for many moulding methods to refer to one mould piece as a "lower" mould and another mould piece as an "upper" mould. Lower and upper does not refer to the mould's configuration in space, but the different faces of the moulded panel. There is always a lower mould, and sometimes an upper mould in this convention. Part construction commences by applying materials to the lower mould. Lower mould and upper mould are more generalized descriptors than more common and specific terms such as male side, female side, a-side, b-side, tool side, bowl, hat, mandrel, etc. Continuous manufacturing utilizes a different nomenclature. Usually, the moulded product is referred to as a panel. It can be referred to as casting for certain geometries and material combinations. It can be referred to as a profile for certain continuous processes. Some of the processes are autoclave moulding, vacuum bag moulding, pressure bag moulding, resin transfer moulding, and light resin transfer moulding. Other fabrication methods Other types of fabrication include casting, centrifugal casting, braiding (onto a former), continuous casting, filament winding, press moulding, transfer moulding, pultrusion moulding, and slip forming. There are also forming capabilities including CNC filament winding, vacuum infusion, wet lay-up, compression moulding, and thermoplastic moulding, to name a few. The practice of curing ovens and paint booths is also required for some projects. Finishing methods The composite parts finishing is also crucial in the final design. Many of these finishes will involve rain-erosion coatings or polyurethane coatings. Tooling The mould and mould inserts are referred to as "tooling". 
The mould/tooling can be built from different materials. Tooling materials include aluminium, carbon fibre, invar, nickel, reinforced silicone rubber and steel. The tooling material selection is normally based on, but not limited to, the coefficient of thermal expansion, expected number of cycles, end item tolerance, desired or expected surface condition, cure method, glass transition temperature of the material being moulded, moulding method, matrix, cost, and other various considerations. Physical properties Usually, the composite's physical properties are not isotropic (independent of the direction of applied force) but anisotropic (dependent on the direction of the applied force or load). For instance, the composite panel's stiffness will usually depend upon the orientation of the applied forces and/or moments. The composite's strength is bounded by two limiting loading conditions. Isostrain rule of mixtures If both the fibres and matrix are aligned parallel to the loading direction, the deformation of both phases will be the same (assuming there is no delamination at the fibre-matrix interface). This isostrain condition provides the upper bound for composite strength, and is determined by the rule of mixtures E_c = Σ_i V_i·E_i, where E_c is the effective composite Young's modulus, and V_i and E_i are the volume fraction and Young's moduli, respectively, of the composite phases. For example, for a composite material made up of α and β phases under isostrain, the Young's modulus would be as follows: E_c = V_α·E_α + V_β·E_β, where V_α and V_β are the respective volume fractions of each phase. This can be derived by considering that in the isostrain case ε_c = ε_α = ε_β. Assuming that the composite has a uniform cross section, the stress on the composite is a weighted average between the two phases, σ_c = V_α·σ_α + V_β·σ_β. The stresses in the individual phases are given by Hooke's Law, σ_α = E_α·ε_α and σ_β = E_β·ε_β. Combining these equations gives that the overall stress in the composite is σ_c = V_α·E_α·ε_α + V_β·E_β·ε_β = (V_α·E_α + V_β·E_β)·ε_c. Then it can be shown that E_c = V_α·E_α + V_β·E_β. Isostress rule of mixtures The lower bound is dictated by the isostress condition, in which the fibres and matrix are oriented perpendicularly to the loading direction, so that σ_c = σ_α = σ_β, and now the strains become a weighted average, ε_c = V_α·ε_α + V_β·ε_β. Rewriting Hooke's Law for the individual phases gives ε_α = σ_c/E_α and ε_β = σ_c/E_β. This leads to ε_c = V_α·σ_c/E_α + V_β·σ_c/E_β. From the definition of Hooke's Law ε_c = σ_c/E_c, and, in general, 1/E_c = Σ_i V_i/E_i. Following the example above, if one had a composite material made up of α and β phases under isostress conditions, the composite Young's modulus would be: E_c = (V_α/E_α + V_β/E_β)^(-1) = E_α·E_β/(V_α·E_β + V_β·E_α). The isostrain condition implies that under an applied load, both phases experience the same strain but will feel different stress. Comparatively, under isostress conditions both phases will feel the same stress but the strains will differ between each phase. A generalized equation for any loading condition between isostrain and isostress can be written as X_c^n = V_m·X_m^n + V_r·X_r^n, where X is a material property such as modulus or stress, c, m, and r stand for the properties of the composite, matrix, and reinforcement materials respectively, and n is a value between 1 and −1. The above equation can be further generalized beyond a two-phase composite to an m-component system as X_c^n = Σ_i V_i·X_i^n, with the sum running over all m components. Though composite stiffness is maximized when fibres are aligned with the loading direction, so is the possibility of fibre tensile fracture, assuming the tensile strength of the fibres exceeds that of the matrix. When a fibre has some angle of misorientation θ, several fracture modes are possible. 
For small values of θ the stress required to initiate fracture is increased by a factor of (cos θ)⁻², because the force resolved along the fibre is reduced (F cos θ) while the inclined cross-sectional area over which it acts is increased (A/cos θ). This leads to a composite tensile strength of σparallel/cos²θ, where σparallel is the tensile strength of the composite with fibres aligned parallel to the applied force. Intermediate angles of misorientation θ lead to matrix shear failure. Again the cross-sectional area is modified, but since shear stress is now the driving force for failure, the area of the matrix parallel to the fibres is of interest; it increases by a factor of 1/sin θ. Similarly, the force resolved parallel to this area again decreases (F cos θ), leading to a total tensile strength of τmy/(sin θ cos θ), where τmy is the matrix shear strength. Finally, for large values of θ (near π/2) transverse matrix failure is the most likely to occur, since the fibres no longer carry the majority of the load. Still, the tensile strength will be greater than for the purely perpendicular orientation, since the force resolved perpendicular to the fibres decreases by a factor of sin θ while the area over which it acts increases by a factor of 1/sin θ, producing a composite tensile strength of σperp/sin²θ, where σperp is the tensile strength of the composite with fibres aligned perpendicular to the applied force. The majority of commercial composites are formed with random dispersion and orientation of the strengthening fibres, in which case the composite Young's modulus will fall between the isostrain and isostress bounds. However, in applications where the strength-to-weight ratio is engineered to be as high as possible (such as in the aerospace industry), fibre alignment may be tightly controlled. Panel stiffness is also dependent on the design of the panel: for instance, the fibre reinforcement and matrix used, the method of panel build, thermoset versus thermoplastic, and the type of weave. In contrast to composites, isotropic materials (for example, aluminium or steel), in standard wrought forms, typically possess the same stiffness regardless of the directional orientation of the applied forces and/or moments. The relationship between forces/moments and strains/curvatures for an isotropic material can be described with the following material properties: Young's modulus, the shear modulus, and the Poisson's ratio, in relatively simple mathematical relationships. For an anisotropic material, the mathematics of a second-order tensor and up to 21 material property constants are needed. For the special case of orthogonal isotropy, there are three distinct material property constants for each of Young's modulus, shear modulus and Poisson's ratio, a total of 9 constants to express the relationship between forces/moments and strains/curvatures. Techniques that take advantage of the materials' anisotropic properties include mortise and tenon joints (in natural composites such as wood) and pi joints in synthetic composites. Mechanical properties of composites Particle reinforcement In general, particle reinforcement strengthens composites less than fiber reinforcement does. It is used to enhance the stiffness of the composites while increasing the strength and the toughness. Because of their mechanical properties, particle-reinforced composites are used in applications in which wear resistance is required. For example, the hardness of cement can be increased drastically by reinforcing it with gravel particles.
Particle reinforcement is a highly advantageous method of tuning the mechanical properties of materials, since it is very easy to implement while being low cost. The elastic modulus of particle-reinforced composites can be expressed as a weighted combination, where E is the elastic modulus and V is the volume fraction; the subscripts c, p and m indicate composite, particle and matrix, respectively, and the remaining constant in the expression can be found empirically. Similarly, the tensile strength of particle-reinforced composites can be expressed in an analogous form, where T.S. is the tensile strength, and the constant (not equal to the one above) can be found empirically. Continuous fiber reinforcement In general, continuous fiber reinforcement is implemented by incorporating a fiber, as the strong phase, into a weaker phase, the matrix. The reason for the popularity of fiber reinforcement is that materials with extraordinary strength can be obtained in fiber form. Non-metallic fibers usually show a very high strength-to-density ratio compared to metal fibers because of the covalent nature of their bonds. The best-known example is carbon fiber, which has many applications extending from sports gear and protective equipment to the space industry. The stress on the composite can be expressed in terms of the volume fractions of the fiber and the matrix, where σ is the stress and V is the volume fraction; the subscripts c, f and m indicate composite, fiber and matrix, respectively. Although the stress–strain behavior of fiber composites can only be determined by testing, there is an expected trend: three stages of the stress–strain curve. The first stage is the region of the stress–strain curve where both the fiber and the matrix are elastically deformed. This linearly elastic region can be expressed in the following form, where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction; the subscripts c, f, and m indicate composite, fiber, and matrix, respectively. After passing the elastic region for both the fiber and the matrix, the second region of the stress–strain curve can be observed. In the second region, the fiber is still elastically deformed while the matrix is plastically deformed, since the matrix is the weak phase. The instantaneous modulus can be determined using the slope of the stress–strain curve in the second region. The relationship between stress and strain can be expressed as before, where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction, and the subscripts c, f, and m indicate composite, fiber, and matrix, respectively. To find the modulus in the second region, the derivative of this equation can be used, since the slope of the curve is equal to the modulus. In most cases the second term can be neglected, since it is much smaller than the first one. In reality, the derivative of stress with respect to strain does not always return the modulus, because of the binding interaction between the fiber and matrix. The strength of the interaction between these two phases can result in changes in the mechanical properties of the composite. The compatibility of the fiber and matrix is a measure of internal stress. Covalently bonded high-strength fibers (e.g. carbon fibers) experience mostly elastic deformation before fracture, since plastic deformation, which occurs by dislocation motion, is limited in such materials. Metallic fibers, in contrast, have more scope to deform plastically, so their composites exhibit a third stage in which both the fiber and the matrix deform plastically.
Metallic fibers also have many applications at cryogenic temperatures, which is one of the advantages of composites with metal fibers over non-metallic ones. The stress in this region of the stress–strain curve can be expressed in the same weighted form, where σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction; the subscripts c, f, and m indicate composite, fiber, and matrix, respectively, and the two remaining quantities are the fiber and matrix flow stresses. Just after the third region the composite exhibits necking. The necking strain of the composite falls between the necking strains of the fiber and the matrix, just like the other mechanical properties of composites. The necking strain of the weak phase is delayed by the strong phase, and the amount of the delay depends upon the volume fraction of the strong phase. Thus, the tensile strength of the composite can be expressed in terms of the volume fraction, where T.S. is the tensile strength, σ is the stress, ε is the strain, E is the elastic modulus, and V is the volume fraction; the subscripts c, f, and m indicate composite, fiber, and matrix, respectively. The composite tensile strength is given by one expression for fiber volume fractions less than or equal to an (arbitrary) critical value, and by another for volume fractions greater than or equal to it; the critical value of the volume fraction marks the crossover between these two expressions. Evidently, the composite tensile strength can be higher than that of the matrix only above a certain fiber content, and thus a minimum volume fraction of the fiber can be defined. Although this minimum value is very low in practice, it is very important to know, since the reason for incorporating continuous fibers is to improve the mechanical properties of the composite, and this value of volume fraction is the threshold of that improvement. The effect of fiber orientation Aligned fibers A change in the angle between the applied stress and the fiber orientation will affect the mechanical properties of fiber-reinforced composites, especially the tensile strength. This angle, θ, can be used to predict the dominant tensile fracture mechanism. At small angles the dominant fracture mechanism is the same as with load–fiber alignment: tensile fracture. The force resolved along the length of the fibers is reduced by a factor of cos θ by the rotation, while the resolved area on which the fiber experiences the force is increased by a factor of 1/cos θ. Taking these together, the effective tensile strength is σparallel/cos²θ, where σparallel is the aligned tensile strength. At moderate angles the material experiences shear failure. The effective force along the fiber direction is again reduced relative to the aligned direction, and the resolved area on which the shear stress acts is the area of the matrix parallel to the fibers. The resulting tensile strength, τmy/(sin θ cos θ), depends on the shear strength of the matrix, τmy. At extreme angles, approaching 90°, the dominant mode of failure is tensile fracture of the matrix in the perpendicular direction. As in the isostress case of layered composite materials, the strength in this direction is lower than in the aligned direction. The effective areas and forces act perpendicular to the aligned direction, and both scale with sin θ, so the resolved tensile strength, σperp/sin²θ, is proportional to the transverse strength σperp. The critical angles at which the dominant fracture mechanism changes can be calculated by equating these expressions: one critical angle separates longitudinal fracture from shear failure, and the other separates shear failure from transverse fracture.
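The three regimes just described can be combined by taking, at each misorientation angle, the mechanism that requires the lowest applied stress; the angle at which the minimum switches from one expression to another is the corresponding critical angle. The sketch below is a minimal illustration with assumed strength values (they are not from the article).

```python
import math

def off_axis_strength(theta_deg, sigma_parallel, tau_matrix, sigma_perp):
    """Applied stress needed for each failure mode at misorientation theta;
    the operative strength is the smallest of the three."""
    t = math.radians(theta_deg)
    modes = {
        "longitudinal fibre fracture": sigma_parallel / math.cos(t) ** 2,
        "matrix shear failure": tau_matrix / (math.sin(t) * math.cos(t)),
        "transverse matrix failure": sigma_perp / math.sin(t) ** 2,
    }
    mode = min(modes, key=modes.get)
    return mode, modes[mode]

# Assumed illustrative strengths (MPa) for a unidirectional lamina.
sigma_par, tau_my, sigma_perp = 1200.0, 60.0, 40.0

for angle in (2, 10, 30, 60, 85):
    mode, strength = off_axis_strength(angle, sigma_par, tau_my, sigma_perp)
    print(f"{angle:2d} deg: {strength:7.1f} MPa  ({mode})")
```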
Because it ignores length effects, this model is most accurate for continuous fibers and does not effectively capture the strength–orientation relationship of short-fiber-reinforced composites. Furthermore, most realistic systems do not experience the local maxima predicted at the critical angles. The Tsai–Hill criterion provides a more complete description of fiber composite tensile strength as a function of orientation angle by coupling the contributing yield stresses: the longitudinal, shear and transverse strengths. Randomly oriented fibers Anisotropy in the tensile strength of fiber-reinforced composites can be removed by randomly orienting the fiber directions within the material. This sacrifices the ultimate strength in the aligned direction for an overall, isotropically strengthened material, where K is an empirically determined reinforcement factor, similar to the constant in the particle reinforcement equation. For fibers with randomly distributed orientations in a plane, K is about 3/8, and for a random distribution in 3D, about 1/5. Stiffness and Compliance Elasticity For real applications, most composites are anisotropic or orthotropic materials. The three-dimensional stress tensor is required for stress and strain analysis, and the stiffness and compliance can be written in corresponding matrix form. In order to simplify the 3D stress state, the plane stress assumption is applied: the out-of-plane stress and out-of-plane strain are taken to be insignificant or zero, and the stiffness matrix and compliance matrix can be reduced accordingly. For a fiber-reinforced composite, the fiber orientation in the material affects the anisotropic properties of the structure. In characterization techniques such as tensile testing, the material properties are measured in the sample (1-2) coordinate system; the tensors above express the stress–strain relationship in the (1-2) coordinate system, while the known material properties are given in the principal coordinate system (x-y) of the material. Transforming the tensors between the two coordinate systems, using the transformation matrix for a rotation through the given angle, helps identify the material properties of the tested sample. Types of fibers and mechanical properties The most common types of fibers used in industry are glass fibers, carbon fibers, and Kevlar, due to their ease of production and availability. Their mechanical properties are very important to know; a table of their mechanical properties is therefore given below to compare them with S97 steel. The angle of fiber orientation is very important because of the anisotropy of fiber composites (see the section "Physical properties" for a more detailed explanation). The mechanical properties of the composites can be tested using standard mechanical testing methods by positioning the samples at various angles (the standard angles are 0°, 45°, and 90°) with respect to the orientation of fibers within the composites. In general, 0° axial alignment makes composites resistant to longitudinal bending and axial tension/compression, 90° hoop alignment is used to obtain resistance to internal/external pressure, and ±45° is the ideal choice to obtain resistance against pure torsion. Mechanical properties of fiber composite materials Carbon fiber & fiberglass composites vs. aluminum alloy and steel Although the strength and stiffness of steel and aluminum alloys are comparable to those of fiber composites, the specific strength and stiffness of composites (i.e. in relation to their weight) are significantly higher.
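To connect the lamina constants known in the fibre axes with what a tilted test coupon actually measures, the standard plane-stress transformed-compliance relation for a unidirectional ply can be used. The sketch below is illustrative only: the formula is the usual off-axis Young's modulus expression, and the carbon/epoxy ply constants are assumed representative values, not data from the article or its table.

```python
import math

def off_axis_modulus(theta_deg, e1, e2, g12, nu12):
    """Young's modulus of a unidirectional ply loaded at an angle theta to the
    fibres, from the plane-stress transformed compliance:
    1/E_x = cos^4/E1 + sin^4/E2 + (1/G12 - 2*nu12/E1) * sin^2*cos^2."""
    c, s = math.cos(math.radians(theta_deg)), math.sin(math.radians(theta_deg))
    inv_ex = (c ** 4 / e1
              + s ** 4 / e2
              + (1.0 / g12 - 2.0 * nu12 / e1) * (s * c) ** 2)
    return 1.0 / inv_ex

# Assumed representative carbon/epoxy ply constants (moduli in GPa).
E1, E2, G12, NU12 = 135.0, 10.0, 5.0, 0.30

for angle in (0, 15, 30, 45, 60, 90):
    print(f"{angle:2d} deg: E_x = {off_axis_modulus(angle, E1, E2, G12, NU12):6.1f} GPa")
```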
Failure Shock, impact at varying speeds, or repeated cyclic stresses can cause the laminate to separate at the interface between two layers, a condition known as delamination. Individual fibres can separate from the matrix, for example by fibre pull-out. Composites can fail on the macroscopic or microscopic scale. Compression failures can happen at the macro scale or at each individual reinforcing fibre, through compression buckling. Tension failures can be net-section failures of the part, or degradation of the composite at a microscopic scale, where one or more of the layers in the composite fail in tension of the matrix or through failure of the bond between the matrix and fibres. Some composites are brittle and possess little reserve strength beyond the initial onset of failure, while others may undergo large deformations and have reserve energy-absorbing capacity past the onset of damage. The variety of fibres and matrices that are available, and the mixtures that can be made with blends, leave a very broad range of properties that can be designed into a composite structure. The most famous failure of a brittle ceramic matrix composite occurred when the carbon-carbon composite panel on the leading edge of the wing of the Space Shuttle Columbia fractured when impacted during take-off. This led to the catastrophic break-up of the vehicle when it re-entered the Earth's atmosphere on 1 February 2003. Composites have relatively poor bearing strength compared to metals. Testing Composites are tested before and after construction to assist in predicting and preventing failures. Pre-construction testing may use finite element analysis (FEA) for ply-by-ply analysis of curved surfaces and for predicting wrinkling, crimping and dimpling of composites. Materials may be tested during manufacturing and after construction by various non-destructive methods including ultrasonic, thermography, shearography and X-ray radiography, and laser bond inspection for NDT of relative bond strength integrity in a localized area. See also 3D composites Aluminium composite panel American Composites Manufacturers Association Chemical vapour infiltration Composite laminate Discontinuous aligned composite Epoxy granite Hybrid material Lay-up process Nanocomposite Pykrete Rule of mixtures Scaled Composites Smart material Smart Materials and Structures Void (composites) References Further reading External links Composites Design and Manufacturing HUB Distance learning course in polymers and composites OptiDAT composite material database
Composite material
[ "Physics" ]
8,181
[ "Materials", "Composite materials", "Matter" ]
157,620
https://en.wikipedia.org/wiki/Electrochemical%20potential
In electrochemistry, the electrochemical potential (ECP), , is a thermodynamic measure of chemical potential that does not omit the energy contribution of electrostatics. Electrochemical potential is expressed in the unit of J/mol. Introduction Each chemical species (for example, "water molecules", "sodium ions", "electrons", etc.) has an electrochemical potential (a quantity with units of energy) at any given point in space, which represents how easy or difficult it is to add more of that species to that location. If possible, a species will move from areas with higher electrochemical potential to areas with lower electrochemical potential; in equilibrium, the electrochemical potential will be constant everywhere for each species (it may have a different value for different species). For example, if a glass of water has sodium ions (Na+) dissolved uniformly in it, and an electric field is applied across the water, then the sodium ions will tend to get pulled by the electric field towards one side. We say the ions have electric potential energy, and are moving to lower their potential energy. Likewise, if a glass of water has a lot of dissolved sugar on one side and none on the other side, each sugar molecule will randomly diffuse around the water, until there is equal concentration of sugar everywhere. We say that the sugar molecules have a "chemical potential", which is higher in the high-concentration areas, and the molecules move to lower their chemical potential. These two examples show that an electrical potential and a chemical potential can both give the same result: A redistribution of the chemical species. Therefore, it makes sense to combine them into a single "potential", the electrochemical potential, which can directly give the net redistribution taking both into account. It is (in principle) easy to measure whether or not two regions (for example, two glasses of water) have the same electrochemical potential for a certain chemical species (for example, a solute molecule): Allow the species to freely move back and forth between the two regions (for example, connect them with a semi-permeable membrane that lets only that species through). If the chemical potential is the same in the two regions, the species will occasionally move back and forth between the two regions, but on average there is just as much movement in one direction as the other, and there is zero net migration (this is called "diffusive equilibrium"). If the chemical potentials of the two regions are different, more molecules will move to the lower chemical potential than the other direction. Moreover, when there is not diffusive equilibrium, i.e., when there is a tendency for molecules to diffuse from one region to another, then there is a certain free energy released by each net-diffusing molecule. This energy, which can sometimes be harnessed (a simple example is a concentration cell), and the free-energy per mole is exactly equal to the electrochemical potential difference between the two regions. Conflicting terminologies It is common in electrochemistry and solid-state physics to discuss both the chemical potential and the electrochemical potential of the electrons. However, in the two fields, the definitions of these two terms are sometimes swapped. 
In electrochemistry, the electrochemical potential of electrons (or any other species) is the total potential, including both the (internal, nonelectrical) chemical potential and the electric potential, and is by definition constant across a device in equilibrium, whereas the chemical potential of electrons is equal to the electrochemical potential minus the local electric potential energy per electron. In solid-state physics, the definitions are normally compatible with this, but occasionally the definitions are swapped. This article uses the electrochemistry definitions. Definition and usage In generic terms, electrochemical potential is the mechanical work done in bringing 1 mole of an ion from a standard state to a specified concentration and electrical potential. According to the IUPAC definition, it is the partial molar Gibbs energy of the substance at the specified electric potential, where the substance is in a specified phase. Electrochemical potential can be expressed as where: i is the electrochemical potential of species i, in J/mol, μi is the chemical potential of the species i, in J/mol, zi is the valency (charge) of the ion i, a dimensionless integer, F is the Faraday constant, in C/mol, Φ is the local electrostatic potential in V. In the special case of an uncharged atom, zi = 0, and so i = μi. Electrochemical potential is important in biological processes that involve molecular diffusion across membranes, in electroanalytical chemistry, and industrial applications such as batteries and fuel cells. It represents one of the many interchangeable forms of potential energy through which energy may be conserved. In cell membranes, the electrochemical potential is the sum of the chemical potential and the membrane potential. Incorrect usage The term electrochemical potential is sometimes used to mean an electrode potential (either of a corroding electrode, an electrode with a non-zero net reaction or current, or an electrode at equilibrium). In some contexts, the electrode potential of corroding metals is called "electrochemical corrosion potential", which is often abbreviated as ECP, and the word "corrosion" is sometimes omitted. This usage can lead to confusion. The two quantities have different meanings and different dimensions: the dimension of electrochemical potential is energy per mole while that of electrode potential is voltage (energy per charge). See also Concentration cell Electrochemical gradient Fermi level Membrane potential Nernst equation Poisson–Boltzmann equation Reduction potential Standard electrode potential References External links Electrochemical potential – lecture notes from University of Illinois at Urbana-Champaign Electrochemistry Thermodynamics Electrochemical potentials
Electrochemical potential
[ "Physics", "Chemistry", "Mathematics" ]
1,193
[ "Dynamical systems", "Electrochemistry", "Thermodynamics", "Electrochemical potentials" ]
157,658
https://en.wikipedia.org/wiki/Specific%20replant%20disease
Specific replant disease (also known as sick soil syndrome) is a malady that manifests itself when susceptible plants such as apples, pears, plums, cherries and roses are placed into soil previously occupied by a related species. The exact causes are not known, but in the first year the new plants will grow poorly. Root systems are weak and may become blackened, and plants may fail to establish properly. One theory is that replant disease is due to a whole menagerie of tree pathogens: fungi, bacteria, nematodes, viruses and other organisms. These parasites target the living tissues of the mature tree, hastening senility and death, and survive in the soil and decaying roots after the tree has died. Putting a young, traumatized tree with an immature root system into this broth of pathogens can be too much for it to cope with. Any new root growth is rapidly and heavily colonized, so that shoot growth is virtually zero. This is especially true if the tree is on a dwarfing rootstock, which by its nature will be relatively inefficient. As a rule, replant disease persists for around fifteen years in the soil, although this varies with local conditions. Pathogens survive in dead wood and organic matter until their refuge rots away and exposes them to predation; how long this takes will also depend on whether the original orchard was planted with dwarf or standard trees. Standards have more vigorous, and therefore larger, roots, which are likely to take longer to degrade. It is good organic rotation practice not to follow "like with like", and this rule applies to long-lived trees as much as to annual vegetables. In the case of temperate fruit trees, the "pomes and stones" rule for rotation should be observed: do not follow a "pome" fruit (one with an apple-type core, such as apples, pears, medlar and quince) with a tree from the same group. A "stone" fruit (i.e., one with a plum-type stone, such as plum, cherry, peach, apricot or almond) should be all right, and vice versa. However, rotation is not always easy: in a well-planned old orchard the site it occupies may well be the best available, and starting another orchard elsewhere may not be practical. In this case, if replanting is unavoidable, a large hole should be dug out and the soil removed and replaced with clean soil from a site where susceptible plants have not been grown. Using trees on vigorous rootstocks, which will have a better chance of competing with the pathogens, or plants grown in large containers with a large root ball, may also improve the chances of resisting replant disease. The extra time to cropping may be offset if new trees are planted a few years in advance of the old trees finally failing. Furthermore, if the old orchard was grubbed, i.e. the trees were healthy when removed, it is unlikely that replant disease will be a problem, as pathogen levels may never have been high. The malady is worse where trees have died in situ, since pathogens are likely to have contributed to the death and will therefore be present at a higher level in the soil. Soil fumigation is another common method employed to control replant disease in both apple and cherry trees. Throughout the 1990s, fumigants like methyl bromide (bromomethane) were commonly used in this way to control and treat the disease, though this was later phased out in the 2000s in favour of more modern alternatives such as chloropicrin, which some studies have shown to be an effective method for resolving SARD in apple tree monoculture in Europe.
References Plant pathogens and diseases Apple tree diseases Pear tree diseases Stone fruit tree diseases Rose diseases
Specific replant disease
[ "Biology" ]
778
[ "Plant pathogens and diseases", "Plants" ]
157,700
https://en.wikipedia.org/wiki/Moment%20of%20inertia
The moment of inertia, otherwise known as the mass moment of inertia, angular/rotational mass, second moment of mass, or most accurately, rotational inertia, of a rigid body is defined relative to a rotational axis. It is the ratio between the torque applied and the resulting angular acceleration about that axis. It plays the same role in rotational motion as mass does in linear motion. A body's moment of inertia about a particular axis depends both on the mass and its distribution relative to the axis, increasing with mass & distance from the axis. It is an extensive (additive) property: for a point mass the moment of inertia is simply the mass times the square of the perpendicular distance to the axis of rotation. The moment of inertia of a rigid composite system is the sum of the moments of inertia of its component subsystems (all taken about the same axis). Its simplest definition is the second moment of mass with respect to distance from an axis. For bodies constrained to rotate in a plane, only their moment of inertia about an axis perpendicular to the plane, a scalar value, matters. For bodies free to rotate in three dimensions, their moments can be described by a symmetric 3-by-3 matrix, with a set of mutually perpendicular principal axes for which this matrix is diagonal and torques around the axes act independently of each other. In mechanical engineering, simply "inertia" is often used to refer to "inertial mass" or "moment of inertia". Introduction When a body is free to rotate around an axis, torque must be applied to change its angular momentum. The amount of torque needed to cause any given angular acceleration (the rate of change in angular velocity) is proportional to the moment of inertia of the body. Moments of inertia may be expressed in units of kilogram metre squared (kg·m2) in SI units and pound-foot-second squared (lbf·ft·s2) in imperial or US units. The moment of inertia plays the role in rotational kinetics that mass (inertia) plays in linear kinetics—both characterize the resistance of a body to changes in its motion. The moment of inertia depends on how mass is distributed around an axis of rotation, and will vary depending on the chosen axis. For a point-like mass, the moment of inertia about some axis is given by , where is the distance of the point from the axis, and is the mass. For an extended rigid body, the moment of inertia is just the sum of all the small pieces of mass multiplied by the square of their distances from the axis in rotation. For an extended body of a regular shape and uniform density, this summation sometimes produces a simple expression that depends on the dimensions, shape and total mass of the object. In 1673, Christiaan Huygens introduced this parameter in his study of the oscillation of a body hanging from a pivot, known as a compound pendulum. The term moment of inertia ("momentum inertiae" in Latin) was introduced by Leonhard Euler in his book Theoria motus corporum solidorum seu rigidorum in 1765, and it is incorporated into Euler's second law. The natural frequency of oscillation of a compound pendulum is obtained from the ratio of the torque imposed by gravity on the mass of the pendulum to the resistance to acceleration defined by the moment of inertia. Comparison of this natural frequency to that of a simple pendulum consisting of a single point of mass provides a mathematical formulation for moment of inertia of an extended body. 
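As a concrete illustration of that idea, the sketch below backs out a moment of inertia from a timed small-angle oscillation. All numbers are assumed for illustration, and the relation T = 2π√(I/(mgd)) is the standard small-angle compound-pendulum result discussed later in the article.

```python
import math

def inertia_from_period(period_s, mass_kg, pivot_to_com_m, g=9.81):
    """Moment of inertia about the pivot from the small-angle period:
    T = 2*pi*sqrt(I / (m*g*d))  =>  I = m*g*d*T^2 / (4*pi^2)."""
    return mass_kg * g * pivot_to_com_m * period_s ** 2 / (4.0 * math.pi ** 2)

# Assumed illustrative measurement of a small machine part swung from a pivot.
m, d, T = 1.2, 0.18, 0.95            # kg, metres, seconds (made-up values)
I_pivot = inertia_from_period(T, m, d)
I_com = I_pivot - m * d ** 2         # shift to the centre of mass (parallel axis theorem)
print(f"I about the pivot:          {I_pivot:.4f} kg m^2")
print(f"I about the centre of mass: {I_com:.4f} kg m^2")
```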
The moment of inertia also appears in momentum, kinetic energy, and in Newton's laws of motion for a rigid body as a physical parameter that combines its shape and mass. There is an interesting difference in the way moment of inertia appears in planar and spatial movement. Planar movement has a single scalar that defines the moment of inertia, while for spatial movement the same calculations yield a 3 × 3 matrix of moments of inertia, called the inertia matrix or inertia tensor. The moment of inertia of a rotating flywheel is used in a machine to resist variations in applied torque to smooth its rotational output. The moment of inertia of an airplane about its longitudinal, horizontal and vertical axes determine how steering forces on the control surfaces of its wings, elevators and rudder(s) affect the plane's motions in roll, pitch and yaw. Definition The moment of inertia is defined as the product of mass of section and the square of the distance between the reference axis and the centroid of the section. The moment of inertia is also defined as the ratio of the net angular momentum of a system to its angular velocity around a principal axis, that is If the angular momentum of a system is constant, then as the moment of inertia gets smaller, the angular velocity must increase. This occurs when spinning figure skaters pull in their outstretched arms or divers curl their bodies into a tuck position during a dive, to spin faster. If the shape of the body does not change, then its moment of inertia appears in Newton's law of motion as the ratio of an applied torque on a body to the angular acceleration around a principal axis, that is For a simple pendulum, this definition yields a formula for the moment of inertia in terms of the mass of the pendulum and its distance from the pivot point as, Thus, the moment of inertia of the pendulum depends on both the mass of a body and its geometry, or shape, as defined by the distance to the axis of rotation. This simple formula generalizes to define moment of inertia for an arbitrarily shaped body as the sum of all the elemental point masses each multiplied by the square of its perpendicular distance to an axis . An arbitrary object's moment of inertia thus depends on the spatial distribution of its mass. In general, given an object of mass , an effective radius can be defined, dependent on a particular axis of rotation, with such a value that its moment of inertia around the axis is where is known as the radius of gyration around the axis. Examples Simple pendulum Mathematically, the moment of inertia of a simple pendulum is the ratio of the torque due to gravity about the pivot of a pendulum to its angular acceleration about that pivot point. For a simple pendulum, this is found to be the product of the mass of the particle with the square of its distance to the pivot, that is This can be shown as follows: The force of gravity on the mass of a simple pendulum generates a torque around the axis perpendicular to the plane of the pendulum movement. Here is the distance vector from the torque axis to the pendulum center of mass, and is the net force on the mass. Associated with this torque is an angular acceleration, , of the string and mass around this axis. Since the mass is constrained to a circle the tangential acceleration of the mass is . Since the torque equation becomes: where is a unit vector perpendicular to the plane of the pendulum. (The second to last step uses the vector triple product expansion with the perpendicularity of and .) 
The quantity is the moment of inertia of this single mass around the pivot point. The quantity also appears in the angular momentum of a simple pendulum, which is calculated from the velocity of the pendulum mass around the pivot, where is the angular velocity of the mass about the pivot point. This angular momentum is given by using a similar derivation to the previous equation. Similarly, the kinetic energy of the pendulum mass is defined by the velocity of the pendulum around the pivot to yield This shows that the quantity is how mass combines with the shape of a body to define rotational inertia. The moment of inertia of an arbitrarily shaped body is the sum of the values for all of the elements of mass in the body. Compound pendulums A compound pendulum is a body formed from an assembly of particles of continuous shape that rotates rigidly around a pivot. Its moment of inertia is the sum of the moments of inertia of each of the particles that it is composed of. The natural frequency () of a compound pendulum depends on its moment of inertia, , where is the mass of the object, is local acceleration of gravity, and is the distance from the pivot point to the center of mass of the object. Measuring this frequency of oscillation over small angular displacements provides an effective way of measuring moment of inertia of a body. Thus, to determine the moment of inertia of the body, simply suspend it from a convenient pivot point so that it swings freely in a plane perpendicular to the direction of the desired moment of inertia, then measure its natural frequency or period of oscillation (), to obtain where is the period (duration) of oscillation (usually averaged over multiple periods). Center of oscillation A simple pendulum that has the same natural frequency as a compound pendulum defines the length from the pivot to a point called the center of oscillation of the compound pendulum. This point also corresponds to the center of percussion. The length is determined from the formula, or The seconds pendulum, which provides the "tick" and "tock" of a grandfather clock, takes one second to swing from side-to-side. This is a period of two seconds, or a natural frequency of for the pendulum. In this case, the distance to the center of oscillation, , can be computed to be Notice that the distance to the center of oscillation of the seconds pendulum must be adjusted to accommodate different values for the local acceleration of gravity. Kater's pendulum is a compound pendulum that uses this property to measure the local acceleration of gravity, and is called a gravimeter. Measuring moment of inertia The moment of inertia of a complex system such as a vehicle or airplane around its vertical axis can be measured by suspending the system from three points to form a trifilar pendulum. A trifilar pendulum is a platform supported by three wires designed to oscillate in torsion around its vertical centroidal axis. The period of oscillation of the trifilar pendulum yields the moment of inertia of the system. Moment of inertia of area Moment of inertia of area is also known as the second moment of area and its physical meaning is completely different from the mass moment of inertia. These calculations are commonly used in civil engineering for structural design of beams and columns. Cross-sectional areas calculated for vertical moment of the x-axis and horizontal moment of the y-axis . 
Height (h) and breadth (b) are the linear measures, except for circles, which are effectively half-breadth derived, Sectional areas moment calculated thus Square: Rectangular: and; Triangular: Circular: Motion in a fixed plane Point mass The moment of inertia about an axis of a body is calculated by summing for every particle in the body, where is the perpendicular distance to the specified axis. To see how moment of inertia arises in the study of the movement of an extended body, it is convenient to consider a rigid assembly of point masses. (This equation can be used for axes that are not principal axes provided that it is understood that this does not fully describe the moment of inertia.) Consider the kinetic energy of an assembly of masses that lie at the distances from the pivot point , which is the nearest point on the axis of rotation. It is the sum of the kinetic energy of the individual masses, This shows that the moment of inertia of the body is the sum of each of the terms, that is Thus, moment of inertia is a physical property that combines the mass and distribution of the particles around the rotation axis. Notice that rotation about different axes of the same body yield different moments of inertia. The moment of inertia of a continuous body rotating about a specified axis is calculated in the same way, except with infinitely many point particles. Thus the limits of summation are removed, and the sum is written as follows: Another expression replaces the summation with an integral, Here, the function gives the mass density at each point , is a vector perpendicular to the axis of rotation and extending from a point on the rotation axis to a point in the solid, and the integration is evaluated over the volume of the body . The moment of inertia of a flat surface is similar with the mass density being replaced by its areal mass density with the integral evaluated over its area. Note on second moment of area: The moment of inertia of a body moving in a plane and the second moment of area of a beam's cross-section are often confused. The moment of inertia of a body with the shape of the cross-section is the second moment of this area about the -axis perpendicular to the cross-section, weighted by its density. This is also called the polar moment of the area, and is the sum of the second moments about the - and -axes. The stresses in a beam are calculated using the second moment of the cross-sectional area around either the -axis or -axis depending on the load. Examples The moment of inertia of a compound pendulum constructed from a thin disc mounted at the end of a thin rod that oscillates around a pivot at the other end of the rod, begins with the calculation of the moment of inertia of the thin rod and thin disc about their respective centers of mass. The moment of inertia of a thin rod with constant cross-section and density and with length about a perpendicular axis through its center of mass is determined by integration. Align the -axis with the rod and locate the origin its center of mass at the center of the rod, then where is the mass of the rod. The moment of inertia of a thin disc of constant thickness , radius , and density about an axis through its center and perpendicular to its face (parallel to its axis of rotational symmetry) is determined by integration. Align the -axis with the axis of the disc and define a volume element as , then where is its mass. 
The moment of inertia of the compound pendulum is now obtained by adding the moment of inertia of the rod and the disc around the pivot point as, where is the length of the pendulum. Notice that the parallel axis theorem is used to shift the moment of inertia from the center of mass to the pivot point of the pendulum. A list of moments of inertia formulas for standard body shapes provides a way to obtain the moment of inertia of a complex body as an assembly of simpler shaped bodies. The parallel axis theorem is used to shift the reference point of the individual bodies to the reference point of the assembly. As one more example, consider the moment of inertia of a solid sphere of constant density about an axis through its center of mass. This is determined by summing the moments of inertia of the thin discs that can form the sphere whose centers are along the axis chosen for consideration. If the surface of the sphere is defined by the equation then the square of the radius of the disc at the cross-section along the -axis is Therefore, the moment of inertia of the sphere is the sum of the moments of inertia of the discs along the -axis, where is the mass of the sphere. Rigid body If a mechanical system is constrained to move parallel to a fixed plane, then the rotation of a body in the system occurs around an axis parallel to this plane. In this case, the moment of inertia of the mass in this system is a scalar known as the polar moment of inertia. The definition of the polar moment of inertia can be obtained by considering momentum, kinetic energy and Newton's laws for the planar movement of a rigid system of particles. If a system of particles, , are assembled into a rigid body, then the momentum of the system can be written in terms of positions relative to a reference point , and absolute velocities : where is the angular velocity of the system and is the velocity of . For planar movement the angular velocity vector is directed along the unit vector which is perpendicular to the plane of movement. Introduce the unit vectors from the reference point to a point , and the unit vector , so This defines the relative position vector and the velocity vector for the rigid system of the particles moving in a plane. Note on the cross product: When a body moves parallel to a ground plane, the trajectories of all the points in the body lie in planes parallel to this ground plane. This means that any rotation that the body undergoes must be around an axis perpendicular to this plane. Planar movement is often presented as projected onto this ground plane so that the axis of rotation appears as a point. In this case, the angular velocity and angular acceleration of the body are scalars and the fact that they are vectors along the rotation axis is ignored. This is usually preferred for introductions to the topic. But in the case of moment of inertia, the combination of mass and geometry benefits from the geometric properties of the cross product. For this reason, in this section on planar movement the angular velocity and accelerations of the body are vectors perpendicular to the ground plane, and the cross product operations are the same as used for the study of spatial rigid body movement. 
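Returning briefly to the solid-sphere example above, the disc-summation argument is easy to check numerically: slice the sphere into thin discs along the rotation axis, add their moments of inertia, and compare with the closed-form result 2/5·MR². The sketch below is only an illustrative verification with an assumed radius and density.

```python
import math

def sphere_inertia_by_discs(radius, density, n_slices=100_000):
    """Sum the moments of inertia (1/2 * dm * r^2) of thin discs stacked
    along the z-axis to approximate a solid sphere's moment of inertia."""
    dz = 2.0 * radius / n_slices
    total = 0.0
    for i in range(n_slices):
        z = -radius + (i + 0.5) * dz          # mid-height of the slice
        r_sq = radius ** 2 - z ** 2           # disc radius squared at this height
        dm = density * math.pi * r_sq * dz    # mass of the thin disc
        total += 0.5 * dm * r_sq              # disc about its symmetry axis
    return total

R, rho = 0.1, 7800.0                          # assumed: 10 cm steel-like sphere
M = rho * 4.0 / 3.0 * math.pi * R ** 3
print(f"numerical sum of discs: {sphere_inertia_by_discs(R, rho):.6f} kg m^2")
print(f"(2/5) M R^2:            {0.4 * M * R ** 2:.6f} kg m^2")
```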
Angular momentum The angular momentum vector for the planar movement of a rigid system of particles is given by Use the center of mass as the reference point so and define the moment of inertia relative to the center of mass as then the equation for angular momentum simplifies to The moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass is known as the polar moment of inertia. Specifically, it is the second moment of mass with respect to the orthogonal distance from an axis (or pole). For a given amount of angular momentum, a decrease in the moment of inertia results in an increase in the angular velocity. Figure skaters can change their moment of inertia by pulling in their arms. Thus, the angular velocity achieved by a skater with outstretched arms results in a greater angular velocity when the arms are pulled in, because of the reduced moment of inertia. A figure skater is not, however, a rigid body. Kinetic energy The kinetic energy of a rigid system of particles moving in the plane is given by Let the reference point be the center of mass of the system so the second term becomes zero, and introduce the moment of inertia so the kinetic energy is given by The moment of inertia is the polar moment of inertia of the body. Newton's laws Newton's laws for a rigid system of particles, , can be written in terms of a resultant force and torque at a reference point , to yield where denotes the trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference particle as well as the angular velocity vector and angular acceleration vector of the rigid system of particles as, For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors from the reference point to a point and the unit vectors , so This yields the resultant torque on the system as where , and is the unit vector perpendicular to the plane for all of the particles . Use the center of mass as the reference point and define the moment of inertia relative to the center of mass , then the equation for the resultant torque simplifies to Motion in space of a rigid body, and the inertia matrix The scalar moments of inertia appear as elements in a matrix when a system of particles is assembled into a rigid body that moves in three-dimensional space. This inertia matrix appears in the calculation of the angular momentum, kinetic energy and resultant torque of the rigid system of particles. Let the system of particles, be located at the coordinates with velocities relative to a fixed reference frame. For a (possibly moving) reference point , the relative positions are and the (absolute) velocities are where is the angular velocity of the system, and is the velocity of . Angular momentum Note that the cross product can be equivalently written as matrix multiplication by combining the first operand and the operator into a skew-symmetric matrix, , constructed from the components of : The inertia matrix is constructed by considering the angular momentum, with the reference point of the body chosen to be the center of mass : where the terms containing () sum to zero by the definition of center of mass. 
Then, the skew-symmetric matrix obtained from the relative position vector , can be used to define, where defined by is the symmetric inertia matrix of the rigid system of particles measured relative to the center of mass . Kinetic energy The kinetic energy of a rigid system of particles can be formulated in terms of the center of mass and a matrix of mass moments of inertia of the system. Let the system of particles be located at the coordinates with velocities , then the kinetic energy is where is the position vector of a particle relative to the center of mass. This equation expands to yield three terms Since the center of mass is defined by , the second term in this equation is zero. Introduce the skew-symmetric matrix so the kinetic energy becomes Thus, the kinetic energy of the rigid system of particles is given by where is the inertia matrix relative to the center of mass and is the total mass. Resultant torque The inertia matrix appears in the application of Newton's second law to a rigid assembly of particles. The resultant torque on this system is, where is the acceleration of the particle . The kinematics of a rigid body yields the formula for the acceleration of the particle in terms of the position and acceleration of the reference point, as well as the angular velocity vector and angular acceleration vector of the rigid system as, Use the center of mass as the reference point, and introduce the skew-symmetric matrix to represent the cross product , to obtain The calculation uses the identity obtained from the Jacobi identity for the triple cross product as shown in the proof below: Thus, the resultant torque on the rigid system of particles is given by where is the inertia matrix relative to the center of mass. Parallel axis theorem The inertia matrix of a body depends on the choice of the reference point. There is a useful relationship between the inertia matrix relative to the center of mass and the inertia matrix relative to another point . This relationship is called the parallel axis theorem. Consider the inertia matrix obtained for a rigid system of particles measured relative to a reference point , given by Let be the center of mass of the rigid system, then where is the vector from the center of mass to the reference point . Use this equation to compute the inertia matrix, Distribute over the cross product to obtain The first term is the inertia matrix relative to the center of mass. The second and third terms are zero by definition of the center of mass . And the last term is the total mass of the system multiplied by the square of the skew-symmetric matrix constructed from . The result is the parallel axis theorem, where is the vector from the center of mass to the reference point . Note on the minus sign: By using the skew symmetric matrix of position vectors relative to the reference point, the inertia matrix of each particle has the form , which is similar to the that appears in planar movement. However, to make this to work out correctly a minus sign is needed. This minus sign can be absorbed into the term , if desired, by using the skew-symmetry property of . Scalar moment of inertia in a plane The scalar moment of inertia, , of a body about a specified axis whose direction is specified by the unit vector and passes through the body at a point is as follows: where is the moment of inertia matrix of the system relative to the reference point , and is the skew symmetric matrix obtained from the vector . This is derived as follows. 
Let a rigid assembly of particles, , have coordinates . Choose as a reference point and compute the moment of inertia around a line L defined by the unit vector through the reference point , . The perpendicular vector from this line to the particle is obtained from by removing the component that projects onto . where is the identity matrix, so as to avoid confusion with the inertia matrix, and is the outer product matrix formed from the unit vector along the line . To relate this scalar moment of inertia to the inertia matrix of the body, introduce the skew-symmetric matrix such that , then we have the identity noting that is a unit vector. The magnitude squared of the perpendicular vector is The simplification of this equation uses the triple scalar product identity where the dot and the cross products have been interchanged. Exchanging products, and simplifying by noting that and are orthogonal: Thus, the moment of inertia around the line through in the direction is obtained from the calculation where is the moment of inertia matrix of the system relative to the reference point . This shows that the inertia matrix can be used to calculate the moment of inertia of a body around any specified rotation axis in the body. Inertia tensor For the same object, different axes of rotation will have different moments of inertia about those axes. In general, the moments of inertia are not equal unless the object is symmetric about all axes. The moment of inertia tensor is a convenient way to summarize all moments of inertia of an object with one quantity. It may be calculated with respect to any point in space, although for practical purposes the center of mass is most commonly used. Definition For a rigid object of point masses , the moment of inertia tensor is given by Its components are defined as where , is equal to 1, 2 or 3 for , , and , respectively, is the vector to the point mass from the point about which the tensor is calculated and is the Kronecker delta. Note that, by the definition, is a symmetric tensor. The diagonal elements are more succinctly written as while the off-diagonal elements, also called the , are Here denotes the moment of inertia around the -axis when the objects are rotated around the x-axis, denotes the moment of inertia around the -axis when the objects are rotated around the -axis, and so on. These quantities can be generalized to an object with distributed mass, described by a mass density function, in a similar fashion to the scalar moment of inertia. One then has where is their outer product, E3 is the 3×3 identity matrix, and V is a region of space completely containing the object. Alternatively it can also be written in terms of the angular momentum operator : The inertia tensor can be used in the same way as the inertia matrix to compute the scalar moment of inertia about an arbitrary axis in the direction , where the dot product is taken with the corresponding elements in the component tensors. A product of inertia term such as is obtained by the computation and can be interpreted as the moment of inertia around the -axis when the object rotates around the -axis. The components of tensors of degree two can be assembled into a matrix. For the inertia tensor this matrix is given by, It is common in rigid body mechanics to use notation that explicitly identifies the , , and -axes, such as and , for the components of the inertia tensor. 
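The definitions above translate directly into a few lines of code. The sketch below builds the inertia tensor of an arbitrary set of point masses about their centre of mass, evaluates the scalar moment of inertia about a chosen axis as n·I·n, and diagonalizes the symmetric tensor to recover principal moments and axes; the masses, positions and axis are assumed values used only for illustration.

```python
import numpy as np

def inertia_tensor(masses, points):
    """I = sum_k m_k * ((r_k . r_k) E3 - outer(r_k, r_k)), about the origin of `points`."""
    I = np.zeros((3, 3))
    for m, r in zip(masses, points):
        I += m * (np.dot(r, r) * np.eye(3) - np.outer(r, r))
    return I

# Arbitrary assumed rigid assembly of point masses (kg) at positions (m).
masses = np.array([1.0, 2.0, 1.5, 0.5])
points = np.array([[ 0.1,  0.0,  0.2],
                   [-0.2,  0.1,  0.0],
                   [ 0.0, -0.1, -0.1],
                   [ 0.3,  0.2,  0.1]])

com = (masses[:, None] * points).sum(axis=0) / masses.sum()
I_com = inertia_tensor(masses, points - com)

# Scalar moment of inertia about an axis n through the centre of mass.
n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
print("I about axis n:", n @ I_com @ n)

# Principal moments and axes: eigendecomposition of the symmetric tensor.
principal_moments, principal_axes = np.linalg.eigh(I_com)
print("principal moments:", principal_moments)   # columns of principal_axes are the axes
```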
Alternate inertia convention There are some CAD and CAE applications such as SolidWorks, Unigraphics NX/Siemens NX and MSC Adams that use an alternate convention for the products of inertia. According to this convention, the minus sign is removed from the product of inertia formulas and instead inserted in the inertia matrix: Determine inertia convention (Principal axes method) If one has the inertia data without knowing which inertia convention that has been used, it can be determined if one also has the principal axes. With the principal axes method, one makes inertia matrices from the following two assumptions: The standard inertia convention has been used . The alternate inertia convention has been used . Next, one calculates the eigenvectors for the two matrices. The matrix whose eigenvectors are parallel to the principal axes corresponds to the inertia convention that has been used. Derivation of the tensor components The distance of a particle at from the axis of rotation passing through the origin in the direction is , where is unit vector. The moment of inertia on the axis is Rewrite the equation using matrix transpose: where E3 is the 3×3 identity matrix. This leads to a tensor formula for the moment of inertia For multiple particles, we need only recall that the moment of inertia is additive in order to see that this formula is correct. Inertia tensor of translation Let be the inertia tensor of a body calculated at its center of mass, and be the displacement vector of the body. The inertia tensor of the translated body respect to its original center of mass is given by: where is the body's mass, E3 is the 3 × 3 identity matrix, and is the outer product. Inertia tensor of rotation Let be the matrix that represents a body's rotation. The inertia tensor of the rotated body is given by: Inertia matrix in different reference frames The use of the inertia matrix in Newton's second law assumes its components are computed relative to axes parallel to the inertial frame and not relative to a body-fixed reference frame. This means that as the body moves the components of the inertia matrix change with time. In contrast, the components of the inertia matrix measured in a body-fixed frame are constant. Body frame Let the body frame inertia matrix relative to the center of mass be denoted , and define the orientation of the body frame relative to the inertial frame by the rotation matrix , such that, where vectors in the body fixed coordinate frame have coordinates in the inertial frame. Then, the inertia matrix of the body measured in the inertial frame is given by Notice that changes as the body moves, while remains constant. Principal axes Measured in the body frame, the inertia matrix is a constant real symmetric matrix. A real symmetric matrix has the eigendecomposition into the product of a rotation matrix and a diagonal matrix , given by where The columns of the rotation matrix define the directions of the principal axes of the body, and the constants , , and are called the principal moments of inertia. This result was first shown by J. J. Sylvester (1852), and is a form of Sylvester's law of inertia. The principal axis with the highest moment of inertia is sometimes called the figure axis or axis of figure. A toy top is an example of a rotating rigid body, and the word top is used in the names of types of rigid bodies. When all principal moments of inertia are distinct, the principal axes through center of mass are uniquely specified and the rigid body is called an asymmetric top. 
If two principal moments are the same, the rigid body is called a symmetric top and there is no unique choice for the two corresponding principal axes. If all three principal moments are the same, the rigid body is called a spherical top (although it need not be spherical) and any axis can be considered a principal axis, meaning that the moment of inertia is the same about any axis. The principal axes are often aligned with the object's symmetry axes. If a rigid body has an axis of symmetry of order m, meaning it is symmetrical under rotations of 360°/m about the given axis, that axis is a principal axis. When m > 2, the rigid body is a symmetric top. If a rigid body has at least two symmetry axes that are not parallel or perpendicular to each other, it is a spherical top, for example, a cube or any other Platonic solid. The motion of vehicles is often described in terms of yaw, pitch, and roll which usually correspond approximately to rotations about the three principal axes. If the vehicle has bilateral symmetry then one of the principal axes will correspond exactly to the transverse (pitch) axis. A practical example of this mathematical phenomenon is the routine automotive task of balancing a tire, which basically means adjusting the distribution of mass of a car wheel such that its principal axis of inertia is aligned with the axle so the wheel does not wobble. Rotating molecules are also classified as asymmetric, symmetric, or spherical tops, and the structure of their rotational spectra is different for each type. Ellipsoid The moment of inertia matrix in body-frame coordinates is a quadratic form that defines a surface in the body called Poinsot's ellipsoid. Let Λ be the inertia matrix relative to the center of mass aligned with the principal axes, then the surface x^T Λ x = 1, or I_1 x² + I_2 y² + I_3 z² = 1, defines an ellipsoid in the body frame. Write this equation in the form (x / (1/√I_1))² + (y / (1/√I_2))² + (z / (1/√I_3))² = 1 to see that the semi-principal diameters of this ellipsoid are given by a = 1/√I_1, b = 1/√I_2, c = 1/√I_3. Let a point x on this ellipsoid be defined in terms of its magnitude and direction, x = |x| n, where n is a unit vector. Then the relationship presented above, between the inertia matrix and the scalar moment of inertia I_n around an axis in the direction n, yields |x|² n^T Λ n = |x|² I_n = 1. Thus, the magnitude of a point x in the direction n on the inertia ellipsoid is |x| = 1/√I_n. See also Central moment List of moments of inertia Planar lamina Rotational energy Moment of inertia factor References External links Angular momentum and rigid-body rotation in two and three dimensions Lecture notes on rigid-body rotation and moments of inertia The moment of inertia tensor An introductory lesson on moment of inertia: keeping a vertical pole not falling down (Java simulation) Tutorial on finding moments of inertia, with problems and solutions on various basic shapes Notes on mechanics of manipulation: the angular inertia tensor Easy to use and Free Moment of Inertia Calculator online Mechanical quantities Rigid bodies Rotation Articles containing video clips Moment (physics)
Moment of inertia
[ "Physics", "Mathematics" ]
7,143
[ "Physical phenomena", "Mechanical quantities", "Physical quantities", "Quantity", "Classical mechanics", "Rotation", "Motion (physics)", "Mechanics", "Moment (physics)" ]
157,706
https://en.wikipedia.org/wiki/Hemiola
In music, hemiola (also hemiolia) is the ratio 3:2. The equivalent Latin term is sesquialtera. In rhythm, hemiola refers to three beats of equal value in the time normally occupied by two beats. In pitch, hemiola refers to the interval of a perfect fifth. Etymology The word hemiola comes from the Greek adjective ἡμιόλιος, hemiolios, meaning "containing one and a half," "half as much again," "in the ratio of one and a half to one (3:2), as in musical sounds." The words "hemiola" and "sesquialtera" both signify the ratio 3:2, and in music were first used to describe relations of pitch. Dividing the string of a monochord in this ratio produces the interval of a perfect fifth. Beginning in the 15th century, both words were also used to describe rhythmic relationships, specifically the substitution (usually through the use of coloration—red notes in place of black ones, or black in place of "white", hollow noteheads) of three imperfect notes (divided into two parts) for two perfect ones (divided into three parts) in tempus perfectum or in prolatio maior. Rhythm In rhythm, hemiola refers to three beats of equal value in the time normally occupied by two beats. Vertical hemiola: sesquialtera The Oxford Dictionary of Music illustrates hemiola with a superimposition of three notes in the time of two and vice versa. One textbook states that, although the word "hemiola" is commonly used for both simultaneous and successive durational values, describing a simultaneous combination of three against two is less accurate than for successive values and the "preferred term for a vertical two against three … is sesquialtera." The New Harvard Dictionary of Music states that in some contexts, a sesquialtera is equivalent to a hemiola. Grove's Dictionary, on the other hand, has maintained from the first edition of 1880 down to the most recent edition of 2001 that the Greek and Latin terms are equivalent and interchangeable, both in the realms of pitch and rhythm, although David Hiley, E. Thomas Stanford, and Paul R. Laird hold that, though similar in effect, hemiola properly applies to a momentary occurrence of three duple values in place of two triple ones, whereas sesquialtera represents a proportional metric change between successive sections. Sub-Saharan African music A repeating vertical hemiola is known as polyrhythm, or more specifically, cross-rhythm. The most basic rhythmic cell of sub-Saharan Africa is the 3:2 cross-rhythm. Novotney observes: "The 3:2 relationship (and [its] permutations) is the foundation of most typical polyrhythmic textures found in West African musics." Agawu states: "[The] resultant [3:2] rhythm holds the key to understanding ... there is no independence here, because 2 and 3 belong to a single Gestalt." In the following example, a Ghanaian gyil plays a hemiola as the basis of an ostinato melody. The left hand (lower notes) sounds the two main beats, while the right hand (upper notes) sounds the three cross-beats. European music In compound time ( or ). Where a regular pattern of two beats to a measure is established at the start of a phrase. This changes to a pattern of three beats at the end of the phrase. The minuet from J. S. Bach's keyboard Partita No. 5 in G major articulates groups of 2 times 3 quavers that are really in time, despite the metre stated in the initial time-signature. 
The latter time is restored only at the cadences (bars 4 and 11–12): Later in the same piece, Bach creates a conflict between the two metres ( against ): Hemiola is found in many Renaissance pieces in triple rhythm. One composer who exploited this characteristic was the 16th-century French composer Claude Le Jeune, a leading exponent of musique mesurée à l'antique. One of his best-known chansons is "Revoici venir du printemps", where the alternation of compound-duple and simple-triple metres with a common counting unit for the beat subdivisions can be clearly heard: The hemiola was commonly used in baroque music, particularly in dances, such as the courante and minuet. Other composers who have used the device extensively include Corelli, Handel, Weber and Beethoven. A spectacular example from Beethoven comes in the scherzo from his String Quartet No. 6. As Philip Radcliffe puts it, "The constant cross-rhythms shifting between and , more common at certain earlier and later periods, were far from usual in 1800, and here they are made to sound especially eccentric owing to frequent sforzandi on the last quaver of the bar... it looks ahead to later works and must have sounded very disconcerting to contemporary audiences." Later in the nineteenth century, Tchaikovsky frequently used hemiolas in his waltzes, as did Richard Strauss in the waltzes from Der Rosenkavalier, and the third movement of Robert Schumann's Piano Concerto is noted for the ambiguity of its rhythm. John Daverio says that the movement's "fanciful hemiolas... serve to legitimize the dance-like material as a vehicle for symphonic elaboration." Johannes Brahms was particularly famous for exploiting the hemiola's potential for large-scale thematic development. Writing about the rhythm and meter of Brahms's Symphony No. 3, Frisch says "Perhaps in no other first movement by Brahms does the development of these elements play so critical a role. The first movement of the third is cast in meter that is also open, through internal recasting as (a so-called hemiola). Metrical ambiguity arises in the very first appearance of the motto [opening theme]." At the beginning of the second movement, , of his String Quartet (1903), Ravel "uses the pizzicato as a vehicle for rhythmic interplay between and ." Horizontal hemiola Peter Manuel, in the context of an analysis of the flamenco soleá song form, refers to the following figure as a horizontal hemiola or "sesquialtera" (which mistranslates as: "six that alters"). It is "a cliché of various Spanish and Latin American musics ... well established in Spain since the sixteenth century", a twelve-beat scheme with internal accents, consisting of a bar followed by one in , for a 3 + 3 + 2 + 2 + 2 pattern. This figure is a common African bell pattern, used by the Hausa people of Nigeria, in Haitian Vodou drumming, Cuban palo, and many other drumming systems. The horizontal hemiola suggests metric modulation ( changing to ). This interpretational switch has been exploited, for example, by Leonard Bernstein, in the song "America" from West Side Story, as can be heard in the prominent motif (suggesting a duple beat scheme, followed by a triple beat scheme): Pitch The perfect fifth Hemiola can be used to describe the ratio of the lengths of two strings as three-to-two (3:2), that together sound a perfect fifth. The early Pythagoreans, such as Hippasus and Philolaus, used this term in a music-theoretic context to mean a perfect fifth. 
The justly tuned pitch ratio of a perfect fifth means that the upper note makes three vibrations in the same amount of time that the lower note makes two. In the cent system of pitch measurement, the 3:2 ratio corresponds to approximately 702 cents, or 2% of a semitone wider than seven semitones. The just perfect fifth can be heard when a violin is tuned: if adjacent strings are adjusted to the exact ratio of 3:2, the result is a smooth and consonant sound, and the violin sounds in tune. Just perfect fifths are the basis of Pythagorean tuning, and are employed together with other just intervals in just intonation. The 3:2 just perfect fifth arises in the justly tuned C major scale between C and G. Other intervals Later Greek authors such as Aristoxenus and Ptolemy use the word to describe smaller intervals as well, such as the hemiolic chromatic pyknon, which is one-and-a-half times the size of the semitone comprising the enharmonic pyknon. See also Syncopation References Sources Further reading Brandel, Rose (1959). The African Hemiola Style, Ethnomusicology, 3(3):106–117, correction, 4(1):iv. Károlyi, Ottó (1998). Traditional African & Oriental Music, Penguin Books. . Ratios Musical techniques Musical terminology Rhythm and meter
Hemiola
[ "Physics", "Mathematics" ]
1,881
[ "Physical quantities", "Time", "Rhythm and meter", "Arithmetic", "Spacetime", "Ratios" ]
157,736
https://en.wikipedia.org/wiki/Hybrid%20vehicle
A hybrid vehicle is one that uses two or more distinct types of power, such as submarines that use diesel when surfaced and batteries when submerged. Other means to store energy include pressurized fluid in hydraulic hybrids. Hybrid powertrains are designed to switch from one power source to another to maximize both fuel efficiency and energy efficiency. In hybrid electric vehicles, for instance, the electric motor is more efficient at producing torque, or turning power, while the combustion engine is better for maintaining high speed. Improved efficiency, lower emissions, and reduced running costs relative to non-hybrid vehicles are three primary benefits of hybridization. Vehicle types Two-wheeled and cycle-type vehicles Mopeds, electric bicycles, and even electric kick scooters are a simple form of a hybrid, powered by an internal combustion engine or electric motor and the rider's muscles. Early prototype motorcycles in the late 19th century used the same principle. In a parallel hybrid bicycle human and motor torques are mechanically coupled at the pedal or one of the wheels, e.g. using a hub motor, a roller pressing onto a tire, or a connection to a wheel using a transmission element. Most motorized bicycles, mopeds are of this type. In a series hybrid bicycle (SHB) (a kind of chainless bicycle) the user pedals a generator, charging a battery or feeding the motor, which delivers all of the torque required. They are commercially available, being simple in theory and manufacturing. The first published prototype of an SHB is by Augustus Kinzel (US Patent 3'884'317) in 1975. In 1994 Bernie Macdonalds conceived the Electrilite SHB with power electronics allowing regenerative braking and pedaling while stationary. In 1995 Thomas Muller designed and built a "Fahrrad mit elektromagnetischem Antrieb" for his 1995 diploma thesis. In 1996 Jürg Blatter and Andreas Fuchs of Berne University of Applied Sciences built an SHB and in 1998 modified a Leitra tricycle (European patent EP 1165188). Until 2005 they built several prototype SH tricycles and quadricycles. In 1999 Harald Kutzke described an "active bicycle": the aim is to approach the ideal bicycle weighing nothing and having no drag by electronic compensation. A series hybrid electric–petroleum bicycle (SHEPB) is powered by pedals, batteries, a petrol generator, or plug-in charger—providing flexibility and range enhancements over electric-only bicycles. A SHEPB prototype made by David Kitson in Australia in 2014 used a lightweight brushless DC electric motor from an aerial drone and small hand-tool sized internal combustion engine, and a 3D printed drive system and lightweight housing, altogether weighing less than 4.5 kg. Active cooling keeps plastic parts from softening. The prototype uses a regular electric bicycle charge port. Heavy vehicle Hybrid power trains use diesel–electric or turbo-electric to power railway locomotives, buses, heavy goods vehicles, mobile hydraulic machinery, and ships. A diesel/turbine engine drives an electric generator or hydraulic pump, which powers electric/hydraulic motors—strictly an electric/hydraulic transmission (not a hybrid), unless it can accept power from outside. With large vehicles, conversion losses decrease and the advantages in distributing power through wires or pipes rather than mechanical elements become more prominent, especially when powering multiple drives—e.g. driven wheels or propellers. Until recently most heavy vehicles had little secondary energy storage, e.g. 
batteries/hydraulic accumulators—excepting non-nuclear submarines, one of the oldest production hybrids, running on diesel while surfaced and batteries when submerged. Both series and parallel setups were used in World War II-era submarines. Rail transport Europe The new Autorail à grande capacité (AGC or high-capacity railcar) built by the Canadian company Bombardier for service in France is diesel/electric motors, using 1500 or 25,000 V on different rail systems. It was tested in Rotterdam, the Netherlands with Railfeeding, a Genesee & Wyoming company. China The First Hybrid Evaluating locomotive was designed by rail research center Matrai in 1999 and built in 2000. It was an EMD G12 locomotive upgraded with batteries, a 200 kW diesel generator, and four AC motors. Japan Japan's first hybrid train with significant energy storage is the KiHa E200, with roof-mounted lithium-ion batteries. India Indian railway launched one of its kind CNG-Diesel hybrid trains in January 2015. The train has a 1400 hp engine which uses fumigation technology. The first of these trains is set to run on the 81 km long Rewari-Rohtak route. CNG is less-polluting alternative for diesel and petrol and is popular as an alternative fuel in India. Already many transport vehicles such as auto-rickshaws and buses run on CNG fuel. North America In the US, General Electric made a locomotive with sodium–nickel chloride (Na-NiCl2) battery storage. They expect ≥10% fuel economy. Variant diesel electric locomotive include the Green Goat (GG) and Green Kid (GK) switching/yard engines built by Canada's Railpower Technologies, with lead acid (Pba) batteries and 1000 to 2000 hp electric motors, and a new clean-burning ≈160 hp diesel generator. No fuel is wasted for idling: ≈60–85% of the time for these types of locomotives. It is unclear if regenerative braking is used; but in principle, it is easily utilized. Since these engines typically need extra weight for traction purposes anyway the battery pack's weight is a negligible penalty. The diesel generator and batteries are normally built on an existing "retired" "yard" locomotive's frame. The existing motors and running gear are all rebuilt and reused. Fuel savings of 40–60% and up to 80% pollution reductions are claimed over a "typical" older switching/yard engine. The advantages hybrid cars have for frequent starts and stops and idle periods apply to typical switching yard use. "Green Goat" locomotives have been purchased by Canadian Pacific, BNSF, Kansas City Southern Railway and Union Pacific among others. Cranes Railpower Technologies engineers working with TSI Terminal Systems are testing a hybrid diesel–electric power unit with battery storage for use in Rubber Tyred Gantry (RTG) cranes. RTG cranes are typically used for loading and unloading shipping containers onto trains or trucks in ports and container storage yards. The energy used to lift the containers can be partially regained when they are lowered. Diesel fuel and emission reductions of 50–70% are predicted by Railpower engineers. First systems are expected to be operational in 2007. Road transport, commercial vehicles Hybrid systems are regularly in use for trucks, buses and other heavy highway vehicles. Small fleet sizes and installation costs are compensated by fuel savings, with advances such as higher capacity, lowered battery cost, etc. Toyota, Ford, GM and others are introducing hybrid pickups and SUVs. 
Kenworth Truck Company recently introduced the Kenworth T270 Class 6 that for city usage seems to be competitive. FedEx and others are investing in hybrid delivery vehicles—particularly for city use where hybrid technology may pay off first. FedEx is trialling two delivery trucks with Wrightspeed electric motors and diesel generators; the retrofit kits are claimed to pay for themselves in a few years. The diesel engines run at a constant RPM for peak efficiency. In 1978 students at Minneapolis, Minnesota's Hennepin Vocational Technical Center, converted a Volkswagen Beetle to a petro-hydraulic hybrid with off-the shelf components. A car rated at 32 mpg was returning 75 mpg with the 60 hp engine replaced by a 16 hp engine, and reached 70 mph. In the 1990s, engineers at EPA's National Vehicle and Fuel Emissions Laboratory developed a petro-hydraulic powertrain for a typical American sedan car. The test car achieved over 80 mpg on combined EPA city/highway driving cycles. Acceleration was 0-60 mph in 8 seconds, using a 1.9-liter diesel engine. No lightweight materials were used. The EPA estimated that produced in high volumes the hydraulic components would add only $700 to the cost. Under EPA testing, a hydraulic hybrid Ford Expedition returned 32 mpg (7.4 L/100 km) City, and 22 mpg (11 L/100 km) highway. UPS currently has two trucks in service using this technology. Military off-road vehicles Since 1985, the US military has been testing serial hybrid Humvees and have found them to deliver faster acceleration, a stealth mode with low thermal signature, near silent operation, and greater fuel economy. Ships Ships with both mast-mounted sails and steam engines were an early form of a hybrid vehicle. Another example is the diesel–electric submarine. This runs on batteries when submerged and the batteries can be recharged by the diesel engine when the craft is on the surface. , there are 550 ships with an average of 1.6 MWh of batteries. The average was 500 kWh in 2016. Newer hybrid ship-propulsion schemes include large towing kites manufactured by companies such as SkySails. Towing kites can fly at heights several times higher than the tallest ship masts, capturing stronger and steadier winds. Aircraft The Boeing Fuel Cell Demonstrator Airplane has a Proton-Exchange Membrane (PEM) fuel cell/lithium-ion battery hybrid system to power an electric motor, which is coupled to a conventional propeller. The fuel cell provides all power for the cruise phase of flight. During takeoff and climb, the flight segment that requires the most power, the system draws on lightweight lithium-ion batteries. The demonstrator aircraft is a Dimona motor glider, built by Diamond Aircraft Industries of Austria, which also carried out structural modifications to the aircraft. With a wingspan of , the airplane will be able to cruise at about on power from the fuel cell. Hybrid FanWings have been designed. A FanWing is created by two engines with the capability to autorotate and landing like a helicopter. Engine type Hybrid electric-petroleum vehicles When the term hybrid vehicle is used, it most often refers to a Hybrid electric vehicle. These encompass such vehicles as the Saturn Vue, Toyota Prius, Toyota Yaris, Toyota Camry Hybrid, Ford Escape Hybrid, Ford Fusion Hybrid, Toyota Highlander Hybrid, Honda Insight, Honda Civic Hybrid, Lexus RX 400h, and 450h, Hyundai Ioniq Hybrid, Hyundai Sonata Hybrid, Hyundai Elantra Hybrid, Kia Sportage Hybrid, Kia Niro Hybrid, Kia Sorento Hybrid and others. 
A petroleum-electric hybrid most commonly uses internal combustion engines (using a variety of fuels, generally gasoline or Diesel engines) and electric motors to power the vehicle. The energy is stored in the fuel of the internal combustion engine and an electric battery set. There are many types of petroleum-electric hybrid drivetrains, from Full hybrid to Mild hybrid, which offer varying advantages and disadvantages. William H. Patton filed a patent application for a gasoline-electric hybrid rail-car propulsion system in early 1889, and for a similar hybrid boat propulsion system in mid 1889. There is no evidence that his hybrid boat met with any success, but he built a prototype hybrid tram and sold a small hybrid locomotive. In 1899, Henri Pieper developed the world's first petro-electric hybrid automobile. In 1900, Ferdinand Porsche developed a series-hybrid using two motor-in-wheel-hub arrangements with an internal combustion generator set providing the electric power; Porsche's hybrid set two-speed records. While liquid fuel/electric hybrids date back to the late 19th century, the braking regenerative hybrid was invented by David Arthurs, an electrical engineer from Springdale, Arkansas, in 1978–79. His home-converted Opel GT was reported to return as much as 75 mpg with plans still sold to this original design, and the "Mother Earth News" modified version on their website. The plug-in-electric-vehicle (PEV) is becoming more and more common. It has the range needed in locations where there are wide gaps with no services. The batteries can be plugged into house (mains) electricity for charging, as well being charged while the engine is running. Continuously outboard recharged electric vehicle Some battery electric vehicles can be recharged while the user drives. Such a vehicle establishes contact with an electrified rail, plate, or overhead wires on the highway via an attached conducting wheel or other similar mechanisms (see conduit current collection). The vehicle's batteries are recharged by this process—on the highway—and can then be used normally on other roads until the battery is discharged. For example, some of the battery-electric locomotives used for maintenance trains on the London Underground are capable of this mode of operation. Developing an infrastructure for battery electric vehicles would provide the advantage of virtually unrestricted highway range. Since many destinations are within 100 km of a major highway, this technology could reduce the need for expensive battery systems. However, private use of the existing electrical system is almost universally prohibited. Besides, the technology for such electrical infrastructure is largely outdated and, outside some cities, not widely distributed (see Conduit current collection, trams, electric rail, trolleys, third rail). Updating the required electrical and infrastructure costs could perhaps be funded by toll revenue or by dedicated transportation taxes. Hybrid fuel (dual mode) In addition to vehicles that use two or more different devices for propulsion, some also consider vehicles that use distinct energy sources or input types ("fuels") using the same engine to be hybrids, although to avoid confusion with hybrids as described above and to use correctly the terms, these are perhaps more correctly described as dual mode vehicles: Some trolleybuses can switch between an onboard diesel engine and overhead electrical power depending on conditions (see dual-mode bus). 
In principle, this could be combined with a battery subsystem to create a true plug-in hybrid trolleybus, although , no such design seems to have been announced. Flexible-fuel vehicles can use a mixture of input fuels mixed in one tank—typically gasoline and ethanol, methanol, or biobutanol. Bi-fuel vehicle: Liquified petroleum gas and natural gas are very different from petroleum or diesel and cannot be used in the same tanks, so it would be challenging to build an (LPG or NG) flexible fuel system. Instead vehicles are built with two, parallel, fuel systems feeding one engine. For example, some Chevrolet Silverado 2500 HDs can effortlessly switch between petroleum and natural gas, offering a range of over 1000 km (650 miles). While the duplicated tanks cost space in some applications, the increased range, decreased cost of fuel, and flexibility where LPG or CNG infrastructure is incomplete may be a significant incentive to purchase. While the US Natural gas infrastructure is partially incomplete, it is increasing and in 2013 had 2600 CNG stations in place. Rising gas prices may push consumers to purchase these vehicles. In 2013 when gas prices traded around US, the price of gasoline was US, compared to natural gas's . On a per unit of energy comparative basis, this makes natural gas much cheaper than gasoline. Some vehicles have been modified to use another fuel source if it is available, such as cars modified to run on autogas (LPG) and diesels modified to run on waste vegetable oil that has not been processed into biodiesel. Power-assist mechanisms for bicycles and other human-powered vehicles are also included (see Motorized bicycle). Fluid power hybrid Hydraulic hybrid and pneumatic hybrid vehicles use an engine or regenerative braking (or both) to charge a pressure accumulator to drive the wheels via hydraulic (liquid) or pneumatic (compressed gas) drive units. In most cases the engine is detached from the drivetrain, serving solely to charge the energy accumulator. The transmission is seamless. Regenerative braking can be used to recover some of the supplied drive energy back into the accumulator. Petro-air hybrid A French company, MDI, has designed and has running models of a petro-air hybrid engine car. The system does not use air motors to drive the vehicle, being directly driven by a hybrid engine. The engine uses a mixture of compressed air and gasoline injected into the cylinders. A key aspect of the hybrid engine is the "active chamber", which is a compartment heating air via fuel doubling the energy output. Tata Motors of India assessed the design phase towards full production for the Indian market and moved into "completing detailed development of the compressed air engine into specific vehicle and stationary applications". Petro-hydraulic hybrid Petro-hydraulic configurations have been common in trains and heavy vehicles for decades. The auto industry recently focused on this hybrid configuration as it now shows promise for introduction into smaller vehicles. In petro-hydraulic hybrids, the energy recovery rate is high and therefore the system is more efficient than electric battery charged hybrids using the current electric battery technology, demonstrating a 60% to 70% increase in energy economy in US Environmental Protection Agency (EPA) testing. The charging engine needs only to be sized for average usage with acceleration bursts using the stored energy in the hydraulic accumulator, which is charged when in low energy demanding vehicle operation. 
The charging engine runs at optimum speed and load for efficiency and longevity. Under tests undertaken by the US Environmental Protection Agency (EPA), a hydraulic hybrid Ford Expedition returned City, and highway. UPS currently has two trucks in service using this technology. Although petro-hydraulic hybrid technology has been known for decades and used in trains as well as very large construction vehicles, the high costs of the equipment precluded the systems from lighter trucks and cars. In the modern sense, an experiment proved the viability of small petro-hydraulic hybrid road vehicles in 1978. A group of students at Minneapolis, Minnesota's Hennepin Vocational Technical Center, converted a Volkswagen Beetle car to run as a petro-hydraulic hybrid using off-the-shelf components. A car rated at was returning with the 60 hp engine replaced by a 16 hp engine. The experimental car reached . In the 1990s, a team of engineers working at EPA's National Vehicle and Fuel Emissions Laboratory succeeded in developing a revolutionary type of petro-hydraulic hybrid powertrain that would propel a typical American sedan car. The test car achieved over 80 mpg on combined EPA city/highway driving cycles. Acceleration was 0-60 mph in 8 seconds, using a 1.9 L diesel engine. No lightweight materials were used. The EPA estimated that produced in high volumes the hydraulic components would add only $700 to the base cost of the vehicle. The petro-hydraulic hybrid system has a faster and more efficient charge/discharge cycling than petro-electric hybrids and is also cheaper to build. The accumulator vessel size dictates total energy storage capacity and may require more space than an electric battery set. Any vehicle space consumed by a larger size of accumulator vessel may be offset by the need for a smaller sized charging engine, in HP and physical size. Research is underway in large corporations and small companies. The focus has now switched to smaller vehicles. The system components were expensive which precluded installation in smaller trucks and cars. A drawback was that the power driving motors were not efficient enough at part load. A British company (Artemis Intelligent Power) made a breakthrough introducing an electronically controlled hydraulic motor/pump, the Digital Displacement® motor/pump. The pump is highly efficient at all speed ranges and loads, giving feasibility to small applications of petro-hydraulic hybrids. The company converted a BMW car as a test bed to prove viability. The BMW 530i gave double the mpg in city driving compared to the standard car. This test was using the standard 3,000 cc engine, with a smaller engine the figures would have been more impressive. The design of petro-hydraulic hybrids using well sized accumulators allows downsizing an engine to average power usage, not peak power usage. Peak power is provided by the energy stored in the accumulator. A smaller more efficient constant speed engine reduces weight and liberates space for a larger accumulator. Current vehicle bodies are designed around the mechanicals of existing engine/transmission setups. It is restrictive and far from ideal to install petro-hydraulic mechanicals into existing bodies not designed for hydraulic setups. One research project's goal is to create a blank paper design new car, to maximize the packaging of petro-hydraulic hybrid components in the vehicle. All bulky hydraulic components are integrated into the chassis of the car. 
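To make the sizing argument above concrete, here is a rough back-of-envelope sketch in Python; every number in it is an invented assumption used for illustration, not data from any vehicle or project mentioned in this article:

```python
# Back-of-envelope sizing sketch for a petro-hydraulic hybrid.
# All inputs are illustrative assumptions, not measured vehicle data.

average_power_kw = 15.0       # assumed power needed to cruise a small car
peak_power_kw = 75.0          # assumed power during a hard acceleration
burst_duration_s = 8.0        # assumed length of one acceleration burst
accumulator_efficiency = 0.8  # assumed round-trip efficiency of the accumulator

# The charging engine only has to cover the average load...
engine_rating_kw = average_power_kw

# ...while the accumulator supplies the shortfall during a burst.
burst_energy_kj = (peak_power_kw - average_power_kw) * burst_duration_s
accumulator_energy_kj = burst_energy_kj / accumulator_efficiency

print(f"Engine sized for ~{engine_rating_kw:.0f} kW instead of {peak_power_kw:.0f} kW")
print(f"Accumulator must deliver ~{accumulator_energy_kj:.0f} kJ "
      f"(~{accumulator_energy_kj / 3600:.2f} kWh) per burst")
```

Under these assumed numbers the engine shrinks to roughly a fifth of the peak rating while the accumulator only needs to hold a few hundred kilojoules per burst, which is the trade-off the paragraph above describes.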
One design has claimed to return 130 mpg in tests by using a large hydraulic accumulator which is also the structural chassis of the car. The small hydraulic driving motors are incorporated within the wheel hubs driving the wheels and reversing to claw-back kinetic braking energy. The hub motors eliminate the need for friction brakes, mechanical transmissions, driveshafts, and U-joints, reducing costs and weight. Hydrostatic drive with no friction brakes is used in industrial vehicles. The aim is 170 mpg in average driving conditions. The energy created by shock absorbers and kinetic braking energy that normally would be wasted assists in charging the accumulator. A small fossil-fuelled piston engine sized for average power use charges the accumulator. The accumulator is sized at running the car for 15 minutes when fully charged. The aim is a fully charged accumulator that will produce a 0-60 mph acceleration speed of under 5 seconds using four wheel drive. In January 2011 industry giant Chrysler announced a partnership with the US Environmental Protection Agency (EPA) to design and develop an experimental petro-hydraulic hybrid powertrain suitable for use in large passenger cars. In 2012 an existing production minivan was adapted to the new hydraulic powertrain for assessment. PSA Peugeot Citroën exhibited an experimental "Hybrid Air" engine at the 2013 Geneva Motor Show. The vehicle uses nitrogen gas compressed by energy harvested from braking or deceleration to power a hydraulic drive which supplements power from its conventional gasoline engine. The hydraulic and electronic components were supplied by Robert Bosch GmbH. Mileage was estimated to be about on the Euro test cycle if installed in a Citroën C3 type of body. PSA Although the car was ready for production and was proven and feasible delivering the claimed results, Peugeot Citroën were unable to attract a major manufacturer to share the high development costs and are shelving the project until a partnership can be arranged. Electric-human power hybrid vehicle Another form of a hybrid vehicle are the human-powered electric vehicles. These include such vehicles as the Sinclair C5, Twike, electric bicycles, electric skateboards, and Electric motorcycles and scooters Hybrid vehicle power train configurations Parallel hybrid In a parallel hybrid vehicle, an electric motor and an internal combustion engine are coupled such that they can power the vehicle either individually or together. Most commonly the internal combustion engine, the electric motor and gearbox are coupled by automatically controlled clutches. For electric driving, the clutch between the internal combustion engine is open while the clutch to the gearbox is engaged. While in combustion mode the engine and motor run at the same speed. The first mass-production parallel hybrid sold outside Japan was the 1st generation Honda Insight. The Mercedes-Benz E 300 BlueTEC HYBRID released in 2012 only in European markets is a very rare mass-produced diesel hybrid vehicle powered by a Mercedes-Benz OM651 engine developing paired with a electric motor, positioned between the engine and the gearbox, for a combined output of . The vehicle has a fuel consumption rate of . Mild parallel hybrid These types use a generally compact electric motor (usually <20 kW) to provide auto-stop/start features and to provide extra power assist during the acceleration, and to generate on the deceleration phase (also known as regenerative braking). 
On-road examples include Honda Civic Hybrid, Honda Insight 2nd generation, Honda CR-Z, Honda Accord Hybrid, Mercedes Benz S400 BlueHYBRID, BMW 7 Series hybrids, General Motors BAS Hybrids, Suzuki S-Cross, Suzuki Wagon R and Smart fortwo with micro hybrid drive. Power-split or series-parallel hybrid In a power-split hybrid electric drive train, there are two motors: a traction electric motor and an internal combustion engine. The power from these two motors can be shared to drive the wheels via a power split device, which is a simple planetary gear set. The ratio can be from 100% for the combustion engine to 100% for the traction electric motor, or anything in between. The combustion engine can act as a generator charging the batteries. Modern versions such as the Toyota Hybrid Synergy Drive have a second electric motor/generator connected to the planetary gear. In cooperation with the traction motor/generator and the power-split device, this provides a continuously variable transmission. On the open road, the primary power source is the internal combustion engine. When maximum power is required, for example, to overtake, the traction electric motor is used to assist. This increases the available power for a short period, giving the effect of having a larger engine than actually installed. In most applications, the combustion engine is switched off when the car is slow or stationary thereby reducing curbside emissions. Passenger car installations include Toyota Prius, Ford Escape and Fusion, as well as Lexus RX400h, RX450h, GS450h, LS600h, and CT200h. Series hybrid A series- or serial-hybrid vehicle is driven by an electric motor, functioning as an electric vehicle while the battery pack energy supply is sufficient, with an engine tuned for running as a generator when the battery pack is insufficient. There is typically no mechanical connection between the engine and the wheels, and the primary purpose of the range extender is to charge the battery. Series-hybrids have also been referred to as extended range electric vehicle, range-extended electric vehicle, or electric vehicle-extended range (EREV/REEV/EVER). The BMW i3 with range extender is a production series-hybrid. It operates as an electric vehicle until the battery charge is low, and then activates an engine-powered generator to maintain power, and is also available without the range extender. The Fisker Karma was the first series-hybrid production vehicle. When describing cars, the battery of a series-hybrid is usually charged by being plugged in—but a series-hybrid may also allow for a battery to only act as a buffer (and for regeneration purposes), and for the electric motor's power to be supplied constantly by a supporting engine. Series arrangements have been common in diesel-electric locomotives and ships. Ferdinand Porsche effectively invented this arrangement in speed-record-setting racing cars in the early 20th century, such as the Lohner–Porsche Mixte Hybrid. Porsche named his arrangement "System Mixt" and it was a wheel hub motor design, where each of the two front wheels was powered by a separate motor. This arrangement was sometimes referred to as an electric transmission, as the electric generator and driving motor replaced a mechanical transmission. The vehicle could not move unless the internal combustion engine was running. In 1997 Toyota released the first series-hybrid bus sold in Japan. 
GM introduced the Chevy Volt series plug-in hybrid in 2010, aiming for an all-electric range of , though this car also has a mechanical connection between the engine and drivetrain. Supercapacitors combined with a lithium-ion battery bank have been used by AFS Trinity in a converted Saturn Vue SUV vehicle. Using supercapacitors they claim up to 150 mpg in a series-hybrid arrangement. Nissan Note e-power is an example of a series hybrid technology since 2016 in Japan. Plug-in hybrid electric vehicle Another subtype of hybrid vehicles is the plug-in hybrid electric vehicle. The plug-in hybrid is usually a general fuel-electric (parallel or serial) hybrid with increased energy storage capacity, usually through a lithium-ion battery, which allows the vehicle to drive on all-electric mode a distance that depends on the battery size and its mechanical layout (series or parallel). It may be connected to mains electricity supply at the end of the journey to avoid charging using the on-board internal combustion engine. This concept is attractive to those seeking to minimize on-road emissions by avoiding—or at least minimizing—the use of ICE during daily driving. As with pure electric vehicles, the total emissions saving, for example in CO2 terms, is dependent upon the energy source of the electricity generating company. For some users, this type of vehicle may also be financially attractive so long as the electrical energy being used is cheaper than the petrol/diesel that they would have otherwise used. Current tax systems in many European countries use mineral oil taxation as a major income source. This is generally not the case for electricity, which is taxed uniformly for the domestic customer, however that person uses it. Some electricity suppliers also offer price benefits for off-peak night users, which may further increase the attractiveness of the plug-in option for commuters and urban motorists. Road safety for cyclists, pedestrians A 2009 National Highway Traffic Safety Administration report examined hybrid electric vehicle accidents that involved pedestrians and cyclists and compared them to accidents involving internal combustion engine vehicles (ICEV). The findings showed that, in certain road situations, HEVs are more dangerous for those on foot or bicycle. For accidents where a vehicle was slowing or stopping, backing up, entering, or leaving a parking space (when the sound difference between HEVs and ICEVs is most pronounced), HEVs were twice as likely to be involved in a pedestrian crash than ICEVs. For crashes involving cyclists or pedestrians, there was a higher incident rate for HEVs than ICEVs when a vehicle was turning a corner. However, there was no statistically significant difference between the types of vehicles when they were driving straight. Several automakers developed electric vehicle warning sounds designed to alert pedestrians to the presence of electric drive vehicles such as hybrid electric vehicle, plug-in hybrid electric vehicles and all-electric vehicles (EVs) travelling at low speeds. Their purpose is to make pedestrians, cyclists, the blind, and others aware of the vehicle's presence while operating in all-electric mode. Vehicles in the market with such safety devices include the Nissan Leaf, Chevrolet Volt, Fisker Karma, Honda FCX Clarity, Nissan Fuga Hybrid/Infiniti M35, Hyundai ix35 FCEV, Hyundai Sonata Hybrid, 2012 Honda Fit EV, the 2012 Toyota Camry Hybrid, 2012 Lexus CT200h, and all the Prius family of cars. 
Environmental issues Fuel consumption and emissions reductions The hybrid vehicle typically achieves greater fuel economy and lower emissions than conventional internal combustion engine vehicles (ICEVs), resulting in fewer emissions being generated. These savings are primarily achieved by three elements of a typical hybrid design: Relying on both the engine and the electric motors for peak power needs, resulting in a smaller engine size more for average usage rather than peak power usage. A smaller engine can have fewer internal losses and lower weight. Having significant battery storage capacity to store and reuse recaptured energy, especially in stop-and-go traffic typical of the city driving cycle. Recapturing significant amounts of energy during braking that are normally wasted as heat. This regenerative braking reduces vehicle speed by converting some of its kinetic energy into electricity, depending upon the power rating of the motor/generator; Other techniques that are not necessarily 'hybrid' features, but that are frequently found on hybrid vehicles include: Using Atkinson cycle engines instead of Otto cycle engines for improved fuel economy. Shutting down the engine during traffic stops or while coasting or during other idle periods. Improving aerodynamics; (part of the reason that SUVs get such bad fuel economy is the drag on the car. A box-shaped car or truck has to exert more force to move through the air causing more stress on the engine making it work harder). Improving the shape and aerodynamics of a car is a good way to help better the fuel economy and also improve vehicle handling at the same time. Using low rolling resistance tires (tires were often made to give a quiet, smooth ride, high grip, etc., but efficiency was a lower priority). Tires cause mechanical drag, once again making the engine work harder, consuming more fuel. Hybrid cars may use special tires that are more inflated than regular tires and stiffer or by choice of carcass structure and rubber compound have lower rolling resistance while retaining acceptable grip, and so improving fuel economy whatever the power source. Powering the a/c, power steering, and other auxiliary pumps electrically as and when needed; this reduces mechanical losses when compared with driving them continuously with traditional engine belts. These features make a hybrid vehicle particularly efficient for city traffic where there are frequent stops, coasting, and idling periods. In addition noise emissions are reduced, particularly at idling and low operating speeds, in comparison to conventional engine vehicles. For continuous high-speed highway use, these features are much less useful in reducing emissions. Hybrid vehicle emissions Hybrid vehicle emissions today are getting close to or even lower than the recommended level set by the EPA (Environmental Protection Agency). The recommended levels they suggest for a typical passenger vehicle should be equated to 5.5 metric tons of . The three most popular hybrid vehicles, Honda Civic, Honda Insight and Toyota Prius, set the standards even higher by producing 4.1, 3.5, and 3.5 tons showing a major improvement in carbon dioxide emissions. Hybrid vehicles can reduce air emissions of smog-forming pollutants by up to 90% and cut carbon dioxide emissions in half. An increase in fossil fuels is needed to build hybrid vehicles versus conventional cars. This increase is more than offset by reduced emissions when running the vehicle. 
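As a rough illustration of the regenerative-braking element listed above, the following Python sketch estimates the energy available per stop; the vehicle mass, speed, and recovery efficiency are invented assumptions, not measured figures for any particular hybrid:

```python
# Rough estimate of the energy recoverable by regenerative braking per stop.
# All inputs are illustrative assumptions, not measured vehicle data.

mass_kg = 1500.0           # assumed vehicle mass
speed_kmh = 50.0           # assumed speed before braking
recovery_efficiency = 0.6  # assumed fraction of kinetic energy captured

speed_ms = speed_kmh / 3.6
kinetic_energy_j = 0.5 * mass_kg * speed_ms ** 2
recovered_wh = kinetic_energy_j * recovery_efficiency / 3600.0

print(f"Kinetic energy at {speed_kmh:.0f} km/h: {kinetic_energy_j / 1000:.0f} kJ")
print(f"Recovered per stop (at {recovery_efficiency:.0%} efficiency): {recovered_wh:.0f} Wh")
```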
Hybrid emissions have been understated when comparing certification cycles to real-world driving. In one study using real-world driving data, it was shown they use on average 120 g of per km instead of the 44 g per km in official tests. Toyota states that three Hybrid vehicles equal one battery electric vehicle in reduction effect from carbon neutrality viewpoint which means reducing emissions to zero throughout the entire life cycle of a product, starting from procurement of raw materials, manufacturing, and transportation to use, recycling, and disposal. Environmental impact of hybrid car battery Though hybrid cars consume less fuel than conventional cars, there is still an issue regarding the environmental damage of the hybrid car battery. Today, most hybrid car batteries are Lithium-ion, which has higher energy density than nickel–metal hydride batteries and is more environmentally friendly than lead-based batteries which constitute the bulk of petrol car starter batteries today. There are many types of batteries. Some are far more toxic than others. Lithium-ion is the least toxic of the batteries mentioned above. The toxicity levels and environmental impact of nickel metal hydride batteries—the type previously used in hybrids—are much lower than batteries like lead acid or nickel cadmium according to one source. Another source claims nickel metal hydride batteries are much more toxic than lead batteries, also that recycling them and disposing of them safely is difficult. In general various soluble and insoluble nickel compounds, such as nickel chloride and nickel oxide, have known carcinogenic effects in chick embryos and rats. The main nickel compound in NiMH batteries is nickel oxyhydroxide (NiOOH), which is used as the positive electrode. However Nickel Metal Hydride Batteries have fallen out of favour in hybrid vehicles as various lithium-ion chemistries have become more mature to market. The lithium-ion battery has become a market leader in this segment due to its high energy density, stability, and cost when compared to other technologies. A market leader in this area is Panasonic with their partnership with Tesla The lithium-ion batteries are appealing because they have the highest energy density of any rechargeable batteries and can produce a voltage more than three times that of nickel–metal hydride battery cell while simultaneously storing large quantities of electricity as well. The batteries also produce higher output (boosting vehicle power), higher efficiency (avoiding wasteful use of electricity), and provides excellent durability, compared with the life of the battery being roughly equivalent to the life of the vehicle. Additionally, the use of lithium-ion batteries reduces the overall weight of the vehicle and also achieves improved fuel economy of 30% better than petro-powered vehicles with a consequent reduction in CO2 emissions helping to prevent global warming. Lithium-ion batteries are also safer to recycle, with Volkswagen Group pioneering processes to recycle lithium-ion batteries; this is also being chased by various other large companies, such as BMW, Audi, Mercedes-Benz and Tesla. The main goal within many of these companies is to combat disinformation about the nature of lithium batteries, primarily that they are not recyclable, which primarily stem from articles discussing the difficulties of recycling. Charging There are two different levels of charging in plug-in hybrids. 
Level one charging is the slower method as it uses a 120 V/15 A single-phase grounded outlet. Level two is a faster method; existing Level 2 equipment offers charging from 208 V or 240 V (at up to 80 A, 19.2 kW). It may require dedicated equipment and a connection installation for home or public units. The optimum charging window for lithium-ion batteries is 3–4.2 V. Recharging with a 120-volt household outlet takes several hours, a 240-volt charger takes 1–4 hours, and a quick charge takes approximately 30 minutes to achieve 80% charge. Three important factors—distance on charge, cost of charging, and time to charge In order for hybrids to run on electrical power, the car must perform the action of braking in order to generate some electricity. The electricity then gets discharged most effectively when the car accelerates or climbs up an incline. In 2014, hybrid electric car batteries can run on solely electricity for 70–130 miles (110–210 km) on a single charge. Hybrid battery capacity currently ranges from 4.4 kWh to 85 kWh on a fully electric car. On a hybrid car, the battery packs currently range from 0.6 kWh to 2.4 kWh representing a large difference in use of electricity in hybrid cars. Raw materials increasing costs There is an impending increase in the costs of many rare materials used in the manufacture of hybrid cars. For example, the rare-earth element dysprosium is required to fabricate many of the advanced electric motors and battery systems in hybrid propulsion systems. Neodymium is another rare earth metal which is a crucial ingredient in high-strength magnets that are found in permanent magnet electric motors. Nearly all the rare-earth elements in the world come from China, and many analysts believe that an overall increase in Chinese electronics manufacturing will consume this entire supply by 2012. In addition, export quotas on Chinese rare-earth elements have resulted in an unknown amount of supply. A few non-Chinese sources such as the advanced Hoidas Lake project in northern Canada as well as Mount Weld in Australia are currently under development; however, the barriers to entry are high and require years to go online. How hybrid-electric vehicles work Hybrid-electric vehicles (HEVs) combine the advantage of gasoline engines and electric motors. The key areas for efficiency or performance gains are regenerative braking, dual power sources, and less idling. Regenerative braking. The electric motor normally converts electricity into physical motion. Used in reverse as a generator, it can also convert physical motion into electricity. This both slows the car (braking) and recharges (regenerates) the batteries. Dual power. Power can come from either the engine, motor, or both depending on driving circumstances. Additional power to assist the engine in accelerating or climbing might be provided by the electric motor. Or more commonly, a smaller electric motor provides all of the power for low-speed driving conditions and is augmented by the engine at higher speeds. Automatic start/shutoff. It automatically shuts off the engine when the vehicle comes to a stop and restarts it when the accelerator is pressed down. This automation is much simpler with an electric motor. Also, see dual power above. Alternative green vehicles Other types of green vehicles include other vehicles that go fully or partly on alternative energy sources than fossil fuel. Another option is to use alternative fuel composition (i.e. 
biofuels) in conventional fossil fuel-based vehicles, making them go partly on renewable energy sources. Other approaches include personal rapid transit, a public transportation concept that offers automated on-demand non-stop transportation, on a network of specially built guideways. Marketing Adoption Automakers spend around $US8 million in marketing Hybrid vehicles each year. With combined effort from many car companies, the Hybrid industry has sold millions of Hybrids. Hybrid car companies like Toyota, Honda, Ford, and BMW have pulled together to create a movement of Hybrid vehicle sales pushed by Washington lobbyists to lower the world's emissions and become less reliant on our petroleum consumption. In 2005, sales went beyond 200,000 Hybrids, but in retrospect that only reduced the global use for gasoline consumption by 200,000 gallons per day—a tiny fraction of the 360 million gallons used per day. According to Bradley Berman author of Driving Change—One Hybrid at a time, "cold economics shows that in real dollars, except for a brief spike in the 1970s, gas prices have remained remarkably steady and cheap. Fuel continues to represent a small part of the overall cost of owning and operating a personal vehicle". Other marketing tactics include greenwashing which is the "unjustified appropriation of environmental virtue." Temma Ehrenfeld explained in an article by Newsweek. Hybrids may be more efficient than many other gasoline motors as far as gasoline consumption is concerned but as far as being green and good for the environment is completely inaccurate. Hybrid car companies have a long time to go if they expect to really go green. According to Harvard business professor Theodore Levitt states "managing products" and "meeting customers' needs", "you must adapt to consumer expectations and anticipation of future desires." This means people buy what they want, if they want a fuel efficient car they buy a Hybrid without thinking about the actual efficiency of the product. This "green myopia" as Ottman calls it, fails because marketers focus on the greenness of the product and not on the actual effectiveness. Researchers and analysts say people are drawn to the new technology, as well as the convenience of fewer fill-ups. Secondly, people find it rewarding to own the better, newer, flashier, and so-called greener car. Misleading advertising In 2019 the term self-charging hybrid became prevalent in advertising, though cars referred to by this name do not offer any different functionality than a standard hybrid electric vehicle provides. The only self-charging effect is in energy recovery via regenerative braking, which is also true of plug-in hybrids, fuel cell electric vehicles and battery electric vehicles. In January 2020, using this term has been prohibited in Norway, for misleading advertising by Toyota and Lexus. "Our claim is based on the fact that customers never have to charge the battery of their vehicle, as it is recharged during the vehicle use. There is no intention to mislead customers, on the contrary: the point is to clearly explain the difference with plug-in hybrid vehicles." Adoption rate While the adoption rate for hybrids in the US is small today (2.2% of new car sales in 2011), this compares with a 17.1% share of new car sales in Japan in 2011, and it has the potential to be very large over time as more models are offered and incremental costs decline due to learning and scale benefits. However, forecasts vary widely. 
For instance, Bob Lutz, a long-time skeptic of hybrids, indicated that he expects hybrids "will never comprise more than 10% of the US auto market." Other sources also expect hybrid penetration rates in the US to remain under 10% for many years. More optimistic views as of 2006 included predictions that hybrids would dominate new car sales in the US and elsewhere over the following 10 to 20 years. Another approach, taken by Saurin Shah, examines the penetration rates (or S-curves) of four analogs (historical and current) to hybrid and electric vehicles in an attempt to gauge how quickly the vehicle stock could be hybridized and/or electrified in the United States. The analogs are (1) the electric motors in US factories in the early 20th century, (2) diesel-electric locomotives on US railways in the 1920–1945 period, (3) a range of new automotive features/technologies introduced in the US over the past fifty years, and (4) e-bike purchases in China over the past few years. These analogs collectively suggest it would take at least 30 years for hybrid and electric vehicles to capture 80% of the US passenger vehicle stock. The EPA expects the combined market share of new gasoline hybrid light-duty vehicles to reach 13.6% for the 2023 model year, up from 10.2% in the 2022 model year. European Union 2020 regulation standards The European Parliament, Council, and European Commission have reached an agreement aimed at reducing average CO2 passenger car emissions to 95 g/km by 2020, according to a European Commission press release. According to the release, the key details of the agreement are as follows: Emissions target: The agreement will reduce average CO2 emissions from new cars to 95 g/km from 2020, as proposed by the commission. This is a 40% reduction from the mandatory 2015 target of 130 g/km. The target is an average for each manufacturer's new car fleet; it allows OEMs to build some vehicles that emit less than the average and some that emit more. 2025 target: The commission is required to propose a further emissions reduction target by the end of 2015, to take effect in 2025. This target will be in line with the EU's long-term climate goals. Super credits for low-emission vehicles: The Regulation will give manufacturers additional incentives to produce cars with CO2 emissions of 50 g/km or less (which will be electric or plug-in hybrid cars). Each of these vehicles will be counted as two vehicles in 2020, 1.67 in 2021, 1.33 in 2022, and then as one vehicle from 2023 onwards. These super credits will help manufacturers further reduce the average emissions of their new car fleet. However, to prevent the scheme from undermining the environmental integrity of the legislation, there will be a 2.5 g/km cap per manufacturer on the contribution that super credits can make to their target in any year. See also Alternative propulsion Bivalent (engine) Efficient energy use Electric vehicle Global Hybrid Cooperation Global warming Human-electric hybrid vehicle Hybrid vehicle drivetrain List of hybrid vehicles Multifuel stove PNGV Solid oxide fuel cell Triple-hybrid References External links Hybrid Taxi Pilot Program The Future of Flight (Obese Pelicans to Shape-Shifting Switchblades) Engines Electric vehicles Hybrid electric buses Fossil fuel phase-out
Hybrid vehicle
[ "Physics", "Technology" ]
9,677
[ "Physical systems", "Machines", "Engines" ]
157,755
https://en.wikipedia.org/wiki/Fermat%20primality%20test
The Fermat primality test is a probabilistic test to determine whether a number is a probable prime. Concept Fermat's little theorem states that if p is prime and a is not divisible by p, then a^(p−1) ≡ 1 (mod p). If one wants to test whether p is prime, then we can pick random integers a not divisible by p and see whether the congruence holds. If it does not hold for a value of a, then p is composite. This congruence is unlikely to hold for a random a if p is composite. Therefore, if the equality does hold for one or more values of a, then we say that p is probably prime. However, note that the above congruence holds trivially for a ≡ 1 (mod p), because the congruence relation is compatible with exponentiation. It also holds trivially for a ≡ −1 (mod p) if p is odd, for the same reason. That is why one usually chooses a random a in the interval 1 < a < p − 1. Any a such that a^(n−1) ≡ 1 (mod n) when n is composite is known as a Fermat liar. In this case n is called a Fermat pseudoprime to base a. If we do pick an a such that a^(n−1) ≢ 1 (mod n), then a is known as a Fermat witness for the compositeness of n. Example Suppose we wish to determine whether n = 221 is prime. Randomly pick 1 < a < 220, say a = 38. We check the above congruence and find that it holds: 38^220 ≡ 1 (mod 221). Either 221 is prime, or 38 is a Fermat liar, so we take another a, say 24: 24^220 ≡ 81 ≢ 1 (mod 221). So 221 is composite and 38 was indeed a Fermat liar. Furthermore, 24 is a Fermat witness for the compositeness of 221. Algorithm The algorithm can be written as follows: Inputs: n: a value to test for primality, n > 3; k: a parameter that determines the number of times to test for primality. Output: composite if n is composite, otherwise probably prime. Repeat k times: pick a randomly in the range [2, n − 2]; if a^(n−1) ≢ 1 (mod n), then return composite. If composite is never returned: return probably prime. The a values 1 and n − 1 are not used as the equality holds for all n and all odd n respectively, hence testing them adds no value. Complexity Using fast algorithms for modular exponentiation and multiprecision multiplication, the running time of this algorithm is O(k log² n log log n), where k is the number of times we test a random a, and n is the value we want to test for primality; see Miller–Rabin primality test for details. Flaw There are infinitely many Fermat pseudoprimes to any given base a > 1. Even worse, there are infinitely many Carmichael numbers. These are composite numbers n for which all values of a with gcd(a, n) = 1 are Fermat liars. For these numbers, repeated application of the Fermat primality test performs the same as a simple random search for factors. While Carmichael numbers are substantially rarer than prime numbers (Erdős' upper bound for the number of Carmichael numbers is lower than the prime-counting function n/log(n)), there are enough of them that Fermat's primality test is not often used in the above form. Instead, other more powerful extensions of the Fermat test, such as Baillie–PSW, Miller–Rabin, and Solovay–Strassen, are more commonly used. In general, if n is a composite number that is not a Carmichael number, then at least half of all a coprime to n are Fermat witnesses. For proof of this, let a be a Fermat witness coprime to n and a1, a2, ..., as be Fermat liars. Then (a·ai)^(n−1) ≡ a^(n−1) · ai^(n−1) ≡ a^(n−1) ≢ 1 (mod n), and so all a·ai for i = 1, 2, ..., s are Fermat witnesses. Applications As mentioned above, most applications use a Miller–Rabin or Baillie–PSW test for primality. Sometimes a Fermat test (along with some trial division by small primes) is performed first to improve performance. GMP since version 3.0 uses a base-210 Fermat test after trial division and before running Miller–Rabin tests. Libgcrypt uses a similar process with base 2 for the Fermat test, but OpenSSL does not.
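To make the procedure concrete, here is a minimal Python sketch of the test as described above (the function name and the parameter k are illustrative choices for this example, not part of any standard library; Python's built-in three-argument pow performs the fast modular exponentiation mentioned in the Complexity section):

```python
import random

def fermat_is_probable_prime(n: int, k: int = 20) -> bool:
    """Return False if n is definitely composite, True if n is a probable prime
    after k rounds of the Fermat test."""
    if n <= 3:
        return n in (2, 3)            # handle small n; the algorithm above assumes n > 3
    for _ in range(k):
        a = random.randint(2, n - 2)  # bases 1 and n - 1 are skipped, as explained above
        if pow(a, n - 1, n) != 1:     # fast modular exponentiation
            return False              # a is a Fermat witness, so n is composite
    return True                       # no witness found: n is probably prime

# The worked example: 38 is a Fermat liar for 221, while 24 is a Fermat witness.
print(pow(38, 220, 221))   # 1  -> 221 passes base 38 despite being 13 * 17
print(pow(24, 220, 221))   # 81 -> 221 fails base 24, proving it composite
# The flaw: for a Carmichael number such as 561 = 3 * 11 * 17, every base coprime
# to 561 is a Fermat liar, e.g.:
print(pow(2, 560, 561))    # 1  -> base 2 falsely suggests that 561 is prime
```

Such a sketch reports "probably prime" for a Carmichael number whenever none of the randomly chosen bases happens to share a factor with it, which is why, as noted above, libraries such as GMP follow a Fermat pretest with Miller–Rabin rounds.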
In practice, with most big-number libraries such as GMP, the Fermat test is not noticeably faster than a Miller–Rabin test, and can be slower for many inputs. As an exception, OpenPFGW uses only the Fermat test for probable prime testing; the program is typically used with multi-thousand-digit inputs, with a goal of maximum speed on very large inputs. Another well-known program that relies only on the Fermat test is PGP, where it is used only for testing self-generated large random values (an open-source counterpart, GNU Privacy Guard, uses a Fermat pretest followed by Miller–Rabin tests). References Primality tests Modular arithmetic
Fermat primality test
[ "Mathematics" ]
1,000
[ "Arithmetic", "Modular arithmetic", "Number theory" ]
157,774
https://en.wikipedia.org/wiki/Permafrost
Permafrost is soil or underwater sediment which continuously remains below 0 °C (32 °F) for two years or more: the oldest permafrost has been continuously frozen for around 700,000 years. Whilst the shallowest permafrost has a vertical extent of below a meter (3 ft), the deepest is greater than . Similarly, the area of individual permafrost zones may be limited to narrow mountain summits or extend across vast Arctic regions. The ground beneath glaciers and ice sheets is not usually defined as permafrost, so on land, permafrost is generally located beneath a so-called active layer of soil which freezes and thaws depending on the season. Around 15% of the Northern Hemisphere or 11% of the global surface is underlain by permafrost, covering a total area of around . This includes large areas of Alaska, Canada, Greenland, and Siberia. It is also located in high mountain regions, with the Tibetan Plateau being a prominent example. Only a minority of permafrost exists in the Southern Hemisphere, where it is confined to mountain slopes such as the Andes of Patagonia, the Southern Alps of New Zealand, or the highest mountains of Antarctica. Permafrost contains large amounts of dead biomass that have accumulated throughout millennia without having had the chance to fully decompose and release their carbon, making tundra soil a carbon sink. As global warming heats the ecosystem, frozen soil thaws and becomes warm enough for decomposition to start anew, accelerating the permafrost carbon cycle. Depending on conditions at the time of thaw, decomposition can release either carbon dioxide or methane, and these greenhouse gas emissions act as a climate change feedback. The emissions from thawing permafrost will have a sufficient impact on the climate to affect global carbon budgets. It is difficult to accurately predict how much greenhouse gas the permafrost will release, because the different thaw processes are still uncertain. There is widespread agreement that the emissions will be smaller than human-caused emissions and not large enough to result in runaway warming. Instead, the annual permafrost emissions are likely comparable with global emissions from deforestation, or to annual emissions of large countries such as Russia, the United States or China. Apart from its climate impact, permafrost thaw brings more risks. Formerly frozen ground often contains enough ice that when it thaws, hydraulic saturation is suddenly exceeded, so the ground shifts substantially and may even collapse outright. Many buildings and other infrastructure were built on permafrost when it was frozen and stable, and so are vulnerable to collapse if it thaws. Estimates suggest nearly 70% of such infrastructure is at risk by 2050, and that the associated costs could rise to tens of billions of dollars in the second half of the century. Furthermore, between 13,000 and 20,000 sites contaminated with toxic waste are present in the permafrost, as well as natural mercury deposits, all of which are liable to leak and pollute the environment as the warming progresses. Lastly, concerns have been raised about the potential for pathogenic microorganisms surviving the thaw and contributing to future pandemics. However, this is considered unlikely, and a scientific review on the subject describes the risks as "generally low". Classification and extent Permafrost is soil, rock or sediment that is frozen for more than two consecutive years. In practice, this means that permafrost occurs at a mean annual temperature of or below.
In the coldest regions, the depth of continuous permafrost can exceed . It typically exists beneath the so-called active layer, which freezes and thaws annually, and so can support plant growth, as the roots can only take hold in the soil that's thawed. Active layer thickness is measured during its maximum extent at the end of summer: as of 2018, the average thickness in the Northern Hemisphere is ~, but there are significant regional differences. Northeastern Siberia, Alaska and Greenland have the most solid permafrost with the lowest extent of active layer (less than on average, and sometimes only ), while southern Norway and the Mongolian Plateau are the only areas where the average active layer is deeper than , with the record of . The border between active layer and permafrost itself is sometimes called permafrost table. Around 15% of Northern Hemisphere land that is not completely covered by ice is directly underlain by permafrost; 22% is defined as part of a permafrost zone or region. This is because only slightly more than half of this area is defined as a continuous permafrost zone, where 90%–100% of the land is underlain by permafrost. Around 20% is instead defined as discontinuous permafrost, where the coverage is between 50% and 90%. Finally, the remaining <30% of permafrost regions consists of areas with 10%–50% coverage, which are defined as sporadic permafrost zones, and some areas that have isolated patches of permafrost covering 10% or less of their area. Most of this area is found in Siberia, northern Canada, Alaska and Greenland. Beneath the active layer annual temperature swings of permafrost become smaller with depth. The greatest depth of permafrost occurs right before the point where geothermal heat maintains a temperature above freezing. Above that bottom limit there may be permafrost with a consistent annual temperature—"isothermal permafrost". Continuity of coverage Permafrost typically forms in any climate where the mean annual air temperature is lower than the freezing point of water. Exceptions are found in humid boreal forests, such as in Northern Scandinavia and the North-Eastern part of European Russia west of the Urals, where snow acts as an insulating blanket. Glaciated areas may also be exceptions. Since all glaciers are warmed at their base by geothermal heat, temperate glaciers, which are near the pressure melting point throughout, may have liquid water at the interface with the ground and are therefore free of underlying permafrost. "Fossil" cold anomalies in the geothermal gradient in areas where deep permafrost developed during the Pleistocene persist down to several hundred metres. This is evident from temperature measurements in boreholes in North America and Europe. Discontinuous permafrost The below-ground temperature varies less from season to season than the air temperature, with mean annual temperatures tending to increase with depth due to the geothermal crustal gradient. Thus, if the mean annual air temperature is only slightly below , permafrost will form only in spots that are sheltered (usually with a northern or southern aspect, in the north and south hemispheres respectively) creating discontinuous permafrost. Usually, permafrost will remain discontinuous in a climate where the mean annual soil surface temperature is between . In the moist-wintered areas mentioned before, there may not even be discontinuous permafrost down to . 
Discontinuous permafrost is often further divided into extensive discontinuous permafrost, where permafrost covers between 50 and 90 percent of the landscape and is usually found in areas with mean annual temperatures between , and sporadic permafrost, where permafrost cover is less than 50 percent of the landscape and typically occurs at mean annual temperatures between . In soil science, the sporadic permafrost zone is abbreviated SPZ and the extensive discontinuous permafrost zone DPZ. Exceptions occur in un-glaciated Siberia and Alaska, where the present depth of permafrost is a relic of climatic conditions during glacial ages, when winters were up to colder than those of today. Continuous permafrost At mean annual soil surface temperatures below the influence of aspect can never be sufficient to thaw permafrost and a zone of continuous permafrost (abbreviated to CPZ) forms. A line of continuous permafrost in the Northern Hemisphere represents the southernmost border where land is covered by continuous permafrost or glacial ice. The line of continuous permafrost varies around the world northward or southward due to regional climatic changes. In the southern hemisphere, most of the equivalent line would fall within the Southern Ocean if there were land there. Most of the Antarctic continent is overlain by glaciers, under which much of the terrain is subject to basal melting. The exposed land of Antarctica is substantially underlain with permafrost, some of which is subject to warming and thawing along the coastline. Alpine permafrost A range of elevations in both the Northern and Southern Hemisphere are cold enough to support perennially frozen ground: some of the best-known examples include the Canadian Rockies, the European Alps, the Himalaya and the Tien Shan. In general, it has been found that extensive alpine permafrost requires mean annual air temperature of , though this can vary depending on local topography, and some mountain areas are known to support permafrost at . It is also possible for subsurface alpine permafrost to be covered by warmer, vegetation-supporting soil. Alpine permafrost is particularly difficult to study, and systematic research efforts did not begin until the 1970s. Consequently, there remain uncertainties about its geography. As recently as 2009, permafrost was discovered in a new area – Africa's highest peak, Mount Kilimanjaro ( above sea level and approximately 3° south of the equator). In 2014, a collection of regional estimates of alpine permafrost extent established a global extent of . Yet, by 2014, alpine permafrost in the Andes had not been fully mapped, although its extent has been modeled to assess the amount of water bound up in these areas. Subsea permafrost Subsea permafrost occurs beneath the seabed and exists in the continental shelves of the polar regions. These areas formed during the last Ice Age, when a larger portion of Earth's water was bound up in ice sheets on land and when sea levels were low. As the ice sheets melted to again become seawater during the Holocene glacial retreat, coastal permafrost became submerged shelves under relatively warm and salty boundary conditions, compared to surface permafrost. Since then, these conditions led to the gradual and ongoing decline of subsea permafrost extent.
Nevertheless, its presence remains an important consideration for the "design, construction, and operation of coastal facilities, structures founded on the seabed, artificial islands, sub-sea pipelines, and wells drilled for exploration and production". Subsea permafrost can also overlay deposits of methane clathrate, which were once speculated to be a major climate tipping point in what was known as the clathrate gun hypothesis, but are now no longer believed to play any role in projected climate change. Past extent of permafrost At the Last Glacial Maximum, continuous permafrost covered a much greater area than it does today, covering all of ice-free Europe south to about Szeged (southeastern Hungary) and the Sea of Azov (then dry land) and East Asia south to present-day Changchun and Abashiri. In North America, only an extremely narrow belt of permafrost existed south of the ice sheet at about the latitude of New Jersey through southern Iowa and northern Missouri, but permafrost was more extensive in the drier western regions where it extended to the southern border of Idaho and Oregon. In the Southern Hemisphere, there is some evidence for former permafrost from this period in central Otago and Argentine Patagonia, but it was probably discontinuous and related to the tundra. Alpine permafrost also occurred in the Drakensberg during glacial maxima above about . Manifestations Base depth Permafrost extends to a base depth where geothermal heat from the Earth and the mean annual temperature at the surface achieve an equilibrium temperature of 0 °C (32 °F). This base depth of permafrost can vary wildly – it is less than a meter (3 ft) in the areas where it is shallowest, yet reaches in the northern Lena and Yana River basins in Siberia. Calculations indicate that the formation of permafrost slows greatly beyond the first several metres. For instance, over half a million years was required to form the deep permafrost underlying Prudhoe Bay, Alaska, a time period extending over several glacial and interglacial cycles of the Pleistocene. Base depth is affected by the underlying geology, and particularly by thermal conductivity, which is lower for permafrost in soil than in bedrock. Lower conductivity leaves permafrost less affected by the geothermal gradient, which is the rate of increasing temperature with respect to increasing depth in the Earth's interior. It occurs as the Earth's internal thermal energy is generated by radioactive decay of unstable isotopes and flows to the surface by conduction at a rate of ~47 terawatts (TW). Away from tectonic plate boundaries, this corresponds to an average geothermal gradient of 25–30 °C/km (72–87 °F/mi) near the surface. Massive ground ice When the ice content of permafrost exceeds 250 percent (ice to dry soil by mass) it is classified as massive ice. Massive ice bodies can range in composition, in every conceivable gradation from icy mud to pure ice. Massive icy beds have a minimum thickness of at least 2 m and a short diameter of at least 10 m. The first recorded North American observations of this phenomenon were by European scientists at Canning River (Alaska) in 1919. Russian literature provides an earlier date of 1735 and 1739 during the Great North Expedition by P. Lassinius and Khariton Laptev, respectively. Russian investigators including I.A. Lopatin, B. Khegbomov, S. Taber and G. Beskow had also formulated the original theories for ice inclusion in freezing soils.
While there are four categories of ice in permafrost – pore ice, ice wedges (also known as vein ice), buried surface ice and intrasedimental (sometimes also called constitutional) ice – only the last two tend to be large enough to qualify as massive ground ice. These two types usually occur separately, but may be found together, like on the coast of Tuktoyaktuk in western Arctic Canada, where the remains of Laurentide Ice Sheet are located. Buried surface ice may derive from snow, frozen lake or sea ice, aufeis (stranded river ice) and even buried glacial ice from the former Pleistocene ice sheets. The latter hold enormous value for paleoglaciological research, yet even as of 2022, the total extent and volume of such buried ancient ice is unknown. Notable sites with known ancient ice deposits include Yenisei River valley in Siberia, Russia as well as Banks and Bylot Island in Canada's Nunavut and Northwest Territories. Some of the buried ice sheet remnants are known to host thermokarst lakes. Intrasedimental or constitutional ice has been widely observed and studied across Canada. It forms when subterranean waters freeze in place, and is subdivided into intrusive, injection and segregational ice. The latter is the dominant type, formed after crystallizational differentiation in wet sediments, which occurs when water migrates to the freezing front under the influence of van der Waals forces. This is a slow process, which primarily occurs in silts with salinity less than 20% of seawater: silt sediments with higher salinity and clay sediments instead have water movement prior to ice formation dominated by rheological processes. Consequently, it takes between 1 and 1000 years to form intrasedimental ice in the top 2.5 meters of clay sediments, yet it takes between 10 and 10,000 years for peat sediments and between 1,000 and 1,000,000 years for silt sediments. Landforms Permafrost processes such as thermal contraction generating cracks which eventually become ice wedges and solifluction – gradual movement of soil down the slope as it repeatedly freezes and thaws – often lead to the formation of ground polygons, rings, steps and other forms of patterned ground found in arctic, periglacial and alpine areas. In ice-rich permafrost areas, melting of ground ice initiates thermokarst landforms such as thermokarst lakes, thaw slumps, thermal-erosion gullies, and active layer detachments. Notably, unusually deep permafrost in Arctic moorlands and bogs often attracts meltwater in warmer seasons, which pools and freezes to form ice lenses, and the surrounding ground begins to jut outward at a slope. This can eventually result in the formation of large-scale land forms around this core of permafrost, such as palsas – long (), wide () yet shallow (< tall) peat mounds – and the even larger pingos, which can be high and in diameter. Ecology Only plants with shallow roots can survive in the presence of permafrost. Black spruce tolerates limited rooting zones, and dominates flora where permafrost is extensive. Likewise, animal species which live in dens and burrows have their habitat constrained by the permafrost, and these constraints also have a secondary impact on interactions between species within the ecosystem. While permafrost soil is frozen, it is not completely inhospitable to microorganisms, though their numbers can vary widely, typically from 1 to 1000 million per gram of soil. 
The permafrost carbon cycle (Arctic Carbon Cycle) deals with the transfer of carbon from permafrost soils to terrestrial vegetation and microbes, to the atmosphere, back to vegetation, and finally back to permafrost soils through burial and sedimentation due to cryogenic processes. Some of this carbon is transferred to the ocean and other portions of the globe through the global carbon cycle. The cycle includes the exchange of carbon dioxide and methane between terrestrial components and the atmosphere, as well as the transfer of carbon between land and water as methane, dissolved organic carbon, dissolved inorganic carbon, particulate inorganic carbon and particulate organic carbon. Most of the bacteria and fungi found in permafrost cannot be cultured in the laboratory, but the identity of the microorganisms can be revealed by DNA-based techniques. For instance, analysis of 16S rRNA genes from late Pleistocene permafrost samples in eastern Siberia's Kolyma Lowland revealed eight phylotypes, which belonged to the phyla Actinomycetota and Pseudomonadota. "Muot-da-Barba-Peider", an alpine permafrost site in eastern Switzerland, was found to host a diverse microbial community in 2016. Prominent bacteria groups included phylum Acidobacteriota, Actinomycetota, AD3, Bacteroidota, Chloroflexota, Gemmatimonadota, OD1, Nitrospirota, Planctomycetota, Pseudomonadota, and Verrucomicrobiota, in addition to eukaryotic fungi like Ascomycota, Basidiomycota, and Zygomycota. In the presently living species, scientists observed a variety of adaptations for sub-zero conditions, including reduced and anaerobic metabolic processes. Construction on permafrost There are only two large cities in the world built in areas of continuous permafrost (where the frozen soil forms an unbroken, below-zero sheet) and both are in Russia – Norilsk in Krasnoyarsk Krai and Yakutsk in the Sakha Republic. Building on permafrost is difficult because the heat of the building (or pipeline) can spread to the soil, thawing it. As ice content turns to water, the ground's ability to provide structural support is weakened, until the building is destabilized. For instance, during the construction of the Trans-Siberian Railway, a steam engine factory complex built in 1901 began to crumble within a month of operations for these reasons. Additionally, there is no groundwater available in an area underlain with permafrost. Any substantial settlement or installation needs to make some alternative arrangement to obtain water. A common solution is placing foundations on wood piles, a technique pioneered by Soviet engineer Mikhail Kim in Norilsk. However, warming-induced change of friction on the piles can still cause movement through creep, even as the soil remains frozen. The Melnikov Permafrost Institute in Yakutsk found that pile foundations should extend down to to avoid the risk of buildings sinking. At this depth the temperature does not change with the seasons, remaining at about . Two other approaches are building on an extensive gravel pad (usually thick); or using anhydrous ammonia heat pipes. The Trans-Alaska Pipeline System uses heat pipes built into vertical supports to prevent the pipeline from sinking and the Qingzang railway in Tibet employs a variety of methods to keep the ground cool, both in areas with frost-susceptible soil. Permafrost may necessitate special enclosures for buried utilities, called "utilidors". 
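The remark above, that temperatures at pile-foundation depth no longer change with the seasons, can be illustrated with the textbook heat-conduction model of a half-space driven by a sinusoidal surface temperature: the annual swing decays exponentially with depth. The short Python sketch below is only an illustration under assumed values (a homogeneous ground with a thermal diffusivity of about 1×10⁻⁶ m²/s and a ±20 °C seasonal swing at the surface); it is not a description of the Melnikov Permafrost Institute's actual method.

```python
import math

def annual_amplitude(depth_m: float,
                     surface_amplitude_c: float = 20.0,    # assumed seasonal swing at the surface, deg C
                     diffusivity_m2_s: float = 1.0e-6) -> float:
    """Amplitude of the annual temperature cycle at a given depth for a homogeneous
    half-space with sinusoidal surface forcing (standard conduction model)."""
    omega = 2.0 * math.pi / (365.25 * 24 * 3600)                 # angular frequency of the yearly cycle
    damping_depth = math.sqrt(2.0 * diffusivity_m2_s / omega)    # ~3 m for these assumed values
    return surface_amplitude_c * math.exp(-depth_m / damping_depth)

for z in (1, 5, 10, 15, 20):
    print(f"{z:>2} m: +/- {annual_amplitude(z):.2f} C")
# With these assumed numbers, the +/-20 C surface swing falls below 1 C by roughly
# 10 m and to a few tenths of a degree by about 15 m, which is why the ground at
# pile-foundation depth stays essentially at a constant temperature year-round.
```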
Impacts of climate change Increasing active layer thickness Globally, permafrost warmed by about between 2007 and 2016, with stronger warming observed in the continuous permafrost zone relative to the discontinuous zone. Observed warming was up to in parts of Northern Alaska (early 1980s to mid-2000s) and up to in parts of the Russian European North (1970–2020). This warming inevitably causes permafrost to thaw: active layer thickness has increased in the European and Russian Arctic across the 21st century and at high elevation areas in Europe and Asia since the 1990s. Between 2000 and 2018, the average active layer thickness had increased from ~ to ~, at an average annual rate of ~. In Yukon, the zone of continuous permafrost might have moved poleward since 1899, but accurate records only go back 30 years. The extent of subsea permafrost is decreasing as well; as of 2019, ~97% of permafrost under Arctic ice shelves is becoming warmer and thinner. Based on high agreement across model projections, fundamental process understanding, and paleoclimate evidence, it is virtually certain that permafrost extent and volume will continue to shrink as the global climate warms, with the extent of the losses determined by the magnitude of warming. Permafrost thaw is associated with a wide range of issues, and International Permafrost Association (IPA) exists to help address them. It convenes International Permafrost Conferences and maintains Global Terrestrial Network for Permafrost, which undertakes special projects such as preparing databases, maps, bibliographies, and glossaries, and coordinates international field programmes and networks. Climate change feedback As recent warming deepens the active layer subject to permafrost thaw, this exposes formerly stored carbon to biogenic processes which facilitate its entrance into the atmosphere as carbon dioxide and methane. Because carbon emissions from permafrost thaw contribute to the same warming which facilitates the thaw, it is a well-known example of a positive climate change feedback. Permafrost thaw is sometimes included as one of the major tipping points in the climate system due to the exhibition of local thresholds and its effective irreversibility. However, while there are self-perpetuating processes that apply on the local or regional scale, it is debated as to whether it meets the strict definition of a global tipping point as in aggregate permafrost thaw is gradual with warming. In the northern circumpolar region, permafrost contains organic matter equivalent to 1400–1650 billion tons of pure carbon, which was built up over thousands of years. This amount equals almost half of all organic material in all soils, and it is about twice the carbon content of the atmosphere, or around four times larger than the human emissions of carbon between the start of the Industrial Revolution and 2011. Further, most of this carbon (~1,035 billion tons) is stored in what is defined as the near-surface permafrost, no deeper than below the surface. However, only a fraction of this stored carbon is expected to enter the atmosphere. In general, the volume of permafrost in the upper 3 m of ground is expected to decrease by about 25% per of global warming, yet even under the RCP8.5 scenario associated with over of global warming by the end of the 21st century, about 5% to 15% of permafrost carbon is expected to be lost "over decades and centuries". 
The exact amount of carbon that will be released due to warming in a given permafrost area depends on depth of thaw, carbon content within the thawed soil, physical changes to the environment, and microbial and vegetation activity in the soil. Notably, estimates of carbon release alone do not fully represent the impact of permafrost thaw on climate change. This is because carbon can be released through either aerobic or anaerobic respiration, which results in carbon dioxide (CO2) or methane (CH4) emissions, respectively. While methane lasts less than 12 years in the atmosphere, its global warming potential is around 80 times larger than that of CO2 over a 20-year period and about 28 times larger over a 100-year period. While only a small fraction of permafrost carbon will enter the atmosphere as methane, those emissions will cause 40–70% of the total warming caused by permafrost thaw during the 21st century. Much of the uncertainty about the eventual extent of permafrost methane emissions is caused by the difficulty of accounting for the recently discovered abrupt thaw processes, which often increase the fraction of methane emitted over carbon dioxide in comparison to the usual gradual thaw processes. Another factor which complicates projections of permafrost carbon emissions is the ongoing "greening" of the Arctic. As climate change warms the air and the soil, the region becomes more hospitable to plants, including larger shrubs and trees which could not survive there before. Thus, the Arctic is losing more and more of its tundra biomes, yet it gains more plants, which proceed to absorb more carbon. Some of the emissions caused by permafrost thaw will be offset by this increased plant growth, but the exact proportion is uncertain. It is considered very unlikely that this greening could offset all of the emissions from permafrost thaw during the 21st century, and even less likely that it could continue to keep pace with those emissions after the 21st century. Further, climate change also increases the risk of wildfires in the Arctic, which can substantially accelerate emissions of permafrost carbon. Impact on global temperatures Altogether, it is expected that cumulative greenhouse gas emissions from permafrost thaw will be smaller than the cumulative anthropogenic emissions, yet still substantial on a global scale, with some experts comparing them to emissions caused by deforestation. The IPCC Sixth Assessment Report estimates that carbon dioxide and methane released from permafrost could amount to the equivalent of 14–175 billion tonnes of carbon dioxide per of warming. For comparison, by 2019, annual anthropogenic emissions of carbon dioxide alone stood around 40 billion tonnes. A major review published in the year 2022 concluded that if the goal of preventing of warming was realized, then the average annual permafrost emissions throughout the 21st century would be equivalent to the year 2019 annual emissions of Russia. Under RCP4.5, a scenario considered close to the current trajectory and where the warming stays slightly below , annual permafrost emissions would be comparable to year 2019 emissions of Western Europe or the United States, while under the scenario of high global warming and worst-case permafrost feedback response, they would approach year 2019 emissions of China. Fewer studies have attempted to describe the impact directly in terms of warming. 
A 2018 paper estimated that if global warming was limited to , gradual permafrost thaw would add around to global temperatures by 2100, while a 2022 review concluded that every of global warming would cause and from abrupt thaw by the year 2100 and 2300. Around of global warming, abrupt (around 50 years) and widespread collapse of permafrost areas could occur, resulting in an additional warming of . Thaw-induced ground instability As the water drains or evaporates, soil structure weakens and sometimes becomes viscous until it regains strength with decreasing moisture content. One visible sign of permafrost degradation is the random displacement of trees from their vertical orientation in permafrost areas. Global warming has been increasing permafrost slope disturbances and sediment supplies to fluvial systems, resulting in exceptional increases in river sediment. On the other hand, disturbance of formerly hard soil increases drainage of water reservoirs in northern wetlands. This can dry them out and compromise the survival of plants and animals used to the wetland ecosystem. In high mountains, much of the structural stability can be attributed to glaciers and permafrost. As climate warms, permafrost thaws, decreasing slope stability and increasing stress through buildup of pore-water pressure, which may ultimately lead to slope failure and rockfalls. Over the past century, an increasing number of alpine rock slope failure events in mountain ranges around the world have been recorded, and some have been attributed to permafrost thaw induced by climate change. The 1987 Val Pola landslide that killed 22 people in the Italian Alps is considered one such example. In 2002, massive rock and ice falls (up to 11.8 million m3), earthquakes (up to magnitude 3.9 on the Richter scale), floods (up to 7.8 million m3 of water), and rapid rock-ice flow over long distances (up to 7.5 km at 60 m/s) were attributed to slope instability in high mountain permafrost. Permafrost thaw can also result in the formation of frozen debris lobes (FDLs), which are defined as "slow-moving landslides composed of soil, rocks, trees, and ice". This is a notable issue in Alaska's southern Brooks Range, where some FDLs measured over in width, in height, and in length by 2012. As of December 2021, there were 43 frozen debris lobes identified in the southern Brooks Range, where they could potentially threaten both the Trans Alaska Pipeline System (TAPS) corridor and the Dalton Highway, which is the main transport link between Interior Alaska and the Alaska North Slope. Infrastructure As of 2021, there are 1162 settlements located directly atop the Arctic permafrost, which host an estimated 5 million people. By 2050, the permafrost layer below 42% of these settlements is expected to thaw, affecting all their inhabitants (currently 3.3 million people). Consequently, a wide range of infrastructure in permafrost areas is threatened by the thaw. By 2050, it is estimated that nearly 70% of global infrastructure located in permafrost areas will be at high risk of permafrost thaw, including 30–50% of "critical" infrastructure. The associated costs could reach tens of billions of dollars by the second half of the century. Reducing greenhouse gas emissions in line with the Paris Agreement is projected to stabilize the risk after mid-century; otherwise, it will continue to worsen.
In Alaska alone, damages to infrastructure by the end of the century would amount to $4.6 billion (at 2015 dollar value) if RCP8.5, the high-emission climate change scenario, were realized. Over half stems from the damage to buildings ($2.8 billion), but there is also damage to roads ($700 million), railroads ($620 million), airports ($360 million) and pipelines ($170 million). Similar estimates were done for RCP4.5, a less intense scenario which leads to around by 2100, a level of warming similar to the current projections. In that case, total damages from permafrost thaw are reduced to $3 billion, while damages to roads and railroads are lessened by approximately two-thirds (from $700 and $620 million to $190 and $220 million) and damages to pipelines are reduced more than ten-fold, from $170 million to $16 million. Unlike the other costs stemming from climate change in Alaska, such as damages from increased precipitation and flooding, climate change adaptation is not a viable way to reduce damages from permafrost thaw, as it would cost more than the damage incurred under either scenario. In Canada, the Northwest Territories have a population of only 45,000 people in 33 communities, yet permafrost thaw is expected to cost them $1.3 billion over 75 years, or around $51 million a year. In 2006, the cost of adapting Inuvialuit homes to permafrost thaw was estimated at $208/m2 if they were built on pile foundations, and $1,000/m2 if they were not. At the time, the average area of a residential building in the territory was around 100 m2. Thaw-induced damage is also unlikely to be covered by home insurance, and to address this reality, the territorial government currently funds the Contributing Assistance for Repairs and Enhancements (CARE) and Securing Assistance for Emergencies (SAFE) programs, which provide long- and short-term forgivable loans to help homeowners adapt. It is possible that in the future, mandatory relocation would instead take place as the cheaper option. However, it would effectively tear the local Inuit away from their ancestral homelands. Right now, their average personal income is only half that of the median NWT resident, meaning that adaptation costs are already disproportionate for them. By 2022, up to 80% of buildings in some Northern Russia cities had already experienced damage. By 2050, the damage to residential infrastructure may reach $15 billion, while total public infrastructure damages could amount to $132 billion. This includes oil and gas extraction facilities, of which 45% are believed to be at risk. Outside of the Arctic, the Qinghai–Tibet Plateau (sometimes known as "the Third Pole") also has an extensive permafrost area. It is warming at twice the global average rate, and 40% of it is already considered "warm" permafrost, making it particularly unstable. The Qinghai–Tibet Plateau has a population of over 10 million people – double the population of permafrost regions in the Arctic – and over 1 million m2 of buildings are located in its permafrost area, as well as 2,631 km of power lines, and 580 km of railways. There are also 9,389 km of roads, and around 30% are already sustaining damage from permafrost thaw. Estimates suggest that under the scenario most similar to today, SSP2-4.5, around 60% of the current infrastructure would be at high risk by 2090 and simply maintaining it would cost $6.31 billion, with adaptation reducing these costs by 20.9% at most.
Holding the global warming to would reduce these costs to $5.65 billion, and fulfilling the optimistic Paris Agreement target of would save a further $1.32 billion. In particular, fewer than 20% of railways would be at high risk by 2100 under , yet this increases to 60% at , while under SSP5-8.5, this level of risk is met by mid-century. Release of toxic pollutants For much of the 20th century, it was believed that permafrost would "indefinitely" preserve anything buried there, and this made deep permafrost areas popular locations for hazardous waste disposal. In places like Canada's Prudhoe Bay oil field, procedures were developed documenting the "appropriate" way to inject waste beneath the permafrost. This means that as of 2023, there are ~4500 industrial facilities in the Arctic permafrost areas which either actively process or store hazardous chemicals. Additionally, there are between 13,000 and 20,000 sites which have been heavily contaminated, 70% of them in Russia, and their pollution is currently trapped in the permafrost. About a fifth of both the industrial and the polluted sites (1000 and 2200–4800) are expected to start thawing in the future even if the warming does not increase from its 2020 levels. Only about 3% more sites would start thawing between now and 2050 under the climate change scenario consistent with the Paris Agreement goals, RCP2.6, but by 2100, about 1100 more industrial facilities and 3500 to 5200 contaminated sites are expected to start thawing even then. Under the very high emission scenario RCP8.5, 46% of industrial and contaminated sites would start thawing by 2050, and virtually all of them would be affected by the thaw by 2100. Organochlorines and other persistent organic pollutants are of a particular concern, due to their potential to repeatedly reach local communities after their re-release through biomagnification in fish. At worst, future generations born in the Arctic would enter life with weakened immune systems due to pollutants accumulating across generations. A notable example of pollution risks associated with permafrost was the 2020 Norilsk oil spill, caused by the collapse of diesel fuel storage tank at Norilsk-Taimyr Energy's thermal power plant No. 3. It spilled 6,000 tonnes of fuel into the land and 15,000 into the water, polluting Ambarnaya, Daldykan and many smaller rivers on Taimyr Peninsula, even reaching lake Pyasino, which is a crucial water source in the area. State of emergency at the federal level was declared. The event has been described as the second-largest oil spill in modern Russian history. Another issue associated with permafrost thaw is the release of natural mercury deposits. An estimated 800,000 tons of mercury are frozen in the permafrost soil. According to observations, around 70% of it is simply taken up by vegetation after the thaw. However, if the warming continues under RCP8.5, then permafrost emissions of mercury into the atmosphere would match the current global emissions from all human activities by 2200. Mercury-rich soils also pose a much greater threat to humans and the environment if they thaw near rivers. Under RCP8.5, enough mercury will enter the Yukon River basin by 2050 to make its fish unsafe to eat under the EPA guidelines. By 2100, mercury concentrations in the river will double. Contrastingly, even if mitigation is limited to RCP4.5 scenario, mercury levels will increase by about 14% by 2100, and will not breach the EPA guidelines even by 2300. 
Revival of ancient organisms Microorganisms Bacteria are known for being able to remain dormant to survive adverse conditions, and viruses are not metabolically active outside of host cells in the first place. This has motivated concerns that permafrost thaw could free previously unknown microorganisms, which may be capable of infecting either humans or important livestock and crops, potentially resulting in damaging epidemics or pandemics. Further, some scientists argue that horizontal gene transfer could occur between the older, formerly frozen bacteria, and modern ones, and one outcome could be the introduction of novel antibiotic resistance genes into the genome of current pathogens, exacerbating what is already expected to become a difficult issue in the future. At the same time, notable pathogens like influenza and smallpox appear unable to survive being thawed, and other scientists argue that the risk of ancient microorganisms being both able to survive the thaw and to threaten humans is not scientifically plausible. Likewise, some research suggests that antimicrobial resistance capabilities of ancient bacteria would be comparable to, or even inferior to modern ones. Plants In 2012, Russian researchers proved that permafrost can serve as a natural repository for ancient life forms by reviving a sample of Silene stenophylla from 30,000-year-old tissue found in an Ice Age squirrel burrow in the Siberian permafrost. This is the oldest plant tissue ever revived. The resultant plant was fertile, producing white flowers and viable seeds. The study demonstrated that living tissue can survive ice preservation for tens of thousands of years. History of scientific research Between the middle of the 19th century and the middle of the 20th century, most of the literature on basic permafrost science and the engineering aspects of permafrost was written in Russian. One of the earliest written reports describing the existence of permafrost dates to 1684, when well excavation efforts in Yakutsk were stumped by its presence. A significant role in the initial permafrost research was played by Alexander von Middendorff (1815–1894) and Karl Ernst von Baer, a Baltic German scientist at the University of Königsberg, and a member of the St Petersburg Academy of Sciences. Baer began publishing works on permafrost in 1838 and is often considered the "founder of scientific permafrost research." Baer laid the foundation for modern permafrost terminology by compiling and analyzing all available data on ground ice and permafrost. Baer is also known to have composed the world's first permafrost textbook in 1843, "materials for the study of the perennial ground-ice", written in his native language. However, it was not printed then, and a Russian translation wasn't ready until 1942. The original German textbook was believed to be lost until the typescript from 1843 was discovered in the library archives of the University of Giessen. The 234-page text was available online, with additional maps, preface and comments. Notably, Baer's southern limit of permafrost in Eurasia drawn in 1843 corresponds well with the actual southern limit verified by modern research. Beginning in 1942, Siemon William Muller delved into the relevant Russian literature held by the Library of Congress and the U.S. Geological Survey Library so that he was able to furnish the government an engineering field guide and a technical report about permafrost by 1943. 
That report coined the English term permafrost as a contraction of permanently frozen ground, in what was considered a direct translation of the Russian term. In 1953, this translation was criticized by another USGS researcher, Inna Poiré, who believed the term had created unrealistic expectations about its stability: more recently, some researchers have argued that "perpetually refreezing" would be a more suitable translation. The report itself was classified (as U.S. Army. Office of the Chief of Engineers, Strategic Engineering Study, no. 62, 1943), until a revised version was released in 1947, which is regarded as the first North American treatise on the subject. Between 11 and 15 November 1963, the First International Conference on Permafrost took place on the grounds of Purdue University in the American town of West Lafayette, Indiana. It involved 285 participants (including "engineers, manufacturers and builders" who attended alongside the researchers) from a range of countries (Argentina, Austria, Canada, Germany, Great Britain, Japan, Norway, Poland, Sweden, Switzerland, the US and the USSR). This marked the beginning of modern scientific collaboration on the subject. Conferences continue to take place every five years. During the Fourth conference in 1983, a special meeting between the "Big Four" participant countries (US, USSR, China, and Canada) officially created the International Permafrost Association. In recent decades, permafrost research has attracted more attention than ever due to its role in climate change. Consequently, there has been a massive acceleration in published scientific literature. Around 1990, almost no papers were released containing the words "permafrost" and "carbon": by 2020, around 400 such papers were published every year. References Sources Climate Change 2013 Working Group 1 website. External links International Permafrost Association (IPA) Map of permafrost in Antarctica. Permafrost – what is it? – Alfred Wegener Institute YouTube video 1940s neologisms Cryosphere Geography of the Arctic Geomorphology Montane ecology Patterned grounds Pedology Periglacial landforms
Permafrost
[ "Environmental_science" ]
9,187
[ "Cryosphere", "Hydrology" ]
157,780
https://en.wikipedia.org/wiki/Lifespring
Lifespring was an American for-profit human potential organization founded in 1974 by John Hanley Sr., Robert White, Randy Revell, and Charlene Afremow. The organization encountered significant controversy in the 1970s and '80s, with various academic articles characterizing Lifespring's training methods as "deceptive and indirect techniques of persuasion and control", and allegations that Lifespring was a cult that used coercive methods to prevent members from leaving. These allegations were highlighted in a 1987 article in The Washington Post as well as local television reporting in communities where Lifespring had a significant presence. Before becoming defunct in the mid-1990s, Lifespring claimed that it had trained more than 400,000 people through its ten centers across the United States. Key people Lifespring was founded by John Hanley Sr. along with Robert White, Randy Revell, and Charlene Afremow. By October 1987, Hanley owned 92.7 percent of the company. Prior to Lifespring, Hanley had worked for the multi-level marketing organization Holiday Magic. He and the other founders had also worked for Mind Dynamics with Werner Erhard, the founder of est, which became the basis for Landmark Education. Holiday Magic was founded by William Penn Patrick, co-owner and board member for Mind Dynamics. Holiday Magic later folded amidst investigations by authorities and accusations of being a pyramid scheme. The Director for Corporate Affairs of Lifespring, Charles "Raz" Ingrasci, had also worked at est to promote a mission to the USSR and the Hunger Project. Ingrasci is now President of the Hoffman Institute, an organization founded in 1967 and also part of the human potential movement which offers programs which are similar to Lifespring's. Course overview The Lifespring training generally involved a three-level program starting with a "basic" training, an "advanced" breakthrough course, and a three-month "leadership program" which taught the students how to implement what they learned from the training into their lives. "There is no hope" is a fundamental tenet in the course. The fundamental purpose of the leadership program was enrollment; the participants in the Leadership Program were essentially an unpaid salesforce with the sole mission of enrollment by any means. The trainers used high pressure and humiliation to force participants to achieve enrollment goals. This included yelling at the group as a whole at meetings, and singling individuals out and humiliating them in front of the whole group. Participants were told the city and the world is at stake and the only solution was enrolling as many people into the trainings as possible. Less than two percent found them to be "of no value". Graduates were often eager to share their own experiences in the training with family, friends, and co-workers, although they were precluded from sharing fellow trainees' experiences. There was never any compensation for assisting in enrolling others into the workshops. However, another, independent study found, "The merging, grandiosity, and identity confusion that has been encouraged and then exploited in the training in order to control participants is now used to tie them to Vitality (Lifespring) in the future by enrolling them in new trainings and enlisting them as recruiters." 
The basic training was composed of successive sessions on Wednesday night, Thursday night, Friday night, Saturday day and night, Sunday day and night, a Tuesday night post-training session ten days after graduation, and a post-training interview. Evening sessions began at 6:30 pm and lasted until 11:30 or 12 or later. Saturday sessions started at 10 am and sometimes lasted until midnight. Sunday sessions started at 9 am and lasted until approximately 6 pm. The trainings were usually held in the convention facilities of large, easily accessible, moderate priced hotels (i.e., mid-town New York). A basic training was usually composed of 150–200 participants, while an advanced training was composed of 75-100 participants. Approximately 50 percent of advanced training graduates participated in the leadership program. Training also included alumni volunteers who served as small group leaders, several official staff, an assistant trainer, and a head trainer. The training consisted of a series of lectures and experiential processes designed to show the participants a new manner of contending with life situations and concerns and how other possible explanations and interpretations may lead to different results. Some individuals complained that they felt harangued, embarrassed, or humiliated by the trainer during the training. A few individuals chose not to complete the training. Additionally, the trainer used many English words in a manner different from their usual meaning. "Commitment", for instance, was defined as "the willingness to do whatever it takes". "Conclusion" was defined as a belief. Also, words such as "responsibility", "space", "surrender", "experience", "trust", "consideration", "unreasonable", "righteous", "totally participate", "from your head", "openness", "letting go" were redefined or used so as to assign them a more specific meaning. "Stretch" was an activity that was outside the participant's comfort zone. During the advanced course the participants were sometimes sent out to perform certain tasks. If any participant did not complete their task the group was considered in "breakdown ". The book Evaluating a Large Group Awareness Training made comparisons between Lifespring and Erhard Seminars Training (est). Lifespring has been characterized as a form of "Large Group Awareness Training" in several sources. Lawsuits In one case, an asthmatic was allegedly told that her asthma exacerbation was psychological and later died from the exacerbation. The lawsuit was settled for $450,000, and Lifespring admitted no wrongdoing. In another case, a man who could not swim was made to jump into a river and drowned. This case was also settled out of court. Many suits said the trainings placed participants under extreme psychological stress. The Washington Post published an article about the company in 1987. It quotes Hanley as saying, "If a thousand people get benefit from the training, and one person is harmed, I'd can it. I have an absolute commitment for having this training work for every person who takes it." However, according to the Post, by 1987 Hanley and other Lifespring executives had known for more than a decade that some people were not suited for this level of personal inquiry. 
As evidence, the Post cited: Talk among top company officials about how to make the trainings less harsh while maintaining their effectiveness Dozens of reports submitted to Hanley in the late 1970s and early 1980s by Lifespring staff about participants who became panicky, confused, or nervous Over time, the training company began qualifying students and required doctors' signatures for people who might require therapy rather than coaching. Criticism The Post also reported in the same article that Hanley had been convicted of six counts of felony mail fraud in 1969, and was given a five-year suspended sentence. In 1980, a federal judge rejected Hanley's request to have the felony conviction removed from his record. His request for a presidential pardon was also denied. In 1990 KARE-TV (Channel 11, Minneapolis-St. Paul) ran a segment called "Mind Games?" that Lifespring said was deceptive and sensationalized. One prominent critic of Lifespring is Ginni Thomas, wife of Supreme Court Justice Clarence Thomas. A congressional aide when she took the course, Mrs. Thomas said in an interview with the Post that she was troubled by exercises that involved stripping, sexual questions, and body shaming. After talking with a cult deprogrammer, she decided she needed to stop participating, but it took several months of work to overcome the "high-pressure tactics" to fully break with Lifespring. Afterwards, she received "constant phone calls" to pressure her to stay with the group, and ended up relocating to another part of the country to escape the calls. References Further reading External links American companies established in 1974 Education companies established in 1974 Defunct companies based in California Training companies of the United States New Age organizations Personal development Large-group awareness training Self religions 1974 establishments in California
Lifespring
[ "Biology" ]
1,680
[ "Personal development", "Behavior", "Human behavior" ]
157,819
https://en.wikipedia.org/wiki/Meteor%20shower
A meteor shower is a celestial event in which a number of meteors are observed to radiate, or originate, from one point in the night sky. These meteors are caused by streams of cosmic debris called meteoroids entering Earth's atmosphere at extremely high speeds on parallel trajectories. Most meteors are smaller than a grain of sand, so almost all of them disintegrate and never hit the Earth's surface. Very intense or unusual meteor showers are known as meteor outbursts and meteor storms, which produce at least 1,000 meteors an hour, most notably from the Leonids. The Meteor Data Centre lists over 900 suspected meteor showers of which about 100 are well established. Several organizations point to viewing opportunities on the Internet. NASA maintains a daily map of active meteor showers. Historical developments A meteor shower in August 1583 was recorded in the Timbuktu manuscripts. In the modern era, the first great meteor storm was the Leonids of November 1833. One estimate is a peak rate of over one hundred thousand meteors an hour, but another, done as the storm abated, estimated more than two hundred thousand meteors during the 9 hours of the storm, over the entire region of North America east of the Rocky Mountains. American Denison Olmsted (1791–1859) explained the event most accurately. After spending the last weeks of 1833 collecting information, he presented his findings in January 1834 to the American Journal of Science and Arts, published in January–April 1834, and January 1836. He noted the shower was of short duration and was not seen in Europe, and that the meteors radiated from a point in the constellation of Leo. He speculated the meteors had originated from a cloud of particles in space. Work continued, yet coming to understand the annual nature of showers though the occurrences of storms perplexed researchers. The actual nature of meteors was still debated during the 19th century. Meteors were conceived as an atmospheric phenomenon by many scientists (Alexander von Humboldt, Adolphe Quetelet, Julius Schmidt) until the Italian astronomer Giovanni Schiaparelli ascertained the relation between meteors and comets in his work "Notes upon the astronomical theory of the falling stars" (1867). In the 1890s, Irish astronomer George Johnstone Stoney (1826–1911) and British astronomer Arthur Matthew Weld Downing (1850–1917) were the first to attempt to calculate the position of the dust at Earth's orbit. They studied the dust ejected in 1866 by comet 55P/Tempel-Tuttle before the anticipated Leonid shower return of 1898 and 1899. Meteor storms were expected, but the final calculations showed that most of the dust would be far inside Earth's orbit. The same results were independently arrived at by Adolf Berberich of the Königliches Astronomisches Rechen Institut (Royal Astronomical Computation Institute) in Berlin, Germany. Although the absence of meteor storms that season confirmed the calculations, the advance of much better computing tools was needed to arrive at reliable predictions. In 1981, Donald K. Yeomans of the Jet Propulsion Laboratory reviewed the history of meteor showers for the Leonids and the history of the dynamic orbit of Comet Tempel-Tuttle. A graph from it was adapted and re-published in Sky and Telescope. It showed relative positions of the Earth and Tempel-Tuttle and marks where Earth encountered dense dust. 
This showed that the meteoroids are mostly behind and outside the path of the comet, but paths of the Earth through the cloud of particles resulting in powerful storms were very near paths of nearly no activity. In 1985, E. D. Kondrat'eva and E. A. Reznikov of Kazan State University first correctly identified the years when dust was released which was responsible for several past Leonid meteor storms. In 1995, Peter Jenniskens predicted the 1995 Alpha Monocerotids outburst from dust trails. In anticipation of the 1999 Leonid storm, Robert H. McNaught, David Asher, and Finland's Esko Lyytinen were the first to apply this method in the West. In 2006 Jenniskens published predictions for future dust trail encounters covering the next 50 years. Jérémie Vaubaillon continues to update predictions based on observations each year for the Institut de Mécanique Céleste et de Calcul des Éphémérides (IMCCE). Radiant point Because meteor shower particles are all traveling in parallel paths and at the same velocity, they will appear to an observer below to radiate away from a single point in the sky. This radiant point is caused by the effect of perspective, similar to parallel railroad tracks converging at a single vanishing point on the horizon. Meteor showers are normally named after the constellation from which the meteors appear to originate. This "fixed point" slowly moves across the sky during the night due to the Earth turning on its axis, the same reason the stars appear to slowly march across the sky. The radiant also moves slightly from night to night against the background stars (radiant drift) due to the Earth moving in its orbit around the Sun. See IMO Meteor Shower Calendar 2017 (International Meteor Organization) for maps of drifting "fixed points". When the moving radiant is at the highest point, it will reach the observer's sky that night. The Sun will be just clearing the eastern horizon. For this reason, the best viewing time for a meteor shower is generally slightly before dawn — a compromise between the maximum number of meteors available for viewing and the brightening sky, which makes them harder to see. Naming Meteor showers are named after the nearest constellation, or bright star with a Greek or Roman letter assigned that is close to the radiant position at the peak of the shower, whereby the grammatical declension of the Latin possessive form is replaced by "id" or "ids." Hence, meteors radiating from near the star Delta Aquarii (declension "-i") are called the Delta Aquariids. The International Astronomical Union's Task Group on Meteor Shower Nomenclature and the IAU's Meteor Data Center keep track of meteor shower nomenclature and which showers are established. Origin of meteoroid streams A meteor shower results from an interaction between a planet, such as Earth, and streams of debris from a comet(or occasionally an asteroid). Comets can produce debris by water vapor drag, as demonstrated by Fred Whipple in 1951, and by breakup. Whipple envisioned comets as "dirty snowballs", made up of rock embedded in ice, orbiting the Sun. The "ice" may be water, methane, ammonia, or other volatiles, alone or in combination. The "rock" may vary in size from a dust mote to a small boulder. Dust mote sized solids are orders of magnitude more common than those the size of sand grains, which, in turn, are similarly more common than those the size of pebbles, and so on. When the ice warms and sublimates, the vapor can drag along dust, sand, and pebbles. 
Each time a comet swings by the Sun in its orbit, some of its ice vaporizes, and a certain number of meteoroids will be shed. The meteoroids spread out along the entire trajectory of the comet to form a meteoroid stream, also known as a "dust trail" (as opposed to a comet's "gas tail" caused by the tiny particles that are quickly blown away by solar radiation pressure). Recently, Peter Jenniskens has argued that most of our short-period meteor showers are not from the normal water vapor drag of active comets, but the product of infrequent disintegrations, when large chunks break off a mostly dormant comet. Examples are the Quadrantids and Geminids, which originated from a breakup of asteroid-looking objects, and 3200 Phaethon, respectively, about 500 and 1000 years ago. The fragments tend to fall apart quickly into dust, sand, and pebbles and spread out along the comet's orbit to form a dense meteoroid stream, which subsequently evolves into Earth's path. Dynamical evolution of meteoroid streams Shortly after Whipple predicted that dust particles traveled at low speeds relative to the comet, Milos Plavec was the first to offer the idea of a dust trail, when he calculated how meteoroids, once freed from the comet, would drift mostly in front of or behind the comet after completing one orbit. The effect is simple celestial mechanics – the material drifts only a little laterally away from the comet while drifting ahead or behind the comet because some particles make a wider orbit than others. These dust trails are sometimes observed in comet images taken at mid infrared wavelengths (heat radiation), where dust particles from the previous return to the Sun are spread along the orbit of the comet (see figures). The gravitational pull of the planets determines where the dust trail would pass by Earth orbit, much like a gardener directing a hose to water a distant plant. Most years, those trails would miss the Earth altogether, but in some years, the Earth is showered by meteors. This effect was first demonstrated from observations of the 1995 alpha Monocerotids, and from earlier not widely known identifications of past Earth storms. Over more extended periods, the dust trails can evolve in complicated ways. For example, the orbits of some repeating comets, and meteoroids leaving them, are in resonant orbits with Jupiter or one of the other large planets – so many revolutions of one will equal another number of the other. This creates a shower component called a filament. A second effect is a close encounter with a planet. When the meteoroids pass by Earth, some are accelerated (making wider orbits around the Sun), others are decelerated (making shorter orbits), resulting in gaps in the dust trail in the next return (like opening a curtain, with grains piling up at the beginning and end of the gap). Also, Jupiter's perturbation can dramatically change sections of the dust trail, especially for a short period comets, when the grains approach the giant planet at their furthest point along the orbit around the Sun, moving most slowly. As a result, the trail has a clumping, a braiding or a tangling of crescents, of each release of material. The third effect is that of radiation pressure which will push less massive particles into orbits further from the Sun – while more massive objects (responsible for bolides or fireballs) will tend to be affected less by radiation pressure. This makes some dust trail encounters rich in bright meteors, others rich in faint meteors. 
Over time, these effects disperse the meteoroids and create a broader stream. The meteors we see from these streams are part of annual showers, because Earth encounters those streams every year at much the same rate. When the meteoroids collide with other meteoroids in the zodiacal cloud, they lose their stream association and become part of the "sporadic meteors" background. Long since dispersed from any stream or trail, they form isolated meteors, not a part of any shower. These random meteors will not appear to come from the radiant of the leading shower. Famous meteor showers Perseids and Leonids In most years, the most visible meteor shower is the Perseids, which peak on 12 August of each year at over one meteor per minute. NASA has a tool to calculate how many meteors per hour are visible from one's observing location. The Leonid meteor shower peaks around 17 November of each year. The Leonid shower produces a meteor storm, peaking at rates of thousands of meteors per hour. Leonid storms gave birth to the term meteor shower when it was first realised that, during the November 1833 storm, the meteors radiated from near the star Gamma Leonis. The last Leonid storms were in 1999, 2001 (two), and 2002 (two). Before that, there were storms in 1767, 1799, 1833, 1866, 1867, and 1966. When the Leonid shower is not storming, it is less active than the Perseids. See the Infographics on Meteor Shower Calendar-2021 on the right. Other meteor showers Established meteor showers Official names are given in the International Astronomical Union's list of meteor showers. Extraterrestrial meteor showers Any other Solar System body with a reasonably transparent atmosphere can also have meteor showers. As the Moon is in the neighborhood of Earth it can experience the same showers, but will have its own phenomena due to its lack of an atmosphere per se, such as vastly increasing its sodium tail. NASA now maintains an ongoing database of observed impacts on the moon maintained by the Marshall Space Flight Center whether from a shower or not. Many planets and moons have impact craters dating back large spans of time. But new craters, perhaps even related to meteor showers are possible. Mars, and thus its moons, is known to have meteor showers. These have not been observed on other planets as yet but may be presumed to exist. For Mars in particular, although these are different from the ones seen on Earth because of the different orbits of Mars and Earth relative to the orbits of comets. The Martian atmosphere has less than one percent of the density of Earth's at ground level, at their upper edges, where meteoroids strike; the two are more similar. Because of the similar air pressure at altitudes for meteors, the effects are much the same. Only the relatively slower motion of the meteoroids due to increased distance from the sun should marginally decrease meteor brightness. This is somewhat balanced because the slower descent means that Martian meteors have more time to ablate. On March 7, 2004, the panoramic camera on Mars Exploration Rover Spirit recorded a streak which is now believed to have been caused by a meteor from a Martian meteor shower associated with comet 114P/Wiseman-Skiff. A strong display from this shower was expected on December 20, 2007. Other showers speculated about are a "Lambda Geminid" shower associated with the Eta Aquariids of Earth (i.e., both associated with Comet 1P/Halley), a "Beta Canis Major" shower associated with Comet 13P/Olbers, and "Draconids" from 5335 Damocles. 
Isolated massive impacts have been observed at Jupiter: the 1994 impact of Comet Shoemaker–Levy 9, which also formed a brief trail, and successive events since then (see List of Jupiter events). Meteors or meteor showers have been discussed for most of the objects in the Solar System with an atmosphere: Mercury, Venus, Saturn's moon Titan, Neptune's moon Triton, and Pluto. See also American Meteor Society (AMS) Earth-grazing fireball International Meteor Organization (IMO) List of meteor showers Meteor procession North American Meteor Network (NAMN) Radiant – point in the sky from which meteors appear to originate Zenith hourly rate (ZHR) References External links Meteor Showers, by Sky and Telescope Six Not-So-Famous Summer Meteor Showers Joe Rao (SPACE.com) The American Meteor Society The International Meteor Organisation Meteor Shower Portal shows the direction of active showers each night on a celestial sphere. Astronomical events of the Solar System Atmospheric entry Meteoroids
Meteor shower
[ "Astronomy", "Engineering" ]
3,098
[ "Astronomical events", "Atmospheric entry", "Aerospace engineering", "Astronomical events of the Solar System", "Solar System" ]
157,833
https://en.wikipedia.org/wiki/Ahnentafel
An ahnentafel (German for "ancestor table") or ahnenreihe ("ancestor series") is a genealogical numbering system for listing a person's direct ancestors in a fixed sequence of ascent. The subject (or proband) of the ahnentafel is listed as No. 1, the subject's father as No. 2 and the mother as No. 3, the paternal grandparents as No. 4 and No. 5, and the maternal grandparents as No. 6 and No. 7, and so on, back through the generations. Apart from No. 1, who can be male or female, all even-numbered persons are male, and all odd-numbered persons are female. In this schema, the number of any person's father is double the person's number, and a person's mother is double the person's number plus one. Using this definition of numeration, one can derive some basic information about individuals who are listed without additional research. This construct displays a person's genealogy compactly, without the need for a diagram such as a family tree. It is particularly useful in situations where one may be restricted to presenting a genealogy in plain text, for example, in emails or newsgroup articles. In effect, an ahnentafel is a method for storing a binary tree in an array by listing the nodes (individuals) in level-order (in generation order). The ahnentafel system of numeration is also known as the Eytzinger Method, for Michaël Eytzinger, the Austrian-born historian who first published the principles of the system in 1590; the Sosa Method, named for Jerónimo (Jerome) de Sosa, the Spanish genealogist who popularized the numbering system in his work Noticia de la gran casa de los marqueses de Villafranca in 1676; and the Sosa–Stradonitz Method, for Stephan Kekulé von Stradonitz, the genealogist and son of chemist Friedrich August Kekulé, who published his interpretation of Sosa's method in his Ahnentafel-atlas in 1898. "Ahnentafel" is a loan word from the German language, and its German equivalents are Ahnenreihe and Ahnenliste. An ahnentafel list is sometimes called a "Kekulé" after Stephan Kekulé von Stradonitz. A variant of this is known in French as Seize Quartiers. Inductive reckoning To find out what someone's number would be without compiling a list, one must first trace how they relate back to the subject or person of interest, meaning that one determines for example that some ancestor is the subject's father's mother's mother's father's father. Once one has done that, one can use two methods. First method Use the definition that a father's number will be twice that individual's number, or a mother's will be twice plus one, and just multiply and add 1 accordingly. For instance, someone can find out what number Sophia of Hanover would be on an ahnentafel of Peter Phillips (son of Princess Anne and grandson of Elizabeth II). Sophia is Phillips's mother's mother's father's father's father's mother's father's father's father's father's father's mother. So, we multiply and add: 1×2 + 1 = 3; 3×2 + 1 = 7; 7×2 = 14; 14×2 = 28; 28×2 = 56; 56×2 + 1 = 113; 113×2 = 226; 226×2 = 452; 452×2 = 904; 904×2 = 1808; 1808×2 = 3616; 3616×2 + 1 = 7233. Thus, if we were to make an ahnentafel for Peter Phillips, Electress Sophia would be #7233, among other numbers due to royal intermarriage causing pedigree collapse. (See "Multiple numbers for the same person" below.) Second method Write down the digit "1", which represents the subject, then from left to right write "0" for each father and "1" for each mother in the relation, ending with the ancestor of interest. The result will be the binary representation of the ancestor's ahnentafel number.
Then convert the binary number to decimal form. Using the Sophia example: Sophia = Peter's mother's mother's father's father's father's mother's father's father's father's father's father's mother Sophia = 1110001000001 1110001000001 = [1×4096] + [1×2048] + [1×1024] + [0×512] + [0×256] + [0×128] + [1×64] + [0×32] + [0×16] + [0×8] + [0×4] + [0×2] + [1×1] Sophia = 7233 Deductive reckoning We can also work in reverse to find what the relation is from the number. Reverse first method One starts out by seeing if the number is odd or even. If it is odd, the last part of the relation is "mother", so subtract 1 and divide by 2. If it is even, the last part is "father", and one divides by 2. Repeat steps 2–3, and build back from the last word. Once one gets to 1, one is done. On an ahnentafel of Prince William, John Wark is number 116. We follow the steps: We reverse that, and we get that #116, John Wark, is Prince William's mother's mother's father's mother's father's father. Reverse second method 1. Convert the ahnentafel number from decimal to binary, then replace the leftmost "1" with the subject's name and replace each following "0" and "1" with "father" and "mother" respectively. John Wark = 116 116 = [1×64] + [1×32] + [1×16] + [0×8] + [1×4] + [0×2] + [0×1] John Wark = 1110100 John Wark = Prince William's mother's mother's father's mother's father's father Calculation of the generation number The generation number can be calculated as the logarithm to base 2 of the ahnentafel number, and rounding down to a full integer by truncating decimal digits. For example, the number 38 is between 25=32 and 26=64, so log2(38) is between 5 and 6. This means that ancestor no.38 belongs to generation five, and was a great-great-great-grandparent of the reference person who is no.1 (generation zero). Example The example, shown below, is an ahnentafel of the Prince of Wales, listing all of his ancestors up to his fourth great-grandparents. William, Prince of Wales (born 21 June 1982) Charles III, King of the United Kingdom et al. (born 14 November 1948) Diana, Princess of Wales (1 July 1961 – 31 August 1997) Prince Philip, Duke of Edinburgh (10 June 1921 – 9 April 2021) Elizabeth II, Queen of the United Kingdom et al. (21 April 1926 - 8 September 2022) Edward Spencer, 8th Earl Spencer (24 January 1924 – 29 March 1992) Frances Roche (20 January 1936 – 3 June 2004) Prince Andrew of Greece and Denmark (20 January 1882 – 3 December 1944) Princess Alice of Battenberg (25 February 1885 – 5 December 1969) George VI, King of the United Kingdom et al. 
(14 December 1895 – 6 February 1952) Queen Elizabeth, the Queen Mother (4 August 1900 – 30 March 2002) Albert Spencer, 7th Earl Spencer (23 May 1892 – 9 June 1975) Cynthia Hamilton (16 August 1897 – 4 December 1972) Maurice Roche, 4th Baron Fermoy (15 May 1885 – 8 July 1955) Ruth Gill (2 October 1908 – 6 July 1993) George I, King of the Hellenes (24 December 1845 – 18 March 1913) Grand Duchess Olga Konstantinovna of Russia (3 September 1851 – 18 June 1926) Prince Louis of Battenberg, later Louis Mountbatten, 1st Marquess of Milford Haven (24 May 1854 – 11 September 1921) Princess Victoria of Hesse and by Rhine (5 April 1863 – 24 September 1950) George V, King of the United Kingdom (3 June 1865 – 20 January 1936) Mary of Teck (26 May 1867 – 24 March 1953) Claude Bowes-Lyon, 14th Earl of Strathmore and Kinghorne (14 March 1855 – 7 November 1944) Cecilia Cavendish-Bentinck (11 September 1862 – 23 June 1938) Charles Robert Spencer, 6th Earl Spencer (30 October 1857 – 26 September 1922) Margaret Baring (14 December 1868 – 4 July 1906) James Hamilton, 3rd Duke of Abercorn (30 November 1869 – 12 September 1953) Rosalind Bingham (26 February 1869 – 18 January 1958) James Roche, 3rd Baron Fermoy (28 July 1852 – 30 October 1920) Frances Work (27 October 1857 – 26 January 1947) Colonel William Smith Gill (16 February 1865 – 25 December 1957) Ruth Littlejohn (4 December 1879 – 24 August 1964) Christian IX, King of Denmark (8 April 1818 – 29 January 1906) Princess Louise of Hesse-Kassel (7 September 1817 – 29 September 1898) Grand Duke Konstantin Nikolayevich of Russia (9 September 1827 – 13 January 1892) Grand Duchess Aleksandra Iosifovna of Russia (8 July 1830 – 23 June 1911) Prince Alexander of Hesse and by Rhine (15 July 1823 – 15 December 1888) Julia von Hauke (12 November 1825 – 19 September 1895) Ludwig IV, Grand Duke of Hesse and by Rhine (12 September 1837 – 13 March 1892) The Princess Alice (25 April 1843 – 14 December 1878) Edward VII, King of the United Kingdom (9 November 1841 – 6 May 1910) Princess Alexandra of Denmark (1 December 1844 – 20 November 1925) Prince Francis, Duke of Teck (27 August 1837 – 21 January 1900) Princess Mary Adelaide of Cambridge (27 November 1833 – 27 October 1897) Claude Bowes-Lyon, 13th Earl of Strathmore and Kinghorne (21 July 1824 – 16 February 1904) Frances Bowes-Lyon, Countess of Strathmore and Kinghorne (1830 – 5 February 1922) Revd Charles Cavendish-Bentinck (8 November 1817 – 17 August 1865) Louisa Cavendish-Bentinck (23 Nov 1833 – 6 July 1918) Frederick Spencer, 4th Earl Spencer (14 April 1798 – 27 December 1857) Adelaide Spencer, Countess Spencer (27 January 1825 – 29 October 1877) Edward Baring, 1st Baron Revelstoke (13 April 1828 – 17 July 1897) Louisa Baring, Baroness Revelstoke (1839 – 16 October 1892) James Hamilton, 2nd Duke of Abercorn (24 August 1838 – 3 January 1913) Mary Curzon-Howe (23 July 1848 – 10 May 1929) Charles Bingham, 4th Earl of Lucan (8 May 1830 – 5 June 1914) Cecilia Bingham, Countess of Lucan (13 April 1835 – 5 October 1910) Edmond Roche, 1st Baron Fermoy (August 1815 – 17 September 1874) Elizabeth Roche, Baroness Fermoy (9 August 1821 – 26 April 1897) Frank Work (10 February 1819 – 16 March 1911) Ellen Wood (18 July 1831 – 22 February 1877) Alexander Ogston Gill Barbara Smith Marr (died ca. 
30 June 1898) David Littlejohn (3 April 1841 – 11 May 1924) Jane Crombie (died 19 September 1917) Friedrich Wilhelm, Duke of Schleswig-Holstein-Sonderburg-Glücksburg (4 January 1785 – 17 February 1831) Princess Louise Caroline of Hesse-Kassel (28 September 1789 – 13 March 1867) Landgrave Wilhelm of Hesse-Kassel (24 December 1787 – 5 September 1867) Princess Louise Charlotte of Denmark (30 October 1789 – 28 March 1864) Nicholas I, Tsar of all the Russias (25 June 1796 – 18 February 1855) Aleksandra Feodorovna, Empress of Russia (13 July 1798 – 20 October 1860) Joseph, Duke of Saxe-Altenburg (27 August 1789 – 25 January 1868) Duchess Amelia of Württemberg (28 June 1799 – 28 November 1848) Ludwig II, Grand Duke of Hesse and by Rhine (26 December 1777 – 16 June 1848) Princess Wilhelmine of Baden (10 September 1788 – 27 January 1836) Count Moritz von Hauke (26 October 1775 – 29 November 1830) Sophie Lafontaine (1790 – 27 August 1831) Prince Karl of Hesse and by Rhine (23 April 1809 – 20 March 1877) Princess Elizabeth of Prussia (18 June 1815 – 21 March 1885) Albert, Prince Consort (26 August 1819 – 14 December 1861) Queen Victoria (24 May 1819 – 22 January 1901) = 78 = 79 = 32 = 33 Duke Alexander of Württemberg (9 September 1804 – 4 July 1885) Countess Claudine Rhédey von Kis-Rhéde (21 September 1812 – 1 October 1841) Prince Adolphus, Duke of Cambridge (24 February 1774 – 8 July 1850) Princess Augusta of Hesse-Kassel (25 July 1797 – 6 April 1889) Thomas George Bowes-Lyon, Lord Glamis (6 February 1801 – 27 January 1834) Charlotte Grimstead (22 January 1797 – 19 January 1881) Oswald Smith (7 July 1794 – 18 June 1863) Henrietta Hodgson Lord Charles Bentinck (3 October 1780 – 28 April 1826) Anne Wellesley (1788 – 19 March 1875) Edwyn Burnaby (29 September 1798 – 18 July 1867) Anne Salisbury (1805 – 3 May 1881) George Spencer, 2nd Earl Spencer (1 September 1758 – 10 November 1834) Lavinia Bingham (27 July 1762 – 8 June 1831) Sir Horace Seymour (22 November 1791 – 23 November 1851) Elizabeth Palk (died 18 January 1827) Henry Baring (18 January 1776 – 13 April 1848) Cecilia Windham (16 February 1803 – 2 September 1874) John Crocker Bulteel (died 10 September 1843) Elizabeth Grey (10 July 1798 – 8 November 1880) James Hamilton, 1st Duke of Abercorn (21 January 1811 – 31 October 1885) Louisa Russell (8 July 1812 – 31 March 1905) Richard Curzon-Howe, 1st Earl Howe (11 December 1796 – 12 May 1870) Anne Gore (8 March 1817 – 23 July 1877) George Bingham, 3rd Earl of Lucan (16 April 1800 – 10 November 1888) Anne Bingham, Countess of Lucan née Lady Anne Brudenell (29 June 1809 – 2 April 1877) Charles Gordon-Lennox, 5th Duke of Richmond (3 August 1791 – 21 October 1860) Caroline Paget (6 June 1796 – 12 March 1874) Edward Roche (13 July 1771 – 21 March 1855) Margaret Curtain (1786 – 21 January 1862) James Boothby (10 February 1791 – 28 October 1850) Charlotte Cunningham (1799 – 22 January 1893) John Wark (1783 – 16 April 1823) Sarah Duncan Boude (15 December 1790 – 17 December 1860) John Wood (29 July 1785 – 29 January 1848) Eleanor Strong (ca. 1803 – 9 July 1863) David Gill Sarah Ogston William Smith Marr (27 November 1810 – 13 February 1898) Helen Bean (1814/5 – 20 July 1852) William Littlejohn (12 August 1803 – 8 July 1888) Janet Bentley (26 January 1811 – 1 October 1848) James Crombie (13 January 1810 – 31 January 1878) Katharine Forbes (1 December 1812 – 10 April 1893) The same information in a tree: Multiple numbers for the same person An ancestor may have two or more numbers due to pedigree collapse. 
For example, in the above Ahnentafel for Prince William, Queen Victoria is both no.79 and no.81. She is no.79 because she was the great-great-grandmother of William's grandfather Prince Philip, and she is also no.81 because she was the great-great-grandmother of William's grandmother Queen Elizabeth II. The relationships are easier to follow using the ancestry tree with ahnentafel numbering. Other German definitions European nobility took pride in displaying their descent. In the German language, the term Ahnentafel may refer to a list of coats of arms and names of one's ancestors, even when it does not follow the numbered tabular representation given above. In this case, the German "Tafel" is taken literally to be a physical "display board" instead of an abstract scheme. In Nazi Germany, the Law for the Restoration of the Professional Civil Service required a person to prove non-Jewish ancestry with an Ariernachweis (Aryan certificate). The certificate could take the form of entries in the permanent Ahnenpass (that was sorted according to the ahnentafel numbering system) or as entries in a singular Arierschein (Aryan attestation) that was titled "Ahnentafel". Software See also Binary heap: a computer data structure that uses the same formulas as an Ahnentafel to represent a binary tree in linear memory. Cousin chart (Table of consanguinity) Family tree Genealogical numbering systems Genealogy software Genogram Pedigree chart Pedigree collapse Progenitor Seize quartiers References Genealogy Family trees
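The numbering rules described in this article (a father's number is double, a mother's is double plus one, and the ancestor path reads off as a binary number) are easy to put into code. The following is a minimal, editor-supplied Python sketch; the function names and the use of "father"/"mother" strings are illustrative choices, not part of the article:

```python
from math import floor, log2

def ahnentafel_number(path):
    """Path of 'father'/'mother' steps from the subject -> ahnentafel number."""
    n = 1                       # the subject is No. 1
    for step in path:
        n = 2 * n if step == "father" else 2 * n + 1
    return n

def ahnentafel_path(number):
    """Ahnentafel number -> path of steps from the subject outward."""
    steps = []
    while number > 1:
        steps.append("father" if number % 2 == 0 else "mother")
        number //= 2
    return list(reversed(steps))

def generation(number):
    """Generation count: floor of log2 of the ahnentafel number."""
    return floor(log2(number))

# Sophia of Hanover on Peter Phillips's ahnentafel (worked example in the text):
sophia = ["mother", "mother", "father", "father", "father", "mother",
          "father", "father", "father", "father", "father", "mother"]
assert ahnentafel_number(sophia) == 7233

# John Wark, No. 116 on Prince William's ahnentafel:
assert ahnentafel_path(116) == ["mother", "mother", "father",
                                "mother", "father", "father"]
assert generation(38) == 5      # No. 38 is a great-great-great-grandparent
```

This is the same arithmetic as the binary-string method: the leading 1 stands for the subject, and each father or mother step appends a 0 or 1 bit.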
Ahnentafel
[ "Biology" ]
3,602
[ "Phylogenetics", "Genealogy" ]
157,835
https://en.wikipedia.org/wiki/Water%20table
The water table is the upper surface of the zone of saturation. The zone of saturation is where the pores and fractures of the ground are saturated with groundwater, which may be fresh, saline, or brackish, depending on the locality. It can also be simply explained as the depth below which the ground is saturated. The water table is the surface where the water pressure head is equal to the atmospheric pressure (where gauge pressure = 0). It may be visualized as the "surface" of the subsurface materials that are saturated with groundwater in a given vicinity. The groundwater may be from precipitation or from groundwater flowing into the aquifer. In areas with sufficient precipitation, water infiltrates through pore spaces in the soil, passing through the unsaturated zone. At increasing depths, water fills in more of the pore spaces in the soils, until a zone of saturation is reached. Below the water table, in the phreatic zone (zone of saturation), layers of permeable rock that yield groundwater are called aquifers. In less permeable soils, such as tight bedrock formations and historic lakebed deposits, the water table may be more difficult to define. “Water table” and “water level” are not synonymous. If a deeper aquifer has a lower permeable unit that confines the upward flow, then the water level in this aquifer may rise to a level that is greater or less than the elevation of the actual water table. The elevation of the water in this deeper well is dependent upon the pressure in the deeper aquifer and is referred to as the potentiometric surface, not the water table. Formation The water table may vary due to seasonal changes such as precipitation and evapotranspiration. In undeveloped regions with permeable soils that receive sufficient amounts of precipitation, the water table typically slopes toward rivers that act to drain the groundwater away and release the pressure in the aquifer. Springs, rivers, lakes and oases occur when the water table reaches the surface. Groundwater entering rivers and lakes accounts for the base-flow water levels in water bodies. Surface topography Within an aquifer, the water table is rarely horizontal, but reflects the surface relief due to the capillary effect (capillary fringe) in soils, sediments and other porous media. In the aquifer, groundwater flows from points of higher pressure to points of lower pressure, and the direction of groundwater flow typically has both a horizontal and a vertical component. The slope of the water table is known as the “hydraulic gradient”, which depends on the rate at which water is added to and removed from the aquifer and the permeability of the material. The water table does not always mimic the topography due to variations in the underlying geological structure (e.g., folded, faulted, fractured bedrock). Perched water tables A perched water table (or perched aquifer) is an aquifer that occurs above the regional water table. This occurs when there is an impermeable layer of rock or sediment (aquiclude) or relatively impermeable layer (aquitard) above the main water table/aquifer but below the land surface. If a perched aquifer's flow intersects the surface, at a valley wall, for example, the water is discharged as a spring. Fluctuations Tidal On low-lying oceanic islands with porous soil, freshwater tends to collect in lenticular pools on top of the denser seawater intruding from the sides of the islands. Such an island's freshwater lens, and thus the water table, rises and falls with the tides. 
Seasonal In some regions, for example, Great Britain or California, winter precipitation is often higher than summer precipitation and so the groundwater storage is not fully recharged in summer. Consequently, the water table is lower during the summer. This disparity between the level of the winter and summer water table is known as the "zone of intermittent saturation", wherein the water table will fluctuate in response to climatic conditions. Long-term Fossil water is groundwater that has remained in an aquifer for several millennia and occurs mainly in deserts. It is non-renewable by present-day rainfall due to its depth below the surface, and any extraction causes a permanent change in the water table in such regions. Effects on crop yield Most crops need a water table at a minimum depth. For some important food and fiber crops a classification was made because at shallower depths the crop suffers a yield decline. (where DWT = depth to water table in centimetres) Effects on construction A water table close to the surface affects excavation, drainage, foundations, wells and leach fields (in areas without municipal water and sanitation), and more. When excavation occurs near enough to the water table to reach its capillary action, groundwater must be removed during construction. This is conspicuous in Berlin, which is built on sandy, marshy ground, and the water table is generally 2 meters below the surface. Pink and blue pipes can often be seen carrying groundwater from construction sites into the Spree river (or canals). See also References Aquifers Hydrology Hydrogeology Irrigation Water supply Water and the environment Karst
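As a small, editor-supplied illustration of the hydraulic gradient mentioned in the surface-topography section above, the sketch below computes the gradient between two hypothetical monitoring wells from their water-table elevations and horizontal separation (the wells and numbers are invented for the example):

```python
def hydraulic_gradient(head_upgradient_m, head_downgradient_m, distance_m):
    """Hydraulic gradient = head loss / horizontal distance (dimensionless)."""
    return (head_upgradient_m - head_downgradient_m) / distance_m

# Hypothetical wells: water table at 152.0 m and 150.5 m elevation, 500 m apart.
print(hydraulic_gradient(152.0, 150.5, 500.0))   # 0.003, i.e. a 3 m drop per km
```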
Water table
[ "Chemistry", "Engineering", "Environmental_science" ]
1,080
[ "Hydrology", "Aquifers", "Environmental engineering", "Water supply", "Hydrogeology" ]
157,877
https://en.wikipedia.org/wiki/Calcium%20deficiency%20%28plant%20disorder%29
Calcium (Ca) deficiency is a plant disorder that can be caused by insufficient level of biologically available calcium in the growing medium, but is more frequently a product of low transpiration of the whole plant or more commonly the affected tissue. Plants are susceptible to such localized calcium deficiencies in low or non-transpiring tissues because calcium is not transported in the phloem. This may be due to water shortages, which slow the transportation of calcium to the plant, poor uptake of calcium through the stem, or too much nitrogen in the soil. Causes Acidic, sandy, or coarse soils often contain less calcium. Uneven soil moisture and overuse of fertilizers can also cause calcium deficiency. At times, even with sufficient calcium in the soil, it can be in an insoluble form and is then unusable by the plant or it could be attributed to a "transport protein". Soils containing high phosphorus are particularly susceptible to creating insoluble forms of calcium. Calcium and magnesium are opposed within the plant cells, and have antagonistic interactions. As a result, a homeostatic balance between Ca and Mg within the plant is necessary for optimal growth and proper development. Symptoms Calcium deficiency symptoms appear initially as localized tissue necrosis leading to stunted plant growth, necrotic leaf margins on young leaves or curling of the leaves, and eventual death of terminal buds and root tips. Generally, the new growth and rapidly growing tissues of the plant are affected first. The mature leaves are rarely if ever affected because calcium accumulates to high concentrations in older leaves. Calcium deficiencies in plants are associated with reduced height, fewer nodes, and less leaf area. Crop-specific symptoms include: Apple 'Bitter pit' – fruit skins develop pits, brown spots appear on skin and/or in flesh and taste of those areas is bitter. This usually occurs when fruit is in storage, and Bramley apples are particularly susceptible. Related to boron deficiency, "water cored" apples seldom display bitter pit effects. Cabbage, lettuce and brussels sprouts There is some evidence that plants like lettuce are more likely to experience tipburn (burned edges of leaves) if they're experiencing a deficiency of calcium. Carrot 'Cavity spot' – oval spots develop into craters which may be invaded by other disease-causing organisms. Celery Stunted growth, central leaves stunted. Tomatoes and peppers'Blossom end rot' – Symptoms start as sunken, dry decaying areas at the blossom end of the fruit, furthest away from the stem, not all fruit on a truss is necessarily affected. Sometimes rapid growth from high-nitrogen fertilizers may exacerbate blossom end rot. Water management and preventing water stress is key to minimizing its occurrence. Although it was once common knowledge that blossom end rot was caused by calcium deficiencies, there are also other proposed causes. Treatment Calcium deficiency can sometimes be rectified by adding agricultural lime to acid soils, aiming at a pH of 6.5, unless the subject plants specifically prefer acidic soil. Organic matter should be added to the soil to improve its moisture-retaining capacity. However, because of the nature of the disorder (i.e. poor transport of calcium to low transpiring tissues), the problem cannot generally be cured by the addition of calcium to the roots. In some species, the problem can be reduced by prophylactic spraying with calcium chloride of tissues at risk. 
Plant damage is difficult to reverse, so corrective action should be taken immediately; supplemental applications of calcium nitrate at 200 ppm nitrogen are one example. Soil pH should be tested, and corrected if needed, because calcium deficiency is often associated with low pH. Early fruit will generally have the worst symptoms, which typically lessen as the season progresses. Preventative measures, such as irrigating before especially high temperatures and keeping irrigation stable, will minimize the occurrence. See also Blackheart (plant disease) Plant nutrition Horticulture References Hopkins, William G., and Norman P. A. Hüner. Introduction to Plant Physiology. London: Wiley & Sons, 2009. Nguyen, Ivy. "Increasing Vitamin D2 with Ergosterol for Calcium Absorption in Sugarcane." UC Davis COSMOS, July 2009. Simon, E. W. "The Symptoms of Calcium Deficiency in Plants." New Phytologist 80 (1978): 1–15. Notes External links Blossom end rot video Example of blossom end rot on Roma tomatoes Blossom End Rot – symptoms, cause and management – The Ohio State University Extension Physiological plant disorders Biology and pharmacology of chemical elements Deficiency (Plant) Tomato diseases
Calcium deficiency (plant disorder)
[ "Chemistry", "Biology" ]
948
[ "Biology and pharmacology of chemical elements", "Pharmacology", "Biochemistry", "Properties of chemical elements" ]
157,885
https://en.wikipedia.org/wiki/Nitrogen%20deficiency
Nitrogen deficiency is a deficiency of nitrogen in plants. This can occur when organic matter with high carbon content, such as sawdust, is added to soil. Soil organisms use any nitrogen available to break down carbon sources, making nitrogen unavailable to plants. This is known as "robbing" the soil of nitrogen. All vegetables apart from nitrogen fixing legumes are prone to this disorder. Nitrogen deficiency can be prevented by using grass mowings as a mulch or foliar feeding with manure. Sowing green manure crops such as grazing rye to cover soil over the winter will help to prevent nitrogen leaching, while leguminous green manures such as winter tares will fix additional nitrogen from the atmosphere. Moreover, poor irrigation system in the field can lead to loss of nitrogen in plants as stagnant water in the field will cause the nitrogen to evaporate in the air. Symptoms Some symptoms of nitrogen deficiency (in absence or low supply) are given below : The chlorophyll content of the plant leaves is reduced which results in pale yellow color (chlorosis). Older leaves turn completely yellow. Flowering, fruitings, protein and starch contents are reduced. Reduction in protein results in stunted growth and dormant lateral buds. Disease Plants look thin, pale and the condition is called general starvation. Effect on Potato Production Symptoms of nitrogen deficiencies in plants is general chlorosis of the leaves, which is when leaves turn pale green, and leaves cup upwards quite severely in deficient plants. Nitrogen deficiencies also cause leaves to remain small, and drop prematurely, resulting in less photosynthesis occurring in the plant, and fewer, smaller tubers can form for harvest. Research done by Yara International has shown that there is a direct correlation between tuber size and yield, and the amount of plant-available nitrogen in the soil. This makes it crucial that the fields have enough nitrogen in the soil to grow a prosperous crop. However, excess nitrogen in the soil can also be harmful to potato production, influencing how well the roots are able to develop, and delays can occur in tuber initiation during the tuberization stage of potato growth. Detection The visual symptoms of nitrogen deficiency mean that it can be relatively easy to detect in some plant species. Symptoms include poor plant growth, and leaves become pale green or yellow because they are unable to make sufficient chlorophyll. Leaves in this state are said to be chlorotic. Lower leaves (older leaves) show symptoms first, since the plant will move nitrogen from older tissues to more important younger ones. Nevertheless, plants are reported to show nitrogen deficiency symptoms at different parts. For example, Nitrogen deficiency of tea is identified by retarded shoot growth and yellowing of younger leaves. However, these physical symptoms can also be caused by numerous other stresses, such as deficiencies in other nutrients, toxicity, herbicide injury, disease, insect damage or environmental conditions. Therefore, nitrogen deficiency is most reliably detected by conducting quantitative tests in addition to assessing the plant's visual symptoms. These tests include soil tests and plant tissue test. Plant tissue tests destructively sample the plant of interest. However, nitrogen deficiency can also be detected non-destructively by measuring chlorophyll content. 
Chlorophyll content tests work because leaf nitrogen content and chlorophyll concentration are closely linked, which would be expected since the majority of leaf nitrogen is contained in chlorophyll molecules. Chlorophyll content can be detected with a Chlorophyll content meter; a portable instrument that measures the greenness of leaves to estimate their relative chlorophyll concentration. Chlorophyll content can also be assessed with a chlorophyll fluorometer, which measures a chlorophyll fluorescence ratio to identify phenolic compounds that are produced in higher quantities when nitrogen is limited. These instruments can therefore be used to non-destructively test for nitrogen deficiency. Corrective Measures Fertilizers like ammonium phosphate, calcium ammonium nitrate, urea can be supplied. Foliar spray of urea can be a quick method. See also Nitrogen fixation Protein deficiency References Physiological plant disorders Deficiency (Plant) Deficiency (Plant)
Nitrogen deficiency
[ "Chemistry" ]
866
[ "Nitrogen cycle", "Metabolism" ]
157,909
https://en.wikipedia.org/wiki/Repentance
Repentance is reviewing one's actions and feeling contrition or regret for past or present wrongdoings, which is accompanied by commitment to and actual actions that show and prove a change for the better. In modern times, it is generally seen as involving a commitment to personal change and the resolve to live a more responsible and humane life. In other words, being sorry for one's misdeeds. It can also involve sorrow over a specific sin or series of sins that an individual feels guilt over, or conviction that they have committed. The practice of repentance plays an important role in the soteriological doctrines of Judaism, Christianity, and Islam. Analogous practices have been found in other world religions as well. In religious contexts, it often involves an act of confession to God or to a spiritual elder (such as a monk or priest). This confession might include an admission of guilt, a promise or intent not to repeat the offense, an attempt to make restitution for the wrong, or in some way reverse the harmful effects of the wrong where possible. Judaism Repentance (, literally, "return", pronounced tshuva or teshuva) is one element of atoning for sin in Judaism. Judaism recognizes that everybody sins on occasion, but that people can stop or minimize those occasions in the future by repenting for past transgressions. Thus, the primary purpose of repentance in Judaism is ethical self transformation. A Jewish penitent is traditionally known as a baal teshuva (lit., "master of repentance" or "master of return") (; for a woman: , baalat teshuva; plural: , baalei teshuva). An alternative modern term is hozer beteshuva () (lit., "returning in repentance"). "In a place where baalei teshuva stand", according to halakha, "even the full-fledged righteous do not stand." Christianity Repentance is a stage in Christian salvation where the believer turns away from sin. As a distinct stage in the ordo salutis its position is disputed, with some theological traditions arguing it occurs prior to faith and the Reformed theological tradition arguing it occurs after faith. In Roman Catholic theology repentance is part of the larger theological concept of penance. Islam Tawba is the Islamic concept of repenting to God due to performing any sins and misdeeds. It is a direct matter between a person and God, so there is no intercession. There is no original sin in Islam. It is the act of leaving what God has prohibited and returning to what he has commanded. The word denotes the act of being repentant for one's misdeeds, atoning for those misdeeds, and having a strong determination to forsake those misdeeds (remorse, resolution, and repentance). If someone sins against another person, restitution is required. Hinduism Dharma Shastras and Vedas advocate for self-reflection, repentance paschatapa and atonement prayaschitta. Stories such as that of Ajamila speak about forgiveness by grace of God even to the worst sinners. Buddhism The Buddha considered shame over doing wrong (Pali: hiri) and fear of the consequences of wrongdoing (Pali:otappa) as essential safeguards against falling into evil ways and further as extremely useful in the path of purification. Also recommended was the regular practice of self-assessment or wise reflection (Pali: yoniso manasikara) on one's own actions in relation to others and the bigger picture. 
In Mahayana Buddhism, one of the most common repentance verses used for reflection is Samantabhadra's Repentance Verse taken from Chapter 40 of the Flower Adornment Sutra: <poem> For all the evil deeds I have done in the past Created by my body, mouth, and mind, From beginningless greed, anger, and delusion, I now know shame and repent of them all. </poem> Hawaiian religion Hooponopono (ho-o-pono-pono) is an ancient practice in Hawaiian religion of reconciliation and forgiveness, combined with (repentance) prayers. Similar forgiveness practices were performed on islands throughout the South Pacific, including Samoa, Tahiti and New Zealand. Traditionally hooponopono is practiced by healing priests or kahuna lapaau among family members of a person who is physically ill. Modern versions are performed within the family by a family elder, or by the individual alone. See also Buß- und Bettag, Day of Repentance and Prayer Mea culpa'' Repentance Day, a public holiday of Christian prayer in Papua New Guinea Further reading References External links Quranic view on Repentance Jewish Encyclopedia: Repentance Theopedia: Repentance (conservative Calvinist perspective) Chattopadhyay, Subhasis. Review of Julia Kristeva's Hatred and Forgiveness in Prabuddha Bharata or Awakened India 121(10):721-22 (2016). ISSN 0032-6178. Edited by Swami Narasimhananda. Religious practices Religious terminology
Repentance
[ "Biology" ]
1,089
[ "Behavior", "Religious practices", "Human behavior" ]
157,932
https://en.wikipedia.org/wiki/Index%20of%20coincidence
In cryptography, coincidence counting is the technique (invented by William F. Friedman) of putting two texts side-by-side and counting the number of times that identical letters appear in the same position in both texts. This count, either as a ratio of the total or normalized by dividing by the expected count for a random source model, is known as the index of coincidence, or IC or IOC or IoC for short. Because letters in a natural language are not distributed evenly, the IC is higher for such texts than it would be for uniformly random text strings. What makes the IC especially useful is the fact that its value does not change if both texts are scrambled by the same single-alphabet substitution cipher, allowing a cryptanalyst to quickly detect that form of encryption. Calculation The index of coincidence provides a measure of how likely it is to draw two matching letters by randomly selecting two letters from a given text. The chance of drawing a given letter in the text is (number of times that letter appears / length of the text). The chance of drawing that same letter again (without replacement) is (appearances − 1 / text length − 1). The product of these two values gives you the chance of drawing that letter twice in a row. One can find this product for each letter that appears in the text, then sum these products to get a chance of drawing two of a kind. This probability can then be normalized by multiplying it by some coefficient, typically 26 in English:

$$\mathrm{IC} = c \times \left[\frac{n_a}{N}\cdot\frac{n_a-1}{N-1} + \frac{n_b}{N}\cdot\frac{n_b-1}{N-1} + \cdots + \frac{n_z}{N}\cdot\frac{n_z-1}{N-1}\right]$$

where $c$ is the normalizing coefficient (26 for English), $n_a$ is the number of times the letter "a" appears in the text, and $N$ is the length of the text. We can express the index of coincidence IC for a given letter-frequency distribution as a summation:

$$\mathrm{IC} = \frac{\sum_{i=1}^{c} n_i (n_i - 1)}{N(N-1)/c}$$

where $N$ is the length of the text and $n_1$ through $n_c$ are the frequencies (as integers) of the $c$ letters of the alphabet ($c$ = 26 for monocase English). The sum of the $n_i$ is necessarily $N$. The products $n_i(n_i-1)/2$ count the number of combinations of $n_i$ elements taken two at a time. (Actually this counts each pair twice; the extra factors of 2 occur in both numerator and denominator of the formula and thus cancel out.) Each of the $n_i$ occurrences of the $i$-th letter matches each of the remaining $n_i - 1$ occurrences of the same letter. There are a total of $N(N-1)/2$ letter pairs in the entire text, and $1/c$ is the probability of a match for each pair, assuming a uniform random distribution of the characters (the "null model"; see below). Thus, this formula gives the ratio of the total number of coincidences observed to the total number of coincidences that one would expect from the null model. The expected average value for the IC can be computed from the relative letter frequencies $f_i$ of the source language:

$$\mathrm{IC}_{\text{expected}} = \frac{\sum_{i=1}^{c} f_i^{2}}{1/c}$$

If all letters of an alphabet were equally probable, the expected index would be 1.0. The actual monographic IC for telegraphic English text is around 1.73, reflecting the unevenness of natural-language letter distributions. Sometimes values are reported without the normalizing denominator, for example about 0.067 for English; such values may be called κp ("kappa-plaintext") rather than IC, with κr ("kappa-random") used to denote the denominator (which is the expected coincidence rate for a uniform distribution of the same alphabet, 1/26 ≈ 0.0385 for English). English plaintext will generally fall somewhere in the range of 1.5 to 2.0 (normalized calculation). Application The index of coincidence is useful both in the analysis of natural-language plaintext and in the analysis of ciphertext (cryptanalysis).
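The calculation just described translates directly into code. The following is a minimal, editor-supplied Python sketch of the normalized index of coincidence (the function and variable names are illustrative, not taken from the article):

```python
from collections import Counter

def index_of_coincidence(text, c=26):
    """Normalized IC: observed coincidence rate divided by the rate 1/c
    expected for a uniform random alphabet of c letters."""
    letters = [ch for ch in text.upper() if ch.isalpha()]
    n = len(letters)
    if n < 2:
        return 0.0
    counts = Counter(letters)
    coincidences = sum(k * (k - 1) for k in counts.values())
    return c * coincidences / (n * (n - 1))

sample = "TO BE OR NOT TO BE THAT IS THE QUESTION"
print(round(index_of_coincidence(sample), 2))
# Long English samples tend toward roughly 1.7; very short samples such as
# this one fluctuate widely, while uniformly random letters tend toward 1.0.
```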
Even when only ciphertext is available for testing and plaintext letter identities are disguised, coincidences in ciphertext can be caused by coincidences in the underlying plaintext. This technique is used to cryptanalyze the Vigenère cipher, for example. For a repeating-key polyalphabetic cipher arranged into a matrix, the coincidence rate within each column will usually be highest when the width of the matrix is a multiple of the key length, and this fact can be used to determine the key length, which is the first step in cracking the system. Coincidence counting can help determine when two texts are written in the same language using the same alphabet. (This technique has been used to examine the purported Bible code). The causal coincidence count for such texts will be distinctly higher than the accidental coincidence count for texts in different languages, or texts using different alphabets, or gibberish texts. To see why, imagine an "alphabet" of only the two letters A and B. Suppose that in our "language", the letter A is used 75% of the time, and the letter B is used 25% of the time. If two texts in this language are laid side by side, then the following pairs can be expected: Overall, the probability of a "coincidence" is 62.5% (56.25% for AA + 6.25% for BB). Now consider the case when both messages are encrypted using the simple monoalphabetic substitution cipher which replaces A with B and vice versa: The overall probability of a coincidence in this situation is 62.5% (6.25% for AA + 56.25% for BB), exactly the same as for the unencrypted "plaintext" case. In effect, the new alphabet produced by the substitution is just a uniform renaming of the original character identities, which does not affect whether they match. Now suppose that only one message (say, the second) is encrypted using the same substitution cipher (A,B)→(B,A). The following pairs can now be expected: Now the probability of a coincidence is only 37.5% (18.75% for AA + 18.75% for BB). This is noticeably lower than the probability when same-language, same-alphabet texts were used. Evidently, coincidences are more likely when the most frequent letters in each text are the same. The same principle applies to real languages like English, because certain letters, like E, occur much more frequently than other letters—a fact which is used in frequency analysis of substitution ciphers. Coincidences involving the letter E, for example, are relatively likely. So when any two English texts are compared, the coincidence count will be higher than when an English text and a foreign-language text are used. This effect can be subtle. For example, similar languages will have a higher coincidence count than dissimilar languages. Also, it is not hard to generate random text with a frequency distribution similar to real text, artificially raising the coincidence count. Nevertheless, this technique can be used effectively to identify when two texts are likely to contain meaningful information in the same language using the same alphabet, to discover periods for repeating keys, and to uncover many other kinds of nonrandom phenomena within or among ciphertexts. Expected values for various languages are: Generalization The above description is only an introduction to use of the index of coincidence, which is related to the general concept of correlation. Various forms of Index of Coincidence have been devised; the "delta" I.C. 
(given by the formula above) in effect measures the autocorrelation of a single distribution, whereas a "kappa" I.C. is used when matching two text strings. Although in some applications constant factors such as N and c can be ignored, in more general situations there is considerable value in truly indexing each I.C. against the value to be expected for the null hypothesis (usually: no match and a uniform random symbol distribution), so that in every situation the expected value for no correlation is 1.0. Thus, any form of I.C. can be expressed as the ratio of the number of coincidences actually observed to the number of coincidences expected (according to the null model), using the particular test setup. From the foregoing, it is easy to see that the formula for kappa I.C. is κ = c × Σ_{j=1..N} [a_j = b_j] / N, where N is the common aligned length of the two texts A and B, and the bracketed term [a_j = b_j] is defined as 1 if the j-th letter of text A matches the j-th letter of text B, otherwise 0. A related concept, the "bulge" of a distribution, measures the discrepancy between the observed I.C. and the null value of 1.0. The number of cipher alphabets used in a polyalphabetic cipher may be estimated by dividing the expected bulge of the delta I.C. for a single alphabet by the observed bulge for the message, although in many cases (such as when a repeating key was used) better techniques are available. Example As a practical illustration of the use of I.C., suppose that we have intercepted the following ciphertext message: QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV (The grouping into five characters is just a telegraphic convention and has nothing to do with actual word lengths.) Suspecting this to be an English plaintext encrypted using a Vigenère cipher with normal A–Z components and a short repeating keyword, we can consider the ciphertext "stacked" into some number of columns, for example seven: QPWKALV RXCQZIK GRBPFAE OMFLJMS DZVDHXC XJYEBIM TRQWN… If the key size happens to have been the same as the assumed number of columns, then all the letters within a single column will have been enciphered using the same key letter, in effect a simple Caesar cipher applied to a random selection of English plaintext characters. The corresponding set of ciphertext letters should have a roughness of frequency distribution similar to that of English, although the letter identities have been permuted (shifted by a constant amount corresponding to the key letter). Therefore, if we compute the aggregate delta I.C. for all columns ("delta bar"), it should be around 1.73. On the other hand, if we have incorrectly guessed the key size (number of columns), the aggregate delta I.C. should be around 1.00. So we compute the delta I.C. for assumed key sizes from one to ten (a computation sketched in code below); the results show that the key size is most likely five. If the actual size is five, we would expect a width of ten to also report a high I.C., since each of its columns also corresponds to a simple Caesar encipherment, and the computation confirms this. 
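The key-size scan in this example can be reproduced with a short script. The following Python sketch stacks the ciphertext into columns and reports the aggregate ("delta bar") IC for each assumed key size; it reuses the article's example ciphertext, while the function names are illustrative.

```python
from collections import Counter

def column_ic(column, alphabet_size=26):
    """Normalized IC of one column of ciphertext letters."""
    n = len(column)
    if n < 2:
        return 0.0
    counts = Counter(column)
    return alphabet_size * sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

def average_column_ic(ciphertext, key_size):
    """Stack the ciphertext into `key_size` columns and average their ICs ("delta bar")."""
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    columns = [letters[i::key_size] for i in range(key_size)]
    return sum(column_ic(col) for col in columns) / key_size

ciphertext = ("QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDHXC XJYEB IMTRQ WNMEA "
              "IZRVK CVKVL XNEIC FZPZC ZZHKM LVZVZ IZRRQ WDKEC HOSNY XXLSP "
              "MYKVQ XJTDC IOMEE XDQVS RXLRL KZHOV")

# The key size whose columns look most English-like (delta bar near 1.7) is the best guess.
for k in range(1, 11):
    print(k, round(average_column_ic(ciphertext, k), 2))
```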
So we should stack the ciphertext into five columns: QPWKA LVRXC QZIKG RBPFA EOMFL JMSDZ VDH… We can now try to determine the most likely key letter for each column considered separately, by performing trial Caesar decryption of the entire column for each of the 26 possibilities A–Z for the key letter, and choosing the key letter that produces the highest correlation between the decrypted column letter frequencies and the relative letter frequencies for normal English text. That correlation, which we don't need to worry about normalizing, can be readily computed as Σ_{i=1..26} n_i f_i, where n_i are the observed column letter frequencies and f_i are the relative letter frequencies for English. When we try this, the best-fit key letters are reported to be "EVERY," which we recognize as an actual word, and using that for Vigenère decryption produces the plaintext: MUSTC HANGE MEETI NGLOC ATION FROMB RIDGE TOUND ERPAS SSINC EENEM YAGEN TSARE BELIE VEDTO HAVEB EENAS SIGNE DTOWA TCHBR IDGES TOPME ETING TIMEU NCHAN GEDXX from which one obtains: MUST CHANGE MEETING LOCATION FROM BRIDGE TO UNDERPASS SINCE ENEMY AGENTS ARE BELIEVED TO HAVE BEEN ASSIGNED TO WATCH BRIDGE STOP MEETING TIME UNCHANGED XX after word divisions have been restored at the obvious positions. "XX" are evidently "null" characters used to pad out the final group for transmission. This entire procedure could easily be packaged into an automated algorithm for breaking such ciphers. Due to normal statistical fluctuation, such an algorithm will occasionally make wrong choices, especially when analyzing short ciphertext messages. References See also Kasiski examination Riverbank Publications Topics in cryptography Cryptographic attacks Summary statistics for contingency tables
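The column-by-column key search described above can likewise be sketched in code. The English letter frequencies used below are common approximate values, not figures from this article, so the exact scores are illustrative; the ciphertext variable is assumed to be the one defined in the previous sketch.

```python
from collections import Counter

# Approximate relative frequencies of A-Z in English text (illustrative values).
ENGLISH_FREQ = [0.082, 0.015, 0.028, 0.043, 0.127, 0.022, 0.020, 0.061, 0.070,
                0.002, 0.008, 0.040, 0.024, 0.067, 0.075, 0.019, 0.001, 0.060,
                0.063, 0.091, 0.028, 0.010, 0.024, 0.002, 0.020, 0.001]

def best_caesar_shift(column):
    """Try all 26 key letters for one column and return the one whose trial
    decryption correlates best with English frequencies (score = sum n_i * f_i)."""
    counts = Counter(column)
    best_key, best_score = None, -1.0
    for key in range(26):
        # Vigenère decryption of one column: plaintext index = (ciphertext index - key) mod 26
        score = sum(count * ENGLISH_FREQ[(ord(ch) - 65 - key) % 26]
                    for ch, count in counts.items())
        if score > best_score:
            best_key, best_score = chr(65 + key), score
    return best_key

def guess_vigenere_key(ciphertext, key_size):
    letters = [ch for ch in ciphertext.upper() if ch.isalpha()]
    return "".join(best_caesar_shift(letters[i::key_size]) for i in range(key_size))

# For the worked example above, guess_vigenere_key(ciphertext, 5) should report "EVERY".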
Index of coincidence
[ "Technology" ]
2,618
[ "Cryptographic attacks", "Computer security exploits" ]
157,934
https://en.wikipedia.org/wiki/Frequency%20analysis
In cryptanalysis, frequency analysis (also known as counting letters) is the study of the frequency of letters or groups of letters in a ciphertext. The method is used as an aid to breaking classical ciphers. Frequency analysis is based on the fact that, in any given stretch of written language, certain letters and combinations of letters occur with varying frequencies. Moreover, there is a characteristic distribution of letters that is roughly the same for almost all samples of that language. For instance, given a section of English language, E, T, A and O are the most common, while Z, Q, X and J are rare. Likewise, TH, ER, ON, and AN are the most common pairs of letters (termed bigrams or digraphs), and SS, EE, TT, and FF are the most common repeats. The nonsense phrase "ETAOIN SHRDLU" represents the 12 most frequent letters in typical English language text. In some ciphers, such properties of the natural language plaintext are preserved in the ciphertext, and these patterns have the potential to be exploited in a ciphertext-only attack. Frequency analysis for simple substitution ciphers In a simple substitution cipher, each letter of the plaintext is replaced with another, and any particular letter in the plaintext will always be transformed into the same letter in the ciphertext. For instance, if all occurrences of the plaintext letter e turn into the ciphertext letter X, a ciphertext message containing numerous instances of the letter X would suggest to a cryptanalyst that X represents e. The basic use of frequency analysis is to first count the frequency of ciphertext letters and then associate guessed plaintext letters with them. More Xs in the ciphertext than anything else suggests that X corresponds to e in the plaintext, but this is not certain; t and a are also very common in English, so X might be either of them. It is unlikely to be a plaintext z or q, which are less common. Thus the cryptanalyst may need to try several combinations of mappings between ciphertext and plaintext letters. More complex use of statistics can be conceived, such as considering counts of pairs of letters (bigrams), triplets (trigrams), and so on. This is done to provide more information to the cryptanalyst; for instance, Q and U nearly always occur together in that order in English, even though Q itself is rare. An example Suppose Eve has intercepted the cryptogram below, and it is known to be encrypted using a simple substitution cipher. For this example, uppercase letters are used to denote ciphertext, lowercase letters are used to denote plaintext (or guesses at such), and X~t is used to express a guess that ciphertext letter X represents the plaintext letter t. Eve could use frequency analysis to help solve the message along the following lines: counts of the letters in the cryptogram show which single letter, which bigram and which trigram occur most often. E is the most common letter in the English language, TH is the most common bigram, and THE is the most common trigram. This strongly suggests plaintext guesses for the most frequent ciphertext letter, bigram and trigram. Since the first and second most frequent letters in the English language, e and t, are then accounted for, Eve guesses that the second most common ciphertext letter represents a, the third most frequent letter. Tentatively making these assumptions, a partial decrypted message is obtained. Using these initial guesses, Eve can spot patterns that confirm her choices. Moreover, other patterns suggest further guesses: partially decrypted fragments that fall one letter short of a common English word pin down the ciphertext letters in the remaining positions. 
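As a minimal illustration of the counting step Eve performs first, the following Python sketch tallies single-letter and bigram frequencies; the function name and sample text are illustrative, not taken from the example above.

```python
from collections import Counter

def letter_and_bigram_counts(text):
    """Tally single-letter and bigram frequencies in a text,
    the raw statistics a cryptanalyst compares against typical English."""
    letters = [ch for ch in text.upper() if ch.isalpha()]
    singles = Counter(letters)
    bigrams = Counter(a + b for a, b in zip(letters, letters[1:]))
    return singles.most_common(5), bigrams.most_common(5)

# On ordinary English, E and T dominate the single-letter counts and TH/HE
# appear near the top of the bigram counts.
singles, bigrams = letter_and_bigram_counts(
    "This is an example of counting letter and bigram frequencies in a sample text")
print(singles)
print(bigrams)
```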
Filling in these guesses and repeating the process, further patterns suggest still other mappings, and it is relatively straightforward to deduce the rest of the letters, eventually yielding the plaintext. At this point, it would be a good idea for Eve to insert spaces and punctuation: Hereupon Legrand arose, with a grave and stately air, and brought me the beetle from a glass case in which it was enclosed. It was a beautiful scarabaeus, and, at that time, unknown to naturalists—of course a great prize in a scientific point of view. There were two round black spots near one extremity of the back, and a long one near the other. The scales were exceedingly hard and glossy, with all the appearance of burnished gold. The weight of the insect was very remarkable, and, taking all things into consideration, I could hardly blame Jupiter for his opinion respecting it. In this example from "The Gold-Bug", Eve's guesses were all correct. This would not always be the case, however; the variation in statistics for individual plaintexts can mean that initial guesses are incorrect. It may be necessary to backtrack incorrect guesses or to analyze the available statistics in much more depth than the somewhat simplified justifications given in the above example. It is possible that the plaintext does not exhibit the expected distribution of letter frequencies. Shorter messages are likely to show more variation. It is also possible to construct artificially skewed texts. For example, entire novels have been written that omit the letter E altogether — a form of literature known as a lipogram. History and usage The first known recorded explanation of frequency analysis (indeed, of any kind of cryptanalysis) was given in the 9th century by Al-Kindi, an Arab polymath, in A Manuscript on Deciphering Cryptographic Messages. It has been suggested that a close textual study of the Qur'an first brought to light that Arabic has a characteristic letter frequency. Its use spread, and similar systems were widely used in European states by the time of the Renaissance. By 1474, Cicco Simonetta had written a manual on deciphering encryptions of Latin and Italian text. Several schemes were invented by cryptographers to defeat this weakness in simple substitution encryptions. These included: Homophonic substitution: Use of homophones — several alternatives to the most common letters in otherwise monoalphabetic substitution ciphers. For example, for English, both X and Y ciphertext might mean plaintext E. Polyalphabetic substitution, that is, the use of several alphabets — chosen in assorted, more or less devious, ways (Leone Alberti seems to have been the first to propose this); and Polygraphic substitution, schemes where pairs or triplets of plaintext letters are treated as units for substitution, rather than single letters, for example, the Playfair cipher invented by Charles Wheatstone in the mid-19th century. A disadvantage of all these attempts to defeat frequency counting attacks is that they increase the complication of both enciphering and deciphering, leading to mistakes. Famously, a British Foreign Secretary is said to have rejected the Playfair cipher because, even if school boys could cope successfully as Wheatstone and Playfair had shown, "our attachés could never learn it!". The rotor machines of the first half of the 20th century (for example, the Enigma machine) were essentially immune to straightforward frequency analysis. 
However, other kinds of analysis ("attacks") successfully decoded messages from some of those machines. Frequency analysis requires only a basic understanding of the statistics of the plaintext language and some problem-solving skills, and, if performed by hand, tolerance for extensive letter bookkeeping. During World War II, both the British and the Americans recruited codebreakers by placing crossword puzzles in major newspapers and running contests for who could solve them the fastest. Several of the ciphers used by the Axis powers were breakable using frequency analysis, for example, some of the consular ciphers used by the Japanese. Mechanical methods of letter counting and statistical analysis (generally IBM card type machinery) were first used in World War II, possibly by the US Army's SIS. Today, the work of letter counting and analysis is done by computer software, which can carry out such analysis in seconds. With modern computing power, classical ciphers are unlikely to provide any real protection for confidential data. Frequency analysis in fiction Frequency analysis has been described in fiction. Edgar Allan Poe's "The Gold-Bug" and Sir Arthur Conan Doyle's Sherlock Holmes tale "The Adventure of the Dancing Men" are examples of stories which describe the use of frequency analysis to attack simple substitution ciphers. The cipher in the Poe story is encrusted with several deception measures, but this is more a literary device than anything significant cryptographically. See also Index of coincidence Topics in cryptography Zipf's law A Void, a novel by Georges Perec. The original French text is written without the letter e, as is the English translation. The Spanish version contains no a. Gadsby (novel), a novel by Ernest Vincent Wright. The novel is written as a lipogram, which does not include words that contain the letter E. Further reading Helen Fouché Gaines, "Cryptanalysis", 1939, Dover. . Abraham Sinkov, "Elementary Cryptanalysis: A Mathematical Approach", The Mathematical Association of America, 1966. . References External links Online frequency analysis tool Character and syllable frequencies of 41 languages and a portable tool to create frequency and syllable distributions Arabic letter frequency analysis Conditional probabilities for characters in English text Czech letter/bigram/trigram frequency Cryptographic attacks Frequency distribution Arab inventions Quantitative linguistics
Frequency analysis
[ "Mathematics", "Technology" ]
1,946
[ "Functions and mappings", "Cryptographic attacks", "Mathematical objects", "Mathematical relations", "Frequency distribution", "Computer security exploits" ]
157,935
https://en.wikipedia.org/wiki/Plaintext
In cryptography, plaintext usually means unencrypted information pending input into cryptographic algorithms, usually encryption algorithms. This usually refers to data that is transmitted or stored unencrypted. Overview With the advent of computing, the term plaintext expanded beyond human-readable documents to mean any data, including binary files, in a form that can be viewed or used without requiring a key or other decryption device. Information—a message, document, file, etc.—if to be communicated or stored in an unencrypted form is referred to as plaintext. Plaintext is used as input to an encryption algorithm; the output is usually termed ciphertext, particularly when the algorithm is a cipher. Codetext is less often used, and almost always only when the algorithm involved is actually a code. Some systems use multiple layers of encryption, with the output of one encryption algorithm becoming "plaintext" input for the next. Secure handling Insecure handling of plaintext can introduce weaknesses into a cryptosystem by letting an attacker bypass the cryptography altogether. Plaintext is vulnerable in use and in storage, whether in electronic or paper format. Physical security means the securing of information and its storage media from physical, attack—for instance by someone entering a building to access papers, storage media, or computers. Discarded material, if not disposed of securely, may be a security risk. Even shredded documents and erased magnetic media might be reconstructed with sufficient effort. If plaintext is stored in a computer file, the storage media, the computer and its components, and all backups must be secure. Sensitive data is sometimes processed on computers whose mass storage is removable, in which case physical security of the removed disk is vital. In the case of securing a computer, useful (as opposed to handwaving) security must be physical (e.g., against burglary, brazen removal under cover of supposed repair, installation of covert monitoring devices, etc.), as well as virtual (e.g., operating system modification, illicit network access, Trojan programs). Wide availability of keydrives, which can plug into most modern computers and store large quantities of data, poses another severe security headache. A spy (perhaps posing as a cleaning person) could easily conceal one, and even swallow it if necessary. Discarded computers, disk drives and media are also a potential source of plaintexts. Most operating systems do not actually erase anything— they simply mark the disk space occupied by a deleted file as 'available for use', and remove its entry from the file system directory. The information in a file deleted in this way remains fully present until overwritten at some later time when the operating system reuses the disk space. With even low-end computers commonly sold with many gigabytes of disk space and rising monthly, this 'later time' may be months later, or never. Even overwriting the portion of a disk surface occupied by a deleted file is insufficient in many cases. Peter Gutmann of the University of Auckland wrote a celebrated 1996 paper on the recovery of overwritten information from magnetic disks; areal storage densities have gotten much higher since then, so this sort of recovery is likely to be more difficult than it was when Gutmann wrote. Modern hard drives automatically remap failing sectors, moving data to good sectors. This process makes information on those failing, excluded sectors invisible to the file system and normal applications. 
Special software, however, can still extract information from them. Some government agencies (e.g., US NSA) require that personnel physically pulverize discarded disk drives and, in some cases, treat them with chemical corrosives. This practice is not widespread outside government, however. Garfinkel and Shelat (2003) analyzed 158 second-hand hard drives they acquired at garage sales and the like, and found that less than 10% had been sufficiently sanitized. The others contained a wide variety of readable personal and confidential information. See data remanence. Physical loss is a serious problem. The US State Department, Department of Defense, and the British Secret Service have all had laptops with secret information, including in plaintext, lost or stolen. Appropriate disk encryption techniques can safeguard data on misappropriated computers or media. On occasion, even when data on host systems is encrypted, media that personnel use to transfer data between systems is plaintext because of poorly designed data policy. For example, in October 2007, HM Revenue and Customs lost CDs that contained the unencrypted records of 25 million child benefit recipients in the United Kingdom. Modern cryptographic systems resist known-plaintext or even chosen-plaintext attacks, and so may not be entirely compromised when plaintext is lost or stolen. Older systems resisted the effects of plaintext data loss on security with less effective techniques—such as padding and Russian copulation to obscure information in plaintext that could be easily guessed. See also Ciphertext Red/black concept References S. Garfinkel and A. Shelat, "Remembrance of Data Passed: A Study of Disk Sanitization Practices", IEEE Security and Privacy, January/February 2003. UK HM Revenue and Customs loses 25m records of child benefit recipients, BBC. Kissel, Richard (editor). (February 2011). NIST IR 7298 Revision 1, Glossary of Key Information Security Terms. National Institute of Standards and Technology. Cryptography
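As a footnote to the deletion behaviour described above, the following Python sketch overwrites a file's logical contents before unlinking it. It is a best-effort illustration only; as the article notes, remapped sectors, journaling file systems, SSD wear levelling and backups can all retain copies, so this is not a substitute for proper media sanitization.

```python
import os

def best_effort_overwrite_and_delete(path, passes=1, chunk_size=1024 * 1024):
    """Overwrite a file's logical contents with zeros, then unlink it.

    Only the data reachable through the file system is scrubbed; copies held
    in remapped sectors, journals, caches or backups are untouched.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                step = min(chunk_size, remaining)
                f.write(b"\x00" * step)
                remaining -= step
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)
```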
Plaintext
[ "Mathematics", "Engineering" ]
1,152
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
158,005
https://en.wikipedia.org/wiki/Genetic%20recombination
Genetic recombination (also known as genetic reshuffling) is the exchange of genetic material between different organisms which leads to production of offspring with combinations of traits that differ from those found in either parent. In eukaryotes, genetic recombination during meiosis can lead to a novel set of genetic information that can be further passed on from parents to offspring. Most recombination occurs naturally and can be classified into two types: (1) interchromosomal recombination, occurring through independent assortment of alleles whose loci are on different but homologous chromosomes (random orientation of pairs of homologous chromosomes in meiosis I); & (2) intrachromosomal recombination, occurring through crossing over. During meiosis in eukaryotes, genetic recombination involves the pairing of homologous chromosomes. This may be followed by information transfer between the chromosomes. The information transfer may occur without physical exchange (a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed) (see SDSA – Synthesis Dependent Strand Annealing pathway in Figure); or by the breaking and rejoining of DNA strands, which forms new molecules of DNA (see DHJ pathway in Figure). Recombination may also occur during mitosis in eukaryotes where it ordinarily involves the two sister chromosomes formed after chromosomal replication. In this case, new combinations of alleles are not produced since the sister chromosomes are usually identical. In meiosis and mitosis, recombination occurs between similar molecules of DNA (homologous sequences). In meiosis, non-sister homologous chromosomes pair with each other so that recombination characteristically occurs between non-sister homologues. In both meiotic and mitotic cells, recombination between homologous chromosomes is a common mechanism used in DNA repair. Gene conversion – the process during which homologous sequences are made identical also falls under genetic recombination. Genetic recombination and recombinational DNA repair also occurs in bacteria and archaea, which use asexual reproduction. Recombination can be artificially induced in laboratory (in vitro) settings, producing recombinant DNA for purposes including vaccine development. V(D)J recombination in organisms with an adaptive immune system is a type of site-specific genetic recombination that helps immune cells rapidly diversify to recognize and adapt to new pathogens. Synapsis During meiosis, synapsis (the pairing of homologous chromosomes) ordinarily precedes genetic recombination. Mechanism Genetic recombination is catalyzed by many different enzymes. Recombinases are key enzymes that catalyse the strand transfer step during recombination. RecA, the chief recombinase found in Escherichia coli, is responsible for the repair of DNA double strand breaks (DSBs). In yeast and other eukaryotic organisms there are two recombinases required for repairing DSBs. The RAD51 protein is required for mitotic and meiotic recombination, whereas the DNA repair protein, DMC1, is specific to meiotic recombination. In the archaea, the ortholog of the bacterial RecA protein is RadA. Bacterial recombination Bacteria regularly undergo genetic recombination in three main ways: Transformation, the uptake of exogenous DNA from the surrounding environment. Transduction, the virus-mediated transfer of DNA between bacteria. Conjugation, the transfer of DNA from one bacterium to another via cell-to-cell contact. 
Sometimes a strand of DNA is transferred into the target cell but fails to be copied as the target divides. This is called an abortive transfer. Chromosomal crossover In eukaryotes, recombination during meiosis is facilitated by chromosomal crossover. The crossover process leads to offspring having different combinations of genes from those of their parents, and can occasionally produce new chimeric alleles. The shuffling of genes brought about by genetic recombination produces increased genetic variation. It also allows sexually reproducing organisms to avoid Muller's ratchet, in which the genomes of an asexual population tend to accumulate more deleterious mutations over time than beneficial or reversing mutations. Chromosomal crossover involves recombination between the paired chromosomes inherited from each of one's parents, generally occurring during meiosis. During prophase I (pachytene stage) the four available chromatids are in tight formation with one another. While in this formation, homologous sites on two chromatids can closely pair with one another, and may exchange genetic information. Because there is a small probability of recombination at any location along a chromosome, the frequency of recombination between two locations depends on the distance separating them. Therefore, for genes sufficiently distant on the same chromosome, the amount of crossover is high enough to destroy the correlation between alleles. Tracking the movement of genes resulting from crossovers has proven quite useful to geneticists. Because two genes that are close together are less likely to become separated than genes that are farther apart, geneticists can deduce roughly how far apart two genes are on a chromosome if they know the frequency of the crossovers. Geneticists can also use this method to infer the presence of certain genes. Genes that typically stay together during recombination are said to be linked. One gene in a linked pair can sometimes be used as a marker to deduce the presence of the other gene. This is typically used to detect the presence of a disease-causing gene. The recombination frequency between two loci observed is the crossing-over value. It is the frequency of crossing over between two linked gene loci (markers), and depends on the distance between the genetic loci observed. For any fixed set of genetic and environmental conditions, recombination in a particular region of a linkage structure (chromosome) tends to be constant, and the same is then true for the crossing-over value which is used in the production of genetic maps. Gene conversion In gene conversion, a section of genetic material is copied from one chromosome to another, without the donating chromosome being changed. Gene conversion occurs at high frequency at the actual site of the recombination event during meiosis. It is a process by which a DNA sequence is copied from one DNA helix (which remains unchanged) to another DNA helix, whose sequence is altered. Gene conversion has often been studied in fungal crosses where the 4 products of individual meioses can be conveniently observed. Gene conversion events can be distinguished as deviations in an individual meiosis from the normal 2:2 segregation pattern (e.g. a 3:1 pattern). Nonhomologous recombination Recombination can occur between DNA sequences that contain no sequence homology. This can cause chromosomal translocations, sometimes leading to cancer. 
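To make the mapping between crossover frequency and map distance concrete, here is a minimal Python sketch; the offspring counts are hypothetical and the function name is illustrative, not taken from any genetics package.

```python
def recombination_frequency(parental_count, recombinant_count):
    """Recombination frequency between two linked loci: the fraction of
    offspring showing recombinant (non-parental) allele combinations.
    Expressed as a percentage; 1% is conventionally one map unit (centimorgan)."""
    total = parental_count + recombinant_count
    return 100.0 * recombinant_count / total

# Hypothetical cross: 830 parental-type offspring, 170 recombinant-type offspring.
rf = recombination_frequency(830, 170)
print(f"Recombination frequency: {rf:.1f}%  (~{rf:.1f} map units)")
```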
In B cells B cells of the immune system perform genetic recombination, called immunoglobulin class switching. It is a biological mechanism that changes an antibody from one class to another, for example, from an isotype called IgM to an isotype called IgG. Genetic engineering In genetic engineering, recombination can also refer to artificial and deliberate recombination of disparate pieces of DNA, often from different organisms, creating what is called recombinant DNA. A prime example of such a use of genetic recombination is gene targeting, which can be used to add, delete or otherwise change an organism's genes. This technique is important to biomedical researchers as it allows them to study the effects of specific genes. Techniques based on genetic recombination are also applied in protein engineering to develop new proteins of biological interest. Examples include Restriction enzyme mediated integration, Gibson assembly and Golden Gate Cloning. Recombinational repair DNA damages caused by a variety of exogenous agents (e.g. UV light, X-rays, chemical cross-linking agents) can be repaired by homologous recombinational repair (HRR). These findings suggest that DNA damages arising from natural processes, such as exposure to reactive oxygen species that are byproducts of normal metabolism, are also repaired by HRR. In humans, deficiencies in the gene products necessary for HRR during meiosis likely cause infertility. In humans, deficiencies in gene products necessary for HRR, such as BRCA1 and BRCA2, increase the risk of cancer (see DNA repair-deficiency disorder). In bacteria, transformation is a process of gene transfer that ordinarily occurs between individual cells of the same bacterial species. Transformation involves integration of donor DNA into the recipient chromosome by recombination. This process appears to be an adaptation for repairing DNA damages in the recipient chromosome by HRR. Transformation may provide a benefit to pathogenic bacteria by allowing repair of DNA damage, particularly damages that occur in the inflammatory, oxidizing environment associated with infection of a host. When two or more viruses, each containing lethal genomic damages, infect the same host cell, the virus genomes can often pair with each other and undergo HRR to produce viable progeny. This process, referred to as multiplicity reactivation, has been studied in lambda and T4 bacteriophages, as well as in several pathogenic viruses. In the case of pathogenic viruses, multiplicity reactivation may be an adaptive benefit to the virus since it allows the repair of DNA damages caused by exposure to the oxidizing environment produced during host infection. See also reassortment. Meiotic recombination A molecular model for the mechanism of meiotic recombination presented by Anderson and Sekelsky is outlined in the first figure in this article. Two of the four chromatids present early in meiosis (prophase I) are paired with each other and able to interact. Recombination, in this model, is initiated by a double-strand break (or gap) shown in the DNA molecule (chromatid) at the top of the figure. Other types of DNA damage may also initiate recombination. For instance, an inter-strand cross-link (caused by exposure to a cross-linking agent such as mitomycin C) can be repaired by HRR. Two types of recombinant product are produced. 
Indicated on the right side is a "crossover" (CO) type, where the flanking regions of the chromosomes are exchanged, and on the left side, a "non-crossover" (NCO) type where the flanking regions are not exchanged. The CO type of recombination involves the intermediate formation of two "Holliday junctions" indicated in the lower right of the figure by two X-shaped structures in each of which there is an exchange of single strands between the two participating chromatids. This pathway is labeled in the figure as the DHJ (double-Holliday junction) pathway. The NCO recombinants (illustrated on the left in the figure) are produced by a process referred to as "synthesis dependent strand annealing" (SDSA). Recombination events of the NCO/SDSA type appear to be more common than the CO/DHJ type. The NCO/SDSA pathway contributes little to genetic variation, since the arms of the chromosomes flanking the recombination event remain in the parental configuration. Thus, explanations for the adaptive function of meiosis that focus exclusively on crossing-over are inadequate to explain the majority of recombination events. Achiasmy and heterochiasmy Achiasmy is the phenomenon where autosomal recombination is completely absent in one sex of a species. Achiasmatic chromosomal segregation is well documented in male Drosophila melanogaster. The "Haldane-Huxley rule" states that achiasmy usually occurs in the heterogametic sex. Heterochiasmy occurs when recombination rates differ between the sexes of a species. In humans, each oocyte has on average 41.6 ± 11.3 recombinations, 1.63-fold higher than sperms. This sexual dimorphic pattern in recombination rate has been observed in many species. In mammals, females most often have higher rates of recombination. RNA virus recombination Numerous RNA viruses are capable of genetic recombination when at least two viral genomes are present in the same host cell. Recombination is largely responsible for RNA virus diversity and immune evasion. RNA recombination appears to be a major driving force in determining genome architecture and the course of viral evolution among picornaviridae ((+)ssRNA) (e.g. poliovirus). In the retroviridae ((+)ssRNA)(e.g. HIV), damage in the RNA genome appears to be avoided during reverse transcription by strand switching, a form of recombination. Recombination also occurs in the reoviridae (dsRNA)(e.g. reovirus), orthomyxoviridae ((-)ssRNA)(e.g. influenza virus) and coronaviridae ((+)ssRNA) (e.g. SARS). Recombination in RNA viruses appears to be an adaptation for coping with genome damage. Switching between template strands during genome replication, referred to as copy-choice recombination, was originally proposed to explain the positive correlation of recombination events over short distances in organisms with a DNA genome (see first Figure, SDSA pathway). Recombination can occur infrequently between animal viruses of the same species but of divergent lineages. The resulting recombinant viruses may sometimes cause an outbreak of infection in humans. Especially in coronaviruses, recombination may also occur even among distantly related evolutionary groups (subgenera), due to their characteristic transcription mechanism, that involves subgenomic mRNAs that are formed by template switching. When replicating its (+)ssRNA genome, the poliovirus RNA-dependent RNA polymerase (RdRp) is able to carry out recombination. 
Recombination appears to occur by a copy choice mechanism in which the RdRp switches (+)ssRNA templates during negative strand synthesis. Recombination by RdRp strand switching also occurs in the (+)ssRNA plant carmoviruses and tombusviruses. Recombination appears to be a major driving force in determining genetic variability within coronaviruses, as well as the ability of coronavirus species to jump from one host to another and, infrequently, for the emergence of novel species, although the mechanism of this recombination is unclear. In early 2020, many genomic sequences of Australian SARS‐CoV‐2 isolates had deletions or mutations (29742G>A or 29742G>U; "G19A" or "G19U") in the s2m, suggesting that RNA recombination may have occurred in this RNA element. 29742G("G19"), 29744G("G21"), and 29751G("G28") were predicted as recombination hotspots. During the first months of the COVID-19 pandemic, such a recombination event was suggested to have been a critical step in the evolution of SARS-CoV-2's ability to infect humans. Linkage disequilibrium analysis confirmed that RNA recombination with the 11083G > T mutation also contributed to the increase in mutations among the viral progeny. The findings indicate that the 11083G > T mutation of SARS-CoV-2 spread during Diamond Princess shipboard quarantine and arose through de novo RNA recombination under positive selection pressure. In three patients on the Diamond Princess cruise, two mutations, 29736G > T and 29751G > T (G13 and G28), were located in the coronavirus 3′ stem-loop II-like motif (s2m) of SARS-CoV-2. Although s2m is considered an RNA motif highly conserved in the 3' untranslated region among many coronavirus species, this result also suggests that the s2m of SARS-CoV-2 is an RNA recombination/mutation hotspot. SARS-CoV-2's entire receptor binding motif appeared, based on preliminary observations, to have been introduced through recombination from coronaviruses of pangolins. However, more comprehensive analyses later refuted this suggestion and showed that SARS-CoV-2 likely evolved solely within bats and with little or no recombination. Role of recombination in the origin of life Nowak and Ohtsuki noted that the origin of life (abiogenesis) is also the origin of biological evolution. They pointed out that all known life on earth is based on biopolymers and proposed that any theory for the origin of life must involve biological polymers that act as information carriers and catalysts. Lehman argued that recombination was an evolutionary development as ancient as the origins of life. Smail et al. proposed that in the primordial Earth, recombination played a key role in the expansion of the initially short informational polymers (presumed to be RNA) that were the precursors to life. See also Eukaryote hybrid genome Four-gamete test Homologous recombination Independent assortment Recombination frequency Recombination hotspot Site-specific recombinase technology Site-specific recombination Reassortment V(D)J recombination References External links Animations – homologous recombination: Animations showing several models of homologous recombination The Holliday Model of Genetic Recombination Animated guide to homologous recombination. Cellular processes Modification of genetic information Molecular genetics
Genetic recombination
[ "Chemistry", "Biology" ]
3,808
[ "Modification of genetic information", "Molecular genetics", "Cellular processes", "Molecular biology" ]
158,011
https://en.wikipedia.org/wiki/Lipid%20bilayer
The lipid bilayer (or phospholipid bilayer) is a thin polar membrane made of two layers of lipid molecules. These membranes form a continuous barrier around all cells. The cell membranes of almost all organisms and many viruses are made of a lipid bilayer, as are the nuclear membrane surrounding the cell nucleus, and membranes of the membrane-bound organelles in the cell. The lipid bilayer is the barrier that keeps ions, proteins and other molecules where they are needed and prevents them from diffusing into areas where they should not be. Lipid bilayers are ideally suited to this role, even though they are only a few nanometers in width, because they are impermeable to most water-soluble (hydrophilic) molecules. Bilayers are particularly impermeable to ions, which allows cells to regulate salt concentrations and pH by transporting ions across their membranes using proteins called ion pumps. Biological bilayers are usually composed of amphiphilic phospholipids that have a hydrophilic phosphate head and a hydrophobic tail consisting of two fatty acid chains. Phospholipids with certain head groups can alter the surface chemistry of a bilayer and can, for example, serve as signals as well as "anchors" for other molecules in the membranes of cells. Just like the heads, the tails of lipids can also affect membrane properties, for instance by determining the phase of the bilayer. The bilayer can adopt a solid gel phase state at lower temperatures but undergo phase transition to a fluid state at higher temperatures, and the chemical properties of the lipids' tails influence at which temperature this happens. The packing of lipids within the bilayer also affects its mechanical properties, including its resistance to stretching and bending. Many of these properties have been studied with the use of artificial "model" bilayers produced in a lab. Vesicles made by model bilayers have also been used clinically to deliver drugs. The structure of biological membranes typically includes several types of molecules in addition to the phospholipids comprising the bilayer. A particularly important example in animal cells is cholesterol, which helps strengthen the bilayer and decrease its permeability. Cholesterol also helps regulate the activity of certain integral membrane proteins. Integral membrane proteins function when incorporated into a lipid bilayer, and they are held tightly to the lipid bilayer with the help of an annular lipid shell. Because bilayers define the boundaries of the cell and its compartments, these membrane proteins are involved in many intra- and inter-cellular signaling processes. Certain kinds of membrane proteins are involved in the process of fusing two bilayers together. This fusion allows the joining of two distinct structures as in the acrosome reaction during fertilization of an egg by a sperm, or the entry of a virus into a cell. Because lipid bilayers are fragile and invisible in a traditional microscope, they are a challenge to study. Experiments on bilayers often require advanced techniques like electron microscopy and atomic force microscopy. Structure and organization When phospholipids are exposed to water, they self-assemble into a two-layered sheet with the hydrophobic tails pointing toward the center of the sheet. This arrangement results in two 'leaflets' that are each a single molecular layer. The center of this bilayer contains almost no water and excludes molecules like sugars or salts that dissolve in water. 
The assembly process and maintenance are driven by aggregation of hydrophobic molecules (also called the hydrophobic effect). This complex process includes non-covalent interactions such as van der Waals forces, electrostatic and hydrogen bonds. Cross-section analysis The lipid bilayer is very thin compared to its lateral dimensions. If a typical mammalian cell (diameter ~10 micrometers) were magnified to the size of a watermelon (~1 ft/30 cm), the lipid bilayer making up the plasma membrane would be about as thick as a piece of office paper. Despite being only a few nanometers thick, the bilayer is composed of several distinct chemical regions across its cross-section. These regions and their interactions with the surrounding water have been characterized over the past several decades with x-ray reflectometry, neutron scattering, and nuclear magnetic resonance techniques. The first region on either side of the bilayer is the hydrophilic headgroup. This portion of the membrane is completely hydrated and is typically around 0.8-0.9 nm thick. In phospholipid bilayers the phosphate group is located within this hydrated region, approximately 0.5 nm outside the hydrophobic core. In some cases, the hydrated region can extend much further, for instance in lipids with a large protein or long sugar chain grafted to the head. One common example of such a modification in nature is the lipopolysaccharide coat on a bacterial outer membrane. Next to the hydrated region is an intermediate region that is only partially hydrated. This boundary layer is approximately 0.3 nm thick. Within this short distance, the water concentration drops from 2M on the headgroup side to nearly zero on the tail (core) side. The hydrophobic core of the bilayer is typically 3-4 nm thick, but this value varies with chain length and chemistry. Core thickness also varies significantly with temperature, in particular near a phase transition. Asymmetry In many naturally occurring bilayers, the compositions of the inner and outer membrane leaflets are different. In human red blood cells, the inner (cytoplasmic) leaflet is composed mostly of phosphatidylethanolamine, phosphatidylserine and phosphatidylinositol and its phosphorylated derivatives. By contrast, the outer (extracellular) leaflet is based on phosphatidylcholine, sphingomyelin and a variety of glycolipids. In some cases, this asymmetry is based on where the lipids are made in the cell and reflects their initial orientation. The biological functions of lipid asymmetry are imperfectly understood, although it is clear that it is used in several different situations. For example, when a cell undergoes apoptosis, the phosphatidylserine — normally localised to the cytoplasmic leaflet — is transferred to the outer surface: There, it is recognised by a macrophage that then actively scavenges the dying cell. Lipid asymmetry arises, at least in part, from the fact that most phospholipids are synthesised and initially inserted into the inner monolayer: those that constitute the outer monolayer are then transported from the inner monolayer by a class of enzymes called flippases. Other lipids, such as sphingomyelin, appear to be synthesised at the external leaflet. Flippases are members of a larger family of lipid transport molecules that also includes floppases, which transfer lipids in the opposite direction, and scramblases, which randomize lipid distribution across lipid bilayers (as in apoptotic cells). 
In any case, once lipid asymmetry is established, it does not normally dissipate quickly because spontaneous flip-flop of lipids between leaflets is extremely slow. It is possible to mimic this asymmetry in the laboratory in model bilayer systems. Certain types of very small artificial vesicle will automatically make themselves slightly asymmetric, although the mechanism by which this asymmetry is generated is very different from that in cells. By utilizing two different monolayers in Langmuir-Blodgett deposition or a combination of Langmuir-Blodgett and vesicle rupture deposition it is also possible to synthesize an asymmetric planar bilayer. This asymmetry may be lost over time as lipids in supported bilayers can be prone to flip-flop. However, it has been reported that lipid flip-flop is slow compared to that of cholesterol and other smaller molecules. It has been reported that the organization and dynamics of the lipid monolayers in a bilayer are coupled. For example, introduction of obstructions in one monolayer can slow down the lateral diffusion in both monolayers. In addition, phase separation in one monolayer can also induce phase separation in the other monolayer even when the other monolayer cannot phase separate by itself. Phases and phase transitions At a given temperature a lipid bilayer can exist in either a liquid or a gel (solid) phase. All lipids have a characteristic temperature at which they transition (melt) from the gel to liquid phase. In both phases the lipid molecules are prevented from flip-flopping across the bilayer, but in liquid phase bilayers a given lipid will exchange locations with its neighbor millions of times a second. This random walk exchange allows lipids to diffuse and thus wander across the surface of the membrane. Unlike liquid phase bilayers, the lipids in a gel phase bilayer have less mobility. The phase behavior of lipid bilayers is determined largely by the strength of the attractive Van der Waals interactions between adjacent lipid molecules. Longer-tailed lipids have more area over which to interact, increasing the strength of this interaction and, as a consequence, decreasing the lipid mobility. Thus, at a given temperature, a short-tailed lipid will be more fluid than an otherwise identical long-tailed lipid. Transition temperature can also be affected by the degree of unsaturation of the lipid tails. An unsaturated double bond can produce a kink in the alkane chain, disrupting the lipid packing. This disruption creates extra free space within the bilayer that allows additional flexibility in the adjacent chains. An example of this effect can be noted in everyday life as butter, which has a large percentage of saturated fats, is solid at room temperature while vegetable oil, which is mostly unsaturated, is liquid. Most natural membranes are a complex mixture of different lipid molecules. If some of the components are liquid at a given temperature while others are in the gel phase, the two phases can coexist in spatially separated regions, rather like an iceberg floating in the ocean. This phase separation plays a critical role in biochemical phenomena because membrane components such as proteins can partition into one or the other phase and thus be locally concentrated or activated. One particularly important component of many mixed phase systems is cholesterol, which modulates bilayer permeability, mechanical strength, and biochemical interactions. 
Surface chemistry While lipid tails primarily modulate bilayer phase behavior, it is the headgroup that determines the bilayer surface chemistry. Most natural bilayers are composed primarily of phospholipids, but sphingolipids and sterols such as cholesterol are also important components. Of the phospholipids, the most common headgroup is phosphatidylcholine (PC), accounting for about half the phospholipids in most mammalian cells. PC is a zwitterionic headgroup, as it has a negative charge on the phosphate group and a positive charge on the amine but, because these local charges balance, no net charge. Other headgroups are also present to varying degrees and can include phosphatidylserine (PS) phosphatidylethanolamine (PE) and phosphatidylglycerol (PG). These alternate headgroups often confer specific biological functionality that is highly context-dependent. For instance, PS presence on the extracellular membrane face of erythrocytes is a marker of cell apoptosis, whereas PS in growth plate vesicles is necessary for the nucleation of hydroxyapatite crystals and subsequent bone mineralization. Unlike PC, some of the other headgroups carry a net charge, which can alter the electrostatic interactions of small molecules with the bilayer. Biological roles Containment and separation The primary role of the lipid bilayer in biology is to separate aqueous compartments from their surroundings. Without some form of barrier delineating “self” from “non-self”, it is difficult to even define the concept of an organism or of life. This barrier takes the form of a lipid bilayer in all known life forms except for a few species of archaea that utilize a specially adapted lipid monolayer. It has even been proposed that the very first form of life may have been a simple lipid vesicle with virtually its sole biosynthetic capability being the production of more phospholipids. The partitioning ability of the lipid bilayer is based on the fact that hydrophilic molecules cannot easily cross the hydrophobic bilayer core, as discussed in Transport across the bilayer below. The nucleus, mitochondria and chloroplasts have two lipid bilayers, while other sub-cellular structures are surrounded by a single lipid bilayer (such as the plasma membrane, endoplasmic reticula, Golgi apparatus and lysosomes). See Organelle. Prokaryotes have only one lipid bilayer - the cell membrane (also known as the plasma membrane). Many prokaryotes also have a cell wall, but the cell wall is composed of proteins or long chain carbohydrates, not lipids. In contrast, eukaryotes have a range of organelles including the nucleus, mitochondria, lysosomes and endoplasmic reticulum. All of these sub-cellular compartments are surrounded by one or more lipid bilayers and, together, typically comprise the majority of the bilayer area present in the cell. In liver hepatocytes for example, the plasma membrane accounts for only two percent of the total bilayer area of the cell, whereas the endoplasmic reticulum contains more than fifty percent and the mitochondria a further thirty percent. Signaling The most familiar form of cellular signaling is likely synaptic transmission, whereby a nerve impulse that has reached the end of one neuron is conveyed to an adjacent neuron via the release of neurotransmitters. This transmission is made possible by the action of synaptic vesicles which are, inside the cell, loaded with the neurotransmitters to be released later. 
These loaded vesicles fuse with the cell membrane at the pre-synaptic terminal and their contents are released into the space outside the cell. The contents then diffuse across the synapse to the post-synaptic terminal. Lipid bilayers are also involved in signal transduction through their role as the home of integral membrane proteins. This is an extremely broad and important class of biomolecule. It is estimated that up to a third of the human proteome are membrane proteins. Some of these proteins are linked to the exterior of the cell membrane. An example of this is the CD59 protein, which identifies cells as “self” and thus inhibits their destruction by the immune system. The HIV virus evades the immune system in part by grafting these proteins from the host membrane onto its own surface. Alternatively, some membrane proteins penetrate all the way through the bilayer and serve to relay individual signal events from the outside to the inside of the cell. The most common class of this type of protein is the G protein-coupled receptor (GPCR). GPCRs are responsible for much of the cell's ability to sense its surroundings and, because of this important role, approximately 40% of all modern drugs are targeted at GPCRs. In addition to protein- and solution-mediated processes, it is also possible for lipid bilayers to participate directly in signaling. A classic example of this is phosphatidylserine-triggered phagocytosis. Normally, phosphatidylserine is asymmetrically distributed in the cell membrane and is present only on the interior side. During programmed cell death a protein called a scramblase equilibrates this distribution, displaying phosphatidylserine on the extracellular bilayer face. The presence of phosphatidylserine then triggers phagocytosis to remove the dead or dying cell. Characterization methods The lipid bilayer is a difficult structure to study because it is so thin and fragile. To overcome these limitations, techniques have been developed to allow investigations of its structure and function. Electrical measurements Electrical measurements are a straightforward way to characterize an important function of a bilayer: its ability to segregate and prevent the flow of ions in solution. By applying a voltage across the bilayer and measuring the resulting current, the resistance of the bilayer is determined. This resistance is typically quite high (108 Ohm-cm2 or more) since the hydrophobic core is impermeable to charged species. The presence of even a few nanometer-scale holes results in a dramatic increase in current. The sensitivity of this system is such that even the activity of single ion channels can be resolved. Fluorescence microscopy A lipid bilayer cannot be seen with a traditional microscope because it is too thin, so researchers often use fluorescence microscopy. A sample is excited with one wavelength of light and observed in another, so that only fluorescent molecules with a matching excitation and emission profile will be seen. A natural lipid bilayer is not fluorescent, so at least one fluorescent dye needs to be attached to some of the molecules in the bilayer. Resolution is usually limited to a few hundred nanometers, which is unfortunately much larger than the thickness of a lipid bilayer. Electron microscopy Electron microscopy offers a higher resolution image. In an electron microscope, a beam of focused electrons interacts with the sample rather than a beam of light as in traditional microscopy. 
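Returning to the electrical measurements described above, the following worked sketch applies Ohm's law to show how the quoted specific resistance translates into the tiny leak currents that are actually measured; the voltage, patch area and resistance values are illustrative assumptions, not data from this article.

```python
def leak_current_amperes(voltage_volts, specific_resistance_ohm_cm2, area_cm2):
    """Ohmic leak current through a bilayer patch.

    The specific resistance (ohm * cm^2) divided by the patch area gives the
    absolute resistance of the patch; Ohm's law then gives the current.
    """
    resistance_ohms = specific_resistance_ohm_cm2 / area_cm2
    return voltage_volts / resistance_ohms

# Illustrative numbers: 100 mV across a 100 um x 100 um patch (1e-4 cm^2)
# with a specific resistance of 1e8 ohm * cm^2.
current = leak_current_amperes(0.1, 1e8, 1e-4)
print(f"{current:.1e} A")  # on the order of 1e-13 A, i.e. about 0.1 pA
```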
In conjunction with rapid freezing techniques, electron microscopy has also been used to study the mechanisms of inter- and intracellular transport, for instance in demonstrating that exocytotic vesicles are the means of chemical release at synapses. Nuclear magnetic resonance spectroscopy 31P-Nuclear magnetic resonance spectroscopy is widely used for studies of phospholipid bilayers and biological membranes in native conditions. The analysis of 31P-NMR spectra of lipids could provide a wide range of information about lipid bilayer packing, phase transitions (gel phase, physiological liquid crystal phase, ripple phases, non bilayer phases), lipid head group orientation/dynamics, and elastic properties of pure lipid bilayer and as a result of binding of proteins and other biomolecules. Atomic force microscopy A new method to study lipid bilayers is Atomic force microscopy (AFM). Rather than using a beam of light or particles, a very small sharpened tip scans the surface by making physical contact with the bilayer and moving across it, like a record player needle. AFM is a promising technique because it has the potential to image with nanometer resolution at room temperature and even under water or physiological buffer, conditions necessary for natural bilayer behavior. Utilizing this capability, AFM has been used to examine dynamic bilayer behavior including the formation of transmembrane pores (holes) and phase transitions in supported bilayers. Another advantage is that AFM does not require fluorescent or isotopic labeling of the lipids, since the probe tip interacts mechanically with the bilayer surface. Because of this, the same scan can image both lipids and associated proteins, sometimes even with single-molecule resolution. AFM can also probe the mechanical nature of lipid bilayers. Dual polarisation interferometry Lipid bilayers exhibit high levels of birefringence where the refractive index in the plane of the bilayer differs from that perpendicular by as much as 0.1 refractive index units. This has been used to characterise the degree of order and disruption in bilayers using dual polarisation interferometry to understand mechanisms of protein interaction. Quantum chemical calculations Lipid bilayers are complicated molecular systems with many degrees of freedom. Thus, atomistic simulation of membrane and in particular ab initio calculations of its properties is difficult and computationally expensive. Quantum chemical calculations has recently been successfully performed to estimate dipole and quadrupole moments of lipid membranes. Transport across the bilayer Passive diffusion Most polar molecules have low solubility in the hydrocarbon core of a lipid bilayer and, as a consequence, have low permeability coefficients across the bilayer. This effect is particularly pronounced for charged species, which have even lower permeability coefficients than neutral polar molecules. Anions typically have a higher rate of diffusion through bilayers than cations. Compared to ions, water molecules actually have a relatively large permeability through the bilayer, as evidenced by osmotic swelling. When a cell or vesicle with a high interior salt concentration is placed in a solution with a low salt concentration it will swell and eventually burst. Such a result would not be observed unless water was able to pass through the bilayer with relative ease. The anomalously large permeability of water through bilayers is still not completely understood and continues to be the subject of active debate. 
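To illustrate how permeability coefficients are used in practice, here is a hedged sketch of the standard steady-state relation J = P × Δc; the permeability values and the concentration gradient are order-of-magnitude assumptions for comparison only, not figures from this article.

```python
def passive_flux(permeability_cm_per_s, delta_concentration_mol_per_cm3):
    """Steady-state passive flux across a bilayer, J = P * (C_out - C_in),
    in mol per cm^2 per second, for a solute with permeability coefficient P."""
    return permeability_cm_per_s * delta_concentration_mol_per_cm3

# Illustrative comparison: water (assumed P ~ 1e-3 cm/s) versus a small ion
# (assumed P ~ 1e-12 cm/s) across the same 100 mM (1e-4 mol/cm^3) gradient.
for name, p in [("water", 1e-3), ("small ion", 1e-12)]:
    print(name, f"{passive_flux(p, 1e-4):.1e} mol cm^-2 s^-1")
```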
Small uncharged apolar molecules diffuse through lipid bilayers many orders of magnitude faster than ions or water. This applies to both fats and organic solvents like chloroform and ether. Regardless of their polar character, larger molecules diffuse more slowly across lipid bilayers than small molecules. Ion pumps and channels Two special classes of protein deal with the ionic gradients found across cellular and sub-cellular membranes in nature: ion channels and ion pumps. Both pumps and channels are integral membrane proteins that pass through the bilayer, but their roles are quite different. Ion pumps are the proteins that build and maintain the chemical gradients by utilizing an external energy source to move ions against the concentration gradient to an area of higher chemical potential. The energy source can be ATP, as is the case for the Na+-K+ ATPase. Alternatively, the energy source can be another chemical gradient already in place, as in the Ca2+/Na+ antiporter. It is through the action of ion pumps that cells are able to regulate pH via the pumping of protons. In contrast to ion pumps, ion channels do not build chemical gradients but rather dissipate them in order to perform work or send a signal. Probably the most familiar and best studied example is the voltage-gated Na+ channel, which allows conduction of an action potential along neurons. All ion channels have some sort of trigger or “gating” mechanism. In the previous example it was electrical bias, but other channels can be activated by binding a molecular agonist or through a conformational change in another nearby protein. Endocytosis and exocytosis Some molecules or particles are too large or too hydrophilic to pass through a lipid bilayer. Other molecules could pass through the bilayer but must be transported rapidly in such large numbers that channel-type transport is impractical. In both cases, these types of cargo can be moved across the cell membrane through fusion or budding of vesicles. When a vesicle is produced inside the cell and fuses with the plasma membrane to release its contents into the extracellular space, this process is known as exocytosis. In the reverse process, a region of the cell membrane will dimple inwards and eventually pinch off, enclosing a portion of the extracellular fluid to transport it into the cell. Endocytosis and exocytosis rely on very different molecular machinery to function, but the two processes are intimately linked and could not work without each other. The primary mechanism of this interdependence is the large amount of lipid material involved. In a typical cell, an area of bilayer equivalent to the entire plasma membrane travels through the endocytosis/exocytosis cycle in about half an hour. Exocytosis in prokaryotes: Membrane vesicular exocytosis, popularly known as membrane vesicle trafficking, a Nobel Prize-winning (2013) process, is traditionally regarded as a prerogative of eukaryotic cells. This view was, however, overturned by the finding that nanovesicles, popularly known as bacterial outer membrane vesicles, released by gram-negative microbes, translocate bacterial signal molecules to host or target cells to carry out multiple processes in favour of the secreting microbe, e.g. in host cell invasion and microbe–environment interactions in general. Electroporation Electroporation is the rapid increase in bilayer permeability induced by the application of a large artificial electric field across the membrane.
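The field strength needed to porate a cell can be estimated from the steady-state relation for a thin-walled spherical cell in a uniform external field, ΔV = 1.5·r·E·cos θ, where r is the cell radius and θ is the angle measured from the field direction. The sketch below is a rough order-of-magnitude estimate only; the cell radius and the poration threshold voltage are assumed, illustrative values, not figures taken from this article.

# Rough estimate of the external field needed to reach a poration-level
# transmembrane voltage at the cell pole (theta = 0), using the
# steady-state relation dV = 1.5 * r * E * cos(theta) for a spherical cell.
# The radius and threshold below are illustrative assumptions.
r = 5e-6            # assumed cell radius, meters (a 10-micrometre cell)
dV_threshold = 1.0  # assumed poration threshold, volts (order of magnitude)

E_required = dV_threshold / (1.5 * r)  # field strength at the pole, V/m
print(f"Required field ~ {E_required:.2e} V/m (~{E_required / 1e5:.1f} kV/cm)")

With these assumptions the estimate comes out near a kilovolt per centimetre, roughly the range commonly reported for cell electroporation; smaller cells or vesicles require proportionally stronger fields.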
Experimentally, electroporation is used to introduce hydrophilic molecules into cells. It is a particularly useful technique for large, highly charged molecules such as DNA, which would never passively diffuse across the hydrophobic bilayer core. Because of this, electroporation is one of the key methods of transfection as well as bacterial transformation. It has even been proposed that electroporation resulting from lightning strikes could be a mechanism of natural horizontal gene transfer. Mechanics Lipid bilayers are large enough structures to have some of the mechanical properties of liquids or solids. The area compression modulus Ka, bending modulus Kb, and edge energy can be used to describe them. Solid lipid bilayers also have a shear modulus, but like any liquid, the shear modulus is zero for fluid bilayers. These mechanical properties affect how the membrane functions. Ka and Kb affect the ability of proteins and small molecules to insert into the bilayer, and bilayer mechanical properties have been shown to alter the function of mechanically activated ion channels. Bilayer mechanical properties also govern what types of stress a cell can withstand without tearing. Although lipid bilayers can easily bend, most cannot stretch more than a few percent before rupturing. As discussed in the Structure and organization section, the hydrophobic attraction of lipid tails in water is the primary force holding lipid bilayers together. Thus, the elastic modulus of the bilayer is primarily determined by how much extra area is exposed to water when the lipid molecules are stretched apart. It is not surprising, given this understanding of the forces involved, that studies have shown that Ka varies strongly with osmotic pressure but only weakly with tail length and unsaturation. Because the forces involved are so small, it is difficult to determine Ka experimentally. Most techniques require sophisticated microscopy and very sensitive measurement equipment. In contrast to Ka, which is a measure of how much energy is needed to stretch the bilayer, Kb is a measure of how much energy is needed to bend or flex the bilayer. Formally, bending modulus is defined as the energy required to deform a membrane from its intrinsic curvature to some other curvature. Intrinsic curvature is defined by the ratio of the diameter of the head group to that of the tail group. For two-tailed PC lipids, this ratio is nearly one, so the intrinsic curvature is nearly zero. If a particular lipid has too large a deviation from zero intrinsic curvature, it will not form a bilayer and will instead form other phases such as micelles or inverted micelles. Addition of small hydrophilic molecules like sucrose into mixed lipid lamellar liposomes made from galactolipid-rich thylakoid membranes destabilises bilayers into the micellar phase. The edge energy is a measure of how much energy it takes to expose a bilayer edge to water by tearing the bilayer or creating a hole in it. The origin of this energy is the fact that creating such an interface exposes some of the lipid tails to water, but the exact orientation of these border lipids is unknown. There is some evidence that both hydrophobic (tails straight) and hydrophilic (heads curved around) pores can coexist. Fusion Fusion is the process by which two lipid bilayers merge, resulting in one connected structure. If this fusion proceeds completely through both leaflets of both bilayers, a water-filled bridge is formed and the solutions contained by the bilayers can mix.
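The elastic constants discussed in the Mechanics section above can be given a rough numerical scale. The sketch below uses literature-typical, illustrative values (an area compression modulus of about 0.2 N/m, a bending modulus of roughly 20 kT, and a rupture strain of a few percent); none of these numbers are taken from this article.

import math

# Rough numerical illustration of bilayer elastic properties.
# All input values are illustrative, literature-typical assumptions.
kT = 4.1e-21           # thermal energy at room temperature, joules
Ka = 0.2               # assumed area compression modulus, N/m
Kb = 20 * kT           # assumed bending modulus, joules
rupture_strain = 0.04  # assumed areal strain at rupture (a few percent)

# Membrane tension at which stretching reaches the rupture strain:
# tau = Ka * (dA / A0)
lysis_tension = Ka * rupture_strain
print(f"Approximate lysis tension: {lysis_tension * 1000:.0f} mN/m")

# Bending energy of closing a flat bilayer into a sphere (Helfrich model,
# zero intrinsic curvature, Gaussian curvature term ignored): E = 8*pi*Kb,
# independent of the vesicle radius.
E_vesicle = 8 * math.pi * Kb
print(f"Bending energy of a closed vesicle: {E_vesicle:.1e} J (~{E_vesicle / kT:.0f} kT)")

With these assumed inputs, the lysis tension comes out below 10 mN/m and the bending energy of a closed vesicle at a few hundred kT, independent of vesicle size.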
If instead only one leaflet from each bilayer is involved in the fusion process, the bilayers are said to be hemifused. Fusion is involved in many cellular processes, in particular in eukaryotes, since the eukaryotic cell is extensively sub-divided by lipid bilayer membranes. Exocytosis, fertilization of an egg by sperm activation, and transport of waste products to the lysosome are a few of the many eukaryotic processes that rely on some form of fusion. Even the entry of pathogens can be governed by fusion, as many bilayer-coated viruses have dedicated fusion proteins to gain entry into the host cell. There are four fundamental steps in the fusion process. First, the involved membranes must aggregate, approaching each other to within several nanometers. Second, the two bilayers must come into very close contact (within a few angstroms). To achieve this close contact, the two surfaces must become at least partially dehydrated, as the bound surface water normally present causes bilayers to strongly repel. The presence of ions, in particular divalent cations like magnesium and calcium, strongly affects this step. One of the critical roles of calcium in the body is regulating membrane fusion. Third, a destabilization must form at one point between the two bilayers, locally distorting their structures. The exact nature of this distortion is not known. One theory is that a highly curved "stalk" must form between the two bilayers. Proponents of this theory believe that it explains why phosphatidylethanolamine, a highly curved lipid, promotes fusion. Finally, in the last step of fusion, this point defect grows and the components of the two bilayers mix and diffuse away from the site of contact. The situation is further complicated when considering fusion in vivo, since biological fusion is almost always regulated by the action of membrane-associated proteins. The first of these proteins to be studied were the viral fusion proteins, which allow an enveloped virus to insert its genetic material into the host cell (enveloped viruses are those surrounded by a lipid bilayer; some others have only a protein coat). Eukaryotic cells also use fusion proteins, the best-studied of which are the SNAREs. SNARE proteins are used to direct all vesicular intracellular trafficking. Despite years of study, much is still unknown about the function of this protein class. In fact, there is still an active debate regarding whether SNAREs are linked to early docking or participate later in the fusion process by facilitating hemifusion. In studies of molecular and cellular biology, it is often desirable to artificially induce fusion. The addition of polyethylene glycol (PEG) causes fusion without significant aggregation or biochemical disruption. This procedure is now used extensively, for example by fusing B-cells with myeloma cells. The resulting “hybridoma” from this combination expresses a desired antibody as determined by the B-cell involved, but is immortalized due to the myeloma component. Fusion can also be artificially induced through electroporation in a process known as electrofusion. It is believed that this phenomenon results from the energetically active edges formed during electroporation, which can act as the local defect point to nucleate stalk growth between two bilayers. Model systems Lipid bilayers can be created artificially in the lab to allow researchers to perform experiments that cannot be done with natural bilayers.
They can also be used in the field of synthetic biology to define the boundaries of artificial cells. These synthetic systems are called model lipid bilayers. There are many different types of model bilayers, each having experimental advantages and disadvantages. They can be made with either synthetic or natural lipids. Among the most common model systems are: Black lipid membranes (BLM) Supported lipid bilayers (SLB) Vesicles Droplet Interface Bilayers (DIBs) Commercial applications To date, the most successful commercial application of lipid bilayers has been the use of liposomes for drug delivery, especially for cancer treatment. (Note: the term “liposome” is in essence synonymous with “vesicle”, except that “vesicle” is a general term for the structure, whereas “liposome” refers only to artificial, not natural, vesicles.) The basic idea of liposomal drug delivery is that the drug is encapsulated in solution inside the liposome and then injected into the patient. These drug-loaded liposomes travel through the system until they bind at the target site and rupture, releasing the drug. In theory, liposomes should make an ideal drug delivery system since they can isolate nearly any hydrophilic drug, can be grafted with molecules to target specific tissues and can be relatively non-toxic since the body possesses biochemical pathways for degrading lipids. The first generation of drug delivery liposomes had a simple lipid composition and suffered from several limitations. Circulation in the bloodstream was extremely limited due to both renal clearing and phagocytosis. Refinement of the lipid composition to tune fluidity, surface charge density, and surface hydration resulted in vesicles that adsorb fewer proteins from serum and thus are less readily recognized by the immune system. The most significant advance in this area was the grafting of polyethylene glycol (PEG) onto the liposome surface to produce “stealth” vesicles, which circulate over long times without immune or renal clearing. The first stealth liposomes were passively targeted at tumor tissues. Because tumors induce rapid and uncontrolled angiogenesis, they are especially “leaky” and allow liposomes to exit the bloodstream at a much higher rate than normal tissue would. More recently, work has been undertaken to graft antibodies or other molecular markers onto the liposome surface in the hope of actively binding them to a specific cell or tissue type. Some examples of this approach are already in clinical trials. Another potential application of lipid bilayers is the field of biosensors. Since the lipid bilayer is the barrier between the interior and exterior of the cell, it is also the site of extensive signal transduction. Researchers over the years have tried to harness this potential to develop a bilayer-based device for clinical diagnosis or bioterrorism detection. Progress has been slow in this area and, although a few companies have developed automated lipid-based detection systems, they are still targeted at the research community. These include Biacore (now GE Healthcare Life Sciences), which offers a disposable chip for utilizing lipid bilayers in studies of binding kinetics, and Nanion Inc., which has developed an automated patch clamping system. A supported lipid bilayer (SLB) as described above has achieved commercial success as a screening technique to measure the permeability of drugs.
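Bilayer-based permeability screening of this kind is commonly interpreted with a simple two-compartment picture, in which the drug crosses an artificial membrane of area A from a donor well into an acceptor well. The sketch below simulates such a model; the permeability, membrane area, well volumes and incubation time are illustrative assumptions, and the model is a generic one rather than the exact formula used by any particular commercial assay.

# Generic two-compartment model of a bilayer permeability assay.
# dCa/dt = (P * A / Va) * (Cd - Ca), with the total amount of drug conserved.
# All parameter values are illustrative assumptions.
P = 1e-6    # assumed membrane permeability, cm/s
A = 0.3     # assumed membrane area, cm^2
Vd = 0.3    # donor well volume, cm^3
Va = 0.3    # acceptor well volume, cm^3

Cd, Ca = 100.0, 0.0        # initial concentrations, arbitrary units
dt, t_end = 1.0, 4 * 3600  # one-second steps over a four-hour incubation

t = 0.0
while t < t_end:
    flux = P * A * (Cd - Ca)  # amount transferred per second
    Cd -= flux * dt / Vd
    Ca += flux * dt / Va
    t += dt

print(f"Acceptor concentration after 4 h: {Ca:.1f} (equilibrium would be 50.0)")

Fitting the measured acceptor concentration back to a model of this kind is what allows an apparent permeability coefficient to be extracted from a single end-point reading.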
The parallel artificial membrane permeability assay (PAMPA) builds on this approach, measuring the permeability across specifically formulated lipid cocktails found to be highly correlated with Caco-2 cultures, the gastrointestinal tract, the blood–brain barrier and skin. History By the early twentieth century, scientists had come to believe that cells are surrounded by a thin oil-like barrier, but the structural nature of this membrane was not known. Two experiments in 1925 laid the groundwork to fill in this gap. By measuring the capacitance of erythrocyte solutions, Hugo Fricke determined that the cell membrane was 3.3 nm thick. Although the results of this experiment were accurate, Fricke misinterpreted the data to mean that the cell membrane is a single molecular layer. Prof. Dr. Evert Gorter (1881–1954) and F. Grendel of Leiden University approached the problem from a different perspective, spreading the erythrocyte lipids as a monolayer on a Langmuir-Blodgett trough. When they compared the area of the monolayer to the surface area of the cells, they found a ratio of two to one. Later analyses showed several errors and incorrect assumptions with this experiment but, serendipitously, these errors canceled out and from this flawed data Gorter and Grendel drew the correct conclusion: that the cell membrane is a lipid bilayer. This theory was confirmed through the use of electron microscopy in the late 1950s. Although he did not publish the first electron microscopy study of lipid bilayers, J. David Robertson was the first to assert that the two dark electron-dense bands were the headgroups and associated proteins of two apposed lipid monolayers. In this body of work, Robertson put forward the concept of the “unit membrane.” This was the first time the bilayer structure had been universally assigned to all cell membranes as well as organelle membranes. Around the same time, the development of model membranes confirmed that the lipid bilayer is a stable structure that can exist independent of proteins. By “painting” a solution of lipid in organic solvent across an aperture, Mueller and Rudin were able to create an artificial bilayer and determine that this exhibited lateral fluidity, high electrical resistance and self-healing in response to puncture, all of which are properties of a natural cell membrane. A few years later, Alec Bangham showed that bilayers, in the form of lipid vesicles, could also be formed simply by exposing a dried lipid sample to water. This demonstrated that lipid bilayers form spontaneously via self-assembly and do not require a patterned support structure. In 1977, a totally synthetic bilayer membrane was prepared by Kunitake and Okahata, from a single organic compound, didodecyldimethylammonium bromide. This showed that the bilayer membrane was assembled by intermolecular forces. See also Surfactant Membrane biophysics Lipid polymorphism Lipidomics References External links LIPIDAT An extensive database of lipid physical properties Structure of Fluid Lipid Bilayers Simulations and publication links related to the cross sectional structure of lipid bilayers. Biological matter Membrane biology
Lipid bilayer
[ "Chemistry" ]
7,952
[ "Membrane biology", "Molecular biology" ]
158,142
https://en.wikipedia.org/wiki/Eta%20Canis%20Majoris
Eta Canis Majoris (η Canis Majoris, abbreviated Eta CMa, η CMa), also named Aludra, is a star in the constellation of Canis Major. Since 1943, the spectrum of this star has served as one of the stable anchor points by which other stars are classified. Nomenclature η Canis Majoris (Latinised to Eta Canis Majoris) is the star's Bayer designation. The traditional name Aludra originates from the Arabic: العذراء al-adhraa, 'the virgin'. This star, together with Epsilon Canis Majoris (Adhara), Delta Canis Majoris (Wezen) and Omicron2 Canis Majoris (Thanih al Adzari), formed Al 'Adhārā (العذاري), 'the Virgins'. In 2016, the International Astronomical Union organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN's first bulletin of July 2016 included a table of the first two batches of names approved by the WGSN, which included Aludra for this star. In Chinese, (), meaning Bow and Arrow, refers to an asterism consisting of Eta Canis Majoris, Delta Canis Majoris, HD 63032, HD 65456, Omicron Puppis, k Puppis, Epsilon Canis Majoris, Kappa Canis Majoris and Pi Puppis. Consequently, Eta Canis Majoris itself is known as (, ). Properties Eta Canis Majoris is a blue supergiant with a spectral type B5Ia. It has been the standard for this spectral class in the Morgan–Keenan system. It is likely a post-red supergiant, a star which has left its red supergiant phase and is near the end of its life. As a consequence, Eta CMa has lost a significant part of its mass, and now has a mass of either , depending on the estimate, in contrast with its initial mass of . Eta CMa shines brightly in the sky in spite of its large distance from Earth because it is intrinsically many times brighter than the Sun. It has a luminosity over 100,000 times that of the Sun and a radius around 54 times the Sun's. It has only existed for a fraction of the time the Sun has, less than 10 million years, yet is already in the last stages of its life. Eta CMa is classified as an Alpha Cygni-type variable star, and its brightness varies from magnitude +2.38 to +2.48 over a period of 4.7 days. Namesakes Both USS Aludra (AF-55), an Alstede-class stores ship, and USS Aludra (AK-72), a Crater-class cargo ship, were U.S. Navy vessels named after the star. References Alpha Cygni variables Canis Majoris, Eta B-type supergiants Canis Major Canis Majoris, 31 058350 035904 Aludra 2827 CD-29 04328
Eta Canis Majoris
[ "Astronomy" ]
647
[ "Canis Major", "Constellations" ]
158,158
https://en.wikipedia.org/wiki/Intensive%20pig%20farming
Intensive pig farming, also known as pig factory farming, is the primary method of pig production, in which grower pigs are housed indoors in group-housing or straw-lined sheds, whilst pregnant sows are housed in gestation crates or pens and give birth in farrowing crates. The use of gestation crates for pregnant sows has lowered birth production costs; gestation crates or individual stalls are used as a way to nurture and protect the animals during pregnancy. Because the animals are vulnerable during this time, with some sows more aggressive than others, the practice of separating the animals in crates keeps them from fighting and injuring each other. In addition, the case has also been made that crates make it easier for hog farmers to monitor individual sow health and administer vaccines as needed. Many of the world's largest producers of pigs (US, China, and Mexico) use gestation crates. The European Union has banned the use of gestation crates after the fourth week of pregnancy. Intensive pig farmers often cut off tails, testes or teeth of pigs without anaesthetic. Although combined use of an anesthetic and analgesic appears to be the most effective method for controlling pain associated with surgical castration, regulatory requirements and cost remain obstacles to practical application. Use of pharmaceuticals can burden producers with direct and indirect costs; the latter are associated with time delays and a potential need for additional veterinary assistance. Extra-label use of anesthetics and analgesics, while an option, is not ideal. Knowledge of effectiveness is not as great as it is for drugs approved for particular species and purposes. Extra-label use can also discourage research and development necessary to approve drugs for specific purposes. The environmental impacts of pig farming include threats to drinking water and algal bloom events. Description Intensive piggeries are generally large warehouse-like buildings or barns with little exposure to sunlight or the outdoors. Most pigs are officially entitled to less than one square meter of space each. Indoor pig systems allow many more pigs to be monitored than historical methods, ensuring lower costs and increased productivity. Buildings are ventilated and their temperature regulated. Most domestic pig varieties are susceptible to sunburn and heat stress, and all pigs lack sweat glands and cannot cool themselves. Pigs have a limited tolerance to high temperatures, and heat stress can lead to death. Maintaining a more specific temperature within the pig-tolerance range also maximizes growth and growth-to-feed ratio. Indoor piggeries have allowed pig farming to be undertaken in countries or areas with unsuitable climate or soil for outdoor pig raising. In an intensive operation, pigs no longer need access to a wallow (mud), which is their natural cooling mechanism. Intensive piggeries control temperature through ventilation or drip water systems. The way animals are housed in intensive systems varies, and depending on economic viability, dry or open time for sows can sometimes be spent in indoor pens, outdoor pens, or pastures. The pigs begin life in a farrowing or gestation crate, a small pen with a central cage, designed to allow the piglets to feed from their mother, the sow, while preventing her from moving around and crushing her piglets, and reducing aggression. The crates are so small that the pigs cannot turn around.
Artificial insemination is much more common than natural mating, as it allows up to 30-40 female pigs to be impregnated using semen from a single boar. Workers collect the semen by masturbating the boars, then insert it into the sows via a raised catheter known as a pork stork. Boars are still physically used to excite the females prior to insemination, but are prevented from actually mating. When confirmed pregnant, sows remain confined, and shortly before farrowing they are moved, with litter, to farrowing crates, where they stay until their piglets are weaned. Injections of a high-availability iron solution are often given, as sows' milk is low in iron. Vitamin D supplements are also given to compensate for the lack of sunlight. As the sows' bodies become less capable of handling the large litter sizes encouraged by the industry, the frequency of stillborn piglets generally increases with each litter. These high litter sizes have doubled the death rates of sows, and as many as 25%-50% of sow deaths have been caused by prolapse, the collapse of the sow's rectum, vagina, or uterus. Pig breeders repeat the cycle of impregnation and confinement for about 3 to 5 years or until the sow succumbs to her injuries, at which point she is slaughtered for low-grade meat products such as pies, pasties and sausage meat. Of the piglets born alive, 10% to 18% will not make it to weaning age, succumbing to disease, starvation, dehydration, or being accidentally crushed by their trapped mothers. This death toll includes the runts, unusually small piglets who are considered economically unviable and are killed by staff, typically by blunt trauma to the head. Piglets often undergo castration, earmarking, tattooing for litter identification, tail docking, and teeth clipping, procedures intended to prevent the cannibalism, instability, aggression, and tail biting induced by the cramped environment. Because anesthetic is not legally mandated and is often economically unviable, these invasive procedures are usually done without any painkillers. While wild piglets remain with their mothers for around 12 to 14 weeks, farmed piglets are weaned and removed from their mothers at between two and five weeks old. They are then placed in sheds or nursery barns, or moved directly to grow-out barns. While capable of living 10–12 years, most pigs are slaughtered when they are 5–6 months old. Indoor systems allow for the easy collection of waste. In an indoor intensive pig farm, manure can be managed through a lagoon system or other waste-management system. However, waste smell remains a problem which is difficult to manage. Pigs in the wild or on open farmland are naturally clean animals. Statistics In the UK, there are around 11,000 pig farms. Approximately 1,400 of these units house more than 1,000 pigs and contain about 85% of the total UK pig population. Because of this, the vast majority of the pork products sold in the UK come from intensive farms. In Australia, there were around 50,000 pig farms in the 1960s. Today, there are fewer than 1,400, and yet the total number of pigs bred and slaughtered for food has increased. As of 2015, 49 farms housed 60% of the country's total pig population. In the United States, three-quarters of pork comes from large operations with 5,000 or more pigs. The animals are most often kept in crowded confinement buildings without fresh air or sunshine.
Environmental impacts Intensive pig farming adversely affects the surrounding environment, mainly driven by the spread of feces and waste to surrounding neighborhoods, polluting air and water with toxic waste particles. Regulation Many countries have introduced laws to regulate treatment of intensively farmed pigs. However, there is no legal definition for free-range pigs, so retailers can label pork products as free-range without having to adhere to any standards or guidelines. Only 3% of UK pigs spend their entire lives outdoors. European Union As of 2016, European Union legislation has required that pigs be given environmental enrichment; specifically, they must have permanent access to a sufficient quantity of material to enable proper investigation and manipulation activities. Under the legislation, tail docking may only be used as a last resort. The law provides that farmers must first take measures to improve the pigs' conditions and, only where these have failed to prevent tail biting, may they tail dock. United States As of 2023, ten states have banned the use of gestation crates: Arizona, California, Colorado, Florida, Maine, Massachusetts, Michigan, Ohio, Oregon, and Rhode Island. Proposition 12, a California ballot measure passed in 2018, also bans the sale of whole, uncooked pork cuts throughout the state if the producers are noncompliant with the ban, affecting both in-state and out-of-state pig farmers. Discharge from concentrated animal feeding operations (CAFOs) is regulated by the federal Environmental Protection Agency (EPA). In 2003, the EPA revised its regulations under the Clean Water Act to include permitting requirements and discharge limitations for CAFOs. The EPA also revised the National Pollutant Discharge Elimination System (NPDES) by requiring CAFOs to apply for permits before they can discharge manure. The federal Humane Slaughter Act requires pigs to be stunned before slaughter, although compliance and enforcement are questioned. There is concern from animal liberation and welfare groups that the laws have not resulted in a prevention of animal suffering and that there are "repeated violations of the Humane Slaughter Act at dozens of slaughterhouses". Criticism Dispute regarding farming methods Intensive piggeries have been negatively contrasted with free-range systems. Such systems usually refer not to a group-pen or shedding system, but to outdoor farming systems. Those that support outdoor systems usually do so on the grounds that they are more animal-friendly and allow pigs to experience natural activities (e.g., wallowing in mud, relating to young, rooting soil). Outdoor systems are usually less economically productive due to increased space requirements and higher morbidity (though, when dealing with the killing of piglets and other groups of swine, the methods are the same). They also have a range of environmental impacts, such as denitrification of soil and erosion. Outdoor pig farming may also have welfare implications; for example, pigs kept outside may get sunburnt and are more susceptible to heat stress than in indoor systems, where air conditioning or similar can be used. Outdoor pig farming may also increase the incidence of worms and parasites in pigs. Management of these problems depends on local conditions, such as geography, climate, and the availability of skilled staff. In certain environmental conditions – for example, a temperate climate – outdoor pig farming of these breeds is possible.
However, many other breeds of pig, such as the Gloucester Old Spot and Oxford Forest, are suited to outdoor rearing, having been used in this way for centuries. Following the UK ban on sow stalls, the British Pig Executive indicates that the pig farming industry in the UK has declined. The increase in production costs has led to British pig products being more expensive than those from other countries, leading to increased imports and the need to position UK pork as a product deserving a price premium. In 1997, Grampian Country Foods, then the UK's largest pig producer, pointed out that pigmeat production costs in the UK were 44 p/kg higher than on the continent. Grampian stated that only 2 p/kg of this was due to the ban on stalls; the majority of the extra costs resulted from the then-strength of sterling and the fact that at that time meat and bone meal had been banned in the UK but not on the continent. A study by the Meat and Livestock Commission in 1999, the year that the gestation crate ban came into force, found that moving from gestation crates to group housing added just 1.6 pence to the cost of producing 1 kg of pigmeat. French and Dutch studies show that even in the higher welfare group housing systems – ones giving more space and straw – a kg of pigmeat costs less than 2 pence more to produce than in gestation crates. Sow breeding systems Organized campaigns by animal activists have focused on sow breeding systems such as the gestation crate and the farrowing crate. The gestation crate has now been banned in the UK, certain US states, and other European countries, although it remains part of pig production in much of the US and European Union. The sows selected for breeding will be confined in a gestation crate. Hogs (males) are kept confined in caged crates of the same size for the duration of their lives in order to have their sperm repeatedly extracted by workers. In an intensive system, the sow will be placed in a crate prior to insemination and will stay there for at least the start of her pregnancy, depending on each country's laws and local regulations. The typical length of the sow's pregnancy is 3 months, 3 weeks, and 3 days. In certain cases, sows may spend this time in the crate. However, a variety of farming systems are used and the time in the crate may vary from 4 weeks to the whole pregnancy. There is also current controversy and criticism of 'farrowing crates'. A farrowing crate houses the sow in one section and her piglets in another. It allows the sow to lie down and roll over to feed her piglets, but keeps her piglets in a separate section. This prevents the large sow from sitting on her piglets and killing them, which is quite common where the sow is not separated from the piglets. Sows are also prevented from being able to move other than between standing and lying. Some models of farrowing crates may allow more space than others, and allow greater interaction between sow and young. Well-designed farrowing pens in which the sow has ample space can be just as effective as crates in preventing piglet mortality. Some crates may also be designed with cost-effectiveness or efficiency in mind and therefore be smaller. Authoritative industry data indicate that moving from sow stalls to group housing added 2 pence to the cost of producing 1 kg of pigmeat. Many English fattening pigs are kept in barren conditions and are routinely tail docked.
Since 2003, EU legislation has required pigs to be given environmental enrichment and has banned routine tail docking. However, 80% of UK pigs are tail docked. In 2015, use of sow crates was made illegal on New Zealand pig farms. Effects on traditional rural communities A common criticism of intensive piggeries is that they represent a corporatization of the traditional rural lifestyle. Critics feel the rise of intensive piggeries has largely replaced family farming. In large part, this is because intensive piggeries are more economical than outdoor systems, pen systems, or the sty. In many pork-producing countries (e.g., the United States, Canada, Australia, Denmark), the use of intensive piggeries has led to market rationalization and concentration. The New York Times reported that keeping pigs and other animals in "unnaturally overcrowded" environments poses considerable health risks for workers, neighbors, and consumers. Waste management and public health concerns Contaminants from animal wastes can enter the environment through pathways such as leakage from poorly constructed manure lagoons, overflow of lagoons and runoff from recent applications of waste to farm fields during major precipitation events, or atmospheric deposition followed by dry or wet fallout. Runoff can leach through permeable soils to vulnerable aquifers that tap ground water sources for human consumption. Runoff of manure can also find its way into surface water such as lakes, streams, and ponds. An example of weather-induced runoff was reported in the wake of Hurricane Matthew. Many contaminants are present in livestock wastes, including nutrients, pathogens, veterinary pharmaceuticals and naturally excreted hormones. Improper disposal of animal carcasses and abandoned livestock facilities can also contribute to water quality problems in surrounding areas of CAFOs. Exposure to waterborne contaminants can result both from recreational use of affected surface water and from ingestion of drinking water derived from either contaminated surface water or ground water. High-risk populations are generally the very young, the elderly, pregnant women, and immunocompromised individuals. Dermal contact may cause skin, eye, or ear infections. Drinking water exposures to pathogens could occur in vulnerable private wells. At Varkensproefcentrum Sterksel in the Netherlands, a pig farm has been created that reuses its waste streams. CO2 and ammonia from the pig manure are reused to grow algae, which in turn are used to feed the pigs. Another method to reduce the effect on the environment is to switch to other breeds of pig. The enviropig is a genetically modified type of pig with the capability to digest plant phosphorus more efficiently than ordinary pigs, though the enviropig program ended in 2012 and did not reach commercial distribution. Nutrient-rich runoff from CAFOs can contribute to algal blooms in rivers, lakes and seas. The 2009 harmful algal bloom event off the coast of Brittany, France, was attributed to runoff from an intensive pig farm. North Carolina As of 2010, North Carolina housed approximately ten million hogs, most of which are located in the eastern half of the state in industrialized concentrated animal feeding operations (CAFOs). This was not the case 20 years ago.
The initial horizontal integration and the vertical integration that arose in this industry resulted in numerous issues, including issues of environmental disparity, loss of work, pollution, animal rights, and overall general public health. The most remarkable example of swine CAFO monopoly is found in the United States, where, in 2001, 50 producers had control over 70% of total pork production. In 2001, the biggest CAFO had just over 710,000 sows. Originally, Murphy Family Farms horizontally integrated the North Carolina system. They laid the groundwork for the industry to be vertically integrated. Today, the hog industry in North Carolina is led by Smithfield Foods, which has expanded into both nationwide and international production. The environmental justice problems in North Carolina's agroindustrialization of swine production seem to stem from the history of the coastal region's economy, which has relied heavily on black and low-income populations to supply the necessary agricultural labor. The industry's shift from family-owned hog farms to factory hogging has contributed to the frequent targeting of these areas. The swine production and pollution that accompany factory hogging are concentrated in the parts of North Carolina that have the highest disease rates, the least access to medical care, and the greatest need for positive education and economic development. Since hog production has become consolidated in the coastal region of N.C., the high water tables and low-lying flood plains have increased the risk and impact of hog farm pollution. A swine CAFO is made up of three parts: the hog house, the “lagoon,” and the “spray field.” Waste disposal techniques used by small-scale traditional hog farms, like using waste as fertilizer for commercially viable crops, were adopted and expanded for use by CAFOs. Lagoons are supposed to be protected with an impermeable liner, but some do not work properly. This can cause environmental damage, as seen in 1995 when a lagoon burst in North Carolina. This lagoon released 25 million gallons of noxious sludge into North Carolina's New River and killed approximately eight to ten million fish. The toxins emitted by the swine CAFOs can produce a variety of symptoms and illnesses ranging from respiratory disorders, headaches, and shortness of breath to hydrogen sulfide poisoning, bronchitis, and asthma. The potential for spray field runoff or lagoon leakage puts nearby residents in danger of contaminated drinking water, which can lead to diseases like salmonellosis, giardiasis, chlamydia, meningitis, cryptosporidiosis, worms, and influenza. Denmark Slaughterhouses and veterinarians are obliged to report pigs with injuries to the Ministry of Food, Agriculture and Fisheries, which forwards cases to the police. There were relatively few cases before 2006, but by 2008-9 there were about 300 per year. Visible injuries represent not only an animal welfare problem but also an economic one for the farmer, because part of, or occasionally the entire, carcass has to be discarded. From 2006 to 2009, the number of pigs received by slaughterhouses with injuries caused by hard objects, such as planks or chains, rose significantly. It was possibly related to a system introduced in 2006, which rewards "the rushed loading of animals onto vehicles", as well as a sharp increase in uneducated Eastern European farm workers unaware of Danish laws.
Gestation crates were sometimes used on some Danish farms to restrict the movement of sows during pregnancy, as documented by British celebrity chef Jamie Oliver in a television programme for the UK's Channel 4 in 2009. In other areas, such as bathing facilities for the pigs and floor material, Danish requirements were higher than in the UK. The practice was already prohibited for pigs exported to the UK. The use of gestation crates became illegal in Denmark (as part of the EU) in 2013. New Zealand According to Scoop, in 2009 the New Zealand pork industry was "dealt a shameful public relations slap-in-the-face after its former celebrity kingpin, Mike King, outed their farming practices as 'brutal', 'callous' and 'evil'" on a May episode of the New Zealand television show Sunday. King condemned the "appalling treatment" of factory-farmed pigs. King observed conditions inside a New Zealand piggery, and saw a dead female pig inside a gestation crate, lame and crippled pigs and others that could barely stand, pigs either extremely depressed or highly distressed, pigs with scars and injuries, and a lack of clean drinking water and food. See also Animal–industrial complex Veterinary ethics Feedback (pork industry) References External links US Government regulation CAFO Hearing 9-6-2007, Written Statement of Commissioner Blackham National Association of State Departments of Agriculture, USA – contains history of CAFO Regulations Animal Feeding Operations (AFOs) - Permitting Program, EPA's CAFO Industry Regulation North Carolina State Government Study on Ammonia Concentration in the Air Proponent, neutral, and industry-related The Pig Site – industry support site with feature articles and news, with an emphasis on intensive farming practices Purdue University food science extension Key industries: Hog farming in North Carolina LEARN NC, a program of the UNC School of Education, 2013 Criticism of intensive pig farming lovepigs.org.nz – the New Zealand group SAFE campaign to end intensive pig farming. SaveBabe.com Animals Australia's campaign to end pig factory farming featuring James Cromwell Swine Production: A Global Perspective 2/7/2007 JOHN R. MOORE - Alltech Inc. Pig Farming Compassion in World Farming, UK pork Factory farming.com Pig farming Ethically disputed business practices towards animals Intensive farming Meat industry Articles containing video clips
Intensive pig farming
[ "Chemistry" ]
4,608
[ "Eutrophication", "Intensive farming" ]
158,196
https://en.wikipedia.org/wiki/Committee
A committee or commission is a body of one or more persons subordinate to a deliberative assembly or other form of organization. A committee is not itself considered to be a form of assembly or a decision-making body. Usually, an assembly or organization sends matters to a committee as a way to explore them more fully than would be possible if the whole assembly or organization were considering them. Committees may have different functions, and their types of work differ depending on the type of organization and its needs. A member of a legislature may be delegated a committee assignment, which gives them the right to serve on a certain committee. Purpose A deliberative assembly or other organization may form a committee (or "commission") consisting of one or more persons to assist with the work of the assembly. For larger organizations, much work is done in committees. They can be a way to formally draw together people of relevant expertise from different parts of an organization who otherwise would not have a good way to share information and coordinate actions. They may have the advantage of widening viewpoints and sharing out responsibilities. They can also be appointed with experts to recommend actions in matters that require specialized knowledge or technical judgment. Functions Committees can serve several different functions: Governance In organizations considered too large for all the members to participate in decisions affecting the organization as a whole, a smaller body, such as a board of directors, is given the power to make decisions, spend money, or take actions. A governance committee is formed as a separate committee to review the performance of the board and board policy as well as nominate candidates for the board. Coordination and administration A large body may have smaller committees with more specialized functions. Examples are an audit committee, an elections committee, a finance committee, a fundraising committee, and a program committee. Large conventions or academic conferences are usually organized by a coordinating committee drawn from the membership of the organization. Research and recommendations Committees may be formed to do research and make recommendations on a potential or planned project or change. For example, an organization considering a major capital investment might create a temporary working committee of several people to review options and make recommendations to upper management or the board of directors. Discipline A committee on discipline may be used to handle disciplinary procedures for members of the organization. As a tactic for indecision As a means of public relations by sending sensitive, inconvenient, or irrelevant matters to committees, organizations may bypass, stall, or disacknowledge matters without declaring a formal policy of inaction or indifference. However, this could be considered a dilatory tactic. Power and authority Generally, committees are required to report to their parent body. They do not usually have the power to act independently unless the body that created them gives them such power. Formal procedures When a committee is formed in a formal situation, such as committees in legislatures or for corporate bodies with by-laws, a chairman (or "chair" or "chairperson") is designated for the committee. Sometimes a vice-chairman (or similar name) is also appointed. It is common for the committee chairman to organize its meetings.
Sometimes these meetings are held through videoconferencing or other means if committee members are not able to attend in person, as may be the case if they are in different parts of the country or the world. The chairman is responsible for running meetings. Duties include keeping the discussion on the appropriate subject, recognizing members to speak, and confirming what the committee has decided (through voting or by unanimous consent). Using Robert's Rules of Order Newly Revised (RONR), committees may follow informal procedures (such as not requiring motions if it is clear what is being discussed). The level of formality depends on the size and type of committee; larger committees considering crucial issues may require more formal processes. Minutes are a record of the decisions at meetings. They can be taken by a person designated as the secretary. For most organizations, committees are not required to keep formal minutes. However, some bodies require that committees take minutes, especially if the committees are public ones subject to open meeting laws. Committees may meet on a regular basis, such as weekly or more often, or meetings may be called irregularly as the need arises. The frequency of the meetings depends on the work of the committee and the needs of the parent body. When the committee completes its work, it provides the results in a report to its parent body. The report may include the methods used, the facts uncovered, the conclusions reached, and any recommendations. If the committee is not ready to report, it may provide a partial report, or the assembly may discharge the committee of the matter so that the assembly can handle it. Also, if members of the committee are not performing their duties, they may be removed or replaced by the appointing power. Whether the committee continues to exist after presenting its report depends on the type of committee. Generally, committees established by the bylaws or the organization's rules continue to exist, while committees formed for a particular purpose go out of existence after the final report. Commit (motion) In parliamentary procedure, the motion to commit (or refer) is used to refer another motion—usually a main motion—to a committee. A motion to commit should specify to which committee the matter is to be referred, and if the committee is a special committee appointed specifically for purposes of the referred motion, it should also specify the number of committee members and the method of their selection, unless that is specified in the bylaws. Any proposed amendments to the main motion that are pending at the time the motion is referred to a committee go to the committee as well. Once referred, but before the committee reports its recommendations back to the assembly, the referred motion may be removed from the committee's consideration by the motion to discharge a committee. Recommit In the United States House of Representatives, a motion to recommit can be made with or without instructions. If the motion is made without instructions, the bill or resolution is simply sent back to the committee. If the motion is made with instructions and the motion is agreed to, the chairman of the committee in question will immediately report the bill or resolution back to the whole House with the new language. In this sense, a motion to recommit with instructions is effectively an amendment.
Variations for full assembly consideration In Robert's Rules of Order Newly Revised (RONR), the motion to commit has three variations which do not turn a question over to a smaller group, but simply permit the assembly's full meeting body to consider it with the greater freedom of debate that is allowed to committees. These forms are to go into a committee of the whole, to go into a quasi-committee of the whole, and to consider informally. Passing any of these motions removes the limitations on the number of times a member can speak. The Standard Code of Parliamentary Procedure has informal consideration, but does not have "committee of the whole" or "quasi-committee of the whole". Discharge a committee In Robert's Rules of Order Newly Revised, the motion to discharge a committee is used to take a matter out of a committee's hands before the committee has made a final report on it. A committee can use this motion to discharge a subcommittee. The vote required is a majority vote if the committee has failed to report at the prescribed time or if the assembly is considering a partial report of the committee. Otherwise, it requires a majority vote with previous notice; a two-thirds vote; or a majority of the entire membership. Under The Standard Code of Parliamentary Procedure, the assembly that has referred a motion or a matter to a committee may, by a majority vote, withdraw it at any time from the committee, refer it to another committee, or decide the question itself. Types Executive committee Organizations with a large board of directors (such as international labor unions, large corporations with thousands of stockholders, or national and international organizations) may have a smaller body of the board, called an executive committee, to handle its business. The executive committee may function more like a board than an actual committee. In any case, an executive committee can only be established through a specific provision in the charter or bylaws of the entity (i.e. a board cannot appoint an executive committee without authorization to do so). Members of the executive committee may be elected by the overall franchised membership or by the board, depending on the rules of the organization, and usually consist of the CEO and the vice presidents in charge of respective directorates within the organization. However formed, an executive committee only has such powers and authority that the governing documents of the organization give it. In some cases, it may be empowered to act on behalf of the board or organization, while in others, it may only be able to make recommendations. Conference committee Governments at the national level may have a conference committee. A conference committee in a bicameral legislature is responsible for creating a compromise version of a particular bill when each house has passed a different version. A conference committee in the United States Congress is a temporary panel of negotiators from the House of Representatives and the Senate. Unless one chamber decides to accept the other's original bill, the compromise version must pass both chambers after leaving the conference committee. This committee is usually composed of the senior members of the standing committees that originally considered the legislation in each chamber. Other countries that use conference committees include France, Germany, Japan, and Switzerland. In Canada, conference committees have been unused since 1947.
In the European Union (EU) legislative process, a similar committee is called a 'Conciliation Committee', which carries out trilogue negotiations when the Council does not agree with a text amended and adopted by the European Parliament at a second reading. Although the practice has fallen out of favour in other Australian Parliaments, the Parliament of South Australia still regularly appoints a "Conference of Managers" from each House to negotiate compromises on disputed bills in private. Different use of term In organizations, the term "conference committee" may have a different meaning. This meaning may be associated with the conferences, or conventions, that the organization puts together. The committees responsible for organizing such events may be called "conference committees". Standing committee A standing committee is a subunit of a political or deliberative body established in a permanent fashion to aid the parent assembly in accomplishing its duties, for example by meeting on a specific, permanent policy domain (e.g. defence, health, or trade and industry). A standing committee is granted its scope and powers over a particular area of business by the governing documents. Standing committees meet on a regular or irregular basis depending on their function, and retain any power or oversight originally given them until subsequent official actions of the governing body (through changes to law or by-laws) disband the committee. Legislatures Most governmental legislative committees are standing committees. This phrase is used in the legislatures of the following countries: Armenia Standing Committees of the National Assembly Australia Australian House of Representatives committees Australian Senate committees Canada List of committees of the Canadian House of Commons Standing committee (Canada) China Standing Committee of the National People's Congress Special committee of the National People's Congress Politburo Standing Committee of the Chinese Communist Party Iceland List of standing committees of the Icelandic parliament Ireland Committees of the Oireachtas Hong Kong Legislative Council (Hong Kong) India Standing committee (India) Malaysia Dewan Rakyat committees Dewan Negara committees New Zealand New Zealand House of Representatives committees Thailand Parliamentary committees of Thailand United Kingdom Parliamentary committees of the United Kingdom Public bill committee United States Standing committee (United States Congress) Under the laws of the United States of America, a standing committee is a Congressional committee permanently authorized by the United States House of Representatives and United States Senate rules. The Legislative Reorganization Act of 1946 greatly reduced the number of committees, and set up the legislative committee structure still in use today, as modified by authorized changes via the orderly mechanism of rule changes. Examples in organizations Examples of standing committees in organizations are: an audit committee, an elections committee, a finance committee, a fundraising committee, a governance committee, and a program committee. Typically, the standing committees perform their work throughout the year and present their reports at the annual meeting of the organization. These committees continue to exist after presenting their reports, although the membership in the committees may change.
Nominating committee A nominating committee (or nominations committee) is a group formed for the purpose of nominating candidates for office or the board in an organization. It may consist of members from inside the organization. Sometimes a governance committee takes the role of a nominating committee. Depending on the organization, this committee may be empowered to actively seek out candidates or may only have the power to receive nominations from members and verify that the candidates are eligible. A nominating committee works similarly to an electoral college, the main difference being that the available candidates, either nominated or "written in" outside of the committee's choices, are then voted into office by the membership. It is a part of governance methods often employed by corporate bodies, business entities, and social and sporting groups, especially clubs. The intention is that they be made up of qualified and knowledgeable people representing the best interests of the membership. In the case of business entities, their directors will often be brought in from outside, and receive a benefit for their expertise. In the context of nominations for awards, a nominating committee can also be formed for the purpose of nominating persons or things held up for judgment by others as to their comparative quality or value, especially for the purpose of bestowing awards in the arts, or in application to industry's products and services. The objective is to update, set, and maintain high and possibly new standards. Steering committee A steering committee is a committee that provides guidance, direction and control to a project within an organization. The term is derived from the steering mechanism that changes the steering angle of a vehicle's wheels. Project steering committees are frequently used for guiding and monitoring IT projects in large organizations, as part of project governance. The functions of the committee might include building a business case for the project, planning, providing assistance and guidance, monitoring the progress, controlling the project scope and resolving conflicts. As with other committees, the specific duties and role of the steering committee vary among organizations. Special committee A special committee (also working, select, or ad hoc committee) is established to accomplish a particular task or to oversee a specific area in need of control or oversight. Many are research or coordination committees in type or purpose and are temporary. Some are sub-groups of a larger society with a particular area of interest, organized to meet and discuss matters pertaining to their interests. For example, a group of astronomers might be organized to discuss how to get the larger society to address near-Earth objects. A subgroup of engineers and scientists of a large project's development team could be organized to solve some particular issue with offsetting considerations and trade-offs. Once the committee makes its final report to its parent body, the special committee ceases to exist. Subcommittee A committee that is a subset of a larger committee is called a subcommittee. Committees that have a large workload may form subcommittees to further divide the work. Subcommittees report to the parent committee and not to the general assembly. Committee of the whole When the entire assembly meets as a committee to discuss or debate, this is called a "committee of the whole".
This is a procedural device most commonly used by legislative bodies to discuss an issue under the rules of a committee meeting rather than the more formal and rigid rules which would have to be followed to actually enact legislation. Central Committee "Central Committee" is the common designation of the highest organ of communist parties between two congresses. The committee was elected by the party congress and led party activities, elected the politburo and the general secretary of the communist party. See also Caucus List of IEC technical committees List of the Czech Republic Senate committees Committee for the Promotion of Virtue and the Prevention of Vice (Saudi Arabia) Parliamentary committees of the United Kingdom Popular Committees (disambiguation) Revolutionary committee (disambiguation) Standing Committees of the European Parliament United States congressional committee References Human communication Legislatures Meetings Parliamentary procedure Political communication
Committee
[ "Biology" ]
3,289
[ "Human communication", "Behavior", "Human behavior" ]
158,297
https://en.wikipedia.org/wiki/The%20Dragon%20in%20the%20Sea
The Dragon in the Sea (1956), also known as Under Pressure from its serialization, is a novel by Frank Herbert. It was first serialized in Astounding magazine from 1955 to 1956, then reworked and published as a standalone novel in 1956. A 1961 2nd printing of the Avon paperback, catalog # G-1092, was titled 21st Century Sub with the previous title in parentheses, and a short 36 page version of the novel was later collected in Eye. It is usually classified as a psychological novel. Plot In a near-future earth, the West and the East have been at war for more than a decade, and resources are running thin. The West is stealing oil from the East with specialized nuclear submarines ("subtugs") that sneak into the underwater oil fields of the East to secretly pump out the oil and bring it back. Each carrying a crew of four, these submarines undertake the most hazardous, stressful missions conceivable, and of late, the missions have been failing, with the last twenty submarines simply disappearing. The East has been very successful in planting sleepers in the West's military and command structures, and the suspicion is that sleepers are sabotaging the subs or revealing their positions once at sea. John Ramsey, a young psychologist from the Bureau of Psychology (BuPsych), is trained as an electronics operator and sent on the next mission, replacing the previous officer who went insane. His secret mission is to find the sleeper, or figure out why the crews are going insane. Major themes Herbert's portrayal of submarines towing large bags filled with surreptitiously pumped oil has been cited as an inspiration for the invention called the Dracone, for which development started in the year following Herbert's serial. Reception Galaxy reviewer Floyd C. Gale praised Dragon in the Sea as "a dramatically fascinating story. . . . [a] tense and well-written novel." Algis Budrys described it as "hypnotically fascinating," praising Herbert's "intelligence, sophistication, [and] capacity for research" as well as his "ability to write clean prose as an unobtrusive but effective vehicle for a cleanly told story." Anthony Boucher found the novel "as impressive in its cumulative depiction of a specialized scientific background as anything since Hal Clement's Mission of Gravity." Spider Robinson, reviewing a mid-1970s reissue, faulted the novel's characterizations, saying "there are no real people in it, only psychological types and syndromes walking around on legs." J. Francis McComas praised the novel in The New York Times, comparing it to Forester and Wouk and declaring, "In this fine blend of speculation and action, Mr. Herbert has created a novel that ranks with the best of modern science fiction." Awards The Dragon in the Sea tied for number thirty-four in the 1975 Locus All-Time Poll. See also List of underwater science fiction works References External links The "Under Pressure" chapter from Timothy O'Reilly's critical study of Frank Herbert, Frank Herbert Review 1956 American novels American thriller novels Novels by Frank Herbert Works originally published in Analog Science Fiction and Fact Novels first published in serial form Doubleday (publisher) books Submarines in fiction Works about petroleum Underwater novels
The Dragon in the Sea
[ "Chemistry" ]
671
[ "Petroleum", "Works about petroleum" ]
158,371
https://en.wikipedia.org/wiki/John%20Edensor%20Littlewood
John Edensor Littlewood (9 June 1885 – 6 September 1977) was a British mathematician. He worked on topics relating to analysis, number theory, and differential equations and had lengthy collaborations with G. H. Hardy, Srinivasa Ramanujan and Mary Cartwright. Biography Littlewood was born on the 9th of June 1885 in Rochester, Kent, the eldest son of Edward Thornton Littlewood and Sylvia Maud (née Ackland). In 1892, his father accepted the headmastership of a school in Wynberg, Cape Town, in South Africa, taking his family there. Littlewood returned to Britain in 1900 to attend St Paul's School in London, studying under Francis Sowerby Macaulay, an influential algebraic geometer. In 1903, Littlewood entered the University of Cambridge, studying in Trinity College. He spent his first two years preparing for the Tripos examinations which qualify undergraduates for a bachelor's degree; he emerged in 1905 as Senior Wrangler, bracketed with James Mercer (Mercer had already graduated from the University of Manchester before attending Cambridge). In 1906, after completing the second part of the Tripos, he started his research under Ernest Barnes. One of the problems that Barnes suggested to Littlewood was to prove the Riemann hypothesis, an assignment at which he did not succeed. He was elected a Fellow of Trinity College in 1908. From October 1907 to June 1910, he worked as a Richardson Lecturer in the School of Mathematics at the University of Manchester before returning to Cambridge in October 1910, where he remained for the rest of his career. He was appointed Rouse Ball Professor of Mathematics in 1928, retiring in 1950. He was elected a Fellow of the Royal Society in 1916, awarded the Royal Medal in 1929, the Sylvester Medal in 1943, and the Copley Medal in 1958. He was president of the London Mathematical Society from 1941 to 1943 and was awarded the De Morgan Medal in 1938 and the Senior Berwick Prize in 1960. Littlewood died on 6 September 1977. Work Most of Littlewood's work was in the field of mathematical analysis. He began research under the supervision of Ernest William Barnes, who suggested that he attempt to prove the Riemann hypothesis: Littlewood showed that if the Riemann hypothesis is true, then the prime number theorem follows, and he obtained the error term. This work won him his Trinity fellowship. However, the link between the Riemann hypothesis and the prime number theorem had been known before in Continental Europe, and Littlewood wrote later in his book, A Mathematician's Miscellany, that his rediscovery of the result did not shed a positive light on the isolated nature of British mathematics at the time. Theory of the distribution of prime numbers In 1914, Littlewood published his first result in the field of analytic number theory concerning the error term of the prime-counting function. If π(x) denotes the number of primes up to x, then the prime number theorem implies that π(x) ~ Li(x), where Li(x) is known as the Eulerian logarithmic integral. Numerical evidence seemed to suggest that π(x) < Li(x) for all x. Littlewood, however, proved that the difference π(x) − Li(x) changes sign infinitely often. Collaboration with G. H. Hardy Littlewood collaborated for many years with G. H. Hardy. Together they devised the first Hardy–Littlewood conjecture, a strong form of the twin prime conjecture, and the second Hardy–Littlewood conjecture. Ramanujan He also, with Hardy, identified the work of the Indian mathematician Srinivasa Ramanujan as that of a genius and supported him in travelling from India to work at Cambridge. 
A self-taught mathematician, Ramanujan later became a Fellow of the Royal Society, Fellow of Trinity College, Cambridge, and widely recognised as on a par with other geniuses such as Euler and Jacobi. Collaboration with Mary Cartwright In the late 1930s, as the prospect of war loomed, the Department of Scientific and Industrial Research sought the interest of pure mathematicians in the properties of non linear differential equations that were needed by radio engineers and scientists. The problems appealed to Littlewood and Mary Cartwright, and they worked on them independently during the next 20 years. The problems that Littlewood and Cartwright worked on concerned differential equations arising out of early research on radar: their work foreshadowed the modern theory of dynamical systems. Littlewood's 4/3 inequality on bilinear forms was a forerunner of the later Grothendieck tensor norm theory. Military service WWI – ballistics work During the Great War, Littlewood served in the Royal Garrison Artillery as a second lieutenant. He made highly significant contributions in the field of ballistics. Later life He continued to write papers into his eighties, particularly in analytical areas of what would become the theory of dynamical systems. Littlewood is also remembered for his book of reminiscences, A Mathematician's Miscellany (new edition published in 1986). Among his PhD students were Sarvadaman Chowla, Harold Davenport, and Donald C. Spencer. Spencer reported that in 1941 when he (Spencer) was about to get on the boat that would take him home to the United States, Littlewood reminded him: "n, n alpha, n beta!" (referring to Littlewood's conjecture). Littlewood's collaborative work, carried out by correspondence, covered fields in Diophantine approximation and Waring's problem, in particular. In his other work, he collaborated with Raymond Paley on Littlewood–Paley theory in Fourier theory, and with Cyril Offord in combinatorial work on random sums, in developments that opened up fields that are still intensively studied. In a 1947 lecture, the Danish mathematician Harald Bohr said, "To illustrate to what extent Hardy and Littlewood in the course of the years came to be considered as the leaders of recent English mathematical research, I may report what an excellent colleague once jokingly said: 'Nowadays, there are only three really great English mathematicians: Hardy, Littlewood, and Hardy–Littlewood.'" The German mathematician Edmund Landau supposed that Littlewood was a pseudonym that Hardy used for his lesser work and "so doubted the existence of Littlewood that he made a special trip to Great Britain to see the man with his own eyes". He visited Cambridge where he saw much of Hardy but nothing of Littlewood and so considered his conjecture to be proven. A similar story was told about Norbert Wiener, who vehemently denied it in his autobiography. He coined Littlewood's law, which states that individuals can expect "miracles" to happen to them at the rate of about one per month. Cultural references John Littlewood is depicted in two films covering the life of Ramanujan – Ramanujan in 2014 portrayed by Michael Lieber and The Man Who Knew Infinity in 2015 portrayed by Toby Jones. 
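The prime-counting result described above can be checked numerically for small x. The sketch below is purely illustrative (it is not Littlewood's method); it assumes SymPy is available and takes the "Eulerian" logarithmic integral to be Li(x) = li(x) − li(2). For every x small enough to tabulate, Li(x) − π(x) is positive; the sign changes Littlewood proved to exist first occur at enormous values of x (see Skewes's number in the list below).

```python
# Compare the prime-counting function pi(x) with the logarithmic integral Li(x).
# Requires SymPy; purely illustrative of the result discussed above.
from sympy import primepi, li

def Li(x):
    """Eulerian (offset) logarithmic integral: Li(x) = li(x) - li(2)."""
    return float(li(x) - li(2))

for x in [10**k for k in range(2, 7)]:
    pi_x = int(primepi(x))
    diff = Li(x) - pi_x
    print(f"x = {x:>9,}  pi(x) = {pi_x:>8,}  Li(x) - pi(x) = {diff:9.1f}")

# At every x small enough to tabulate, Li(x) - pi(x) > 0; Littlewood's 1914
# theorem says the difference nevertheless changes sign infinitely often.
```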
See also Critical line theorem Littlewood conjecture Littlewood polynomial Littlewood's three principles of real analysis Littlewood's Tauberian theorem Littlewood's 4/3 inequality Littlewood subordination theorem Littlewood–Offord problem Littlewood–Paley theory Hardy–Littlewood circle method Hardy–Littlewood definition Hardy–Littlewood inequality Hardy–Littlewood maximal function Hardy–Littlewood zeta function conjectures Hardy–Littlewood tauberian theorem First Hardy–Littlewood conjecture Second Hardy–Littlewood conjecture Ross–Littlewood paradox Hadamard three-circle theorem Skewes's number References Bibliography Further reading Littlewood's Miscellany, edited by B. Bollobás, Cambridge University Press; 1986. (alternative title for A Mathematician's Miscellany) External links Papers of Littlewood on Number Theory A Mathematicians Miscellany British number theorists Mathematical analysts 20th-century English mathematicians Recipients of the Copley Medal Fellows of Trinity College, Cambridge Fellows of the Royal Society Alumni of Trinity College, Cambridge People educated at St Paul's School, London People from Rochester, Kent 1885 births 1977 deaths Royal Medal winners Senior Wranglers De Morgan Medallists Royal Garrison Artillery officers Rouse Ball Professors of Mathematics (Cambridge)
John Edensor Littlewood
[ "Mathematics" ]
1,630
[ "Mathematical analysis", "Mathematical analysts" ]
158,383
https://en.wikipedia.org/wiki/Hawaiian%20Islands
The Hawaiian Islands () are an archipelago of eight major volcanic islands, several atolls, and numerous smaller islets in the North Pacific Ocean, extending some from the island of Hawaii in the south to northernmost Kure Atoll. Formerly called the Sandwich Islands by Europeans, the present name for the archipelago is derived from the name of its largest island, Hawaii. The archipelago sits on the Pacific plate. The islands are exposed peaks of a great undersea mountain range known as the Hawaiian–Emperor seamount chain, formed by volcanic activity over the Hawaiian hotspot. The islands are about from the nearest continent and are part of the Polynesia subregion of Oceania. The U.S. state of Hawaii occupies the archipelago almost in its entirety (including the mostly uninhabited Northwestern Hawaiian Islands), with the sole exception of Midway Atoll (a United States Minor Outlying Island). Hawaii is the only U.S. state that is situated entirely on an archipelago, and the only state not geographically connected with North America. The Northwestern islands (sometimes called the Leeward Islands) and surrounding seas are protected as a national monument and World Heritage Site. Islands and reefs The Hawaiian Islands have a total land area of . Except for Midway, which is an unincorporated territory of the United States, these islands and islets are administered as Hawaii—the 50th state of the United States. Major islands The eight major islands of Hawaii (Windward Islands) are listed above. All except Kaho'olawe are inhabited. Minor islands, islets The state of Hawaii counts 137 "islands" in the Hawaiian chain. This number includes all minor islands (small islands), islets (even smaller islands) offshore of the major islands (listed above), and individual islets in each atoll. These are just a few: Kaʻula Kāohikaipu Lehua Mānana Mōkōlea Rock Mokolii Moku Manu Mokuauia Moku o Loʻe Moku Ola Mokuʻumeʻume Molokini Nā Mokulua Partial islands, atolls, reefs Partial islands, atolls, reefs—those west of Niʻihau are uninhabited except Midway Atoll—form the Northwestern Hawaiian Islands (Leeward Islands): Nihoa (Mokumana) Necker (Mokumanamana) French Frigate Shoals (Kānemilohaʻi) Gardner Pinnacles (Pūhāhonu) Maro Reef (Nalukākala) Laysan (Kauō) Lisianski Island (Papaʻāpoho) Pearl and Hermes Atoll (Holoikauaua) Midway Atoll (Pihemanu) Kure Atoll (Mokupāpapa) Geology This chain of islands, or archipelago, developed as the Pacific plate slowly moved northwestward over a hotspot in the Earth's mantle at a rate of approximately per million years. Thus, the southeast island is volcanically active, whereas the islands on the northwest end of the archipelago are older and typically smaller, due to longer exposure to erosion. The age of the archipelago has been estimated using potassium-argon dating methods. From this study and others, it is estimated that the northwesternmost island, Kure Atoll, is the oldest at approximately 28 million years (Ma); while the southeasternmost island, Hawaiʻi, is approximately 0.4 Ma (400,000 years). The only active volcanism in the last 200 years has been on the southeastern island, Hawaiʻi, and on the submerged but growing volcano to the extreme southeast, Kamaʻehuakanaloa (formerly Loʻihi). The Hawaiian Volcano Observatory of the USGS documents recent volcanic activity and provides images and interpretations of the volcanism. Kīlauea had been erupting nearly continuously since 1983 when it stopped August 2018. 
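As a rough illustration of the age-versus-distance pattern described above (and not a substitute for the potassium-argon dating it mentions), the short sketch below converts distance along the chain into an approximate age by assuming a constant plate speed of about 8.6 cm per year; both that speed and the rounded distances in the table are assumptions supplied only for the example.

```python
# Rough age estimate for Hawaiian islands from distance to the hotspot,
# assuming a constant Pacific-plate speed (illustrative assumption: ~8.6 cm/yr).
PLATE_SPEED_KM_PER_MYR = 86.0  # 8.6 cm/yr expressed as km per million years

# Approximate, rounded distances (km) from the currently active hotspot near
# the island of Hawai'i; illustrative figures only.
distances_km = {
    "Hawai'i (Big Island)": 0,
    "Maui": 120,
    "O'ahu": 340,
    "Kaua'i": 520,
    "Midway Atoll": 2430,
    "Kure Atoll": 2600,
}

for island, d in distances_km.items():
    age_myr = d / PLATE_SPEED_KM_PER_MYR
    print(f"{island:<22} ~{age_myr:5.1f} million years")

# Kure Atoll comes out near the ~28 Ma potassium-argon age quoted above, while
# the Big Island is effectively zero-age, matching the text's 0.4 Ma figure.
```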
Almost all of the magma of the hotspot has the composition of basalt, and so the Hawaiian volcanoes are composed almost entirely of this igneous rock. There is very little coarser-grained gabbro and diabase. Nephelinite is exposed on the islands but is extremely rare. The majority of eruptions in Hawaiʻi are Hawaiian-type eruptions because basaltic magma is relatively fluid compared with magmas typically involved in more explosive eruptions, such as the andesitic magmas that produce some of the spectacular and dangerous eruptions around the margins of the Pacific basin. Hawaiʻi island (the Big Island) is the biggest and youngest island in the chain, built from five volcanoes. Mauna Loa, taking up over half of the Big Island, is the largest shield volcano on the Earth. The measurement from sea level to summit is more than , from sea level to sea floor about . Earthquakes The Hawaiian Islands have many earthquakes, generally triggered by and related to volcanic activity. Seismic activity, as a result, is currently highest in the southern part of the chain. Both historical and modern earthquake databases have correlated higher magnitude earthquakes with flanks of active volcanoes, such as Mauna Loa and Kilauea. The combination of erosional forces, which cause slumping and landslides, with the pressure exerted by rising magma put a great amount of stress on the volcanic flanks. The stress is released when the slope fails, or slips, causing an earthquake. This type of seismicity is unique because the forces driving the system are not always consistent over time, since rates of volcanic activity fluctuate. Seismic hazard near active, seaward volcanic flanks is high, partially because of the especially unpredictable nature of the forces that trigger earthquakes, and partially because these events occur at relatively shallow depths. Flank earthquakes typically occur at depths ranging from 5 to 20 km, increasing the hazard to local infrastructure and communities. Earthquakes and landslides on the island chain have also been known to cause tsunamis. Most of the early earthquake monitoring took place in Hilo, by missionaries Titus Coan and Sarah J. Lyman and her family. Between 1833 and 1896, approximately 4 or 5 earthquakes were reported per year. Today, earthquakes are monitored by the Hawaiian Volcano Observatory run by the USGS. Hawaii accounted for 7.3% of the United States' reported earthquakes with a magnitude 3.5 or greater from 1974 to 2003, with a total 1533 earthquakes. Hawaii ranked as the state with the third most earthquakes over this time period, after Alaska and California. On October 15, 2006, there was an earthquake with a magnitude of 6.7 off the northwest coast of the island of Hawaii, near the Kona area. The initial earthquake was followed approximately five minutes later by a magnitude 5.7 aftershock. Minor to moderate damage was reported on most of the Big Island. Several major roadways became impassable from rock slides, and effects were felt as far away as Honolulu, Oahu, nearly from the epicenter. Power outages lasted for several hours to days. Several water mains ruptured. No deaths or life-threatening injuries were reported. On May 4, 2018, there was a 6.9 earthquake in the zone of volcanic activity from Kīlauea. Earthquakes are monitored by the Hawaiian Volcano Observatory run by the USGS. Tsunamis The Hawaiian Islands are subject to tsunamis, great waves that strike the shore. Tsunamis are most often caused by earthquakes somewhere in the Pacific. 
The waves produced by the earthquakes travel at speeds of and can affect coastal regions thousands of miles (kilometers) away. Tsunamis may also originate from the Hawaiian Islands. Explosive volcanic activity can cause tsunamis. The island of Molokaʻi had a catastrophic collapse or debris avalanche over a million years ago; this underwater landslide likely caused tsunamis. The Hilina Slump on the island of Hawaiʻi is another potential place for a large landslide and resulting tsunami. The city of Hilo on the Big Island has been most affected by tsunamis, where the in-rushing water is accentuated by the shape of Hilo Bay. Coastal cities have tsunami warning sirens. A tsunami resulting from an earthquake in Chile hit the islands on February 27, 2010. It was relatively minor, but local emergency management officials utilized the latest technology and ordered evacuations in preparation for a possible major event. The Governor declared it a "good drill" for the next major event. A tsunami resulting from an earthquake in Japan hit the islands on March 11, 2011. It was relatively minor, but local officials ordered evacuations in preparation for a possible major event. The tsunami caused about $30.1 million in damages. Volcanoes Only the two Hawaiian islands furthest to the southeast have active volcanoes: Haleakalā on Maui, and Mauna Loa, Mauna Kea, Kilauea, and Hualalai, all on the Big Island. The volcanoes on the remaining islands are extinct as they are no longer over the Hawaii hotspot. The Kamaʻehuakanaloa Seamount is an active submarine volcano that is expected to become the newest Hawaiian island when it rises above the ocean's surface in 10,000–100,000 years. Hazards from these volcanoes include lava flows that can destroy and bury the surrounding surface, volcanic gas emissions, earthquakes and tsunamis listed above, submarine eruptions affecting the ocean, and the possibility of an explosive eruption. History Hawaii was first discovered and settled by explorers from Tahiti or the Marquesas Islands. The date of the first settlements is a continuing debate. Kirch's textbooks on Hawaiian archeology date the first Polynesian settlements to about 300 C.E., although his more recent estimates are as late as 600. More recent surveys of carbon-dating evidence put the arrival of the first settlers at around 940–1130 C.E. Ecology The islands are home to a multitude of endemic species. Since human settlement, first by Polynesians, non native trees, plants, and animals were introduced. These included species such as rats and pigs, that have preyed on native birds and invertebrates that initially evolved in the absence of such predators. The growing population of humans, especially through European and American colonisation and development, has also led to deforestation, forest degradation, treeless grasslands, and environmental degradation. As a result, many species which depended on forest habitats and food became extinct—with many current species facing extinction. As humans cleared land for farming with the importation of industrialized farming practices through European and American encroachment, monocultural crop production replaced multi-species systems. The arrival of the Europeans had a more significant impact, with the promotion of large-scale single-species export agriculture and livestock grazing. This led to increased clearing of forests, and the development of towns, adding many more species to the list of extinct animals of the Hawaiian Islands. 
, many of the remaining endemic species are considered endangered. National Monument On June 15, 2006, President George W. Bush issued a public proclamation creating Papahānaumokuākea Marine National Monument under the Antiquities Act of 1906. The Monument encompasses the northwestern Hawaiian Islands and surrounding waters, forming the largest marine wildlife reserve in the world. In August 2010, UNESCO's World Heritage Committee added Papahānaumokuākea to its list of World Heritage Sites. On August 26, 2016, former President Barack Obama greatly expanded Papahānaumokuākea, quadrupling it from its original size. Climate The Hawaiian Islands are tropical but experience many different climates, depending on altitude and surroundings. The islands receive most rainfall from the trade winds on their north and east flanks (the windward side) as a result of orographic precipitation. Coastal areas in general and especially the south and west flanks, or leeward sides, tend to be drier. In general, the lowlands of Hawaiian Islands receive most of their precipitation during the winter months (October to April). Drier conditions generally prevail from May to September. The tropical storms, and occasional hurricanes, tend to occur from July through November. During the summer months the average temperature is about 84 °F (29 °C), in the winter months it is approximately 79 °F (26 °C). As the temperature is relatively constant over the year the probability of dangerous thunderstorms is approximately low. See also Hawaii Inter-Island Cable System Index of Hawaii-related articles List of birds of Hawaii List of fish of Hawaii List of mountain peaks of Hawaii List of Ultras of Hawaii Maritime fur trade Outline of Hawaii Notes References Further reading An integrated information website focused on the Hawaiian Archipelago from the Pacific Region Integrated Data Enterprise (PRIDE). 1970 edition: The Ocean Atlas of Hawai‘i – SOEST at University of Hawaii. Volcano World ; Your World is Erupting – Oregon State University College of Science Archipelagoes of the Pacific Ocean Archipelagoes of Oceania Archipelagoes of the United States Geography of Polynesia Islands Hudson's Bay Company trading posts Physical oceanography Eastern Indo-Pacific Marine ecoregions
Hawaiian Islands
[ "Physics" ]
2,598
[ "Applied and interdisciplinary physics", "Physical oceanography" ]
158,396
https://en.wikipedia.org/wiki/Beryllium%20copper
Beryllium copper (BeCu), also known as copper beryllium (CuBe), beryllium bronze, and spring copper, is a copper alloy with 0.5–3% beryllium. Copper beryllium alloys are often used because of their high strength and good conductivity of both heat and electricity. It is used for its ductility, weldability in metalworking, and machining properties. It has many specialized applications in tools for hazardous environments, musical instruments, precision measurement devices, bullets, and some uses in the field of aerospace. Beryllium copper and other beryllium alloys are harmful carcinogens that present a toxic inhalation hazard during manufacturing. Properties Beryllium copper is a ductile, weldable, and machinable alloy. Like pure copper, it is resistant to non-oxidizing acids (such as hydrochloric acid and carbonic acid) and plastic decomposition products, to abrasive wear, and to galling. It can be heat-treated for increased strength, durability, and electrical conductivity. Beryllium copper attains the greatest strength (up to ) of any copper-based alloy. It has thermal conductivity of 62 Btu/h-ft-°F (107 W/m-K), which is 3–5 times higher than tool steel. It has a solid melting point of 1590 °F (866 °C) and a liquid melting point of 1800 °F (982 °C). It has a high capacity for being hot-formed. C17200 beryllium copper alloy has strength and hardness similar to that of steel; Rockwell hardness properties in its peaked age condition are in the range of 200 ksi and RC45. C17200 has effective corrosion-resistant properties when exposed to harsh conditions such as seawater, and down-hole environments. It will withstand sulphide or chloride stress corrosion cracking and will resist the effects of carbon dioxide and hydrogen embrittlement. Copper alloys in general have always been considered non-sparking. C17200 has the strength to withstand use in hand and mechanical tools. These non-sparking features are best applied in explosive environments such as in the oil & gas and gunpowder industries. Toxicity Inhalation of dust, mist, or fumes containing beryllium can cause chronic beryllium disease, which restricts the exchange of oxygen between the lungs and the bloodstream. The International Agency for Research on Cancer (IARC) lists beryllium as a Group 1 human carcinogen. The National Toxicology Program (NTP) also lists beryllium as a carcinogen. Copper beryllium alloy containing less than 2.5% beryllium (in copper) is not designated as a carcinogen. Uses Beryllium copper is a non-ferrous alloy used in springs, spring wire, load cells, and other parts that must retain their shape under repeated stress and strain. It has high electrical conductivity and is used in low-current contacts for batteries and electrical connectors. Beryllium copper is non-sparking yet physically tough and nonmagnetic, fulfilling the requirements of ATEX directive for Zones 0, 1, and 2. Beryllium copper screwdrivers, pliers, wrenches, cold chisels, knives, and hammers are available for environments with explosive hazards, such as oil rigs, coal mines, and grain elevators. An alternative metal sometimes used for non-sparking tools is aluminium bronze. Compared to steel tools, beryllium copper tools are more expensive and not as strong, but the properties of beryllium copper in hazardous environments may outweigh the disadvantages. Some of BeCu's varied uses include: Certain percussion instruments, especially tambourines and triangles, because of beryllium copper's consistent tone and resonance. 
Ultra-low temperature cryogenic equipment, such as dilution refrigerators, because of its mechanical strength and relatively high thermal conductivity in this temperature range. Molds for manufacturing plastic containers (including most plastic milk jugs), with the blow molding process. Armour piercing bullets, though such an application is unusual, as bullets made from steel alloys are much less expensive and have similar properties. Measurement-while-drilling (MWD) tools in the directional drilling industry. A non-magnetic alloy is required, as magnetometers are used for field-strength data received from the tool. Servicing magnetic resonance imaging (MRI) machines, where high-strength magnetic fields make the use of ferrous tools dangerous, and where magnetic materials in the field can disturb the image. Gaskets used to create an RF-tight (resistant to radio frequency leakage) seal, the electronic seal on doors used with EMC testing, and anechoic chambers. In the 1980s, beryllium copper was used in the manufacture of golf clubs, particularly wedges and putters. Though some golfers prefer the feel of beryllium copper club heads, regulatory concerns and high costs have made beryllium copper clubs difficult to find in current production. Kiefer Plating (defunct) of Elkhart, Indiana built beryllium-copper trumpet bells for the Schilke Music Co. of Chicago. These lightweight bells produce a sound preferred by some musicians. Beryllium copper wire is produced in many forms: round, square, flat, and shaped, in coils, on spools, and in straight lengths. Beryllium copper valve seats and guides are used in high-performance four-stroke engines with coated titanium valves. BeCu dissipates heat from the valve as much as seven times faster than powdered steel or iron seats and guides. The softer BeCu reduces valve wear and increases valve life. Age-hardened alloy Beryllium copper (C17200 & C17300) is an age-hardening alloy that attains the highest strength of any copper base alloy. It may be age hardened after forming into springs, intricate forms, or complex shapes. It is valued for its elasticity, corrosion resistance, stability, conductivity, and low creep. Tempered beryllium copper is C17200 and C17300, which have been age-hardened and cold-drawn. No further heat treatment is necessary beyond possible light stress relief. It is sufficiently ductile to wind on its diameter and can be formed into springs and most shapes. The tempered wire is most useful where the properties of beryllium copper are desired, but the age-hardening of finished parts is not practical. C17510 and C17500 beryllium copper alloys are age-hardenable and provide good electrical conductivity, physical properties, and wear-resistance. They are used in springs and wire where electrical conduction or retention of properties at elevated temperatures is important. Specialized variants High-strength beryllium copper alloys contain as much as 2.7% beryllium (cast), or 1.6-2% beryllium with about 0.3% cobalt (wrought). The strength is achieved by age hardening. The thermal conductivity of these alloys lies between that of steel and aluminum. The cast alloys are frequently formed with injection molds. The wrought alloys are designated by UNS as C17200 to C17400, the cast alloys are C82000 to C82800. 
The hardening process requires rapid cooling of the annealed metal, resulting in a solid-state solution of beryllium in copper, which is then kept at 200-460 °C for at least an hour, producing a precipitation of metastable beryllide crystals in the copper matrix. Over-aging beyond the equilibrium phase depletes the beryllide crystals and reduces their strength. The beryllides in cast alloys are similar to those in wrought alloys. High conductivity beryllium copper alloys contain as much as 0.7% beryllium with some nickel and cobalt. The thermal conductivity of these alloys is greater than that of aluminum and slightly less than that of pure copper; they are often used as electrical contacts. References External links Standards and properties - Copper and copper alloy microstructures - Copper Beryllium National Pollutant Inventory - Beryllium and compounds fact sheet National Pollutant Inventory - Copper and compounds fact sheet Copper beryllium and nickel beryllium datasheets Copper beryllium and nickel beryllium WIRE datasheets Copper alloys Beryllium alloys pl:Brązy#Brąz berylowy
Beryllium copper
[ "Chemistry" ]
1,750
[ "Copper alloys", "Alloys", "Beryllium alloys" ]
158,405
https://en.wikipedia.org/wiki/Iron%28II%29%20sulfate
Iron(II) sulfate (British English: iron(II) sulphate) or ferrous sulfate denotes a range of salts with the formula FeSO4·xH2O. These compounds exist most commonly as the heptahydrate (x = 7) but several values for x are known. The hydrated form is used medically to treat or prevent iron deficiency, and also for industrial applications. Known since ancient times as copperas and as green vitriol (vitriol is an archaic name for hydrated sulfate minerals), the blue-green heptahydrate (hydrate with 7 molecules of water) is the most common form of this material. All the iron(II) sulfates dissolve in water to give the same aquo complex [Fe(H2O)6]2+, which has octahedral molecular geometry and is paramagnetic. The name copperas dates from times when the copper(II) sulfate was known as blue copperas, and perhaps in analogy, iron(II) and zinc sulfate were known respectively as green and white copperas. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 107th most commonly prescribed medication in the United States, with more than 6million prescriptions. Uses Industrially, ferrous sulfate is mainly used as a precursor to other iron compounds. It is a reducing agent, and as such is useful for the reduction of chromate in cement to less toxic Cr(III) compounds. Historically ferrous sulfate was used in the textile industry for centuries as a dye fixative. It is used historically to blacken leather and as a constituent of iron gall ink. The preparation of sulfuric acid ('oil of vitriol') by the distillation of green vitriol (iron(II) sulfate) has been known for at least 700 years. Medical use Plant growth Iron(II) sulfate is sold as ferrous sulfate, a soil amendment for lowering the pH of a high alkaline soil so that plants can access the soil's nutrients. In horticulture it is used for treating iron chlorosis. Although not as rapid-acting as ferric EDTA, its effects are longer-lasting. It can be mixed with compost and dug into the soil to create a store which can last for years. Ferrous sulfate can be used as a lawn conditioner. It can also be used to eliminate silvery thread moss in golf course putting greens. Pigment and craft Ferrous sulfate can be used to stain concrete and some limestones and sandstones a yellowish rust color. Woodworkers use ferrous sulfate solutions to color maple wood a silvery hue. Green vitriol is also a useful reagent in the identification of mushrooms. Historical uses Ferrous sulfate was used in the manufacture of inks, most notably iron gall ink, which was used from the Middle Ages until the end of the 18th century. Chemical tests made on the Lachish letters () showed the possible presence of iron. It is thought that oak galls and copperas may have been used in making the ink on those letters. It also finds use in wool dyeing as a mordant. Harewood, a material used in marquetry and parquetry since the 17th century, is also made using ferrous sulfate. Two different methods for the direct application of indigo dye were developed in England in the 18th century and remained in use well into the 19th century. One of these, known as china blue, involved iron(II) sulfate. After printing an insoluble form of indigo onto the fabric, the indigo was reduced to leuco-indigo in a sequence of baths of ferrous sulfate (with reoxidation to indigo in air between immersions). The china blue process could make sharp designs, but it could not produce the dark hues of other methods. 
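Since the hydrates FeSO4·xH2O discussed above differ mainly in their water content, a simple stoichiometric calculation shows how much elemental iron each contains; the figures matter for the iron-supplement and soil-amendment uses mentioned earlier. The sketch below uses standard atomic masses and is an illustration, not a pharmacopoeial assay.

```python
# Mass fraction of elemental iron in FeSO4.xH2O for several hydration states.
ATOMIC_MASS = {"Fe": 55.845, "S": 32.06, "O": 15.999, "H": 1.008}

def iron_fraction(x_water: int) -> float:
    """Return the Fe mass fraction of FeSO4.xH2O."""
    m_feso4 = ATOMIC_MASS["Fe"] + ATOMIC_MASS["S"] + 4 * ATOMIC_MASS["O"]
    m_water = 2 * ATOMIC_MASS["H"] + ATOMIC_MASS["O"]
    return ATOMIC_MASS["Fe"] / (m_feso4 + x_water * m_water)

for x, name in [(0, "anhydrous"), (1, "monohydrate"), (4, "tetrahydrate"),
                (7, "heptahydrate (melanterite)")]:
    print(f"FeSO4.{x}H2O ({name}): {iron_fraction(x):.1%} Fe by mass")

# The common blue-green heptahydrate is only about 20% iron by mass,
# whereas the anhydrous salt is close to 37%.
```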
In the second half of the 1850s ferrous sulfate was used as a photographic developer for collodion process images. Hydrates Iron(II) sulfate can be found in various states of hydration, and several of these forms exist in nature or were created synthetically. FeSO4·H2O (mineral: szomolnokite, relatively rare, monoclinic) FeSO4·H2O (synthetic compound stable at pressures exceeding 6.2 GPa, triclinic) FeSO4·4H2O (mineral: rozenite, white, relatively common, may be dehydration product of melanterite, monoclinic) FeSO4·5H2O (mineral: siderotil, relatively rare, triclinic) FeSO4·6H2O (mineral: ferrohexahydrite, very rare, monoclinic) FeSO4·7H2O (mineral: melanterite, blue-green, relatively common, monoclinic) The tetrahydrate is stabilized when the temperature of aqueous solutions reaches . At slightly higher temperatures these solutions form both the tetrahydrate and monohydrate. Mineral forms are found in oxidation zones of iron-bearing ore beds, e.g. pyrite, marcasite, chalcopyrite, etc. They are also found in related environments, like coal fire sites. Many rapidly dehydrate and sometimes oxidize. Numerous other, more complex (either basic, hydrated, and/or containing additional cations) Fe(II)-bearing sulfates exist in such environments, with copiapite being a common example. Production and reactions In the finishing of steel prior to plating or coating, the steel sheet or rod is passed through pickling baths of sulfuric acid. This treatment produces large quantities of iron(II) sulfate as a by-product. Another source of large amounts results from the production of titanium dioxide from ilmenite via the sulfate process. Ferrous sulfate is also prepared commercially by oxidation of pyrite: 2 FeS2 + 7 O2 + 2 H2O → 2 FeSO4 + 2 H2SO4 It can be produced by displacement of metals less reactive than iron from solutions of their sulfates: Fe + CuSO4 → FeSO4 + Cu Reactions Upon dissolving in water, ferrous sulfates form the metal aquo complex [Fe(H2O)6]2+, which is an almost colorless, paramagnetic ion. On heating, iron(II) sulfate first loses its water of crystallization and the original green crystals are converted into a white anhydrous solid. When further heated, the anhydrous material decomposes into sulfur dioxide and sulfur trioxide, leaving a reddish-brown iron(III) oxide. Thermolysis of iron(II) sulfate begins at about . 2 FeSO4 → Fe2O3 + SO2 + SO3 Like other iron(II) salts, iron(II) sulfate is a reducing agent. For example, it reduces nitric acid to nitrogen monoxide and chlorine to chloride: 6 FeSO4 + 3 H2SO4 + 2 HNO3 → 3 Fe2(SO4)3 + 4 H2O + 2 NO 6 FeSO4 + 3 Cl2 → 2 Fe2(SO4)3 + 2 FeCl3 Its mild reducing power is of value in organic synthesis. It is used as the iron catalyst component of Fenton's reagent. Ferrous sulfate can be detected by the cerimetric method, which is the official method of the Indian Pharmacopoeia. This method includes the use of ferroin solution showing a red to light green colour change during titration. See also Iron(III) sulfate (ferric sulfate), the other common simple sulfate of iron. Copper(II) sulfate Ammonium iron(II) sulfate, also known as Mohr's salt, the common double salt of ammonium sulfate with iron(II) sulfate. Chalcanthum Ephraim Seehl, known as an early manufacturer of iron(II) sulfate, which he called 'green vitriol'. References External links Iron(II) compounds Sulfates World Health Organization essential medicines Deliquescent materials
Iron(II) sulfate
[ "Chemistry" ]
1,624
[ "Sulfates", "Deliquescent materials", "Salts" ]
158,530
https://en.wikipedia.org/wiki/Cepheid%20variable
A Cepheid variable () is a type of variable star that pulsates radially, varying in both diameter and temperature. It changes in brightness, with a well-defined stable period and amplitude. Cepheids are important cosmic benchmarks for scaling galactic and extragalactic distances; a strong direct relationship exists between a Cepheid variable's luminosity and its pulsation period. This characteristic of classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt after studying thousands of variable stars in the Magellanic Clouds. The discovery establishes the true luminosity of a Cepheid by observing its pulsation period. This in turn gives the distance to the star by comparing its known luminosity to its observed brightness, calibrated by directly observing the parallax distance to the closest Cepheids such as RS Puppis and Polaris. Cepheids change brightness due to the κ–mechanism, which occurs when opacity in a star increases with temperature rather than decreasing. The main gas involved is thought to be helium. The cycle is driven by the fact doubly ionized helium, the form adopted at high temperatures, is more opaque than singly ionized helium. As a result, the outer layer of the star cycles between being compressed, which heats the helium until it becomes doubly ionized and (due to opacity) absorbs enough heat to expand; and expanded, which cools the helium until it becomes singly ionized and (due to transparency) cools and collapses again. Cepheid variables become dimmest during the part of the cycle when the helium is doubly ionized. Etymology The term Cepheid originates from the star Delta Cephei in the constellation Cepheus, which was one of the early discoveries. History On September 10, 1784, Edward Pigott detected the variability of Eta Aquilae, the first known representative of the class of classical Cepheid variables. The eponymous star for classical Cepheids, Delta Cephei, was discovered to be variable by John Goodricke a few months later. The number of similar variables grew to several dozen by the end of the 19th century, and they were referred to as a class as Cepheids. Most of the Cepheids were known from the distinctive light curve shapes with the rapid increase in brightness and a hump, but some with more symmetrical light curves were known as Geminids after the prototype ζ Geminorum. A relationship between the period and luminosity for classical Cepheids was discovered in 1908 by Henrietta Swan Leavitt in an investigation of thousands of variable stars in the Magellanic Clouds. She published it in 1912 with further evidence. Cepheid variables were found to show radial velocity variation with the same period as the luminosity variation, and initially this was interpreted as evidence that these stars were part of a binary system. However, in 1914, Harlow Shapley demonstrated that this idea should be abandoned. Two years later, Shapley and others had discovered that Cepheid variables changed their spectral types over the course of a cycle. In 1913, Ejnar Hertzsprung attempted to find distances to 13 Cepheids using their motion through the sky. (His results would later require revision.) In 1918, Harlow Shapley used Cepheids to place initial constraints on the size and shape of the Milky Way and of the placement of the Sun within it. In 1924, Edwin Hubble established the distance to classical Cepheid variables in the Andromeda Galaxy, until then known as the "Andromeda Nebula" and showed that those variables were not members of the Milky Way. 
Hubble's finding settled the question raised in the "Great Debate" of whether the Milky Way represented the entire Universe or was merely one of many galaxies in the Universe. In 1929, Hubble and Milton L. Humason formulated what is now known as Hubble's law by combining Cepheid distances to several galaxies with Vesto Slipher's measurements of the speed at which those galaxies recede from us. They discovered that the Universe is expanding, confirming the theories of Georges Lemaître. In the mid 20th century, significant problems with the astronomical distance scale were resolved by dividing the Cepheids into different classes with very different properties. In the 1940s, Walter Baade recognized two separate populations of Cepheids (classical and type II). Classical Cepheids are younger and more massive population I stars, whereas type II Cepheids are older, fainter Population II stars. Classical Cepheids and type II Cepheids follow different period-luminosity relationships. The luminosity of type II Cepheids is, on average, less than classical Cepheids by about 1.5 magnitudes (but still brighter than RR Lyrae stars). Baade's seminal discovery led to a twofold increase in the distance to M31, and the extragalactic distance scale. RR Lyrae stars, then known as Cluster Variables, were recognized fairly early as being a separate class of variable, due in part to their short periods. The mechanics of stellar pulsation as a heat-engine was proposed in 1917 by Arthur Stanley Eddington (who wrote at length on the dynamics of Cepheids), but it was not until 1953 that S. A. Zhevakin identified ionized helium as a likely valve for the engine. Classes Cepheid variables are divided into two subclasses which exhibit markedly different masses, ages, and evolutionary histories: classical Cepheids and type II Cepheids. Delta Scuti variables are A-type stars on or near the main sequence at the lower end of the instability strip and were originally referred to as dwarf Cepheids. RR Lyrae variables have short periods and lie on the instability strip where it crosses the horizontal branch. Delta Scuti variables and RR Lyrae variables are not generally treated with Cepheid variables although their pulsations originate with the same helium ionisation kappa mechanism. Classical Cepheids Classical Cepheids (also known as Population I Cepheids, type I Cepheids, or Delta Cepheid variables) undergo pulsations with very regular periods on the order of days to months. Classical Cepheids are Population I variable stars which are 4–20 times more massive than the Sun, and up to 100,000 times more luminous. These Cepheids are yellow bright giants and supergiants of spectral class F6 – K2 and their radii change by (~25% for the longer-period I Carinae) millions of kilometers during a pulsation cycle. Classical Cepheids are used to determine distances to galaxies within the Local Group and beyond, and are a means by which the Hubble constant can be established. Classical Cepheids have also been used to clarify many characteristics of the Milky Way galaxy, such as the Sun's height above the galactic plane and the Galaxy's local spiral structure. A group of classical Cepheids with small amplitudes and sinusoidal light curves are often separated out as Small Amplitude Cepheids or s-Cepheids, many of them pulsating in the first overtone. Type II Cepheids Type II Cepheids (also termed Population II Cepheids) are population II variable stars which pulsate with periods typically between 1 and 50 days. 
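To make the period-luminosity distance method described above concrete, here is a minimal sketch of the calculation. The calibration used (M_V ≈ −2.43(log10 P − 1) − 4.05, with P in days) is one published fit quoted only for illustration, interstellar extinction is ignored, and the Delta Cephei input values are rounded, so the printed distance is approximate; as the "Uncertain distances" section below explains, the exact coefficients remain debated.

```python
# Distance to a classical Cepheid from its pulsation period and mean
# apparent magnitude, using an assumed period-luminosity calibration.
import math

def cepheid_distance_pc(period_days: float, mean_apparent_mag: float) -> float:
    """Illustrative only: assumed P-L coefficients, extinction neglected."""
    # Assumed calibration (one published fit; the coefficients are debated):
    abs_mag = -2.43 * (math.log10(period_days) - 1.0) - 4.05
    # Distance modulus: m - M = 5*log10(d_pc) - 5
    return 10 ** ((mean_apparent_mag - abs_mag + 5.0) / 5.0)

# Example with rounded values for Delta Cephei (P ~ 5.37 d, <V> ~ 3.95):
d = cepheid_distance_pc(5.366, 3.95)
print(f"~{d:.0f} pc (~{d * 3.26:.0f} light-years), before any extinction correction")
```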
Type II Cepheids are typically metal-poor, old (~10 Gyr), low mass objects (~half the mass of the Sun). Type II Cepheids are divided into several subgroups by period. Stars with periods between 1 and 4 days are of the BL Her subclass, 10–20 days belong to the W Virginis subclass, and stars with periods greater than 20 days belong to the RV Tauri subclass. Type II Cepheids are used to establish the distance to the Galactic Center, globular clusters, and galaxies. Anomalous Cepheids A group of pulsating stars on the instability strip have periods of less than 2 days, similar to RR Lyrae variables but with higher luminosities. Anomalous Cepheid variables have masses higher than type II Cepheids, RR Lyrae variables, and the Sun. It is unclear whether they are young stars on a "turned-back" horizontal branch, blue stragglers formed through mass transfer in binary systems, or a mix of both. Double-mode Cepheids A small proportion of Cepheid variables have been observed to pulsate in two modes at the same time, usually the fundamental and first overtone, occasionally the second overtone. A very small number pulsate in three modes, or an unusual combination of modes including higher overtones. Uncertain distances Chief among the uncertainties tied to the classical and type II Cepheid distance scale are: the nature of the period-luminosity relation in various passbands, the impact of metallicity on both the zero-point and slope of those relations, and the effects of photometric contamination (blending with other stars) and a changing (typically unknown) extinction law on Cepheid distances. All these topics are actively debated in the literature. These unresolved matters have resulted in cited values for the Hubble constant (established from Classical Cepheids) ranging between 60 km/s/Mpc and 80 km/s/Mpc. Resolving this discrepancy is one of the foremost problems in astronomy since the cosmological parameters of the Universe may be constrained by supplying a precise value of the Hubble constant. Uncertainties have diminished over the years, due in part to discoveries such as RS Puppis. Delta Cephei is also of particular importance as a calibrator of the Cepheid period-luminosity relation since its distance is among the most precisely established for a Cepheid, partly because it is a member of a star cluster and the availability of precise parallaxes observed by the Hubble, Hipparcos, and Gaia space telescopes. The accuracy of parallax distance measurements to Cepheid variables and other bodies within 7,500 light-years is vastly improved by comparing images from Hubble taken six months apart, from opposite points in the Earth's orbit. (Between two such observations 2 AU apart, a star at a distance of 7500 light-years = 2300 parsecs would appear to move an angle of 2/2300 arc-seconds = 2 x 10−7 degrees, the resolution limit of the available telescopes.) Pulsation model The accepted explanation for the pulsation of Cepheids is called the Eddington valve, or "κ-mechanism", where the Greek letter κ (kappa) is the usual symbol for the gas opacity. Helium is the gas thought to be most active in the process. Doubly ionized helium (helium whose atoms are missing both electrons) is more opaque than singly ionized helium. As helium is heated, its temperature rises until it reaches the point at which double ionisation spontaneously occurs and is sustained throughout the layer in much the same way a fluorescent tube 'strikes'. 
At the dimmest part of a Cepheid's cycle, this ionized gas in the outer layers of the star is relatively opaque, and so is heated by the star's radiation, and due to the increasing temperature, begins to expand. As it expands, it cools, but remains ionised until another threshold is reached at which point double ionization cannot be sustained and the layer becomes singly ionized and hence more transparent, which allows radiation to escape. The expansion then stops, and reverses due to the star's gravitational attraction. The star is held in either the expanding or the contracting state by the hysteresis generated by the doubly ionized helium, and it flip-flops indefinitely between the two states, reversing every time the upper or lower threshold is crossed. This process is rather analogous to the relaxation oscillator found in electronics. In 1879, August Ritter (1826–1908) demonstrated that the adiabatic radial pulsation period Π for a homogeneous sphere is related to its surface gravity g and radius R through the relation: Π = k√(R/g), where k is a proportionality constant. Now, since the surface gravity is related to the sphere mass M and radius through the relation: g = GM/R^2, one finally obtains: Π√(M/R^3) = Q, equivalently Π√ρ = Q with ρ the mean density, where Q is a constant, called the pulsation constant. Examples Classical Cepheids include: Eta Aquilae, Zeta Geminorum, Beta Doradus, RT Aurigae, Polaris, as well as Delta Cephei. Type II Cepheids include: W Virginis, Kappa Pavonis and BL Herculis. Anomalous Cepheids include: XZ Ceti (overtone pulsation mode) and BL Boötis. References External links McMaster Cepheid Photometry and Radial Velocity Data Archive American Association of Variable Star Observers Survey of Warsaw University at Las Campanas Observatory: OGLE-III (Optical Gravitational Lensing Experiment) Variable Stars catalog website David Dunlap Observatory of Toronto University: Galactic Classical Cepheids database Astrometry Standard candles Pulsating variables
Cepheid variable
[ "Physics", "Astronomy" ]
2,787
[ "Astronomical sub-disciplines", "Standard candles", "Astrophysics", "Astrometry" ]
158,571
https://en.wikipedia.org/wiki/Harry%20Nyquist
Harry Nyquist (, ; February 7, 1889 – April 4, 1976) was a Swedish-American physicist and electronic engineer who made important contributions to communication theory. Personal life Nyquist was born in the village Nilsby of the parish Stora Kil, Värmland, Sweden. He was the son of Lars Jonsson Nyqvist (1847–1930) and Catarina (or Katrina) Eriksdotter (1857–1920). His parents had eight children: Elin Teresia, Astrid, Selma, Harry Theodor, Amelie, Olga Maria, Axel Martin and Herta Alfrida. He immigrated to the United States in 1907. Education He entered the University of North Dakota in 1912 and received B.S. and M.S. degrees in electrical engineering in 1914 and 1915, respectively. He received a Ph.D. in physics at Yale University in 1917. Career He worked at AT&T's Department of Development and Research from 1917 to 1934, and continued when it became Bell Telephone Laboratories that year, until his retirement in 1954. Nyquist received the IRE Medal of Honor in 1960 for "fundamental contributions to a quantitative understanding of thermal noise, data transmission and negative feedback." In October 1960 he was awarded the Stuart Ballantine Medal of the Franklin Institute "for his theoretical analyses and practical inventions in the field of communications systems during the past forty years including, particularly, his original work in the theories of telegraph transmission, thermal noise in electric conductors, and in the history of feedback systems." In 1969 he was awarded the National Academy of Engineering's fourth Founder's Medal "in recognition of his many fundamental contributions to engineering." In 1975 Nyquist received together with Hendrik Bode the Rufus Oldenburger Medal from the American Society of Mechanical Engineers. As reported in The Idea Factory: Bell Labs and the Great Age of American Innovation, the Bell Labs patent lawyers wanted to know why some people were so much more productive (in terms of patents) than others. After crunching a lot of data, they found that the only thing the productive employees had in common (other than having made it through the Bell Labs hiring process) was that "Workers with the most patents often shared lunch or breakfast with a Bell Labs electrical engineer named Harry Nyquist. It wasn't the case that Nyquist gave them specific ideas. Rather, as one scientist recalled, 'he drew people out, got them thinking'" (p. 135). Nyquist lived in Pharr, Texas after his retirement, and died in Harlingen, Texas on April 4, 1976. Technical contributions As an engineer at Bell Laboratories, Nyquist did important work on thermal noise ("Johnson–Nyquist noise"), the stability of feedback amplifiers, telegraphy, facsimile, television, and other important communications problems. With Herbert E. Ives, he helped to develop AT&T's first facsimile machines that were made public in 1924. In 1932, he published a classic paper on stability of feedback amplifiers. The Nyquist stability criterion can now be found in many textbooks on feedback control theory. His early theoretical work on determining the bandwidth requirements for transmitting information laid the foundations for later advances by Claude Shannon, which led to the development of information theory. 
In particular, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel, and published his results in the papers Certain factors affecting telegraph speed (1924) and Certain topics in Telegraph Transmission Theory (1928). This rule is essentially a dual of what is now known as the Nyquist–Shannon sampling theorem. Terms named for Harry Nyquist Nyquist rate: sampling rate twice the bandwidth of the signal's waveform being sampled; sampling at a rate that is equal to, or faster, than this rate ensures that the waveform can be reconstructed accurately. Nyquist frequency: half the sample rate of a system; signal frequencies below this value are unambiguously represented. Nyquist filter Nyquist plot Nyquist ISI criterion Nyquist (programming language) Nyquist stability criterion References External links IEEE Global History Network page about Nyquist Nyquist criterion page with photo of Nyquist with John R. Pierce and Rudy Kompfner K.J.Astrom: Nyquist and his seminal papers, 2005 presentation Nyquist biography, p. 2 1889 births 1976 deaths People from Kil Municipality American electronics engineers AT&T people Control theorists IEEE Medal of Honor recipients American information theorists Information theory Scientists at Bell Labs University of North Dakota alumni Yale University alumni Swedish emigrants to the United States People from Pharr, Texas Mathematicians from Texas 20th-century American engineers Fellows of the American Physical Society
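A small, generic demonstration of the Nyquist rate listed above (not code from any source cited here): a 9 Hz tone sampled at 48 Hz is captured faithfully, while the same tone sampled at 12 Hz — below its 18 Hz Nyquist rate — produces exactly the same samples as a 3 Hz tone, i.e. it aliases.

```python
# Demonstrate the Nyquist rate: a tone sampled below twice its frequency
# becomes indistinguishable from (aliases to) a lower-frequency tone.
import numpy as np

f_signal = 9.0   # Hz, the tone we want to capture
fs_good = 48.0   # Hz, above the Nyquist rate of 2 * 9 = 18 Hz
fs_bad = 12.0    # Hz, below the Nyquist rate -> aliasing

def sampled_tone(freq_hz, fs_hz, n=12):
    t = np.arange(n) / fs_hz
    return np.sin(2 * np.pi * freq_hz * t)

# Sampled at 12 Hz, a 9 Hz tone yields exactly the same sample values as a
# 3 Hz tone (the |9 - 12| = 3 Hz alias), apart from an overall sign flip:
alias = abs(f_signal - fs_bad)
same = np.allclose(sampled_tone(f_signal, fs_bad), -sampled_tone(alias, fs_bad))
print(f"9 Hz sampled at {fs_bad} Hz matches a {alias} Hz tone: {same}")
print("Samples at 48 Hz (adequate):", np.round(sampled_tone(f_signal, fs_good, 6), 3))
```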
Harry Nyquist
[ "Mathematics", "Technology", "Engineering" ]
964
[ "Telecommunications engineering", "Applied mathematics", "Computer science", "Information theory", "Control engineering", "Control theorists" ]
158,644
https://en.wikipedia.org/wiki/Paddle
A paddle is a handheld tool with an elongated handle and a flat, widened end (the blade) used as a lever to apply force onto the bladed end. It most commonly describes a completely handheld tool used to propel a human-powered watercraft by pushing water in a direction opposite to the direction of travel (i.e. paddling). A paddle is different from an oar (which can be similar in shape and perform the same function via rowing) – an oar is attached to the watercraft via a fulcrum. The term "paddle" can also be used to describe objects of similar shapes or functions: A rotating set of paddle boards known as a paddle wheel is used to propel a steamboat or paddle steamer. In a number of racquet sports (e.g. ping-pong and paddle ball), a "paddle" or "bat" is a short, solid racket used to strike a ball. A mixing paddle is a device used to stir or mix separate ingredients within a mixture. A spanking paddle is used in corporal punishment, typically to forcefully hit someone (e.g. a juvenile) on the buttocks. Canoe and kayak paddles Materials and designs Paddles commonly used in canoes consist of a wooden, fibreglass, carbon fibre, or metal rod (the shaft) with a handle on one end and a rigid sheet (the blade) on the other end. Paddles for use in kayaks are longer, with a blade on each end; they are handled from the middle of the shaft. Kayak paddles having blades in the same plane (when viewed down the shaft) are called "un-feathered." Paddles with blades in different planes are called "feathered". Feathered paddles are measured by the degree of feather, such as 30, 45, or even 90 degrees. Many modern paddles are made of two pieces which can be snapped together in either feathered or unfeathered settings. The shaft is normally straight but in some cases a 'crank' is added with the aim of making the paddle more comfortable and reducing strain on the wrist. Because the kayak paddle is not supported by the boat, paddles made of lighter materials are desired; it is not uncommon for a kayak paddle to be two pounds ( ) or less and very expensive paddles can be as light as . Weight savings are more desirable at the ends of the paddle rather than in the middle. Cheaper kayak paddles have an aluminium shaft while more expensive ones use a lighter fibreglass or carbon fibre shaft. Some paddles have a smaller diameter shaft for people with smaller hands. Paddle length varies with a longer paddle being better suited for stronger people, taller people, and people using the paddle in a wider kayak. Some paddle makers have an online paddle size calculator. Blades vary in size and shape. A blade with a larger surface area may be desirable for a strong person with good shoulder joints, but tiring for a weaker person or a person with less than perfect shoulder joints. Because normal paddling involves alternately dipping and raising the paddle blades, the colour of the blades may affect the visibility of the kayaker to powerboats operators under limited visibility conditions. For this reason white or yellow blades may offer a safety advantage over black or blue blades. Of course, kayakers should wear a headlamp or have other lighting on their kayak under conditions of limited lighting. However, if a powerboat operator must look straight into a sun low in the sky to see a kayaker, the motion of brightly coloured paddle blades may be of more value than lighting on the kayak. Highly reflective water resistant tape (e.g. SOLAS tape) may be affixed to the paddle blades and boat to enhance visibility. 
Use The paddle is held with two hands, some distance apart from each other. For normal use, it is drawn through the water from front (bow) to back (stern) to drive the boat forwards. The two blades of a kayak paddle are dipped alternately on either side of the kayak. A paddle is distinguished from an oar in that the paddle is held in the user's hands and completely supported by the paddler, whereas an oar is primarily supported by the boat, through the use of oarlocks. Gloves may be worn to prevent blistering for long periods of paddling. Other types On mechanical paddle steamers, the motorized paddling is not done with a mass of paddles or oars but by rotating one or a few paddle wheels (rather the inverse of a water mill). Racing paddles also have special designs. They are generally less flat and are curved to catch more water, which enable racing paddlers to maximize the efficiency of their stroke. Wing bladed paddles are very popular in kayak racing. A wing paddle looks like a spoon and acts like a wing or sail, generating lift on the convex side, which pulls the paddle forward-outward at the expense of overcoming drag. This gives additional forward thrust as compared with a flat paddle with forward thrust mainly from drag. Bent shaft paddles, popular with tripping and marathon canoers, have a blade that is angled from the shaft, usually 12 to 15 degrees. See also Canoe paddle strokes Mixing paddle Oar Spanking paddle References External links Paddling History Canoeing and kayaking equipment Marine propulsion Fishing equipment
Paddle
[ "Engineering" ]
1,100
[ "Marine propulsion", "Marine engineering" ]
158,681
https://en.wikipedia.org/wiki/Aircraft%20engine
An aircraft engine, often referred to as an aero engine, is the power component of an aircraft propulsion system. Aircraft using power components are referred to as powered flight. Most aircraft engines are either piston engines or gas turbines, although a few have been rocket powered and in recent years many small UAVs have used electric motors. Manufacturing industry In commercial aviation the major Western manufacturers of turbofan engines are Pratt & Whitney (a subsidiary of Raytheon Technologies), General Electric, Rolls-Royce, and CFM International (a joint venture of Safran Aircraft Engines and General Electric). Russian manufacturers include the United Engine Corporation, Aviadvigatel and Klimov. Aeroengine Corporation of China was formed in 2016 with the merger of several smaller companies. The largest manufacturer of turboprop engines for general aviation is Pratt & Whitney. General Electric announced in 2015 entrance into the market. Development history 1848: John Stringfellow made a steam engine for a 10-foot wingspan model aircraft which achieved the first powered flight, albeit with negligible payload. 1903: Charlie Taylor built an inline engine, mostly of aluminum, for the Wright Flyer (12 horsepower). 1903: Manly-Balzer engine sets standards for later radial engines. 1906: Léon Levavasseur produces a successful water-cooled V8 engine for aircraft use. 1908: René Lorin patents a design for the ramjet engine. 1908: Louis Seguin designed the Gnome Omega, the world's first rotary engine to be produced in quantity. In 1909 a Gnome powered Farman III aircraft won the prize for the greatest non-stop distance flown at the Reims Grande Semaine d'Aviation setting a world record for endurance of . 1910: Coandă-1910, an unsuccessful ducted fan aircraft exhibited at Paris Aero Salon, powered by a piston engine. The aircraft never flew, but a patent was filed for routing exhaust gases into the duct to augment thrust. 1914: Auguste Rateau suggests using exhaust-powered compressor – a turbocharger – to improve high-altitude performance; not accepted after the tests 1915: The Mercedes D.VI - an eighteen-cylinder liquid-cooled W-18 type aircraft engine - (517 hp/380 kW) was the most powerful engine during WW1. 1917–18: The Idflieg-numbered R.30/16 example of the Imperial German Luftstreitkräfte's Zeppelin-Staaken R.VI heavy bomber becomes the earliest known supercharger-equipped aircraft to fly, with a Mercedes D.II straight-six engine in the central fuselage driving a Brown-Boveri mechanical supercharger for the R.30/16's four Mercedes D.IVa engines. 1918: Sanford Alexander Moss picks up Rateau's idea and creates the first successful turbocharger 1926: Armstrong Siddeley Jaguar IV (S), the first series-produced supercharged engine for aircraft use; two-row radial with a gear-driven centrifugal supercharger. 1930: Frank Whittle submitted his first patent for a turbojet engine. June 1939: Heinkel He 176 is the first successful aircraft to fly powered solely by a liquid-fueled rocket engine. August 1939: Heinkel HeS 3 turbojet propels the pioneering German Heinkel He 178 aircraft. 1940: Jendrassik Cs-1, the world's first run of a turboprop engine. It is not put into service. 1943 Daimler-Benz DB 670, first turbofan runs 1944: Messerschmitt Me 163B Komet, the world's first rocket-propelled combat aircraft deployed. 1945: First turboprop-powered aircraft flies, a modified Gloster Meteor with two Rolls-Royce Trent engines. 1947: Bell X-1 rocket-propelled aircraft exceeds the speed of sound. 
1948: 100 shp 782, the first turboshaft engine to be applied to aircraft use; in 1950 used to develop the larger Turbomeca Artouste. 1949: Leduc 010, the world's first ramjet-powered aircraft flight. 1950: Rolls-Royce Conway, the world's first production turbofan, enters service. 1968: General Electric TF39 high bypass turbofan enters service delivering greater thrust and much better efficiency. 2002: HyShot scramjet flew in dive. 2004: NASA X-43, the first scramjet to maintain altitude. 2020: Pipistrel E-811 is the first electric aircraft engine to be awarded a type certificate by EASA. It powers the Pipistrel Velis Electro, the first fully electric EASA type-certified aeroplane. Shaft engines Reciprocating (piston) engines In-line engine In this section, for clarity, the term "inline engine" refers only to engines with a single row of cylinders, as used in automotive language, but in aviation terms, the phrase "inline engine" also covers V-type and opposed engines (as described below), and is not limited to engines with a single row of cylinders. This is typically to differentiate them from radial engines. A straight engine typically has an even number of cylinders, but there are instances of three- and five-cylinder engines. The greatest advantage of an inline engine is that it allows the aircraft to be designed with a low frontal area to minimize drag. If the engine crankshaft is located above the cylinders, it is called an inverted inline engine: this allows the propeller to be mounted high up to increase ground clearance, enabling shorter landing gear. The disadvantages of an inline engine include a poor power-to-weight ratio, because the crankcase and crankshaft are long and thus heavy. An in-line engine may be either air-cooled or liquid-cooled, but liquid-cooling is more common because it is difficult to get enough air-flow to cool the rear cylinders directly. Inline engines were common in early aircraft; one was used in the Wright Flyer, the aircraft that made the first controlled powered flight. However, the inherent disadvantages of the design soon became apparent, and the inline design was abandoned, becoming a rarity in modern aviation. For other configurations of aviation inline engine, such as X-engines, U-engines, H-engines, etc., see Inline engine (aeronautics). V-type engine Cylinders in this engine are arranged in two in-line banks, typically tilted 60–90 degrees apart from each other and driving a common crankshaft. The vast majority of V engines are water-cooled. The V design provides a higher power-to-weight ratio than an inline engine, while still providing a small frontal area. Perhaps the most famous example of this design is the legendary Rolls-Royce Merlin engine, a 27-litre (1649 in3) 60° V12 engine used in, among others, the Spitfires that played a major role in the Battle of Britain. Horizontally opposed engine A horizontally opposed engine, also called a flat or boxer engine, has two banks of cylinders on opposite sides of a centrally located crankcase. The engine is either air-cooled or liquid-cooled, but air-cooled versions predominate. Opposed engines are mounted with the crankshaft horizontal in airplanes, but may be mounted with the crankshaft vertical in helicopters. Due to the cylinder layout, reciprocating forces tend to cancel, resulting in a smooth running engine. Opposed-type engines have high power-to-weight ratios because they have a comparatively small, lightweight crankcase. 
In addition, the compact cylinder arrangement reduces the engine's frontal area and allows a streamlined installation that minimizes aerodynamic drag. These engines always have an even number of cylinders, since a cylinder on one side of the crankcase "opposes" a cylinder on the other side. Opposed, air-cooled four- and six-cylinder piston engines are by far the most common engines used in small general aviation aircraft requiring up to per engine. Aircraft that require more than per engine tend to be powered by turbine engines. H configuration engine An H configuration engine is essentially a pair of horizontally opposed engines placed together, with the two crankshafts geared together. Radial engine This type of engine has one or more rows of cylinders arranged around a centrally located crankcase. Each row generally has an odd number of cylinders to produce smooth operation. A radial engine has only one crank throw per row and a relatively small crankcase, resulting in a favorable power-to-weight ratio. Because the cylinder arrangement exposes a large amount of the engine's heat-radiating surfaces to the air and tends to cancel reciprocating forces, radials tend to cool evenly and run smoothly. The lower cylinders, which are under the crankcase, may collect oil when the engine has been stopped for an extended period. If this oil is not cleared from the cylinders prior to starting the engine, serious damage due to hydrostatic lock may occur. Most radial engines have the cylinders arranged evenly around the crankshaft, although some early engines, sometimes called semi-radials or fan configuration engines, had an uneven arrangement. The best known engine of this type is the Anzani engine, which was fitted to the Bleriot XI used for the first flight across the English Channel in 1909. This arrangement had the drawback of needing a heavy counterbalance for the crankshaft, but was used to avoid the spark plugs oiling up. In military aircraft designs, the large frontal area of the engine acted as an extra layer of armor for the pilot. Also air-cooled engines, without vulnerable radiators, are slightly less prone to battle damage, and on occasion would continue running even with one or more cylinders shot away. However, the large frontal area also resulted in an aircraft with an aerodynamically inefficient increased frontal area. Rotary engine Rotary engines have the cylinders in a circle around the crankcase, as in a radial engine, (see above), but the crankshaft is fixed to the airframe and the propeller is fixed to the engine case, so that the crankcase and cylinders rotate. The advantage of this arrangement is that a satisfactory flow of cooling air is maintained even at low airspeeds, retaining the weight advantage and simplicity of a conventional air-cooled engine without one of their major drawbacks. The first practical rotary engine was the Gnome Omega designed by the Seguin brothers and first flown in 1909. Its relative reliability and good power to weight ratio changed aviation dramatically. Before the first World War most speed records were gained using Gnome-engined aircraft, and in the early years of the war rotary engines were dominant in aircraft types for which speed and agility were paramount. To increase power, engines with two rows of cylinders were built. 
However, the gyroscopic effects of the heavy rotating engine produced handling problems in aircraft and the engines also consumed large amounts of oil since they used total loss lubrication, the oil being mixed with the fuel and ejected with the exhaust gases. Castor oil was used for lubrication, since it is not soluble in petrol, and the resultant fumes were nauseating to the pilots. Engine designers had always been aware of the many limitations of the rotary engine so when the static style engines became more reliable and gave better specific weights and fuel consumption, the days of the rotary engine were numbered. Wankel engine The Wankel is a type of rotary engine. The Wankel engine is about one half the weight and size of a traditional four-stroke cycle piston engine of equal power output, and much lower in complexity. In an aircraft application, the power-to-weight ratio is very important, making the Wankel engine a good choice. Because the engine is typically constructed with an aluminium housing and a steel rotor, and aluminium expands more than steel when heated, a Wankel engine does not seize when overheated, unlike a piston engine. This is an important safety factor for aeronautical use. Considerable development of these designs started after World War II, but at the time the aircraft industry favored the use of turbine engines. It was believed that turbojet or turboprop engines could power all aircraft, from the largest to smallest designs. The Wankel engine did not find many applications in aircraft, but was used by Mazda in a popular line of sports cars. The French company Citroën had developed Wankel powered helicopter in 1970's. In modern times the Wankel engine has been used in motor gliders where the compactness, light weight, and smoothness are crucially important. The now-defunct Staverton-based firm MidWest designed and produced single- and twin-rotor aero engines, the MidWest AE series. These engines were developed from the motor in the Norton Classic motorcycle. The twin-rotor version was fitted into ARV Super2s and the Rutan Quickie. The single-rotor engine was put into a Chevvron motor glider and into the Schleicher ASH motor-gliders. After the demise of MidWest, all rights were sold to Diamond of Austria, who have since developed a MkII version of the engine. As a cost-effective alternative to certified aircraft engines some Wankel engines, removed from automobiles and converted to aviation use, have been fitted in homebuilt experimental aircraft. Mazda units with outputs ranging from to can be a fraction of the cost of traditional engines. Such conversions first took place in the early 1970s; and as of 10 December 2006 the National Transportation Safety Board has only seven reports of incidents involving aircraft with Mazda engines, and none of these is of a failure due to design or manufacturing flaws. Combustion cycles The most common combustion cycle for aero engines is the four-stroke with spark ignition. Two-stroke spark ignition has also been used for small engines, while the compression-ignition diesel engine is seldom used. Starting in the 1930s attempts were made to produce a practical aircraft diesel engine. In general, Diesel engines are more reliable and much better suited to running for long periods of time at medium power settings. 
The lightweight alloys of the 1930s were not up to the task of handling the much higher compression ratios of diesel engines, so they generally had poor power-to-weight ratios and were uncommon for that reason, although the Clerget 14F Diesel radial engine (1939) has the same power to weight ratio as a gasoline radial. Improvements in Diesel technology in automobiles (leading to much better power-weight ratios), the Diesel's much better fuel efficiency and the high relative taxation of AVGAS compared to Jet A1 in Europe have all seen a revival of interest in the use of diesels for aircraft. Thielert Aircraft Engines converted Mercedes Diesel automotive engines, certified them for aircraft use, and became an OEM provider to Diamond Aviation for their light twin. Financial problems have plagued Thielert, so Diamond's affiliate — Austro Engine — developed the new AE300 turbodiesel, also based on a Mercedes engine. Competing new Diesel engines may bring fuel efficiency and lead-free emissions to small aircraft, representing the biggest change in light aircraft engines in decades. Power turbines Turboprop While military fighters require very high speeds, many civil airplanes do not. Yet, civil aircraft designers wanted to benefit from the high power and low maintenance that a gas turbine engine offered. Thus was born the idea to mate a turbine engine to a traditional propeller. Because gas turbines optimally spin at high speed, a turboprop features a gearbox to lower the speed of the shaft so that the propeller tips don't reach supersonic speeds. Often the turbines that drive the propeller are separate from the rest of the rotating components so that they can rotate at their own best speed (referred to as a free-turbine engine). A turboprop is very efficient when operated within the realm of cruise speeds it was designed for, which is typically . Turboshaft Turboshaft engines are used primarily for helicopters and auxiliary power units. A turboshaft engine is similar to a turboprop in principle, but in a turboprop the propeller is supported by the engine and the engine is bolted to the airframe: in a turboshaft, the engine does not provide any direct physical support to the helicopter's rotors. The rotor is connected to a transmission which is bolted to the airframe, and the turboshaft engine drives the transmission. The distinction is seen by some as slim, as in some cases aircraft companies make both turboprop and turboshaft engines based on the same design. Electric power A number of electrically powered aircraft, such as the QinetiQ Zephyr, have been designed since the 1960s. Some are used as military drones. In France in late 2007, a conventional light aircraft powered by an 18 kW electric motor using lithium polymer batteries was flown, covering more than , the first electric airplane to receive a certificate of airworthiness. On 18 May 2020, the Pipistrel E-811 was the first electric aircraft engine to be awarded a type certificate by EASA for use in general aviation. The E-811 powers the Pipistrel Velis Electro. Limited experiments with solar electric propulsion have been performed, notably the manned Solar Challenger and Solar Impulse and the unmanned NASA Pathfinder aircraft. Many big companies, such as Siemens, are developing high performance electric engines for aircraft use, also, SAE shows new developments in elements as pure Copper core electric motors with a better efficiency. 
A hybrid system as emergency back-up and for added power in take-off is offered for sale by Axter Aerospace, Madrid, Spain. Small multicopter UAVs are almost always powered by electric motors. Reaction engines Reaction engines generate the thrust to propel an aircraft by ejecting the exhaust gases at high velocity from the engine, the resultant reaction of forces driving the aircraft forwards. The most common reaction propulsion engines flown are turbojets, turbofans and rockets. Other types such as pulsejets, ramjets, scramjets and pulse detonation engines have also flown. In jet engines the oxygen necessary for fuel combustion comes from the air, while rockets carry an oxidizer (usually oxygen in some form) as part of the fuel load, permitting their use in space. Jet turbines Turbojet A turbojet is a type of gas turbine engine that was originally developed for military fighters during World War II. A turbojet is the simplest of all aircraft gas turbines. It consists of a compressor to draw air in and compress it, a combustion section where fuel is added and ignited, one or more turbines that extract power from the expanding exhaust gases to drive the compressor, and an exhaust nozzle that accelerates the exhaust gases out the back of the engine to create thrust. When turbojets were introduced, the top speed of fighter aircraft equipped with them was at least 100 miles per hour faster than competing piston-driven aircraft. In the years after the war, the drawbacks of the turbojet gradually became apparent. Below about Mach 2, turbojets are very fuel inefficient and create tremendous amounts of noise. Early designs also respond very slowly to power changes, a fact that killed many experienced pilots when they attempted the transition to jets. These drawbacks eventually led to the downfall of the pure turbojet, and only a handful of types are still in production. The last airliner that used turbojets was the Concorde, whose Mach 2 airspeed permitted the engine to be highly efficient. Turbofan A turbofan engine is much the same as a turbojet, but with an enlarged fan at the front that provides thrust in much the same way as a ducted propeller, resulting in improved fuel efficiency. Though the fan creates thrust like a propeller, the surrounding duct frees it from many of the restrictions that limit propeller performance. This operation is a more efficient way to provide thrust than simply using the jet nozzle alone, and turbofans are more efficient than propellers in the transsonic range of aircraft speeds and can operate in the supersonic realm. A turbofan typically has extra turbine stages to turn the fan. Turbofans were among the first engines to use multiple spools—concentric shafts that are free to rotate at their own speed—to let the engine react more quickly to changing power requirements. Turbofans are coarsely split into low-bypass and high-bypass categories. Bypass air flows through the fan, but around the jet core, not mixing with fuel and burning. The ratio of this air to the amount of air flowing through the engine core is the bypass ratio. Low-bypass engines are preferred for military applications such as fighters due to high thrust-to-weight ratio, while high-bypass engines are preferred for civil use for good fuel efficiency and low noise. High-bypass turbofans are usually most efficient when the aircraft is traveling at , the cruise speed of most large airliners. Low-bypass turbofans can reach supersonic speeds, though normally only when fitted with afterburners. 
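As a rough numerical illustration of the bypass ratio defined above, the following Python sketch computes the ratio from assumed fan-duct and core mass flows; the figures are invented for the example and do not describe any particular engine.

def bypass_ratio(bypass_flow_kg_s, core_flow_kg_s):
    """Bypass ratio = air ducted around the core / air passing through the core."""
    return bypass_flow_kg_s / core_flow_kg_s

# Hypothetical high-bypass turbofan: 1,100 kg/s through the fan duct, 110 kg/s through the core.
print(bypass_ratio(1100, 110))   # 10.0, in the range of modern high-bypass airliner engines

# Hypothetical low-bypass military turbofan: 30 kg/s bypassed, 100 kg/s through the core.
print(bypass_ratio(30, 100))     # 0.3, favouring thrust-to-weight ratio over fuel economy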
Advanced technology engine The term advanced technology engine refers to the modern generation of jet engines. The principle is that a turbine engine will function more efficiently if the various sets of turbines can revolve at their individual optimum speeds, instead of at the same speed. The true advanced technology engine has a triple spool, meaning that instead of having a single drive shaft, there are three, in order that the three sets of blades may revolve at different speeds. An interim state is a twin-spool engine, allowing only two different speeds for the turbines. Pulsejets Pulsejets are mechanically simple devices that—in a repeating cycle—draw air through a no-return valve at the front of the engine into a combustion chamber and ignite it. The combustion forces the exhaust gases out the back of the engine. It produces power as a series of pulses rather than as a steady output, hence the name. The only application of this type of engine was the German unmanned V1 flying bomb of World War II. Though the same engines were also used experimentally for ersatz fighter aircraft, the extremely loud noise generated by the engines caused mechanical damage to the airframe that was sufficient to make the idea unworkable. Gluhareff Pressure Jet The Gluhareff Pressure Jet (or tip jet) is a type of jet engine that, like a valveless pulsejet, has no moving parts. Having no moving parts, the engine works by having a coiled pipe in the combustion chamber that superheats the fuel (propane) before being injected into the air-fuel inlet. In the combustion chamber, the fuel/air mixture ignites and burns, creating thrust as it leaves through the exhaust pipe. Induction and compression of the fuel/air mixture is done both by the pressure of propane as it is injected, along with the sound waves created by combustion acting on the intake stacks. It was intended as a power plant for personal helicopters and compact aircraft such as Microlights. Rocket A few aircraft have used rocket engines for main thrust or attitude control, notably the Bell X-1 and North American X-15. Rocket engines are not used for most aircraft as the energy and propellant efficiency is very poor, but have been employed for short bursts of speed and takeoff. Where fuel/propellant efficiency is of lesser concern, rocket engines can be useful because they produce very large amounts of thrust and weigh very little. Rocket turbine engine A rocket turbine engine is a combination of two types of propulsion engines: a liquid-propellant rocket and a turbine jet engine. Its power-to-weight ratio is a little higher than a regular jet engine, and works at higher altitudes. Precooled jet engines For very high supersonic/low hypersonic flight speeds, inserting a cooling system into the air duct of a hydrogen jet engine permits greater fuel injection at high speed and obviates the need for the duct to be made of refractory or actively cooled materials. This greatly improves the thrust/weight ratio of the engine at high speed. It is thought that this design of engine could permit sufficient performance for antipodal flight at Mach 5, or even permit a single stage to orbit vehicle to be practical. The hybrid air-breathing SABRE rocket engine is a pre-cooled engine under development. Piston-turbofan hybrid At the April 2018 ILA Berlin Air Show, Munich-based research institute :de:Bauhaus Luftfahrt presented a high-efficiency composite cycle engine for 2050, combining a geared turbofan with a piston engine core. 
The 2.87 m diameter, 16-blade fan gives a 33.7 ultra-high bypass ratio, driven by a geared low-pressure turbine but the high-pressure compressor drive comes from a piston-engine with two 10 piston banks without a high-pressure turbine, increasing efficiency with non-stationary isochoric-isobaric combustion for higher peak pressures and temperatures. The 11,200 lb (49.7 kN) engine could power a 50-seat regional jet. Its cruise TSFC would be 11.5 g/kN/s (0.406 lb/lbf/hr) for an overall engine efficiency of 48.2%, for a burner temperature of , an overall pressure ratio of 38 and a peak pressure of . Although engine weight increases by 30%, aircraft fuel consumption is reduced by 15%. Sponsored by the European Commission under Framework 7 project , Bauhaus Luftfahrt, MTU Aero Engines and GKN Aerospace presented the concept in 2015, raising the overall engine pressure ratio to over 100 for a 15.2% fuel burn reduction compared to 2025 engines. Engine position numbering On multi-engine aircraft, engine positions are numbered from left to right from the point of view of the pilot looking forward, so for example on a four-engine aircraft such as the Boeing 747, engine No. 1 is on the left side, farthest from the fuselage, while engine No. 3 is on the right side nearest to the fuselage. In the case of the twin-engine English Electric Lightning, which has two fuselage-mounted jet engines one above the other, engine No. 1 is below and to the front of engine No. 2, which is above and behind. In the Cessna 337 Skymaster, a push-pull twin-engine airplane, engine No. 1 is the one at the front of the fuselage, while engine No. 2 is aft of the cabin. Fuel Aircraft reciprocating (piston) engines are typically designed to run on aviation gasoline. Avgas has a higher octane rating than automotive gasoline to allow higher compression ratios, power output, and efficiency at higher altitudes. Currently the most common Avgas is 100LL. This refers to the octane rating (100 octane) and the lead content (LL = low lead, relative to the historic levels of lead in pre-regulation Avgas). Refineries blend Avgas with tetraethyllead (TEL) to achieve these high octane ratings, a practice that governments no longer permit for gasoline intended for road vehicles. The shrinking supply of TEL and the possibility of environmental legislation banning its use have made a search for replacement fuels for general aviation aircraft a priority for pilots’ organizations. Turbine engines and aircraft diesel engines burn various grades of jet fuel. Jet fuel is a relatively less volatile petroleum derivative based on kerosene, but certified to strict aviation standards, with additional additives. Model aircraft typically use nitro engines (also known as "glow engines" due to the use of a glow plug) powered by glow fuel, a mixture of methanol, nitromethane, and lubricant. Electrically powered model airplanes and helicopters are also commercially available. Small multicopter UAVs are almost always powered by electricity, but larger gasoline-powered designs are under development. 
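The cruise TSFC quoted above for the composite cycle concept can be cross-checked with a short unit conversion. The following Python sketch is only an arithmetic check using standard conversion factors, not data from the cited study.

# Convert 11.5 g/kN/s into lb/lbf/hr to check the quoted 0.406 lb/lbf/hr figure.
G_PER_LB = 453.59237       # grams per pound (mass)
LBF_PER_KN = 224.80894     # pounds-force per kilonewton
SECONDS_PER_HOUR = 3600.0

tsfc_g_per_kn_s = 11.5
tsfc_lb_per_lbf_hr = tsfc_g_per_kn_s / G_PER_LB / LBF_PER_KN * SECONDS_PER_HOUR
print(round(tsfc_lb_per_lbf_hr, 3))   # 0.406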
See also Aviation safety Engine configuration Federal Aviation Regulations Hyper engine Model engine United States military aircraft engine designations Notes References External links Aircraft Engines and Aircraft Engine Theory (includes links to diagrams) The Aircraft Engine Historical Society Jet Engine Specification Database Aircraft Engine Efficiency: Comparison of Counter-rotating and Axial Aircraft LP Turbines The History of Aircraft Power Plants Briefly Reviewed : From the " 7 lb. per h.p" Days to the " 1 lb. per h.p" of To-day "The Quest for Power" a 1954 Flight article by Bill Gunston Engine
Aircraft engine
[ "Physics", "Technology" ]
5,760
[ "Physical quantities", "Engines", "Power (physics)", "Powered flight", "Aircraft engines" ]
158,682
https://en.wikipedia.org/wiki/Ls
In computing, ls is a command to list computer files and directories in Unix and Unix-like operating systems. It is specified by POSIX and the Single UNIX Specification. It is available in the EFI shell, as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities, or as part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The numerical computing environments MATLAB and GNU Octave include an ls function with similar functionality. In other environments, such as DOS, OS/2, and Microsoft Windows, similar functionality is provided by the dir command. History An ls utility appeared in the first version of AT&T UNIX, the name inherited from a similar command in Multics also named 'ls', short for the word "list". is part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification. Behavior Unix and Unix-like operating systems maintain the idea of a working directory. When invoked without arguments, ls lists the files in the working directory. If a directory is specified as an argument, the files in that directory are listed; if a file is specified, that file is listed. Multiple directories and files may be specified. In many Unix-like systems, names starting with a dot (.) are hidden. Examples are ., which refers to the working directory, and .., which refers to its parent directory. Hidden names are not shown by default. With -a, all names, including all hidden names, are shown. Using -A shows all names, including hidden names, except for . and ... File names specified explicitly (for example ls .secret) are always listed. Without options, ls displays names only. The different implementations have different options, but common options include: -l Long format, displaying Unix file types, permissions, number of hard links, owner, group, size, last-modified date-time and name. If the modified date is older than 6 months, the time is replaced with the year. Some implementations add additional flags to permissions. The file type can be one of 8 characters: -, regular file; d, directory; l, symbolic (soft) link; n, network files; s, socket; p, named pipe (FIFO); c, character special file; b, block special file. -h Output sizes in human readable format (e.g., 1K (kilobytes), 234M (megabytes), 2G (gigabytes)). This option is not part of the POSIX standard, although implemented in several systems, e.g., GNU coreutils in 1997, FreeBSD 4.5 in 2002, and Solaris 9 in 2002. Additional options controlling how items are displayed include: -R Recursively list items in subdirectories. -t Sort the list by modification time (default sort is alphabetically). -u Sort the list by last access time. -c Sort the list by last attribute (status) change time. -r Reverse the order, for example most recent time last. --full-time Show times down to the second and millisecond instead of just the minute. -1 One entry per line. -m Stream format; list items across the page, separated by commas. -g Include group but not owner. -o Include owner but not group (when combined with -g both group and owner are suppressed). -d Show information about a directory or symbolic link, rather than the contents of a directory or the link's target. -F Append a "/" to directory names and a "*" to executable files. It may be possible to highlight different types of items with different colors. 
This is an area where implementations differ: GNU ls uses the --color option; it checks the Unix file type, the file permissions and the file extension and uses its own database to control colors maintained using dircolors. FreeBSD ls uses the -G option; it checks only the Unix file type and file permissions and uses the termcap database When the option to use color to indicate item types is selected, the output might look like: -rw-r--r-- 1 tsmitt nregion 26650 Dec 20 11:16 audio.ogg brw-r--r-- 1 tsmitt nregion 64 Jan 27 05:52 bd-block-device crw-r--r-- 1 tsmitt nregion 255 Jan 26 13:57 cd-character-device -rw-r--r-- 1 tsmitt nregion 290 Jan 26 14:08 image.png drwxrwxr-x 2 tsmitt nregion 48 Jan 26 11:28 di-directory -rwxrwxr-x 1 tsmitt nregion 29 Jan 26 14:03 ex-executable -rw-r--r-- 1 tsmitt nregion 0 Dec 20 09:39 fi-regular-file lrwxrwxrwx 1 tsmitt nregion 3 Jan 26 11:44 ln-soft-link -> dir lrwxrwxrwx 1 tsmitt nregion 15 Dec 20 10:57 or-orphan-link -> mi-missing-link drwxr-xrwx 2 tsmitt nregion 4096 Dec 20 10:58 ow-other-writeable-dir prw-r--r-- 1 tsmitt nregion 0 Jan 26 11:50 pi-pipe -rwxr-sr-x 1 tsmitt nregion 0 Dec 20 11:05 sg-setgid srw-rw-rw- 1 tsmitt nregion 0 Jan 26 12:00 so-socket drwxr-xr-t 2 tsmitt nregion 4096 Dec 20 10:58 st-sticky-dir -rwsr-xr-x 1 tsmitt nregion 0 Dec 20 11:09 su-setuid -rw-r--r-- 1 tsmitt nregion 10240 Dec 20 11:12 compressed.gz drwxrwxrwt 2 tsmitt nregion 4096 Dec 20 11:10 tw-sticky-other-writeable-dir Sample usage The following example demonstrates the output of the command: $ ls -l drwxr--r-- 1 fjones editors 4096 Mar 2 12:52 drafts -rw-r--r-- 3 fjones editors 30405 Mar 2 12:52 edition-32 -r-xr-xr-x 1 fjones bookkeepers 8460 Jan 16 2022 edit.sh Each line shows the d (directory) or - (file) indicator, Unix file permission notation, number of hard links (1 or 3), the file's owner, the file's group, the file size, the modification date/time, and the file name. In the working directory, the owner fjones has a directory named drafts, a regular file named edition-32, and an executable named edit.sh which is "old", i.e. modified more than 6 months ago as indicated by the display of the year.┌─────────── file (not a directory) |┌─────────── read-write (no execution) permissions for the owner |│ ┌───────── read-only permissions for the group |│ │ ┌─────── read-only permissions for others |│ │ │ ┌── number of hard links |│ │ │ │ ┌── owner |│ │ │ │ │ ┌── user group |│ │ │ │ │ │ ┌── file size in bytes |│ │ │ │ │ │ │ ┌── last modified on |│ │ │ │ │ │ │ │ ┌── filename -rw-r--r-- 3 fjones editors 30405 Mar 2 12:52 edition-32 See also stat (Unix) chown chgrp du (Unix) mdls User identifier (Unix) Group identifier (Unix) List of Unix commands Unix directory structure References External links GNU ls source code (as part of coreutils) ls at the LinuxQuestions.org wiki Multics commands Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands IBM i Qshell commands
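The long-format fields described above can be approximated in a few lines of Python. The sketch below is a simplified illustration of what ls -l reports, not the actual coreutils implementation: it handles only plain files, directories and symbolic links, omits the hard-link count, owner and group columns, and ignores refinements such as the six-month date rule.

import os, stat, time

def mode_string(mode):
    """Build an ls-style type+permission string, e.g. 'drwxr-xr-x'."""
    kind = "d" if stat.S_ISDIR(mode) else "l" if stat.S_ISLNK(mode) else "-"
    perms = ""
    for shift in (6, 3, 0):                      # owner, group, others
        bits = (mode >> shift) & 0o7
        perms += ("r" if bits & 4 else "-") + ("w" if bits & 2 else "-") + ("x" if bits & 1 else "-")
    return kind + perms

def list_directory(path=".", show_hidden=False):
    """Rough equivalent of 'ls -l': type, permissions, size, modification time, name."""
    for entry in sorted(os.scandir(path), key=lambda e: e.name):
        if not show_hidden and entry.name.startswith("."):
            continue                             # mimic ls hiding dot files by default
        info = entry.stat(follow_symlinks=False)
        mtime = time.strftime("%b %d %H:%M", time.localtime(info.st_mtime))
        print(f"{mode_string(info.st_mode)} {info.st_size:>10} {mtime} {entry.name}")

list_directory(".")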
Ls
[ "Technology" ]
1,826
[ "IBM i Qshell commands", "Standard Unix programs", "Multics commands", "Computing commands", "Plan 9 commands", "Inferno (operating system) commands" ]
158,705
https://en.wikipedia.org/wiki/Oil%20spill
An oil spill is the release of a liquid petroleum hydrocarbon into the environment, especially the marine ecosystem, due to human activity, and is a form of pollution. The term is usually given to marine oil spills, where oil is released into the ocean or coastal waters, but spills may also occur on land. Oil spills can result from the release of crude oil from tankers, offshore platforms, drilling rigs, and wells. They may also involve spills of refined petroleum products, such as gasoline and diesel fuel, as well as their by-products. Additionally, heavier fuels used by large ships, such as bunker fuel, or spills of any oily refuse or waste oil, contribute to such incidents. These spills can have severe environmental and economic consequences. Oil spills penetrate into the structure of the plumage of birds and the fur of mammals, reducing their insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water. Cleanup and recovery from an oil spill is difficult and depends upon many factors, including the type of oil spilled, the temperature of the water (affecting evaporation and biodegradation), and the types of shorelines and beaches involved. Spills may take weeks, months or even years to clean up. Oil spills can have disastrous consequences for society: economically, environmentally, and socially. As a result, oil spill accidents have attracted intense media attention and political uproar, bringing many together in a political struggle concerning government response to oil spills and what actions can best prevent them from happening. Human impacts An oil spill has immediate negative effects on human health, including respiratory and reproductive problems as well as liver and immune system damage. Beyond reducing future oil supply, oil spills also affect the everyday life of humans through the potential closure of beaches, parks and fisheries, and through fire hazards. The Kuwaiti oil fires produced air pollution that caused respiratory distress. The Deepwater Horizon explosion killed eleven oil rig workers. The fire resulting from the Lac-Mégantic derailment killed 47 and destroyed half of the town's centre. Spilled oil can also contaminate drinking water supplies. For example, in 2013 two different oil spills contaminated water supplies for 300,000 people in Miri, Malaysia, and 80,000 people in Coca, Ecuador. In 2000, springs were contaminated by an oil spill in Clark County, Kentucky. Contamination can have an economic impact on tourism and marine resource extraction industries. For example, the Deepwater Horizon oil spill impacted beach tourism and fishing along the Gulf Coast, and the responsible parties were required to compensate economic victims. Environmental effects Animals The threat posed to birds, fish, shellfish and crustaceans from spilled oil was known in England in the 1920s, largely through observations made in Yorkshire. The subject was also explored in a scientific paper produced by the National Academy of Sciences in the US in 1974 which considered impacts to fish, crustaceans and molluscs. The paper was limited to 100 copies and was described as a draft document, not to be cited. In general, spilled oil can affect animals and plants in two ways: directly from the oil and from the response or cleanup process. Oil penetrates into the structure of the plumage of birds and the fur of mammals, reducing their insulating ability, and making them more vulnerable to temperature fluctuations and much less buoyant in the water. 
Animals that rely on scent to find their young or mothers cannot do so due to the strong scent of the oil. This causes the young to be rejected and abandoned, leaving them to starve and eventually die. Oil can impair a bird's ability to fly, preventing it from foraging or escaping from predators. As they preen, birds may ingest the oil coating their feathers, irritating the digestive tract, altering liver function, and causing kidney damage. Together with their diminished foraging capacity, this can rapidly result in dehydration and metabolic imbalance. Some birds exposed to petroleum also experience changes in their hormonal balance, including changes in their luteinizing protein. The majority of birds affected by oil spills die from complications without human intervention. Some studies have suggested that less than one percent of oil-soaked birds survive, even after cleaning, although the survival rate can also exceed ninety percent, as in the case of the MV Treasure oil spill. Oil spills and oil dumping events have been impacting sea birds since at least the 1920s and were understood to be a global problem by the 1930s. Heavily furred mammals exposed to oil spills are affected in similar ways. Oil coats the fur of sea otters and seals, reducing its insulating effect, and leading to fluctuations in body temperature and hypothermia. Oil can also blind an animal, leaving it defenseless. The ingestion of oil causes dehydration and impairs the digestive process. Animals can be poisoned, and may die from oil entering the lungs or liver. Air In addition, oil spills can also harm air quality. The chemicals in crude oil are mostly hydrocarbons that contain toxic chemicals such as benzenes, toluene, poly-aromatic hydrocarbons and oxygenated polycyclic aromatic hydrocarbons. These chemicals can cause adverse health effects when inhaled into the human body. In addition, these chemicals can be oxidized by oxidants in the atmosphere to form fine particulate matter after they evaporate into the atmosphere. These particulates can penetrate the lungs and carry toxic chemicals into the human body. Burning surface oil can also be a source of pollution such as soot particles. The cleanup and recovery process also generates air pollutants, such as nitrogen oxides and ozone from ships. Lastly, bubble bursting can also be a generation pathway for particulate matter during an oil spill. During the Deepwater Horizon oil spill, significant air quality issues were found on the Gulf Coast, which is downwind of the DWH oil spill site. Air quality monitoring data showed that criteria pollutants had exceeded the health-based standard in the coastal regions. Ecosystems, habitat The majority of oil from an oil spill remains in the environment, hence a spill from an operation in the ocean is different from a spill on tundra or wetland. Wetlands are considered one of the most sensitive habitats to oil spills and the most difficult to clean. Sources and rate of occurrence Oil spills can be caused by human error, natural disasters, technical failures or deliberate releases. It is estimated that 30–50% of all oil spills are directly or indirectly caused by human error, with approximately 20–40% of oil spills being attributed to equipment failure or malfunction. Causes of oil spills are further distinguished between deliberate releases, such as operational discharges or acts of war, and accidental releases. 
Accidental oil spills are in the focus of the literature, although some of the largest oil spills ever recorded, the Gulf War Oil Spill (sea based) and Kuwaiti Oil Fires (land based) were deliberate acts of war. The academic study of sources and causes of oil spills identifies vulnerable points in oil transportation infrastructure and calculates the likelihood of oil spills happening. This can then guide prevention efforts and regulation policies Natural seeps Around 40–50% of all oil released into the oceans stems from natural seeps from seafloor rocks. This corresponds to approximately 600,000 tons annually on a global level. While natural seeps are the single largest source of oil spills, they are considered less problematic because ecosystems have adapted to such regular releases. For instance, on sites of natural oil seeps, ocean bacteria have evolved to digest oil molecules. Oil tankers and vessels Vessels can be the source of oil spills either through operational releases of oil or in the case of oil tanker accidents. As of 2007, operational discharges from vessels were estimated to account for 21% of oil releases from vessels. They occur as a consequence of failure to comply with regulations or arbitrary discharges of waste oil and water containing such oil residues. Such operational discharges are regulated through the MARPOL convention. Operational releases are frequent, but small in the amount of oil spilled per release, and are often not in the focus of attention regarding oil spills. There has been a steady decrease of operational discharges of oil, with an additional decrease of around 50% since the 1990s. accidental oil tank vessel spills accounted for approximately 8–13% of all oil spilled into the oceans. The main causes of oil tank vessel spills were collision (29%), grounding (22%), mishandling (14%) and sinking (12%), among others. Oil tanker spills are considered a major ecological threat due to the large amount of oil spilled per accident and the fact that major sea traffic routes are close to Large Marine Ecosystems. Around 90% of the world's oil transportation is through oil tankers, and the absolute amount of seaborne oil trade is steadily increasing. However, there has been a reduction of the number of spills from oil tankers and of the amount of oil released per oil tanker spill. In 1992, MARPOL was amended and made it mandatory for large tankers (5,000 dwt and more) to be fitted with double hulls. This is considered to be a major reason for the reduction of oil tanker spills, alongside other innovations such as GPS, sectioning of vessels and sea lanes in narrow straits. In 2023, the International Tanker Owners Pollution Federation (ITOPF) documented a significant oil spill incident of over 700 tonnes and nine medium spills ranging between 7 and 700 tonnes. The major spill occurred in Asia involving heavy fuel oil, and the medium spills were scattered across Asia, Africa, Europe, and America, involving various oil types. The total volume of oil released from these spills in 2023 was approximately 2,000 tonnes. This contributes to a trend of decreased oil spill volumes and frequencies over the decades. Comparatively, the 1970s averaged 79 significant spills per year, which drastically reduced to an average of about 6.3 per year in the 2010s, and has maintained a similar level in the current decade. The reduction in oil spill volume has also been substantial over the years. For instance, the 1990s recorded 1,134,000 tonnes lost, mainly from 10 major spills. 
This figure decreased to 196,000 tonnes in the 2000s and 164,000 tonnes in the 2010s. In the early 2020s, approximately 28,000 tonnes have been lost, predominantly from major incidents. Offshore oil platforms Accidental spills from oil platforms nowadays account for approximately 3% of oil spills in the oceans. Prominent offshore oil platform spills typically occurred as a result of a blowout. They can go on for months until relief wells have been drilled, resulting in enormous amounts of oil leaked. Notable examples of such oil spills are Deepwater Horizon and Ixtoc I. While technologies for drilling in deep water have significantly improved in the past 30–40 years, oil companies move to drilling sites in more and more difficult places. This ambiguous development results in no clear trend regarding the frequency of offshore oil platform spills. Pipelines As of 2010, overall, there has been a substantial increase of pipeline oil spills in the past four decades. Prominent examples include oil spills of pipelines in the Niger Delta. Pipeline oil spills can be caused by trawling of fishing boats, natural disasters, pipe corrosion, construction defects, sabotage, or an attack, as with the Caño Limón-Coveñas pipeline in Colombia. Pipelines as sources of oil spills are estimated to contribute 1% of oil pollution to the oceans. Reasons for this are underreporting, and many oil pipeline leaks occur on land with only fractions of that oil reaching the oceans. Other sources Recreational boats can spill oil into the ocean because of operational or human error and unpreparedness. The amounts are however small, and such oil spills are hard to track due to underreporting. Oil can reach the oceans as oil and fuel from land-based sources. It is estimated that runoff oil and oil from rivers are responsible for 11% of oil pollution to the oceans. Such pollution can also be oil on roads from land vehicles, which is then flushed into the oceans during rainstorms. Purely land-based oil spills are different from maritime oil spills in that oil on land does not spread as quickly as in water, and effects thus remain local. Cleanup and recovery Cleanup and recovery from an oil spill is difficult and depends upon many factors, including the type of oil spilled, the temperature of the water (affecting evaporation and biodegradation), and the types of shorelines and beaches involved. Physical cleanups of oil spills are also very expensive. Until the 1960s, the best method for remediation consisted of putting straw on the spill and retrieving the oil-soaked straw manually. Chemical remediation is the norm as of the early 21st century, using compounds that can herd and thicken oil for physical recovery, disperse oil in the water, or facilitate burning the oil off. The future of oil cleanup technology is likely the use of microorganisms such as Fusobacteriota (formerly Fusobacteria), species demonstrate potential for future oil spill cleanup because of their ability to colonize and degrade oil slicks on the sea surface. There are three kinds of oil-consuming bacteria. Sulfate-reducing bacteria (SRB) and acid-producing bacteria are anaerobic, while general aerobic bacteria (GAB) are aerobic. These bacteria occur naturally and will act to remove oil from an ecosystem, and their biomass will tend to replace other populations in the food chain. The chemicals from the oil which dissolve in water, and hence are available to bacteria, are those in the water associated fraction of the oil. 
Methods for cleaning up include: Bioremediation: use of microorganisms or biological agents to break down or remove oil; such as Alcanivorax bacteria or Methylocella silvestris. Bioremediation Accelerator: a binder molecule that moves hydrocarbons out of water and into gels, when combined with nutrients, encourages natural bioremediation. Oleophilic, hydrophobic chemical, containing no bacteria, which chemically and physically bonds to both soluble and insoluble hydrocarbons. The accelerator acts as a herding agent in water and on the surface, floating molecules such as phenol and BTEX to the surface of the water, forming gel-like agglomerations. Undetectable levels of hydrocarbons can be obtained in produced water and manageable water columns. By overspraying sheen with bioremediation accelerator, sheen is eliminated within minutes. Whether applied on land or on water, the nutrient-rich emulsion creates a bloom of local, indigenous, pre-existing, hydrocarbon-consuming bacteria. Those specific bacteria break down the hydrocarbons into water and carbon dioxide, with EPA tests showing 98% of alkanes biodegraded in 28 days; and aromatics being biodegraded 200 times faster than in nature they also sometimes use the hydrofireboom to clean the oil up by taking it away from most of the oil and burning it. Controlled burning can effectively reduce the amount of oil in water, if done properly. But it can only be done in low wind, and can cause air pollution. Dispersants can be used to dissipate oil slicks. A dispersant is either a non-surface active polymer or a surface-active substance added to a suspension, usually a colloid, to improve the separation of particles and to prevent settling or clumping. They may rapidly disperse large amounts of certain oil types from the sea surface by transferring it into the water column. They will cause the oil slick to break up and form water-soluble micelles that are rapidly diluted. The oil is then effectively spread throughout a larger volume of water than the surface from where the oil was dispersed. They can also delay the formation of persistent oil-in-water emulsions. However, laboratory experiments showed that dispersants increased toxic hydrocarbon levels in fish by a factor of up to 100 and may kill fish eggs. Dispersed oil droplets infiltrate into deeper water and can lethally contaminate coral. Research indicates that some dispersants are toxic to corals. A 2012 study found that Corexit dispersant had increased the toxicity of oil by up to 52 times. In 2019, the U.S. National Academies released a report analyzing the advantages and disadvantages of several response methods and tools. Watch and wait: in some cases, natural attenuation of oil may be most appropriate, due to the invasive nature of facilitated methods of remediation, particularly in ecologically sensitive areas such as wetlands. Dredging: for oils dispersed with detergents and other oils denser than water. Skimming: Requires calm waters at all times during the process. Solidifying: Solidifiers are composed of tiny, floating, dry ice pellets, and hydrophobic polymers that both adsorb and absorb. They clean up oil spills by changing the physical state of spilled oil from liquid to a solid, semi-solid or a rubber-like material that floats on water. Solidifiers are insoluble in water, therefore the removal of the solidified oil is easy and the oil will not leach out. 
Solidifiers have been proven to be relatively non-toxic to aquatic and wildlife and have been proven to suppress harmful vapors commonly associated with hydrocarbons such as benzene, xylene and naphtha. The reaction time for solidification of oil is controlled by the surface area or size of the polymer or dry pellets as well as the viscosity and thickness of the oil layer. Some solidifier product manufacturers claim the solidified oil can be thawed and used if frozen with dry ice or disposed of in landfills, recycled as an additive in asphalt or rubber products, or burned as a low ash fuel. A solidifier called C.I.Agent (manufactured by C.I.Agent Solutions of Louisville, Kentucky) is being used by BP in granular form, as well as in Marine and Sheen Booms at Dauphin Island and Fort Morgan, Alabama, to aid in the Deepwater Horizon oil spill cleanup. Vacuum and centrifuge: oil can be sucked up along with the water, and then a centrifuge can be used to separate the oil from the water – allowing a tanker to be filled with near pure oil. Usually, the water is returned to the sea, making the process more efficient, but allowing small amounts of oil to go back as well. This issue has hampered the use of centrifuges due to a United States regulation limiting the amount of oil in water returned to the sea. Beach Raking: coagulated oil that is left on the beach can be picked up by machinery. Equipment used includes: Booms: large floating barriers that round up oil and lift the oil off the water Skimmers: skim the oil Sorbents: large absorbents that absorb oil and adsorb small droplets Chemical and biological agents: helps to break down the oil Vacuums: remove oil from beaches and water surface Shovels and other road equipment: typically used to clean up oil on beaches Prevention Secondary containment – methods to prevent releases of oil or hydrocarbons into the environment. Oil Spill Prevention Control and Countermeasures (SPCC) program by the United States Environmental Protection Agency. Double-hulling – build double hulls into vessels, which reduces the risk and severity of a spill in case of a collision or grounding. Existing single-hull vessels can also be rebuilt to have a double hull. Thick-hulled railroad transport tanks. Spill response procedures should include elements such as; A listing of appropriate protective clothing, safety equipment, and cleanup materials required for spill cleanup (gloves, respirators, etc.) and an explanation of their proper use; Appropriate evacuation zones and procedures; Availability of fire suppression equipment; Disposal containers for spill cleanup materials; and The first aid procedures that might be required. Research Adaptation of the oil bee's, e.g. Macropis fulvipes', mechanism for harvesting flower oils has led to the biomimetic development of an additional oil spill recovery method. Oil bees have oleophilic properties in their hair-like protrusions that collect and store oil. This technique has been applied to textiles that can be used to remove oil from sea water. Environmental Sensitivity Index (ESI) mapping Environmental Sensitivity Indexes (ESI) are tools used to create Environmental Sensitivity Maps (ESM). ESM's are pre-planning tools used to identify sensitive areas and resources prior to an oil spill event in order to set priorities for protection and plan clean-up strategies. It is to date the most commonly used mapping tool for sensitive area plotting. 
The ESI has three components: a shoreline type ranking system, a biological resources section, and a human-use resource category. History and development ESI is to date the most frequently used sensitivity mapping tool. It was first applied in 1979 in response to an oil spill in the Gulf of Mexico near Texas. At that time, ESI maps were prepared only days ahead of the oil's expected arrival at the threatened shoreline. Early ESMs were atlases, paper maps running to thousands of pages that could only be used for spills in the oceans. In the past three decades, the product has been transformed into a versatile online tool. This conversion has made sensitivity indexing more adaptable, and in 1995 the US National Oceanic and Atmospheric Administration (NOAA) extended the tool so that ESI maps could also cover lakes, rivers, and estuarine shoreline types. ESI maps have since become integral to collecting, synthesizing, and producing data which had previously never been accessible in digital formats. Especially in the United States, the tool has made impressive advancements in developing tidal bay protection strategies, collecting seasonal information and generally in the modelling of sensitive areas. ESI is used together with Geographic Information System (GIS) mapping, whose techniques allow the three different types of resources to be geographically referenced. Usage and application The ESI depicts environmental stability, coastal resilience to maritime-related catastrophes, and the stress-response relationships between coastal resources and such events. Created to support ecological decision making, ESMs can identify sensitive areas and habitats and inform clean-up responses, response measures and monitoring strategies for oil spills. The maps allow experts from varying fields to come together and work efficiently during fast-paced response operations. The process of making an ESI atlas involves GIS technology. The first step is to zone the area that is to be mapped; the second is a meeting with local and regional experts on the area and its resources. Next, all the shoreline types and biological and human-use resources are identified and their locations pinpointed. Once all this information is gathered, it is digitized. In its digital format, classifications are set in place, tables are produced and local experts refine the product before it is released. The most common current use of ESI is within contingency planning. After the maps are calculated and produced, the most sensitive areas are picked out and authenticated. These areas then go through a scrutiny process during which methods of protection and resource assessments are obtained. This in-depth research is then put back into the ESMs to improve their accuracy and to allow tactical information to be stored in them as well. The finished maps are then used for drills and training for clean-up efficiency. Training exercises also often help to update the maps and correct flaws that might have been introduced in the previous steps. ESI Categories Shoreline type Shoreline type is classified by rank depending on how easy the target site would be to clean up, how long the oil would persist, and how sensitive the shoreline is. The ranking system works on a 10-point scale where the higher the rank, the more sensitive a habitat or shore is. The coding system usually works in colour, where warm colours are used for the increasingly sensitive types and cooler colours are used for robust shores. 
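To make the ranking scheme above concrete, the following minimal sketch shows how a shoreline-type lookup of this kind might be organized in code. The ranks and colour bands used here are simplified, illustrative assumptions and do not reproduce NOAA's official ESI classification.

```python
# Illustrative sketch of an ESI-style shoreline sensitivity lookup.
# The ranks and colours below are simplified examples, not the official
# NOAA ESI scheme: a higher rank means a more sensitive shoreline.

SHORELINE_RANKS = {
    "exposed rocky cliff": 1,   # robust: waves reflect oil, rapid natural removal
    "sandy beach": 4,
    "sheltered tidal flat": 7,
    "marsh": 10,                # highly sensitive: oil persists, cleanup is damaging
    "mangrove": 10,
}

def rank_to_colour(rank: int) -> str:
    """Map a 1-10 sensitivity rank to a cool-to-warm colour band."""
    if rank <= 3:
        return "blue"           # cool colour: robust shoreline
    if rank <= 6:
        return "green"
    if rank <= 8:
        return "orange"
    return "red"                # warm colour: most sensitive

for shoreline, rank in sorted(SHORELINE_RANKS.items(), key=lambda kv: kv[1]):
    print(f"{shoreline:<22} rank {rank:>2}  colour {rank_to_colour(rank)}")
```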
For each navigable body of water, there is a feature classifying its sensitivity to oil. Shoreline type mapping codes a large range of ecological settings including estuarine, lacustrine, and riverine environments. Floating oil slicks put the shoreline at particular risk when they eventually come ashore, covering the substrate with oil. The differing substrates between shoreline types vary in their response to oiling, and influence the type of cleanup that will be required to effectively decontaminate the shoreline. Hence ESI shoreline ranking helps committees identify which clean-up techniques are approved or detrimental the natural environment. The exposure the shoreline has to wave energy and tides, substrate type, and slope of the shoreline are also taken into account—in addition to biological productivity and sensitivity. Mangroves and marshes tend to have higher ESI rankings due to the potentially long-lasting and damaging effects of both oil contamination and cleanup actions. Impermeable and exposed surfaces with high wave action are ranked lower due to the reflecting waves keeping oil from coming onshore, and the speed at which natural processes will remove the oil. Biological resources Within the biological resources, the ESI maps protected areas as well as those with bio-diverse importance. These are usually identified through the UNEP-WCMC Integrated Biodiversity Assessment Tool. There are varying types of coastal habitats and ecosystems and thus also many endangered species that need to be considered when looking at affected areas post oil spills. The habitats of plants and animals that may be at risk from oil spills are referred to as "elements" and are divided by functional group. Further classification divides each element into species groups with similar life histories and behaviors relative to their vulnerability to oil spills. There are eight element groups: birds, reptiles, amphibians, fish, invertebrates, habitats and plants, wetlands, and marine mammals and terrestrial mammals. Element groups are further divided into sub-groups, for example, the ‘marine mammals’ element group is divided into dolphins, manatees, pinnipeds (seals, sea lions & walruses), polar bears, sea otters and whales. Necessary when ranking and selecting species is their vulnerability to the oil spills themselves. This not only includes their reactions to such events but also their fragility, the scale of large clusters of animals, whether special life stages occur ashore, and whether any present species is threatened, endangered or rare. The way in which the biological resources are mapped is through symbols representing the species, and polygons and lines to map out the special extent of the species. The symbols also have the ability to identify the most vulnerable of a species life stages, such as the molting, nesting, hatching or migration patterns. This allows for more accurate response plans during those given periods. There is also a division for sub-tidal habitats which are equally important to coastal biodiversity including kelp, coral reefs and sea beds which are not commonly mapped within the shoreline ESI type. Human-use resources Human-use resources are also often referred to as socio-economic features, which map inanimate resources that have the potential to be directly impacted by oil pollution. Human-use resources that are mapped within the ESI will have socio-economic repercussions to an oil spill. 
These resources are divided into four major classifications: archaeological importance or cultural resource sites, high-use recreational areas or shoreline access points, important protected management areas, and resource origins. Some examples include airports, diving sites, popular beach sites, marinas, hotels, factories, natural reserves or marine sanctuaries. When mapped, the human-use resources that need protecting must be certified by a local or regional policy maker. These resources are often extremely vulnerable to seasonal changes due to, for example, fishing and tourism. For this category there is also a set of symbols available to demonstrate their importance on ESMs. Estimating the volume of a spill By observing the thickness of the film of oil and its appearance on the surface of the water, it is possible to estimate the quantity of oil spilled. If the surface area of the spill is also known, the total volume of the oil can be calculated. Oil spill model systems are used by industry and government to assist in planning and emergency decision making. Of critical importance for the skill of the oil spill model prediction is the adequate description of the wind and current fields. There is a worldwide oil spill modelling (WOSM) program. Tracking the scope of an oil spill may also involve verifying that hydrocarbons collected during an ongoing spill are derived from the active spill or some other source. This can involve sophisticated analytical chemistry focused on fingerprinting an oil source based on the complex mixture of substances present. Largely, these will be various hydrocarbons, among the most useful being polyaromatic hydrocarbons. In addition, both oxygen and nitrogen heterocyclic hydrocarbons, such as parent and alkyl homologues of carbazole, quinoline, and pyridine, are present in many crude oils. As a result, these compounds have great potential to supplement the existing suite of hydrocarbon targets to fine-tune source tracking of petroleum spills. Such analysis can also be used to follow weathering and degradation of crude spills. Largest oil spills Crude oil and refined fuel spills from tanker ship accidents have damaged vulnerable ecosystems in Alaska, the Gulf of Mexico, the Galapagos Islands, France, the Sundarbans and many other places. The quantity of oil spilled during accidents has ranged from a few hundred tons to several hundred thousand tons (e.g., Deepwater Horizon oil spill, Atlantic Empress, Amoco Cadiz), but volume is a limited measure of damage or impact. Smaller spills, such as the Exxon Valdez oil spill, have already proven to have a great impact on ecosystems because of the remoteness of the site or the difficulty of mounting an emergency environmental response. Oil spills in the Niger Delta are among the worst on the planet and are often used as an example of ecocide. Between 1970 and 2000, there were over 7,000 spills. Between 1956 and 2006, up to 1.5 million tons of oil were spilled in the Niger Delta. Oil spills at sea are generally much more damaging than those on land, since they can spread for hundreds of nautical miles in a thin oil slick which can cover beaches with a thin coating of oil. These spills can kill seabirds, mammals, shellfish and other organisms they coat. Oil spills on land are more readily containable if a makeshift earth dam can be rapidly bulldozed around the spill site before most of the oil escapes, and land animals can avoid the oil more easily. 
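The volume-estimation method described above (slick area multiplied by an appearance-based film thickness) can be sketched as a short calculation. The thickness values assigned to each appearance class below are rough, assumed rules of thumb for illustration only, not authoritative response-guide figures.

```python
# Rough sketch: estimating spilled-oil volume from slick area and film thickness.
# Thickness per appearance class is an assumed, approximate rule of thumb.

APPROX_THICKNESS_M = {
    "silvery sheen": 1e-7,   # about 0.1 micrometre
    "rainbow sheen": 1e-6,   # about 1 micrometre
    "dark oil":      1e-4,   # about 100 micrometres
}

def estimate_volume_m3(area_km2: float, appearance: str) -> float:
    """Volume = slick area x film thickness."""
    area_m2 = area_km2 * 1e6
    return area_m2 * APPROX_THICKNESS_M[appearance]

# Example: a 5 km^2 rainbow-coloured slick.
volume = estimate_volume_m3(5.0, "rainbow sheen")
print(f"Estimated volume: {volume:.1f} m^3 (~{volume * 1000:.0f} litres)")
```

Under these assumptions a 5 km² rainbow-coloured sheen works out to only a few cubic metres of oil, which illustrates why very thin slicks can cover very large areas.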
The economic impact of oil spills Oil spills can have devastating environmental impacts; however, we cannot allow these to overshadow their often equally detrimental economic consequences. These disasters do not only pose immediate threats to marine ecosystems, but also leave lasting impacts on local and regional economies. This section will explore the multifaceted economic repercussions of oil spills, specifically considering: the decline in tourism, the reduction in fishing, and the impact on port activity. Decline in tourism In the short term, an oil spill will prevent tourists from partaking in usual recreational activities such as swimming, boating, diving, and angling. As such, the area will witness a decline in tourism. This will negatively impact several industries. Firstly, the hotels, restaurants, and bars in the immediate vicinity will have significantly fewer customers. Local car park owners and shopkeepers will be affected too. Then, this decline in tourists will cause further damage to travel agencies, tour guides, and transport companies. The beaches will likely stay shut for several days whilst clean-up operations take place, and there may be disruption caused by an increase in clean-up vehicles. Overall, several businesses will be negatively impacted by the spill in the short term, which can lead to further long-term damage should companies be forced to reduce staff or shut down entirely. Often, this process is intensified by disproportionate media attention. Usually, the affected area returns to normal relatively soon after an oil spill, as the clean-up process is fast. However, media stories will drive future tourists away, as they work to degrade the popular image of a destination with exaggerated stories of oil on beaches and deserted hotels. This aggravates the economic losses, as people continue to choose to travel elsewhere. Such a scenario is particularly damaging for regions which are very reliant on the tourism industry. For example, the Brazilian Northeast can be very vulnerable to drops in tourism, thus, they were badly impacted following a 2500 tonne crude oil spill from an unknown tanker in 2019. Similarly, tourism in Ibiza was severely impacted in 2007. Just 20 tonnes of oil were spilled from the Don Pedro in July 2007, a relatively limited volume compared with other spills. Whilst this caused just a small amount of environmental damage, the economic damage was disproportionately large. Most beaches were reopened within a week, just a dozen seabirds were affected, and there were no reports of injured sea mammals. Nonetheless, 27 percent of hotels in Ibiza were negatively affected, with two thirds of these being seafront hotels. Thus, 32 claims were made by tourist firms, equating to approximately 1.5 million euros of compensation. This provides a clear example of an oil spill resulting in massive economic disaster. Furthermore, following the world's largest oil spill, the Deepwater Horizon Oil Spill in 2010, the U.S. Travel Association estimated 23 billion dollars’ worth of associated costs for affected tourist infrastructure. Reduction in fishing After the Deepwater Horizon crisis, the Gulf of Mexico suffered an estimated 1.9-billion-dollar loss in revenue from fishing. This is because fishing closures were imposed due to fears of the safety of seafood, there was also a decline in demand, as seafood restaurants and markets suffered such severe losses that many were forced to shut. 
Usually, the Gulf sees an average of 106,703 fishing trips per day, equating to 1 million metric tonnes of annual fishery landings. Therefore, the necessary fishing ban following the disaster was highly damaging. Similarly, following the sinking of the Prestige oil tanker near Galicia, Spain, in November 2002, 77,000 tonnes of crude oil were spilled into the ocean. This disaster has had severe economic consequences, alongside the environmental damage. Large zones were cordoned in which fishing was banned, with these bans lasting for more than eight months. This affected several groups, including fishermen, ship owners, and the companies who bought and sold the fish. Several compensatory actions were introduced, including tax benefits and aid. This resulted in expenses of approximately 113 million euros in an attempt to compensate for the halt in fishing activity. The examples of the Deepwater Horizon and the Prestige clearly illustrate the severe economic consequences when oil spills prevent commercial fishing. Water pollution due to oil spills can be severe, often resulting in the death or injury of many sea creatures, including birds, sea mammals, fish, algae, and coral. The impact on fish caught in the spill has both immediate and longer-term impacts. Immediately, the fish are tainted with oil, and they cannot be used commercially due to safety reasons. Then, the oil can spread and sink below the water's surface. If fish swallow the oil, they are also inconsumable due to the health risk posed to humans. Therefore, massive economic damage is caused to the fishing industry following an oil spill, as the stock is vastly reduced. Furthermore, the oil can cause damage to the equipment and boats of fishermen. Clean-up operations can also interrupt usual fishing routes, and sometimes fishing bans are imposed. This further illustrates the damaging economic effects of oil spills on commercial fishing, which is particularly detrimental for regions whose economy relies heavily on fishing. The impact on port activity Ports are major hubs for economic activity; thus, an oil spill in or near a port can have significant consequences. During and following a spill, all boats entering or leaving the port must be closely managed in order to prevent further spread. Furthermore, specialist cleaning contractors must be hired to effectively clean the various port structures. Oil spills are relatively regular occurrences in ports, as small spills often happen due to the large volume of boats, and these are not as well documented in the media as larger events are. However, these spills must still be dealt with, and they can still have damaging economic repercussions. Both the incident and the response require expensive and time-consuming management which is disruptive to port activity. Furthermore, special care must be taken during clean-up operations to ensure that the oil does not get stuck under the quayside, as this could act as a continual source of oil contamination. This can also be seen with sea defenses; should the oil penetrate deep into the structures, they may become a source of secondary pollution. Therefore, it is crucial for ports to manage and mitigate any oil spills, in order to limit the damage to ships and shipping operations. Otherwise, should large disruption occur, the economic damage can be extensive due to costly clean-up processes and delayed shipments. Summary The economic impact of oil spills on tourism, fishing, and ports is substantial and important to assess. 
Coordinated efforts are necessary to mitigate these impacts, including effective clean-up measures, public relations campaigns to restore the image of affected areas, and support for businesses and communities that must bear the economic downturn. See also Automated Data Inquiry for Oil Spills Environmental issues with petroleum Environmental issues with shipping LNG spill Storm oil Low-temperature thermal desorption National Oil and Hazardous Substances Pollution Contingency Plan Ohmsett (Oil and Hazardous Materials Simulated Environmental Test Tank) Oil Pollution Act of 1990 (in the US) Oil well Penguin sweater Project Deep Spill, the first intentional deepwater oil and gas spill Pseudomonas putida (used for degrading oil) S-200 (fertilizer) ShoreZone Spill containment Tarball References Further reading Nelson-Smith, Oil Pollution and Marine Ecology, Elek Scientific, London, 1972; Plenum, New York, 1973 Oil Spill Case Histories 1967–1991, NOAA/Hazardous Materials and Response Division, Seattle, WA, 1992 Ramseur, Jonathan L. Oil Spills: Background and Governance, Congressional Research Service, Washington, DC, September 15, 2017 Technology hazards Bird mortality Ocean pollution Product safety scandals Road hazards Disasters in Nigeria
Oil spill
[ "Chemistry", "Technology", "Environmental_science" ]
7,916
[ "Ocean pollution", "Road hazards", "Water pollution", "nan", "Oil spills" ]
158,715
https://en.wikipedia.org/wiki/Radiator
A radiator is a heat exchanger used to transfer thermal energy from one medium to another for the purpose of cooling and heating. The majority of radiators are constructed to function in cars, buildings, and electronics. A radiator is always a source of heat to its environment, although this may be for either the purpose of heating an environment, or for cooling the fluid or coolant supplied to it, as for automotive engine cooling and HVAC dry cooling towers. Despite the name, most radiators transfer the bulk of their heat via convection instead of thermal radiation. History The Roman hypocaust is an early example of a type of radiator for building space heating. Franz San Galli, a Prussian-born Russian businessman living in St. Petersburg, is credited with inventing the heating radiator around 1855, having received a radiator patent in 1857, but American Joseph Nason and Scot Rory Gregor developed a primitive radiator in 1841 and received a number of U.S. patents for hot water and steam heating. Radiation and convection Heat transfer from a radiator occurs by two mechanisms: thermal radiation and convection into flowing air or liquid. Conduction is not normally a major source of heat transfer in radiators. A radiator may even transfer heat by phase change, for example, drying a pair of socks. In practice, the term "radiator" refers to any of a number of devices in which a liquid circulates through exposed pipes (often with fins or other means of increasing surface area). The term "convector" refers to a class of devices in which the source of heat is not directly exposed. To increase the surface area available for heat exchange with the surroundings, a radiator will have multiple fins, in contact with the tube carrying liquid pumped through the radiator. Air (or other exterior fluid) in contact with the fins carries off heat. If air flow is obstructed by dirt or damage to the fins, that portion of the radiator is ineffective at heat transfer. Heating Radiators are commonly used to heat buildings on the European continent. In a radiative central heating system, hot water or sometimes steam is generated in a central boiler and circulated by pumps through radiators within the building, where this heat is transferred to the surroundings. In some countries, portable radiators are common to heat a single room, as a safer alternative to space heater and fan heater. Heating, ventilation, and air conditioning Radiators are used in dry cooling towers and closed-loop cooling towers for cooling buildings using liquid-cooled chillers for heating, ventilation, and air conditioning (HVAC) while keeping the chiller coolant isolated from the surroundings. Engine cooling Radiators are used for cooling internal combustion engines, mainly in automobiles but also in piston-engined aircraft, railway locomotives, motorcycles, stationary generating plants and other places where heat engines are used (watercrafts, having an unlimited supply of a relatively cool water outside, usually use the liquid-liquid heat exchangers instead). To cool down the heat engine, a coolant is passed through the engine block, where it absorbs heat from the engine. The hot coolant is then fed into the inlet tank of the radiator (located either on the top of the radiator, or along one side), from which it is distributed across the radiator core through tubes to another tank on the opposite end of the radiator. 
As the coolant passes through the radiator tubes on its way to the opposite tank, it transfers much of its heat to the tubes which, in turn, transfer the heat to the fins that are lodged between each row of tubes. The fins then release the heat to the ambient air. Fins are used to greatly increase the contact surface of the tubes to the air, thus increasing the exchange efficiency. The cooled liquid is fed back to the engine, and the cycle repeats. Normally, the radiator does not reduce the temperature of the coolant back to ambient air temperature, but it is still sufficiently cooled to keep the engine from overheating. This coolant is usually water-based, with the addition of glycols to prevent freezing and other additives to limit corrosion, erosion and cavitation. However, the coolant may also be an oil. The first engines used thermosiphons to circulate the coolant; today, however, all but the smallest engines use pumps. Up to the 1980s, radiator cores were often made of copper (for fins) and brass (for tubes, headers, and side-plates, while tanks could also be made of brass or of plastic, often a polyamide). Starting in the 1970s, use of aluminium increased, eventually taking over the vast majority of vehicular radiator applications. The main inducements for aluminium are reduced weight and cost. Since air has a lower heat capacity and density than liquid coolants, a fairly large volume flow rate (relative to the coolant's) must be blown through the radiator core to capture the heat from the coolant. Radiators often have one or more fans that blow air through the radiator. To save fan power consumption in vehicles, radiators are often behind the grille at the front end of a vehicle. Ram air can give a portion or all of the necessary cooling air flow when the coolant temperature remains below the system's designed maximum temperature, and the fan remains disengaged. Electronics and computers As electronic devices become smaller, the problem of dispersing waste heat becomes more difficult. Tiny radiators known as heat sinks are used to convey heat from the electronic components into a cooling air stream. Heatsinks do not use water, rather they conduct the heat from the source. High-performance heat sinks have copper to conduct better. Heat is transferred to the air by conduction and convection; a relatively small proportion of heat is transferred by radiation owing to the low temperature of semiconductor devices compared to their surroundings. Radiators are also used in liquid cooling loops for rejecting heat. Spacecraft Radiators are found as components of some spacecraft. These radiators work by radiating heat energy away as light (generally infrared given the temperatures at which spacecraft try to operate) because in the vacuum of space neither convection nor conduction can work to transfer heat away. On the International Space Station, these can be seen clearly as large white panels attached to the main truss. They can be found on both crewed and uncrewed craft. See also Heat sink Heat spreader Heat pipe Heat pump Radiatori – small, squat pasta shaped to resemble radiators References Heating, ventilation, and air conditioning Plumbing Residential heating appliances Russian inventions Vehicle parts
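The engine-cooling discussion above notes that, because air has a much lower density and heat capacity than liquid coolant, the fan must move a far larger volume of air than the pump moves coolant. A minimal steady-state energy balance makes the point; every figure below (heat load, fluid properties, temperature changes) is an assumed, illustrative value rather than data for any particular radiator.

```python
# Rough energy-balance sketch: volume flow of cooling air versus coolant needed
# to move the same heat, using Q = mass_flow * cp * dT on each side.
# All property values and temperature changes are assumed, illustrative figures.

heat_to_reject_kw = 50.0          # assumed heat load rejected by the radiator

# Coolant side (water/glycol mix, assumed properties)
rho_coolant = 1050.0              # kg/m^3
cp_coolant = 3600.0               # J/(kg*K)
dT_coolant = 8.0                  # K drop across the radiator

# Air side (assumed properties near ambient)
rho_air = 1.2                     # kg/m^3
cp_air = 1005.0                   # J/(kg*K)
dT_air = 25.0                     # K rise across the core

q = heat_to_reject_kw * 1000.0    # W

coolant_flow_m3s = q / (rho_coolant * cp_coolant * dT_coolant)
air_flow_m3s = q / (rho_air * cp_air * dT_air)

print(f"Coolant volume flow: {coolant_flow_m3s * 1000:.2f} L/s")
print(f"Air volume flow:     {air_flow_m3s:.2f} m^3/s")
print(f"Air/coolant volume-flow ratio: {air_flow_m3s / coolant_flow_m3s:.0f}x")
```

With these assumptions the air side needs on the order of a thousand times the coolant's volume flow, which is why radiators rely on large frontal area, ram air and fans.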
Radiator
[ "Technology", "Engineering" ]
1,392
[ "Construction", "Vehicle parts", "Plumbing", "Components" ]
158,717
https://en.wikipedia.org/wiki/Ekpyrotic%20universe
The ekpyrotic universe () is a cosmological model of the early universe that explains the origin of the large-scale structure of the cosmos. The model has also been incorporated in the cyclic universe theory (or ekpyrotic cyclic universe theory), which proposes a complete cosmological history, both the past and future. Origins The original ekpyrotic model was introduced by Justin Khoury, Burt Ovrut, Paul Steinhardt and Neil Turok in 2001. Steinhardt created the name based on the Ancient Greek word ekpyrosis (ἐκπύρωσις, "conflagration"), which refers to a Stoic cosmological model in which the universe is caught in an eternal cycle of fiery birth, cooling and rebirth. The theory addresses the fundamental question that remains unanswered by the Big Bang inflationary model, "What happened before the Big Bang?" The explanation, according to the ekpyrotic theory, is that the Big Bang was actually a big bounce, a transition from a previous epoch of contraction to the present epoch of expansion. The key events that shaped our universe occurred before the bounce, and, in a cyclic version, the universe bounces at regular intervals. Applications of the theory The original ekpyrotic models relied on string theory, branes and extra dimensions, but most contemporary ekpyrotic and cyclic models use the same physical ingredients as inflationary models (quantum fields evolving in ordinary space-time). Like Big Bang cosmology, the ekpyrotic theory has accurately described essential features of our universe. It predicts a uniform, flat universe with patterns of hot spots and cold spots, in agreement with observations of the cosmic microwave background (CMB), observations confirmed to higher precision by the WMAP and Planck satellite experiments. Observation of a CMB has long been considered evidence of the Big Bang, but proponents of the ekpyrotic and cyclic theories contend that the CMB is also consistent with a Big Bounce as posited in those models. Other researchers argue that data from the Planck observations of the CMB "significantly limit the viable parameter space of the ekpyrotic/cyclic scenarios." Primordial gravitational waves, if ever observed, may help scientists distinguish between various theories about the origin of the universe. Implications for cosmology An advantage of ekpyrotic and cyclic models is that they do not produce a multiverse. This is important because when the effects of quantum fluctuations are properly included in the Big Bang inflationary model, they prevent the universe from achieving the uniformity and flatness that the cosmologists are trying to explain. Instead, inflated quantum fluctuations cause the universe to break up into patches with every conceivable combination of physical properties. Instead of making clear predictions, the Big Bang inflationary theory allows any outcome, so that the properties we observe may be viewed as random chance, resulting from the particular patch of the multiverse in which the Earth resides. Most regions of the multiverse would have very different properties. 
Nobel laureate Steven Weinberg has suggested that if the multiverse is true, “the hope of finding a rational explanation for the precise values of quark masses and other constants of the standard model that we observe in our Big Bang is doomed, for their values would be an accident of the particular part of the multiverse in which we live.” The idea that the properties of our universe are an accident and come from a theory that allows a multiverse of other possibilities is hard to reconcile with the fact that the universe is extraordinarily simple (uniform and flat) on large scales and that elementary particles appear to be described by simple symmetries and interactions. Also, the accidental concept cannot be falsified by an experiment since any future experiments can be viewed as yet other accidental aspects. In ekpyrotic and cyclic models, smoothing and flattening occurs during a period of slow contraction, so quantum fluctuations are not inflated and cannot produce a multiverse. As a result, the ekpyrotic and cyclic models predict simple physical properties that are consistent with current experimental evidence without producing a multiverse. See also Cosmic inflation Cyclic model Physical cosmology Notes and references Further reading A Brief Introduction to the Ekpyrotic Universe by Steinhardt, Paul J., Department of Physics, Princeton University. Greene, Brian, The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory, Vintage (2000), ISBN 9780099289920. (the first paper to point out problems with the theory). Whitehouse, David, "Before the Big Bang". BBC News. April 10, 2001. Discover Magazine, Before the Big Bang February 2004 issue. Parallel Universes, (BBC Two, 9 February 14, 2002). 'Brane-Storm' Challenges Part of Big Bang Theory. Yi-Fu Cai, Damien A. Easson, Robert Brandenberger, Towards a Nonsingular Bouncing Cosmology, arXiv:1206.2382, (June 2012). Concepts in astrophysics Physical cosmology
Ekpyrotic universe
[ "Physics", "Astronomy" ]
1,050
[ "Concepts in astrophysics", "Theoretical physics", "Astrophysics", "Physical cosmology", "Astronomical sub-disciplines" ]
158,740
https://en.wikipedia.org/wiki/Exhaust%20gas%20recirculation
In internal combustion engines, exhaust gas recirculation (EGR) is a nitrogen oxide (NOx) emissions reduction technique used in petrol/gasoline engines, diesel engines and some hydrogen engines. EGR works by recirculating a portion of an engine's exhaust gas back to the engine cylinders. The exhaust gas displaces atmospheric air and reduces the oxygen content in the combustion chamber. Reducing the amount of oxygen reduces the amount of fuel that can burn in the cylinder, thereby reducing peak in-cylinder temperatures. The actual amount of recirculated exhaust gas varies with the engine operating parameters. In the combustion cylinder, NOx is produced by high-temperature mixtures of atmospheric nitrogen and oxygen, and this usually occurs at cylinder peak pressure. In a spark-ignition engine, an ancillary benefit of recirculating exhaust gases via an external EGR valve is an increase in efficiency, as charge dilution allows a larger throttle position and reduces associated pumping losses. Mazda's turbocharged SkyActiv gasoline direct injection engine uses recirculated and cooled exhaust gases to reduce combustion chamber temperatures, thereby permitting the engine to run at higher boost levels before the air-fuel mixture must be enriched to prevent engine knocking. In a gasoline engine, this inert exhaust displaces some amount of combustible charge in the cylinder, effectively reducing the quantity of charge available for combustion without affecting the air-fuel ratio. In a diesel engine, the exhaust gas replaces some of the excess oxygen in the pre-combustion mixture. Because NOx forms primarily when a mixture of nitrogen and oxygen is subjected to high temperature, the lower combustion chamber temperatures caused by EGR reduce the amount of NOx that the combustion process generates. Gases re-introduced from EGR systems will also contain near-equilibrium concentrations of NOx and CO; the small fraction initially within the combustion chamber inhibits the total net production of these and other pollutants when sampled on a time average. Chemical properties of different fuels limit how much EGR may be used. For example, methanol is more tolerant of EGR than gasoline. History The first EGR systems were crude; some were as simple as an orifice jet between the exhaust and intake tracts which admitted exhaust to the intake tract whenever the engine was running. Difficult starting, rough idling, reduced performance and lost fuel economy inevitably resulted. By 1973, an EGR valve controlled by manifold vacuum opened or closed to admit exhaust to the intake tract only under certain conditions. Control systems grew more sophisticated as automakers gained experience; Volkswagen's "Coolant Controlled Exhaust Gas Recirculation" system of 1973 exemplified this evolution: a coolant temperature sensor blocked vacuum to the EGR valve until the engine reached normal operating temperature. This prevented driveability problems due to unnecessary exhaust induction; NOx forms under elevated temperature conditions generally not present with a cold engine. Moreover, the EGR valve was controlled, in part, by vacuum drawn from the carburetor's venturi, which allowed more precise constraint of EGR flow to only those engine load conditions under which NOx is likely to form. Later, backpressure transducers were added to the EGR valve control to further tailor EGR flow to engine load conditions. Most modern engines now need exhaust gas recirculation to meet emissions standards. However, recent innovations have led to the development of engines that do not require it. 
The 3.6 Chrysler Pentastar engine is one example that does not require EGR. EGR The exhaust gas contains water vapor and carbon dioxide which both have lower heat capacity ratio than air. Adding exhaust gas therefore reduces pressure and temperature during the isentropic compression in the cylinder, thereby lowering the adiabatic flame temperature. In a typical automotive spark-ignited (SI) engine, 5% to 15% of the exhaust gas is routed back to the intake as EGR. The maximum quantity is limited by the need of the mixture to sustain a continuous flame front during the combustion event; excessive EGR in poorly set up applications can cause misfires and partial burns. Although EGR does measurably slow combustion, this can largely be compensated for by advancing spark timing. The impact of EGR on engine efficiency largely depends on the specific engine design, and sometimes leads to a compromise between efficiency and emissions. In certain types of situations, a properly operating EGR can theoretically increase the efficiency of gasoline engines via several mechanisms: Reduced throttle losses. The addition of inert exhaust gas into the intake system means that for a given power output, the throttle plate must be opened further, resulting in increased inlet manifold pressure and reduced throttling losses. Reduced heat rejection. Lowered peak combustion temperatures not only reduces formation, it also reduces the loss of thermal energy to combustion chamber surfaces, leaving more available for conversion to mechanical work during the expansion stroke. Reduced chemical dissociation. The lower peak temperatures result in more of the released energy remaining as sensible energy near Top Dead Center (TDC), rather than being bound up (early in the expansion stroke) in the dissociation of combustion products. This effect is minor compared to the first two. EGR is typically not employed at high loads because it would reduce peak power output. This is because it reduces the intake charge density. EGR is also omitted at idle (low-speed, zero load) because it would cause unstable combustion, resulting in rough idle. Since the EGR system recirculates a portion of exhaust gases, over time the valve can become clogged with carbon deposits, which will prevent it from operating properly. Clogged EGR valves can sometimes be cleaned, but replacement is necessary if the valve is faulty. Diesel engines Because diesel engines depend on the heat of compression to ignite their fuel, they are fundamentally different from spark-ignited engines. The physical process of diesel-fuel combustion is such that the most complete combustion occurs at the highest temperatures. Unfortunately, the production of nitrogen oxides () increases at high temperatures. The goal of EGR is thus to reduce production by reducing the combustion temperatures. In modern diesel engines, the EGR gas is usually cooled with a heat exchanger to allow the introduction of a greater mass of recirculated gas. However, uncooled EGR designs do exist; these are often referred to as hot-gas recirculation (HGR). Cooled EGR components are exposed to repeated, rapid changes in temperatures, which can cause coolant leak and catastrophic engine failure. Unlike spark-ignition engines, diesel engines are not limited by the need for a contiguous flamefront. Furthermore, since diesels always operate with excess air, they benefit (in terms of reduced output) from EGR rates as high as 50%. 
However, a 50% EGR rate is only suitable when the diesel engine is at idle, since this is when there is otherwise a large excess of air. Because modern diesel engines often have a throttle, EGR can reduce the need for throttling, thereby eliminating this type of loss in the same way that it does for spark-ignited engines. In a naturally aspirated (i.e. nonturbocharged) engine, such a reduction in throttling also reduces the problem of engine oil being sucked past the piston rings into the cylinder and causing oil-derived carbon deposits there. (This benefit only applies to nonturbocharged engines.) In diesel engines in particular, EGR systems come with serious drawbacks, one of which is a reduction in engine longevity. For example, because the EGR system routes exhaust gas directly back into the cylinder intake without any form of filtration, this exhaust gas contains carbon particulates. And, because these tiny particles are abrasive, the recirculation of this material back into the cylinder increases engine wear. This is so because these carbon particles will blow by the piston rings (causing piston-cylinder-interface wear in the process) and then end up in the crankcase oil, where they will cause further wear throughout the engine simply because their tiny size passes through typical oil filters. This enables them to be recirculated indefinitely (until the next oil change takes place). Exhaust gas—which consists largely of nitrogen, carbon dioxide, and water vapor—has a higher specific heat than air, so it still serves to lower peak combustion temperatures. However, adding EGR to a diesel reduces the specific heat ratio of the combustion gases in the power stroke. This reduces the amount of power that can be extracted by the piston, thereby reducing the thermodynamic efficiency. EGR also tends to reduce the completeness of fuel combustion during the power stroke. This is plainly evident by the increase in particulate emissions that corresponds to an increase in EGR. Particulate matter (mainly carbon and also known as soot) that is not burned in the power stroke represents wasted energy. Because of stricter regulations on particulate matter (PM), the soot-increasing effect of EGR required the introduction of further emission controls in order to compensate for the resulting PM emission increases. The most common soot-control device is a diesel particulate filter (DPF) installed downstream of the engine in the exhaust system. This captures soot but causes a reduction in fuel efficiency due to the back pressure created. Diesel particulate filters come with their own set of very specific operational and maintenance requirements. Firstly, as the DPF captures the soot particles (which are made far more numerous due to the use of EGR), the DPF itself progressively becomes loaded with soot. This soot must then be burned off, either actively or passively. At sufficiently high temperatures, the nitrogen dioxide component of emissions is the primary oxidizer of the soot caught in the DPF at normal operating temperatures. This process is known as passive regeneration, and it is only partially effective at burning off the captured soot. And, especially at high EGR rates, the effectiveness of passive regeneration is further reduced. 
This, in turn, necessitates periodic active regeneration of the DPF by burning diesel fuel directly in the oxidation catalyst in order to significantly increase exhaust-gas temperatures through the DPF to the point where PM is incinerated by the residual oxygen in the exhaust. Because diesel fuel and engine oil both contain nonburnable (i.e. metallic and mineral) impurities, the incineration of soot (PM) in the DPF leaves behind a residue known as ash. For this reason, after repeated regeneration events, eventually the DPF must either be physically removed and cleaned in a special external process, or it must be replaced. As noted earlier, the feeding of the low-oxygen exhaust gas into the diesel engine's air intake engenders lower combustion temperatures, thereby reducing emissions of . By replacing some of the fresh air intake with inert gases EGR also allows the engine to reduce the amount of injected fuel without compromising ideal air-fuel mixture ratio, therefore reducing fuel consumption in low engine load situation (for ex. while the vehicle is coasting or cruising). Power is not reduced by EGR at any times, as EGR is not employed in high load engine situations. This allows engines to still deliver maximum power when needed, but lower fuel consumption despite large cylinder volume when partial load is sufficient to meet the power needs of the car and the driver. EGR has nothing to do with oil vapor re-routing from a positive crankcase ventilation system (PCV) system, as the latter is only there to reduce oil vapor emissions, and can be present on engines with or without any EGR system. However, the tripartite mixture resulting from employing both EGR and PCV in an engine (i.e. exhaust gas, fresh air, and oil vapour) can cause the buildup of sticky tar in the intake manifold and valves. This mixture can also cause problems with components such as swirl flaps, where fitted. (These problems, which effectively take the form of an undesirable positive-feedback loop, will worsen as the engine ages. For example, as the piston rings progressively wear out, more crankcase oil will get into the exhaust stream. Simultaneously, more fuel and soot and combustion byproducts will gain access to the engine oil.) The end result of this recirculation of both exhaust gas and crankcase oil vapour is again an increase in soot production, which however is effectively countered by the DPF, which collects these and in the end will burn those unburnt particles during regeneration, converting them into CO2 and water vapour emissions, that - unlike NOx gases - have no negative health effects. Modern cooled EGR systems help reduce engine wear by using the waste heat recouped from the recirculated gases to help warm the coolant and hence the engine block faster to operating temperature. This also helps lower fuel consumption through reducing the time after cold starts during which the engine controller has to inject somewhat larger amounts of fuel into the cylinders to counter the effects of fuel vapor condensation on cylinder walls and lowered combustion effectiveness because of the engine block still being below ideal operating temperature. Lowering combustion temperatures also helps reducing the oxidization of engine oil, as the most significant factor affecting that is exposure of the oil to high temperatures. 
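As noted earlier, a key reason EGR lowers peak temperatures is that exhaust gas (rich in CO2 and water vapour) lowers the charge's heat capacity ratio, so the charge heats up less during compression. The minimal sketch below quantifies that effect for an idealized isentropic compression; the heat-capacity-ratio values and operating figures are assumptions chosen only for illustration.

```python
# Idealized sketch: effect of charge dilution on end-of-compression temperature.
# For an isentropic compression of an ideal gas, T2 = T1 * r**(gamma - 1),
# where r is the compression ratio and gamma the heat capacity ratio.
# The gamma values below are assumed, illustrative figures.

def end_of_compression_temp(t1_k: float, comp_ratio: float, gamma: float) -> float:
    """Ideal-gas isentropic end-of-compression temperature."""
    return t1_k * comp_ratio ** (gamma - 1.0)

T1 = 320.0           # K, assumed charge temperature at intake valve closing
R = 10.0             # assumed compression ratio

gamma_fresh = 1.38   # assumed: mostly fresh air/fuel charge
gamma_egr = 1.34     # assumed: charge diluted with CO2/H2O-rich exhaust

t_fresh = end_of_compression_temp(T1, R, gamma_fresh)
t_egr = end_of_compression_temp(T1, R, gamma_egr)

print(f"Without EGR: {t_fresh:.0f} K at end of compression")
print(f"With EGR:    {t_egr:.0f} K  ({t_fresh - t_egr:.0f} K cooler before combustion)")
```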
Although engine manufacturers have refused to release details of the effect of EGR on fuel economy, the EPA regulations of 2002 that led to the introduction of cooled EGR were associated with a 3% drop in engine efficiency, thus bucking the trend of a 0.5% annual increase. See also Diesel exhaust Secondary air injection Sources Heywood, John B., "Internal Combustion Engine Fundamentals," McGraw Hill, 1988. van Basshuysen, Richard, and Schäfer, Fred, "Internal Combustion Engine Handbook," SAE International, 2004. "Bosch Automotive Handbook," 3rd Edition, Robert Bosch GmbH, 1993. References External links Lecture notes on improving fuel efficiency that discusses the effects of specific heat ratio, University of Washington Diesel cycle calculator that can be used to show the effect of specific heat ratio, Georgia State University HyperPhysics A Chrysler Imperial fan club describes different EGR control mechanisms Don’t Block or Remove the EGR Valve, It’s Saving You Money What are the symptoms of bad EGR valve Engine technology Chemical process engineering Air pollution control systems NOx control
Exhaust gas recirculation
[ "Chemistry", "Technology", "Engineering" ]
2,923
[ "Chemical process engineering", "Chemical engineering", "Engine technology", "Engines" ]
158,741
https://en.wikipedia.org/wiki/Power-to-weight%20ratio
Power-to-weight ratio (PWR, also called specific power, or power-to-mass ratio) is a calculation commonly applied to engines and mobile power sources to enable the comparison of one unit or design to another. Power-to-weight ratio is a measurement of actual performance of any engine or power source. It is also used as a measurement of performance of a vehicle as a whole, with the engine's power output being divided by the weight (or mass) of the vehicle, to give a metric that is independent of the vehicle's size. Power-to-weight is often quoted by manufacturers at the peak value, but the actual value may vary in use and variations will affect performance. The inverse of power-to-weight, the weight-to-power ratio (power loading), is a calculation commonly applied to aircraft, cars, and vehicles in general, to enable the comparison of one vehicle's performance to another. Power-to-weight ratio is equal to thrust per unit mass multiplied by the velocity of any vehicle. Power-to-weight (specific power) The power-to-weight ratio (specific power) is defined as the power generated by the engine(s) divided by the mass. In this context, the term "weight" is something of a misnomer, since the quantity actually used is mass; this is why the ratio does not become infinite in a zero-gravity (weightless) environment. A typical turbocharged V8 diesel engine might have a power-to-weight ratio of about 0.65 kW/kg (0.40 hp/lb). Examples of high power-to-weight ratios can often be found in turbines. This is because of their ability to operate at very high speeds. For example, the Space Shuttle's main engines used turbopumps (machines consisting of a pump driven by a turbine engine) to feed the propellants (liquid oxygen and liquid hydrogen) into the engine's combustion chamber. The original liquid hydrogen turbopump, similar in size to an automobile engine, achieved a power-to-weight ratio of 153 kW/kg (93 hp/lb). Physical interpretation In classical mechanics, instantaneous power is the limiting value of the average work done per unit time as the time interval Δt approaches zero (i.e. the derivative with respect to time of the work done). The typically used metric unit of the power-to-weight ratio is the watt per kilogram (W/kg), which equals m²/s³. This fact allows one to express the power-to-weight ratio purely by SI base units. A vehicle's power-to-weight ratio equals its acceleration times its velocity; so at twice the velocity, it experiences half the acceleration, all else being equal. Propulsive power If the work to be done is rectilinear motion of a body with constant mass m, whose center of mass is to be accelerated along a (possibly non-straight) line to a speed v, at some angle with respect to the centre and radial of a gravitational field, by an onboard powerplant, then the associated kinetic energy is E_k = ½ m v², where: m is the mass of the body, and v is the speed of the center of mass of the body, changing with time. The work–energy principle states that the work done to the object over a period of time is equal to the difference in its total energy over that period of time, so the rate at which work is done is equal to the rate of change of the kinetic energy (in the absence of potential energy changes). The work done from time t to time t + Δt along the path C is defined as the line integral of the applied force over that path, W = ∫C F · dr, so the fundamental theorem of calculus gives the power as P = dW/dt = dE_k/dt = m a · v = F · v = τ · ω, where: a is the acceleration of the center of mass of the body, changing with time, F is the linear force – or thrust – applied upon the center of mass of the body, changing with time, v is the velocity of the center of mass of the body, changing with time, τ is the torque applied upon the center of mass of the body, changing with time, and ω is the angular velocity of the center of mass of the body, changing with time. 
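The relations above, P = m a · v for motion along a line and P = τ ω for a rotating shaft, are straightforward to evaluate numerically. The short sketch below does both; the mass, speed, torque and engine-speed values are assumptions chosen purely for illustration.

```python
import math

# Sketch of the power relations above, using assumed example figures.

# Linear case: power needed to give a vehicle of mass m an acceleration a
# while travelling at speed v, ignoring drag and rolling resistance.
m = 1200.0          # kg, assumed vehicle mass
a = 2.0             # m/s^2, assumed acceleration
v = 20.0            # m/s, assumed speed (72 km/h)

p_linear = m * a * v                 # P = m * a * v for force along the velocity
print(f"Linear: {p_linear / 1000:.1f} kW, power-to-weight {p_linear / m:.1f} W/kg")

# Rotational case: shaft power from dynamometer torque and rotational speed,
# P = torque * angular velocity.
torque = 300.0      # N*m, assumed dynamometer reading
rpm = 4000.0        # assumed engine speed
omega = rpm * 2.0 * math.pi / 60.0   # convert rev/min to rad/s

p_shaft = torque * omega
print(f"Shaft:  {p_shaft / 1000:.1f} kW at {rpm:.0f} rpm")
```

Note that the printed power-to-weight figure (40 W/kg) is exactly a times v, matching the statement above that a vehicle's power-to-weight ratio equals its acceleration times its velocity.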
In propulsion, power is only delivered if the powerplant is in motion, and is transmitted to cause the body to be in motion. It is typically assumed here that mechanical transmission allows the powerplant to operate at peak output power. This assumption allows engine tuning to trade power band width and engine mass for transmission complexity and mass. Electric motors do not suffer from this tradeoff, instead trading their high torque for traction at low speed. The power advantage, or power-to-weight ratio, is then P/m = a · v, where: v is the linear speed of the center of mass of the body. Engine power The useful power of an engine with shaft power output can be calculated using a dynamometer to measure torque and rotational speed, with maximum power reached when torque multiplied by rotational speed is a maximum. For jet engines the useful power is equal to the flight speed of the aircraft multiplied by the force, known as net thrust, required to make it go at that speed. It is used when calculating propulsive efficiency. Examples Engines Heat engines and heat pumps Thermal energy is made up from molecular kinetic energy and latent phase energy. Heat engines are able to convert thermal energy in the form of a temperature gradient between a hot source and a cold sink into other desirable mechanical work. Heat pumps take mechanical work to regenerate thermal energy in a temperature gradient. Standard definitions should be used when interpreting how the propulsive power of a jet or rocket engine is transferred to its vehicle. Electric motors and electromotive generators An electric motor uses electrical energy to provide mechanical work, usually through the interaction of a magnetic field and current-carrying conductors. By the interaction of mechanical work on an electrical conductor in a magnetic field, electrical energy can be generated. Fluid engines and fluid pumps Fluids (liquid and gas) can be used to transmit and/or store energy using pressure and other fluid properties. Hydraulic (liquid) and pneumatic (gas) engines convert fluid pressure into other desirable mechanical or electrical work. Fluid pumps convert mechanical or electrical work into movement or pressure changes of a fluid, or storage in a pressure vessel. Thermoelectric generators and electrothermal actuators A variety of effects can be harnessed to produce thermoelectricity, thermionic emission, pyroelectricity and piezoelectricity. Electrical resistance and ferromagnetism of materials can be harnessed to generate thermoacoustic energy from an electric current. Electrochemical (galvanic) and electrostatic cell systems (Closed cell) batteries All electrochemical cell batteries deliver a changing voltage as their chemistry changes from "charged" to "discharged". A nominal output voltage and a cutoff voltage are typically specified for a battery by its manufacturer. The output voltage falls to the cutoff voltage when the battery becomes "discharged". The nominal output voltage is always less than the open-circuit voltage produced when the battery is "charged". The temperature of a battery can affect the power it can deliver, where lower temperatures reduce power. 
Total energy delivered from a single charge cycle is affected by both the battery temperature and the power it delivers. If the temperature lowers or the power demand increases, the total energy delivered at the point of "discharge" is also reduced. Battery discharge profiles are often described in terms of a factor of battery capacity. For example, a battery with a nominal capacity quoted in ampere-hours (Ah) at a C/10 rated discharge current (derived in amperes) may safely provide a higher discharge current – and therefore higher power-to-weight ratio – but only with a lower energy capacity. Power-to-weight ratio for batteries is therefore less meaningful without reference to corresponding energy-to-weight ratio and cell temperature. This relationship is known as Peukert's law. Electrostatic, electrolytic and electrochemical capacitors Capacitors store electric charge onto two electrodes separated by an electric field semi-insulating (dielectric) medium. Electrostatic capacitors feature planar electrodes onto which electric charge accumulates. Electrolytic capacitors use a liquid electrolyte as one of the electrodes and the electric double layer effect upon the surface of the dielectric-electrolyte boundary to increase the amount of charge stored per unit volume. Electric double-layer capacitors extend both electrodes with a nanoporous material such as activated carbon to significantly increase the surface area upon which electric charge can accumulate, reducing the dielectric medium to nanopores and a very thin high permittivity separator. While capacitors tend not to be as temperature sensitive as batteries, they are significantly capacity constrained and without the strength of chemical bonds suffer from self-discharge. Power-to-weight ratio of capacitors is usually higher than batteries because charge transport units within the cell are smaller (electrons rather than ions), however energy-to-weight ratio is conversely usually lower. Fuel cell stacks and flow cell batteries Fuel cells and flow cells, although perhaps using similar chemistry to batteries, do not contain the energy storage medium or fuel. With a continuous flow of fuel and oxidant, available fuel cells and flow cells continue to convert the energy storage medium into electric energy and waste products. Fuel cells distinctly contain a fixed electrolyte whereas flow cells also require a continuous flow of electrolyte. Flow cells typically have the fuel dissolved in the electrolyte. Photovoltaics Vehicles Power-to-weight ratios for vehicles are usually calculated using curb weight (for cars) or wet weight (for motorcycles), that is, excluding weight of the driver and any cargo. This could be slightly misleading, especially with regard to motorcycles, where the driver might weigh 1/3 to 1/2 as much as the vehicle itself. In the sport of competitive cycling athlete's performance is increasingly being expressed in VAMs and thus as a power-to-weight ratio in W/kg. This can be measured through the use of a bicycle powermeter or calculated from measuring incline of a road climb and the rider's time to ascend it. Locomotives A locomotive generally must be heavy in order to develop enough adhesion on the rails to start a train. As the coefficient of friction between steel wheels and rails seldom exceeds 0.25 in most cases, improving a locomotive's power-to-weight ratio is often counterproductive. 
However, the choice of power transmission system, such as variable-frequency drive versus direct-current drive, may support a higher power-to-weight ratio by better managing propulsion power. Utility and practical vehicles Most vehicles are designed to meet passenger comfort and cargo carrying requirements. Vehicle designs trade off power-to-weight ratio to increase comfort, cargo space, fuel economy, emissions control, energy security and endurance. Reduced drag and lower rolling resistance in a vehicle design can facilitate increased cargo space without increase in the (zero cargo) power-to-weight ratio. This increases the role flexibility of the vehicle. Energy security considerations can trade off power (typically decreased) and weight (typically increased), and therefore power-to-weight ratio, for fuel flexibility or drive-train hybridisation. Some utility and practical vehicle variants such as hot hatches and sports-utility vehicles reconfigure power (typically increased) and weight to provide the perception of sports car like performance or for other psychological benefit. Notable low ratio Common power Performance luxury, roadsters and mild sports Increased engine performance is a consideration, but also other features associated with luxury vehicles. Longitudinal engines are common. Bodies vary from hot hatches, sedans (saloons), coupés, convertibles and roadsters. Mid-range dual-sport and cruiser motorcycles tend to have similar power-to-weight ratios. Sports vehicles Power-to-weight ratio is an important vehicle characteristic that affects the acceleration of sports vehicles. Early vehicles Aircraft Propeller aircraft depend on high power-to-weight ratios to generate sufficient thrust to achieve sustained flight, and then for speed. Thrust-to-weight ratio Jet aircraft produce thrust directly. Human Power-to-weight ratio is important in cycling, since it determines acceleration and the speed during hill climbs. Since a cyclist's power-to-weight output decreases with fatigue, it is normally discussed with relation to the length of time that he or she maintains that power. A professional cyclist can produce over 20 W/kg (0.012 hp/lb) as a five-second maximum. See also References Mechanics Power (physics) Engineering ratios
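As a rough numerical illustration of the ratio discussed above, the short sketch below computes power-to-weight in W/kg for a road vehicle from rated power and curb mass, and for a cyclist from a timed hill climb using P = m·g·h/t. All figures are invented for the example, and rolling resistance, aerodynamic drag and equipment mass are ignored; it is a sketch, not a figure from any cited source.

G = 9.81  # gravitational acceleration, m/s^2

def vehicle_power_to_weight(power_w, curb_mass_kg):
    # Rated engine power divided by curb mass, in W/kg.
    return power_w / curb_mass_kg

def climb_power_to_weight(rider_mass_kg, ascent_m, time_s):
    # Climbing power P = m*g*h/t (resistive losses ignored), per kg of rider.
    power_w = rider_mass_kg * G * ascent_m / time_s
    return power_w / rider_mass_kg

print(vehicle_power_to_weight(150_000, 1_500))         # 100.0 W/kg
print(round(climb_power_to_weight(70, 400, 1200), 2))  # 3.27 W/kg for a 400 m climb in 20 min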
Power-to-weight ratio
[ "Physics", "Mathematics", "Engineering" ]
2,563
[ "Force", "Physical quantities", "Metrics", "Engineering ratios", "Quantity", "Power (physics)", "Energy (physics)", "Mechanics", "Mechanical engineering", "Wikipedia categories named after physical quantities" ]
158,783
https://en.wikipedia.org/wiki/Napier%20Sabre
The Napier Sabre is a British H-24-cylinder, liquid-cooled, sleeve valve, piston aero engine, designed by Major Frank Halford and built by D. Napier & Son during World War II. The engine evolved to become one of the most powerful inline piston aircraft engines in the world, developing from in its earlier versions to in late-model prototypes. The first operational aircraft to be powered by the Sabre were the Hawker Typhoon and Hawker Tempest; the first aircraft powered by the Sabre was the Napier-Heston Racer, which was designed to capture the world speed record. Other aircraft using the Sabre were early prototype and production variants of the Blackburn Firebrand, the Martin-Baker MB 3 prototype and a Hawker Fury prototype. The rapid introduction of jet engines after the war led to the quick demise of the Sabre, as there was less need for high power military piston aero engines and because Napier turned its attention to developing turboprop engines such as the Naiad and Eland. Design and development Prior to the Sabre, Napier had been working on large aero engines for some time. Its most famous was the Lion, which had been a very successful engine between the World Wars and in modified form had powered several of the Supermarine Schneider Trophy competitors in 1923 and 1927, as well as several land speed record cars. By the late 1920s, the Lion was no longer competitive and work started on replacements. Napier followed the Lion with two H-block designs: the H-16 Rapier and the H-24 Dagger. The H-block has a compact layout, consisting of two horizontally opposed engines, one atop or beside the other. Since the cylinders are opposed, the motion in one is balanced by the motion on the opposing side, eliminating both first order and second order vibration. In these new designs, Napier chose air cooling but in service, the rear cylinders proved to be impossible to cool properly, which made the engines unreliable. Genesis During the 1930s, studies showed the need for engines capable of developing one horsepower per cubic inch of displacement (about 45 kW/litre). Such power output was needed to propel aircraft large enough to carry large fuel loads for long range flights. A typical large engine of the era, the Pratt & Whitney R-1830 Twin Wasp, developed about from 1,830 cubic inches (30 litres), so an advance of some 50 per cent would be needed. This called for radical changes and while many companies tried to build such an engine, none succeeded. In 1927, Harry Ricardo published a study on the concept of the sleeve valve engine. In it, he wrote that traditional poppet valve engines would be unlikely to produce much more than , a figure that many companies were eyeing for next generation engines. To pass this limit, the sleeve valve would have to be used, to increase volumetric efficiency, as well as to decrease the engine's sensitivity to detonation, which was prevalent with the poor quality, low-octane fuels in use at the time. Halford had worked for Ricardo 1919–1922 at its London office and Halford's 1923 office was in Ladbroke Grove, North Kensington, only a few miles from Ricardo, while Halford's 1929 office was even closer (700 yards), and while in 1927 Ricardo started work with Bristol Engines on a line of sleeve-valve designs, Halford started work with Napier, using the Dagger as the basis. 
The layout of the H-block, with its inherent balance and the Sabre's relatively short stroke, allowed it to run at a higher rate of rotation, to deliver more power from a smaller displacement, provided that good volumetric efficiency could be maintained (with better breathing), which sleeve valves could do. The Napier company decided first to develop a large 24 cylinder liquid–cooled engine, capable of producing at least in late 1935. Although the company continued with the opposed H layout of the Dagger, this new design positioned the cylinder blocks horizontally and it was to use sleeve valves. All of the accessories were grouped conveniently above and below the cylinder blocks, rather than being at the front and rear of the engine, as in most contemporary designs. The Air Ministry supported the Sabre programme with a development order in 1937 for two reasons: to provide an alternative engine if the Rolls-Royce Vulture and the Bristol Centaurus failed as the next generation of high power engines and to keep Napier in the aero-engine industry. The first Sabre engines were ready for testing in January 1938, although they were limited to . By March, they were passing tests at and by June 1940, when the Sabre passed the Air Ministry's 100-hour test, the first production versions were delivering from their 2,238 cubic inch (37 litre) displacements. By the end of the year, they were producing . The contemporary 1940 Rolls-Royce Merlin II was generating just over from a 1,647 cubic inch (27 litre) displacement. Production Problems arose as soon as mass production began. Prototype engines had been hand-assembled by Napier craftsmen and it proved to be difficult to adapt it to assembly-line production techniques. The sleeves often failed due to the way they were manufactured from chrome-molybdenum steel, leading to seized cylinders, which caused the loss of the sole prototype Martin-Baker MB 3. The Ministry of Aircraft Production was responsible for the development of the engine and arranged for sleeves to be machined by the Bristol Aeroplane Company from its Taurus engine forgings. These nitrided austenitic steel sleeves were the result of many years of intensive sleeve development, experience that Napier did not have. Air filters had to be fitted when a new sleeve problem appeared in 1944 when aircraft were operating from Normandy soil with its abrasive, gritty dust. Quality control proved to be inadequate, engines were often delivered with improperly cleaned castings, broken piston rings and machine cuttings left inside the engine. Mechanics were overworked trying to keep the Sabres running and during cold weather they had to run them every two hours during the night so that the engine oil would not congeal and prevent the engine from starting the next day. These problems took too long to remedy and the engine gained a bad reputation. To make matters worse, mechanics and pilots unfamiliar with the different nature of the engine, tended to blame the Sabre for problems that were caused by not following correct procedures. This was exacerbated by the representatives of the competing Rolls-Royce company, which had its own agenda. In 1944, Rolls-Royce produced a similar design prototype called the Eagle. Napier seemed complacent and tinkered with the design for better performance. In 1942, it started a series of projects to improve its high-altitude performance, with the addition of a three-speed, two-stage supercharger, when the basic engine was still not running reliably. 
In December 1942, the company was purchased by the English Electric Company, which ended the supercharger project immediately and devoted the whole company to solving the production problems, which was achieved quickly. By 1944, the Sabre V was delivering consistently and the reputation of the engine started to improve. This was the last version to enter service, being used in the Hawker Typhoon and its derivative, the Hawker Tempest. Without the advanced supercharger, the engine's performance over fell off rapidly and pilots flying Sabre-powered aircraft, were generally instructed to enter combat only below this altitude. At low altitude, both planes were formidable. As air superiority over Continental Europe was slowly gained, Typhoons were increasingly used as fighter-bombers, notably by the RAF Second Tactical Air Force. The Tempest became the principal destroyer of the V-1 flying bomb (Fieseler Fi 103), since it was the fastest of all the Allied fighters at low levels. Later, the Tempest destroyed about 20 Messerschmitt Me 262 jet aircraft. Development continued and the later Sabre VII delivered with a new supercharger. By the end of World War II, there were several engines in the same power class. The Pratt & Whitney R-4360 Wasp Major four-row, 28-cylinder radial produced at first and later types produced , but these required almost twice the displacement in order to do so, 4,360 cubic inches (71 litres). Variants Note: Sabre I (E.107) (1939) . Sabre II (1940) . Experimental 0.332:1 propeller reduction gear ratio. Sabre II (production variant) . Reduction gear ratio 0.274:1: mainly used in early Hawker Typhoons. Sabre IIA . Revised ignition system: maximum boost +9 lbs. Sabre IIB . Four choke S.U. carburettor: Mainly used in Hawker Tempest V. Sabre IIC . Similar to Mk VII. Sabre III . Similar to Mk IIA, tailored for the Blackburn Firebrand: 25 manufactured and installed. Sabre IV . As Mk VA with Hobson fuel injection: preliminary flight development engine for Sabre V series. Used in Hawker Tempest I. Sabre V . Developed MK II, redesigned supercharger with increased boost, redesigned induction system. Sabre VA . Mk V with Hobson-R.A.E fuel injection, single-lever throttle and propeller control: used in Hawker Tempest VI. Sabre VI . Mk VA with Rotol cooling fan: used in 2 Hawker Tempest Vs modified to use Napier designed annular radiators; also in experimental Vickers Warwick V. Sabre VII . Mk VA strengthened to withstand high powers produced using Water/Methanol injection. Larger supercharger impeller. Sabre VIII . Intended for Hawker Fury; tested in the Folland Fo.108. Sabre E.118 (1941) Three-speed, two-stage supercharger, contra-rotating propeller; test flown in Fo.108. Sabre E.122 (1946) 3,500 horsepower. Intended for Napier 500mph tailless fighter Applications The engine has been used in many aircraft, including two mass-produced fighters. Adopted Hawker Tempest Hawker Typhoon Limited production and prototypes Blackburn Firebrand, only in 21 early production aircraft Fairey Battle, test-bed Folland Fo.108, test-bed Hawker Fury, prototype (2 built (LA610, VP207), 485 mph) Martin-Baker MB 3, prototype Napier-Heston Racer, prototype Vickers Warwick, prototype Restoration project and engines on display Under restoration Canadian Aviation Heritage Centre, Macdonald Campus, McGill University, Montréal. 
Sabre IIa, Serial Number 2484, Hawker Typhoon Preservation Group, RB396, UK Preserved on public display Solent Sky (example on loan from Birmingham Museum of Science and Industry) Fantasy of Flight, Polk City, Florida A Sabre IIA engine has been restored by the Friends Association of the Museo Nacional de Aeronáutica de Argentina and is on public display at the Engines Hall. Sectioned Napier engines on public display Imperial War Museum, Duxford (donated by Cambridge University Engineering Department) Royal Air Force Museum London London Science Museum World of WearableArt & Classic Cars Museum, Nelson Canada Aviation and Space Museum, Ottawa Specifications (Sabre VA) See also References Footnotes Notes Bibliography Air Ministry. Pilot's Notes for Typhoon Marks IA and IB; Sabre II or IIA engine (2nd edition). London: Crecy Publications, 2004. "A Real Contender (article and images) " Aeroplane No. 452, Volume 38, Number 12, December 2010. "A Co-operative Challenger (article and images on Heston Racer)." Flight and The Aircraft Engineer No. 1790, Volume XLIII, 15 April 1943. Gunston, Bill. World Encyclopedia of Aero Engines: From the Pioneers to the Present Day. 5th edition, Stroud, UK: Sutton, 2006. Lumsden, Alec. British Piston Engines and Their Aircraft. Marlborough, UK: Airlife Publishing, 2003. . Mason, Francis K. Hawker Aircraft Since 1920 (3rd revised edition). London: Putnam, 1991. . "Napier Sabre VII (article and images)." Flight and The Aircraft Engineer No. 1926, Volume XLVIII, 22 November 1945. "Napier Flight Development (article and images on Napier's test and development centre)." Flight and The Aircraft Engineer No. 1961, Volume L, 25 July 1946. Setright, L. J. K.: The Power to Fly: The Development of the Piston Engine in Aviation. Allen & Unwin, 1971. . Sheffield, F. C. "2,200 h.p. Napier Sabre (article and images)." Flight and The Aircraft Engineer No. 1829, Volume XLV, 13 January 1944. Sheffield, F. C. "Napier Sabre II (article and images)." Flight and The Aircraft Engineer No. 1839, Volume XLV, 23 March 1944. White, Graham. Allied Aircraft Piston Engines of World War II: History and Development of Frontline Aircraft Piston Engines Produced by Great Britain and the United States During World War II. Warrendale, Pennsylvania: SAE International, 1995. Reynolds, John. Engines and Enterprise: The Life and Work of Sir Harry Ricardo. Stroud, UK: Sutton, 1999. Taylor, Douglas. Boxkite to Jet - the remarkable career of Frank B Halford. Derby, UK: RRHT, 1999. Further reading (1989 copy by Crescent Books, NY.) Clostermann, Pierre: The Big Show. London, UK: Chatto & Windus in association with William Heinemann, 1953. (2004 edition). External links Napier Power Heritage Trust site Cutaway illustration of a Napier Sabre drawn by Max Millar (uncredited) and coloured in by Makoto Oiuchi The Sabre-powered Napier-Heston Racer The Hawker Tempest Page The Greatest Engines of All Time NAPIER SABRE 3000 B.H.P A 1946 Flight advertisement for the Sabre engine Sabre Sleeve valve engines Boxer engines 1930s aircraft piston engines H engines
Napier Sabre
[ "Technology" ]
2,820
[ "Sleeve valve engines", "Engines" ]
158,788
https://en.wikipedia.org/wiki/Carbonyl%20group
In organic chemistry, a carbonyl group is a functional group with the formula C=O, composed of a carbon atom double-bonded to an oxygen atom, and it is divalent at the C atom. It is common to several classes of organic compounds (such as aldehydes, ketones and carboxylic acids), as part of many larger functional groups. A compound containing a carbonyl group is often referred to as a carbonyl compound. The term carbonyl can also refer to carbon monoxide as a ligand in an inorganic or organometallic complex (a metal carbonyl, e.g. nickel carbonyl). The remainder of this article concerns itself with the organic chemistry definition of carbonyl, such that carbon and oxygen share a double bond. Carbonyl compounds In organic chemistry, a carbonyl group characterizes compound classes such as aldehydes, ketones, carboxylic acids, esters and amides. Other organic carbonyls are urea and the carbamates, the derivatives of acyl chlorides chloroformates and phosgene, carbonate esters, thioesters, lactones, lactams, hydroxamates, and isocyanates. Examples of inorganic carbonyl compounds are carbon dioxide and carbonyl sulfide. A special group of carbonyl compounds is the dicarbonyl compounds, which can exhibit special properties. Structure and reactivity For organic compounds, the length of the C-O bond does not vary widely from 120 picometers. Inorganic carbonyls have shorter C-O distances: CO, 113; CO2, 116; and COCl2, 116 pm. The carbonyl carbon is typically electrophilic. A qualitative order of electrophilicity is RCHO (aldehydes) > R2CO (ketones) > RCO2R' (esters) > RCONH2 (amides). A variety of nucleophiles attack, breaking the carbon-oxygen double bond. Interactions between carbonyl groups and other substituents were found in a study of collagen. Substituents can affect carbonyl groups by addition or subtraction of electron density by means of a sigma bond. ΔHσ values are much greater when the substituents on the carbonyl group are more electronegative than carbon. The polarity of the C=O bond also enhances the acidity of any adjacent C-H bonds. Due to the positive charge on carbon and the negative charge on oxygen, carbonyl groups are subject to additions and/or nucleophilic attacks. A variety of nucleophiles attack, breaking the carbon-oxygen double bond, and leading to addition-elimination reactions. Nucleophilic reactivity is often proportional to the basicity of the nucleophile, and as nucleophilicity increases, the stability within a carbonyl compound decreases. The pKa values of acetaldehyde and acetone are 16.7 and 19, respectively. Spectroscopy Infrared spectroscopy: the C=O double bond absorbs infrared light at wavenumbers between approximately 1600–1900 cm−1 (5263 nm to 6250 nm). The exact location of the absorption is well understood with respect to the geometry of the molecule. This absorption is known as the "carbonyl stretch" when displayed on an infrared absorption spectrum. In addition, the ultraviolet-visible spectrum of propanone in water shows a carbonyl absorption at 257 nm. Nuclear magnetic resonance: the C=O double bond exhibits different resonances depending on surrounding atoms, generally a downfield shift. The 13C NMR of a carbonyl carbon is in the range of 160–220 ppm. See also Carbon–oxygen bond Organic chemistry Functional group Bridging carbonyl Electrophilic addition References Further reading L.G. Wade, Jr. Organic Chemistry, 5th ed. Prentice Hall, 2002. The Frostburg State University Chemistry Department. Organic Chemistry Help (2000). Advanced Chemistry Development, Inc. IUPAC Nomenclature of Organic Chemistry (1997). 
William Reusch. VirtualText of Organic Chemistry (2004). Purdue Chemistry Department (retrieved Sep 2006). Includes water solubility data. William Reusch. (2004) Aldehydes and Ketones Retrieved 23 May 2005. ILPI. (2005) The MSDS Hyperglossary- Anhydride. Functional groups
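Since the wavelength range quoted above for the carbonyl stretch follows directly from the wavenumber range via λ[nm] = 10^7 / ν̃[cm^-1], a tiny conversion sketch (illustrative only) reproduces it:

def wavenumber_to_nm(wavenumber_cm1):
    # wavelength [nm] = 1e7 / wavenumber [cm^-1]
    return 1e7 / wavenumber_cm1

for wn in (1900, 1600):
    print(wn, "cm^-1 ->", round(wavenumber_to_nm(wn)), "nm")
# 1900 cm^-1 -> 5263 nm and 1600 cm^-1 -> 6250 nm, matching the range quoted above.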
Carbonyl group
[ "Chemistry" ]
897
[ "Functional groups" ]
158,826
https://en.wikipedia.org/wiki/Transect
A transect is a path along which one counts and records occurrences of the objects of study (e.g. plants). It requires an observer to move along a fixed path and to count occurrences along the path and, at the same time (in some procedures), obtain the distance of the object from the path. This results in an estimate of the area covered and an estimate of the way in which detectability increases from probability 0 (far from the path) towards 1 (near the path). Using the raw count and this probability function, one can arrive at an estimate of the actual density of objects. The estimation of the abundance of populations (such as terrestrial mammal species) can be achieved using a number of different types of transect methods, such as strip transects, line transects, belt transects, point transects, gradsects and curved line transects. See also – Method for estimating a species population size References External links Scientific observation Ecological techniques Environmental statistics Environmental Sampling Equipment
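As a minimal illustration of how a raw count and a detectability estimate combine into a density estimate, the sketch below applies the conventional line-transect formula D = n / (2·L·μ), assuming the effective strip half-width μ has already been estimated from the recorded perpendicular distances; the counts and dimensions are invented for the example.

def line_transect_density(n_detected, transect_length_m, effective_half_width_m):
    # Density D = n / (2 * L * mu): detections divided by the area effectively
    # surveyed on both sides of the line.
    surveyed_area_m2 = 2.0 * transect_length_m * effective_half_width_m
    return n_detected / surveyed_area_m2

d = line_transect_density(46, 2000.0, 3.0)  # 46 plants along 2 km with a 3 m effective half-width
print(round(d, 4), "per m^2 =", round(d * 10_000, 1), "per hectare")  # 0.0038 per m^2 = 38.3 per ha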
Transect
[ "Biology" ]
210
[ "Environmental Sampling Equipment", "Ecological techniques" ]
158,837
https://en.wikipedia.org/wiki/Cd%20%28command%29
The cd command, also known as chdir (change directory), is a command-line shell command used to change the current working directory in various operating systems. It can be used in shell scripts and batch files. Implementations The command has been implemented in operating systems such as Unix, DOS, IBM OS/2, MetaComCo TRIPOS, AmigaOS (where if a bare path is given, cd is implied), Microsoft Windows, ReactOS, and Linux. On MS-DOS, it is available in versions 2 and later. DR DOS 6.0 also includes an implementation of the cd and chdir commands. The command is also available in the open source MS-DOS emulator DOSBox and in the EFI shell. It is named in HP MPE/iX. The command is analogous to the Stratus OpenVOS command. The command is frequently built directly into a command-line interpreter. This is the case in most of the Unix shells (Bourne shell, tcsh, bash, etc.), cmd.exe on Microsoft Windows NT/2000+, Windows PowerShell on Windows 7+, and COMMAND.COM on DOS / Microsoft Windows 3.x–9x/ME. The system call that effects the command in most operating systems is chdir(), which is defined by POSIX. Command line shells on Windows usually use the Windows API to change the current working directory, whereas on Unix systems the shell calls the POSIX chdir() C function. This means that when the command is executed, no new process is created to migrate to the other directory, as is the case with other commands such as ls. Instead, the shell itself executes this command. This is because, when a new process is created, the child process inherits the directory in which the parent process was created; if cd ran as a separate child process, it could only change that child's own working directory, and the objective of the command would never be achieved. Windows PowerShell, Microsoft's object-oriented command line shell and scripting language, executes its cd command (an alias for a built-in cmdlet) within the shell's process. However, since PowerShell is based on the .NET Framework and has a different architecture than previous shells, all of PowerShell's cmdlets run in the shell's process. Of course, this is not true for legacy commands, which still run in a separate process. Usage A directory is a logical section of a file system used to hold files. Directories may also contain other directories. The command can be used to change into a subdirectory, move back into the parent directory, move all the way back to the root directory or move to any given directory. Consider the following subsection of a Unix filesystem, which shows a user's home directory (represented as ~) with a file, text.txt, and three subdirectories. If the user's current working directory is the home directory (~), then entering the command ls followed by cd games might produce the following transcript:
user@wikipedia:~$ ls
workreports games encyclopedia text.txt
user@wikipedia:~$ cd games
user@wikipedia:~/games$
The user is now in the "games" directory. A similar session in DOS (though the concept of a "home directory" may not apply, depending on the specific version) would look like this:
C:\> dir
workreports   <DIR>   Wed Oct  9th  9:01
games         <DIR>   Tue Oct  8th 14:32
encyclopedia  <DIR>   Mon Oct  1st 10:05
text     txt   1903   Thu Oct 10th 12:43
C:\> cd games
C:\games>
DOS maintains separate working directories for each lettered drive, and also has the concept of a current working drive. The command can be used to change the working directory of the working drive or another lettered drive. Typing the drive letter as a command on its own changes the working drive, e.g. D:
; alternatively, with the switch may be used to change the working drive and that drive's working directory in one step. Modern versions of Windows simulate this behaviour for backwards compatibility under CMD.EXE. Note that executing from the command line with no arguments has different effects in different operating systems. For example, if is executed without arguments in DOS, OS/2, or Windows, the current working directory is displayed (equivalent to Unix pwd). If is executed without arguments in Unix, the user is returned to the home directory. Executing the command within a script or batch file also has different effects in different operating systems. In DOS, the caller's current directory can be directly altered by the batch file's use of this command. In Unix, the caller's current directory is not altered by the script's invocation of the command. This is because in Unix, the script is usually executed within a subshell. Options Unix, Unix-like by itself or will always put the user in their home directory. will leave the user in the same directory they are currently in (i.e. the current directory won't change). This can be useful if the user's shell's internal code can't deal with the directory they are in being recreated; running will place their shell in the recreated directory. cd ~username will put the user in the username's home directory. (without a ) will put the user in a subdirectory; for example, if they are in , typing will put them in , while puts them in . will move the user up one directory. So, if they are , moves them to , while moves them to (i.e. up two levels). The user can use this indirection to access subdirectories too. So, from , they can use to go to will switch the user to the previous directory. For example, if they are in , and go to , they can type to go back to . The user can use this to toggle back and forth between two directories without pushd and popd. DOS, OS/2, Windows, ReactOS no attributes print the full path of the current directory. Print the final directory stack, just like dirs. Entries are wrapped before they reach the edge of the screen. entries are printed one per line, preceded by their stack positions. (DOS and Windows only) returns to the root dir. Consequently, command always takes the user to the named subdirectory on the root directory, regardless of where they are located when the command is issued. Interpreters other than an operating systems shell In the File Transfer Protocol, the respective command is spelled in the control stream, but is available as in most client command-line programs. Some clients also have the for changing the working directory locally. The numerical computing environments MATLAB and GNU Octave include a cd function with similar functionality. The command also pertains to command-line interpreters of various other application software. See also Directory structure pushd and popd chroot List of command-line interpreters References Further reading External links Windows XP > Command-line reference A-Z > Chdir (Cd) from Microsoft TechNet Internal DOS commands File system directories Inferno (operating system) commands IBM i Qshell commands MSX-DOS commands OS/2 commands ReactOS commands Windows administration Standard Unix programs Unix SUS2008 utilities
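To illustrate why cd is built into the shell rather than shipped as a separate program, the following sketch uses Python's os.chdir(), a thin wrapper around the POSIX chdir() call: a child process that changes its directory leaves the parent's working directory untouched, while a chdir() made in the shell's own process sticks. It assumes a python3 interpreter is on the PATH and is illustrative only.

import os
import subprocess
import tempfile

target = tempfile.gettempdir()
print("parent cwd before:", os.getcwd())

# A child process that calls chdir() only changes its own working directory.
subprocess.run(
    ["python3", "-c",
     "import os, sys; os.chdir(sys.argv[1]); print('child cwd:', os.getcwd())",
     target],
    check=True,
)
print("parent cwd after child exits:", os.getcwd())  # unchanged

# Within the same process, os.chdir() does take effect -- which is what a
# shell does internally when it runs cd.
os.chdir(target)
print("cwd after os.chdir():", os.getcwd())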
Cd (command)
[ "Technology" ]
1,505
[ "IBM i Qshell commands", "Standard Unix programs", "Windows commands", "Computing commands", "OS/2 commands", "ReactOS commands", "Inferno (operating system) commands", "MSX-DOS commands" ]
11,748,592
https://en.wikipedia.org/wiki/Idle%20reduction
Idle reduction describes technologies and practices that minimize the amount of time drivers idle their engines. Avoiding idling time has a multitude of benefits including: savings in fuel and maintenance costs, extending vehicle life, and reducing damaging emissions. An idling engine consumes only enough power to keep itself and its accessories running, therefore, producing no usable power to the drive train. For cargo ships, the need to run the ship's engines for power in port is eliminated by techniques collectively described as cold ironing. Idle reduction equipment is aimed at reducing the amount of energy wasted by idling trucks, rail locomotives or automobiles. When a vehicle's engine is not being used to move the vehicle, it can be shut off entirely — thereby conserving fuel and reducing emissions— while other functions like accessories and lighting are powered by an electrical source other than the vehicle's alternator. Each year, long-duration idling of truck and locomotive engines emits 11 million tons of carbon dioxide, 200,000 tons of oxides of nitrogen, and 5,000 tons of particulate matter into the air. There are other technologies that can reduce the use of fuel to heat or cool the cab when the vehicle is traditionally idling overnight. These can be battery or fuel powered but in either case, use less fuel, do no harm to the vehicle's engine, and reduce or eliminate emissions. Other vehicles, including police, military, service trucks, news vans, fire trucks, ambulances, and hydraulic bucket trucks can be equipped with mobile power idle reduction systems, similar to a rechargeable battery. The systems are usually installed in the trunk and can provide up to 10 hours of additional power for equipment operation without engine engagement. When used by law enforcement and the military, idle reduction technology increases mission capability by extending operational time and providing increased situational awareness and safety. Idle reduction is a rapidly growing trend in US federal, state, local and fleet policy. Idling contributes significantly to the transportation sector's portion of yearly greenhouse gas emissions. The US Department of Energy is putting forth a huge effort through the Energy Efficiency and Renewable Energy Program to increase public awareness about decreasing petroleum use; idle-reduction being one of the methods. The Alternative Fuels and Advanced Vehicles Data Center is a reliable resource for information regarding idle-reduction methods such as fuel-operated heaters, auxiliary power units and truck stop electrification. Background and problem In the public sector, idling is common. Police officers, public works employees, fire fighters, and EMTs who operate city fleet vehicles run them at idle to perform their duties which require them to operate equipment. The emissions generated from these tasks by cities all over the U.S. contribute to the fact that each year U.S. passenger cars, light trucks, medium-duty trucks, and heavy-duty vehicles consume more than 6 billion gallons of diesel fuel and gasoline — without even moving. As fuel prices continue to rise, a major challenge in fleet management is how to keep service vehicles on the road to serve the public while staying within budget. Reducing idle time also reduces emission of carbon dioxide, one of the prime factors in causing global warming. By some estimates, an automobile with a 3-liter engine burns 0.4 gallons of gas per hour of idling, and generates a little over a pound of CO² every 10 minutes. 
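Taking the per-hour figures above at face value (0.4 US gallons of fuel and roughly a pound of CO2 every ten minutes for a 3-liter engine), the sketch below estimates what a daily idling habit adds up to over a year. The fuel price and daily idling time are invented inputs for illustration, not figures from the sources cited here.

FUEL_PER_HOUR_GAL = 0.4  # gallons burned per idling hour (figure quoted above)
CO2_PER_HOUR_LB = 6.0    # roughly one pound of CO2 per 10 minutes

def annual_idling(minutes_per_day, fuel_price_per_gal, days=365):
    hours = minutes_per_day / 60 * days
    fuel_gal = hours * FUEL_PER_HOUR_GAL
    return {
        "hours": round(hours, 1),
        "fuel_gallons": round(fuel_gal, 1),
        "fuel_cost_usd": round(fuel_gal * fuel_price_per_gal, 2),
        "co2_pounds": round(hours * CO2_PER_HOUR_LB, 1),
    }

print(annual_idling(minutes_per_day=30, fuel_price_per_gal=3.50))
# about 183 hours, 73 gallons, $256 and 1,100 lb of CO2 per year with these assumptions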
Idle reduction is particularly significant for vehicles in heavy traffic and trucks at the estimated 5,000 truck stops in the US. Many hybrid electric vehicles employ idle reduction to achieve better fuel economy in traffic. America's fleet of around 500,000 long-haul trucks consumes over a billion gallons (3.8×109 l; 830 million imp gal) of diesel fuel per year. The trucking industry has analyzed the impact of idling on engines, both in terms of maintenance and engine wear costs. Long-duration idling causes more oil and oil filter deterioration and increases the need for more oil and filter changes. Similarly, the longer the idling time, the sooner the engine itself will need to be rebuilt. The trucking industry estimates that long duration idling costs the truck owner $1.13 per day, based on the need for more frequent oil changes and sooner overhaul costs. Services such as AireDock, IdleAire and Shorepower provide power at truck stops to resting truckers who would otherwise need to continue idling during mandatory breaks. Because the United States Department of Transportation mandates that truckers rest for 10 hours after driving for 11 hours, truckers might park at truck stops for several hours. Often they idle their engines during this rest time to provide their sleeper compartments with air conditioning or heating or to run electrical appliances such as refrigerators or televisions. The problem of anti-idling is most commonly associated with heavy duty diesel engines because they are the biggest contributors when idling. As an example of the need for idling an engine, school bus drivers on a cold morning may go out to their bus and turn it on to warm up the engine in order to provide direct heat to the cabin when they return to their bus to start their morning routes, which brings up two of the main reasons for idling, driver mentality and the need for passenger comfort. This idling period can be considered excessive, though excessive idling is defined and regulated differently in different parts of the country. In the United States Policies at the federal level are more focused towards research and development of technologies, economic incentives, and education. The Department of Energy (DOE) is sponsoring several corporate companies in the R&D of new anti-idling technologies with the hope that this technology will be installed and incorporated in the assembly line or possibly at the dealer as an option. The Environmental Protection Agency (EPA) also has many ways to promote idle reduction. The EPA established the SmartWay Transport Partnership that provides information about available anti-idling technologies, possible strategies for idle reduction, and resources for obtaining financing on anti-idling projects. The program also serves as an EnergyStar-like program with a label available to companies that commit “to improve the environmental performance of their freight delivery operations.” The EPA has a national campaign called the Clean School Bus Campaign which works to reduce diesel fuel consumption in school buses across the nation. Several regions were awarded millions of dollars through grant projects including idle-reduction pilot projects. Various states and localities have passed laws pertaining to idling. Some of the laws are more strict and stringent than others. Thirty-one states currently have some sort of existing regulations pertaining to anti-idling. Of these states, California has the most codes and regulations. 
The California Air Resources Board has enacted numerous laws that regulate idling in the state. For example, in Virginia, the excessive idling threshold is ten minutes, though, in many west coast states such as Hawaii and California, where there is a larger presence of greener policies in relation to fuel consumption, the thresholds are drastically smaller and may even have no idling tolerance at all. According to Hawaii Administrative Rules §11-60.1-34, no idling is permitted “while the motor vehicle is stationary at a loading zone, parking or servicing area, route terminal, or other off street areas” with a couple of exceptions. “Each year, long-duration idling of truck and locomotive engines consumes over of diesel fuel and emits 11 million tons of carbon dioxide, 200,000 tons of oxides of nitrogen, and 5,000 tons of particulate matter into the air.” At the local level, there are many municipalities that have enacted anti-idling regulations. New York is an example of states making their idling policies more strict. In early 2009, New York Mayor Michael Bloomberg signed legislation that reduced the amount of time non-emergency vehicles could idle when they are located near schools. The new legislation reduced the allowed idling time from three minutes to one minute. In addition, the new law authorized the Department of Parks and Recreation and the Department of Sanitation to enforce the new idling laws. Previously, only the police department and the Department of Environmental Protection had this authority. Civilians are also allowed to report violations under the new law. New York's Citizens Air Complaint Program allows citizens to report idling vehicles – citizens get a share of the collected fine. The New York Times reported in 2022 that the program led to a massive increase in complaints against idling vehicles. In 2017, the City of Palo Alto began considering a proposal to stop drivers from running engines when parked. Truckers argue for the need for idling to keep their cabins comfortable overnight at truck stops. Further complaints have come from the lack of concurrence among state and local idling laws. This disparity in laws requires truckers travelling across the country to be aware of the local idling laws in every place they visit. Even consistency between state and local laws has been a concern. Some truckers have expressed concern that some idling laws could prevent them from complying with other laws, For example, laws requiring truckers to get a certain amount of uninterrupted rest might be interfered with by anti-idling laws. The transportation blog uShip.com, Ship Happens states that “[anti-idling] laws fail to consider the truckers well-being and place drivers at risk of debilitating fines for noncompliance.” These fines could run as high as $25,000 in Connecticut for idling for more than three minutes. United Kingdom Unnecessary vehicle idling is an offence against the Road Traffic (Vehicle Emissions) (Fixed Penalty) (England) Regulations (Statutory Instrument 2002 No. 1808) and may incur fines The regulations apply in zones designated as Air quality management areas by local authorities. The Department for Environment, Food and Rural Affairs has published a list of local authorities with air quality management areas. There is similar legislation in Scotland and Wales, but it is unclear whether Northern Ireland has similar regulations. Fines are set at £20, but local authorities may decide to set them higher. However, enforcement is minimal. 
Europe In Europe, vehicles increasingly include a Start-stop system to prevent idling. Hong Kong Hong Kong introduced an anti-idling bill in 2010. Technologies Fuel-operated coolant heaters Fuel-operated coolant heaters reduce the need to run engines at idle to warm vehicles such as buses. Directly heating the coolant is more fuel-efficient than using the engine's waste heat, reducing fuel consumption and emissions. In general, coolant heaters burn 1/8 as much fuel as an idling engine would, simultaneously emitting 1/20 of the emissions and directing heat significantly faster to the passenger compartment. Auxiliary power units Auxiliary power units (APUs) are commonly used on semi-trucks to provide electric power to the cabin at times when the cabin or cargo need to be heated or cooled while the vehicle is not in motion for an extended period of time. This period of time is usually overnight, when the truck driver has parked at a truck stop for some rest. Instead of having to keep the engine idling all night just to maintain the temperature in the cabin, the APU can turn on and provide power. Most commonly, the APU will have its own cooling system, heating system, generator, and air conditioning compressor. Sometimes the APU will be integrated into those components of the semi itself. APUs are also commonly used in police cruisers as an alternative to idling. Since a significant amount of time is spent in the cruiser while stationary, idling becomes a major source of cost to police fleets, though, most police fleets have idling policies. The drawback of APUs on police cruisers is that they are normally kept in the trunk where they take up valuable space. Truck stop electrification Federal safety regulations developed by the Federal Motor Carrier Safety Administration, require that truckers must rest ten hours for every eleven hours of consecutive driving. As a result, drivers spend extended periods of time resting and sleeping inside the cabs of their trucks. To maintain comfort and amenities, most long haul truck drivers idle their engines for close to ten hours per day to power their heating systems and air conditioners, generate electricity for on-board appliances, charge their vehicle's batteries, and to warm their engines in colder weather. Given that trucks typically consume 0.8 gallons (3.03 L) of diesel fuel per hour of idling, between 900 and 1,400 gallons (3406 to 5300 L) of fuel are consumed each year per truck, resulting in significant greenhouse gas emissions. Truck-stop electrification (TSE) and auxiliary power unit technologies provide long-haul truckers with the ability to heat, cool, and power additional auxiliary devices at truck stops without requiring them to idle their engines. The United States Department of Transportation estimates there are approximately 5,000 truck stops on the U.S. highway system that provide overnight parking, restrooms, showers, stores, restaurants and fueling stations. The United States Department of Energy maintains a website that lists current TSE sites throughout the United States. As of October 2013, the website records 115 TSE stations throughout the country. Truck stop electrification allows a trucker to “plug-in” to power their on and off-board electrical needs. There are two types of truck stop electrification, on-board and off-board systems. On board TSE solutions allow trucker's the ability to recharge their batteries at truck stops via standard 120 Volt electrical outlets. 
Truckers can then utilize the truck's batteries to power appliances and provide heating and cooling to the truck cab. Typically, on-board TSE solutions require some vehicle modification. Off-board TSE solutions do not typically require any vehicle modifications, as they provide heating and air conditioning services via an overhead unit and hose that connects to the truck's window. In addition to heating and cooling, these connections can also offer standard electrical outlets, internet access, movies and satellite programming. Normally, private companies provide and regulate either system and can charge an hourly rate for services, typically around $1.00-$2.00 an hour. Both of these options can generate revenue for truck stop operators, and decrease operating expenses for truckers relative to the cost of diesel fuel. The cost of electricity to provide overnight power to trucks can save up to $3,240 of fuel that would normally be consumed by idling per parking space. Truck stop electrification can allow truck drivers to abide local idling regulations and reduce noise to neighboring establishments. The cost of implementing a single TSE site can vary greatly, depending on the type of technology that is employed. Installation costs for technology that provides external power to operate equipment on board a truck range from $4,500 to $8,500 per space, whereas the costs to provide a window based power unit (i.e. an off board apparatus) range from $10,000 to $20,000 per space. Costs for an individual truck operator to install an on-board system capable of utilizing shore power from a TSE space can cost up to $2,000. Idle management/control Idle management technologies have been developed as an upfitting solution to answer idling concerns. Similar to a start-stop system, idle management technologies can control the vehicle while in Park are Neutral, which allows for extensive control when the vehicle is in its primary state of issue—at idle. Some idle management technologies are so comprehensive, they are able to manage the engine's on/off ignition while retaining control of auxiliary functions, such as vehicle climate, anti-theft, operator security, and more, even when the engine is powered off. Idle reduction can also be achieved by more efficient control of stop lights and reducing congestion on roadways. States and municipalities that have employed data analytics to reduce bottlenecks and improve traffic flow—which reduces idling—include Austin, Texas and the state of Florida. See also Start-stop system References External links Idle reduction information — U.S. Department of Energy / EERE Idle-Free Corridors — United States Environmental Protection Agency www.makealeap.org — Lowering Emissions and particulates: an idle reduction educational site Energy conservation Vehicle emission controls Power control
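Using the approximate figures given in this section (0.8 gallons of diesel per idling hour, ten-hour overnight rest periods, and roughly $1–2 per hour for electrified parking), a back-of-the-envelope sketch compares overnight idling against plugging in. The diesel price, hourly rate and number of nights are invented assumptions for illustration only.

IDLE_GAL_PER_HOUR = 0.8  # long-haul idling consumption quoted above
REST_HOURS = 10          # mandated overnight rest period

def overnight_idling_cost(diesel_per_gal):
    return REST_HOURS * IDLE_GAL_PER_HOUR * diesel_per_gal

def overnight_tse_cost(rate_per_hour):
    return REST_HOURS * rate_per_hour

diesel_price, tse_rate, nights = 4.00, 1.50, 250  # invented inputs
saving = (overnight_idling_cost(diesel_price) - overnight_tse_cost(tse_rate)) * nights
print(f"${saving:,.0f} saved per year")  # $4,250 over 250 nights with these assumptions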
Idle reduction
[ "Physics" ]
3,289
[ "Power control", "Power (physics)", "Physical quantities" ]
11,748,713
https://en.wikipedia.org/wiki/Mechanical%20rectifier
A mechanical rectifier is a device for converting alternating current (AC) to direct current (DC) by means of mechanically operated switches. The best-known type is the commutator, which is an integral part of a DC dynamo, but before solid-state devices became available, independent mechanical rectifiers were used for certain applications. Before the invention of semiconductors, rectification at high currents involved serious losses. There were various vacuum/gas devices, such as the mercury arc rectifiers, thyratrons, ignitrons, and vacuum diodes. Solid-state technology was in its infancy, represented by copper oxide and selenium rectifiers. All of these gave excessive forward voltage drop at high currents. One answer was mechanically opening and closing contacts, if this could be done quickly and cleanly enough. Vibrator type This was the reverse of a vibrator inverter. An electromagnet, powered by DC through contacts that it operated (like a buzzer) or fed with AC, caused a spring to vibrate, and the vibrating spring operated change-over contacts that converted the AC to DC. This arrangement was only suitable for low-power applications, e.g. auto radios, and was also found in some motorcycle electrical systems, where it was combined with a voltage regulator. Motor-driven type This operated on the same principle as the vibrator type, but the change-over contacts were operated by a synchronous motor. It was suitable for high-power applications, e.g. electrolysis cells and electrostatic precipitators. Still rectifier A mechanical rectifier was patented in 1895 (US patent 547043) by William Joseph Still. The details are obscure, but it appears from the diagram to be similar to a third-brush dynamo. BTH rectifier The machine shown in the reference was designed by Read and Gimson et al. at British Thomson-Houston (BTH), Rugby, Warwickshire, England, in the early 1950s. It is a three-phase mechanical rectifier working at 220 volts and 15,000 amperes, and its application was the powering of huge banks of electrolysis cells. The central shaft was rotated by a synchronous motor, driving an eccentric with a throw of about 2 mm (0.077 inch). Push-rods from this operated the contacts. The timing was critical, and was adjusted by rotating the position of the eccentric on its shaft, and by sliding wedges between the eccentric and push-rods. Crucial to this system were the commutating reactors, inductors that ensured the contacts closed when the voltage across them was small, and opened when the current was small. Without these, contact wear would have been intolerably heavy. These were series inductors that operated for most of the cycle with saturated cores. When the current decreased below that for saturation, their inductances reduced the current considerably. Contact switching was timed to occur while their cores were unsaturated. In the USA, similar rectifiers were made by the I-T-E circuit breaker company. This machinery was undoubtedly successful; its efficiency was determined to be 97.25%. Contact life was never fully determined but considerably exceeded 2000 hours. However, the rapid development of the silicon diode made it ultimately redundant. References Power electronics Rectifiers
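To see why forward voltage drop was the deciding factor at currents of this order, a small sketch compares conduction loss (P = V_drop × I) for an assumed 1 V junction-style drop against an assumed 20 mV closed mechanical contact at the 15,000 A cited above; both drop values are rough illustrative guesses rather than measured figures.

CURRENT_A = 15_000  # current of the BTH electrolysis rectifier described above

def conduction_loss_w(forward_drop_v, current_a=CURRENT_A):
    # Conduction loss is simply the voltage drop times the current carried.
    return forward_drop_v * current_a

print(conduction_loss_w(1.0))   # 15000.0 W: roughly 15 kW for a ~1 V junction-style drop
print(conduction_loss_w(0.02))  # 300.0 W: roughly 0.3 kW across a ~20 mV closed contact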
Mechanical rectifier
[ "Engineering" ]
704
[ "Electronic engineering", "Power electronics" ]
11,749,910
https://en.wikipedia.org/wiki/Cheese
Cheese is a type of dairy product produced in a range of flavors, textures, and forms by coagulation of the milk protein casein. It comprises proteins and fat from milk (usually the milk of cows, buffalo, goats or sheep). During production, milk is usually acidified and either the enzymes of rennet or bacterial enzymes with similar activity are added to cause the casein to coagulate. The solid curds are then separated from the liquid whey and pressed into finished cheese. Some cheeses have aromatic molds on the rind, the outer layer, or throughout. Over a thousand types of cheese exist, produced in various countries. Their styles, textures and flavors depend on the origin of the milk (including the animal's diet), whether they have been pasteurised, the butterfat content, the bacteria and mold, the processing, and how long they have been aged. Herbs, spices, or wood smoke may be used as flavoring agents. Other added ingredients may include black pepper, garlic, chives or cranberries. A cheesemonger, or specialist seller of cheeses, may have expertise with selecting, purchasing, receiving, storing and ripening cheeses. Most cheeses are acidified by bacteria, which turn milk sugars into lactic acid; the addition of rennet completes the curdling. Vegetarian varieties of rennet are available; most are produced through fermentation by the fungus Mucor miehei, but others have been extracted from Cynara thistles. For a few cheeses, the milk is curdled by adding acids such as vinegar or lemon juice. Cheese is valued for its portability, long shelf life, and high content of fat, protein, calcium, and phosphorus. Cheese is more compact and has a longer shelf life than milk. Hard cheeses, such as Parmesan, last longer than soft cheeses, such as Brie or goat's milk cheese. The long storage life of some cheeses, especially when encased in a protective rind, allows selling when markets are favorable. Vacuum packaging of block-shaped cheeses and gas-flushing of plastic bags with mixtures of carbon dioxide and nitrogen are used for storage and mass distribution of cheeses in the 21st century, compared the paper and twine that was used in the 20th and 19th century. Etymology The word cheese comes from Latin , from which the modern word casein is derived. The earliest source is from the proto-Indo-European root *kwat-, which means "to ferment, become sour". That gave rise to or (in Old English) and (in Middle English). Similar words are shared by other West Germanic languages—West Frisian , Dutch , German , Old High German —all from the reconstructed West-Germanic form *kāsī, which in turn is an early borrowing from Latin. The Online Etymological Dictionary states that "cheese" derives from: The Online Etymological Dictionary states that the word is of: When the Romans began to make hard cheeses for their legionaries' supplies, a new word started to be used: , from , or "cheese shaped in a mold". It is from this word that the French , standard Italian , Catalan , Breton , and Occitan (or ) are derived. Of the Romance languages, Spanish, Portuguese, Romanian, Tuscan and some Southern Italian dialects use words derived from (, , , and for example). The word cheese is occasionally employed, as in Head cheese, to mean "shaped in a mold". History Origins Cheese is an ancient food whose origins predate recorded history. There is no conclusive evidence indicating where cheesemaking originated, whether in Europe, Central Asia or the Middle East. 
The earliest proposed dates for the origin of cheesemaking range from around 8000 BCE, when sheep were first domesticated. Because animal skins and inflated internal organs have provided storage vessels for a range of foodstuffs since ancient times, it is probable that the process of cheese making was discovered accidentally by storing milk in a container made from the stomach of an animal, resulting in the milk being turned to curd and whey by the rennet from the stomach. There is a legend—with variations—about the discovery of cheese by an Arab trader who used this method of storing milk. The earliest evidence of cheesemaking in the archaeological record dates back to 5500 BCE and is found in what is now Kuyavia, Poland, where strainers coated with milk-fat molecules have been found. The earliest evidence of cheesemaking in the Mediterranean dates back to 5200 BCE, on the coast of the Dalmatia region of Croatia. Cheesemaking may have begun independently of this by the pressing and salting of curdled milk to preserve it. Observation that the effect of making cheese in an animal stomach gave more solid and better-textured curds may have led to the deliberate addition of rennet. Early archeological evidence of Egyptian cheese has been found in Egyptian tomb murals, dating to about 2000 BCE. A 2018 scientific paper stated that cheese dating to approximately 1200 BCE (3200 years before present), was found in ancient Egyptian tombs. The earliest ever discovered preserved cheese was found on mummies in Xiaohe Cemetery in the Taklamakan Desert in Xinjiang, China, dating back as early as 1615 BCE. The earliest cheeses were likely quite sour and salty, similar in texture to rustic cottage cheese or feta, a crumbly, flavorful Greek cheese. Cheese produced in Europe, where climates are cooler than the Middle East, required less salt for preservation. With less salt and acidity, the cheese became a suitable environment for useful microbes and molds, giving aged cheeses their respective flavors. Ancient Greece and Rome Ancient Greek mythology credited Aristaeus with the discovery of cheese. Homer's Odyssey (8th century BCE) describes the monstrous Cyclops making and storing sheep's and goats' milk cheese (translation by Samuel Butler): Columella's De Re Rustica (c. 65 CE) details a cheesemaking process involving rennet coagulation, pressing of the curd, salting, and aging. According to Pliny the Elder, it had become a sophisticated enterprise by the time the Roman Empire came into being. Pliny the Elder also mentions in his writings Caseus Helveticus, a hard Sbrinz-like cheese produced by the Helvetii. Cheese was an everyday food and cheesemaking a mature art in the Roman empire. Pliny's Natural History (77  CE) devotes a chapter (XI, 97) to describing the diversity of cheeses enjoyed by Romans of the early Empire. He stated that the best cheeses came from the villages near Nîmes, but did not keep long and had to be eaten fresh. Cheeses of the Alps and Apennines were as remarkable for their variety then as now. A Ligurian cheese was noted for being made mostly from sheep's milk, and some cheeses produced nearby were stated to weigh as much as a thousand pounds each. Goats' milk cheese was a recent taste in Rome, improved over the "medicinal taste" of Gaul's similar cheeses by smoking. Of cheeses from overseas, Pliny preferred those of Bithynia in Asia Minor. Post-Roman Europe 1000, Anglo-Saxons in England named a village by the River Thames , meaning "Cheese farm". 
In 1022, it is mentioned that Vlach (Aromanian) shepherds from Thessaly and the Pindus mountains, in modern Greece, provided cheese for Constantinople. Many cheeses popular today were first recorded in the late Middle Ages or after. Cheeses such as Cheddar around 1500, Parmesan in 1597, Gouda in 1697, and Camembert in 1791 show post-Middle Ages dates. In 1546, The Proverbs of John Heywood claimed "the moon is made of a green cheese" (Greene may refer here not to the color, as many now think, but to being new or unaged). Variations on this sentiment were long repeated and NASA exploited this myth for an April Fools' Day spoof announcement in 2006. Modern era Until its modern spread along with European culture, cheese was nearly unheard of in east Asian cultures and in the pre-Columbian Americas and had only limited use in sub-Mediterranean Africa, mainly being widespread and popular only in Europe, the Middle East, the Indian subcontinent, and areas influenced by those cultures. But with the spread, first of European imperialism, and later of Euro-American culture and food, cheese has gradually become known and increasingly popular worldwide. The first factory for the industrial production of cheese opened in Switzerland in 1815, but large-scale production first found real success in the United States. Credit usually goes to Jesse Williams, a dairy farmer from Rome, New York, who in 1851 started making cheese in an assembly-line fashion using the milk from neighboring farms; this made cheddar cheese one of the first US industrial foods. Within decades, hundreds of such commercial dairy associations existed. The 1860s saw the beginnings of mass-produced rennet, and by the turn of the century scientists were producing pure microbial cultures. Before then, bacteria in cheesemaking had come from the environment or from recycling an earlier batch's whey; the pure cultures meant a more standardized cheese could be produced. Factory-made cheese overtook traditional cheesemaking in the World War II era, and factories have been the source of most cheese in America and Europe ever since. By 2012, cheese was one of the most shoplifted items from supermarkets worldwide. Production In 2021, world production of cheese from whole cow milk was 22.2 million tonnes, with the United States accounting for 28% of the total, followed by Germany, France, Italy and the Netherlands as secondary producers (table). As of 2021, the carbon footprint of a kilogram of cheese ranged from 6 to 12 kg of CO2eq, depending on the amount of milk used; accordingly, it is generally lower than beef or lamb, but higher than other foods. Consumption France, Iceland, Finland, Denmark and Germany were the highest consumers of cheese in 2014, averaging per person per annum. Processing Curdling A required step in cheesemaking is to separate the milk into solid curds and liquid whey. Usually this is done by acidifying (souring) the milk and adding rennet. The acidification can be accomplished directly by the addition of an acid, such as vinegar, in a few cases (paneer, queso fresco). More commonly starter bacteria are employed instead which convert milk sugars into lactic acid. The same bacteria (and the enzymes they produce) also play a large role in the eventual flavor of aged cheeses. Most cheeses are made with starter bacteria from the Lactococcus, Lactobacillus, or Streptococcus genera. 
Swiss starter cultures include Propionibacterium freudenreichii, which produces propionic acid and carbon dioxide gas bubbles during aging, giving Emmental cheese its holes or eyes. Some fresh cheeses are curdled only by acidity, but most cheeses also use rennet. Rennet sets the cheese into a strong and rubbery gel compared to the fragile curds produced by acidic coagulation alone. It also allows curdling at a lower acidity—important because flavor-making bacteria are inhibited in high-acidity environments. In general, softer, smaller, fresher cheeses are curdled with a greater proportion of acid to rennet than harder, larger, longer-aged varieties. While rennet was traditionally produced via extraction from the inner mucosa of the fourth stomach chamber of slaughtered young, unweaned calves, most rennet used today in cheesemaking is produced recombinantly. The majority of the applied chymosin is retained in the whey and, at most, may be present in cheese in trace quantities. In ripe cheese, the type and provenance of chymosin used in production cannot be determined. Curd processing At this point, the cheese has set into a very moist gel. Some soft cheeses are now essentially complete: they are drained, salted, and packaged. For most of the rest, the curd is cut into small cubes. This allows water to drain from the individual pieces of curd. Some hard cheeses are then heated to temperatures in the range of . This forces more whey from the cut curd. It also changes the taste of the finished cheese, affecting both the bacterial culture and the milk chemistry. Cheeses that are heated to the higher temperatures are usually made with thermophilic starter bacteria that survive this step—either Lactobacilli or Streptococci. Salt has roles in cheese besides adding a salty flavor. It preserves cheese from spoiling, draws moisture from the curd, and firms cheese's texture in an interaction with its proteins. Some cheeses are salted from the outside with dry salt or brine washes. Most cheeses have the salt mixed directly into the curds. Other techniques influence a cheese's texture and flavor. Some examples are: Stretching: (Mozzarella, Provolone) the curd is stretched and kneaded in hot water, developing a stringy, fibrous body. Cheddaring: (Cheddar, other English cheeses) the cut curd is repeatedly piled up, pushing more moisture away. The curd is also mixed (or milled) for a long time, taking the sharp edges off the cut curd pieces and influencing the final product's texture. Washing: (Edam, Gouda, Colby) the curd is washed in warm water, lowering its acidity and making for a milder-tasting cheese. Most cheeses achieve their final shape when the curds are pressed into a mold or form. The harder the cheese, the more pressure is applied. The pressure drives out moisture—the molds are designed to allow water to escape—and unifies the curds into a single solid body. Ripening A newborn cheese is usually salty yet bland in flavor and, for harder varieties, rubbery in texture. These qualities are sometimes enjoyed—cheese curds are eaten on their own—but normally cheeses are left to rest under controlled conditions. This aging period (also called ripening, or, from the French, affinage) lasts from a few days to several years. As a cheese ages, microbes and enzymes transform texture and intensify flavor. This transformation is largely a result of the breakdown of casein proteins and milkfat into a complex mix of amino acids, amines, and fatty acids. 
Some cheeses have additional bacteria or molds intentionally introduced before or during aging. In traditional cheesemaking, these microbes might already be present in the aging room; they are allowed to settle and grow on the stored cheeses. More often today, prepared cultures are used, giving more consistent results and putting fewer constraints on the environment where the cheese ages. These cheeses include soft-ripened cheeses such as Brie and Camembert, blue cheeses such as Roquefort, Stilton, and Gorgonzola, and rind-washed cheeses such as Limburger. Types There are many types of cheese, with around 500 different varieties recognized by the International Dairy Federation, more than 400 identified by Walter and Hargrove, more than 500 by Burkhalter, and more than 1,000 by Sandine and Elliker. The varieties may be grouped or classified into types according to criteria such as length of ageing, texture, methods of making, fat content, animal milk, country or region of origin, etc.—with these criteria either being used singly or in combination, but with no single method being universally used. The method most commonly and traditionally used is based on moisture content, which is then further discriminated by fat content and curing or ripening methods. Some attempts have been made to rationalise the classification of cheese—a scheme proposed by Pieter Walstra uses the primary and secondary starter combined with moisture content, while Walter and Hargrove suggested classifying by production method, which yields 18 types that are then further grouped by moisture content. The British Cheese Board once claimed that Britain has approximately 700 distinct local cheeses; France and Italy have perhaps 400 each (a French proverb holds there is a different French cheese for every day of the year, and Charles de Gaulle once asked "how can you govern a country in which there are 246 kinds of cheese?"). Cooking and eating At refrigerator temperatures, the fat in a piece of cheese is as hard as unsoftened butter, and its protein structure is stiff as well. Flavor and odor compounds are less easily liberated when cold. For improvements in flavor and texture, it is widely advised that cheeses be allowed to warm up to room temperature before eating. If the cheese is further warmed, to , the fats will begin to "sweat out" as they go beyond soft to fully liquid. Above room temperatures, most hard cheeses melt. Rennet-curdled cheeses have a gel-like protein matrix that is broken down by heat. When enough protein bonds are broken, the cheese itself turns from a solid to a viscous liquid. Soft, high-moisture cheeses will melt at around , while hard, low-moisture cheeses such as Parmesan remain solid until they reach about . Acid-set cheeses, including halloumi, paneer, some whey cheeses and many varieties of fresh goat cheese, have a protein structure that remains intact at high temperatures. When cooked, these cheeses just get firmer as water evaporates. Some cheeses, like raclette, melt smoothly; many tend to become stringy or suffer from a separation of their fats. Many of these can be coaxed into melting smoothly in the presence of acids or starch. Fondue, with wine providing the acidity, is a good example of a smoothly melted cheese dish. Elastic stringiness is a quality that is sometimes enjoyed in dishes including pizza and Welsh rarebit. Even a melted cheese eventually turns solid again, after enough moisture is cooked off. 
The saying "you can't melt cheese twice" (meaning "some things can only be done once") refers to the fact that oils leach out during the first melting and are gone, leaving the non-meltable solids behind. As its temperature continues to rise, cheese will brown and eventually burn. Browned, partially burned cheese has a particular distinct flavor of its own and is frequently used in cooking (e.g., sprinkling atop items before baking them). Cheeseboard A cheeseboard (or cheese course) may be served at the end of a meal before or following dessert, or replacing the last course. The British tradition is to have cheese after dessert, accompanied by sweet wines like Port. In France, cheese is consumed before dessert, with robust red wine. A cheeseboard typically has contrasting cheeses with accompaniments, such as crackers, biscuits, grapes, nuts, celery or chutney. A cheeseboard typically contains four to six cheeses, for example: mature Cheddar or Comté (hard: cow's milk cheeses); Brie or Camembert (soft: cow's milk); a blue cheese such as Stilton (hard: cow's milk), Roquefort (medium: ewe's milk) or Bleu d'Auvergne (medium-soft cow's milk); and a soft/medium-soft goat's cheese (e.g. Sainte-Maure de Touraine, Pantysgawn, Crottin de Chavignol). A cheeseboard long was used to feature the variety of cheeses manufactured in Wisconsin, where the state legislature recognizes a "cheesehead" hat as a state symbol. Nutrition and health The nutritional value of cheese varies widely. Cottage cheese may consist of 4% fat and 11% protein while some whey cheeses are 15% fat and 11% protein, and triple cream cheeses can contain 36% fat and 7% protein. In general, cheese is a rich source (20% or more of the Daily Value, DV) of calcium, protein, phosphorus, sodium and saturated fat. A 28-gram (one ounce) serving of cheddar cheese contains about of protein and 202 milligrams of calcium. Nutritionally, cheese is essentially concentrated milk, but altered by the culturing and aging processes: it takes about of milk to provide that much protein, and to equal the calcium, though values for water-soluble vitamins and minerals can vary widely. Cardiovascular disease National health organizations, such as the American Heart Association, Association of UK Dietitians, British National Health Service, and Mayo Clinic, among others, recommend that cheese consumption be minimized, replaced in snacks and meals by plant foods, or restricted to low-fat cheeses to reduce caloric intake and blood levels of LDL fat, which is a risk factor for cardiovascular diseases. Pasteurization A number of food safety agencies around the world have warned of the risks of raw-milk cheeses. The U.S. Food and Drug Administration states that soft raw-milk cheeses can cause "serious infectious diseases including listeriosis, brucellosis, salmonellosis and tuberculosis". It is U.S. law since 1944 that all raw-milk cheeses (including imports since 1951) must be aged at least 60 days. Australia has a wide ban on raw-milk cheeses as well, though in recent years exceptions have been made for Swiss Gruyère, Emmental and Sbrinz, and for French Roquefort. There is a trend for cheeses to be pasteurized even when not required by law. Pregnant women may face an additional risk from cheese; the U.S. Centers for Disease Control has warned pregnant women against eating soft-ripened cheeses and blue-veined cheeses, due to the listeria risk, which can cause miscarriage or harm the fetus. 
Cultural attitudes Among the few cheeses in Southeast and East Asian cuisines is paneer, a fresh acid-set cheese. In Nepal, the Dairy Development Corporation commercially manufactures cheese made from yak milk and a hard cheese made from either cow or yak milk known as chhurpi. Bhutan produces a similar cheese called Datshi, which is a staple in most Bhutanese curries. The national dish of Bhutan, ema datshi, is made from homemade yak or mare milk cheese and hot peppers. In Yunnan, China, several ethnic minority groups produce Rushan and Rubing from cow's milk. Cheese consumption may be increasing in China, with annual sales doubling from 1996 to 2003 (to a still small 30 million U.S. dollars a year). Strict followers of the dietary laws of Islam and Judaism must avoid cheeses made with rennet from animals not slaughtered in accordance with halal or kosher laws, respectively. Rennet derived from animal slaughter, and thus cheese made with animal-derived rennet, is not vegetarian. Most widely available vegetarian cheeses are made using rennet produced by fermentation of the fungus Mucor miehei. Vegans and other dairy-avoiding vegetarians do not eat conventional cheese, but vegetable-based cheese substitutes (soy or almond) are available. Odorous cheeses Even in cultures with long cheese traditions, consumers may perceive some cheeses that are especially pungent-smelling, or mold-bearing varieties such as Limburger or Roquefort, as unpalatable. Such cheeses are an acquired taste because they are processed using molds or microbiological cultures, allowing odor and flavor molecules to resemble those in rotten foods. One author stated: "An aversion to the odor of decay has the obvious biological value of steering us away from possible food poisoning, so it is no wonder that an animal food that gives off whiffs of shoes and soil and the stable takes some getting used to". Effect on sleep There is some support from studies that dairy products can help with insomnia. Scientists have debated how cheese might affect sleep. A folk belief that cheese eaten close to bedtime can cause nightmares may have arisen from the Charles Dickens novella A Christmas Carol, in which Ebenezer Scrooge attributes his visions of Jacob Marley to the cheese he ate. This belief can also be found in folklore that predates this story. The theory has been disproven multiple times, although cheese eaten at night may cause vivid dreams or otherwise disrupt sleep due to its high saturated fat content, according to studies by the British Cheese Board. Other studies indicate it may actually make people dream less. See also Dairy industry Dutch cheese markets List of cheese dishes List of cheeses List of dairy products List of microorganisms used in food and beverage preparation Sheep milk cheese References Further reading External links Cheese.com – includes an extensive database of different types of cheese. Classification of cheese – why is one cheese type different from another? Cheese Ancient dishes Condiments Dairy products Articles containing video clips Types of food Fermented dairy products Fermented foods
Cheese
[ "Biology" ]
5,193
[ "Fermented foods", "Biotechnology products" ]
11,750,741
https://en.wikipedia.org/wiki/List%20of%20materials-testing%20resources
Materials testing is used to assess product quality, functionality, safety, reliability and toxicity of both materials and electronic devices. Some applications of materials testing include defect detection, failure analysis, material development, basic materials science research, and the verification of material properties for application trials. This is a list of organizations and companies that publish materials testing standards or offer materials testing laboratory services. International organizations These organizations create materials testing standards or conduct active research in the fields of materials analysis and reliability testing. American Association of Textile Chemists and Colorists (AATCC) American National Standards Institute (ANSI) American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) American Society of Mechanical Engineers (ASME) ASTM International Federal Institute for Materials Research and Testing (German: Bundesanstalt für Materialforschung und -prüfung (BAM)) Instron International Organization for Standardization (ISO) MTS Systems Corporation Nadcap National Physical Laboratory (United Kingdom) Society of Automotive Engineers (SAE) Zwick Roell Group Global research laboratories These organizations provide materials testing laboratory services. FEI Company Lucideon SEMATECH See also Characterization (materials science) List of materials analysis methods References Tests Materials testing Engineering-related lists
List of materials-testing resources
[ "Materials_science", "Engineering" ]
252
[ "Materials testing", "Materials science" ]
11,750,751
https://en.wikipedia.org/wiki/.hack
.hack (pronounced "Dot Hack") is a Japanese multimedia franchise that encompasses two projects: Project .hack and .hack Conglomerate. They were primarily created and developed by CyberConnect2, and published by Bandai Namco Entertainment. The series features an alternative-history setting at the turn of the new millennium, concerning the emergence of a new version of the internet following a major global computer network disaster in the year 2005, and the mysterious events surrounding the wildly popular fictional massively multiplayer online role-playing game The World. Project .hack Project .hack was the first project of the .hack series. It launched in 2002 with the anime series .hack//Sign in April and the PlayStation 2 game .hack//Infection in June. Project developers included Koichi Mashimo (Bee Train), Kazunori Itō (Catfish) and Yoshiyuki Sadamoto (Gainax). Since then, Project .hack has spanned television, video games, manga and novels. It centers mainly on the events and affairs of the prime installment of The World. The franchise expanded internationally when Bandai released .hack//Infection in 2003 and .hack//Sign received an English dub, which was broadcast on Cartoon Network in the same year. Games .hack, a series of four PlayStation 2 games that follow the story of the .hackers, Kite and BlackRose, and their attempts to find out what caused the sudden coma of Kite's friend, Orca, and BlackRose's brother, Kazu. The volumes included .hack//Infection, .hack//Mutation, .hack//Outbreak and .hack//Quarantine. .hack//frägment, the first .hack massively multiplayer online role-playing game. It was released only in Japan; the online servers ran from November 23, 2005 to January 18, 2007. .hack//Enemy, a collectible card game created by Decipher Inc. based on the .hack series. It was discontinued after running five separate expansions between 2003 and 2005. Anime .hack//Sign is an anime television series directed by Kōichi Mashimo and produced by studio Bee Train and Bandai Visual. It consists of twenty-six original episodes and three additional ones, released on DVD as original video animations. The series focuses on a Wavemaster (magic user) named Tsukasa, a player character in the virtual reality game. He wakes up to find himself in a dungeon in The World, but he suffers amnesia as he wonders where he is and how he got there. The situation gets worse when he discovers he cannot log out and is trapped in the game. Tsukasa embarks with other players on a quest to find the truth behind the abnormal situation. The series is influenced by psychological and sociological subjects, such as anxiety, escapism and interpersonal relationships. The series premiered in Japan on TV Tokyo between April 4, 2002 and September 25, 2002. It was later broadcast across East Asia, Southeast Asia, South Asia, and Latin America by the anime television network Animax, and across the United States, Nigeria, Canada, and the United Kingdom by Cartoon Network, YTV, and AnimeCentral (English and Japanese), respectively. It is distributed across North America by Bandai Entertainment. .hack//Legend of the Twilight is a miniseries adaptation of the manga series written by Tatsuya Hamazaki and drawn by Rei Izumi. The series was directed by Koichi Mashimo and Koji Sawai, and produced by Bee Train. 
Set in a fictional MMORPG, The World, the series focuses on twins Rena and Shugo, who receive chibi avatars in the design of the legendary .hackers known as Kite and BlackRose. After Shugo is given the Twilight Bracelet by a mysterious girl, the two embark on a quest to find Aura and unravel the mystery of the Twilight Bracelet. The anime series features many of the same characters as the manga version, but with an alternative storyline. It was localized as .hack//Dusk, among other names, in fan translations prior to the official English release. .hack//Liminality is a set of four DVD OVAs included with the .hack video game series for the PlayStation 2. Liminality is focused on the real world as opposed to the games' MMORPG The World. It is separated into four volumes, each of which was released with its corresponding game. The initial episode is 45 minutes long and each subsequent episode is 30 minutes long. The video series was directed by Koichi Mashimo, and written by Kazunori Itō with music by Yuki Kajiura. Primary animation production was handled by Mashimo's studio Bee Train, which collaborated on the four games and also handled major production on .hack//Sign. Liminality follows the story of Mai Minase, Yuki Aihara, Kyoko Tohno, and ex-CyberConnect employee Junichiro Tokuoka as they attempt to find out why players are falling into comas when playing in The World. .hack//Gift, a self-deprecating, tongue-in-cheek OVA that was created as a "gift" for those who had bought and completed all four .hack video games. It was released under Project .hack. In Japan, it was available when the Data Flag on the memory card file in .hack//Quarantine was present, whereas the American version included Gift on the fourth Liminality DVD. It is predominantly a comedy that makes fun of everything that developed throughout the series, even the franchise's own shortcomings. Character designs are deliberately simplistic. Novels .hack//AI buster, a novel released under Project .hack in 2002. It tells the story of Albireo and a prototype of the ultimate AI, Lycoris, and of how Orca and Balmung defeated "The One Sin" and became the Descendants of Fianna. .hack//AI buster 2, a collection of stories released under Project .hack. It involves the characters of AI Buster and Legend of the Twilight Bracelet: ".hack//2nd Character", ".hack//Wotan's Spear", ".hack//Kamui", ".hack//Rumor" and ".hack//Firefly". "Rumor" was previously released with the Rena Special Pack in Japan. .hack//Another Birth, a novel series released under Project .hack. It retells the story of the .hack video games from BlackRose's point of view. .hack//Zero, a novel series released under Project .hack. It tells the story of a Long Arm named Carl, of what happened to Sora after he was trapped in The World by Morganna, and of Tsukasa's real life after being able to log out from The World. .hack//Epitaph of Twilight, a novel series telling the story of Harald Hoerwick's niece, Lara Hoerwick, who finds herself trapped in an early version of The World. Manga .hack//Legend of the Twilight, a manga series released under Project .hack. It tells the story of two player characters Shugo and Rena, as they win a mysterious contest that earns them chibi character models of the legendary .hackers Kite & BlackRose. .hack Conglomerate .hack Conglomerate is the current project of .hack by CyberConnect2 and various other companies and successor to Project .hack. 
The companies include Victor Entertainment, Nippon Cultural Broadcasting, Bandai, TV Tokyo, Bee Train, and Kadokawa Shoten. It encompasses a series of three PlayStation 2 games called .hack//G.U., an anime series called .hack//Roots, prose, and manga. .hack Conglomerate focuses on times and installments after the original The World MMORPG. Games .hack//G.U. is a series of three video games (Vol. 1 Rebirth, Vol. 2 Reminisce, and Vol. 3 Redemption) released for the .hack Conglomerate project. Taking place in the installment of The World R:2 in the year 2017, the series focuses on the player Haseo's search for a cure after his friend was attacked by a player known as Tri-edge, which led to his eventual involvement with Project G.U., and the mysterious anomalies called AIDA that plague The World R:2. A remastered collection was released on November 3, 2017 for the PlayStation 4 and PC that included all three previous volumes and added a new fourth volume called Reconnection. .hack//Link, a PSP game released under the .hack Conglomerate project. It was claimed to be the last game in the series; the game centers on a youth named Tokio in the year 2020, who is given a free copy of The World R:X by the popular but mysterious new classmate Saika Amagi. It includes unplayable characters from the .hack and .hack//G.U. video games. .hack//Versus, a PS3 game released under the .hack Conglomerate project. The game is the first .hack fighting game and is bundled with the film .hack//The Movie. .hack//Guilty Dragon, a card-based mobile game for Android and iOS that was exclusive to Japan. Its services ran from October 15, 2012 to March 23, 2016. .hack//G.U. The Card Battle is a trading card game similar to that of .hack//Enemy released under the .hack Conglomerate project. Unlike .hack//Enemy, the game was made by the original creators of .hack//G.U.. There are two sets of rules, one based on the mini game in the G.U. series, Crimson VS, and one specifically designed for the trading card game. This game won the Origins Award for Best Trading Card Game of 2003. New World Vol. 1: Maiden of Silver Tears, an Android & iOS game released under the .hack Conglomerate project. It was a Japan-exclusive mobile game that served as a reboot of the franchise. Services ran from January 8, 2016 to December 20, 2016. Anime .hack//Roots is an anime series released under the .hack Conglomerate project. It follows Haseo and his joining (and subsequent exploits with) the Twilight Brigade guild. It also shows his rise to power and how he becomes known as "The Terror of Death". Towards the end of the series, we see the start of .hack//G.U. This series is the last in the .hack anime series to be licensed by Bandai Entertainment. .hack//G.U. Trilogy, a CGI movie adaptation of the .hack//G.U. video games released under the .hack Conglomerate project. .hack//G.U. Returner, a short follow-up OVA and the conclusion to .hack//Roots released under the .hack Conglomerate project. It tells the story about the characters of .hack//G.U. in one last adventure. .hack//Quantum, a three-part OVA series from Kinema Citrus and the first in the anime series of .hack to be licensed by Funimation. .hack//The Movie, a CGI movie, announced on August 23, 2011. On January 21, 2012, it was launched in theaters throughout Japan. The movie takes place in the year 2024, where a reboot of The World under the name FORCE:ERA is released to a new generation of players. 
Thanatos Report, an OVA included in .hack//Versus that is unlocked after finishing Story Mode. Novels .hack//Cell, a novel series released under the .hack Conglomerate project, written by Ryo Suzukaze. .hack//CELL takes place at the same time as .hack//Roots. The main premise of the story covers the happenings that Midori and Adamas witness and experience in The World R:2, an extremely popular MMORPG that is a new version of the original game, The World. Midori meets numerous characters from .hack//Roots (most notably Haseo) and .hack//G.U. (such as Silabus and Gaspard). The main plot centers around Midori selling herself out to would-be PKers, and some real-world events that center around the girl who also bears the name Midori (Midori Shimomura) who is in a coma. It is later revealed that Midori is a sentient PC, a result of the "virtual cell" that was taken from Midori Shimomura's blood. After Midori Shimomura awakens from her coma, she enters The World R:2 with a PC identical to Midori. Tokyopop obtained the rights to .hack//CELL, and it was released on March 2, 2010. .hack//G.U., a novel series adaptation of the three .hack//G.U. video games released under the .hack Conglomerate project. .hack//bullet, a web novel that follows Flugel after the events of .hack//Link. Manga .hack//4koma, a yonkoma manga series; most of its strips are gags and parodies centering on the main characters of the original .hack video game series and the .hack//G.U. video game series. .hack//Alcor, a manga series released under the .hack Conglomerate project. It focuses on a girl called Nanase, who appears to be quite fond of Silabus, as well as Alkaid during her days as empress of the Demon Palace. .hack//GnU, a humorous manga series released under the .hack Conglomerate project. It revolves around a male Blade Brandier called Raid and the seventh division of the Moon Tree guild. .hack//G.U.+, a manga adaptation series loosely based on the three .hack//G.U. video games, released under the .hack Conglomerate project. .hack//XXXX (read as "X-Fourth"), a manga adaptation series released under the .hack Conglomerate project. The manga is loosely based on the four original .hack video games. .hack//Link, manga series released under the .hack Conglomerate project. It occurs three years after the end of .hack//G.U. in a new version of The World called The World R:X. It focuses on a player named Tokio and a mysterious exchange student named Saika. Other appearances A few characters from the franchise appear in the Nintendo 3DS games Project X Zone and Project X Zone 2. References External links .hack// - Official (Worldwide) .hack// - Official (Worldwide) .hack// - Official Project .hack// - Official .hack// Conglomerate - Official .hack//Trilogy - Official Bandai Namco Entertainment franchises Hack Fictional computers Massively multiplayer online role-playing games in fiction Fiction about artificial intelligence Fiction about sentient objects 2001 introductions
.hack
[ "Technology" ]
3,139
[ "Fictional computers", "Computers" ]
11,750,971
https://en.wikipedia.org/wiki/Geoportal
A geoportal is a type of web portal used to find and access geographic information (geospatial information) and associated geographic services (display, editing, analysis, etc.) via the Internet. Geoportals are important for effective use of geographic information systems (GIS) and a key element of a spatial data infrastructure (SDI). Geographic information providers, including government agencies and commercial sources, use geoportals to publish descriptions (geospatial metadata) of their geographic information. Geographic information consumers, professional or casual, use geoportals to search and access the information they need. Thus geoportals serve an increasingly important role in the sharing of geographic information and can help avoid duplicated effort, inconsistencies, delays, confusion, and wasted resources. Background The U.S. National Spatial Data Infrastructure (NSDI), started in 1994 (see OMB Circular A-16), is considered the earliest geoportal concept. The U.S. Federal Geographic Data Committee (FGDC) coordinated development of the Federal Geographic Data Clearinghouse (or NSDI Clearinghouse Network), the first large geoportal. It has many distributed catalogs that can be searched via a client interface. First released in 2003, the Geospatial One-Stop (GOS) geoportal was developed as part of a U.S. e-government initiative. Unlike the NSDI Clearinghouse Network, GOS was built around a centralized metadata catalog database, with an architecture that links users to data providers through a Web-based geoportal. The user of GOS may employ a simple Web browser (thin client) or may interface directly with a GIS (thick client). In September 2011, GOS was retired and the content it included by then became part of the broader open data site (Geo.)Data.gov. At the same time, the United States federal government launched the Geospatial Platform, which represents a shift from focusing on cataloging references to resources, to providing shared web services for nationally significant datasets, APIs for developers, and end-user applications (built on those web services and APIs). More recently, there has been a proliferation of geoportals for sharing of geographic information based on region or theme. Examples include the INSPIRE geoportal (Infrastructure for Spatial Information in the European Community, established in 2007), the NatCarb geoportal, which provides geographic information concerning carbon sequestration in the United States, and UNSDI, the United Nations Spatial Data Infrastructure. Modern web-based geoportals include direct access to raw data in multiple formats, complete metadata, online visualization tools so users can create maps with data in the portal, automated provenance linkages across users, datasets and created maps, commenting mechanisms to discuss data quality and interpretation, and sharing or exporting created maps in various formats. Open portals allow user contribution of datasets as well. Geoportals also form a key component of commercial cloud-based GIS platforms, providing a library of geographic data that users can employ with online GIS tools or desktop GIS software. Google Earth Engine is a cloud-based image processing platform that includes a portal to several petabytes of satellite imagery. Esri's ArcGIS Online, with its Living Atlas Geoportal, provides a large volume of data covering a wide variety of topics. Esri also sells Portal for ArcGIS as part of its ArcGIS Enterprise server software, which enables institutions to create their own geoportals. 
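Many geoportals expose their metadata catalogs through standard interfaces such as the OGC Catalogue Service for the Web (CSW). The following is a minimal, illustrative Python sketch of searching such a catalog with the OWSLib library; the endpoint URL and the search term are hypothetical placeholders rather than references to any specific geoportal named above.

```python
# Illustrative sketch: free-text search of a geoportal metadata catalog over
# OGC CSW using OWSLib (pip install OWSLib). The endpoint and search term
# below are hypothetical placeholders.
from owslib.csw import CatalogueServiceWeb
from owslib.fes import PropertyIsLike

csw = CatalogueServiceWeb("https://example.org/geoportal/csw")  # hypothetical endpoint

# Search the catalog's full-text index for records mentioning "land cover".
query = PropertyIsLike("csw:AnyText", "%land cover%")
csw.getrecords2(constraints=[query], maxrecords=10)

for identifier, record in csw.records.items():
    # Each record is metadata describing a dataset or service hosted elsewhere.
    print(record.title, "-", (record.abstract or "")[:80])
```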
See also Georeference List of GIS data sources National Mapping Agency#List of national mapping agencies Spatial Data Infrastructure References Sources Fu, P., and J. Sun. 2010. Web GIS: Principles and Applications. ESRI Press. Redlands, CA. . Goodchild, M.F., P. Fu, and P.M. Rich. 2007. Geographic information sharing: the case of the Geospatial One-Stop portal. Annals of the Association of American Geographers 97(2):250-266. Maguire, D.J., and P.A. Longley. 2005. The emergence of geoportals and their role in spatial data infrastructures. Computers, Environment and Urban Systems 29: 3-14. Tang, W. and Selwood, J. 2005. Spatial Portals: Gateways to Spatial Information. ESRI Press, Redlands, CA. Geographic data and information Web portals
Geoportal
[ "Technology" ]
921
[ "Geographic data and information", "Data" ]
11,751,094
https://en.wikipedia.org/wiki/Binary%20constraint
A binary constraint, in mathematical optimization, is a constraint that involves exactly two variables. For example, consider the n-queens problem, where the goal is to place n chess queens on an n-by-n chessboard such that none of the queens can attack each other (horizontally, vertically, or diagonally). The formal set of constraints is therefore "Queen 1 can't attack Queen 2", "Queen 1 can't attack Queen 3", and so on between all pairs of queens. Each constraint in this problem is binary, in that it only considers the placement of two individual queens. Linear programs in which all constraints are binary can be solved in strongly polynomial time, a result that is not known to be true for more general linear programs. References Mathematical optimization Constraint programming
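As an illustration (not part of the article), the following Python sketch encodes the n-queens constraints explicitly as binary constraints: queens are placed one per column, each queen has a row variable, and there is one constraint for every pair of queens.

```python
from itertools import combinations

def n_queens_binary_constraints(n):
    """Build one binary constraint per pair of queens.

    Queens are placed one per column; rows[i] is the row of the queen in
    column i. Each constraint inspects exactly two of these variables.
    """
    def no_attack(i, j):
        # Queens i and j must not share a row or a diagonal
        # (their columns already differ, since there is one queen per column).
        return lambda rows: (rows[i] != rows[j] and
                             abs(rows[i] - rows[j]) != abs(i - j))
    return [no_attack(i, j) for i, j in combinations(range(n), 2)]

def satisfies_all(rows, constraints):
    return all(constraint(rows) for constraint in constraints)

constraints = n_queens_binary_constraints(4)
print(satisfies_all([1, 3, 0, 2], constraints))  # True: a valid 4-queens placement
print(satisfies_all([0, 1, 2, 3], constraints))  # False: queens share diagonals
```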
Binary constraint
[ "Mathematics" ]
161
[ "Mathematical optimization", "Applied mathematics", "Mathematical analysis", "Applied mathematics stubs" ]
11,751,313
https://en.wikipedia.org/wiki/Girder%20and%20Panel%20building%20sets
Girder and Panel Building Sets were a series of plastic toy construction kits created by Kenner Toys in the mid-1950s. Since then, the building sets have gone in and out of production several times, under a succession of different owners of the designs. Overview The Girder and Panel Building Set construction kits enabled a child to build plastic models of mid-twentieth century style buildings. Vertical plastic columns were placed in the holes of a Masonite base board and horizontal girders were then locked into the vertical columns to create the skeletal structure of a model building. Brightly coloured plastic panels containing translucent "windows" could then be snapped onto the outer girders to create a curtain wall. Square navy-blue roof panels—some with translucent skylight domes molded into them—were laid on the topmost beams to complete the structure. Bridge and Turnpike sets were later introduced that also employed frameworks of girders but with roadway sections instead of curtain wall panels and the addition of truss bracing and other techniques to construct models of various types of bridges, turnpikes, and interchanges. Still later, Kenner introduced sets with plastic-cased battery-operated motors that could be used to construct buildings with elevators, drawbridges that opened and closed, and other motorized structures. The Girder and Panel construction style emulated twentieth century construction techniques such as curtain walls of prefabricated panels attached to frameworks of girders, trusses, and cantilevers. Girder and Panel toy sets were important toys in the transition from the metal-based Gilbert Erector Sets of the 1920-to-1950 era to the plastic toys of the modern age. While Lego is arguably the most popular contemporary construction toy, no other toy has replaced Girder and Panel as a direct reflection of modern building techniques. Girder and Panel products have been produced by several companies since 1957, and from 2005 to 2016 they were made by Bridge Street Toys, a privately owned company based in Massachusetts. Development The concept for Girder and Panel originated when Kenner president Albert Steiner witnessed the construction of a new office building in Cincinnati in 1956. The steel beam and girder structure of the new building and the steel and glass wall panels later applied to that framework gave Steiner the idea for a new toy. He proposed creating plastic construction toy sets that enabled the child to construct model buildings out of frameworks of red colored girders and beams, with exterior curtain walls, on a foundation board of green Masonite. The specific design of the sets was left to Kenner's James O. Kuhn. His assistant Michael Oppenheim became the product manager for the Girder and Panel product line. Vertical girders were placed in the holes of the Masonite base board. The top of the square girder had four V-shaped notches. A horizontal beam with a dovetail on each end would then lock into one of the notches on the beam, giving the skeletal structure of the toy building a considerable amount of strength. Tiny pegs protruded from the girders and beams, which then fit into matching holes in the curtain wall pieces, keeping the pieces securely fastened. The girders, beams, and curtain walls from these initial sets were originally molded in polystyrene. 
But due to negative customer feedback concerning breakage of dovetail ends and the notches in the beams, Kenner quickly switched to using the newly developed polyethylene plastic for the girders and beams, which provided a small amount of flexibility needed to withstand repeated assembly and disassembly of the pieces. The curtain walls were produced using a "vacuform" method, and were somewhat brittle as a result. Colors of the curtain wall panels were typically bright yellow and reddish-orange, with a variety of white or translucent "windows." Customers were encouraged to cut the panels with scissors, allowing the panels to fit into corners. Square navy blue roof panels lay on top of the beams, and some roof pieces even had translucent skylight domes molded into them. Kenner typically created two or three sets of different sizes for each theme of the Girder and Panel toy line, offering the buyer a choice of "basic, better (and best)." Thus, the initial theme of "buildings" was offered as sets #1, #2 and #3, with set #3 having the most parts of the group. The success of this initial group of three sets inspired Kenner to introduce the Bridge and Turnpike sets, which reflected an "interstate roadway" theme. These sets, #4,#5, and #6, employed frameworks of girders too, but with diagonal truss bracing and other techniques to construct models of various types of bridges, turnpikes, and interchanges. The trusses were very flexible, providing tensile strength, but little compressive strength, requiring trusses and cantilevers to be assembled using engineering principles. Unlike Girder and Panel, the Bridge and Turnpike sets featured roadway sections instead of curtain wall panels. The next set in the series, #7, was the Combined Girder and Panel, Bridge and Turnpike Set, which provided all the parts from the earlier Girder and Panel sets and Bridge and Turnpike sets. With this combination, one could construct models of structures that combined buildings and roadways, such as a bus terminal or hotel in which ramps lead cars and buses directly into and out of the building. From there, Kenner introduced their all-time best selling sets: The Motorized Girder and Panel, Bridge and Turnpike Building set. Set #8 added a yellow plastic battery-operated motor unit, and set #9 contained two motor units. With these motors, one could construct buildings with elevators, drawbridges that opened and closed by electric motor, and other motorized structures. These sets included yellow plastic pulleys, spools and wheels to be used in conjunction with the motor unit. Kenner even offered a Motorizing Kit, set #10, to let those who owned the earlier Girder and Panel and Bridge and Turnpike sets motorize them. Planning new sets With the initial success of the building and interstate road themes, Kuhn and Oppenheim looked for new ideas to expand the product line. Three new trends in society provided the themes for the next line of sets: the widespread use of chemicals, suburban growth by subdivisions, and futurist transportation concepts. The chemical theme would take shape in the new Hydro-Dynamic sets; the subdivision housing theme would be developed in the Build-A-Home sets; and the futurist transportation theme, embodied by the Disneyland monorail, would result in the Skyrail sets. 
Hydro-Dynamic Sets The Hydro-Dynamic sets enabled children to be hydraulic engineers, as these sets came with battery-operated pumps, which could pump water through polyvinyl plastic pipes into tanks and back into a plastic tray with a small reservoir. Set #11 had a tray with one pump, and set #12 had a tray with two pumps. Each pump required the use of 2 D-cell batteries that loaded underneath the tray, out of sight. The sets contained many new clear plastic polystyrene parts consisting of spray heads, dippers, turbines, funnels, small and large liquid chambers, and storage tanks. One could control the flow of the water with valves provided by the sets. Thus, with these sets, one could model structures that employ fluid hydraulics, such as chemical plants, oil refineries, and water treatment plants. Colored dye tablets were included to simulate different types of liquid chemicals. The project booklet included with the set even suggested a design for a plant producing DDT, a pesticide that is now banned. A small amount of classic Girder and Panel and Bridge and Turnpike pieces were included, to allow an office to be built as part of the chemical plant, with roadways leading to it. Build-A-Home Sets As new subdivisions sprang up around cities beginning in the 1950s, Kenner Toys reflected the trend in their Build-A-Home Building sets. The Build-A-Home sets enabled children to construct modern suburban homes, with simulated brickwork or white clapboard siding. New diagonal beams called joists were introduced to permit a low-angle pitched roof to be built and covered with vacuformed plastic roof plates. Styles varied between brick or white vinyl siding. Patios, swimming pools, TV antennas, steeples, chimneys, barbecues, and doghouses were all added as accessories to decorate the home. Molded green polystyrene foam trees along with vacuformed shrubs and vines also provided some crude landscaping. The basic set was #14, the better set was #15, and the best set of the group was #16. There were no motors or roadway pieces in this group. Skyrail Sets The Skyrail sets introduced yellow girders and beams, different colored window and door panels and battery powered red or blue "Sky Cars" that ran on monorail steel rails from building to building. The sets came in two sizes: a single red car set (17), and the bigger two-car set that contained the blue car (18). The sets that came in the upright storage containers were Set #30 (one car) and Set #31 (two cars). There were no track switches, so the layout was either a completed circuit (circle) or a single line (red end-of-line bumpers were included to prevent the car(s) from flying off the ends). Unique to these sets (besides the 50s-futuristic monorail cars and the metal rails) were the red clips that fit onto the girders to hold up the metal rails, and some green signs made only for the monorail that would seem out of place on the conventional Girder/Panel sets. Holes in the non-conductive rubber, between the upper and lower metal rail parts, fit over the clips. This allowed the rails to be tilted into some spectacular angles. The rails fit together with two protruding pins that fit into the next rail, and so on. Unlike other monorail toys, the Skyrail sets had no fixed pillars supporting the rails; the buildings that were made by assembling the girder/panel sets would accept the "clips"; the height, width, and complexity of the layout rested solely on the child's imagination. 
One drawback was the battery box that supplied power to the rails. The configuration of the interior of the box lid resembled a small upturned plastic stool. When one twisted the directional control atop the battery box, the "stool" inside turned and made the contacts touch. After repeated use, the "legs" of the stool broke off, and the box was useless. When designing the Girder and Panel Skyrail sets, the engineer at Kenner carved the rails out of wood. His statement that the curves were nearly impossible to create so that the cars wouldn't hang up on them demonstrates that this was mostly a single man's endeavor. Later evolution Later, Kenner upgraded their Girder and Panel, Bridge and Turnpike sets by changing the design to the Modern-As-Tomorrow and Freeway USA sets, which introduced grey colored girders and beams, new panels, newly designed and colored roofs, roadway pieces, realistic road signs and other items such as toll booths, sign and lamp posts. The last sets Kenner made before being sold to General Mills were the Girdermatic sets, which seemed to be based on mechanical structures, rather than buildings. These sets introduced many new parts that are unique to Girdermatic sets, including a new green colored motor and battery controller, round platforms, cog belts, truss assemblies, giant beams, and Ferris wheel rings, with which one could build moving cranes, observation towers, several different types of bridges, industrial plants and mills with conveyors, and amusement rides such as a Ferris Wheel, Incline ride and Whirling swing. CAD-based Girder and Panel Building Set CAD software tailored to the modeling of Girder and Panel Building Sets allows users to build 3D models of anything one could construct with the physical toy. Also known as a Virtual Girder and Panel Building Set, the only currently known software is RogCAD Virtual Girder and Panel Building Set. Girder and Panel Set details Here are the original Girder and Panel sets made by Kenner Toys. The parts counts were verified from Kenner Set boxes in Spaced-Out Bob's collection. Kenner (when owned by General Mills) By about 1968, the production of Girder and Panel sets had stopped and did not start up again until about 1974, when Kenner, then owned by General Mills, produced the larger 1,100-piece Sears Tower set #72001 with black girders and panels, which could make a model of the Sears Tower. These sets came with white/grey masonite baseboards. This set had 1,226 pieces, with the Sears Tower requiring 1,197 pieces to build. Kenner Toys then revived the Girder and Panel line with a series of inexpensive sets. The green Masonite base boards were replaced with interlocking plastic plates. The panels were now flexible printed acetate sheets. The following sets were made when Kenner was owned by General Mills: Irwin Toys Girder and Panel Sets Kenner Toys ceased production of the 72000 series of Girder and Panel sets in 1979, ending the long run with their five "KENSTRUCT" sets. The Girder and Panel trademark seems to have been abandoned by the company. In 1992, Irwin Toys of Toronto, Canada applied to the US Trademark Office for the assumption of the abandoned trademark. Irwin then began an entirely new line of Girder and Panel sets unlike any that were made before. There were new blue/grey girders and beams, new diagonal beams for slanted roofs, and new wall panels, and some new plastic items were also added. Initially they produced three sets, called "Town Centre," "City Scape" and "Deluxe Skyscraper". 
All sets now had an internal light run by two AA batteries. Beginning in 1996, Irwin produced a second line of specialty sets, starting with the Gas Station set listed below. These sets had very little US distribution and were mainly sold in Canada. The following Girder and Panel sets were made by Irwin Toys: Bridge Street Toys Girder and Panel Sets From 2005 until 2016, Girder and Panel sets were again marketed by Bridge Street Toys. They created a number of different sets, along with compatible parts for the older Kenner Toys and Irwin Toys sets: (The last two digits of the set # indicate its year of introduction) Bridge Street Toys created two prototype sets that were never officially released for sale. www.girderandpanel.net obtained the only 11 copies of each of these sets. Eleven of the Toy Store and 10 of the Police Station sets were sold, leaving the one remaining set still available. In 2005, Bridge Street Toys created a special set for investors and family members as a Christmas gift. See also Construx Lego Lincoln Logs Lionel Corp. briefly made a construction set in the immediate postwar era Märklin Meccano Merkur sets Skyline Steel Tec Super City Tinkertoy Unit beams References External links - Kenner Girder and Panel Building Sets - History and information, and restored sets for sale Note: www.girderandpanel.net no longer exists Construction toys Educational toys Girders 1970s toys
Girder and Panel building sets
[ "Technology" ]
3,148
[ "Structural system", "Girders" ]
11,751,393
https://en.wikipedia.org/wiki/John%20Riordan%20%28mathematician%29
John Francis Riordan (April 22, 1903 – August 27, 1988) was an American mathematician and the author of major early works in combinatorics, particularly Introduction to Combinatorial Analysis and Combinatorial Identities. Biography Riordan was a graduate of Yale University. In his early life he wrote a number of poems and essays and a book of short stories, On the Make, published in 1929, and was Editor-in-Chief of Salient and The Figure in the Carpet, literary magazines published by The New School for Social Research in New York. He married Mavis McIntosh, the well-known poet and literary agent and founder of McIntosh & Otis. The couple had two daughters: Sheila Riordan and Kathleen Riordan Speeth, and were long-time residents of Hastings-on-Hudson, New York. Riordan's long professional career was at Bell Labs, which he joined in 1926 (a year after its foundation) and where he remained, publishing over a hundred scholarly papers on combinatorial analysis, until he retired in 1968. He then joined the faculty at Rockefeller University as professor emeritus. A Festschrift was published in his honor in 1978. Throughout his life Riordan led an active literary life, with many distinguished friends such as Kenneth Burke, William Carlos Williams, and A. R. Orage. The Riordan array, created by mathematician Louis W. Shapiro, is named after John Riordan. Tribute From the Introduction by Mark Kac to the Special Issue of the JCTA in honor of John Riordan: Foremost among the keepers of the barely flickering combinatorial flame was John Riordan. John's work in Combinatorial Theory (or Combinatorial Analysis as he prefers to call it) is uncompromisingly classical in spirit and appearance. Though largely tolerant of modernity he does not let anyone forget that Combinatorial Analysis is the art and science of counting (enumerating is the word he prefers) and that a generating function by any other name or definition is still a generating function. From an interview with Neil Sloane published by Bell Labs: "Even at the end of my first year as a graduate student at Cornell, in 1962, I managed to arrange a summer job at Bell Labs in Holmdel. This was still on minimal cost networks. During that summer I met another of my heroes, John Riordan, one of the great early workers in combinatorics. His book An Introduction to Combinatorial Analysis is a classic. He was working at Bell Labs in West Street in Manhattan at that time. One of my earliest papers, on a problem that came up in my thesis work, was a joint paper with him." Selected publications (book of 14 short stories) (reissued in 1980; reprinted again in 2002 by Courier Dover Publications) translated into Russian in 1962. (reprinted with corrections) Notes External links Former Members of the Technical Staff in the mathematics group at Bell Laboratories. A history of mathematics at Bell Labs A Guide to John F. Riordan Papers' Rockefeller University Faculty FA 191 John F. Riordan, 1903-1988. Mathematician and engineer The John Riordan Prize 20th-century American mathematicians Combinatorialists Scientists at Bell Labs Yale University alumni People from Hastings-on-Hudson, New York 1903 births 1988 deaths
John Riordan (mathematician)
[ "Mathematics" ]
668
[ "Combinatorialists", "Combinatorics" ]
11,752,313
https://en.wikipedia.org/wiki/Millioctave
The millioctave (moct) is a unit of measurement for musical intervals. As is expected from the prefix milli-, a millioctave is defined as 1/1000 of an octave. From this it follows that one millioctave is equal to the ratio 2^(1/1000), the 1000th root of 2, or approximately 1.0006934. Given two frequencies a and b, the size of the interval between them in millioctaves can be calculated as n = 1000 · log2(a/b). Likewise, if you know a note b and the number n of millioctaves in the interval, then the other note a may be calculated as a = b · 2^(n/1000). Like the more common cent, the millioctave is a linear measure of intervals, and thus the size of intervals can be calculated by adding their millioctave values, instead of multiplication, which is necessary for calculations of frequencies. A millioctave is exactly 1.2 cents. History and use The millioctave was introduced by the German physicist Arthur von Oettingen in his book Das duale Harmoniesystem (1913). The invention goes back to John Herschel, who proposed a division of the octave into 1000 parts, which was published (with appropriate credit to Herschel) in George Biddell Airy's book on musical acoustics. Compared to the cent, the millioctave has not been as popular because it is not aligned with just intervals. It is, however, occasionally used by authors who wish to avoid the close association between the cent and twelve-tone equal temperament. Some consider that the millioctave likewise introduces a bias toward the less familiar 10-tone equal temperament, though this bias is inherent in the decimal system. See also Cent (music) Savart Musical tuning Logarithm Degree (angle) Chiliagon Notes External links Logarithmic Interval Measures Equal temperaments Intervals (music) Units of measurement 1913 introductions 1000 (number)
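As a small illustration of the two conversions reconstructed above (frequency pair to millioctaves, and base frequency plus interval to new frequency), here is a Python sketch that is not part of the article:

```python
from math import log2

def millioctaves(a, b):
    """Size, in millioctaves, of the interval from frequency b up to frequency a."""
    return 1000 * log2(a / b)

def frequency_above(b, n):
    """Frequency n millioctaves above frequency b."""
    return b * 2 ** (n / 1000)

# An octave (ratio 2:1) is 1000 moct by definition, and an equal-tempered
# semitone (ratio 2^(1/12)) is 1000/12, roughly 83.3 moct.
print(millioctaves(880.0, 440.0))    # 1000.0
print(millioctaves(466.16, 440.0))   # about 83.3
print(frequency_above(440.0, 1000))  # 880.0
```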
Millioctave
[ "Physics", "Mathematics" ]
399
[ "Units of measurement", "Physical quantities", "Musical symmetry", "Quantity", "Logarithmic scales of measurement", "Equal temperaments", "Symmetry" ]
11,752,492
https://en.wikipedia.org/wiki/The%20Scary%20Guy
The Scary Guy (sometimes stylized THE SCARY GUY; born December 29, 1953) is a United Kingdom-based American motivational speaker who campaigns worldwide to eliminate hate, violence, prejudice, and bullying in schools and corporations. In addition to being a tattoo shop owner, comic, entertainer, inspirational speaker, and performance artist, The Scary Guy has a pierced nose, eyebrows, and ears, and has tattoos covering over 85 percent of his body. Early life The Scary Guy was born on December 29, 1953, as Earl Kenneth Kaufmann, to his father, Carroll August Kaufmann, and his mother, Constance Joan Buckingham. Growing up in New Hope, Minnesota, The Scary Guy graduated in 1972 from Cooper Senior High School and excelled as a voice major at Macalester College, in Saint Paul. Tattoos The Scary Guy got his first tattoo at the age of 30 and now has tattoos that cover an estimated 85% of his body. Over the years, his collection has grown as a reflection of his life experiences. The tattoos are what he calls 'modern tribalism', reflecting various emotional events. One of these is a tattoo of a man called "Yuppiecide", a representation of his former self. Some of his other tattoos represent his love of art, while others were chosen simply because they looked "cool" at a time when he worked as a computer salesman. Bibliography Hatwood, Mark David. 7 Days and 7 Nights – An Official Biography of The Scary Guy. VisionHeart, Inc. 2008 Videos The Scary Guy on Firepit Friday Films Scary, also known as Scary – tattoo therapy (German title: Scary – Furchterregend) (2006), by Uli Kick, Arte, Bayerischer Rundfunk, Filmworks, Südwestrundfunk, Westdeutscher Rundfunk. See also Teachers' TV References External links Official website American social workers American motivational speakers American businesspeople Personal development 1953 births American tattoo artists Male tattoo artists Living people People from New Hope, Minnesota
The Scary Guy
[ "Biology" ]
409
[ "Personal development", "Behavior", "Human behavior" ]
11,753,924
https://en.wikipedia.org/wiki/Climate%20risk%20management
Climate risk management (CRM) is a term describing the strategies involved in reducing climate risk, through the work of various fields including climate change adaptation, disaster management and sustainable development. Major international conferences and workshops include the United Nations Framework Convention on Climate Change and the World Meteorological Organization's Living with Climate conference.

Definition

Climate risk management is a generic term referring to an approach to climate-sensitive decision making. The approach seeks to promote sustainable development by reducing the vulnerability associated with climate risk. CRM involves strategies aimed at maximizing positive and minimizing negative outcomes for communities in fields such as agriculture, food security, water resources, and health. Climate risk management covers a broad range of potential actions, including early-response systems, strategic diversification, dynamic resource-allocation rules, financial instruments (such as climate risk insurance), infrastructure design and capacity building. AI tools enable more precise forecasts and resource allocation in sectors such as agriculture, allowing stakeholders to capitalize on favorable climate conditions while minimizing risks. In addition to avoiding adverse outcomes, a climate risk management strategy also aims to maximize opportunities in climate-sensitive economic sectors; for example, farmers may use favorable seasonal forecasts to maximize their crop productivity.

Major international conferences and workshops

United Nations Framework Convention on Climate Change

The United Nations Framework Convention on Climate Change involves negotiations among delegates on a framework for addressing climate change. Discussions center on mitigation, adaptation, technology development and transfer, and financial resources and investment. During COP21, the international community funded investment in climate risk insurance as part of the strategies for addressing climate risk.

World Meteorological Organization - Living With Climate

The Living with Climate Conference was co-hosted by the World Meteorological Organization, the Earth Institute and the Finnish Meteorological Institute in July 2006. The meeting was designed to review opportunities and constraints in integrating climate risks and uncertainties into decision-making. A major outcome was the Espoo Statement.

See also

Vulnerability
Risk management
Disaster risk reduction
Finnish Meteorological Institute
Earth Institute
Climate change and insurance in the United States

References

Climate change and society Natural disasters
Climate risk management
[ "Physics" ]
408
[ "Weather", "Physical phenomena", "Natural disasters" ]
11,754,068
https://en.wikipedia.org/wiki/Kn%C3%B6del%20number
In number theory, an n-Knödel number for a given positive integer n is a composite number m with the property that each i < m coprime to m satisfies $i^{m-n} \equiv 1 \pmod{m}$. The concept is named after Walter Knödel. The set of all n-Knödel numbers is denoted $K_n$. The special case $K_1$ is the set of Carmichael numbers.

There are infinitely many n-Knödel numbers for a given n. By Euler's theorem, every composite number m is an n-Knödel number for $n = m - \varphi(m)$, where $\varphi$ is Euler's totient function.

Examples

References

Literature

Eponymous numbers in mathematics Number theory
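The defining property can be verified directly by brute force for small numbers. The following Python sketch is illustrative only; the helper names and the ranges tested are chosen here and are not part of the article.

from math import gcd

def is_composite(m: int) -> bool:
    # Trial division; sufficient for the small values used below.
    return m > 3 and any(m % d == 0 for d in range(2, int(m ** 0.5) + 1))

def is_knoedel(m: int, n: int) -> bool:
    # m is an n-Knödel number if m is composite and every i < m
    # coprime to m satisfies i**(m - n) % m == 1.
    if not is_composite(m) or m <= n:
        return False
    return all(pow(i, m - n, m) == 1 for i in range(1, m) if gcd(i, m) == 1)

# 2-Knödel numbers below 20: expected output [4, 6, 8, 10, 12, 14]
print([m for m in range(2, 20) if is_knoedel(m, 2)])
# K_1 consists of the Carmichael numbers; the smallest, 561, should pass.
print(is_knoedel(561, 1))  # True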
Knödel number
[ "Mathematics" ]
130
[ "Number theory stubs", "Discrete mathematics", "Number theory" ]
11,754,565
https://en.wikipedia.org/wiki/Fomitopsis%20supina
Fomitopsis supina is a species of fungus in the family Fomitopsidaceae. It is a plant pathogen that affects avocados.

See also

List of avocado diseases

References

Fungi described in 1806 Fungal tree pathogens and diseases Avocado tree diseases Taxa named by Olof Swartz Fungus species
Fomitopsis supina
[ "Biology" ]
69
[ "Fungi", "Fungus species" ]