Evolve is a first-person shooter video game developed by Turtle Rock Studios and published by 2K. Announced in January 2014, the game was released worldwide for Microsoft Windows, PlayStation 4, and Xbox One in February 2015. Evolve uses an asymmetrical multiplayer structure in which five players—four playing as hunters and one as the monster—battle against each other on an industrialized alien planet called Shear. The hunters' gameplay follows first-person shooter design, while the monster is controlled from a third-person perspective. The hunters' goal is to eliminate the monster, while the monster's goal is to consume wildlife and evolve to grow stronger before either eliminating the hunters or destroying the objective.
Evolve was Turtle Rock Studios' first major project after the company split from Valve in 2010. The concept for Evolve predated the development of their previous game, Left 4 Dead. Inspired by hunting games such as Cabela's Big Game Hunter and Deer Hunter, the idea was to have prey that could strike back at the hunters. The monster design was originally intended to be esoteric but was later toned down. Turtle Rock had difficulty finding publishers that could provide funding and marketing for the game. THQ was originally set to serve as the game's publisher, but the rights to the franchise and publishing duties were transferred to Take-Two Interactive after THQ filed for bankruptcy in late 2012.
Prior to release, Evolve received a largely positive reception and won Best of Show awards at Electronic Entertainment Expo 2014 and Gamescom 2014. Upon release, the game received positive reviews from critics, with praise mostly directed at its asymmetrical structure, controls, and designs. However, it drew criticism for its progression system, light narrative, aspects of its gameplay, and the large amount of downloadable content planned. Evolve was a commercial success, although its player base dwindled significantly shortly after release. The game briefly transitioned to a free-to-play title known as Evolve Stage 2 before 2K Games shut down its dedicated servers in September 2018.
Plot
The game is set in a fictional future where humans have successfully discovered ways to survive outside Earth and have begun colonizing other planets. Humans arrived at Shear, a distant planet located in the "Far Arm" of space, and began creating colonies and industrial factories.
As the colonization progressed, humans began to meet resistance from alien life-forms, known as Monsters, that had the ability to evolve by consuming local wildlife. As the Monsters destroyed the colonies on Shear, a former "planet tamer" named William Cabot was brought out of retirement to deal with the threat and to evacuate the remaining colonists from Shear. Cabot assembled a team of Hunters to eliminate the Monsters and protect their communities.
Gameplay
Evolve is an action video game with a focus on both co-operative and competitive multiplayer gameplay. The game adopts a '4v1' asymmetrical structure in which four players take control of the Hunters while the fifth player controls the Monster. The Hunters' main objective is to track and hunt the Monster in a limited amount of time, while the Monster's goal is to evolve and make itself more powerful.
At the beginning of a match, the Monster is given a 30-second head start so that it has enough time to escape before the Hunters parachute from a plane to the Monster's starting location. Each map features an open world environment for players to explore and play within. To help the Hunters navigate the environments quickly, they are equipped with jetpacks, allowing them to jump over obstacles and cliffs. The jetpack can also be used to dodge the Monster's attacks, though it consumes a Hunter's energy. The team can track the Monster, as well as place waypoints on an interactive map. A waypoint's color depends on what has been marked: yellow for the environment, orange for wildlife, and red for the Monster.
The Monster grows strong enough to fight the Hunters by hunting and killing local wildlife to gain experience points. When it gains enough experience, the Monster can evolve. Evolution extends and refills the Monster's health bar and makes more abilities available, making it easier to kill the Hunters. However, the Monster is vulnerable while evolving, and if it is caught by the Hunters, the process is disrupted. The Monster can also enter a "stealth mode", allowing it to avoid detection by wildlife and Hunters.
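The feed-and-evolve loop described above can be summarized as a simple state machine. The following Python sketch is purely illustrative: the class name, experience threshold, and stat values are invented for the example and are not taken from the game.

```python
class Monster:
    """Illustrative model of Evolve's feed-evolve loop (invented numbers)."""

    XP_PER_STAGE = 100   # hypothetical experience needed to evolve
    MAX_STAGE = 3        # the game caps evolution at stage three

    def __init__(self):
        self.stage = 1
        self.xp = 0
        self.max_health = 10
        self.health = self.max_health
        self.ability_points = 3

    def eat(self, wildlife_xp: int) -> None:
        """Consuming wildlife grants experience points."""
        self.xp += wildlife_xp

    def can_evolve(self) -> bool:
        return self.stage < self.MAX_STAGE and self.xp >= self.XP_PER_STAGE

    def evolve(self) -> None:
        """Evolving extends and refills health and grants more abilities.
        (In the game, the Monster is vulnerable while this is in progress.)"""
        if not self.can_evolve():
            raise RuntimeError("not enough experience to evolve")
        self.stage += 1
        self.xp = 0
        self.max_health += 5           # health bar is extended...
        self.health = self.max_health  # ...and refilled
        self.ability_points += 3       # more abilities become available
```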
Evolve features five different modes: Hunt, Nest, Rescue, Defend and Arena, which have different objectives for both the Hunters and the Monster. Evolve provides two different structures to these game modes: Quick Play, which starts a single playthrough match; and Evacuation, which serves as a five-match, multiplayer story mode. In Evacuation, each match gives the winning side an advantage in the next map, such as having a toxic gas the Monster is immune to, or autonomous gun turrets to assist the Hunters. The Evacuation mode ends with a 'Defend' match. Evolve also features an Observer Mode, allowing players to watch a match without playing in the match. The spectator can jump between cameras and view the match from both the Hunters' and the Monster's perspectives.
Normally, five players play in a standard round of Evolve, with four Hunters fighting one Monster. Playing with fewer than five players, including single player, is possible in all modes due to computer-controlled bots. These bots can control up to four of the characters, allowing between one and four human players in any game mode. Players can also switch to play as another class instantly in a single-player match.
Hunters
Evolve features a total of 20 human characters split into four classes, with five characters in each class. Each class has different skills and abilities, and players are required to cooperate with each other in a match. Players unlock new characters as they progress through the game; for example, the fourth Assault character is unlocked once the player has upgraded the previous three Assault characters. Hunter gameplay is first-person: weapon ammunition automatically refills when not in use, and iron sights are used in-game. Evolve does not allow multiple players to play as the same class in a match. Gameplay also varies between characters within the same class.
Assault: The Assault-class characters serve as the main "damage dealers" against the Monster. They are equipped with heavy weapons such as shotguns, rocket launchers, flamethrowers, and miniguns. Assault-class characters also have land mines and personal shields; the shield provides temporary invulnerability to damage. Starting with Stage 2, the invulnerability shield was replaced by the Defense Matrix, a new ability that reduces damage taken from enemy attacks.
Trapper: The Trapper-class characters use their gear to track the Monster's movements. As the Monster occasionally scatters local wildlife, such as birds, the Trapper can use these 'signs' to find the Monster's location. Trappers also have other abilities and tools that can slow the Monster's movement. Following the release of the Stage 2 alpha, all Hunters gained the ability to use the mobile arena, and Trappers gained the planet scanner ability, similar to the Monster's smell ability.
Support: The Support-class characters provide backup to the other characters. They are equipped with a damage-dealing weapon, such as a laser cutter, or a shield that can be used to protect other Hunters, and they can provide temporary shields for nearby allies. In Stage 2, they gained the ability to recharge the shields of their companion Hunters.
Medic: The Medic-class characters' main role is replenishing the health of team members. Medics are also equipped with a damage-dealing weapon, and some have the ability to revive teammates who are incapacitated or killed by the Monster. According to Evolve's concept artist, those playing as Medics should stay back, avoid direct combat with the Monster, and use their abilities only when necessary.
Starting with the release of Stage 2, changes were made to the Hunter classes. Every class now possesses the ability to deploy a force field, an ability once exclusive to the Trapper class; it can be used to confine the Monster's movement to a small area, and its cooldown time decreases when the Hunters deal enough damage to the Monster. In addition, the health of all Hunters regenerates if they manage to avoid damage, so they no longer have to rely solely on the Medic class.
Monsters
There are a total of five Monsters featured in Evolve. As with the Hunters, players need to inflict a certain amount of damage before unlocking a new Monster. The five Monster types have different abilities, both offensive and defensive. Players control the Monster from a third-person perspective, and unlike the Hunters, its gameplay resembles that of an action game. More abilities are granted to a Monster after it evolves. Gameplay mechanics did not change much with the release of Stage 2, but the Monsters were made more powerful: they were given more health, stamina, armor, and enough skill points to unlock all abilities, while ability cooldown times were shortened and recharge rates became significantly faster.
Goliath: The Goliath is the starter Monster, available to all players. He has the strongest armor and health among the Monsters. Goliath can charge and throw large rocks at Hunters, which can temporarily stun them, and can perform attacks such as breathing fire and "Leap Smash". Every Monster has an ability to traverse the environment; Goliath's is to leap in the direction he is facing.
Kraken: The Kraken is the second Monster players unlock. Kraken is electricity-based and can unleash attacks such as "Lightning Strike", "Vortex" and "Aftershock". He can also set up traps such as "Banshee Mines" around the map to slow the Hunters down. The Kraken's movement ability allows him to hover in the sky, and he is the only Monster that can fly.
Wraith: The Wraith is the third unlockable Monster. The Wraith can warp towards a Hunter and unleash a blast, dealing damage to nearby Hunters. She can also launch a supernova within a confined space, granting her accelerated attack speed, and can teleport between places and abduct a Hunter from the middle of a group. Her signature ability is a decoy that creates a copy of herself and turns her invisible while the clone fights to confuse the Hunters; it is also useful when her armor pool is low. The Wraith's movement ability allows her to warp in the direction she is moving.
Behemoth: The Behemoth is a DLC Monster. Behemoth can unleash abilities such as "Lava Bomb" and "Fissure", which can stun Hunters. He can also create a "Rock Wall", which can isolate a Hunter from their companions. The Behemoth's movement ability is to curl into a ball and roll; he can roll over Hunters and other wildlife to deal minor damage.
Gorgon: The Gorgon is also a DLC Monster. The Gorgon has abilities such as "Acid Spit" and "Web Snare", which can slow down Hunters. The Gorgon also has two abilities that use a 'second Monster': "Mimic", which lets her control a clone of herself that explodes to deal damage to Hunters; and "Spider Trap", which sends out a small spider that traps Hunters and slowly digests them. Gorgon's movement ability is similar to how Spider-Man moves: she shoots webs that allow her to traverse the map, and she can also cling to walls and launch surprise attacks.
Development
Origin
Evolve was developed by Turtle Rock Studios. Evolve's creative director, Phil Robb, and lead designer, Chris Ashton, are the co-founders of Turtle Rock Studios alongside Mike Booth. The team had a heritage of developing competitive multiplayer games, such as Valve's Counter-Strike series and the Left 4 Dead series. According to Robb, the team wanted to build a co-operative multiplayer game because it gave them a chance to play with their family and friends together, instead of against each other, and they found it offered a more enjoyable experience than competitive multiplayer games. The concept for Evolve was completed in 2005, before the development of the first Left 4 Dead game. However, the Evolve project was put on hold, as Turtle Rock thought that the technology at the time was not advanced enough to handle the game's design.
Turtle Rock Studios merged into Valve in early 2008 but split away later the same year. When the company was re-established, it had only 13 staff members. As a new company, Turtle Rock Studios hoped to make use of the popularity of the Left 4 Dead franchise to create something ambitious and massive before people forgot about the company. When eighth-generation video game consoles were released, the team realized they could create almost anything they wanted. They reviewed some of their previous projects and eventually chose Evolve, which seemed to be the most "straightforward" concept. The team also considered the new project their "proving ground", one that could show their ability to build a large-scale game beyond providing assistance to Valve. Development of Evolve officially began in early 2011.
Design
Evolve was inspired by the hunting games Cabela's Big Game Hunter and Deer Hunter. Members of Turtle Rock Studios, including Robb and Ashton, thought that mechanics from these hunting games, such as animal tracking, were seldom incorporated into action games. As a result, they came up with the original concept of Evolve, in which players who failed at hunting the animals could be attacked by their targets. Instead of typical big-game animals such as elephants and lions, the team imagined a "King Kong"-like creature, which later changed to an alien monster. The team picked a sci-fi setting, allowing them to add creative and unrealistic elements to the game. The team also took the concept of boss battles and expanded upon it as a key idea when developing Evolve. The team envisioned Evolve as a video game version of Predator; the goal was to create an experience that was new to video game players.
While Evolve carried over some game mechanics from Left 4 Dead, others were discarded. The team originally considered a more elaborate artificial intelligence system for Evolve's wildlife, but the idea was later scrapped: they thought the core experience of Evolve should be tracking and hunting the Monster rather than being constantly attacked by wildlife, and that adding too many complex mechanics for the wildlife would become an irritation. The developer also intentionally chose not to make Evolve action-packed all the time, introducing segments that require players to slow down and track the Monster. Robb explained that the design team wanted to create a contrast, so that players could appreciate the action and chaotic moments after experiencing the quieter segments.
When deciding on the number of Hunters in a match, the design team chose four, believing it was the optimal team size: players would not lose track of the stats and health of other players, and it allowed them to work collaboratively so that no character would be left behind or neglected by the team. From the Monster's perspective, the design team thought that having four Hunters engage it in combat would provide a challenge, as the Monster could find it difficult to keep track of the Hunters, making a match feel more balanced. The Hunter team was divided into several classes because it made sense, according to Turtle Rock. To showcase the features and abilities of the different classes, each class has variations in both appearance and costume colors, designed to make characters more recognizable and memorable. Another reason was that the design team wanted the Monster to adapt and use different strategies when dealing with different Hunters; Turtle Rock considered this an effective way to extend Evolve's replayability and add more variety to the gameplay. There were originally four Hunter characters in Evolve, but after the design team experimented with the free-to-play model, the roster was expanded to 16.
The titular "Evolve" game mechanic was inspired by the "bomb planting" mode from the Counter-Strike series. The Monsters started out as a relatively weak creature that could be defeated easily, but becomes stronger and gains more skills as it evolves. Early playtesters complained about the game mechanic, as they thought that this would bring an unfair disadvantage to the Hunters, since they do not "evolve" like the Monster. However, the design team still chose to maintain the game mechanic, as they thought that it would create an engaging experience. Ashton added that such game mechanics can create a "turning the tables" feeling for the Hunters, and that he thought that the sudden change in strategy – from offensive to defensive – could help deliver a dynamic experience to the players. The Monster was originally intended to play from a first-person perspective but was later shifted to a third-person perspective during development because the first-person control system was considered to be clumsy and confusing, and that the first person perspective took control away from players. The design team considered designing third-person gameplay a challenge, as they had no prior experience in creating such games. On the other hand, the design team implemented the first-person gameplay for the Hunters when Evolves development started. The design team thought that the first-person perspective would provide a sense of tension as players would not be able to see what was behind them.
Evolve's environments are based on Earth's: after an early concept design was found to be too extreme, the design team wanted to create a world that felt believable and had regions that made geological sense, so they drew inspiration from real-world landscapes. The design team wanted Evolve to be set in lush forests so that Hunters and Monsters could hide from each other. They tried using the Source game engine to create a forest landscape but ultimately failed, and then researched Crytek's CryEngine, which powers games like Far Cry and Crysis. As the team felt that Far Cry and Crysis had set new standards for in-game environments, they decided to use CryEngine for Evolve. Evolve's maps were designed to be dark and mysterious, so that the various characters could hide from each other, and to deliver a sense of surprise when players are ambushed.
As Evolve is multiplayer-focused, the design team put less time and focus into developing Evolve's narrative and campaign. Conversations between characters were reduced in the multiplayer mode, as the design team thought they would negatively impact conversations between players; the story and narrative are more significant in the single-player mode. Evolve does not adopt traditional storytelling methods or a campaign mode; instead, players learn about the Hunters' backstories and the fictional world of Shear gradually as they progress through Evolve. Playing as different characters also leads to different conversations and dialogue between characters.
Evolve features a Cthulhu-inspired art style; as a result, much of the wildlife was intentionally designed to feature tentacles. Robb had previously drawn many esoteric monster designs, but the publisher, THQ at the time, thought that while the designs looked unique, they would not benefit the game. The team then began developing "marketing monsters" with a more stereotypical design. The original Goliath was based on a lobster, but changed to "a hybrid between King Kong and Godzilla", according to Robb. Anthropomorphic design features were later added to Goliath to make players feel more connected to the Monsters, especially when they are killed in the game. For the second Monster, Kraken, the team wanted to create an electricity-based creature and looked at marine creatures, such as eels, for inspiration. The third Monster, Wraith, was inspired by sirens; the team noted that its key feature was its abduction ability, which the design team felt would capture the tense and exciting moments of classic monster movies. The team designed more than three Monsters, but many were dropped due to technical issues with Evolve's artificial intelligence system, abilities that were deemed too powerful, and animation problems.
Evolve's soundtrack was composed by Jason Graves and Lustmord: Graves composed the Monsters' music, while Lustmord composed the Hunters'. According to Graves, much of the music was inspired by the Aliens vs. Predator series. Graves stated that he intentionally chose not to use an orchestral style; instead he used synthesizers to create sounds he described as "odd". According to Graves, Evolve's soundtrack "evolved" as development progressed, shifting to become more electronic and synth-based. The Hunters' music is more futuristic and synth-sounding, while the Monsters' music is more drum-intensive and distorted.
Publishing
Evolve adopts an asymmetrical multiplayer structure, a new concept in the video game industry during its development. It was so new that the developer itself wondered why no one else was working on such a project, and the design team was uncertain whether the 4v1 structure would work. According to Turtle Rock, publishers were interested when they heard that the original creators of Left 4 Dead were making a new game. However, the design team encountered difficulties when pitching the game, and spent two months preparing the pitch. According to Robb, publishers were conservative and unsupportive of the idea and "[attempted] to poke holes" in their pitch. Even though representatives from these publishers were excited about the pitch once they understood it as an extension of the Tank mode in Left 4 Dead, they questioned Turtle Rock's ability to make a triple-A video game and were uncertain whether it was a project they should invest in.
After multiple failures, the Turtle Rock team looked for a business partner: a company that supported the idea and needed a co-operative shooter to fit into its lineup. They eventually partnered with THQ, which would serve as Evolve's publisher and help with funding. According to Robb, they had to show the game on an iPad, as they had forgotten to bring the battery for their laptop. THQ's then-president Danny Bilson, and later Jason Rubin, were also excited about the idea. However, by that time THQ had already entered financial difficulties, suffering a severe decline in profits. Turtle Rock knew that THQ had internal problems, but decided not to part ways with the publisher.
THQ's financial situation continued to worsen, and it declared bankruptcy on December 19, 2012. Evolve was listed alongside other unannounced titles from Relic Entertainment, Vigil Games, and THQ Studios Montreal in court documents filed by THQ. With THQ unable to continue its publishing and funding roles, an auction was held for other publishers to acquire these titles. Publishers interested in the game visited Turtle Rock Studios to see their "secret project". The team was frustrated, feeling the situation was "out of their control". Rubin later contacted Ashton and Robb and suggested that they bid for the game themselves. They bid $250,000 for their own project, which Ashton described as "what [they] had in the bank". However, they were outbid by Take-Two Interactive, which paid $11 million to acquire the game, the rights to the entire franchise, and its publishing label; 2K Games then served as the game's publisher. Despite being outbid, the Turtle Rock team was still "super excited" to collaborate with 2K. On January 8, 2015, Turtle Rock and 2K announced that Evolve had been declared gold, indicating it was being prepared for duplication and release.
Release
When Evolve was leaked in THQ's court documents, the game was expected to be released in THQ's 2015 fiscal year. The partnership between THQ and Turtle Rock had been revealed on May 26, 2011, and the game was re-revealed by gaming magazine Game Informer on January 7, 2014. It was announced that the game would be released for PlayStation 4, Windows, and Xbox One globally on October 21, 2014. However, 2K later decided to extend Evolve's development time frame to allow Turtle Rock to further polish the game and to "fully realize the vision for Evolve". As a result, Evolve was delayed to February 10, 2015.
Prior to release, the game had been playtested multiple times by the general public. An alpha version of Evolve, called the 'Big Alpha', was released for Xbox One on October 31, 2014. The alpha was originally set to be released for the PlayStation 4 a day later, but was delayed to November 3, 2014 due to technical issues related to the PS4's firmware update. As compensation, the duration of the test was extended by a day, ending on November 4, 2014 for all platforms. Players could play as the four classes of Hunters as well as the Goliath and Kraken in the alpha version. Turtle Rock expected 100,000 people to participate in the alpha, and hoped that through the testing they could check the functionality of the game's servers and make adjustments to the game's balance. Open beta trials of Evolve on Xbox One were held January 14–19, 2015, and a limited test for the PlayStation 4 and PC was held January 16–19, 2015. Players could play as the first eight Hunters as well as the Goliath and Kraken in the beta. The Evacuation mode was added to the beta on January 17, 2015.
In addition to the game's standard edition, players could purchase the game's Season Pass, Deluxe Edition and PC Monster Race Edition. The Season Pass features four additional Hunters and a set of "magma" Monster skins. The Deluxe Edition features all the content of the Season Pass, as well as a new Monster called Behemoth. The PC Monster Race Edition, exclusive to PC players, features the content of the Deluxe Edition as well as the fifth Monster and two additional Hunters. After Evolve's release, a new season pass, called Evolve Hunting Pass 2, was released on June 23, 2015, featuring new skins, Hunters and a new Monster.
Other media
On January 21, 2015, a mobile game titled Evolve: Hunter Quest appeared briefly on the iOS App Store and was later removed. The game was released by 2K on January 29, 2015 for iOS, Android, Windows Phone and Fire OS devices. The game is a free-to-play tile-matching video game, as well as a companion app to Evolve. In Evolve: Hunter Quest, players match three tokens of the same color in order to unleash attacks on enemies, fill up energy bars to activate special Hunter abilities and earn mastery points to level up. Mastery points earned in-app can then be applied to characters in the main Evolve game on any platform. Players who download the app can also unlock unique game art and watch replays of online matches from a top-down view.
Evolve was launched with several merchandise items. Handled by Merchandise Monkey, the Evolve merchandise collection includes T-shirts and various figurines. Funko also made several toys for Evolve, including figurines of Markov, Val, Hank, Maggie and Goliath, each among the first characters available to a player. A Goliath statue was also available for purchase.
Post-release
In an interview with Official Xbox Magazine, Ashton claimed that Evolve would have the "best support for downloadable content ever". However, many of the downloadable content packages are not covered by Evolve's Season Pass. In November 2014, Robb confirmed with IGN that all DLC maps would be free of charge, stating the reason was "to allow people who don't have the DLC, to still play against those who do, the only difference is that they can't play as those hunters or monsters". Despite Turtle Rock claiming that all DLC maps would be free to all players, the high number of paid DLC packs attracted criticism from fans who felt that a large amount of content was being deliberately withheld to be sold. A large number of players who purchased the game wrote negative reviews on Steam, complaining about the excessive amount of DLC planned as well as the general direction of the game, leading to an overall user score of "Overwhelmingly Negative" on the platform. Turtle Rock Studios countered by claiming that as much content as possible was packaged with the main game, with DLC only including content created after the completion of Evolve's development. At release, Evolve launched with 44 different paid DLC skin packs.
Free updates were added to the game. The Observer mode was added on March 31, 2015, and a less strategic mode, the Arena mode, was introduced on May 26, 2015. Robb thought that the game's format had the potential to become an eSports game. 2K expressed similar enthusiasm, adding that they would allocate resources to developing eSports-centric features for Evolve if fans of the game expressed demand for them. Turtle Rock and 2K collaborated with the Electronic Sports League and Sony Pictures Entertainment to host a special tournament in February 2015, in which players had to battle Chappie, the titular robot from the film Chappie. A Pro-Am Tournament of Evolve took place on March 6, 2015 during PAX East, during which the eSports future of Evolve was addressed. On June 15, 2015, another tournament was hosted by the Electronic Sports League and 2K.
On July 6, 2016, Turtle Rock announced that the game was transitioning to a free-to-play model under the title Evolve: Stage 2, following the game's downloadable content controversy and mixed critical reception. The new version introduced changes including longer respawn times, a non-ranked queue for casual players, and changes to Hunters' abilities. Turtle Rock also promised that patches would be released more frequently and that most items featured in the game could be unlocked simply by playing. The alpha version of Stage 2 began on July 7, 2016, for PC, followed by a beta in August of the same year. Players who had purchased the game were given Founder status, which granted exclusive cosmetic items. In October 2016, Turtle Rock announced that it would end support for Evolve and that Evolve: Stage 2 would not be released for consoles, though servers for the game would remain online for the 'foreseeable future'. Turtle Rock also revealed that it was 2K's decision to end the game's support. On September 3, 2018, the game's dedicated servers were shut down, though the game remained playable over peer-to-peer connections using Legacy Evolve.
In late July 2022, the multiplayer servers for Evolve: Stage 2 were re-enabled, and in October 2022 2K Games acknowledged the game's revival by re-enabling the daily log-in bonus and distributing free Steam keys to members of the Evolve: Reunited Discord server. The game's servers were again taken down "for the final time" in July 2023, rendering the game entirely unplayable.
Reception
Pre-release
Evolve received a largely positive reception from critics upon its initial announcement. It was nominated for six different awards at the Game Critics Awards, namely Best of Show, Best Original Game, Best Console Game, Best PC Game, Best Action Game and Best Online Multiplayer. It won four of them, losing Best Original Game to No Man's Sky and Best PC Game to Tom Clancy's Rainbow Six: Siege. Evolve was also named Best Game, Best Console Game (Microsoft Xbox), Best PC Game and Best Online Multiplayer Game at Gamescom 2014. Publisher 2K Games stated that these awards indicated that Evolve could become a defining title for both the PlayStation 4 and Xbox One. However, the DLC controversy caused backlash from customers, and the game was criticized for serving as a framework for the release of DLC.
Post-release
Evolve received mostly positive reviews. The review aggregation website Metacritic gave the PlayStation 4 version 76/100 based on 46 reviews, the Windows version 77/100 based on 38 reviews, and the Xbox One version 74/100 based on 31 reviews. The game received backlash from users on Steam due to the excessive amount of DLC sold on day one and the perception that the game was overpriced.
The asymmetrical structure of the game mostly received praise from critics. Vince Ingenito of IGN thought that the system was smart and successfully delivered a unique multiplayer experience. He added that the system is tactically deep, and that the "evolution" mechanic saved it from being gimmicky. This was contradicted by Steven Strom of Ars Technica, who stated that the game overall was just a "great gimmick and little else: something we'll play for a month or two, and not much longer." Evan Lahti of PC Gamer commended the structure and considered it the most compressed multiplayer experience since 2014's Titanfall, adding that such a structure is something the genre needs. GameSpot's Kevin VanOrd also appreciated the structure, which he thought made every battle feel "vicious and intense". Anthony LaBella of Game Revolution praised the asymmetrical idea and felt that the distinct gameplay elements between Monsters and Hunters successfully introduced Evolve to a broad audience, though he noted that such a structure may become repetitive and boring after months of play. Jeff Marchiafava of Game Informer also felt the structure was limited, and that Evolve, even with all its modes, failed to offer enough variety and challenge to players. Nic Rowan of Destructoid thought that Evolve presented some of the best moments he had had in a multiplayer game, but felt those moments were too few and far between.
The Monster's gameplay was praised by Ingenito, who thought it tasked players with using skill and patience, and that Evolve provided satisfying rewards for a player who successfully outsmarts the Hunters, a sentiment echoed by Strom. Lahti commended the Wraith, which he thought encouraged hit-and-run tactics. However, Rowan thought that the Monster gameplay could get old very fast; furthermore, he noted that several Monsters felt overpowered, which made Evolve feel unbalanced. The controls of the game received praise. Marchiafava thought they were smartly designed and applauded their accessible nature. Lahti wrote similarly, but thought that the gameplay would be "difficult to master". David Meikleham of GamesRadar praised Evolve's shooting mechanics, but complained that the on-screen action can become too chaotic for players to handle. Strom felt that the game modes were unbalanced in terms of fun, with certain modes prioritizing one team's fun at the other's expense, and criticized the fact that, outside of private lobbies with friends, players cannot choose any mode other than Hunt.
The process of hunting the Monster was praised. Ingenito thought the hunt was as tense as the actual confrontation and combat between the Hunters and the Monster. He also applauded the four classes, considering that the distinct class abilities successfully made players cooperate in order to succeed, and made choosing the right character an important, tactical decision. This was also contradicted by Strom, who felt that the Hunter gameplay up until finding the Monster was "hollow" and generally just consisted of going around in circles. Marchiafava thought that Evolve successfully delivered a compelling experience when playing with other players, and he was also surprised by the game's balance between the Monster and the Hunters. Meikleham thought that playing with other players could be an exhilarating experience, but only when players communicate using microphones. Rowan thought that Evolve could only deliver an enjoyable experience when all players play cooperatively, and that the overall experience would crumble if one of them failed to do so. Lahti liked Evolve's resource management, singling out the need for the Hunters to manage and conserve their jetpack energy.
The Hunter characters featured in the game received praise. Ingenito thought the Hunters were memorable due to their pre-game dialogue, which he found well written, calling this the "true beauty" of Evolve. Rowan echoed a similar sentiment, calling the banter "charming". Marchiafava, however, thought that the progression system made the banter between characters repetitive, because players need to play continuously to unlock characters. He also compared the narrative unfavorably to that of Left 4 Dead, finding it not emergent enough.
Evolve's map design received mixed reviews. Ingenito thought that Shear was a "beautifully realized" planet, while Marchiafava thought the maps were both detailed and varied. VanOrd thought that Turtle Rock had successfully captured an unsettling atmosphere, and applauded the verticality of the maps. Lahti agreed that the maps were well designed, but criticized them for being "homogeneous": all the maps felt too similar to each other, and none offered a particularly unique experience that required players to change tactics. He added that the lack of variety significantly lowered Evolve's replayability. Meikleham echoed a similar statement, adding that the maps were "bland" and did not look different from one another.
The progression system received criticism. Ingenito thought it was an unnecessary addition, adding that the upgrade system hides a lot of content from players unless they play frequently. Lahti, meanwhile, stated that after every character was unlocked, he felt less motivated to continue playing. LaBella thought the system did not offer enough content and described it as "thin", a view echoed by Strom, who felt that unlocking the characters was a "grind".
Sales
Evolve debuted at No. 1 on the UK software sales chart, the first title published by 2K Games to take the top spot since March 2013. Evolve was the second-best-selling game in the United States in February according to the NPD Group, behind only the handheld game The Legend of Zelda: Majora's Mask 3D. However, the average player count on Steam declined significantly after the game's launch. The game's player count increased 15,930% after its transition to a free-to-play model, and it was listed as one of Steam's most-played games. More than a million new players played the game after the transition.
Financial analyst Doug Creutz, of the Cowen Group, estimated that only 300,000 physical copies were sold in Evolve's launch month, a well-below-average figure for the triple-A gaming industry. Creutz stated that Evolve may be "too niche to reach a wide audience", adding that the negative reception to its DLC plan had hindered its success considerably. Despite such estimations, Karl Slatoff, President of Take-Two Interactive, stated that Evolve had achieved an "incredibly successful" launch and that the company was very satisfied with the game's sales. As of May 2015, 2.5 million copies of the game had been shipped. Take-Two CEO Strauss Zelnick considered the property one of their "permanent" franchises, joining Grand Theft Auto, BioShock and Red Dead.
References
External links
2015 video games
2K games
Asymmetrical multiplayer video games
Cooperative video games
CryEngine games
First-person shooters
Hero shooters
Free-to-play video games
Multiplayer and single-player video games
PlayStation 4 games
Products and services discontinued in 2018
Science fiction video games
Video games set in outer space
Take-Two Interactive games
Video games about evolution
Video games about extraterrestrial life
Video games about robots
Video games developed in the United States
Video games featuring female protagonists
Video games scored by Jason Graves
Video games set on fictional planets
Windows games
Xbox One games
Inactive multiplayer online games
Atmospheric electricity describes the electrical charges in the Earth's atmosphere (or that of another planet). The movement of charge between the Earth's surface, the atmosphere, and the ionosphere is known as the global atmospheric electrical circuit. Atmospheric electricity is an interdisciplinary topic with a long history, involving concepts from electrostatics, atmospheric physics, meteorology and Earth science.
Thunderstorms act as a giant battery in the atmosphere, charging up the electrosphere to about 400,000 volts with respect to the surface. This sets up an electric field throughout the atmosphere, which decreases with increasing altitude. Atmospheric ions created by cosmic rays and natural radioactivity move in the electric field, so a very small current flows through the atmosphere, even away from thunderstorms. Near the surface of the Earth, the magnitude of the field is on average around 100 V/m, oriented such that it drives positive charges down.
Atmospheric electricity involves both thunderstorms, which create lightning bolts to rapidly discharge huge amounts of atmospheric charge stored in storm clouds, and the continual electrification of the air due to ionization from cosmic rays and natural radioactivity, which ensure that the atmosphere is never quite neutral.
History
Sparks drawn from electrical machines and from Leyden jars suggested to early experimenters Hauksbee, Newton, Wall, Nollet, and Gray that lightning was caused by electric discharges. In 1708, Dr. William Wall was one of the first to observe that spark discharges resembled miniature lightning, after observing the sparks from a charged piece of amber.
Benjamin Franklin's experiments showed that electrical phenomena of the atmosphere were not fundamentally different from those produced in the laboratory, by listing many similarities between electricity and lightning. By 1749, Franklin observed lightning to possess almost all the properties observable in electrical machines.
In July 1750, Franklin hypothesized that electricity could be taken from clouds via a tall metal aerial with a sharp point. Before Franklin could carry out his experiment, in 1752 Thomas-François Dalibard erected an iron rod at Marly-la-Ville, near Paris, drawing sparks from a passing cloud. With ground-insulated aerials, an experimenter could bring a grounded lead with an insulated wax handle close to the aerial, and observe a spark discharge from the aerial to the grounding wire. In May 1752, Dalibard affirmed that Franklin's theory was correct.
Around June 1752, Franklin reportedly performed his famous kite experiment. The kite experiment was repeated by Romas, who drew sparks from a metallic string, and by Cavallo, who made many important observations on atmospheric electricity. Lemonnier (1752) also reproduced Franklin's experiment with an aerial, but substituted the ground wire with some dust particles (testing attraction). He went on to document the fair weather condition, the clear-day electrification of the atmosphere, and its diurnal variation. Beccaria (1775) confirmed Lemonnier's diurnal variation data and determined that the atmosphere's charge polarity was positive in fair weather. Saussure (1779) recorded data relating to a conductor's induced charge in the atmosphere. Saussure's instrument (which contained two small spheres suspended in parallel with two thin wires) was a precursor to the electrometer. Saussure found that the atmospheric electrification under clear weather conditions had an annual variation, and that it also varied with height. In 1785, Coulomb discovered the electrical conductivity of air. His discovery was contrary to the prevailing thought at the time, that the atmospheric gases were insulators (which they are to some extent, or at least not very good conductors when not ionized). Erman (1804) theorized that the Earth was negatively charged, and Peltier (1842) tested and confirmed Erman's idea.
Several researchers contributed to the growing body of knowledge about atmospheric electrical phenomena. Francis Ronalds began observing the potential gradient and air-earth currents around 1810, including making continuous automated recordings. He resumed his research in the 1840s as the inaugural Honorary Director of the Kew Observatory, where the first extended and comprehensive dataset of electrical and associated meteorological parameters was created. He also supplied his equipment to other facilities around the world with the goal of delineating atmospheric electricity on a global scale. Kelvin's new water dropper collector and divided-ring electrometer were introduced at Kew Observatory in the 1860s, and atmospheric electricity remained a speciality of the observatory until its closure. For high-altitude measurements, kites were once used, and weather balloons or aerostats are still used, to lift experimental equipment into the air. Early experimenters even went aloft themselves in hot-air balloons.
Hoffert (1888) identified individual downward lightning strokes using early cameras. Elster and Geitel, who also worked on thermionic emission, proposed a theory to explain thunderstorms' electrical structure (1885) and later discovered atmospheric radioactivity (1899) from the existence of positive and negative ions in the atmosphere. Pockels (1897) estimated lightning current intensity by studying the residual magnetic fields that lightning flashes left in basalt (c. 1900). Discoveries about the electrification of the atmosphere via sensitive electrical instruments, and ideas on how the Earth's negative charge is maintained, were developed mainly in the 20th century, with CTR Wilson playing an important part. Current research on atmospheric electricity focuses mainly on lightning, particularly high-energy particles and transient luminous events, and on the role of non-thunderstorm electrical processes in weather and climate.
Description
Atmospheric electricity is always present, and during fine weather away from thunderstorms, the air above the surface of Earth is positively charged, while the Earth's surface charge is negative. This can be understood in terms of a difference of potential between a point of the Earth's surface, and a point somewhere in the air above it. Because the atmospheric electric field is negatively directed in fair weather, the convention is to refer to the potential gradient, which has the opposite sign and is about 100 V/m at the surface, away from thunderstorms. There is a weak conduction current of atmospheric ions moving in the atmospheric electric field, about 2 picoamperes per square meter, and the air is weakly conductive due to the presence of these atmospheric ions.
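As a rough consistency check on the figures above (a back-of-envelope estimate, not a value quoted from a source), Ohm's law relates the fair-weather conduction current density to the field and implies a total near-surface air conductivity of

$$\sigma = \frac{J}{E} \approx \frac{2\times10^{-12}\ \mathrm{A\,m^{-2}}}{100\ \mathrm{V\,m^{-1}}} = 2\times10^{-14}\ \mathrm{S\,m^{-1}},$$

which quantifies just how weakly conductive fair-weather air is compared with ordinary conductors.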
Variations
Global daily cycles in the atmospheric electric field, with a minimum around 03 UT and peaking roughly 16 hours later, were researched by the Carnegie Institution of Washington in the 20th century. This Carnegie curve variation has been described as "the fundamental electrical heartbeat of the planet".
Even away from thunderstorms, atmospheric electricity can be highly variable, but, generally, the electric field is enhanced in fogs and dust whereas the atmospheric electrical conductivity is diminished.
Links with biology
The atmospheric potential gradient leads to an ion flow from the positively charged atmosphere to the negatively charged Earth surface. Over a flat field on a day with clear skies, the atmospheric potential gradient is approximately 120 V/m. Objects protruding into this field, e.g. flowers and trees, can increase the electric field strength to several kilovolts per meter. These near-surface electrostatic forces are detected by organisms such as the bumblebee, which uses them to navigate to flowers, and the spider, which uses them to initiate dispersal by ballooning. The atmospheric potential gradient is also thought to affect sub-surface electro-chemistry and microbial processes.
On the other hand, swarming insects and birds can be a source of biogenic charge in the atmosphere, likely contributing to a source of electrical variability in the atmosphere.
Near space
The electrosphere layer (from tens of kilometers above the surface of the Earth to the ionosphere) has a high electrical conductivity and is essentially at a constant electric potential. The ionosphere is the inner edge of the magnetosphere and is the part of the atmosphere that is ionized by solar radiation. (Photoionization is a physical process in which a photon is incident on an atom, ion or molecule, resulting in the ejection of one or more electrons.)
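The photoionization condition can be made quantitative: a photon ionizes an atom or molecule only if its energy hν exceeds the ionization energy. Using hc ≈ 1240 eV·nm, the threshold wavelength for molecular nitrogen (ionization energy about 15.6 eV) is, as an illustrative estimate,

$$\lambda_{\max} = \frac{hc}{E_i} \approx \frac{1240\ \mathrm{eV\,nm}}{15.6\ \mathrm{eV}} \approx 80\ \mathrm{nm},$$

so only extreme-ultraviolet and shorter-wavelength solar radiation can ionize the upper atmosphere in this way.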
Cosmic radiation
The Earth, and almost all living things on it, are constantly bombarded by radiation from outer space. This radiation primarily consists of positively charged ions, from protons up to iron and larger nuclei, derived from sources outside the Solar System. This radiation interacts with atoms in the atmosphere to create an air shower of secondary ionising radiation, including X-rays, muons, protons, alpha particles, pions, and electrons. Ionization from this secondary radiation ensures that the atmosphere is weakly conductive, and the slight current flow from these ions over the Earth's surface balances the current flow from thunderstorms. Ions have characteristic parameters such as mobility, lifetime, and generation rate that vary with altitude.
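The balance between ion generation and loss is often summarized, in standard textbook treatments (the numerical values below are typical illustrative figures, not measurements from this article), by

$$\frac{dn}{dt} = q - \alpha n^2,$$

where q is the ion-pair production rate and α the ion–ion recombination coefficient. In steady state n = √(q/α); for typical near-surface values q ≈ 10 cm⁻³ s⁻¹ and α ≈ 1.6×10⁻⁶ cm³ s⁻¹, this gives n ≈ 2.5×10³ small ions per cm³. The air conductivity then follows from the ion mobilities as σ = e n (μ₊ + μ₋).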
Thunderstorms and lightning
The potential difference between the ionosphere and the Earth is maintained by thunderstorms, with lightning strikes delivering negative charges from the atmosphere to the ground.
Collisions between ice and soft hail (graupel) inside cumulonimbus clouds cause the separation of positive and negative charges within the cloud, which is essential for the generation of lightning. How lightning initially forms is still a matter of debate: scientists have studied root causes ranging from atmospheric perturbations (wind, humidity, and atmospheric pressure) to the impact of solar wind and energetic particles.
An average bolt of lightning carries a negative electric current of 40 kiloamperes (kA) (although some bolts can be up to 120 kA), and transfers a charge of five coulombs and energy of 500 MJ, or enough energy to power a 100-watt lightbulb for just under two months. The voltage depends on the length of the bolt, with the dielectric breakdown of air being three million volts per meter, and lightning bolts often being several hundred meters long. However, lightning leader development is not a simple matter of dielectric breakdown, and the ambient electric fields required for lightning leader propagation can be a few orders of magnitude less than the dielectric breakdown strength. Further, the potential gradient inside a well-developed return-stroke channel is on the order of hundreds of volts per meter or less due to intense channel ionization, resulting in a true power output on the order of megawatts per meter for a vigorous return-stroke current of 100 kA.
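The quoted figures are mutually consistent, as a short back-of-envelope calculation shows. A charge transfer of 5 C carrying 500 MJ implies an effective potential difference of

$$V = \frac{W}{Q} = \frac{5\times10^{8}\ \mathrm{J}}{5\ \mathrm{C}} = 10^{8}\ \mathrm{V} = 100\ \mathrm{MV},$$

and 500 MJ delivered to a 100 W load lasts

$$t = \frac{5\times10^{8}\ \mathrm{J}}{100\ \mathrm{W}} = 5\times10^{6}\ \mathrm{s} \approx 58\ \mathrm{days},$$

i.e., just under two months.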
If the quantity of water that is condensed in and subsequently precipitated from a cloud is known, then the total energy of a thunderstorm can be calculated. In an average thunderstorm, the energy released amounts to about 10,000,000 kilowatt-hours (3.6×10¹³ joules), which is comparable to a 20-kiloton nuclear warhead. A large, severe thunderstorm might be 10 to 100 times more energetic.
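The unit conversion behind that figure: with 1 kWh = 3.6×10⁶ J,

$$10^{7}\ \mathrm{kWh} \times 3.6\times10^{6}\ \mathrm{J/kWh} = 3.6\times10^{13}\ \mathrm{J}.$$

Using the conventional TNT equivalence of about 4.2×10¹² J per kiloton, this is of the same order of magnitude as the quoted 20-kiloton comparison.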
Corona discharges
St. Elmo's Fire is an electrical phenomenon in which luminous plasma is created by a coronal discharge originating from a grounded object. Ball lightning is often erroneously identified as St. Elmo's Fire, whereas they are separate and distinct phenomena. Although referred to as "fire", St. Elmo's Fire is, in fact, plasma, and is observed, usually during a thunderstorm, at the tops of trees, spires or other tall objects, or on the heads of animals, as a brush or star of light.
Corona is caused by the electric field around the object in question ionizing the air molecules, producing a faint glow easily visible in low-light conditions. Approximately 1,000–30,000 volts per centimeter is required to induce St. Elmo's Fire; however, this depends on the geometry of the object in question. Sharp points tend to require lower voltage levels to produce the same result because electric fields are more concentrated in areas of high curvature, so discharges are more intense at the ends of pointed objects. St. Elmo's Fire and normal sparks can both appear when high electrical voltage affects a gas. St. Elmo's Fire is seen during thunderstorms when the ground below the storm is electrically charged and there is high voltage in the air between the cloud and the ground. The voltage tears apart the air molecules and the gas begins to glow. The nitrogen and oxygen in the Earth's atmosphere cause St. Elmo's Fire to fluoresce with blue or violet light; this is similar to the mechanism that causes neon signs to glow.
Earth-Ionosphere cavity
The Schumann resonances are a set of spectrum peaks in the extremely low frequency (ELF) portion of the Earth's electromagnetic field spectrum. Schumann resonance is due to the space between the surface of the Earth and the conductive ionosphere acting as a waveguide. The limited dimensions of the earth cause this waveguide to act as a resonant cavity for electromagnetic waves. The cavity is naturally excited by energy from lightning strikes.
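For an idealized, lossless spherical shell cavity of Earth radius a, the resonant frequencies are given by the standard textbook formula

$$f_n = \frac{c}{2\pi a}\sqrt{n(n+1)},$$

which for n = 1 gives f₁ ≈ 10.6 Hz; the observed fundamental sits lower, near 7.83 Hz, because the real Earth-ionosphere cavity is lossy rather than ideal.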
Electrical system grounding
Atmospheric charges can cause undesirable, dangerous, and potentially lethal charge potential buildup in suspended electric wire power distribution systems. Bare wires suspended in the air spanning many kilometers and isolated from the ground can collect very large stored charges at high voltage, even when there is no thunderstorm or lightning occurring. This charge will seek to discharge itself through the path of least insulation, which can occur when a person reaches out to activate a power switch or to use an electric device.
To dissipate atmospheric charge buildup, one side of the electrical distribution system is connected to the earth at many points throughout the distribution system, as often as on every support pole. The earth-connected wire is commonly referred to as the "protective earth"; it provides a path for the charge potential to dissipate without causing damage, and provides redundancy in case any one of the ground paths is poor due to corrosion or poor ground conductivity. The additional electric grounding wire that carries no power serves a secondary role, providing a high-current short-circuit path to rapidly blow fuses and render a damaged device safe, rather than have an ungrounded device with damaged insulation become "electrically live" via the grid power supply and hazardous to touch.
Each transformer in an alternating current distribution grid segments the grounding system into a new separate circuit loop. These separate grids must also be grounded on one side to prevent charge buildup within them relative to the rest of the system, which could otherwise cause damage from charge potentials discharging across the transformer coils to the other, grounded side of the distribution network.
See also
General
Atmospheric physics
Ionosphere
Air quality
Lightning rocket
Electromagnetism
Earth's magnetic field
Sprites and lightning
Whistler (radio)
Telluric current
Other
Electrodynamic tether
Solar radiation
Electrical phenomena | Atmospheric electricity | Physics | 3,372 |
43,026,600 | https://en.wikipedia.org/wiki/EMBRACE%20%28telescope%29 | EMBRACE (Electronic MultiBeam Radio Astronomy ConcEpt) is a prototype radio telescope for phase two of the Square Kilometre Array (SKA) project. It is the first dense phased array for radio astronomy in the GHz frequency range (initially planned to cover the 0.5–1.5 GHz mid-frequency band of the SKA). It comprises two sites, one at the Nançay radio telescope station in France and one near the Westerbork Synthesis Radio Telescope antennas in the Netherlands.
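EMBRACE's electronic multi-beaming rests on standard phased-array beamforming: each element's signal is phase-shifted so that contributions from a chosen direction add coherently. The Python sketch below shows the generic delay-and-sum phase rule for a uniform linear array; it is illustrative only (not EMBRACE's actual signal chain), and the element count and spacing are assumed values.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def element_phases_deg(n_elements: int, spacing_m: float, freq_hz: float,
                       steer_angle_deg: float) -> list[float]:
    """Per-element phase shifts (degrees) steering a uniform linear array:
    phase_k = -2*pi * k * d * sin(theta) / lambda (delay-and-sum rule)."""
    wavelength = C / freq_hz
    theta = math.radians(steer_angle_deg)
    return [(-360.0 * k * spacing_m * math.sin(theta) / wavelength) % 360.0
            for k in range(n_elements)]

# Illustrative: 8 elements at ~half-wavelength spacing for 1 GHz (inside the
# originally planned 0.5-1.5 GHz band), steered 20 degrees off boresight.
print(element_phases_deg(8, 0.15, 1e9, 20.0))
```

Because the steering is purely electronic, several independent sets of phase weights can be applied to the same element signals at once, which is what makes simultaneous multiple beams possible.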
Radio telescopes
Square Kilometre Array | EMBRACE (telescope) | Astronomy | 171 |
29,500,283 | https://en.wikipedia.org/wiki/Lambda%20Orionis%20Cluster | The Lambda Orionis Cluster (also known as Collinder 69) is an open star cluster located north-west of the star Betelgeuse in the constellation of Orion. It is about five million years old and roughly away from the Sun. Included within the cluster is a double star named Meissa. With the rest of Orion, it is visible from the middle of August in the morning sky to late April, before Orion passes too close to the Sun to be seen well. It can be seen from both the northern and the southern hemisphere.
Description
The cluster is following an orbit through the Milky Way that has a period of 227.4 million years with an ellipticity of 0.06, carrying it as far as from the Galactic Center, and as close as . The inclination of the orbit carries it up to away from the galactic plane. On average it crosses the plane every 33.3 million years.
The star cluster is young and contains a large number of low-mass stars, some T Tauri stars, and brown dwarfs. One notable member is LOri167, a wide binary consisting of a potential planetary-mass object and a brown dwarf. Observations of the star cluster with the Spitzer Space Telescope have shown that 25% of the low-mass stars and 40% of the substellar objects are surrounded by a circumstellar disk; two of these disks are being actively photoevaporated by Meissa.
Molecular ring and cluster evolution
The cluster might have formed in the central region of an elongated cloud, a scenario supported by the distribution of pre-main-sequence star candidates, which are concentrated in the cluster and nearby regions in an elongated shape. Massive OB stars and low-mass stars formed in the central regions of these clouds. The low-mass stars closest to the massive stars likely lost their circumstellar disks to photoevaporation, while many low-mass stars parsecs away were unaffected and represent the current population of low-mass stars with circumstellar disks. The cluster is surrounded by a large molecular ring, called the Lambda Orionis ring, which has been interpreted as the remnant of a supernova that exploded about one million years ago. The supernova blast encountered the clouds and gas in the region and dispersed the parent core, creating the molecular ring.
See also
Other celestial bodies included in the constellation Orion:
Orion Nebula
Horsehead Nebula
Barnard 30
Orion (constellation)
Open clusters | Lambda Orionis Cluster | Astronomy | 507 |
12,765,635 | https://en.wikipedia.org/wiki/Lavoisier%20Medal | A Lavoisier Medal is an award named and given in honor of Antoine Lavoisier, considered by some to be a father of modern chemistry.
At least three organizations independently give awards for achievement in chemical-related disciplines, each using the name Lavoisier Medal. Lavoisier Medals are awarded by the following organizations:
French Chemical Society (Société Chimique de France (SCF))
The French Chemical Society's Médaille Lavoisier is given for work or actions which have enhanced the perceived value of chemistry in society.
International Society for Biological Calorimetry (ISBC)
The ISBC's Lavoisier Medal is awarded to an internationally acknowledged scientist for an outstanding contribution to the development and/or application of direct calorimetry in biology and medicine.
Source: ISBC
1990: Ingemar Wadsö, Lund, Sweden
1992: Richard B. Kemp, Aberystwyth, UK
1994: Lee Hansen, Provo, USA
1997: Ingolf Lamprecht, Berlin, Germany
1999: Anthony E. Beezer, London, UK
2001: Lena Gustafsson, Göteborg, Sweden
2003: Erich Gnaiger, Innsbruck, Austria
2006: Mario Monti, Lund, Sweden
2010: Edwin Battley, Stony Brook NY, USA
2014: Urs von Stockar, Lausanne, Switzerland
DuPont
The DuPont company's Lavoisier Medal for Technical Achievement is presented to DuPont scientists and engineers who have made outstanding contributions to DuPont and their scientific fields throughout their careers. Antoine Lavoisier mentored the founder of the company, E. I. du Pont, more than 200 years ago.
It was awarded 95 times from 1990 to 2013. Stephanie Louise Kwolek received the award in 1995. She was the first female DuPont employee to receive the honor.
Partial list of recipients
Source (1990–2012): DuPont (archived copy)
Source (2011 onwards): DuPont (archived copy)
1990: Dr. Charles W. Todd
1990: Thomas H. Chilton (posthumously awarded).
1990: Nathaniel Wyeth
1991: Crawford Greenewalt
1992: Herman E. Schroeder
1993: Donald R. Johnson, pioneer of automatic clinical diagnostic instrumentation
1995: Stephanie Kwolek
1995: Herbert S. Eleuterio
1996: Owen Wright Webster
1997: William C. Drinkard
1997: Charles Stine
1999: Albert Moore
2000: Ivan Maxwell Robinson
2002: Wilfred Sweeny
2003: Rudolph Pariser
2005: Vlodek Gabara, Harry Kamack, Mel Kohan
2007: Edward J. Deyrup, Charles Joseph Noelke
2008: D. Peter Carlson, Noel C. Scrivner
2009: Calvin Chi-Ching Chien, George P. Lahm
2010: Robert L. Segebart
2011: Marc C. Albertsen
2012: Scott V. Tingey
2013: Mario Nappa
2014: Steve Taylor, Dave Estell
2015: Stephen Smith, Ronald McKinney
2016: Mick Ward, Tom Carney
2017: Joe Lachowski, George Weber
2018: Andrew Morgan, Scott Power, Peter Trefonas
2019: Mark Lamontia
2020: Andrew Morgan
2021: Mark Barger, Peter Berg
2022: Theresa Weston, Todd Buley
2024: Bradley K. Taylor
See also
List of chemistry awards
List of engineering awards
Chemistry awards
Materials science awards
Chemical engineering awards | Lavoisier Medal | Chemistry,Materials_science,Technology,Engineering | 695 |
36,831,006 | https://en.wikipedia.org/wiki/Hindsight%20optimization | Hindsight optimization (HOP) is a computer science technique used in artificial intelligence for the analysis of actions with stochastic outcomes. HOP is used in combination with a deterministic planner: for each action available in the given state, it draws sample outcomes (i.e. determinizes the action) and uses the deterministic planner to evaluate each sampled outcome, yielding an estimate of each action's value from which the most promising action can be selected.
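A minimal Python sketch of the idea, with hypothetical `sample_outcome` and `plan_value` callbacks standing in for the domain's stochastic model and deterministic planner. Each action is scored by averaging the planner's value over many determinized futures; note that because every sample is planned with full hindsight of its outcome, the HOP estimate is known to be optimistic (an upper bound on the true value).

```python
import random
from statistics import mean

def hindsight_optimize(state, actions, sample_outcome, plan_value, n_samples=100):
    """Pick the action whose determinized futures have the best average planned value.

    sample_outcome(state, action) -> one sampled successor state (a determinization)
    plan_value(state)             -> value a deterministic planner achieves from there
    """
    def estimate(action):
        return mean(plan_value(sample_outcome(state, action)) for _ in range(n_samples))
    return max(actions, key=estimate)

# Toy usage: a risky action pays 10 or 0 with equal odds; a safe action pays 4.
payoff = {"risky": lambda: random.choice([10, 0]), "safe": lambda: 4}
best = hindsight_optimize(
    state=None,
    actions=["risky", "safe"],
    sample_outcome=lambda s, a: payoff[a](),
    plan_value=lambda outcome: outcome,  # trivial "planner": value the sampled result
    n_samples=1000,
)
print(best)  # "risky": its average sampled value (~5) beats the safe 4
```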
Artificial intelligence engineering | Hindsight optimization | Engineering | 95 |
2,959,101 | https://en.wikipedia.org/wiki/Diethanolamine | Diethanolamine, often abbreviated as DEA or DEOA, is an organic compound with the formula HN(CH2CH2OH)2. Pure diethanolamine is a white solid at room temperature, but its tendencies to absorb water and to supercool often result in it being found as a colorless, viscous liquid. Diethanolamine is polyfunctional, being both a secondary amine and a diol. Like other organic amines, diethanolamine acts as a weak base. Reflecting the hydrophilic character of the secondary amine and hydroxyl groups, DEA is soluble in water, and amides prepared from DEA are often also hydrophilic. In 2013, the chemical was classified by the International Agency for Research on Cancer as "possibly carcinogenic to humans" (Group 2B).
Production
The reaction of ethylene oxide with aqueous ammonia first produces ethanolamine:
C2H4O + NH3 → H2NCH2CH2OH
which reacts with a second and third equivalent of ethylene oxide to give DEA and triethanolamine:
C2H4O + H2NCH2CH2OH → HN(CH2CH2OH)2
C2H4O + HN(CH2CH2OH)2 → N(CH2CH2OH)3
About 300 million kg are produced annually in this way. The ratio of the products can be controlled by changing the stoichiometry of the reactants: an excess of ammonia favors ethanolamine, while higher proportions of ethylene oxide shift the mixture toward DEA and triethanolamine.
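The three equations above conserve atoms, which can be checked mechanically from standard atomic weights. The short Python sketch below verifies the mass balance of each ethoxylation step and computes DEA's molar mass (about 105.14 g/mol); the formulas are read directly off the equations in the text.

```python
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}  # g/mol

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

eo  = molar_mass({"C": 2, "H": 4, "O": 1})           # ethylene oxide
nh3 = molar_mass({"N": 1, "H": 3})                   # ammonia
mea = molar_mass({"C": 2, "H": 7, "N": 1, "O": 1})   # ethanolamine
dea = molar_mass({"C": 4, "H": 11, "N": 1, "O": 2})  # diethanolamine
tea = molar_mass({"C": 6, "H": 15, "N": 1, "O": 3})  # triethanolamine

assert abs(eo + nh3 - mea) < 1e-6  # step 1 conserves mass
assert abs(eo + mea - dea) < 1e-6  # step 2
assert abs(eo + dea - tea) < 1e-6  # step 3
print(f"DEA molar mass: {dea:.2f} g/mol")  # ~105.14
```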
Uses
DEA is used as a surfactant and a corrosion inhibitor. It is used to remove hydrogen sulfide and carbon dioxide from natural gas.
Diethanolamine is widely used in the preparation of diethanolamides and diethanolamine salts of long-chain fatty acids that are formulated into soaps and surfactants used in liquid laundry and dishwashing detergents, cosmetics, shampoos and hair conditioners. In oil refineries, an aqueous DEA solution is commonly used to remove hydrogen sulfide from sour gas. It has an advantage over the similar amine ethanolamine in that a higher concentration may be used for the same corrosion potential, allowing refiners to scrub hydrogen sulfide at a lower circulating amine rate with less overall energy usage.
DEA is a chemical feedstock used in the production of morpholine.
Amides derived from DEA and fatty acids, known as diethanolamides, are amphiphilic.
The reaction of 2-chloro-4,5-diphenyloxazole with DEA gives rise to ditazole. The reaction of DEA with isobutyraldehyde, with removal of water, produces an oxazolidine.
Commonly used ingredients that may contain DEA
DEA is used in the production of diethanolamides, which are common ingredients in cosmetics and shampoos added to confer a creamy texture and foaming action. Consequently, some cosmetics that include diethanolamides as ingredients contain DEA. Some of the most commonly used diethanolamides include:
Cocamide DEA
DEA-Cetyl Phosphate
DEA Oleth-3 Phosphate
Lauramide DEA
Myristamide DEA
Oleamide DEA
Safety and environment
DEA is a potential skin irritant in workers sensitized by exposure to water-based metalworking fluids.
DEA is potentially toxic to aquatic species.
Diols
Endocrine disruptors
IARC Group 2B carcinogens
Secondary amines
Ethanolamines | Diethanolamine | Chemistry | 756 |
80,197 | https://en.wikipedia.org/wiki/Soviet%E2%80%93Afghan%20War | The Soviet–Afghan War took place in the Democratic Republic of Afghanistan from December 1979 to February 1989. Marking the beginning of the protracted Afghan conflict, it saw the Soviet Union and the Afghan military fight against the rebelling Afghan mujahideen. While they were backed by various countries and organizations, the majority of the mujahideen's support came from Pakistan, the United States (as part of Operation Cyclone), the United Kingdom, China, Iran, and the Arab states of the Persian Gulf, in addition to a large influx of foreign fighters known as the Afghan Arabs. American and British involvement on the side of the mujahideen escalated the Cold War, ending a short period of relaxed Soviet Union–United States relations. Combat took place throughout the 1980s, mostly in the Afghan countryside, as most of the country's cities remained under Soviet control. The conflict resulted in the deaths of one to three million Afghans, while millions more fled from the country as refugees; most externally displaced Afghans sought refuge in Pakistan and in Iran. Between 6.5 and 11.5% of Afghanistan's erstwhile population of 13.5 million people (per the 1979 census) is estimated to have been killed over the course of the Soviet–Afghan War. The decade-long confrontation between the mujahideen and the Soviet and Afghan militaries inflicted grave destruction throughout Afghanistan and has also been cited by scholars as a significant factor that contributed to the dissolution of the Soviet Union in 1991; it is for this reason that the conflict is sometimes referred to as "the Soviet Union's Vietnam" in retrospective analyses.
In March 1979, there had been a violent uprising in Herat, in which a number of Soviet military advisers were executed. The ruling People's Democratic Party of Afghanistan (PDPA), which had determined that it could not subdue the uprising by itself, asked for urgent Soviet military assistance; over 20 such requests were sent in 1979. Soviet premier Alexei Kosygin, declining to send troops, advised Afghan prime minister Nur Muhammad Taraki in one call to use local industrial workers in the province, apparently in the belief that these workers would support the Afghan government. The matter was discussed further in the Soviet Union with a wide range of views, mainly split between those who wanted to ensure that Afghanistan remained a socialist state and those concerned that the unrest would escalate. Eventually, a compromise was reached to send military aid, but not troops.
The conflict began when the Soviet military, under the command of Leonid Brezhnev, moved into Afghanistan to support the Afghan administration that had been installed during Operation Storm-333. Debate over their presence in the country soon ensued in international channels, with the Muslim world and the Western Bloc classifying it as an invasion, while the Eastern Bloc asserted that it was a legal intervention. Nevertheless, numerous sanctions and embargoes were imposed on the Soviet Union by the international community shortly after the beginning of the conflict. Soviet troops occupied Afghanistan's major cities and all main arteries of communication, whereas the mujahideen waged guerrilla warfare in small groups across the 80% of the country that was not subject to uncontested Soviet control—almost exclusively comprising the rugged, mountainous terrain of the countryside. In addition to laying millions of landmines across Afghanistan, the Soviets used their aerial power to deal harshly with both Afghan resistance and civilians, levelling villages to deny safe haven to the mujahideen, destroying vital irrigation ditches and other infrastructure through tactics of scorched earth.
The Soviet government had initially planned to swiftly secure Afghanistan's towns and road networks, stabilize the PDPA, and withdraw all of their military forces in a span of six months to one year. However, they were met with fierce resistance from Afghan guerrillas and experienced great operational difficulties on the rugged mountainous terrain. By the mid-1980s, the Soviet military presence in Afghanistan had increased to approximately 115,000 troops and fighting across the country intensified; the complication of the war effort gradually inflicted a high cost on the Soviet Union as military, economic, and political resources became increasingly exhausted. By mid-1987, reformist Soviet leader Mikhail Gorbachev announced that the Soviet military would begin a complete withdrawal from Afghanistan. The final wave of disengagement was initiated on 15 May 1988, and on 15 February 1989, the last Soviet military column occupying Afghanistan crossed into the Uzbek SSR. With continued external Soviet backing, the PDPA government pursued a solo war effort against the mujahideen, and the conflict evolved into the Afghan Civil War. However, following the dissolution of the Soviet Union in December 1991, all support to the Democratic Republic was pulled, leading to the toppling of the government at the hands of the mujahideen in 1992 and the start of a second Afghan Civil War shortly thereafter.
Naming
In Afghanistan, the war is usually called the Soviet war in Afghanistan. In Russia and elsewhere in the former Soviet Union, it is usually called the Afghan war; it is sometimes simply referred to as "Afgan", with the understanding that this refers to the war (just as the Vietnam War is often called "Vietnam" or just "Nam" in the United States). It is also known as the Afghan jihad, especially by the non-Afghan volunteers of the mujahideen.
Background
Russian interest in Central Asia
In the 19th century, the British Empire was fearful that the Russian Empire would invade Afghanistan and use it to threaten the large British colonies in India. This regional rivalry was called the "Great Game". In 1885, Russian forces seized a disputed oasis south of the Oxus River from Afghan forces, which became known as the Panjdeh Incident. The border was agreed by the joint Anglo-Russian Afghan Boundary Commission of 1885–87. The Russian interest in Afghanistan continued through the Soviet era, with billions in economic and military aid sent to Afghanistan between 1955 and 1978.
Following Amanullah Khan's ascent to the throne in 1919 and the subsequent Third Anglo-Afghan War, the British conceded Afghanistan's full independence. King Amanullah afterwards wrote to Russia (now under Bolshevik control) desiring for permanent friendly relations. Vladimir Lenin replied by congratulating the Afghans for their defence against the British, and a treaty of friendship between Afghanistan and Russia was finalized in 1921. The Soviets saw possibilities in an alliance with Afghanistan against the United Kingdom, such as using it as a base for a revolutionary advance towards British-controlled India.
The Red Army intervened in Afghanistan to suppress the Islamic Basmachi movement in 1929 and 1930, supporting the ousted king Amanullah, as part of the Afghan Civil War (1928–1929). The Basmachi movement had originated in a 1916 revolt against Russian conscription during World War I, bolstered by Turkish general Enver Pasha during the Caucasus campaign. Afterwards, the Soviet Army deployed around 120,000–160,000 troops in Central Asia, a force similar to the peak strength of the Soviet intervention in Afghanistan in size. By 1926–1928, the Basmachis were mostly defeated by the Soviets, and Central Asia was incorporated into the Soviet Union. In 1929, the Basmachi rebellion reignited, associated with anti-forced collectivization riots. Basmachis crossed over into Afghanistan under Ibrahim Bek, which gave a pretext for the Red Army interventions in 1929 and 1930.
Soviet–Afghan relations post-1920s
The Soviet Union (USSR) had been a major power broker and influential mentor in Afghan politics, its involvement ranging from civil-military infrastructure to Afghan society. Since 1947, Afghanistan had been under the influence of the Soviet government and received large amounts of aid, economic assistance, military equipment, training, and hardware from the Soviet Union. Economic assistance and aid had been provided to Afghanistan as early as 1919, shortly after the Russian Revolution and while the regime was facing the Russian Civil War. Provisions were given in the form of small arms, ammunition, a few aircraft, and (according to debated Soviet sources) a million gold rubles to support the resistance during the Third Anglo-Afghan War in 1919. In 1942, the USSR again moved to strengthen the Afghan Armed Forces by providing small arms and aircraft and establishing training centers in Tashkent, Uzbek SSR. Soviet-Afghan military cooperation began on a regular basis in 1956, and further agreements were made in the 1970s, which saw the USSR send advisers and specialists. The Soviets also had interests in Afghanistan's energy resources, including oil and natural gas exploration from the 1950s and 1960s, and began importing Afghan gas in 1968. Between 1954 and 1977, the Soviet Union provided Afghanistan with economic aid worth about 1 billion rubles.
Afghanistan-Pakistan border
In the 19th century, with the Czarist Russian forces moving closer to the Pamir Mountains, near the border with British India, civil servant Mortimer Durand was sent to outline a border, likely in order to control the Khyber Pass. The demarcation of the mountainous region resulted in an agreement, signed with the Afghan Emir, Abdur Rahman Khan, in 1893. It became known as the Durand Line.
In 1947, the Prime Minister of the Kingdom of Afghanistan, Mohammad Daoud Khan, rejected the Durand Line, which had been accepted as an international border by successive Afghan governments for over half a century.
The British Raj also came to an end, and the Dominion of Pakistan gained independence from British India and inherited the Durand Line as its frontier with Afghanistan.
Under the regime of Daoud Khan, Afghanistan had hostile relations with both Pakistan and Iran. Like all previous Afghan rulers since 1901, Daoud Khan also wanted to emulate Emir Abdur Rahman Khan and unite his divided country.
To do that, he needed a popular cause to unite the Afghan people, who were divided along tribal lines, and a modern, well-equipped Afghan army that could be used to suppress anyone who opposed the Afghan government. His Pashtunistan policy was to annex the Pashtun areas of Pakistan, and he used this policy for his own benefit.
Daoud Khan's irredentist foreign policy to reunite the Pashtun homeland caused much tension with Pakistan, a state that allied itself with the United States. The policy had also angered the non-Pashtun population of Afghanistan, and similarly, the Pashtun population in Pakistan were also not interested in having their areas being annexed by Afghanistan. In 1951, the U.S. State Department urged Afghanistan to drop its claim against Pakistan and accept the Durand Line.
1960s–1970s: Proxy war
In 1954, the United States began selling arms to its ally Pakistan, while refusing an Afghan request to buy arms, out of fear that the Afghans would use the weapons against Pakistan. As a consequence, Afghanistan, though officially neutral in the Cold War, drew closer to India and the Soviet Union, which were willing to sell them weapons. In 1962, China defeated India in a border war, and as a result, China formed an alliance with Pakistan against their common enemy, India, pushing Afghanistan even closer to India and the Soviet Union.
In 1960 and 1961, the Afghan Army, on the orders of Daoud Khan pursuing his policy of Pashtun irredentism, made two unsuccessful incursions into Pakistan's Bajaur District. In both cases, the Afghan army was routed, suffering heavy casualties. In response, Pakistan closed its consulate in Afghanistan and blocked all trade routes through the Pakistan–Afghanistan border. This damaged Afghanistan's economy and pushed Daoud's regime towards a closer trade alliance with the Soviet Union. These stopgap measures, however, were not enough to compensate for the losses that Afghanistan's economy suffered because of the border closure. As a result of continued resentment against Daoud's autocratic rule, his close ties with the Soviet Union, and the economic downturn, Daoud Khan was forced to resign by the King of Afghanistan, Mohammed Zahir Shah. Following his resignation, the crisis between Pakistan and Afghanistan was resolved and Pakistan re-opened the trade routes. After the removal of Daoud Khan, the King installed a new prime minister and began balancing Afghanistan's relations with the West and the Soviet Union, which angered the Soviet Union.
1973 coup d'état
In 1973, Daoud Khan, supported by Soviet-trained Afghan Army officers and a large base of the Afghan Commando Forces, seized power from the King in a bloodless coup and established the first Afghan republic. Following his return to power, Daoud revived his Pashtunistan policy and for the first time began waging a proxy war against Pakistan, supporting anti-Pakistani groups and providing them with arms, training and sanctuary. The Pakistani government of prime minister Zulfikar Ali Bhutto was alarmed by this. The Soviet Union also supported Daoud Khan's militancy against Pakistan, as it wanted to weaken Pakistan, an ally of both the United States and China. However, it did not openly try to create problems for Pakistan, as that would damage the Soviet Union's relations with other Islamic countries; it therefore relied on Daoud Khan to weaken Pakistan. The Soviets had the same thought regarding Iran, another major U.S. ally. They also believed that Afghanistan's hostile behaviour towards Pakistan and Iran could alienate Afghanistan from the West and force it into a closer relationship with the Soviet Union. The pro-Soviet Afghans, such as the People's Democratic Party of Afghanistan (PDPA), also supported Daoud Khan's hostility towards Pakistan, believing that a conflict with Pakistan would induce Afghanistan to seek aid from the Soviet Union and thereby allow them to establish their influence over Afghanistan.
In response to Afghanistan's proxy war, Pakistan started supporting Afghans who were critical of Daoud Khan's policies. Bhutto authorized a covert operation under MI's Major-General Naseerullah Babar. In 1974, Bhutto authorized another secret operation in Kabul where the Inter-Services Intelligence (ISI) and the Air Intelligence of Pakistan (AI) extradited Burhanuddin Rabbani, Gulbuddin Hekmatyar and Ahmad Shah Massoud to Peshawar, amid fear that Rabbani, Hekmatyar and Massoud might be assassinated by Daoud. According to Babar, Bhutto's operation was an excellent idea that had a hard-hitting impact on Daoud and his government, increasing Daoud's desire to make peace with Bhutto. Pakistan's goal was to overthrow Daoud's regime and establish an Islamist theocracy in its place. The first ISI operation in Afghanistan took place in 1975, supporting militants from the Jamiat-e Islami party, led by Ahmad Shah Massoud, in an attempt to overthrow the government. They started their rebellion in the Panjshir valley, but a lack of popular support, combined with government forces easily defeating them, made it a failure, and a sizable portion of the insurgents sought refuge in Pakistan, where they enjoyed the support of Bhutto's government.
The 1975 rebellion, though unsuccessful, shook President Daoud Khan and made him realize that a friendly Pakistan was in his best interests. He started improving relations with Pakistan and made state visits there in 1976 and 1978. During the 1978 visit, he agreed to stop supporting anti-Pakistan militants and to expel any remaining militants in Afghanistan. In 1975, Daoud Khan established his own party, the National Revolutionary Party of Afghanistan and outlawed all other parties. He then started removing members of its Parcham wing from government positions, including the ones who had supported his coup, and started replacing them with familiar faces from Kabul's traditional government elites. Daoud also started reducing his dependence on the Soviet Union. As a consequence of Daoud's actions, Afghanistan's relations with the Soviet Union deteriorated. In 1978, after witnessing India's nuclear test, Smiling Buddha, Daoud Khan initiated a military buildup to counter Pakistan's armed forces and Iranian military influence in Afghan politics.
Saur Revolution of 1978
The Marxist People's Democratic Party of Afghanistan's strength grew considerably after its foundation. In 1967, the PDPA split into two rival factions, the Khalq (Masses) faction headed by Nur Muhammad Taraki and the Parcham (Flag) faction led by Babrak Karmal. Symbolic of the different backgrounds of the two factions was the fact that Taraki's father was a poor Pashtun herdsman while Karmal's father was a Tajik general in the Royal Afghan Army. More importantly, the radical Khalq faction believed in rapidly transforming Afghanistan from a feudal system into a Communist society, even through violence if necessary, while the moderate Parcham faction favored a more gradualist and gentler approach, arguing that Afghanistan was simply not ready for Communism and would not be for some time. The Parcham faction favored building up the PDPA as a mass party in support of the Daoud Khan government, while the Khalq faction was organized in the Leninist style as a small, tightly organized elite group, allowing the latter to enjoy ascendancy over the former. In 1971, the U.S. Embassy in Kabul reported that there had been increasing leftist activity in the country, attributed to disillusionment with social and economic conditions and the poor response from the Kingdom's leadership. It added that the PDPA was "perhaps the most disgruntled and organized of the country's leftist groups."
Intense opposition from factions of the PDPA was sparked by the repression imposed on them by Daoud's regime and the death of a leading PDPA member, Mir Akbar Khyber. The mysterious circumstances of Khyber's death sparked massive anti-Daoud demonstrations in Kabul, which resulted in the arrest of several prominent PDPA leaders. On 27 April 1978, the Afghan Army, which had been sympathetic to the PDPA cause, overthrew and executed Daoud along with members of his family. The Finnish scholar Raimo Väyrynen wrote about the so-called "Saur Revolution": "There is a multitude of speculations on the real nature of this coup. The reality appears to be that it was inspired first of all by domestic economic and political concerns and that the Soviet Union did not play any role in the Saur Revolution". After this the Democratic Republic of Afghanistan (DRA) was formed. Nur Muhammad Taraki, General Secretary of the People's Democratic Party of Afghanistan, became Chairman of the Revolutionary Council and Chairman of the Council of Ministers of the newly established Democratic Republic of Afghanistan. On 5 December 1978, a treaty of friendship was signed between the Soviet Union and Afghanistan.
"Red Terror" of the revolutionary government
After the revolution, Taraki assumed the leadership, prime ministership and general secretaryship of the PDPA. As with the party before it, the government never referred to itself as "communist". The government was divided along factional lines, with Taraki and Deputy Prime Minister Hafizullah Amin of the Khalq faction pitted against Parcham leaders such as Babrak Karmal. Though the new regime promptly allied itself to the Soviet Union, many Soviet diplomats believed that the Khalqi plans to transform Afghanistan would provoke a rebellion from the general population, which was socially and religiously conservative. Immediately after coming to power, the Khalqis began to persecute the Parchamis, not least because the Soviet Union favored the Parchami faction, whose "go slow" plans were felt to be better suited for Afghanistan, leading the Khalqis to eliminate their rivals so the Soviets would have no choice but to back them. Within the PDPA, conflicts resulted in exiles, purges and executions of Parcham members. The Khalq state executed between 10,000 and 27,000 people, mostly at Pul-e-Charkhi prison, prior to the Soviet intervention. Political scientist Olivier Roy estimated that between 50,000 and 100,000 people disappeared during the Taraki–Amin period.
During its first 18 months of rule, the PDPA applied a Soviet-style program of modernizing reforms, many of which were viewed by conservatives as opposing Islam. Decrees setting forth changes in marriage customs and land reform were not received well by a population deeply immersed in tradition and Islam, particularly by the powerful landowners harmed economically by the abolition of usury (although usury is prohibited in Islam) and the cancellation of farmers' debts. The new government also enhanced women's rights, sought a rapid eradication of illiteracy and promoted Afghanistan's ethnic minorities, although these programs appear to have had an effect only in the urban areas. By mid-1978, a rebellion started, with rebels attacking the local military garrison in the Nuristan region of eastern Afghanistan and soon civil war spread throughout the country. In September 1979, Deputy Prime Minister Hafizullah Amin seized power, arresting and killing Taraki. More than two months of instability overwhelmed Amin's regime as he moved against his opponents in the PDPA and the growing rebellion.
Affairs with the USSR after the revolution
Even before the revolutionaries came to power, Afghanistan was "a militarily and politically neutral nation, effectively dependent on the Soviet Union." A treaty, signed in December 1978, allowed the Democratic Republic to call upon the Soviet Union for military support.
Following the Herat uprising, the first major sign of anti-regime resistance, General Secretary Taraki contacted Alexei Kosygin, chairman of the USSR Council of Ministers and asked for "practical and technical assistance with men and armament". Kosygin was unfavorable to the proposal on the basis of the negative political repercussions such an action would have for his country, and he rejected all further attempts by Taraki to solicit Soviet military aid in Afghanistan. Following Kosygin's rejection, Taraki requested aid from Leonid Brezhnev, the general secretary of the Communist Party of the Soviet Union and Soviet head of state, who warned Taraki that full Soviet intervention "would only play into the hands of our enemies – both yours and ours". Brezhnev also advised Taraki to ease up on the drastic social reforms and to seek broader support for his regime.
In 1979, Taraki attended a conference of the Non-Aligned Movement in Havana, Cuba. On his way back, he stopped in Moscow on 20 March and met with Brezhnev, Soviet Foreign Minister Andrei Gromyko and other Soviet officials. It was rumoured that Karmal was present at the meeting in an attempt to reconcile Taraki's Khalq faction and the Parcham against Amin and his followers. At the meeting, Taraki was successful in negotiating some Soviet support, including the redeployment of two Soviet armed divisions at the Soviet-Afghan border, the sending of 500 military and civilian advisers and specialists and the immediate delivery of Soviet armed equipment sold at 25 percent below the original price; however, the Soviets were not pleased about the developments in Afghanistan and Brezhnev impressed upon Taraki the need for party unity. Despite reaching this agreement with Taraki, the Soviets continued to be reluctant to intervene further in Afghanistan and repeatedly refused Soviet military intervention within Afghan borders during Taraki's rule as well as later during Amin's short rule.
Taraki and Amin's regime even attempted to eliminate Parcham's leader Babrak Karmal. After being relieved of his duties as ambassador, he remained in Czechoslovakia in exile, fearing for his life if he returned as the regime requested. He and his family were protected by the Czechoslovak StB; files from January 1979 revealed information that Afghanistan sent AGSA spies to Czechoslovakia to find and assassinate Karmal.
Initiation of the rebellion
In 1978, the Taraki government initiated a series of reforms, including a radical modernization of the traditional Islamic civil law, especially marriage law, aimed at "uprooting feudalism" in Afghan society. The government brooked no opposition to the reforms and responded with violence to unrest. Between April 1978 and the Soviet Intervention of December 1979, thousands of prisoners, perhaps as many as 27,000, were executed at the notorious Pul-e-Charkhi prison, including many village mullahs and headmen. Other members of the traditional elite, the religious establishment and intelligentsia fled the country.
Large parts of the country went into open rebellion. The Parcham Government claimed that 11,000 were executed during the Amin/Taraki period in response to the revolts. The revolt began in October among the Nuristani tribes of the Kunar Valley in the northeastern part of the country near the border with Pakistan, and rapidly spread among the other ethnic groups. By the spring of 1979, 24 of the 28 provinces had suffered outbreaks of violence. The rebellion began to take hold in the cities: in March 1979 in Herat, rebels led by Ismail Khan revolted. Between 3,000 and 5,000 people were killed and wounded during the Herat revolt. Some 100 Soviet citizens and their families were killed. By August 1979, up to 165,000 Afghans had fled across the border to Pakistan. The main reason the revolt spread so widely was the disintegration of the Afghan army in a series of insurrections. The numbers of the Afghan army fell from 110,000 men in 1978 to 25,000 by 1980. The U.S. embassy in Kabul cabled to Washington the army was melting away "like an ice floe in a tropical sea". According to scholar Gilles Dorronsoro, it was the violence of the state rather than its reforms that caused the uprisings.
Pakistan–U.S. relations and rebel aid
Pakistani intelligence officials began privately lobbying the U.S. and its allies to send materiel assistance to the Islamist rebels. Pakistani President Muhammad Zia-ul-Haq's ties with the U.S. had been strained during Jimmy Carter's presidency due to Pakistan's nuclear program and the execution of Zulfikar Ali Bhutto in April 1979, but Carter told National Security Adviser Zbigniew Brzezinski and Secretary of State Cyrus Vance as early as January 1979 that it was vital to "repair our relationships with Pakistan" in light of the unrest in Iran. According to former Central Intelligence Agency (CIA) official Robert Gates, "the Carter administration turned to CIA ... to counter Soviet and Cuban aggression in the Third World, particularly beginning in mid-1979." In March 1979, "CIA sent several covert action options relating to Afghanistan to the SCC [Special Coordination Committee]" of the United States National Security Council. At a 30 March meeting, U.S. Department of Defense representative Walter B. Slocombe "asked if there was value in keeping the Afghan insurgency going, 'sucking the Soviets into a Vietnamese quagmire?'" When asked to clarify this remark, Slocombe explained: "Well, the whole idea was that if the Soviets decided to strike at this tar baby [Afghanistan] we had every interest in making sure that they got stuck." Yet a 5 April memo from National Intelligence Officer Arnold Horelick warned: "Covert action would raise the costs to the Soviets and inflame Moslem opinion against them in many countries. The risk was that a substantial U.S. covert aid program could raise the stakes and induce the Soviets to intervene more directly and vigorously than otherwise intended."
In May 1979, U.S. officials secretly began meeting with rebel leaders through Pakistani government contacts. After additional meetings Carter signed two presidential findings in July 1979 permitting the CIA to spend $695,000 on non-military assistance (e.g., "cash, medical equipment, and radio transmitters") and on a propaganda campaign targeting the Soviet-backed leadership of the DRA, which (in the words of Steve Coll) "seemed at the time a small beginning."
Soviet deployment, 1979
The Amin government, having secured a treaty in December 1978 that allowed them to call on Soviet forces, repeatedly requested the introduction of troops in Afghanistan in the spring and summer of 1979. They requested Soviet troops to provide security and to assist in the fight against the mujahideen ("Those engaged in jihad") rebels. After the killing of Soviet technicians in Herat by rioting mobs, the Soviet government sold several Mi-24 helicopters to the Afghan military. On 14 April 1979, the Afghan government requested that the USSR send 15 to 20 helicopters with their crews to Afghanistan, and on 16 June, the Soviet government responded and sent a detachment of tanks, BMPs, and crews to guard the government in Kabul and to secure the Bagram and Shindand air bases. In response to this request, an airborne battalion, commanded by Lieutenant Colonel A. Lomakin, arrived at Bagram on 7 July. They arrived without their combat gear, disguised as technical specialists. They were the personal bodyguards for General Secretary Taraki. The paratroopers were directly subordinate to the senior Soviet military advisor and did not interfere in Afghan politics. Several leading politicians at the time such as Alexei Kosygin and Andrei Gromyko were against intervention.
After a month, the Afghan requests were no longer for individual crews and subunits, but for regiments and larger units. In July, the Afghan government requested that two motorized rifle divisions be sent to Afghanistan. The following day, they requested an airborne division in addition to the earlier requests. They repeated these requests and variants to these requests over the following months right up to December 1979. However, the Soviet government was in no hurry to grant them.
Based on information from the KGB, Soviet leaders felt that Prime Minister Hafizullah Amin's actions had destabilized the situation in Afghanistan. Following his initial coup against and killing of Taraki, the KGB station in Kabul warned Moscow that Amin's leadership would lead to "harsh repressions, and as a result, the activation and consolidation of the opposition."
The Soviets established a special commission on Afghanistan, comprising the KGB chairman Yuri Andropov, Boris Ponomarev from the Central Committee and Dmitry Ustinov, the Minister of Defence. In late April 1979, the committee reported that Amin was purging his opponents, including Soviet loyalists, that his loyalty to Moscow was in question and that he was seeking diplomatic links with Pakistan and possibly the People's Republic of China (which at the time had poor relations with the Soviet Union). Of specific concern were Amin's supposed meetings with the U.S. chargé d'affaires, J. Bruce Amstutz, which were used as a justification for the invasion by the Kremlin.
Information forged by the KGB from its agents in Kabul provided the last arguments to eliminate Amin. Supposedly, two of Amin's guards killed the former General Secretary Nur Muhammad Taraki with a pillow, and Amin himself was portrayed as a CIA agent. The latter is widely discredited, with Amin repeatedly demonstrating friendliness toward the various delegates of the Soviet Union in Afghanistan and maintaining the pro-Soviet line. Soviet General Vasily Zaplatin, a political advisor of Premier Brezhnev at the time, claimed that four of General Secretary Taraki's ministers were responsible for the destabilization. However, Zaplatin failed to emphasize this in discussions and was not heard.
During meetings between General Secretary Taraki and Soviet leaders in March 1979, the Soviets promised political support and to send military equipment and technical specialists, but upon repeated requests by Taraki for direct Soviet intervention, the leadership adamantly opposed him; reasons included that they would be met with "bitter resentment" from the Afghan people, that intervening in another country's civil war would hand a propaganda victory to their opponents, and Afghanistan's overall inconsequential weight in international affairs, in essence realizing they had little to gain by taking over a country with a poor economy, unstable government, and population hostile to outsiders. However, as the situation continued to deteriorate from May–December 1979, Moscow changed its mind on dispatching Soviet troops. The reasons for this complete turnabout are not entirely clear, and several speculative arguments include: the grave internal situation and inability for the Afghan government to retain power much longer; the effects of the Iranian Revolution that brought an Islamic theocracy into power, leading to fears that religious fanaticism would spread through Afghanistan and into Soviet Muslim Central Asian republics; Taraki's murder and replacement by Amin, who the Soviet leadership believed had secret contacts within the American embassy in Kabul and "was capable of reaching an agreement with the United States"; however, allegations of Amin colluding with the Americans have been widely discredited and it was revealed in the 1990s that the KGB actually planted the story; and the deteriorating ties with the United States after NATO's two-track missile deployment decision in response to Soviet nuclear presence in Eastern Europe and the failure of Congress to ratify the SALT II treaty, creating the impression that détente was "already effectively dead."
The British journalist Patrick Brogan wrote in 1989: "The simplest explanation is probably the best. They got sucked into Afghanistan much as the United States got sucked into Vietnam, without clearly thinking through the consequences, and wildly underestimating the hostility they would arouse". By the fall of 1979, the Amin regime was collapsing with morale in the Afghan Army having fallen to rock-bottom levels, while the mujahideen had taken control of much of the countryside. The general consensus amongst Afghan experts at the time was that it was not a question of if, but when the mujahideen would take Kabul.
In October 1979, the KGB Spetsnaz force Zenith covertly dispatched a group of specialists to determine the likely reaction of local Afghans to a presence of Soviet troops. They concluded that deploying troops would be unwise and could lead to war, but this was reportedly ignored by KGB chairman Yuri Andropov. A Spetsnaz battalion of Central Asian troops, dressed in Afghan Army uniforms, was covertly deployed to Kabul between 9 and 12 November 1979. A few days later it moved to the Tajbeg Palace, to which Amin was relocating.
In Moscow, Leonid Brezhnev was indecisive and waffled as he usually did when faced with a difficult decision. The three decision-makers in Moscow who pressed the hardest for an invasion in the fall of 1979 were the troika consisting of Foreign Minister Andrei Gromyko; the Chairman of KGB, Yuri Andropov, and the Defense Minister Marshal Dmitry Ustinov. The principal reasons for the invasion were the belief in Moscow that Amin was a leader both incompetent and fanatical who had lost control of the situation, together with the belief that it was the United States via Pakistan who was sponsoring the Islamist insurgency in Afghanistan. Andropov, Gromyko and Ustinov all argued that if a radical Islamist regime came to power in Kabul, it would attempt to sponsor radical Islam in Soviet Central Asia, thereby requiring a preemptive strike. What was envisioned in the fall of 1979 was a short intervention under which Moscow would replace radical Khalqi Communist Amin with the moderate Parchami Communist Babrak Karmal to stabilize the situation. Contrary to the contemporary view of Brzezinski and the regional powers, access to the Persian Gulf played no role in the decision to intervene on the Soviet side.
The concerns raised by the Chief of the Soviet Army General Staff, Marshal Nikolai Ogarkov who warned about the possibility of a protracted guerrilla war, were dismissed by the troika who insisted that any occupation of Afghanistan would be short and relatively painless. Most notably, though the diplomats of the Narkomindel at the Embassy in Kabul and the KGB officers stationed in Afghanistan were well informed about the developments in that country, such information rarely filtered through to the decision-makers in Moscow who viewed Afghanistan more in the context of the Cold War rather than understanding Afghanistan as a subject in its own right. The viewpoint that it was the United States that was fomenting the Islamic insurgency in Afghanistan with the aim of destabilizing Soviet-dominated Central Asia tended to downplay the effects of an unpopular Communist government pursuing policies that the majority of Afghans violently disliked as a generator of the insurgency and strengthened those who argued some sort of Soviet response was required to a supposed "outrageous American provocation." It was assumed in Moscow that because Pakistan (an ally of both the United States and China) was supporting the mujahideen that therefore it was ultimately the United States and China who were behind the rebellion in Afghanistan.
Amin's revolutionary government had lost credibility with virtually all of the Afghan population. A combination of chaotic administration, excessive brutality from the secret police, unpopular domestic reforms, and a deteriorating economy, along with public perceptions that the state was atheistic and anti-Islamic, all added to the government's unpopularity. After 20 months of Khalqist rule, the country deteriorated in almost every facet of life. The Soviet Union believed that without intervention, Amin's government would have been disintegrated by the resistance and the country would have been "lost" to a regime most likely hostile to the USSR.
Soviet invasion and palace coup
On 31 October 1979, Soviet informants under orders from the inner circle of advisors around Soviet General Secretary Leonid Brezhnev relayed information to the Afghan Armed Forces for them to undergo maintenance cycles for their tanks and other crucial equipment. Meanwhile, telecommunications links to areas outside of Kabul were severed, isolating the capital.
The Soviet 40th Army launched its initial incursion into Afghanistan on 25 December under the pretext of extending "international aid" to its puppet Democratic Republic of Afghanistan. On 25 December, Soviet Defence Minister Dmitry Ustinov issued an official order, stating that "[t]he state frontier of the Democratic Republic of Afghanistan is to be crossed on the ground and in the air by forces of the 40th Army and the Air Force at 15:00 hrs on 25 December". This was the formal beginning of the Soviet invasion of Afghanistan. Subsequently, on 27 December, Soviet troops arrived at Kabul International Airport, causing a stir among the city's residents.
Simultaneously, Amin moved the offices of the General Secretary to the Tajbeg Palace, believing this location to be more secure from possible threats. According to Colonel General Tukharinov and Merimsky, Amin was fully informed of the military movements, having requested Soviet military assistance to northern Afghanistan on 17 December. His brother and General Dmitry Chiangov met with the commander of the 40th Army before Soviet troops entered the country, to work out initial routes and locations for Soviet troops.
On 27 December 1979, 700 Soviet troops dressed in Afghan uniforms, including KGB and GRU special forces officers from the Alpha Group and Zenith Group, occupied major governmental, military and media buildings in Kabul, including their primary target, the Tajbeg Palace. The operation began at 19:00, when the KGB-led Soviet Zenith Group destroyed Kabul's communications hub, paralyzing Afghan military command. At 19:15, the assault on Tajbeg Palace began; as planned, General Secretary Hafizullah Amin was assassinated. Simultaneously, other key buildings were occupied (e.g., the Ministry of Interior Affairs at 19:15). The operation was fully complete by the morning of 28 December 1979.
The Soviet military command at Termez, Uzbek SSR, announced on Radio Kabul that Afghanistan had been "liberated" from Amin's rule. According to the Soviet Politburo, they were complying with the 1978 Treaty of Friendship, Cooperation and Good Neighborliness, and Amin had been "executed by a tribunal for his crimes" by the Afghan Revolutionary Central Committee. That committee then installed as head of government former Deputy Prime Minister Babrak Karmal, who had been demoted to the relatively insignificant post of ambassador to Czechoslovakia following the Khalq takeover, and announced that it had requested Soviet military assistance.
Soviet ground forces, under the command of Marshal Sergey Sokolov, entered Afghanistan from the north on 27 December. In the morning, the 103rd Guards 'Vitebsk' Airborne Division landed at the airport at Bagram and the deployment of Soviet troops in Afghanistan was underway. The force that entered Afghanistan, in addition to the 103rd Guards Airborne Division, was under command of the 40th Army and consisted of the 108th and 5th Guards Motor Rifle Divisions, the 860th Separate Motor Rifle Regiment, the 56th Separate Airborne Assault Brigade, and the 36th Mixed Air Corps. Later on, the 201st and 68th Motor Rifle Divisions also entered the country, along with other smaller units. In all, the initial Soviet force was around 1,800 tanks, 80,000 soldiers and 2,000 AFVs. In the second week alone, Soviet aircraft had made a total of 4,000 flights into Kabul. With the arrival of the two later divisions, the total Soviet force rose to over 100,000 personnel.
As part of Baikal-79, a larger operation aimed at taking 20 key strongholds in and around Kabul, the Soviet 105th Airborne Division secured the city and disarmed Afghan Army units without facing opposition. On 1 January 1980, Soviet paratroopers ordered the 26th Airborne Regiment in Bala Hissar to disarm, only for them to refuse and fire upon the Soviets as a firefight ensued. The Soviet paratroopers annihilated most of the regiment, with 700 Afghan paratroopers killed or captured. In the aftermath of the battle, the 26th Airborne Regiment was disbanded and later reorganized into the 37th Commando Brigade, led by Col. Shahnawaz Tanai, the largest commando formation at a strength of three battalions. As a result of the battle, the Soviet 357th Guards Airborne Regiment was permanently stationed in the Bala Hissar fortress, and the new brigade was stationed at the Rishkhor Garrison. In the same year, the 81st Artillery Brigade was given airborne training and converted into the 38th Commando Brigade, stationed in the Mahtab Qala (lit. "Moonlit Fortress") garrison southwest of Kabul under the command of Brig. Tawab Khan.
International positions on Soviet invasion
The Christmas-time invasion of a practically defenseless country was shocking for the international community, and caused a sense of alarm for its neighbor Pakistan. On 2 January 1980 President Carter withdrew the SALT-II treaty from consideration before the Senate, and on 3 January he recalled US Ambassador Thomas J. Watson from Moscow. On 9 January the United Nations Security Council passed Resolution 462. Following the resolution, the Sixth emergency special session of the United Nations General Assembly took place. Soviet military activities were met with strong criticism internationally, including some of its allies at the UN General Assembly (UNGA), but the Soviet machine scored a victory when, in the words of political scientist William Maley, "the General Assembly accepted the credentials of the delegation of the Soviet-installed puppet regime in Kabul which duly voted against the resolution." The UNGA passed a resolution on 15 January by a vote of 104–18 protesting the Soviet intervention in Afghanistan. On 29 January foreign ministers from 34 Muslim-majority countries adopted at the Organisation of Islamic Cooperation a resolution which condemned the Soviet intervention and demanded "the immediate, urgent and unconditional withdrawal of Soviet troops" from the Muslim nation of Afghanistan. According to political scientist Gilles Kepel, the Soviet intervention or invasion was viewed with "horror" in the West, considered to be a fresh twist on the geo-political "Great Game" of the 19th century in which Britain feared that Russia sought access to the Indian Ocean, and posed a threat to Western security, explicitly violating the world balance of power agreed upon at Yalta in 1945.
The general feeling in the United States was that inaction against the Soviet Union could encourage Moscow to go further in its international ambitions. President Carter placed a trade embargo against the Soviet Union on shipments of commodities such as grain, while also leading a 66-nation boycott of the 1980 Summer Olympics in Moscow. Carter later suspended high-technology exports to the Soviet Union. The invasion, along with other concurrent events such as the Iranian Revolution and the hostage stand-off that accompanied it, showed the volatility of the wider region for U.S. foreign policy, a point President Carter made in his Address to the Nation on 4 January.
China condemned the Soviet coup and its military buildup, stating that it posed a threat to Chinese security (both the Soviet Union and Afghanistan shared borders with China), that it marked the worst escalation of Soviet expansionism in over a decade, and that it served as a warning to other Third World leaders with close relations to the Soviet Union. Vice Premier Deng Xiaoping warmly praised the "heroic resistance" of the Afghan people. Beijing also stated that the lacklustre worldwide reaction against Vietnam (in the Sino-Vietnamese War earlier in 1979) had encouraged the Soviets to feel free to invade Afghanistan.
Ba'athist Syria, led by Hafez al-Assad, was one of the few states outside the Warsaw Pact that publicly favoured the invasion; in return, the Soviet Union expanded its military support to the Syrian government. The Soviet satellites of the Warsaw Pact (excluding Romania) publicly supported the intervention; however, a press account in June 1980 revealed that Poland, Hungary and Romania had privately informed the Soviet Union that the invasion was a damaging mistake.
In his 2009 book, Maley excoriated "the West", which "allowed the issues for these negotiations to be determined substantially by the USSR—a classic weakness of Western negotiating style. On 14 May 1980, the Kabul regime issued at Moscow's behest a statement directed at Iran and Pakistan, outlining a program for a 'political solution' to the 'tension that has come about in this region'. Its program was to be precisely mirrored in the agenda of the subsequent negotiations conducted under UN auspices, which dealt with the withdrawal of the foreign troops, non-interference in the internal affairs of states, international guarantees, and the voluntary return of the refugees to their homes. This was a notable victory for the Soviet Union: the issue of self-determination for the Afghan people, also mentioned by the General Assembly, of course did not figure in Kabul's program, and its exclusion effectively subordinated the General Assembly's conditions for an acceptable settlement to those specified by the Soviet leadership."
Military aid
Weapons supplies were made available through numerous countries. Before the Soviet intervention, the insurgents received support from the United States, Pakistan, Saudi Arabia, Egypt, Libya and Kuwait, albeit on a limited scale. After the intervention, aid was substantially increased. The US clandestinely purchased all of Israel's captured Soviet weapons and then funnelled them to the Mujahideen, while Egypt upgraded its army's weapons and sent the older weapons to the militants. Turkey sold its World War II stockpiles to the warlords, and the British and Swiss provided Blowpipe missiles and Oerlikon anti-aircraft guns respectively, after they were found to be poor models for their own forces. China provided the most relevant weapons, likely due to its own experience with guerrilla warfare, and kept meticulous records of all the shipments. The US, Saudi and Chinese aid combined totaled between $6 billion and $12 billion.
State of the Cold War
In the wider Cold War, drastic changes were taking place in Southwestern Asia concurrent with the 1978–1979 upheavals in Afghanistan that changed the relationship between the two superpowers. In February 1979, the Iranian Revolution ousted the American-backed Shah from Iran, costing the United States one of its most powerful allies. The United States then deployed twenty ships in the Persian Gulf and the Arabian Sea, including two aircraft carriers, and there were constant threats of war between the U.S. and Iran.
American observers argued that the global balance of power had shifted to the Soviet Union following the emergence of several pro-Soviet regimes in the Third World in the latter half of the 1970s (such as in Nicaragua and Ethiopia), and the action in Afghanistan demonstrated the Soviet Union's expansionism.
March 1979 marked the signing of the U.S.-backed peace agreement between Israel and Egypt. The Soviet leadership saw the agreement as giving a major advantage to the United States. A Soviet newspaper stated that Egypt and Israel were now "gendarmes of the Pentagon". The Soviets viewed the treaty not only as a peace agreement between their erstwhile allies in Egypt and the US-supported Israelis but also as a military pact. In addition, the US sold more than 5,000 missiles to Saudi Arabia, and the USSR's previously strong relations with Iraq had recently soured, as in June 1978 it began entering into friendlier relations with the Western world and buying French and Italian-made weapons, though the vast majority still came from the Soviet Union, its Warsaw Pact satellites, and China.
The Soviet invasion has also been analyzed through the model of the resource curse. The 1979 Islamic Revolution in Iran drove a massive increase in the scarcity and price of oil, adding tens of billions of dollars to the Soviet economy; oil was the major source of revenue for the USSR, which spent 40–60% of its entire federal budget (15% of GDP) on the military. The oil boom may have overinflated national confidence, serving as a catalyst for the invasion. The Politburo, temporarily relieved of financial constraints, sought to fulfill a long-term geopolitical goal of seizing the lead in the region between Central Asia and the Gulf.
December 1979 – February 1980: Occupation and national unrest
The first phase of the war began with the Soviet invasion of Afghanistan and the first battles with various opposition groups. Soviet troops entered Afghanistan along two ground routes and one air corridor, quickly taking control of the major urban centers, military bases and strategic installations. However, the presence of Soviet troops did not have the desired effect of pacifying the country. On the contrary, it exacerbated nationalistic sentiment, causing the rebellion to spread further. Babrak Karmal, Afghanistan's new leader, charged the Soviets with causing an increase in the unrest, and demanded that the 40th Army step in and quell the rebellion, as his own army had proved untrustworthy. Thus, Soviet troops found themselves drawn into fighting against urban uprisings, tribal armies (called lashkar), and sometimes against mutinying Afghan Army units. These forces mostly fought in the open, and Soviet airpower and artillery made short work of them.
The Soviet occupation provoked a great deal of fear and unrest amongst a wide spectrum of the Afghan populace. The Soviets held the view that their presence would be accepted after having rid Afghanistan of the "tyrannical" Khalq regime, but this was not to be. In the first week of January 1980, attacks against Soviet soldiers in Kabul became common, with soldiers often assassinated in the city in broad daylight by civilians. In the summer of that year, numerous members of the ruling party were assassinated in individual attacks. The Soviet Army quit patrolling Kabul in January 1981 after its losses to such attacks, handing the responsibility over to the Afghan army. Tensions in Kabul peaked during the 3 Hoot uprising on 22 February 1980, when Soviet soldiers killed hundreds of protesters. The unrest in the city took a dangerous turn once again during the student demonstrations of April and May 1980, in which scores of students were killed by soldiers and PDPA sympathizers.
The opposition to the Soviet presence was great nationally, crossing regional, ethnic, and linguistic lines. Never before in Afghan history had this many people been united in opposition against an invading foreign power. In Kandahar a few days after the invasion, civilians rose up against Soviet soldiers, killing a number of them, causing the soldiers to withdraw to their garrison. In this city, 130 Khalqists were murdered between January and February 1980.
According to the Mitrokhin Archive, the Soviet Union deployed numerous active measures at the beginning of the intervention, spreading disinformation relating to both diplomatic status and military intelligence. These efforts focused on most countries bordering Afghanistan, on several international powers, on the Soviets' main adversary, the United States, and on neutral countries. The disinformation was deployed primarily by "leaking" forged documents, distributing leaflets, publishing nominally independent articles in Soviet-aligned press, and conveying reports to embassies through KGB residencies. Among the active measures pursued in 1980–1982 were both pro- and anti-separatist documents disseminated in Pakistan, a forged letter implying a Pakistani-Iranian alliance, alleged reports of U.S. bases on the Iranian border, information regarding Pakistan's military intentions filtered through the Pakistan embassy in Bangkok to the Carter Administration, and various disinformation about armed interference by India, Sri Lanka, Bangladesh, Nepal, Indonesia, Jordan, Italy, and France, among others.
Soviet occupation, 1980–1985
Soviet military operations against Afghan guerrillas
The war now developed into a new pattern: the Soviets occupied the cities and the main axes of communication, while the Afghan mujahideen, whom the Soviet Army soldiers called 'Dushman,' meaning 'enemy', divided into small groups and waged a guerrilla war in the mountains. Almost 80 percent of the country was outside government control. Soviet troops were deployed in strategic areas in the northeast, especially along the road from Termez to Kabul. In the west, a strong Soviet presence was maintained to counter Iranian influence; Soviet special units are also said to have carried out secret attacks on Iranian territory to destroy suspected Mujahideen bases, with their helicopters subsequently engaging in firefights with Iranian jets. Conversely, some regions, such as Nuristan in the northeast and Hazarajat in the central mountains of Afghanistan, were virtually untouched by the fighting and lived in almost complete independence.
Periodically the Soviet Army undertook multi-divisional offensives into Mujahideen-controlled areas. Between 1980 and 1985, nine offensives were launched into the strategically important Panjshir Valley, but government control in the area did not improve. Heavy fighting also occurred in the provinces neighbouring Pakistan, where cities and government outposts were constantly besieged by the Mujahideen. Massive Soviet operations would regularly break these sieges, but the Mujahideen would return as soon as the Soviets left. In the west and south, fighting was more sporadic, except in the cities of Herat and Kandahar, which were always partly controlled by the resistance.
The Soviets did not initially foresee taking on such an active role in fighting the rebels and attempted to play down their presence as light assistance to the Afghan army. However, the arrival of the Soviets had the opposite effect: it incensed rather than pacified the people, causing the Mujahideen to gain in strength and numbers. Originally the Soviets thought that their forces would strengthen the backbone of the Afghan army and provide assistance by securing major cities and lines of communication and transportation. The Afghan army forces had a high desertion rate and were loath to fight, especially since the Soviet forces pushed them into infantry roles while the Soviets manned the armored vehicles and artillery. The main reason for the Afghan soldiers' ineffectiveness, though, was their lack of morale: many were not truly loyal to the communist government but simply wanted a paycheck.
Once it became apparent that the Soviets would have to get their hands dirty, they followed three main strategies aimed at quelling the uprising. Intimidation was the first strategy, in which the Soviets would use airborne attacks and armored ground attacks to destroy villages, livestock and crops in troubled areas. The Soviets would bomb villages that were near sites of guerrilla attacks on Soviet convoys or known to support resistance groups. Local people were forced to either flee their homes or die, as daily Soviet attacks made it impossible to live in these areas. By forcing the people of Afghanistan to flee their homes, the Soviets hoped to deprive the guerrillas of resources and safe havens. The second strategy consisted of subversion, which entailed sending spies to join resistance groups and report information, as well as bribing local tribes or guerrilla leaders into ceasing operations. Finally, the Soviets used military forays into contested territories in an effort to root out the guerrillas and limit their options. Classic search-and-destroy operations were implemented using Mil Mi-24 helicopter gunships that would provide cover for ground forces in armored vehicles. Once villages were occupied by Soviet forces, inhabitants who remained were frequently interrogated and tortured for information or killed.
To complement their brute force approach to weeding out the insurgency, the Soviets used KHAD (Afghan secret police) to gather intelligence, infiltrate the Mujahideen, spread false information, bribe tribal militias into fighting and organize a government militia. While it is impossible to know exactly how successful KHAD was in infiltrating Mujahideen groups, it is thought that they succeeded in penetrating a good many resistance groups based in Afghanistan, Pakistan and Iran. KHAD is thought to have had particular success in igniting internal rivalries and political divisions amongst the resistance groups, rendering some of them completely useless because of infighting. KHAD had some success in securing tribal loyalties but many of these relationships were fickle and temporary. Often KHAD secured neutrality agreements rather than committed political alignment.
The Sarandoy were a centrally-commanded government paramilitary group placed under the control of the Ministry of Interior Affairs, before being placed under the control of the unified Ministry of State Security (WAD) in 1986. They had mixed success in the war; Osama bin Laden and the Arab mujahideen fought the Sarandoy's 7th Operative Regiment, only to fail and sustain massive casualties. The label "Sarandoy" additionally covered traffic police, provincial officers and corrections/labor facility officers. Large salaries and proper weapons attracted a good number of recruits to the cause, even if they were not necessarily "pro-communist". The problem was that many of the recruits they attracted were in fact Mujahideen who would join up to procure arms, ammunition and money while also gathering information about forthcoming military operations. By the end of 1981, there were reports of the Bulgarian Armed Forces being present in Mazar-i-Sharif, and of Warsaw Pact and Cuban Revolutionary Armed Forces personnel operating elsewhere in Afghanistan. A fighter of the Mujahideen, describing the Cubans in combat, said they were "big and black and shout very loudly when they fight. Unlike the Russians they were not afraid to attack us in the open".
In 1985, the size of the LCOSF (Limited Contingent of Soviet Forces) was increased to 108,800 and fighting increased throughout the country, making 1985 the bloodiest year of the war. However, despite suffering heavily, the Mujahideen were able to remain in the field, mostly because they received thousands of new volunteers daily, and continued resisting the Soviets.
Reforms of the Karmal administration
Babrak Karmal, after the invasion, promised reforms to win support from the population alienated by his ousted predecessors. A temporary constitution, the Fundamental Principles of the Democratic Republic of Afghanistan, was adopted in April 1980. On paper, it was a democratic constitution including "right of free expression" and disallowing "torture, persecution, and punishment, contrary to human dignity". Karmal's government was formed of his fellow Parchamites along with (pro-Taraki) Khalqists, and a number of known non-communists/leftists in various ministries.
Karmal called his regime "a new evolutionary phase of the glorious April Revolution", but he failed at uniting the PDPA. In the eyes of many Afghans, he was still seen as a "puppet" of the Soviet Union.
Mujahideen insurrection
In the mid-1980s, the Afghan resistance movement, assisted by the United States, Pakistan, Saudi Arabia, the United Kingdom, Egypt, the People's Republic of China and others, contributed to Moscow's high military costs and strained international relations. The U.S. viewed the conflict in Afghanistan as an integral Cold War struggle, and the CIA provided assistance to anti-Soviet forces through the Pakistani intelligence services, in a program called Operation Cyclone.
Pakistan's North-West Frontier Province became a base for the Afghan resistance fighters, and the Deobandi ulama of that province played a significant role in the Afghan 'jihad', with Darul Uloom Haqqania becoming a prominent organisational and networking base for the anti-Soviet Afghan fighters. As well as money, Muslim countries provided thousands of volunteer fighters known as "Afghan Arabs", who wished to wage jihad against the atheist communists. Notable among them was a young Saudi named Osama bin Laden, whose Arab group eventually evolved into al-Qaeda. Despite their numbers, the Afghan Arabs' contribution has been called a "curious sideshow to the real fighting," with only an estimated 2,000 of them fighting "at any one time", compared with about 250,000 Afghan fighters and 125,000 Soviet troops.
Their efforts were also sometimes counterproductive, as in the March 1989 battle for Jalalabad, when radical non-Afghan Salafists chopped to pieces government soldiers who had surrendered and filled a truck with their dismembered bodies, showing the enemy the fate awaiting infidels. Though demoralized by their abandonment by the Soviets, the Afghan Communist government forces rallied to break the siege of Jalalabad and win the first major government victory in years. "This success reversed the government's demoralization from the withdrawal of Soviet forces, renewed its determination to fight on, and allowed it to survive three more years."
Maoist guerrilla groups were also active, to a lesser extent compared to the religious Mujahideen. A notable Maoist group was the Liberation Organization of the People of Afghanistan (SAMA), whose founder and leader Abdul Majid Kalakani was reportedly arrested in 1980.
Afghanistan's resistance movement was born in chaos, spread and triumphed chaotically, and did not find a way to govern differently. Virtually all of its war was waged locally by regional warlords. As warfare became more sophisticated, outside support and regional coordination grew. Even so, the basic units of Mujahideen organization and action continued to reflect the highly segmented nature of Afghan society.
Olivier Roy estimates that after four years of war, there were at least 4,000 bases from which Mujahideen units operated. Most of these were affiliated with the seven expatriate parties headquartered in Pakistan, which served as sources of supply and varying degrees of supervision. Significant commanders typically led 300 or more men, controlled several bases and dominated a district or a sub-division of a province. Hierarchies of organization above the bases were attempted. Their operations varied greatly in scope, the most ambitious being achieved by Ahmad Shah Massoud of the Panjshir valley north of Kabul. He led at least 10,000 trained troopers at the end of the Soviet war and had expanded his political control of Tajik-dominated areas to Afghanistan's northeastern provinces under the Supervisory Council of the North.
Roy also describes regional, ethnic and sectarian variations in Mujahideen organization. In the Pashtun areas of the east, south and southwest, tribal structure, with its many rival sub-divisions, provided the basis for military organization and leadership. Mobilization could be readily linked to traditional fighting allegiances of the tribal lashkar (fighting force). In favorable circumstances such formations could quickly reach more than 10,000, as happened when large Soviet assaults were launched in the eastern provinces, or when the Mujahideen besieged towns, such as Khost in Paktia province in July 1983. But in campaigns of the latter type the traditional explosions of manpower—customarily common immediately after the completion of harvest—proved obsolete when confronted by well dug-in defenders with modern weapons. Lashkar durability was notoriously short; few sieges succeeded.
Mujahideen mobilization in non-Pashtun regions faced very different obstacles. Prior to the intervention, few non-Pashtuns possessed firearms. Early in the war they were most readily available from army troops or gendarmerie who defected or were ambushed. The international arms market and foreign military support tended to reach the minority areas last. In the northern regions, little military tradition had survived upon which to build an armed resistance. Mobilization mostly came from political leadership closely tied to Islam. Roy contrasts the social leadership of religious figures in the Persian- and Turkic-speaking regions of Afghanistan with that of the Pashtuns. Lacking a strong political representation in a state dominated by Pashtuns, minority communities commonly looked to pious learned or charismatically revered pirs (saints) for leadership. Extensive Sufi and maraboutic networks were spread through the minority communities, readily available as foundations for leadership, organization, communication and indoctrination. These networks also provided for political mobilization, which led to some of the most effective of the resistance operations during the war.
The Mujahideen favoured sabotage operations. The more common types of sabotage included damaging power lines, knocking out pipelines and radio stations, blowing up government office buildings, air terminals, hotels, cinemas, and so on. In the border region with Pakistan, the Mujahideen would often launch 800 rockets per day. Between April 1985 and January 1987, they carried out over 23,500 shelling attacks on government targets. The Mujahideen surveyed firing positions that they normally located near villages within the range of Soviet artillery posts, putting the villagers in danger of death from Soviet retaliation. The Mujahideen used land mines heavily. Often, they would enlist the services of the local inhabitants, even children.
They concentrated on both civilian and military targets, knocking out bridges, closing major roads, attacking convoys, disrupting the electric power system and industrial production, and attacking police stations and Soviet military installations and air bases. They assassinated government officials and PDPA members, and laid siege to small rural outposts. In March 1982, a bomb exploded at the Ministry of Education, damaging several buildings. In the same month, a widespread power failure darkened Kabul when a pylon on the transmission line from the Naghlu power station was blown up. In June 1982 a column of about 1,000 young communist party members sent out to work in the Panjshir valley was ambushed within 30 km of Kabul, with heavy loss of life. On 4 September 1985, insurgents shot down a domestic Bakhtar Airlines plane as it took off from Kandahar airport, killing all 52 people aboard.
Mujahideen assassination teams consisted of three to five men each. After receiving a mission to kill a certain government official, they busied themselves with studying his pattern of life and its details, and then selected the method of fulfilling the mission. They practiced shooting at automobiles, shooting out of automobiles, laying mines in government accommodations or houses, using poison, and rigging explosive charges in transport.
In May 1985, the seven principal rebel organizations formed the Seven Party Mujahideen Alliance to coordinate their military operations against the Soviet Army. Late in 1985, the groups were active in and around Kabul, unleashing rocket attacks and conducting operations against the communist government.
Raids inside Soviet territory
In an effort to foment unrest and rebellion among the Islamic populations of the Soviet Union, starting in late 1984 CIA Director William Casey encouraged Mujahideen militants to mount sabotage raids inside the Soviet Union, according to Robert Gates, Casey's executive assistant, and Mohammed Yousef, the Pakistani ISI brigadier general who was chief of Afghan operations. The rebels began cross-border raids into the Soviet Union in spring 1985. In April 1987, three separate teams of Afghan rebels were directed by the ISI to launch coordinated raids on multiple targets across the Soviet border, extending, in the case of an attack on an Uzbek factory, deep into Soviet territory. In response, the Soviets issued a thinly-veiled threat to invade Pakistan to stop the cross-border attacks, and no further attacks were reported.
Media reaction
International journalistic perception of the war varied. Major American television journalists were sympathetic to the Mujahideen. Most visible was CBS News correspondent Dan Rather, who in 1982 accused the Soviet Union of genocide, comparing them to Hitler. Rather was embedded with the Mujahideen for a 60 Minutes report. In 1987, CBS produced a full documentary special on the war.
Reader's Digest took a highly positive view of the Mujahideen, a reversal of their usual view of Islamic fighters. The publication praised their martyrdom and their role in entrapping the Soviets in a Vietnam War-style disaster.
Leftist journalist Alexander Cockburn was unsympathetic, criticizing Afghanistan as "an unspeakable country filled with unspeakable people, sheepshaggers and smugglers, who have furnished in their leisure hours some of the worst arts and crafts ever to penetrate the occidental world. I yield to none in my sympathy to those prostrate beneath the Russian jackboot, but if ever a country deserved rape it's Afghanistan." Robert D. Kaplan, on the other hand, thought any perception of the Mujahideen as "barbaric" was unfair: "Documented accounts of mujahidin savagery were relatively rare and involved enemy troops only. Their cruelty toward civilians was unheard of during the war, while Soviet cruelty toward civilians was common." Lack of interest in the Mujahideen cause, Kaplan believed, was not due to any lack of intrinsic interest in a war between a small, poor country and a superpower in which a million civilians were killed, but was the result of the great difficulty and unprofitability of media coverage. Kaplan noted that "none of the American TV networks had a bureau for a war", and television cameramen venturing to follow the Mujahideen "trekked for weeks on little food, only to return ill and half starved". In October 1984, the Soviet ambassador to Pakistan, Vitaly Smirnov, told Agence France-Presse that journalists traveling with the mujahidin "will be killed. And our units in Afghanistan will help the Afghan forces to do it." Unlike Vietnam and Lebanon, Afghanistan had "absolutely no clash between the strange and the familiar", no "rock-video quality" of "zonked-out GIs in headbands" or "rifle-wielding Shiite terrorists wearing Michael Jackson T-shirts" that provided interesting "visual materials" for newscasts.
Soviet exit and change of Afghan leadership, 1985–1989
Foreign diplomatic efforts
As early as 1983, Pakistan's Foreign Ministry began working with the Soviet Union to provide it with an exit from Afghanistan, through initiatives led by Foreign Minister Yaqub Ali Khan and Khurshid Kasuri. Despite its active support for insurgent groups, Pakistan remained sympathetic to the challenges faced by the Soviets in restoring peace, eventually exploring the possibility of setting up an interim system of government under former monarch Zahir Shah, but this was not authorized by President Zia-ul-Haq due to his stance on the issue of the Durand Line. In 1984–85, Foreign Minister Yaqub Ali Khan paid state visits to China, Saudi Arabia, the Soviet Union, France, the United States and the United Kingdom in order to develop a framework. On 20 July 1987, the withdrawal of Soviet troops from the country was announced.
April 1985 – January 1987: Exit strategy
The first step of the Soviet Union's exit strategy was to transfer the burden of fighting the Mujahideen to the Afghan armed forces, with the aim of preparing them to operate without Soviet help. During this phase, the Soviet contingent was restricted to supporting the DRA forces by providing artillery, air support and technical assistance, though some large-scale operations were still carried out by Soviet troops.
Under Soviet guidance, the DRA armed forces were built up to an official strength of 302,000 in 1986. To minimize the risk of a coup d'état, they were divided into different branches, each modeled on its Soviet counterpart. The ministry of defence forces numbered 132,000, the ministry of interior 70,000 and the ministry of state security (KHAD) 80,000. However, these were theoretical figures: in reality each service was plagued with desertions, the army alone suffering over 10% annual losses, or 32,000 per year.
The decision to engage primarily Afghan forces was taken by the Soviets, but was resented by the PDPA, who viewed the departure of their protectors without enthusiasm. In May 1987 a DRA force attacked well-entrenched Mujahideen positions in the Arghandab District, but the Mujahideen managed to hold their ground, and the attackers suffered heavy casualties. Meanwhile, the Mujahideen benefited from expanded foreign military support from the United States, United Kingdom, Saudi Arabia, Pakistan, and other Muslim-majority countries. Two Heritage Foundation foreign policy analysts, Michael Johns and James A. Phillips, championed Ahmad Shah Massoud as the Afghan resistance leader most worthy of US support under the Reagan Doctrine.
May 1986–1988: Najibullah and his reforms
The government of President Karmal, a puppet regime, was largely ineffective. It was weakened by divisions within the PDPA and the Parcham faction, and the regime's efforts to expand its base of support proved futile. Moscow came to regard Karmal as a failure and blamed him for the problems. Years later, when Karmal's inability to consolidate his government had become obvious, Mikhail Gorbachev, then General Secretary of the Soviet Communist Party, said, "The main reason that there has been no national consolidation so far is that Comrade Karmal is hoping to continue sitting in Kabul with our help." Karmal's consolidation plan involved only those who had not raised arms against the regime, and he even demanded that Soviet troops seal the border with Pakistan before any negotiations with the Mujahideen. Eventually, the Soviet Union decided to remove Karmal from the leadership of Afghanistan.
In May 1986, Mohammad Najibullah, former chief of the Afghan secret police (KHAD), was elected General Secretary and later President of the Revolutionary Council. The relatively young new leader was not well known to the Afghan population at the time, but he made swift reforms to change the country's situation and win support, as devised by experts of the Communist Party of the Soviet Union. An eloquent speaker in both the Pashto and Dari languages, Najibullah engaged with elders and presented both himself and the state as Islamic, sometimes backing his speeches with excerpts from the Qur'an. A number of prisoners were released, while the night curfew in Kabul that had been in place since 1980 was finally lifted. He also moved against pro-Karmal Parchamites, who were expelled from the Revolutionary Council and the Politburo.
President Najibullah launched the "National Reconciliation" program at the start of 1987, the goal of which was to unite the nation and end the war that had enveloped the country for seven years. He expressed willingness to negotiate with the Mujahideen resistance, allow parties other than the PDPA to be active, and indicated that exiled King Zahir Shah could be part of the process. A six-month ceasefire also began in December 1986. His administration was also more open to foreign visitors outside the Soviet bloc. In November 1987, Najibullah convened a loya jirga selected by the authorities which successfully passed a new constitution for Afghanistan, creating a presidential system with an elective bicameral parliament. The constitution declared "the sacred religion of Islam" the official religion, guaranteed the democratic rights of the individual, made it legal to form "political parties", and promoted equality between the various tribes and nationalities. Despite high expectations, the new policy only had limited impact in regaining support from the population and the resistance, partly because of the high distrust and unpopularity of the PDPA and KHAD, as well as Najibullah's loyalty to Moscow.
As part of the new structure, national parliamentary elections were held in 1988 to elect members of the new National Assembly, the first such elections in Afghanistan in 19 years.
Negotiations for a coalition
Ex-king Zahir Shah remained a popular figure to most Afghans. Diego Cordovez of the UN also recognized the king as a potential key to a political settlement of the war once Soviet troops left. Polls in 1987 showed that he was a favored figure to lead a potential coalition between the DRA regime and the Mujahideen factions, and a counterweight to the unpopular but powerful guerrilla leader Gulbuddin Hekmatyar, who was strongly against the King's return. Pakistan, however, was against this and refused to grant the ex-king a visa for potential negotiations with the Mujahideen. Pakistan's President Zia-ul-Haq and his supporters in the military were determined to put a conservative Islamic ally in power in Kabul.
Negotiations continued, and in 1988–1989 the Interim Afghan Government was formed in Peshawar as an alliance of various Mujahideen groups, including Hezbi Islami and Jamiat; it would be involved in Operation Arrow and the siege of Khost.
April 1988: The Geneva Accords
Following lengthy negotiations, the Geneva Accords were signed in 1988 between Afghanistan and Pakistan. Supported by the Soviet Union and the United States respectively, the two Asian countries agreed to refrain from any form of interference in each other's territory. They also agreed to allow Afghan refugees in Pakistan to return voluntarily. The two superpowers agreed to halt their interference in Afghanistan, which included a Soviet withdrawal.
The United Nations set up a special mission to oversee the process. In this way, President Najibullah had stabilized his political position enough to begin matching Moscow's moves toward withdrawal. Among other things, the Geneva Accords stipulated US and Soviet non-intervention in the internal affairs of Pakistan and Afghanistan and a timetable for full Soviet withdrawal. The agreement on withdrawal held, and on 15 February 1989, the last Soviet troops departed on schedule from Afghanistan.
January 1987 – February 1989: Withdrawal
The promotion of Mikhail Gorbachev to General Secretary in 1985 and his 'new thinking' on foreign and domestic policy was likely an important factor in the Soviets' decision to withdraw. Gorbachev had been attempting to remove the Soviet Union from the economic stagnation that had set in under the leadership of Brezhnev, and to reform the Soviet Union's economy and image with the Glasnost and Perestroika policies. Gorbachev had also been attempting to ease Cold War tensions by signing the Intermediate-Range Nuclear Forces Treaty with the U.S. in 1987 and withdrawing the troops from Afghanistan, whose presence had garnered so much international condemnation. Beijing had stipulated that a normalization of relations would have to wait until Moscow withdrew its army from Afghanistan (among other things), and in 1989 the first Sino-Soviet summit in 30 years took place. At the same time, Gorbachev pressured his Cuban allies in Angola to scale down activities and withdraw, even though Soviet allies were faring somewhat better there. The Soviets also pulled many of their troops out of Mongolia in 1987, where they were having a far easier time than in Afghanistan, and restrained the Vietnamese invasion of Kampuchea to the point of an all-out withdrawal in 1988. This massive withdrawal of Soviet forces from such highly contested areas shows that the Soviet government's decision to leave Afghanistan was based upon a general change in Soviet foreign policy – from one of confrontation to avoidance of conflict wherever possible.
In the last phase, Soviet troops prepared and executed their withdrawal from Afghanistan, while limiting offensive operations by the forces that had not yet withdrawn.
By mid-1987 the Soviet Union announced that it would start withdrawing its forces. Sibghatullah Mojaddedi was selected as the head of the Interim Islamic State of Afghanistan, in an attempt to reassert its legitimacy against the Moscow-sponsored Kabul regime. Mojaddedi, as head of the Interim Afghan Government, met with then-Vice President of the United States George H. W. Bush, achieving a critical diplomatic victory for the Afghan resistance. Defeat of the Kabul government was their solution for peace. This confidence, sharpened by their distrust of the United Nations, virtually guaranteed their refusal to accept a political compromise.
In September 1988, Soviet MiG-23 fighters shot down two Iranian AH-1J Cobra helicopters which had intruded into Afghan airspace.
Operation Magistral was one of the final offensive operations undertaken by the Soviets, a successful sweep operation that cleared the road between the towns of Gardez and Khost. The operation had no lasting effect on the outcome of the conflict, nor on the tarnished political and military standing of the Soviets in the eyes of the West, but it was a symbolic gesture that marked the end of their widely condemned presence in the country with a victory.
The first half of the Soviet contingent was withdrawn from 15 May to 16 August 1988, and the second from 15 November to 15 February 1989. In order to ensure a safe passage, the Soviets had negotiated ceasefires with local Mujahideen commanders. The withdrawal was generally executed peacefully except for the operation "Typhoon".
General Yazov, the Defense Minister of Soviet Union, ordered the 40th Army to violate the agreement with Ahmad Shah Massoud, who commanded a large force in the Panjshir Valley, and attack his relaxed and exposed forces. The Soviet attack was initiated to protect Najibullah, who did not have a ceasefire in effect with Massoud, and who rightly feared an offensive by Massoud's forces after the Soviet withdrawal. General Gromov, the 40th Army Commander, objected to the operation, but reluctantly obeyed the order. "Typhoon" began on 23 January and continued for three days. To minimize their own losses, the Soviets abstained from close-range fighting. Instead, they used long-range artillery, surface-to-surface and air-to-surface missiles. Numerous civilian casualties were reported. Massoud had not threatened the withdrawal to this point and did not attack Soviet forces after they breached the agreement. Overall, the Soviet attack represented a defeat for Massoud's forces, who lost 600 fighters killed and wounded.
After the withdrawal of the Soviets, the DRA forces were left fighting alone and had to abandon some provincial capitals; their air assault brigades had been disbanded a year prior. It was widely believed that they would not be able to resist the Mujahideen for long. However, in the spring of 1989, DRA forces inflicted a major defeat on the Mujahideen during the Battle of Jalalabad, and went on to launch successful assaults on fortified complexes in Paghman in 1990. The United States, having achieved its goal of forcing the Soviet Union's withdrawal from Afghanistan, gradually disengaged itself from the country.
Causes of withdrawal
Some of the causes of the Soviet Union's withdrawal from Afghanistan, which led to the Afghan regime's eventual defeat, include:
The Soviet Army of 1980 was trained and equipped for large scale, conventional warfare in Central Europe against a similar opponent, i.e., it used armored and motor-rifle formations. This was notably ineffective against small scale guerrilla groups using hit-and-run tactics in the rough terrain of Afghanistan. Also, the Soviet Army's large formations were not mobile enough to engage small groups of Mujahideen fighters that easily merged back into the terrain. The set strategy also meant that troops were discouraged from "tactical initiative", essential in counter insurgency, because it "tended to upset operational timing".
The Soviets used large-scale offensives against Mujahideen strongholds, such as in the Panjshir Valley, which temporarily cleared those sectors and killed many civilians in addition to enemy combatants. The biggest shortcoming here, though, was the fact that once the Soviets engaged the enemy with force, they failed to hold the ground, as they withdrew once their operation was completed. The killing of civilians further alienated the population from the Soviets, with bad long-term effects.
The Soviets did not have enough men to fight a counter-insurgency (COIN) war, and their troops had low morale. The peak number of Soviet troops during the war was 115,000, but the bulk of these troops were conscripts, which led to poor combat performance in the motor-rifle formations. However, the Soviets did have elite infantry units, such as the famed Spetsnaz, the VDV, and their recon infantry. The problem with the elite units was not combat effectiveness, but that there were not enough of them and that they were employed incorrectly.
Intelligence gathering, essential for successful COIN, was inadequate. The Soviets relied too heavily on less-than-accurate aerial reconnaissance and radio intercepts rather than on their recon infantry and special forces. Although the special forces and recon infantry units performed very well in combat against the Mujahideen, they would have been better employed in intelligence gathering.
The concept of a "war of national liberation" against a Soviet-sponsored "revolutionary" regime was so alien to Soviet dogma that the leadership could not "come to grips" with it. This led to, among other things, the Soviet media suppressing for several years the truth about how badly the war was going, which caused a backlash when the truth could no longer be hidden.
Fall of Najibullah government, 1992
After the withdrawal of Soviet troops in 1989, the government of Mohammad Najibullah remained in power until 15 April 1992. Najibullah stepped down that day as Mujahideen guerrilla forces moved into Kabul. He attempted to fly to India under the protection of the U.N. but was blocked from leaving at the airport, and took refuge at a United Nations compound in Kabul. After a bloody, four-year power struggle between different factions of the victorious anti-Najibullah forces, the Taliban took Kabul. They stormed the U.N. compound on 26 September 1996 and tortured and killed Najibullah.
Aerial engagements
Aerial losses in Pakistan airspace
During the conflict, Pakistan Air Force F-16s shot down ten Soviet aircraft that had intruded into Pakistani territory. However, the Soviet record confirmed only five kills (three Su-22s, one Su-25 and one An-26). Some sources indicate that the PAF shot down at least a dozen more aircraft during the war; those kills were not officially acknowledged because they took place in Afghanistan's airspace, and acknowledging them would mean that Afghan airspace had been violated by the PAF. In all, Pakistan Air Force F-16s downed several MiG-23s, Su-22s, a Su-25, and an An-26 while losing only one F-16.
Stinger missiles and the "Stinger effect"
Whether the introduction of the personal, portable, infrared-homing "Stinger" surface-to-air missile in September 1986 was a turning point in the war is disputed.
Many Western military analysts credit the Stinger with a kill ratio of about 70% and with responsibility for most of the over 350 Soviet or Afghan government aircraft and helicopters downed in the last two years of the war. Some military analysts considered it a "game changer" and coined the term "Stinger effect" to describe it.
Congressman Charlie Wilson claimed that before the Stinger the Mujahideen never won a set piece battle with the Soviets, but after it was introduced, the Mujahideen never again lost one.
However, these statistics are based on Mujahideen self-reporting, which is of unknown reliability. A Russian general claimed the United States "greatly exaggerated" Soviet and Afghan aircraft losses during the war. According to Soviet figures, in 1987–1988, only 35 aircraft and 63 helicopters were destroyed by all causes. The Pakistan Army fired twenty-eight Stingers at Soviet aircraft near the border without a single kill.
Many Russian military analysts tend to be dismissive of the impact of the Stinger. Soviet General Secretary Mikhail Gorbachev decided to withdraw from Afghanistan a year before the Mujahideen fired their first Stinger missiles; Gorbachev was motivated by U.S. sanctions, not military losses. The Stingers did make an impact at first but within a few months flares, beacons, and exhaust baffles were installed to disorient the missiles, while night operation and terrain-hugging tactics tended to prevent the rebels from getting a clear shot. By 1988 the Mujahideen had all but stopped firing them. Stingers also forced Soviet helicopters and ground attack planes to bomb from higher altitudes with less accuracy but did not bring down many more aircraft than Chinese heavy machine guns and other less sophisticated anti-aircraft weaponry. Gorbachev stated in an interview in 2010 that the Stinger did not influence his decision-making process.
War crimes
The Soviet Union committed war crimes during the war, to the point that some have characterized their scale and systematicity as a genocide of the Afghan people.
Foreign involvement
The Afghan mujahideen were backed primarily by Pakistan, the United States, Saudi Arabia, and the United Kingdom, making the conflict a Cold War proxy war. Out of the countries that supported the Mujahideen, the U.S. and Saudi Arabia offered the greatest financial support. However, private donors and religious charities throughout the Muslim world—particularly in the Persian Gulf—raised considerably more funds for the Afghan rebels than any foreign government; Jason Burke recounts that "as little as 25 per cent of the money for the Afghan jihad was actually supplied directly by states." Saudi Arabia was heavily involved in the war effort and matched the United States' contributions dollar-for-dollar in public funds. Saudi Arabia also gathered an enormous amount of money for the Afghan mujahideen in private donations that amounted to about $20 million per month at their peak.
Other countries that supported the Mujahideen were Egypt, China, and Israel. Iran, on the other hand, supported only the Shia Mujahideen, namely the Persian-speaking Shiite Hazaras, and only in a limited way. One of these groups was the Tehran Eight, a political union of Afghan Shi'a. They were supplied predominantly by the Islamic Revolutionary Guard Corps, but Iran's support for the Hazaras nevertheless frustrated efforts for a united Mujahideen front.
Spillover
Raids inside the Soviet Union
The Mujahideen launched multiple raids into the Soviet Union in an effort to foment unrest and rebellion among its Islamic populations. According to Robert Gates, Casey's executive assistant, and Mohammed Yousef, the Pakistani ISI brigadier general who was chief of Afghan operations, CIA Director William Casey encouraged these sabotage raids starting in late 1984, and the rebels began crossing the border in spring 1985.
Aerial engagements with Pakistan
During the conflict, Soviet aircraft intruded into Pakistani airspace multiple times, and Pakistan Air Force F-16s shot down ten Soviet aircraft over Pakistani territory. However, the Soviet record confirmed only five fixed-wing kills (three Su-22s, one Su-25 and one An-26) and four helicopter (Mi-8) kills. Some sources indicate that the PAF shot down at least a dozen more aircraft during the war; those kills were not officially acknowledged because they took place in Afghanistan's airspace, and acknowledging them would mean that Afghan airspace had been violated by the PAF. In all, Pakistan Air Force F-16s downed three Su-22s, one Su-25, two MiG-23s and two An-26s, along with several Mi-8 helicopters, and damaged one MiG-23, while losing only one F-16.
Terror campaign in Pakistan
The KhAD-KGB campaign in Pakistan was a joint effort in which the Afghan KhAD's foreign "Tenth Directorate" and the Soviet KGB targeted Pakistan with prostitution spy rings, terror attacks, hijackings, serial killings, assassinations and the dissemination of propaganda, in order to dissuade Pakistan from supporting the Afghan Mujahideen.
Miram Shah incident
On 2 April 1986, during the Second Battle of Zhawar, the 38th Commando Brigade of the Democratic Republic of Afghanistan accidentally landed inside Pakistani territory. In the darkness of night, the strike force landed near Miram Shah in Pakistan instead of at Zhawar. The force was surrounded; 120 soldiers were taken prisoner and six Mi-8 helicopters were captured.
Badaber uprising
Between 26 and 27 April 1985, in Badaber, Pakistan, an armed rebellion was instigated by Soviet and Afghan prisoners of war being held at the Badaber fortress near Peshawar. Attempting to escape, the prisoners fought the Afghan Mujahideen of the Jamiat-e Islami party and the Pakistani XI Corps, supported by American CIA advisors, but the rebellion was quashed and all the POWs were killed.
Raid inside Iran
On 5 April 1982, Soviet forces strayed from their target, a Mujahideen base in southern Afghanistan, crossed into Iranian territory and accidentally destroyed an asphalt factory in Iran. Iranian security forces attacked the strike force with tanks and aircraft, destroying two Soviet Mi-8 helicopters and damaging many more.
Impact
Soviet personnel strengths and casualties
Between 25 December 1979 and 15 February 1989, a total of 620,000 soldiers served with the forces in Afghanistan (though there were only 80,000–104,000 serving at one time): 525,000 in the Army, 90,000 with border troops and other KGB sub-units, 5,000 in independent formations of MVD Internal Troops, and police forces. A further 21,000 personnel were with the Soviet troop contingent over the same period doing various white-collar and blue-collar jobs.
The total official fatalities of the Soviet Armed Forces, frontier, and internal security troops came to 14,453. Other estimates give a figure of 26,000 killed Soviet soldiers. Soviet Army formations, units, and HQ elements lost 13,833, KGB sub-units lost 572, MVD formations lost 28, and other ministries and departments lost 20 men. During this period 312 servicemen were missing in action or taken prisoner; 119 were later freed, of whom 97 returned to the USSR and 22 went to other countries.
Of the troops deployed, 53,753 were wounded, injured, or sustained concussion, and 415,932 fell sick. A high proportion of casualties were those who fell ill, because local climatic and sanitary conditions were such that acute infections spread rapidly among the troops. There were 115,308 cases of infectious hepatitis, 31,080 of typhoid fever, and 140,665 of other diseases. Of the 11,654 who were discharged from the army after being wounded, maimed, or contracting serious diseases, 10,751 men were left disabled.
Material losses were as follows:
451 aircraft (including 333 helicopters)
147 tanks
1,314 IFV/APCs
433 artillery guns and mortars
11,369 cargo and fuel tanker trucks.
In early 1987 a CIA report estimated that, from 1979 to 1986, the Soviet military spent 18 billion rubles on the war in Afghanistan (not counting other costs incurred by the Soviet state, such as economic and military aid to the DRA). The CIA noted that this was the equivalent of US$50 billion ($115 billion in 2019 USD). The report credited the relatively low cost to the small size of the Soviet deployment and the fact that the supply lines to Afghanistan were very short (in some cases, easier and cheaper than internal USSR lines). Military aid to the DRA's armed forces totaled 9.124 billion rubles from 1980 to 1989 (peaking at 3.972 billion rubles in 1989). Financial and economic aid were also significant; by 1990, 75% of the Afghan state's income came from Soviet aid.
Casualties and destruction in Afghanistan
Civilian death and destruction from the war was massive and detrimental. Estimates of Afghan civilian deaths vary from 562,000 to 2,000,000, while some estimates place total Afghan deaths at approximately 3,000,000.
The Geneva Accords of 1988, which ultimately led to the withdrawal of the Soviet forces in early 1989, left the Afghan government in ruins. The accords had failed to address adequately the issue of the post-occupation period and the future governance of Afghanistan. The assumption among most Western diplomats was that the Soviet-backed government in Kabul would soon collapse; however, this was not to happen for another three years. During this time the Interim Islamic Government of Afghanistan (IIGA) was established in exile. The exclusion of key groups such as refugees and Shias, combined with major disagreements between the different Mujahideen factions, meant that the IIGA never succeeded in acting as a functional government.
Before the war, Afghanistan was already one of the world's poorest countries. The prolonged conflict left Afghanistan ranked 170 out of 174 in the UNDP's Human Development Index, making it one of the least developed countries in the world.
Once the Soviets withdrew, US interest in Afghanistan slowly decreased over the following four years, with much of the remaining aid administered through the DoD Office of Humanitarian Assistance under its then director, George M. Dykes III. With the first years of the Clinton Administration in Washington, DC, all aid ceased. The US decided not to help with the reconstruction of the country, instead handing its interests over to US allies Saudi Arabia and Pakistan. Pakistan quickly took advantage of this opportunity and forged relations with warlords, and later the Taliban, to secure trade interests and routes. The ten years following the war saw much ecological and agrarian destruction: logging practices destroyed all but 2% of the forest cover country-wide, wild pistachio trees were uprooted on a substantial scale for the exportation of their roots for therapeutic uses, and opium agriculture spread.
Captain Tarlan Eyvazov, a soldier in the Soviet forces during the war, stated that the future of Afghan children was destined to be one of war. Eyvazov said, "Children born in Afghanistan at the start of the war... have been brought up in war conditions, this is their way of life." Eyvazov's theory was later borne out when the Taliban movement developed and formed from orphans and refugee children who had been forced by the Soviets to flee their homes and relocate to Pakistan. The swift rise to power of the young Taliban in 1996 was the result of the disorder and civil war that had warlords running wild because of the complete breakdown of law and order in Afghanistan after the departure of the Soviets.
The CIA World Fact Book reported that as of 2004, Afghanistan still owed $8 billion in bilateral debt, mostly to Russia; in 2007, however, Russia agreed to cancel most of the debt.
Refugees
5.5 million Afghans were made refugees by the war—a full one third of the country's pre-war population—fleeing the country to Pakistan or Iran. Another estimate states 6.2 million refugees. By the end of 1981, the UN High Commission for Refugees reported that Afghans represented the largest group of refugees in the world.
A total of 3.3 million Afghan refugees were housed in Pakistan by 1988, some of whom continue to live in the country today. Of this total, about 100,000 were based in the city of Peshawar, while more than 2 million were located in other parts of the northwestern province of Khyber Pakhtunkhwa (then known as the North-West Frontier Province). At the same time, close to two million Afghans were living in Iran. Over the years Pakistan and Iran have imposed tighter controls on refugees, which have resulted in numerous returnees. In 2012 Pakistan banned extensions of visas to foreigners. Afghan refugees have also settled in India and became Indian citizens over time. Some also made their way into North America, the European Union, Australia, and other parts of the world. The photo of Sharbat Gula, placed on the cover of National Geographic in 1985, became a symbol both of the 1980s Afghan conflict and of the refugee situation.
Effect on Afghan society
The legacy of the war introduced a culture of guns, drugs and terrorism in Afghanistan. The traditional power structure was also changed in favor of the powerful Mujahideen militias:
The militarization transformed the society in the country, leading to heavily armed police, private bodyguards, and openly armed civil defense groups becoming the norm in Afghanistan both during the war and decades thereafter.
The war also altered the ethnic balance of power in the country. While Pashtuns were historically politically dominant since the modern foundation of the Durrani Empire in 1747, many of the well-organized pro-Mujahideen or pro-government groups consisted of Tajiks, Uzbeks and Hazaras. With Pashtuns increasingly politically fragmented, their influence on the state was challenged.
Aftermath
Media and popular culture
Within Afghanistan, war rugs were a popular form of carpet designs woven by victims of the war.
Perception in Afghanistan
Perception in the former Soviet Union
Notes
References
Bibliography
Historiography and memory
Galbas, Michael. " 'We Are Heroes': The Homogenising Glorification of the Memories on the Soviet–Afghan War in Present Russia." in Conflict Veterans: Discourses and Living Contexts of an Emerging Social Group (2018): 134+. online
Gibbs, David N. "Reassessing Soviet motives for invading Afghanistan: A declassified history." Critical Asian Studies 38.2 (2006): 239–263. online
Further reading
Shaw, Tamsin, "Ethical Espionage" (review of Calder Walton, Spies: The Epic Intelligence War Between East and West, Simon and Schuster, 2023, 672 pp.; and Cécile Fabre, Spying Through a Glass Darkly: The Ethics of Espionage and Counter-Intelligence, Oxford University Press, 251 pp., 2024), The New York Review of Books, vol. LXXI, no. 2 (8 February 2024), pp. 32, 34–35. "[I]n Walton's view, there was scarcely a US covert action that was a long-term strategic success, with the possible exception of intervention in the Soviet-Afghan War (a disastrous military fiasco for the Soviets) and perhaps support for the anti-Soviet Solidarity movement in Poland."
External links
Cold War conflicts
1979 in Afghanistan
1980s in Afghanistan
Conflicts in 1979
1980s conflicts
Invasions of Afghanistan
Invasions by the Soviet Union
Soviet military occupations
Wars involving Afghanistan
Anti-communism in Pakistan
Afghanistan
Guerrilla warfare
Chemical warfare by conflict
Proxy wars
Cold War military history of the Soviet Union
Anti-communism in Afghanistan
Communism in Afghanistan
Islamism in Afghanistan
Maoism in Afghanistan
1979 in the Soviet Union
1980s in the Soviet Union
Afghanistan–Soviet Union relations
History of Islam in Afghanistan
Terrorism in Pakistan
Genocide of indigenous peoples in Asia | Soviet–Afghan War | Chemistry | 21,095 |
62,469,606 | https://en.wikipedia.org/wiki/Darmois%E2%80%93Skitovich%20theorem | In mathematical statistics, the Darmois–Skitovich theorem characterizes the normal distribution (the Gaussian distribution) by the independence of two linear forms from independent random variables. This theorem was proved independently by G. Darmois and V. P. Skitovich in 1953.
Formulation
Let $\xi_1, \ldots, \xi_n$, $n \ge 2$, be independent random variables. Let $\alpha_1, \ldots, \alpha_n, \beta_1, \ldots, \beta_n$ be nonzero constants. If the linear forms $L_1 = \alpha_1\xi_1 + \cdots + \alpha_n\xi_n$ and $L_2 = \beta_1\xi_1 + \cdots + \beta_n\xi_n$ are independent, then all random variables $\xi_1, \ldots, \xi_n$ have normal distributions (Gaussian distributions).
History
The Darmois–Skitovich theorem is a generalization of the Kac–Bernstein theorem, in which the normal distribution (the Gaussian distribution) is characterized by the independence of the sum and the difference of two independent random variables. For the history of V. P. Skitovich's proof of the theorem, see the article.
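A minimal simulation sketch of the Kac–Bernstein special case (assuming NumPy is available; all names here are illustrative, not from the cited literature): for Gaussian inputs the sum and difference are genuinely independent, while for uniform inputs they are merely uncorrelated, and a simple statistic on absolute values exposes the residual dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def dependence_hint(x, y):
    # Correlation of |X+Y| with |X-Y|: near zero when the sum and
    # difference are independent, clearly nonzero otherwise.
    s, d = x + y, x - y
    return np.corrcoef(np.abs(s), np.abs(d))[0, 1]

# Gaussian inputs: X+Y and X-Y are independent, as the theorem requires.
g = dependence_hint(rng.normal(size=n), rng.normal(size=n))

# Uniform inputs: X+Y and X-Y are uncorrelated but not independent,
# consistent with the theorem (non-Gaussian inputs cannot yield
# independent linear forms).
u = dependence_hint(rng.uniform(-1, 1, size=n), rng.uniform(-1, 1, size=n))

print(f"Gaussian inputs: corr(|S|, |D|) = {g:+.3f}")  # ~ 0.00
print(f"Uniform inputs:  corr(|S|, |D|) = {u:+.3f}")  # visibly negative
```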
References
Mathematical theorems | Darmois–Skitovich theorem | Mathematics | 170 |
3,381,059 | https://en.wikipedia.org/wiki/Parom | The Parom (Russian for "ferry") is a space tug that has been proposed by RKK Energia. The purpose of this vehicle is to replace most of the Progress spacecraft's active components. Progress spacecraft have flown re-supply missions since 1978. Nikolai Bryukhanov, RKK Energia's deputy general designer, said in May 2005 that the Federal Space Agency had received a design for a new space system. According to him, the system's operating principle is completely different from that used by Progress. A launch vehicle first places the Parom, a reusable inter-orbit "tug", into a 200 km orbit. As this spacecraft will not carry any consignments itself, other rockets will orbit payload containers with which the Parom will dock. The tug will then deliver them to the ISS or another orbiter.
"Any Russian or foreign launch vehicle can orbit such containers," Bryukhanov said. The size of the container and its shape depend on payload characteristics. "This can be an airtight instrument module or a fuel tanker," the deputy general designer continued. "Moreover, depressurized platform featuring large scientific equipment and auxiliary systems, i.e., solar batteries that cannot be stored inside the airtight module".
In layout, the Parom will be built around a pressurized transfer passage with docking ports at each end: each of these two docking ports can be used to dock with the cargo container, the Kliper, the space station or any other spacecraft. It will have its own engines, along with propellant transfer lines to feed fuel from the cargo container into its own tanks or into the space station's or another spacecraft's tank. It will also have engines scaled to handle cargo modules weighing up to 30 tonnes (around 60,000 pounds), twice the mass of the largest station sections carried into orbit aboard Space Shuttles and Proton rockets.
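As a rough illustration of what hauling a 30-tonne container entails, the sketch below estimates the propellant burned on a Hohmann transfer from the 200 km parking orbit to the ISS's approximate altitude; the tug dry mass and specific impulse are hypothetical placeholders, not published Parom figures.

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0  # mean Earth radius, m

def hohmann_dv(r1, r2):
    """Total delta-v (m/s) for a two-burn Hohmann transfer between circular orbits."""
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))
    return dv1 + dv2

dv = hohmann_dv(R_EARTH + 200_000, R_EARTH + 400_000)

# Tsiolkovsky rocket equation: propellant needed to push a 30 t container
# plus a hypothetical 7 t tug, assuming a storable-propellant engine with
# Isp ~ 300 s (placeholder values, not Parom specifications).
isp, g0 = 300.0, 9.80665
m_final = 30_000.0 + 7_000.0
m_propellant = m_final * (math.exp(dv / (isp * g0)) - 1)

print(f"Hohmann delta-v, 200 -> 400 km: {dv:.0f} m/s")
print(f"Propellant for the transfer:    {m_propellant:.0f} kg")
```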
See also
Kliper – proposed spacecraft to use the Parom
References
External links
Russian Space Web
Flight International Article, Lighter Kliper could make toward trip to ISS
MSN Article on Kliper
Space program of Russia
Hypothetical spacecraft
Space tugs | Parom | Astronomy,Technology | 434 |
2,602,927 | https://en.wikipedia.org/wiki/Laramie-Poudre%20Tunnel | The Laramie-Poudre Tunnel is an early transmountain tunnel in the U.S. state of Colorado. The tunnel transfers water from the west side of the Laramie River basin, which drains to the North Platte River, to the east side Cache la Poudre River basin that drains to the South Platte River. The tunnel is about long with variable diameters with a minimum diameter of about . The diameter varied due to the different material mined through and the erosion of almost 90 years of water flow. It is located at about elevation with about a 1.7 degree down slope. The Laramie River lies about higher than the Cache La Poudre River at this location separated only by a mountain ridge. The Laramie-Poudre Tunnel is located about west-northwest of Fort Collins, Colorado, about south of the Wyoming border and about north of Rocky Mountain National Park. It was built between 1909 and 1911 for the Laramie-Poudre Reservoirs & Irrigation Co. to convey water from the Laramie River to the Poudre River for Front Range irrigation. The tunnel was driven for the purpose of conveying through the divide 800 cu.ft of water per second.
"Work on the power-plant for operating the tunnel was begun Dec 1st 1909. The Hydro-electric power-plant was erected on the west bank of the Cache-la-Poudre, nearly opposite the eastern portal. "Repauno 60-per cent. gelatine" was used along with German Insolid and Z.L. fuse were used for blasting, with exception where the granite was really hard and tough. There were about 60 people employed with skills ranging from helpers, muckers, mechanics, stable-helpers, blacksmiths, book keeper and foremen. These were arranged in an 8-hr shift.
Court battles between Colorado and Wyoming over water rights prevented operation until 1914 (Case 1995).
As a result of the court battles the tunnel is restricted to a maximum of of water from the Laramie river instead of its designed . Most of the flow occurs during the peak snow melt season of mid May to mid July. The Laramie-Poudre Tunnel typically transfers about 14,000 acre ft (17,300,000 cubic meters) from the Laramie River basin to the Cache La Poudre Basin. Agriculture users typically use about 450,000 acre ft and municipal users use a further 75,000 acre ft in the Cache La Poudre drainage basin.
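A quick arithmetic check on these figures (a sketch only; compressing the whole annual transfer into the two-month snowmelt window stated above is a simplifying assumption):

```python
ACRE_FT_TO_M3 = 1233.48   # one acre-foot in cubic meters
CFS_PER_M3S = 35.3147     # cubic feet per second in one m^3/s

annual_transfer_af = 14_000  # acre-feet per year, from the text
annual_transfer_m3 = annual_transfer_af * ACRE_FT_TO_M3  # ~17.3 million m^3

# Assume the transfer occurs over the ~60-day mid-May to mid-July window.
season_seconds = 60 * 86_400
avg_flow_m3s = annual_transfer_m3 / season_seconds

print(f"Annual transfer: {annual_transfer_m3:,.0f} m^3")
print(f"Average seasonal flow: {avg_flow_m3s:.1f} m^3/s "
      f"(~{avg_flow_m3s * CFS_PER_M3S:.0f} cfs)")
```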
In the spring of 2000 after almost 90 years of use, part of the tunnel collapsed, requiring extensive rebuilding of part of the tunnel. This rebuild cost $4,500,000 and took from November 2000 to May 16, 2001.
Greeley, Colorado partnered with North Weld County Water and the Fort Collins-Loveland Water District to purchase the Laramie Poudre Tunnel in 2006.
References
Tunnels in Colorado
Water tunnels in the United States
Irrigation projects
Irrigation in the United States
Tunnels completed in 1911
Transportation buildings and structures in Larimer County, Colorado
1911 establishments in Colorado | Laramie-Poudre Tunnel | Engineering | 627 |
35,411,581 | https://en.wikipedia.org/wiki/Erotic%20plasticity | Erotic plasticity is the degree to which one's sex drive can be changed by cultural or social factors. Someone has "high erotic plasticity" when their sex drive can be affected by situational, social and cultural influences, whereas someone with "low erotic plasticity" has a sex drive that is relatively rigid and unsusceptible to change. Since social psychologist Roy Baumeister coined the term in 2000, only two studies directly assessing erotic plasticity have been completed.
The female erotic plasticity hypothesis states that women have higher erotic plasticity than men, and therefore their sex drives are more socially flexible and responsive than those of men (factors such as religion, culture and education have a greater effect on women's sexual behaviors). Men, on the other hand, remain relatively rigid after puberty but can still be affected by these factors.
Female erotic plasticity hypothesis
As women have been theorised to possess a weaker sex drive than men, they may more readily accept substitutes or alternate forms of satisfaction. Baumeister theorized that weaker motivations tend to lead to greater plasticity. However, a lower sex drive does not necessarily imply that sex is less important for women, or that females have a lower capacity to become aroused. Rather, Baumeister's hypothesis supports the notion that women are less willing to engage in sex than their male counterparts.
Evidence for female erotic plasticity
Culture
According to Baumeister, the culture a woman is raised in affects her sexual attitudes and behaviours more than it would affect a man raised in the same culture. Factors such as politics, cultural and societal views on sexual behaviours would all play a role. A multinational study by Lippa (2009) found that women are more variable in their sex drives, suggesting that their sexuality is more malleable and influenced by society than men's. Another study showed that South Korean women had a higher median age of first intercourse, lower rates of premarital sex, and greater disapproval of premarital sex. In South Korea, there are strong gender-based sexual double standards such that women are expected to be passive and virgins at marriage. Therefore, Baumeister theorized that cultural norms have affected women's attitudes and behaviours more so than men. Another study showed that female, but not male, Hispanic immigrants to the United States were less likely to engage in vaginal, oral, and anal sex than Hispanics who had been born and raised in the United States. Condom use was unaffected by whether or not the person was an immigrant, suggesting that upbringing and acculturation had a significant impact on engaging in sexual activity and not on how they would protect themselves during sex.
Baumeister predicted that acculturation, the process of adopting the behaviour patterns and attitudes of the surrounding culture, should have a greater effect on the sexual behaviours and attitudes of female immigrants. However, a study conducted by Benuto and Meana, one of only two studies conducted on erotic plasticity, found no supporting evidence. When they examined the acculturation of students of non-American background at an American college, acculturation had the same effect on the sexual behaviours and attitudes of both men and women. Numerous potential methodological flaws of the study may have produced this contradictory result, such as women trying to appear socially desirable in their responses (see social desirability bias) or participants being too acculturated.
Religion
Catholic nuns are more successful at fulfilling their vows of celibacy and more willing to commit to their promises of sexual abstinence than male clergy, suggesting women can more easily adapt to such high non-permissive standards. A study on older unmarried adults found that those who were highly religious were less likely to have recently had sex compared to non-religious unmarried adults. However, this effect was stronger in women, suggesting a stronger influence on women's sexual behaviour. Church attendance and religiosity is also associated with lower odds of reporting masturbation among females. One possible explanation is that higher levels of spirituality and religiosity are associated with higher levels of sex guilt in women. One study even suggests that this differs amongst women of different culture. Religious Euro-Canadian women reported significantly higher levels of sexual desire and less sex guilt than Eastern Asian women. This is an example of two societal pressures, religion and culture, interacting to shape sexuality. Finally, Farmer and colleagues (2009) found that unreligious women are more likely to engage in unrestricted premarital intercourse behaviour than religious women. Such a difference was not demonstrated in religious and non-religious men.
Adolescent sexuality
Religiosity can also affect whether adolescents choose to abstain from sexual conduct. Commitment to religion and having friends with similar commitments has a stronger impact on girls than boys. Other factors, such as family members' disapproval of adolescent sexual behaviour also play a significant role.
Heritability
Heritability is the amount of differences between individuals that is the product of genetics. According to female erotic plasticity theory, sexual behaviours of men should be more heritable because there is a stronger biological component driving these behaviours. A study examining adult twins in Sweden found a lower genetic component for the engagement in same-sex behaviours in women than in men. Shared environment also played a larger role in women's same-sex behaviours than in men's, although unique environmental factors were roughly the same. On the other hand, in their study of Australian twins, Bailey, Dunne and Martin found a concordance in sexual behaviour of 20% for male MZ twins, and of 24% for female MZ twins.
Another twin study showed male identical twins are more likely than female identical twins to begin having sex at the same age. Shared environment plays a greater role than genetics in risky sexual behaviours in adolescent females.
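The twin comparisons above are conventionally quantified with Falconer's formula, which estimates heritability from the gap between monozygotic (MZ) and dizygotic (DZ) twin correlations. A worked sketch, treating concordance as a rough proxy for correlation and using a hypothetical DZ value (the studies cited report only MZ concordances):

```latex
% Falconer's estimate of broad-sense heritability
h^{2} \approx 2\,\left(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}\right)
% Illustration with the male MZ concordance of 0.20 reported above and a
% hypothetical DZ concordance of 0.10 (not from the cited studies):
h^{2} \approx 2\,(0.20 - 0.10) = 0.20
```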
Attitude–behaviour inconsistencies
Baumeister's 3rd prediction states that women should have greater inconsistencies between their attitudes towards sexual behaviours and whether they actually engage in said behaviours. Wives are more likely than husbands to report that they changed a "great deal" in their habits, ideas and expectations of sex over 20 years of marriage. Even more husbands reported that their spouses changed than did wives. Another example is condom use, for which women in the past have demonstrated difficulty in expressing their desire to use them during sex. However, a 2008 study by Woolf and Maisto found that this trend is declining, suggesting traditional gender roles in culture may be changing.
Gender similarities
Although the female erotic plasticity theory states that the men's and women's sexuality are different, some evidence suggests that men's sexuality too can be affected by sociocultural factors. Although religious commitment and family member's stances on adolescent sexual behaviours have a significant impact on females' choice to abstain, to a lesser extent it affects males' choice as well. Also, the fact that some male clergy are successful in maintaining their vows of celibacy suggests some degree of erotic plasticity. College education is associated with an increase in variety of sexual behaviours in both men and women. Asian males and females consistently report more conservative sexual attitudes than Hispanic and Euro-Americans.
Sexual arousal
According to Meredith Chivers, straight women are physically aroused by a greater variety of erotic images than men, and this physical arousal does not match subjective arousal.
Similar results were found in a study that showed both consensual and non-consensual sex scenes to men and women. Neither men nor women reported sexual arousal to the rape scenes, but women's bodies responded in a similar way to both scenarios. This may be because women's physical arousal, regardless of psychological arousal, is an evolutionary automatic response to prevent damage during rape.
A study that measured sexual arousal through pupil dilation found that physical response of lesbian and bisexual women to erotic images was more category-specific than that of straight women, with lesbian women showing more response to women, and bisexual women showing more response to the preferred sex than the other. This may be due to masculinization of the brain via prenatal hormones. The difference between straight and non-straight women was consistent with Chivers' findings, although straight women did show more consistency with their orientation with this measure than with the genital measure.
Sexual fluidity and same-sex behaviours
Sexual fluidity is the concept that sexual orientation or sexuality is not rigid, but rather can change over time. According to Lisa Diamond, developer of the concept, women generally tend to be more fluid in their sexuality than men. In her study of lesbian, bisexual and unlabeled women, she found that these had a tendency of changing their sexual identities and behaviour over time.
Other studies have shown as well greater fluidity among lesbians, compared with homosexual men. However, heterosexual men and women were equally stable in their orientation, and bisexual men and women were similarly unstable.
Often, sexual orientation and sexual orientation identity are not distinguished, which can impact accurately assessing sexual identity and whether or not sexual orientation is able to change; sexual orientation identity can change throughout an individual's life, and may or may not align with biological sex, sexual behavior or actual sexual orientation. Sexual orientation is stable and unlikely to change for the vast majority of people, but some research indicates that some people may experience change in their sexual orientation, and this is more likely for women than for men. The American Psychological Association distinguishes between sexual orientation (an innate attraction) and sexual orientation identity (which may change at any point in a person's life).
Women who remained in relationships with male-to-female transsexuals maintained a heterosexual identity, yet reported changes in their sexual lives (Aramburu Alegría, 2012). Some women reported that their relationships no longer included sexual activity, while others reported that things were still changing. According to Lippa (2006), heterosexual women with high sex drives tend to be attracted to both women and men, whereas in heterosexual men a high sex drive is associated with attraction to one sex only, suggesting greater plasticity in women's sexuality.
Erotic plasticity and gender/sexual variation
Lesbians are more likely than gay men to engage in heterosexual sex, suggesting greater variability in their sexuality. Little research has been done on people with gender variation, such as transgender people.
Little is known about erotic plasticity in transsexuals. Sex reassignment surgery and hormone therapy (e.g., testosterone) in female-to-male transsexuals produce an increase in their sexual desire, but it is uncertain how erotic plasticity plays a role. Heterosexual female-to-male transsexuals – those who are sexually attracted to women – have more sexual partners than nonheterosexual female-to-male transsexuals, but again, erotic plasticity's role in this, if one actually exists, is uncertain.
Erotic Plasticity Questionnaire
In her Ph.D. dissertation in 2009, Lorraine Benuto attempted to create a scale measuring erotic plasticity. Her scale, the EPQ (Erotic Plasticity Questionnaire), contained the following subscales, each believed to be a component of erotic plasticity:
Fluidity (of behaviours on the same-sex/opposite-sex continuum)
Attitude-Behaviour Inconsistency
Changes in Attitudes (over time)
Perception of Choice
Sociocultural Influence
When administered to a test population, women scored higher on fluidity, attitude-behaviour inconsistency, and overall erotic plasticity. There were no significant gender differences in changes in attitudes, perception of choice and sociocultural influences. The test also did not demonstrate any relationship between erotic plasticity and locus of control, sexual liberality and openness. Benuto also did not find a negative correlation between sex drive and EPQ score, which is unexpected. This is either because of a methodological problem in the scale or a problem in Baumeister's hypothesis that plasticity is related to sex drive. Furthermore, the subscales of the EPQ did not correlate well with each other, leading Benuto to hypothesize that perhaps there is not just one type of plasticity, but plasticities, and erotic plasticity is a much more complicated construct than initially imagined.
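To see concretely what "the subscales of the EPQ did not correlate well with each other" means, a minimal scoring sketch (random placeholder responses, not Benuto's data; NumPy and pandas assumed available) computes per-respondent subscale scores and their inter-subscale correlation matrix; near-zero off-diagonal entries are the pattern that led Benuto to posit multiple "plasticities".

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
subscales = ["fluidity", "attitude_behaviour_inconsistency",
             "attitude_change", "perceived_choice", "sociocultural"]

# Placeholder 1-5 Likert responses: 200 respondents, 4 items per subscale.
items = {f"{s}_{i}": rng.integers(1, 6, size=200)
         for s in subscales for i in range(4)}
df = pd.DataFrame(items)

# Score each subscale as the mean of its items.
scores = pd.DataFrame({s: df[[f"{s}_{i}" for i in range(4)]].mean(axis=1)
                       for s in subscales})

# Off-diagonal values near zero indicate weakly related subscales,
# i.e. evidence against a single unitary "erotic plasticity" construct.
print(scores.corr().round(2))
```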
Contrary to the numerous studies Baumeister cites as evidence of sociocultural influences on women, Benuto did not find a gender difference on the sociocultural influence subscale. However, Baumeister's cited studies were not self-report studies, whereas Benuto's scale was, which may have contributed to the discrepancy.
Applications in sex therapy
Baumeister provided three applications of the theory of erotic plasticity in sex therapy. Sex differences in erotic plasticity can change how therapists will approach providing sex therapy to men and women. Baumeister found that cognitive therapy would be a better approach for female patients because sexual responses and behaviours are influenced by what things mean, therefore working with women's interpretations and understanding of these responses and behaviours would be of greatest benefit. Physiological therapy, such as hormone therapy, would therefore be best for male patients, since the focus would be more on the body than on the man's cognitions. Also, someone with high erotic plasticity will have less sexual self-knowledge and self-understanding than someone with low erotic plasticity since their behaviours and tastes are susceptible to change; this knowledge could be useful in helping someone perhaps confused about his or her sexual identity. Finally, prospects for successful sex therapy may be better for women than men, because if men develop a problem, their low plasticity will make it difficult to allow significant change after puberty.
Other useful applications of erotic plasticity in sex therapy include having women place much consideration in family and peer relationships, and any internal and external pressures that may be affecting their sexual identity, such as religious influences, cultural norms and politics.
Criticism and alternate explanations
Baumeister's theory of female erotic plasticity has been met with some criticism. Some argue Baumeister makes causal inferences from correlational research when discussing how education affects men and women differently. He was also criticized for his use of extreme groups to support his predictions, such as people of the least and most education. Below are two posited alternate explanations of erotic plasticity:
Shibley Hyde and Durik
In a 2000 paper, Janet Shibley Hyde and Amanda M. Durik argued that a more sociocultural explanation could be used to explain erotic plasticity. Firstly, education does not affect men's and women's sexual behaviour differently. Instead, it increases women's power, therefore women with the greatest amount of education are nearly equal in power with men. On the other hand, women who are the least educated have the least power relative to men. When comparing sexual behaviours of most educated and least educated men and women, they found that education actually increased the prevalence of many sexual activities in both sexes, including oral sex, anal sex, and having a same sex partner. The differences between men and women were much smaller in the most educated group than in the least educated group. Shibley Hyde and Durik speculate that more educated women are better at communicating their desires and have enough self-confidence to do so. They also may perform a greater variety of sexual activities because of their greater exposure to ideas and their commitment to learning.
Shibley Hyde and Durik also asserted that religion has a greater effect on women's sexual behaviours because a group with less power – in this case, women – will shape their behaviour to be more like the group with power, in this case, men. Therefore, women pay more attention to and conform more to religious teachings since it is the culture to which they must adapt. They back up this claim by presenting evidence that non-religious women and men are similar in the prevalence of all sexual activities, minus masturbation. On the contrary, Conservative Protestant men and women differed significantly in all sexual behaviours.
They also argued for a modern sexual double standard that is more restrictive of female sexuality than male sexuality. They claimed that now, extramarital sex is more tolerated in women than in the past, but it is still less acceptable in women than in men. Therefore, these different gender roles will exert powerful influences on both men's and women's behaviour and sexuality. Finally, they claim that the greater evidence for attitude-behaviour inconsistency in women is not the result of high erotic plasticity, but because of men's greater interpersonal power. Although women may, for example, have the intention of using condoms or have negative attitudes towards anal sex, men may use their greater power to do what it is they desire if it differs from what their partner wants.
Benuto
Benuto (2009) argues that heightened fluidity and sociocultural influences, two components of erotic plasticity, actually stand in opposition to each other. Although scientific evidence exists that women's sexual behaviours are indeed more fluid than men's, Benuto argues there is nothing in society that would encourage women to engage in same-sex behaviour. She hypothesizes that, based on the properties of her EPQ scale (Benuto, 2009), erotic plasticity may not be a unitary construct like Baumeister initially proposed, and that there perhaps may be multiple "plasticities", each composed of different constructs. Such constructs include sociocultural influences, locus of control and changes of sexual attitudes over time. Finally, it is possible that the heightened attitude-behaviour inconsistency in women could either be due to the powerlessness of women, or women wanting to maintain harmony and nurturance in their relationship.
See also
Environment and sexual orientation
Lovemap
References
Human sexuality
Sexuality and society | Erotic plasticity | Biology | 3,620 |
36,956,239 | https://en.wikipedia.org/wiki/Isolated%20horizon | It was customary to represent black hole horizons via stationary solutions of field equations, i.e., solutions which admit a time-translational Killing vector field everywhere, not just in a small neighborhood of the black hole. While this simple idealization was natural as a starting point, it is overly restrictive. Physically, it should be sufficient to impose boundary conditions at the horizon which ensure only that the black hole itself is isolated. That is, it should suffice to demand only that the intrinsic geometry of the horizon be time independent, whereas the geometry outside may be dynamical and admit gravitational and other radiation.
An advantage of isolated horizons over event horizons is that while one needs the entire spacetime history to locate an event horizon, isolated horizons are defined using local spacetime structures only. The laws of black hole mechanics, initially proved for event horizons, are generalized to isolated horizons.
An isolated horizon refers to the quasilocal definition of a black hole which is in equilibrium with its exterior, and both the intrinsic and extrinsic structures of an isolated horizon (IH) are preserved by the null equivalence class $[\ell]$ of null normals related by constant rescaling. The concept of IHs is developed based on the ideas of non-expanding horizons (NEHs) and weakly isolated horizons (WIHs): a NEH is a null surface whose intrinsic structure is preserved and constitutes the geometric prototype of WIHs and IHs, while a WIH is a NEH with a well-defined surface gravity, based on which black-hole mechanics can be quasilocally generalized.
Definition of IHs
A three-dimensional submanifold $\Delta$ equipped with an equivalence class $[\ell]$ of null normals is defined as an IH if it respects the following conditions:
(i) $\Delta$ is null and topologically $S^2 \times \mathbb{R}$;
(ii) Along any null normal field $\ell$ tangent to $\Delta$, the outgoing expansion rate $\theta_{(\ell)}$ vanishes;
(iii) All field equations hold on $\Delta$, and the stress–energy tensor $T_{ab}$ on $\Delta$ is such that $-T^a{}_b\,\ell^b$ is a future-directed causal vector for any future-directed null normal $\ell^a$;
(iv) The commutator $[\,\mathcal{L}_\ell\,,\,\hat{D}\,] \;\hat{=}\; 0$, where $\hat{D}$ denotes the induced connection on the horizon.
Note: Following the convention set up in refs., "hat" over the equality symbol ($\hat{=}$) means equality on the black-hole horizons (NEHs), and "hat" over quantities and operators ($\hat{D}$, etc.) denotes those on the horizon or on a foliation leaf of the horizon (this makes no difference for IHs).
Boundary conditions of IHs
The properties of a generic IH manifest themselves as a set of boundary conditions expressed in the language of the Newman–Penrose formalism,
$\kappa \;\hat{=}\; 0$ (geodesic), $\operatorname{Im}(\rho) \;\hat{=}\; 0$ (twist-free, hypersurface orthogonal), $\rho \;\hat{=}\; 0$ (expansion-free), $\sigma \;\hat{=}\; 0$ (shear-free),
$\Phi_{00} \;\hat{=}\; \Phi_{01} \;\hat{=}\; 0$ (no flux of any kinds of matter charges across the horizon),
$\Psi_0 \;\hat{=}\; \Psi_1 \;\hat{=}\; 0$ (no gravitational waves across the horizon).
In addition, for an electromagnetic IH, the Maxwell–NP scalar obeys $\phi_0 \;\hat{=}\; 0$.
Moreover, in a tetrad adapted to the IH structure, we have
Remark: In fact, these boundary conditions of IHs just inherit those of NEHs.
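To see why the boundary conditions follow, recall the Newman–Penrose version of the Raychaudhuri equation along the null normal; a derivation sketch (standard for NEHs, with the tetrad chosen so that $\ell$ is geodesic and affinely parametrized):

```latex
% NP Raychaudhuri equation along \ell with \kappa \,\hat{=}\, 0:
D\rho \;\hat{=}\; \rho^{2} + \sigma\bar{\sigma} + \Phi_{00}
% On a non-expanding horizon \rho \,\hat{=}\, 0 everywhere, and D acts
% tangentially to the horizon, so D\rho \,\hat{=}\, 0 and hence
0 \;\hat{=}\; \sigma\bar{\sigma} + \Phi_{00}.
% Both terms are non-negative (\Phi_{00} \propto T_{ab}\ell^{a}\ell^{b} \ge 0
% by the energy condition in (iii)), so each must vanish separately:
\sigma \;\hat{=}\; 0, \qquad \Phi_{00} \;\hat{=}\; 0,
% which are the shear-free and matter-flux conditions listed above.
```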
Extension of the on-horizon adapted tetrad
Full analysis of the geometry and mechanics of an IH relies on the on-horizon adapted tetrad. However, a more comprehensive view of IHs often requires investigation of the near-horizon vicinity and the off-horizon exterior. The adapted tetrad on an IH can be smoothly extended to the following form, which covers both the horizon and off-horizon regions,
where the remaining two coordinates are either real isothermal coordinates or complex stereographic coordinates labeling the cross-sections of {v = constant, r = constant}, and the gauge conditions in this tetrad are
Applications
The local nature of the definition of an isolated horizon makes it more convenient for numerical studies.
The local nature makes the Hamiltonian description viable. This framework offers a natural point of departure for non-perturbative quantization and derivation of black hole entropy from microscopic degrees of freedom.
See also
Non-expanding horizon
Newman–Penrose formalism
References
General relativity
Black holes | Isolated horizon | Physics,Astronomy | 816 |
30,856,548 | https://en.wikipedia.org/wiki/Samsung%20Galaxy%20S%20II | The Samsung Galaxy S II (also unofficially known as the Samsung Galaxy S2) is a touchscreen-enabled, slate-format Android smartphone developed and marketed by Samsung Electronics, as the second smartphone of the Samsung Galaxy S series. It has additional software features, expanded hardware, and a redesigned physique compared to its predecessor, the Samsung Galaxy S. The S II was launched with Android 2.3.3/2.3.4 "Gingerbread", with updates to Android 4.1.2 "Jelly Bean".
Samsung unveiled the S II on 13 February 2011 at the Mobile World Congress (MWC) in Barcelona. It was one of the slimmest smartphones of the time, mostly 8.49 mm thick, except for two small bulges which take the maximum thickness of the phone to 9.91 mm.
The Galaxy S II has a 1.2 GHz dual-core "Exynos" system on a chip (SoC) processor, 1 GB of RAM, a WVGA Super AMOLED Plus screen display and an 8-megapixel camera with flash and 1080p full high definition video recording. It is one of the first devices to offer a Mobile High-definition Link (MHL), which allows up to 1080p uncompressed video output to an MHL enabled TV or to an MHL to HDMI adapter, while charging the device at the same time. USB On-The-Go is supported.
The user-replaceable battery gives up to ten hours of heavy usage, or two days of lighter usage. According to Samsung, the Galaxy S II is capable of providing 9 hours of talk time on 3G and 18.3 hours on 2G.
The Galaxy S II was popular and a huge success both critically and commercially, selling 3 million units within its first 55 days on the market. It was succeeded by the Samsung Galaxy S III in May 2012.
Release
The Galaxy S II was given worldwide release dates starting from May 2011, by more than 140 vendors in some 120 countries. On 9 May 2011, Samsung announced that they had received pre-orders for 3 million Galaxy S II units globally.
Some time after the device's release, Samsung also released a variation of the phone known as the Galaxy R, which uses a Nvidia Tegra 2 chipset.
Another variant of the S II, called the Galaxy S II Epic 4G Touch, was announced in August 2011 and released in September of that same year. The phone was available via Sprint, and has a bigger-capacity battery than the original S II. It is also heavier than the original S II, at 130 g.
Samsung also reportedly shipped Galaxy S II units for free to several developers of the custom Android distribution CyanogenMod (particularly those who had maintained its ports for the Galaxy S), with the intent that they port CyanogenMod 7 to the device.
Features
Software and services
The Galaxy S II was launched with Android 2.3 "Gingerbread". American variants began shipments with the slightly updated version 2.3.5 installed. Version 2.3.6 was made globally available on 12 December 2011. On 13 March 2012, Samsung began to roll out upgrades to Android 4.0.3 "Ice Cream Sandwich" through their phone management software KIES to users in South Korea, Hungary, Poland and Sweden. Russian users received the update on 5 July 2012, while the rest of Europe received it on 1 August 2012. In February 2013, Samsung began rolling out an update to Android 4.1.2 "Jelly Bean" for the device.
The S II employs the TouchWiz 4.0 user interface, following the same principle as TouchWiz 3.0 found on the Galaxy S, with new improvements, such as hardware acceleration. It also has an optional gesture-based interaction called "motion" which (among other things) allows users to zoom in and out by placing two fingers on the screen and tilting the device towards and away from themselves to zoom in and out respectively. This gesture function works on both the web browser and the images in gallery used within this device. "Panning" on TouchWiz 4.0 allows the movement of widgets and icons shortcuts between screens, by allowing the device to be held and moved from side to side to scroll through home screens. This gesture-based management of widgets is a new optional method next to the existing method of holding and swiping between home screens. The Android 4.1 update backports the TouchWiz Nature interface and other features from the Galaxy S III, such as Direct Call, Pop-up Play, Smart Stay, and Easy Mode.
Four new Samsung Hub applications were revealed at the 2011 MWC: Social Hub, which integrates popular social networking services into one place rather than in separate applications; Readers Hub, providing the ability to access, read and download online newspapers, ebooks and magazines from a worldwide selection; Music Hub (in partnership with 7digital), an application store for downloading and purchasing music tracks on the device; and Game Hub (in partnership with Gameloft), an application store for downloading and purchasing games. Additional applications include Kies 2.0, Kies Air, AllShare (for DLNA), Voice Recognition, Google Voice Translation, Google Maps with Latitude, Places, Navigation (beta) and Lost Phone Management, Adobe Flash 10.2, the QuickOffice application and 'QuickType' by SWYPE.
Before launch, it was announced that Samsung had taken steps to incorporate Enterprise software for business users, which included On Device Encryption, Cisco’s AnyConnect VPN, device management, Cisco WebEx, Juniper, and secure remote device management from Sybase.
The Galaxy S II comes with support for many multimedia file formats and codecs. For audio it supports FLAC, WAV, Vorbis, MP3, AAC, AAC+, eAAC+, WMA, AMR-NB, AMR-WB, MID, AC3, XMF. For video formats and codecs it supports MPEG-4, H.264, H.263, DivX HD/XviD, VC-1, 3GP (MPEG-4), WMV (ASF) as well as AVI (DivX)), MKV, FLV and the Sorenson codec. For H.264 playback, the device natively supports 8-bit encodes along with up to 1080p HD video playback.
Unofficially, the Galaxy S II can run Android 13 "Tiramisu".
Hardware and design
Chipsets
The Galaxy S II has a 1.2 GHz dual core ARM Cortex-A9 processor that uses Samsung's own 'Exynos 4210' System on a chip (SoC) that was previously code-named "Orion". The Exynos branded SoC was the source of much speculation concerning another branded successor to the previous "Hummingbird" single-core SoC of the Samsung Galaxy S. The Exynos 4 Dual 45 nm (previously Exynos 4210) uses ARM's Mali-400 MP GPU. This graphics GPU, supplied by ARM, is a move away from the PowerVR GPU of the Samsung Galaxy S.
The Exynos 4210 supports ARM's SIMD engine (also known as Media Processing Engine, or 'NEON' instructions), and may give a significant performance advantage in critical performance situations such as accelerated decoding for many multimedia codecs and formats (e.g., On2's VP6/7/8 or Real formats).
The Mali-400 GPU in the Exynos 4210 SoC is one of the few GPUs powering Android devices, if not the only one, that does not support GL_RGB framebuffer objects (FBOs), only GL_RGBA. The newer Galaxy S II variant (I9100G), based on the PowerVR SGX540, does not exhibit the issue.
At the 2011 Game Developers Conference ARM's representatives demonstrated 60 Hz framerate playback in stereoscopic 3D running on the same Mali-400 MP and Exynos SoC. They said that an increased framerate of 70 Hz would be possible through the use of an HDMI 1.4 port.
Motorola advertised the Atrix in June 2011 as "the world's most powerful smartphone"; in August 2011 the UK Advertising Standards Authority ruled that the Atrix was not as powerful as the Galaxy S II, which has a faster processor.
A newer Samsung Galaxy S II variant (i9100G) uses a 1.2 GHz dual core TI OMAP 4430 processor with PowerVR SGX540 graphics.
Storage and RAM
The Galaxy S II has 1 GB of dedicated RAM and up to 32 GB of internal mass storage. Within the battery compartment there is an external microSD card slot capable of recognizing and using a 32 GB microSDHC memory card.
Display
The Samsung Galaxy S II uses a WVGA (800 x 480) Super AMOLED Plus capacitive touchscreen that is covered by Gorilla Glass with an oleophobic fingerprint-resistant coating. The display is an upgrade of its predecessor, and the "Plus" signifies that the display panel has done away with Pentile matrix to regular RGB matrix display which results in a 50% increase in sub-pixels. This translates to grain reduction and sharper images and text. In addition, Samsung has claimed that Super AMOLED Plus displays are 18% more power efficient than the older Super AMOLED displays. Some phones have display issues, with a few users reporting a "yellow tint" on the left bottom edge of the display when a neutral grey background is displayed.
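For reference, the pixel density implied by the WVGA resolution and the 4.3-inch diagonal quoted elsewhere in this article works out as follows:

```latex
\mathrm{PPI} \;=\; \frac{\sqrt{800^{2} + 480^{2}}}{4.3\,\mathrm{in}}
\;\approx\; \frac{933}{4.3} \;\approx\; 217 \ \text{pixels per inch}
```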
Audio
The Galaxy S II uses Yamaha audio hardware. The Galaxy S II's predecessor, the original Galaxy S, used Wolfson's WM8994 DAC. User feedback on Internet forums, as well as an in-depth review at Clove, has described the Yamaha chip's sound quality as inferior to that of the Wolfson chip featured in the original Galaxy S.
Camera
On the back of the device is an 8-megapixel Back-illuminated sensor camera with single-LED flash that can record videos in full high-definition 1080p at 30 frames per second.
It is the first mobile phone by Samsung Mobile that is able to record videos in full high-definition (FullHD 1080p).
There is also a fixed focus front-facing 2-megapixel camera for video calling, taking photos as well as general video recording, with a maximum resolution of 640x480 (VGA).
Near-field communication
The Galaxy S II is one of the earliest Android devices to natively support near-field communication (NFC), following the Google Nexus S, which was the first de facto NFC smartphone. Reportedly the UK version was supplied without an NFC chip at the beginning of its production run, with an NFC-equipped version released later in 2011.
Mobile high-definition link
Samsung has also included a new high-definition connection technology called Mobile High-definition Link (MHL). The main specialty of MHL is that it is optimized for mobile devices by allowing the device's battery to be charged while at the same time playing back multimedia content. For the Galaxy S II, the industry standard micro USB port found on the bottom of the device can be used with an MHL connector for a TV out connection to an external display, such as a high definition television.
USB on-the-go
The micro USB port on this device also supports the USB OTG standard, which means the Galaxy S II can act as a 'host' device in the same way as a desktop computer, allowing external USB devices to be plugged in and used. These external USB devices typically include USB flash drives and separately powered external hard drives. A video demonstration on YouTube has shown the OTG function to be readily available with an ordinary micro USB (B-type) OTG adaptor. The same video mentions a successful test of a 2 TB USB external hard drive (requiring its own power source), but reports failures when trying to connect the tested USB keyboards, mice and game pads. Currently the only file system supported for USB drives within OTG is FAT32.
Headphone plug
A 3.5 mm TRRS headset jack is available and is located on the top side of the device. The micro USB connection port is located on the bottom side of the device.
Connectivity
Wireless connectivity is provided by Broadcom's BCM4330 combo chip, which integrates Wi-Fi, Bluetooth and an FM receiver.
Phones released to the US market lack the FM receiver. The BCM4330 supports Wi-Fi Direct, which enables devices to communicate directly with one another without having to interact with an access point. Although the BCM4330 chip supports Bluetooth 4.0, the Galaxy S II is limited to Bluetooth 3.0 on the last Android version released by Samsung (4.1.2); Bluetooth 4.0 support was introduced in Android 4.3, which requires upgrading to alternative firmware.
Additional accessories available include:
Dock connector for battery charging and audio-visual output
MHL cable which makes use of the device's micro USB port for HDMI output
USB OTG adaptor for use with external USB devices such as USB flash drives.
Stylus pen for use on the device's capacitive screen. Support for a stylus on the Galaxy S II was a precursor to the Samsung Galaxy Note.
A number of case manufacturers have released a variety of cases for the Galaxy S II.
A Samsung branded Bluetooth headset for making phone calls.
A pair of portable speakers powered by the phone's USB port.
A vehicle mounting kit for dashboard placement of the phone, allows GPS navigation using the phone.
Variants
Galaxy S II – Model GT-I9100G
The Samsung Galaxy S II GT-I9100G was released in late 2011, and is usually sold instead of the original GT-I9100 in certain markets (mostly Asia and some parts of Europe). An overview of the Samsung Galaxy S II GT-I9100G can be seen on Samsung's official website. It features a Texas Instruments OMAP4430 SoC instead of the Exynos 4210 in the GT-I9100. It is visually identical to the GT-I9100, as well as having the same 1.2 GHz processor speed and dual-core ARM Cortex A9 processor technology. However, the SoC is of a different design and the Mali-400 GPU has been replaced by a PowerVR SGX 540 GPU. This difference in the SoC makes this variant incompatible with custom ROMs intended for the I9100, but it has been steadily gaining its own aftermarket support (such as from CyanogenMod) due to the relative ease of development and the openness of the TI OMAP platform.
Australia
Telstra and Vodafone Australia – Models GT-I9100T
The Galaxy S II (Model GT-I9100T) sold by Telstra, Vodafone Australia and some certain other carriers outside Australia is virtually identical to the I9100 and is functionally equivalent.
Telstra and Optus – Model GT-I9210T
In Australia the Galaxy S II 4G (Model GT-I9210T) uses a Qualcomm processor and supports Telstra's and Optus' 4G networks. However, analog radio and digital media are not supported.
Canada
Bell Mobility – Models GT-I9100M and SGH-I757M
Bell's Galaxy S II is identical to the international version, except that its model number is GT-I9100M. All custom ROMs running on I9100 international versions can be flashed to the I9100M also.
Bell's Samsung Galaxy S II HD LTE (Model SGH-I757M) is identical to the cancelled AT&T Skyrocket HD hence making the device another variant of the South Korean model of the Galaxy S II HD LTE. One difference between the South Korean model and the Bell Mobility model is the lack of a physical home button, instead, four capacitive buttons are used, one of which directly replaces the physical home button. The specification of the device is identical to the South Korean model. However, different frequencies bands are enabled on this device.
Rogers – Models SGH-I727R and SGH-I927
The Rogers Galaxy S II LTE (Model SGH-I727R) is identical to the AT&T Skyrocket, and features a larger screen 4.52", a bigger battery 1,850 mAh, and a different 1.5 GHz Qualcomm processor.
Rogers' Galaxy S Glide (Model SGH-I927) is the same phone with the same specs as the AT&T's Captivate Glide, except the carrier logo is on the back instead of behind the front glass panel.
Rogers launched the Samsung Galaxy S II LTE, launching in Fall 2011, soon after its LTE Launch in Toronto.
Note that the Galaxy S II LTE has a different model number (I9210) and came out later, and only in select markets, including Canada and South Korea.
Telus Mobility – Model SGH-T989D
Telus Mobility's 4G Galaxy S II X (Model SGH-T989D) is identical hardware-wise to the T-Mobile SGH-T989, including the Qualcomm 1.5 GHz dual core processor, larger 4.52 inch screen and 1,850 mAh battery. Although utilizing a different modem firmware, most custom ROMs running on T-Mobile versions can be flashed to the Telus T989D.
The design differs from both the Rogers/International and Bell/AT&T models. There is a chrome band around the edge and the plastic on the back has a leathery feel. Instead of the hardware home button, it has the standard four capacitive buttons. The Qualcomm processor allows for 42 Mbit/s HSPA+ download speeds that the Samsung Exynos processor is not currently capable of. It was released on 28 October 2011. A subsidiary of Telus, Koodo Mobile, also offers the SGH-T989D.
China
China – Model GT-I9108(China Mobile), GT-I9100G, SCH-I929
The Samsung Galaxy S II (Model GT-I9108) was released in late 2011, and it is sold in China by China Mobile. It is identical to the GT-I9100G, featuring the same Texas Instruments OMAP4430 SoC with a 1.2 GHz dual-core ARM Cortex A9 processor and PowerVR SGX 540 graphics processor. However, the GT-I9108 has TD-SCDMA support in place of WCDMA support found in other variants. The GT-I9108 is a regional model and has few available custom ROMs.
The Samsung Galaxy S II (Model SCH-I929) was released in late 2011, and it is sold in China by China Telecom. It is based on the design of Galaxy S II LTE (GT-I9210), but supports CDMA2000 1x EVDO for use with the carrier.
Europe – Model GT-I9100P
The Samsung Galaxy S II (Model GT-I9100P) was released in late 2011. It has the same hardware as GT-I9100 plus the NFC chip and battery (the battery is specific because it includes the antenna). To keep NFC enabled it is necessary to update the firmware using a P version. Any I9100 firmware can be used, but doing so will disable the NFC hardware.
India
Model: GT-i9100
GT-i9100 is a sim-free model released on 2 May 2011. This supports 2G/3G only.
Japan
KDDI AU – Model: ISW11SC
The KDDI Au Galaxy S II WiMAX (Model: ISW11SC) was first released on 20 January 2012 in the color Noble Black and was followed by a Ceramics White model on 24 March 2012 and a Shiny Magenta model on 20 July 2012. The ISW11SC currently runs Android 4.0.4 via an OTA update from the original 2.3.6 firmware. The ISW11SC uses the Samsung Exynos 4210 dual-core 1.4 GHz main CPU and a Qualcomm QSC6085 Modem chipset running at 192 MHz. It features 1 GB of RAM and 16 GB of ROM (11 GB available for user data storage) with support for up to 64 GB additional storage via the internal microSD slot. An 1850mAh battery powers the device. The ISW11SC features a Samsung SUPER AMOLED HD 1280x720 screen measuring 4.7 inches. Connectivity includes CDMA 800 MHz/2,100 MHz; 3G EV-DO Rev A; 2.4 GHz / 5 GHz 802.11 a/b/g/n Wi-Fi; Bluetooth 3.0 and an integrated WiMAX modem with speeds up to 40 Mbit/s down and 15.4 Mbit/s up. Like most Japanese domestic model phones the ISW11SC includes many Japan-specific applications. This phone features NFC functionality which is technically compatible with FeliCa RFID (such as with PASMO and SUICA payment systems) however, the software doesn't support the Japanese "Osaifu Keitai" mobile wallet and thus the phone cannot be used to make transactions with NFC in Japan.
NTT DoCoMo – Model SC-02C
NTT DoCoMo introduced a variant of the Galaxy S II (Model SC-02C) on 23 June 2011 as the successor to the DoCoMo Galaxy S (Model SC-02B). The SC-02C includes 1seg terrestrial television support, as well as i-mode software functions specific to DoCoMo handsets, such as i-channel, BeeTV, MelodyCall and DoCoMo map navigation. The SC-02C is powered by the Samsung Exynos 4210 Orion Dual-core 1.2 GHz (S5PC210) processor. The SC-02C uses the Wnn Japanese input system.
South Korea
All of variants optimized for use with South Korean wireless carriers have Samsung's Korean input system for feature phones, a Dubeolsik-layout virtual keyboard and a T-DMB tuner in place of an FM radio tuner.
KT – Model SHW-M250K
The KT variant, the Galaxy S II KT (Model SHW-M250K) uses KT's Wi-Fi CM instead of Android's Wi-Fi CM to connect to Wi-Fi networks. Additional features for KT users are installed by default.
LG U+ – Model SHW-M250L
Instead of WCDMA and HSPA, LG U+'s variant of the Galaxy S II (Model SHW-M250L) uses EV-DO Rev.B (KPCS 1.8 GHz) to accommodate the network technology deployed by LG U+. The SHW-M250L is slightly thicker (9.4 mm) than SK Telecom and KT variants (8.89 mm). Additional features for LG U+ users are installed by default.
SK Telecom – Model SHW-M250S
The SK Telecom variant of the Galaxy S II (Model SHW-M250S) uses the SK-MMS system instead of the OMA-MMS system for multimedia messaging. Additional features for SK Telecom users are installed by default.
United States
AT&T – Models SGH-I777, SGH-I727 and SGH-I927
AT&T Mobility began offering its first variant of the Galaxy S II (Model SGH-I777) on 2 October 2011. Prior to its release, AT&T Mobility's first variant of the device was code named "Attain" by Samsung.
The AT&T Mobility variant maintains the 4.3 inch display of the international version, but features four capacitive buttons. It also includes NFC capability.
AT&T Mobility introduced a second variant of the device called the Galaxy S II Skyrocket (Model SGH-I727) on 6 November 2011. Prior to its release, this second variant was code named "Skyrocket" by Samsung. This variant is similar to the international Samsung Galaxy S II LTE and is notable for its inclusion of an LTE radio. The inclusion of the LTE radio required changing the device's main processor from the Exynos to the Qualcomm Snapdragon MSM8660 because the Exynos does not support LTE. This version features the same 4.52 inch screen of the Sprint model. This variant supports Near Field Communications (NFC).
AT&T Mobility introduced a third variant called the Captivate Glide (Model SGH-I927) on 20 November 2011. The Captivate Glide differs from the other two AT&T Mobility variants primarily by the inclusion of a slide-out, physical QWERTY keyboard. The Captivate Glide also includes a dual-core, 1 GHz Tegra 2 dual-core processor instead of a 1.2 GHz Exynos processor. The display of this third variant is Super AMOLED instead of Super AMOLED Plus and the display size is reduced to 4 inches.
Sprint – Model SPH-D710
The Sprint variant (Model SPH-D710) of the Galaxy S II was initially released as the Galaxy S II Epic 4G Touch and was later renamed to the simpler Galaxy S II 4G. Prior to its release, Sprint's variant was codenamed "Within" by Samsung. The SPH-D710 first became available for Sprint customers on 16 September 2011, making Sprint the first carrier in the United States to offer a variant of the S II. The SPH-D710 is available to Sprint customers in black, titanium grey or white.
The Sprint variant has key differences from the "International" version of the Galaxy S II. The Sprint variant includes a 2500 MHz WiMax radio. The display of the Sprint variant, at 4.52 inches, is larger than that of the international version. The Sprint variant features four touch-capacitive buttons as opposed to the three-button hardware/capacitive combination found on the international version. Other differences include an LED notification light and a larger, 6.66 Wh battery.
The Sprint variant does not come equipped with NFC capability, unlike the variants offered by T-Mobile US and AT&T Mobility.
The Galaxy S II is a touchscreen-only device, unlike the Epic 4G, which includes a physical QWERTY keyboard.
On 28 March 2013, the Android 4.1.2 Jelly Bean (GB27) update was released through the Samsung Kies software. As of February 2014, there are no additional confirmed updates for this device.
The device has received 7 updates from Samsung since its original release on 16 September 2011.
Sprint has announced that on 6 November 2015 the Sprint WiMAX network will be decommissioned effectively removing 4G capabilities on the SPH-D710 model. Users had access to mobile broadband using 3G until March 31, 2022 when Sprint's 3G CDMA network was shut down.
Boost Mobile and Virgin Mobile USA
Sprint subsidiary Boost Mobile offers a Sprint SPH-D710 variant of the Galaxy S II 4G in both titanium grey and white options. Virgin Mobile offers a variant, model i9210, for its service.
Boost Mobile began offering the Samsung Galaxy S II 4G on 6 September 2012 for $369.99. Virgin Mobile USA began offering the Galaxy S II 4G on 15 November 2012 for $369.99.
In March 2013, the Boost Mobile and Virgin Mobile variants were also updated along with Sprint's to Android 4.1.2 Jelly Bean.
T-Mobile – Model SGH-T989
T-Mobile USA began taking pre-orders for its variant (Model SGH-T989) of the Galaxy S II on 11 October 2011 and began selling it in stores on 12 October 2011. Prior to its release, T-Mobile's variant of the device was code named "Hercules" by Samsung.
The T-Mobile variant has key differences from the "International" version of the Galaxy S II. The T-Mobile variant uses a 1.5 GHz dual-core Qualcomm APQ8060 (S3) Snapdragon processor, as opposed to the 1.2 GHz dual-core Exynos processor of the International version, because the Exynos processor is not compatible with T-Mobile's 42 Mbit/s HSPA+ network. The cellular radio of the T-Mobile variant supports UMTS bands I (2100 MHz), II (1900 MHz), IV (1700 MHz) and V (850 MHz). The display of the T-Mobile variant, at 4.52 inches, is larger than that of the international version. The T-Mobile variant features four touch-capacitive buttons as opposed to the three-button hardware/capacitive combination found on the international version. This variant uses the Adreno 220 GPU and supports ROMs based on Android versions up to 4.4.4. The T-Mobile variant, like the AT&T variant, supports Near Field Communication (NFC), integrated in the battery, which has a 6.85 Wh capacity.
As of 8 March 2013, the T-Mobile variant can be updated to Android 4.1.2 "Jelly Bean" using Samsung Kies.
U.S. Cellular – Model SCH-R760
U.S. Cellular's variant (Model SCH-R760) is equivalent to the Sprint variant, except for one specification: the U.S. Cellular variant does not include a 2500 MHz WiMAX radio.
Galaxy S II Plus – Model GT-I9105/P
The Galaxy S II Plus was announced at CES 2013. The phone has a Broadcom BCM28155 SoC with a 1.2 GHz dual-core processor and a VideoCore IV HW GPU instead of the Mali 400MP in the original Galaxy S II. Both the original and the "Plus" have 1 GB of RAM, but the latter only has 8 GB of internal storage, half that of the original, of which the operating system takes a significant cut. It uses a hyperglazed plastic body (the same as the Samsung Galaxy S III) and is available in Chic White and Dark Blue. The phone originally ran on Android 4.1.2 "Jelly Bean" with Samsung's TouchWiz Nature UX. An update to Android 4.2.2 was made available. Also released was an I9105P model, which supports NFC.
Reception
Reviews of the Galaxy S II have been universally positive. It was honored by MWC's Global Mobile Awards as "SmartPhone Of The Year 2012". Engadget gave the device a 9/10, calling it "the best Android smartphone yet" and "possibly the best smartphone, period." CNET UK gave the device a favorable review of 4.5/5 and described it as "one of the slimmest, lightest mobiles we've ever had the privilege to hold." TechRadar gave the device 5/5 stars and described it as one that "set a new bar for smartphones in 2011." Pocketnow was "impressed" with the speed of the web browser. SlashGear stated that the device "sets the benchmark for smartphones in general." GSMArena pointed out minor drawbacks such as an "all-plastic body" and the handset having "no dedicated camera key," but still called the handset "absurdly powerful" and concluded that "we just cannot see beyond the new Samsung flagship if we're to name the ultimate smartphone."
Slightly over one month after its debut, more than 1 million units of the Samsung Galaxy S II had been activated in South Korea. Worldwide, 3 million units were sold in 55 days. Samsung declared global shipments of over 5 million units 85 days after the first release, and 10 million after five months. Partially owing to strong sales of Samsung's Galaxy range of smartphones, Samsung overtook Apple in smartphone sales during Q3 2011, with a total market share of 23.8%, compared to Apple's 14.6%. In Q2 2012, Samsung also became the world's largest maker of mobile phones, dethroning Nokia.
Successor
The successor to the Galaxy S II was the Galaxy S III, unveiled in London on 3 May 2012 and commencing sales on 29 May 2012 with 10 million reported pre-orders.
See also
Comparison of Samsung Galaxy S smartphones
Comparison of smartphones
Samsung Galaxy S series
References
External links
Android (operating system) devices
Discontinued flagship smartphones
Samsung smartphones
Galaxy S II
Mobile phones introduced in 2011
Discontinued Samsung Galaxy smartphones
Mobile phones with user-replaceable battery | Samsung Galaxy S II | Technology | 6,894 |
64,547,519 | https://en.wikipedia.org/wiki/Transition%20metal%20isocyanide%20complexes | Transition metal isocyanide complexes are coordination compounds containing isocyanide ligands. Because isocyanides are relatively basic, but also good pi-acceptors, a wide range of complexes are known. Some isocyanide complexes are used in medical imaging.
Scope of isocyanide ligands
Several thousand isocyanides are known, but the coordination chemistry is dominated by a few ligands. Common isonitrile ligands are methyl isocyanide, tert-butyl isocyanide, phenyl isocyanide, and cyclohexylisocyanide.
Isocyanides are electronically similar to CO, but for most R groups, isocyanides are superior Lewis bases and weaker pi-acceptors. Trifluoromethylisocyanide is the exception; its coordination properties are very similar to those of CO.
Because the CNC linkage is linear, the cone angle of these ligands is small, so it is easy to prepare polyisocyanide complexes. Many complexes of isocyanides show high coordination numbers, including eight-coordinate cations. Very bulky isocyanide ligands are also known, e.g. C6H3-2,6-Ar2-NC (Ar = aryl).
Di- and triisocyanide ligands are well developed, e.g., (CH2)n(NC)2. Usually steric factors force these ligands to bind to two separate metals, i.e., they are binucleating ligands. Chelating diisocyanide ligands require elaborate backbones.
Synthesis
Because of their low steric profile and high basicity, isocyanide ligands often install easily, e.g. by treating metal halides with the isocyanide. Many metal cyanides can be N-alkylated to give isocyanide complexes.
Reactions
Typically, isocyanides are spectator ligands, but their reduced and oxidized complexes can prove reactive by virtue of the unsaturated nature of the ligand.
Cationic complexes are susceptible to nucleophilic attack at carbon. In this way, the first metal carbene complexes were prepared. Because isocyanides are both acceptors and donors, they stabilize a broader range of oxidation states than does CO. This advantage is illustrated by the isolation of the homoleptic vanadium hexaisocyanide complex in three oxidation states, i.e., [V(CNC6H3-2,6-Me2)6]n for n = -1, 0, +1.
Because isocyanides are more basic donor ligands than CO, their complexes are susceptible to oxidation and protonation. Thus, Fe(CNR)5 is easily protonated, whereas its carbonyl counterpart Fe(CO)5 is not:
Fe(CNR)5 + H+ → [HFeL5]+
Fe(CO)5 + H+ → no reaction
Some electron-rich isocyanide complexes protonate at N to give aminocarbyne complexes:
LnM-CNR + H+ → [LnM≡CN(H)R]+
Isocyanides sometimes insert into metal-alkyl bonds to form iminoacyls.
Structure and bonding
Isocyanide complexes often mirror the stoichiometry and structures of metal carbonyls. Like CO, isocyanides engage in pi-backbonding. The M-C-N angle provides some measure of the degree of backbonding. In electron-rich complexes, this angle usually deviates from 180°. Unlike CO, cationic and dicationic complexes are common. RNC ligands are typically terminal, but bridging RNC ligands are also common. Bridging isocyanides are always bent. General trends can be appreciated by inspection of the homoleptic complexes of the first row transition metals.
IR spectroscopy
The νC≡N band in isocyanides is intense in the range of 2165–2110 cm−1. The value of νC≡N is diagnostic of the electronic character of the complex. In complexes where RNC is primarily a sigma donor ligand, νC≡N shifts to higher energies vs the free isocyanide. Thus, in one such complex, νC≡N = 2152 and 2120 cm−1. In contrast, for the electron-rich species Fe2(CNEt)9, νC≡N = 2060, 1920, 1701, and 1652 cm−1.
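As a rough illustration of this diagnostic rule, the sketch below reports the coordination shift of the C≡N stretch relative to an assumed free-ligand value; the 2130 cm−1 reference is a hypothetical placeholder, not a measured value for any particular isocyanide:

```python
# Illustrative classifier for the nu(C≡N) trend described above.
def coordination_shift(nu_complex, nu_free=2130):
    shift = nu_complex - nu_free  # in cm^-1, relative to the free isocyanide
    if shift > 0:
        return f"+{shift} cm^-1: RNC acting mainly as a sigma donor"
    return f"{shift} cm^-1: substantial pi-backbonding (electron-rich metal)"

for nu in (2152, 2120, 2060, 1652):  # band positions quoted in the text
    print(nu, "->", coordination_shift(nu))
```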
See also
Cyanometalate - coordination compounds containing cyanide ligands (coordinating via C)
Transition metal nitrile complexes - coordination compounds containing nitrile ligands, which are isomers of isonitriles
References
Coordination complexes
Isocyanides | Transition metal isocyanide complexes | Chemistry | 1,005 |
1,600,321 | https://en.wikipedia.org/wiki/Chicago%20Sanitary%20and%20Ship%20Canal | The Chicago Sanitary and Ship Canal, historically known as the Chicago Drainage Canal, is a canal system that connects the Chicago River to the Des Plaines River. It reverses the direction of the Main Stem and the South Branch of the Chicago River, which now flows out of Lake Michigan rather than into it. The related Calumet-Saganashkee Channel does the same for the Calumet River a short distance to the south, joining the Chicago canal about halfway along its route to the Des Plaines. The two provide the only navigation for ships between the Great Lakes Waterway and the Mississippi River system.
The canal was in part built as a sewage treatment scheme. Prior to its opening in 1900, sewage from the city of Chicago was dumped into the Chicago River and flowed into Lake Michigan. The city's drinking water supply was (and remains) located offshore, and there were fears that the sewage could reach the intake and cause serious disease outbreaks. Since the sewer systems were already flowing into the river, the decision was made to reverse the flow of the river, thereby sending all the sewage inland where it could be diluted before emptying it into the Des Plaines.
Another goal of the construction was to replace the shallow and narrow Illinois and Michigan Canal (I&M), which had originally connected Lake Michigan with the Mississippi starting in 1848. As part of the construction of the new canal, the entire route was built to allow much larger ships to navigate it. It is substantially wider and deeper, over three times the size of the I&M. The I&M became a secondary route with the new canal's opening and was shut down entirely with the creation of the Illinois Waterway network in 1933.
The building of the Chicago canal served as intensive and practical training for engineers who later built the Panama Canal. The canal is operated by the Metropolitan Water Reclamation District of Greater Chicago. In 1999, the system was named a Civil Engineering Monument of the Millennium by the American Society of Civil Engineers (ASCE). The Canal was listed on the National Register of Historic Places on December 20, 2011.
Reasons for construction
Early Chicago sewage systems discharged directly into Lake Michigan or into the Chicago River, which itself flowed into the lake. The city's water supply also comes from the lake, through water intake cribs located offshore. There were fears that sewage could infiltrate the water supply, leading to typhoid fever, cholera, and dysentery. During a tremendous storm in 1885, the rainfall washed refuse from the river far out into the lake (although reports of an 1885 cholera epidemic are untrue), spurring a panic that a future similar storm would cause a huge epidemic in Chicago. The only reason for the storm not causing such a catastrophic event was that the weather was cooler than normal. The Sanitary District of Chicago (now The Metropolitan Water Reclamation District) was created by the Illinois legislature in 1889 in response to this close call.
In addition, the canal was built to supplement and ultimately replace the older and smaller Illinois and Michigan Canal (built 1848) as a conduit to the Mississippi River system. In 1871, the old canal had been deepened in an attempt to reverse the river and improve shipping but the reversal of the river only lasted one season. The I&M canal was also badly polluted as a result of unrestricted dumping from city sewers and industries, such as the Union Stock Yards.
Planning and construction, 1887–1922
By 1887, it was decided to reverse the flow of the Chicago River through civil engineering. Engineer Isham Randolph noted that a ridge lying inland from the lakeshore divided the Mississippi River drainage system from the Great Lakes drainage system. This low divide had been known since pre-Columbian times by the Native Americans, who used it as the Chicago Portage to cross from the Chicago River drainage to the Des Plaines River basin drainage. The Illinois and Michigan Canal was cut across that divide in the 1840s. In an attempt to better drain sewage and pollution in the Chicago River, the flow of the river had already been reversed in 1871, when the Illinois and Michigan Canal was deepened enough to reverse the river's flow for one season. A plan soon emerged to again cut through the ridge and reverse the flow permanently, carrying wastewater away from the lake, through the Des Plaines and Illinois rivers, to the Mississippi River and the Gulf of Mexico. In 1889, the Illinois General Assembly created the Sanitary District of Chicago (SDC) to carry out the plan. After four years of turmoil during construction, Isham Randolph was appointed Chief Engineer for the newly formed Sanitary District of Chicago and resolved many issues circulating around the project. While the canal was being built, permanent reversal of the Chicago River was attained in 1892, when the Army Corps of Engineers further deepened the Illinois and Michigan Canal.
One of the issues for Randolph to resolve was a strike of about 2000 union workers, centered in Lemont and Joliet. On June 1, 1893, quarrymen went out to protest a wage cut, an action that also drew in 1200 canal workers. Reports describe 400 quarrymen marching along the length of the canal project on June 2, between Lemont and Romeo, conducting a "reign of terror" at worksites, "armed with clubs and revolvers", "almost crazed with liquor". On the 9th strikers clashed with replacement workers and local law enforcement, and Governor Altgeld called out the First and Second Regiments of the Illinois National Guard. Dozens were wounded and at least five killed: strikers Gregor Kilka, Jacob (or Ignatz) Ast, Thomas Moorski, Mike Berger, and 17-year-old bystander John Kluga. The strike was settled by the 15th.
The new Chicago Sanitary and Ship Canal, linking the south branch of the Chicago River to the Des Plaines River at Lockport, opened on January 2, 1900, ahead of an application by the Missouri Attorney General for an injunction against the opening. However, it was not until January 17 that the complete flow of the water was released. Further construction from 1903 to 1907 extended the canal to Joliet, as the SDC wanted to replace the previously built Illinois and Michigan Canal with the Chicago Sanitary and Ship Canal. The rate of flow is controlled by the Lockport Powerhouse, sluice gates at Chicago Harbor and at the O'Brien Lock in the Calumet River, and also by pumps at Wilmette Harbor. Two more canals were later built to add to the system: the North Shore Channel in 1910, and the Calumet-Saganashkee Channel in 1922.
Construction of the Ship and Sanitary Canal was the largest earth-moving operation that had been undertaken in North America up to that time. It was also notable for training a generation of engineers, many of whom later worked on the Panama Canal. In 1989, the Sanitary District of Chicago was renamed the Metropolitan Water Reclamation District of Greater Chicago.
Diversion of water from the Great Lakes
The Chicago Sanitary and Ship Canal is designed to work by taking water from Lake Michigan and discharging it into the Mississippi River watershed. At the time of construction, a specific amount of water diversion was authorized by the United States Army Corps of Engineers (USACE) and approved by the Secretary of War, under provisions of various Rivers and Harbors Acts; over the years, however, this limit was neither honored nor well regulated. While the increased flow more rapidly flushed the untreated sewage, it was also seen as a hazard to navigation, which concerned USACE in relation to the level of the Great Lakes and the St. Lawrence River, from which the water was diverted. Litigation ensued from 1907, which eventually saw states downstream of the canal siding with the sanitary district and states upstream of Lake Michigan, along with Canada, siding against the district. The litigation was eventually decided by the Supreme Court in Sanitary District of Chicago v. United States in 1925, and again in Wisconsin v. Illinois in 1929. In 1930, management of the canal was turned over to the United States Army Corps of Engineers. The Corps of Engineers reduced the flow of water from Lake Michigan into the canal, but kept it open for navigation purposes. These decisions prompted the sanitary district to accelerate their treatment of raw sewage. Today, diversions from the Great Lakes system are regulated by an international treaty with Canada, through the International Joint Commission, and by governors of the Great Lakes states.
Pollution of the canals
Most local sewers in the Chicago area were built over 100 years ago, before wastewater treatment existed. They were designed to drain sanitary flow and a limited amount of stormwater directly into the river. If intercepting sewers and the Metropolitan Water Reclamation District of Greater Chicago (MWRD) water reclamation plants reach capacity during heavy rain, the local sewer continues to drain, or "overflow," to a waterway, causing concern about pollution. However, the MWRD's Tunnel and Reservoir Plan (TARP) has worked to decrease combined sewer overflows (CSOs) and has nearly eliminated them in the Calumet Area River System. Since the tunnels became operational in 2006, CSOs have been reduced from an average of 100 days per year to 50. Since Thornton Reservoir came online in 2015, CSOs have been nearly eliminated. TARP captures and stores combined stormwater and sewage that would otherwise overflow from sewers into waterways in rainy weather. This stored water is pumped from TARP to water reclamation plants to be cleaned before being released to waterways.
Asian carp and the canal
On November 20, 2009, the Corps of Engineers announced that a single sample of DNA from Asian carp had been found above the electric barrier constructed in the canal in an attempt to prevent carp from migrating into the Great Lakes. The silver carp, also known as the flying carp, displaces native species of fish by filter feeding and removing the bottom of the food chain. It migrated through the Mississippi River system and could make its way into the Great Lakes through the man-made canal. Carp were introduced to the U.S. with the blessing of the Environmental Protection Agency (EPA) in the 1970s to help remove algae from catfish farms in Arkansas. They escaped the farms.
On December 2, 2009, the Chicago Sanitary and Ship Canal closed, as the EPA and the Illinois Department of Natural Resources (IDNR) began applying a fish poison, rotenone, in an effort to kill Asian carp north of Lockport. Although no Asian carp were found in the two months of commercial and electrofishing, the massive fish kill did yield a single carp.
On December 21, 2009, Michigan Attorney General Mike Cox filed a lawsuit with the Supreme Court seeking the immediate closure of the Chicago Sanitary and Ship Canal to keep Asian carp out of Lake Michigan. The state of Illinois and the Corps of Engineers, which constructed the Canal, are co-defendants in the lawsuit.
In response to the Michigan lawsuit, on January 5, 2010, Illinois State Attorney General Lisa Madigan filed a counter-suit with the Supreme Court requesting that it reject Michigan's claims. Siding with the State of Illinois, both the Illinois Chamber of Commerce and the American Waterways Operators have filed affidavits, arguing that closing the Chicago Sanitary and Ship Canal would upset the movement of millions of tons of vital shipments of iron ore, coal, grain and other cargo, totaling more than $1.5 billion a year, and contribute to the loss of hundreds, perhaps thousands of jobs. However, Michigan along with several other Great Lakes states argue that the sport and commercial fishery and tourism associated with the fishery of the entire Great Lakes region is estimated at $7 billion a year, and impacts the economies of all Great Lakes states and Canada.
On January 19, 2010, the U.S. Supreme Court rejected the request for a preliminary injunction closing the canal. In August 2011, the United States Court of Appeals also rejected the preliminary injunction.
See also
Chicago 1885 cholera epidemic myth
Chicago flood
Tunnel and Reservoir Plan (TARP)
Isham Randolph
References
External links
A History from the Chicago Public Library. (However, this credits Rudolph Hering, not Isham Randolph, with the project.)
An album of photographs of the dig, including a 26 stanza poem written by Isham Randolph to Admiral Dewey on the opening of the canal
History and Heritage of Civil Engineering – Reversal of the Chicago River
Graph of Lakes Michigan and Huron water levels since 1860
Evaluation of the Potential for Hysteresis in Index-Velocity Ratings for the Chicago Sanitary and Ship Canal Near Lemont, Illinois United States Geological Survey
Canals in Illinois
Ship canals
Water supply and sanitation in the United States
Canals opened in 1900
Canals on the National Register of Historic Places in Illinois
Buildings and structures on the National Register of Historic Places in Chicago
Buildings and structures on the National Register of Historic Places in Cook County, Illinois
Historic districts in Chicago
Illinois waterways
Interbasin transfer
Transportation buildings and structures in Chicago
Transportation buildings and structures in DuPage County, Illinois
Transportation buildings and structures in Will County, Illinois
Lockport, Illinois
United States Army Corps of Engineers
Historic American Engineering Record in Illinois
Historic Civil Engineering Landmarks
Metropolitan Water Reclamation District of Greater Chicago | Chicago Sanitary and Ship Canal | Engineering,Environmental_science | 2,643 |
31,678,753 | https://en.wikipedia.org/wiki/Paracytophagy | Paracytophagy () is the cellular process whereby a cell engulfs a protrusion which extends from a neighboring cell. This protrusion may contain material which is actively transferred between the cells. The process of paracytophagy was first described as a crucial step during cell-to-cell spread of the intracellular bacterial pathogen Listeria monocytogenes, and is also commonly observed in Shigella flexneri. Paracytophagy allows these intracellular pathogens to spread directly from cell to cell, thus escaping immune detection and destruction. Studies of this process have contributed significantly to our understanding of the role of the actin cytoskeleton in eukaryotic cells.
Actin cytoskeleton
Actin is one of the main cytoskeletal proteins in eukaryotic cells. The polymerization of actin filaments is responsible for the formation of pseudopods, filopodia and lamellipodia during cell motility. Cells actively build actin microfilaments that push the cell membrane towards the direction of advance.
Nucleation factors and the Arp2/3 complex
Nucleation factors are enhancers of actin polymerization and contribute to the formation of the trimeric polymerization nucleus. This is a structure required to initiate the process of actin filament polymerization in a stable and efficient way. Nucleation factors such as WASP (Wiskott-Aldrich syndrome protein) help to form the seven-protein Arp2/3 nucleation complex, which resembles two actin monomers and therefore allows for easier formation of the polymerization nucleus. Arp2/3 is able to cap the trailing ("minus") end of the actin filament, allowing for faster polymerization at the "plus" end. It can also bind to the side of existing filaments to promote filament branching.
WASP analogs used by pathogens for intracellular motility
Certain intracellular pathogens such as the bacterial species Listeria monocytogenes and Shigella flexneri can manipulate host cell actin polymerization to move through the cytosol and spread to neighboring cells (see below). Studies of these bacteria, especially of Listeria Actin assembly-inducing protein (ActA), have resulted in further understanding of the actions of WASP. ActA is a nucleation promoting factor that mimics WASP. It is expressed polarized to the posterior end of the bacterium, allowing Arp2/3-mediated actin nucleation. This pushes the bacterium in the anterior direction, leaving a trailing "comet tail" of actin. In the case of Shigella, which also moves using an actin comet tail, the bacterial factor recruits host cell WASPs in order to promote actin nucleation.
Exchange of cellular material between adjacent cells
Cells can exchange material through various mechanisms, such as by secreting proteins, releasing extracellular vesicles such as exosomes or microvesicles, or more directly engulfing pieces of adjacent cells. In one example, filopodia-like protrusions, or tunneling nanotubes, directed toward neighboring cells in a culture of rat PC12 cells have been shown to facilitate transport of organelles through transient membrane fusion. In another example, during bone marrow homing, cells of the surrounding bone engulf pieces of bone marrow hematopoietic cells. These osteoblasts make contact with hematopoietic stem-progenitor cells through membrane nanotubes, and pieces of the donor cells are transferred over time to various endocytic compartments of the target osteoblasts.
A distinct process known as trogocytosis, the exchange of lipid rafts or membrane patches between immune cells, can facilitate response to foreign stimuli. Moreover, exosomes have been shown to deliver not only antigens for cross-presentation, but also MHCII and co-stimulatory molecules for lymphocyte T activation. In non-immune cells, it has been demonstrated that mitochondria can be exchanged intercellularly to rescue metabolically non-viable cells lacking mitochondria. Mitochondrial transfer has also been observed in cancer cells.
Argosomes and melanosomes
Argosomes are derived from basolateral epithelial membranes and allow communication between adjacent cells. They were first described in Drosophila melanogaster, where they act as a vehicle for the spread of molecules through the epithelia of imaginal discs. Melanosomes are also transferred by filopodia from melanocytes to keratinocytes. This transfer involves a classic filopodial forming pathway, with Cdc42 and WASP as key factors.
Argosomes, melanosomes, and other examples of epithelial transfer have been compared with the process of paracytophagy, all of which can be viewed as special cases of intercellular material transfer between epithelial cells.
Role in the life cycle of intracellular pathogens
The two main examples of paracytophagy are the modes of cell-cell transmission of Listeria monocytogenes and Shigella flexneri. In the case of Listeria, the process was first described in detail using electron microscopy and video microscopy. The following is a description of the process of cell-cell transmission of Listeria monocytogenes, primarily based on Robbins et al. (1999):
Early events
In an already infected "donor" cell, the Listeria bacterium expresses ActA, which results in formation of the actin comet tail and movement of the bacterium throughout the cytoplasm. When the bacterium encounters the donor cell membrane, it will either ricochet off it or adhere to it and begin to push outwards, distending the membrane and forming a protrusion of 3-18 μm. The close interaction between the bacterium and the host cell membrane is thought to depend on Ezrin, a member of the ERM family of membrane-associated proteins. Ezrin attaches the actin-propelled bacterium to the plasma membrane by crosslinking the actin comet tail to the membrane, and maintains this interaction throughout the protrusion process.
Invasion of target cell and secondary vacuole formation
As the normal site of infection is the gut columnar epithelium, cells are packed closely together and a cell protrusion from one cell will easily push into a neighboring "target" cell without rupturing the target cell membrane or the donor protrusion membrane. At this point, the bacterium at the tip of the protrusion will begin to undergo "fitful movement" caused by continuing polymerization of actin at its rear. After 7–15 minutes, the donor cell membrane pinches off and fitful movement ceases for 15–25 minutes due to depletion of ATP. Subsequently, the target membrane pinches off (taking 30–150 seconds) and the secondary vacuole containing the bacterium forms inside the target cell cytoplasm.
Secondary vacuole breakdown and target cell infection
Within 5 minutes, the target cell becomes infected when the secondary vacuole begins to acidify and the inner (donor cell-derived) membrane breaks down through the action of bacterial phospholipases (PI-PLC and PC-PLC). Shortly thereafter, the outer membrane breaks down as a result of the actions of the bacterial protein listeriolysin O which punctures the vacuolar membrane. A cloud of residual donor cell-derived actin persists around the bacterium for up to 30 minutes. The bacterial metalloprotease Mpl cleaves ActA in a pH-dependent fashion while the bacterium is still within the acidified secondary vacuole, but new ActA transcription is not required as pre-existing ActA mRNA can be utilized to translate new ActA protein. The bacterium regains motility and the infection proceeds.
Impact on disease
The most severe symptoms of Listeriosis result from involvement of the central nervous system (CNS). These severe and often fatal symptoms include meningitis, rhombencephalitis, and encephalitis. These forms of disease are a direct result of Listeria pathogenicity mechanisms at the cellular level. Listerial infection involving the CNS can occur via three known routes: through the blood, through intracellular delivery, or through neuronal intracellular spread. Paracytophagous cell to cell spread offers Listeria access to the CNS by the latter two mechanisms.
Paracytophagy in CNS infection by Listeria
In peripheral tissues, Listeria can invade cells such as monocytes and dendritic cells from infected endothelial cells via the paracytophagous mode of invasion. Using these phagocytic cells as vectors, Listeria travels throughout the nerves and reaches tissues usually inaccessible to other bacterial pathogens. Similar to the mechanism seen in HIV, infected leukocytes in the blood cross the blood brain barrier and transport Listeria into the CNS. Once in the CNS, cell to cell spreading causes associated damage leading to brain encephalitis and bacterial meningitis. Listeria uses phagocytic leukocytes as a “Trojan Horse” to gain access to a greater range of target cells.
In one study, mice treated with gentamicin via infusion pump displayed CNS and brain involvement during infection with Listeria, indicating that the population of bacteria responsible for severe pathogenesis resided within cells and was protected from the circulating antibiotic. Macrophages infected with Listeria pass the infection on to neurons more easily through paracytophagy than through extracellular invasion by free bacteria. The mechanism which specifically targets these infected cells to the CNS is currently not known. This Trojan horse function is also observed and thought to be important in early stages of infection where gut-to-lymph node infection is mediated by infected dendritic cells.
A second mechanism of reaching the brain tissue is achieved through intra-axonal transport. In this mechanism, Listeria travels along the nerves to the brain, resulting in encephalitis or transverse myelitis. In rats, the dorsal root ganglia can be infected directly by Listeria, and the bacteria can move in retrograde as well as anterograde direction through the nerve cells. The specific mechanisms involved in brain disease are not yet known, but paracytophagy is thought to have some role. Bacteria have not been shown to infect neuronal cells directly in an efficient manner, and the previously described macrophage hand-off is thought to be necessary for this mode of spread.
See also
The process of paracytophagy is considered distinct from similar but unrelated processes such as phagocytosis and trogocytosis. Some related concepts include:
Membrane nanotubes
Intercellular signaling
References
Cell biology | Paracytophagy | Biology | 2,250 |
17,967,841 | https://en.wikipedia.org/wiki/5-HTTLPR | 5-HTTLPR (serotonin-transporter-linked promoter region) is a degenerate repeat (redundancy in the genetic code) polymorphic region in SLC6A4, the gene that codes for the serotonin transporter.
Since the polymorphism was identified in the middle of the 1990s, it has been extensively investigated, e.g., in connection with neuropsychiatric disorders.
A 2006 scientific article stated that "over 300 behavioral, psychiatric, pharmacogenetic and other medical genetics papers" had analyzed the polymorphism. While often discussed as an example of gene-environment interaction, this contention is contested.
Alleles
The polymorphism occurs in the promoter region of the gene.
Researchers commonly report it with two variations in humans: A short ("s") and a long ("l"), but it can be subdivided further. The short (s)- and long (l)- alleles have been thought to be related to stress and psychiatric disorders.
Two single nucleotide polymorphisms (SNPs), rs25531 and rs25532, occur in connection with the region.
One study published in 2000 found 14 allelic variants (14-A, 14-B, 14-C, 14-D, 15, 16-A, 16-B, 16-C, 16-D, 16-E, 16-F, 19, 20 and 22) in a group of around 200 Japanese and Europeans. The difference between 16-A and 16-D, and likewise between 14-A and 14-D, is the rs25531 SNP.
Some studies have found that the long allele results in higher serotonin transporter mRNA transcription in human cell lines.
The higher level may be due to the A-allele of rs25531, such that subjects with the long-rs25531(A) allelic combination (sometimes written LA) have higher levels while long-rs25531(G) carriers have levels more similar to short-allele carriers.
Newer studies examining the effects of genotype may compare the LA/LA genotype against all other genotypes. The allele frequency of this polymorphism seems to vary considerably across populations, with a higher frequency of the long allele in Europe and lower frequency in Asia. It is argued that the population variation in the allele frequency is more likely due to neutral evolutionary processes than natural selection.
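The triallelic grouping described above can be made concrete with a short sketch; the function name and string encoding are hypothetical, and the grouping (LA high-expressing; LG and S low) follows the convention summarized in this section rather than any single cited study:

```python
# Collapse (length allele, rs25531) pairs into functional expression groups.
def expression_group(length_allele, rs25531):
    if length_allele == "L" and rs25531 == "A":
        return "high"  # L_A: higher transporter mRNA transcription
    return "low"       # L_G carriers resemble short-allele carriers

# A study design comparing L_A/L_A homozygotes against all other genotypes:
genotype = [("L", "A"), ("L", "G")]
is_la_la = all(expression_group(a, s) == "high" for a, s in genotype)
print(is_la_la)  # False: an L_A/L_G individual falls into the "all others" group
```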
Neuropsychiatric disorders
In the 1990s it was speculated that the polymorphism might be related to affective disorders, and an initial study found such a link.
However, another large European study found no such link. A decade later two studies found that 5-HTT polymorphism influences depressive responses to life stress; an example of gene-environment interaction (GxE) not considered in the previous studies. However, a 2017 meta-analysis found no such association. Earlier, two 2009 meta-analyses found no overall GxE effect, while a 2011 meta-analysis, demonstrated a positive result. In turn, the 2011 meta-analysis has been criticized as being overly inclusive (e.g. including hip fractures as outcomes), for deeming a study supportive of the GxE interaction which is actually in the opposite direction, and because of substantial evidence of publication bias and data mining in the literature. This criticism points out that if the original finding were real, and not the result of publication bias, we would expect that those replication studies which are closest in design to the original are the most likely to replicate—instead we find the opposite. This suggests that authors may be data dredging for measures and analytic strategies which yield the results they want.
Treatment response
Based on the results of one study, the polymorphism was thought to be related to treatment response, with long-allele patients responding better to antidepressants. Another antidepressant treatment response study, however, pointed instead to the rs25531 SNP, and a large study by the same group of investigators found a "lack of association between response to an SSRI and variation at the SLC6A4 locus".
One study found a treatment response effect for repetitive transcranial magnetic stimulation in drug-resistant depression, with long/long homozygotes benefitting more than short-allele carriers. The researchers found a similar effect for the Val66Met polymorphism in the BDNF gene.
Amygdala
The 5-HTTLPR has been thought to predispose individuals to affective disorders such as anxiety and depression. There have been some studies that test whether this association is due to the effects of variation in 5-HTTLPR on the reactivity of the human amygdala. In order to test this, researchers gathered a group of subjects and administered a harm avoidance (HA) subset of the Tridimensional Personality Questionnaire as an initial mood and personality assessment. Subjects also had their DNA isolated and analyzed in order to be genotyped. Next, the amygdala was then engaged by having the subject match fearful facial expressions during an fMRI scan (by the 3-T GE Signa scanner). The results of the study showed that there was bilateral activity in the amygdala for every subject when processing the fearful images, as expected. However, the activity in the right amygdala was much higher for subjects with the s-allele, which shows that the 5-HTTLPR has an effect on amygdala activity. There did not seem to be the same effect on the left amygdala.
Insomnia
There has been speculation that the 5-HTTLPR gene is associated with insomnia and sleep quality. Primary insomnia is one of the most common sleep disorders and is defined as having trouble falling or staying asleep, enough to cause distress in one's life. Serotonin (5-HT) has long been associated with the regulation of sleep. The 5-HT transporter (5-HTT) is the main regulator of serotonin and serotonergic activity and is therefore targeted by many antidepressants. There have also been several family and twin studies suggesting that insomnia is heavily genetically influenced. Many of these studies have found a dual genetic and environmental influence on insomnia. It has been hypothesized that the short 5-HTTLPR genotype is related to poor sleep quality and, therefore, also primary insomnia. It is important to note that research studies have found that this variation does not cause insomnia, but rather may predispose an individual to experience worse quality of sleep when faced with a stressful life event.
Brummett
The effect of the 5-HTTLPR gene on sleep quality was tested by Brummett in a study conducted at Duke University Medical Center from 2001 to 2004. The sleep quality of 344 participants was measured using the Pittsburgh Sleep Quality Index. The study found that caregivers homozygous for the s-allele had poorer sleep quality, showing that the stress of caregiving combined with the allele led to worse sleep. Although the study found that the 5-HTTLPR genotype did not directly affect sleep quality, the polymorphism's effect on sleep quality was magnified by environmental stress. This supports the notion that the 5-HTTLPR s-allele leads to hyperarousal when exposed to stress; hyperarousability is commonly associated with insomnia.
Deuschle
However, in a 2007 study conducted by a sleep laboratory in Germany, the 5-HTTLPR gene was found to have a strong association with both insomnia and depression, in participants both with and without lifetime affective disorders. This study included 157 insomnia patients and a control group of 836 individuals who had no psychiatric disorders. The subjects were genotyped using polymerase chain reaction (PCR) techniques. The researchers found that the s-allele was more highly represented among patients with insomnia than among those with no disorder, showing an association between the 5-HTTLPR genotype and primary insomnia. However, it is important to consider that a very limited number of subjects with insomnia were tested in this study.
Personality traits
5-HTTLPR may be related to personality traits:
Two 2004 meta-analyses found 26 research studies investigating the polymorphism in relation to anxiety-related traits.
The initial and classic 1996 study found s-allele carriers to have, on average, slightly higher neuroticism scores on the NEO PI-R personality questionnaire, and this result was replicated by the group with new data. Some other studies, however, failed to find this association, or an association with peer-rated neuroticism, and a 2006 review noted the "erratic success in replication" of the first finding.
A meta-analysis published in 2004 stated that the lack of replicability was "largely due to small sample size and the use of different inventories". It found that neuroticism as measured with the NEO family of personality inventories had a quite significant association with 5-HTTLPR, while the trait harm avoidance from the Temperament and Character Inventory family did not have any significant association. A similar conclusion was reached in an updated 2008 meta-analysis.
However, based on over 4000 subjects, the largest study that used the NEO PI-R found no association between variants of the serotonin transporter gene (including 5-HTTLPR) and neuroticism, or its facets (Anxiety, Angry-Hostility, Depression, Self-Consciousness, Impulsiveness, and Vulnerability).
In a study published in 2009, authors found that individuals homozygous for the long allele of 5-HTTLPR paid more attention on average to positive affective pictures while selectively avoiding negative affective pictures presented alongside the positive pictures compared to their heterozygous and short-allele-homozygous peers. This biased attention of positive emotional stimuli suggests they may tend to be more optimistic. Other research indicates carriers of the short 5-HTTLPR allele have difficulty disengaging attention from emotional stimuli compared to long allele homozygotes. Another study published in 2009 using an eye tracking assessment of information processing found that short 5-HTTLPR allele carriers displayed an eye gaze bias to view positive scenes and avoid negative scenes, while long allele homozygotes viewed the emotion scenes in a more even-handed fashion. This research suggests that short 5-HTTLPR allele carriers may be more sensitive to emotional information in the environment than long allele homozygotes.
Another research group has given evidence for a modest association between shyness and the long form in grade school children. This is, however, just a single report, and the link has not been investigated as intensively as the anxiety-related traits.
Neuroimaging
Molecular neuroimaging studies have examined the association between genotype and serotonin transporter binding with positron emission tomography (PET) and SPECT brain scanners. Such studies use a radioligand that binds (preferably selectively) to the serotonin transporter, so an image can be formed that quantifies the distribution of the serotonin transporter in the brain.
One study saw no difference in serotonin transporter availability between long/long and short/short homozygotes among 96 subjects scanned with SPECT using the iodine-123 β-CIT radioligand. Using the PET radioligand carbon-11-labeled McN 5652, another research team likewise found no difference in serotonin transporter binding between genotype groups. Newer studies have used the radioligand carbon-11-labeled DASB, with one study finding higher serotonin transporter binding in the putamen of LA homozygotes compared to other genotypes.
Another study with similar radioligand and genotype comparison found higher binding in the midbrain.
Associations between the polymorphism and the grey matter in parts of the anterior cingulate brain region have also been reported, based on magnetic resonance imaging brain scans and voxel-based morphometry analysis. 5-HTTLPR short allele–driven amygdala hyperreactivity was confirmed in a large (by MRI study standards) cohort of healthy subjects with no history of psychiatric illness or treatment. Brain blood flow measurements with positron emission tomography brain scanners can also show genotype-related changes.
Glucose metabolism in the brain has also been investigated with respect to the polymorphism, and functional magnetic resonance imaging (fMRI) brain scans have likewise been correlated with the polymorphism. The amygdala in particular has been the focus of the functional neuroimaging studies.
Electrophysiology
The relationship between the event-related potentials P3a and P3b and the genetic variants of 5-HTTLPR was investigated using an auditory oddball paradigm, which revealed that short-allele homozygotes mimicked COMT met/met homozygotes, with an enhancement of the frontal, but not parietal, P3a and P3b. This suggests a frontal-cortical dopaminergic and serotonergic mechanism in bottom-up attentional capture.
Other organisms
In rats (Rattus rattus), berberine increases 5-HTTLPR activity.
References
Further reading
External links
5-HTTLPR: A Pointed Review at Slate Star Codex | 5-HTTLPR | Biology | 2,845 |
2,299,135 | https://en.wikipedia.org/wiki/Anti-diagonal%20matrix | In mathematics, an anti-diagonal matrix is a square matrix where all the entries are zero except those on the diagonal going from the lower left corner to the upper right corner (↗), known as the anti-diagonal (sometimes Harrison diagonal, secondary diagonal, trailing diagonal, minor diagonal, off diagonal or bad diagonal).
Formal definition
An n-by-n matrix D is an anti-diagonal matrix if the (i, j)-th element d_ij is zero for all rows i and columns j whose indices do not sum to n + 1. Symbolically:

d_ij = 0 for all i, j ∈ {1, …, n} with i + j ≠ n + 1.
Example
An example of an anti-diagonal matrix is

 ( 0 0 1 )
 ( 0 2 0 )
 ( 5 0 0 )

Another example is the anti-diagonal matrix of 1s (the exchange matrix),

 ( 0 0 1 )
 ( 0 1 0 )
 ( 1 0 0 )

which can be used to reverse the elements of an array (as a column matrix) by multiplying on the left.
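This reversal property is easy to verify numerically. A minimal sketch, assuming NumPy (the library choice is not part of the article):

```python
import numpy as np

# The anti-diagonal matrix of 1s (the exchange matrix) reverses a column
# vector when it multiplies from the left.
J = np.fliplr(np.eye(3))
x = np.array([1.0, 2.0, 3.0])
print(J @ x)  # [3. 2. 1.]
```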
Properties
All anti-diagonal matrices are also persymmetric.
The product of two anti-diagonal matrices is a diagonal matrix. Furthermore, the product of an anti-diagonal matrix with a diagonal matrix is anti-diagonal, as is the product of a diagonal matrix with an anti-diagonal matrix.
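These closure properties can be checked with a short numerical sketch; NumPy is again assumed, and the anti_diag helper is ad hoc rather than a standard function:

```python
import numpy as np

def anti_diag(entries):
    # Anti-diagonal matrix; entries run from the top-right corner to the bottom-left.
    return np.fliplr(np.diag(entries))

A = anti_diag([1, 2, 3])
B = anti_diag([4, 5, 6])
D = np.diag([7, 8, 9])

print(A @ B)  # diagonal: diag(6, 10, 12)
print(D @ A)  # anti-diagonal again
```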
An anti-diagonal matrix is invertible if and only if the entries on the diagonal from the lower left corner to the upper right corner are nonzero. The inverse of any invertible anti-diagonal matrix is also anti-diagonal, as can be seen from the paragraph above. The determinant of an anti-diagonal matrix has absolute value given by the product of the entries on the diagonal from the lower left corner to the upper right corner. However, the sign of this determinant will vary because the one nonzero signed elementary product from an anti-diagonal matrix will have a different sign depending on whether the permutation related to it is odd or even:
More precisely, the sign of the elementary product needed to calculate the determinant of an anti-diagonal matrix is related to whether the corresponding triangular number is even or odd. This is because the number of inversions in the permutation for the only nonzero signed elementary product of any anti-diagonal matrix is always equal to the th such number.
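This sign rule can be verified directly: the only nonzero elementary product comes from the order-reversing permutation, which has n(n-1)/2 inversions, the (n-1)th triangular number. A small check under the same NumPy assumption:

```python
import numpy as np

def anti_diag(entries):
    return np.fliplr(np.diag(entries))

for entries in ([2, 3], [1, 2, 3], [1, 2, 3, 4], [1, 2, 3, 4, 5]):
    n = len(entries)
    sign = (-1) ** (n * (n - 1) // 2)  # parity of the (n-1)th triangular number
    predicted = sign * int(np.prod(entries))
    computed = round(float(np.linalg.det(anti_diag(entries))), 6)
    print(n, predicted, computed)  # signs follow the pattern -, -, +, +, ...
```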
See also
Main diagonal, all off-diagonal elements are zero in a diagonal matrix.
Exchange matrix, an anti-diagonal matrix with 1s along the counter-diagonal.
External links
Sparse matrices
Matrices | Anti-diagonal matrix | Mathematics | 453 |
67,610,207 | https://en.wikipedia.org/wiki/Nilufar%20Mamadalieva | Nilufar Mamadalieva is a biochemist from Uzbekistan.
Biography
Mamadalieva completed a Master of Science at Fergana State University and a PhD at the Institute of the Chemistry of Plant Substances in Tashkent. She is a scientific researcher at the institute. Her work focuses on the phytochemical and biological investigation of active compounds in the medicinal plants of Central Asia.
In 2011 Mamadalieva received the UNESCO-L’Oreal Award for Young Women in Life Sciences. In 2014 she received the Elsevier Foundation Award for Early Career Women Scientists in the Developing World.
References
Living people
Year of birth missing (living people)
Uzbekistani scientists
Uzbekistani women scientists
Women biochemists | Nilufar Mamadalieva | Chemistry | 149 |
37,192,426 | https://en.wikipedia.org/wiki/Cryogenic%20energy%20storage | Cryogenic energy storage (CES) is the use of low temperature (cryogenic) liquids such as liquid air or liquid nitrogen to store energy.
The technology is primarily used for the large-scale storage of electricity. Following grid-scale demonstrator plants, a 250 MWh commercial plant is now under construction in the UK, and a 400 MWh store is planned in the USA.
Grid energy storage
Process
When it is cheaper (usually at night), electricity is used to cool air from the atmosphere to -195 °C using the Claude Cycle to the point where it liquefies. The liquid air, which takes up one-thousandth of the volume of the gas, can be kept for a long time in a large vacuum flask at atmospheric pressure. At times of high demand for electricity, the liquid air is pumped at high pressure into a heat exchanger, which acts as a boiler. Air from the atmosphere at ambient temperature, or hot water from an industrial heat source, is used to heat the liquid and turn it back into a gas. The massive increase in volume and pressure from this is used to drive a turbine to generate electricity.
Efficiency
In isolation, the process is only 25% efficient. This is increased to around 50% when used with a low-grade cold store, such as a large gravel bed, to capture the cold generated by evaporating the cryogen. The cold is re-used during the next refrigeration cycle.
Efficiency is further increased when used in conjunction with a power plant or other source of low-grade heat that would otherwise be lost to the atmosphere. Highview Power claims an AC-to-AC round-trip efficiency of 70% by using otherwise-wasted heat from the compressor and other low-grade process heat at 115 °C, with the IMechE (Institution of Mechanical Engineers) agreeing that these efficiency estimates are realistic for a commercial-scale plant. However, this figure has not been checked or confirmed by independent professional institutions.
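A back-of-envelope sketch of what these round-trip figures imply for charging energy; the 250 MWh discharge matches the scale of the commercial plant discussed below, and all numbers are illustrative rather than plant data:

```python
# Grid energy needed to charge the store for one full discharge,
# at the efficiency figures quoted above. Illustrative arithmetic only.
capacity_mwh = 250  # assumed deliverable energy per full discharge

for label, eta in [("standalone", 0.25),
                   ("with cold recycle", 0.50),
                   ("with waste heat (claimed)", 0.70)]:
    charge_mwh = capacity_mwh / eta
    print(f"{label}: {charge_mwh:.0f} MWh in -> {capacity_mwh} MWh out")
```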
Advantages
The system is based on proven technology, used safely in many industrial processes, and does not require any particularly rare elements or expensive components to manufacture. Dr Tim Fox, the head of Energy at the IMechE says "It uses standard industrial components - which reduces commercial risk; it will last for decades and it can be fixed with a spanner."
Applications
Economics
The technology is only economic where there is large variation in the wholesale price of electricity over time. Typically this will be where it is difficult to vary generation in response to changing demand. The technology thus complements growing energy sources like wind and solar, and allows a greater penetration of such renewables into the energy mix. It is less useful where electricity is mostly provided by dispatchable generation, like coal or gas-fired thermal plants, or hydro-electricity.
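The dependence on price spread can be made concrete with a toy arbitrage calculation; the prices are hypothetical, and the 70% round-trip efficiency is the claimed figure discussed above:

```python
# Toy arbitrage model: buy off-peak electricity, store it, sell at peak.
def margin_per_mwh_delivered(offpeak_price, peak_price, eta):
    # Each MWh delivered requires 1/eta MWh of charging energy.
    return peak_price - offpeak_price / eta

eta = 0.70  # assumed AC-to-AC round-trip efficiency
print(margin_per_mwh_delivered(30.0, 90.0, eta))  # ~47.1: a wide spread pays
print(margin_per_mwh_delivered(60.0, 70.0, eta))  # ~-15.7: a narrow spread loses money
```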
Cryogenic plants can also provide grid services, including grid balancing, voltage support, frequency response and synchronous inertia.
Locations
Unlike other grid-scale energy storage technologies which require specific geographies such as mountain reservoirs (pumped-storage hydropower) or underground salt caverns (compressed-air energy storage), a cryogenic energy storage plant can be located just about anywhere.
To achieve the greatest efficiencies, a cryogenic plant should be located near a source of low-grade heat which would otherwise be lost to the atmosphere. Often this would be a thermal power station that could be expected to be also generating electricity at times of peak demand and the highest prices. Colocation with a source of unused cold, such as an LNG regasification facility is also an advantage.
Grid-scale demonstrators
United Kingdom
In April 2014, the UK government announced it had given £8 million to Viridor and Highview Power to fund the next stage of the demonstration. The resulting grid-scale demonstrator plant at Pilsworth Landfill facility in Bury, Greater Manchester, UK, started operation in April 2018. The design was based on research by the Birmingham Centre for Cryogenic Energy Storage (BCCES) associated with the University of Birmingham, and has storage for up to 15 MWh, and can generate a peak supply of 5 MW (so when fully charged lasts for three hours at maximum output) and is designed for an operational life of 40 years.
United States
In 2019, the Washington State Department of Commerce's Clean Energy Fund announced it would provide a grant to help Tacoma Power partner with Praxair to build a 15 MW / 450 MWh liquid air energy storage plant. It will store up to 850,000 gallons of liquid nitrogen to help balance power loads.
Commercial plants
United Kingdom
In October 2019, Highview Power announced that it planned to build a 50 MW / 250 MWh commercial plant in Carrington, Greater Manchester.
Construction began in November 2020, with commercial operation planned for 2022.
At 250 MWh, the plant would match the storage capacity of the world's largest existing lithium-ion battery, the Gateway Energy Storage facility in California. In November 2022 Highview Power stated that they were still trying to raise money "to construct a storage plant in Carrington that has a 30 megawatts capacity and can store 300 megawatt hours of electricity" with commissioning planned for "the end of 2024."
In 2024, Highview Power announced it had raised £300 million in investment from the UK Infrastructure Bank and Centrica and would begin immediate construction of a 50 MW / 300 MWh facility at Carrington. Commercial operation is planned to start in early 2026.
United States
In December 2019, Highview announced plans to build a 50 MW plant in northern Vermont, with the proposed facility able to store eight hours of energy, for a 400 MWh storage capacity.
Chile
In June 2021, Highview announced that it was developing a 50MW / 500MWh storage plant in the Atacama region of Chile.
History
Transport
Both liquid air and liquid nitrogen have been used experimentally to power cars. A liquid air powered car called Liquid Air was built between 1899 and 1902 but it couldn't at the time compete in terms of efficiency with other engines.
More recently, a liquid nitrogen vehicle was built. Peter Dearman, a garage inventor in Hertfordshire, UK, who had initially developed a liquid air powered car, then put the technology to use as grid energy storage. The Dearman engine differs from former nitrogen engine designs in that the nitrogen is heated by combining it with a heat exchange fluid inside the cylinder of the engine.
Electricity storage pilots
In 2010, the technology was piloted at a UK power station.
A 300 kW, 2.5 MWh storage capacity pilot cryogenic energy system, developed by researchers at the University of Leeds and Highview Power, uses liquid air (with the carbon dioxide and water removed, as they would turn solid at the storage temperature) as the energy store, and low-grade waste heat to boost the thermal re-expansion of the air. It operated at an 80 MW biomass power station in Slough, UK, from 2010 until 2014, when it was relocated to the University of Birmingham. The efficiency is less than 15% because of the low-efficiency hardware components used, but the engineers are targeting an efficiency of about 60 percent for the next generation of CES, based on operational experience with this system.
See also
United States Department of Energy International Energy Storage Database
References
Grid energy storage
Cryogenics | Cryogenic energy storage | Physics | 1,479 |
74,259,791 | https://en.wikipedia.org/wiki/Cadmium%20tellurite | Cadmium tellurite is the tellurite salt of cadmium, with the chemical formula CdTeO3.
Preparation
Cadmium tellurite can be prepared by the reaction of cadmium sulfate and sodium tellurite in ammonia.
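A balanced overall equation consistent with this description, inferred from the named reactants and product (with ammonia serving as the reaction medium) rather than quoted from a source, is:

CdSO4 + Na2TeO3 → CdTeO3 + Na2SO4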
Properties
Cadmium tellurite is a colourless solid that is insoluble in water. It is a semiconductor. It is part of the monoclinic crystal system, with space group P21/c (No. 14). It can also crystallize in the cubic crystal system and hexagonal crystal system at temperatures above 540 °C.
References
Cadmium compounds
Tellurites | Cadmium tellurite | Chemistry | 130 |
490,539 | https://en.wikipedia.org/wiki/Pseudocopulation | Pseudocopulation is a behavior similar to copulation that serves a reproductive function for one or both participants but does not involve actual sexual union between the individuals. It is most generally applied to a pollinator attempting to copulate with a flower adapted to mimic a potential female mate. The resemblance may be visual, but the key stimuli are often chemical and tactile. The form of mimicry in plants that deceives an insect into pseudocopulation is called Pouyannian mimicry after the French lawyer and amateur botanist Maurice-Alexandre Pouyanne.
A non-mimetic form of pseudocopulation has been observed in some parthenogenetic, all-female species of lizard. The behaviour does not appear to be necessary to trigger parthenogenesis.
Definition
In zoology, pseudocopulation is attempted copulation that serves a reproductive function for one or both participants but does not involve actual sexual union between the individuals.
In orchids
Pseudocopulation by an insect on a flower is a result of Pouyannian mimicry, named after the French lawyer and amateur botanist Maurice-Alexandre Pouyanne. This occurs in several orchids, whose flowers mimic the female mating signals of specific pollinator insects, such as bees. The mimicry results in attempted copulation by males of the pollinator species, facilitating pollen transfer. Bee orchids (Ophrys apifera) and fly orchids (Ophrys insectifera), specifically, utilize flower morphology, coloration, and scent to deceive their respective pollinators. These orchids have evolved traits matching the preferences of specific pollinator niches, leading to adaptive speciation. Although bee and fly orchids are visual mimics of their pollinators, visual traits are not the only (nor the most important) ones mimicked to increase attraction.
In lizards
Some lizard species, such as the Laredo striped whiptail (Aspidoscelis [Cnemidophorus] laredoensis) and the Desert grassland whiptail lizard (A. uniparens), consist only of females, which reproduce by parthenogenesis. Some of these species have been observed to practise pseudocopulation in captivity, but it does not appear to be required to trigger parthenogenesis.
References
Mimicry
Animal sexuality
Sexual acts
Pollination | Pseudocopulation | Biology | 479 |
69,039,000 | https://en.wikipedia.org/wiki/Glugging | Glugging (also referred to as "the glug-glug process") is the physical phenomenon which occurs when a liquid is poured rapidly from a vessel with a narrow opening, such as a bottle. It is a facet of fluid dynamics.
As liquid is poured from a bottle, the air pressure in the bottle is lowered, and air at higher pressure from outside the bottle is forced into the bottle, in the form of a bubble, impeding the flow of liquid. Once the bubble enters, more liquid escapes, and the process is repeated. The reciprocal action of glugging creates a rhythmic sound. The English word "glug" is onomatopoeic, describing this sound. Other languages have their own onomatopoeias for the sound, German among them.
Academic papers have been written about the physics of glugging, and about the impact of glugging sounds on consumers' perception of products such as wine. Research into glugging has been done using high-speed photography.
Factors which affect glugging are the viscosity of the liquid, its carbonation, the size and shape of the container's neck and its opening (collectively referred to as "bottle geometry"), the angle at which the container is held, and the ratio of air to liquid in the bottle (which means that the rate and the sound of the glugging changes as the bottle empties).
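The cycle described above — one bubble in, a roughly equal volume of liquid out, repeated — can be caricatured in a few lines of code. This is a toy model of the stated mechanism only; the bubble volume and cycle time below are invented illustrative parameters, not measured values, and a real pour would change rate as the bottle empties.

```python
def glug_count(liquid_volume_ml: float, bubble_volume_ml: float) -> int:
    # Each glug admits one air bubble and releases roughly the same
    # volume of liquid, so the pour takes about volume/bubble cycles.
    return round(liquid_volume_ml / bubble_volume_ml)

def pour_time_s(liquid_ml: float, bubble_ml: float, s_per_glug: float) -> float:
    return glug_count(liquid_ml, bubble_ml) * s_per_glug

# A 750 ml bottle, assuming ~15 ml bubbles and 0.25 s per cycle:
print(glug_count(750, 15))          # -> 50 glugs
print(pour_time_s(750, 15, 0.25))   # -> 12.5 s
```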
References
fluid dynamics
food science | Glugging | Chemistry,Engineering | 299 |
63,989,659 | https://en.wikipedia.org/wiki/Solid%20set | In mathematics, specifically in order theory and functional analysis, a subset of a vector lattice is said to be solid and is called an ideal if for all and if then
An ordered vector space whose order is Archimedean is said to be Archimedean ordered.
If $S \subseteq X$, then the ideal generated by $S$ is the smallest ideal in $X$ containing $S$.
An ideal generated by a singleton set is called a principal ideal in $X$.
Examples
The intersection of an arbitrary collection of ideals in $X$ is again an ideal; furthermore, $X$ is clearly an ideal of itself, so every subset of $X$ is contained in a unique smallest ideal.
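A concrete instance, worked here for illustration (my own example, not from the source): in $\mathbb{R}^2$ with the componentwise order,

```latex
S = \{(x, 0) : x \in \mathbb{R}\} \subseteq \mathbb{R}^2 \ \text{is solid:}\quad
|(y_1, y_2)| \leq |(x, 0)| = (|x|, 0)
\;\Longrightarrow\; |y_2| \leq 0
\;\Longrightarrow\; (y_1, y_2) = (y_1, 0) \in S.
```

By contrast, the diagonal $\{(x, x) : x \in \mathbb{R}\}$ is not solid, since $|(x, 0)| \leq |(x, x)|$ componentwise, yet $(x, 0)$ lies outside the diagonal for $x \neq 0$.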
In a locally convex vector lattice $X$, the polar of every solid neighborhood of the origin is a solid subset of the continuous dual space $X'$;
moreover, the family of all solid equicontinuous subsets of $X'$ is a fundamental family of equicontinuous sets, and the polars (in the bidual $X''$) of these solid sets form a neighborhood base of the origin for the natural topology on $X''$ (that is, the topology of uniform convergence on equicontinuous subsets of $X'$).
Properties
A solid vector subspace of a vector lattice $X$ is necessarily a sublattice of $X$.
If $N$ is a solid subspace of a vector lattice $X$, then the quotient $X/N$ is a vector lattice (under the canonical order).
References
Functional analysis
Order theory | Solid set | Mathematics | 258 |
46,995,261 | https://en.wikipedia.org/wiki/Association%20of%20Greek%20Chemists | The Association of Greek Chemists () is the chemical society of Greek chemists. The Association of Greek Chemists is a public legal entity that reports to the Ministry of Industry, Energy and Technology.
Its headquarters are in Athens: 27 Kaniggos Street, 10682, Athens, Greece. It was founded in 1924 in order to act as the Greek government's official advisor on chemistry-related issues.
History
Chemistry has been taught in Greece as a category of natural sciences since the 19th century. In 1837, chemistry was taught in universities by the Bavarian Dr Lanterer, and later by Al. Venizelos and An. Christomanos. The first public analytical laboratory was founded in Lesbos while the island was under Turkish occupation. It operated on the ground floor of the island's city council building until 1902. Dr Stefanidis, its founder, called it «αστυχημείο» (roughly, "town chemical laboratory"), and its aim was the control of imported food as well as local adulteration.
In 1900, the first Greek Analytical Laboratory was founded at Chania, Crete. It was bombed and destroyed in 1941. The Chemistry Department of Athens University and the Chemical Engineering department of the National Technical University of Athens (Greek: Εθνικό Μετσόβιο Πολυτεχνείο, National Metsovian Polytechnic), sometimes known as Athens Polytechnic, were founded in 1918. Six years later, Zoe Mela (Macedonian fighter Pavlos Melas' daughter) wrote the Association of Greek Chemists' founding declaration together with nine more chemists on 31 March at her house (17 Asteriou street, Athens). It was then signed by 53 chemists from the Universities and the Rousopoulos Academy on 4 August 1924.
Mrs Melpo Nikolitsa became the first woman elected to the Association's committee in 1953. In January 1960, all chemist applicants for employment as secondary-education chemistry teachers were appointed by the Ministry of Education. The Association bought its own office on 14 June 1963, where the headquarters remain to this day.
The board of directors of the AGC since January 2022 is the following:
President: Ioannis Katsoyiannis, Vice presidents: Vasilios Koulos and Kostas Theodorakis, General Secretary: Ioannis Sitaras, Treasurer and immediate past president: Athanasios Papadopoulos, Specific Secretary: Ioannis Vafeiadis, Members: Panagiotis Giannopoulos, Vasilios Panagopoulos, Emmanouil Pappas, Andreas Triantafyllakis, Anastasios Korillis
Membership
Registration in the Association of Greek Chemists is obligatory according to the Law 1804/1988 for those who meet the requirements. Members can be those that possess a university degree in chemistry or its equivalent. The equivalency to a University chemistry degree is recognized by a special body (DIKATSA) set up by the Ministry of Education. Those obliged to become members fill in an application form, submit a copy of their University degree or its equivalent and pay the membership fees. The current annual membership fee is 35 euros.
According to the official profile issued by the Association to celebrate its 80-year anniversary in 2004, about 25.65% of the registered members are employed in the public sector; 28% are employed in the private sector (industries, consultants, laboratories etc.); 5% are postgraduate students; 21% are unemployed; and 18% are retired members.
Flagship magazine
One of the benefits of membership is the receipt of the Association's flagship publication, the chimika chronika (chemical chronicles) magazine. It was published by the AGC from 1936 until 1997. In 1998, it was absorbed by the European Journal of Organic Chemistry and the European Journal of Inorganic Chemistry, which were created by the merger of various European chemistry journals:
Chemistry: A European Journal
EurJIC: European journal of Inorganic Chemistry
EurJOC: European Journal of Organic Chemistry
ChemBioChem: European Journal of Chemical Biology
ChemPhysChem: European Journal of Chemical Physics and Physical Chemistry
The magazine has changed its name twice in the past: Chimika Chronika (1936-1968), Chimika Chronika Epistemonike Ekdosis (1969-1970), and Chimika Chronika New Series (1972-1997). It can include commercial advertising.
Funding Announcements
The Association's website is used to announce available funding by government or private bodies.
Affiliations
The AGC is affiliated with a number of professional bodies, such as the Panhellenic Association of Industrial Chemists, the Panhellenic Association of Shipping Chemists, and the Association of employees of the General Chemistry Laboratory.
References
External links
Learned societies of Greece
Chemistry societies
Scientific organizations established in 1924
1924 establishments in Greece | Association of Greek Chemists | Chemistry | 1,004 |
61,594,585 | https://en.wikipedia.org/wiki/Bubaline%20alphaherpesvirus%201 | Bubaline alphaherpesvirus 1 (BuHV-1) is a species of virus in the genus Varicellovirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales.
References
Alphaherpesvirinae | Bubaline alphaherpesvirus 1 | Biology | 56 |
5,813,818 | https://en.wikipedia.org/wiki/Fernanda%20Vi%C3%A9gas | Fernanda Bertini Viégas (born 1971) is a Brazilian computer scientist and graphical designer, whose work focuses on the social, collaborative and artistic aspects of information visualization.
Biography
Viégas studied graphic design and art history at the University of Kansas, where she obtained her bachelor's degree in 1997. She then moved to the MIT Media Lab, where she received an M.S. in 2000 and a Ph.D. in Media Arts and Sciences in 2005 under the supervision of Judith Donath. The same year she began work at IBM's Thomas J. Watson Research Center in Cambridge, Massachusetts, as part of the Visual Communication Lab.
In April 2010, she and Martin M. Wattenberg started a new venture called Flowing Media, Inc., to focus on visualization aimed at consumers and mass audiences. Four months later, both of them joined Google as co-leaders of Google's "Big Picture" data visualization group in Cambridge, Massachusetts.
Work
Social visualization
Viégas began her research while at the MIT Media Lab, focusing on graphical interfaces for online communication. Her Chat Circles system introduced ideas such as proximity-based filtering of conversation and a visual archive of chat history displaying the overall rhythm and form of a conversation. Her email visualization designs (including PostHistory and Themail) are the foundation for many other systems; her findings on how visualizations are often used for storytelling influenced subsequent work on the collaborative aspects of visualization. While at MIT, she also studied usage of Usenet and blogs.
Collective intelligence and public visualization
A second stream of work, in partnership with Martin Wattenberg, centers on collective intelligence and the public use of data visualization.
Her work with visualizations such as History Flow and Chromogram led to some of the earliest publications on the dynamics of Wikipedia, including the first scientific study of the repair of vandalism.
Viégas is one of the founders of IBM's experimental Many Eyes website, created in 2007, which seeks to make visualization technology accessible to the public. In addition to broad uptake from individuals, the technology from Many Eyes has been used by nonprofits and news outlets such as the New York Times Visualization Lab.
Art
Viégas is also known for her artistic work, which explores the medium of visualization for explorations of emotionally charged digital data. An early example is Artifacts of the Presence Era, an interactive installation at the Boston Center for the Arts in 2003, which featured a video-based timeline of visitor interactions with the museum. She often works with Martin Wattenberg to visualize emotionally charged information. An example of these works is their piece "Web Seer", which is a visualization of Google Suggest. The Fleshmap series (started in 2008) uses visualization to portray aspects of sensuality, and includes work on the web, video, and installations. In 2012, she launched the Wind Map project, which displays continuously updated forecasts of wind patterns across the United States.
Publications
Chat Circles. Fernanda B. Viégas and Judith Donath. ACM Conference on Computer-Human Interaction (CHI), 1999
Visualizing Conversations, Judith Donath, Karrie Karahalios and Fernanda B. Viégas. Journal of Computer-Mediated Communication, Vol. 4, Number 4, June 1999
Studying Cooperation and Conflict between Authors with history flow Visualizations. Fernanda B. Viégas, Martin Wattenberg, and Kushal Dave. ACM Conference on Computer-Human Interaction (CHI), 2004
Many Eyes: A Site for Visualization at Internet Scale. Fernanda B. Viégas, Martin Wattenberg, Frank van Ham, Jesse Kriss, Matt McKeon. IEEE Symposium on Information Visualization, 2007
"Luscious". Fernanda Viégas & Martin Wattenberg. Book chapter in Net Works: Case Studies in Web Art and Design. Ed. xtine burrough, Routledge 2011
"Beautiful History". Fernanda Viégas & Martin Wattenberg. Book chapter in Beautiful Visualization: Looking at Data Through the Eyes of Experts. Ed. Julie Steele, Noah Iliinsky. O'Reilly Media, 2010.
References
External links
Fernanda B. Viégas Personal home page for Viégas
Academic publications listed on IBM's site.
Many Eyes Experimental public visualization site.
1971 births
Living people
Data and information visualization experts
Human–computer interaction
Human–computer interaction researchers
Brazilian contemporary artists
Brazilian digital artists
Women digital artists
Massachusetts Institute of Technology alumni
IBM employees
Brazilian women scientists
Brazilian scientists
People from Cambridge, Massachusetts
Google employees
Technology company founders
American company founders
American women company founders
Brazilian women company founders
Brazilian designers
MIT Media Lab people
21st-century Brazilian women artists
21st-century Brazilian artists
21st-century American women
University of Kansas alumni | Fernanda Viégas | Engineering | 960 |
23,728,527 | https://en.wikipedia.org/wiki/Estipite | The estipite column is a type of pilaster used in buildings in the Mannerist and Baroque styles,a moment when many classical architectural elements lost their simple shapes and became increasingly complex, offering a variety of forms and exuberant decoration. This sort of column has the shape of an inverted pyramid or obelisk. Sometimes the shaft is wider in its middle part than in the base or capital. There are many examples by architects like Michelangelo’s Biblioteca Laurenziana (1523-1571) and others. It became later a signature element of the Churrigueresque Baroque style of Spain and Spanish America in the 18th century.
Characteristics
Form
The estipite has a narrow base, and its shaft takes the shape of an inverted obelisk. This is a variation on previous uses of the pilaster that deviates from classical architecture in its form. In classical architecture, pilasters give the impression that they have a load-bearing function; the obelisk shape of the estipite disrupts this tradition. The estipite is not supposed to look solid but rather dynamic, creating movement and an apparent lightness in the structure.
Manuel Toussaint defines estipites as:
“A supporting member, square or rectangular in section, and formed of multiple elements: pyramids and truncated prisms, parallelepipeds, superimposed foliage, medallions, garlands, bouquets, festoons. The ornament is all vegetable, applied to geometric forms”.
Capitals
The capitals usually highlight the line of a broken cornice and stand unconnected, or may be linked to another estipite by a horizontal entablature. The capitals of estipite pilasters are typically Corinthian, though there are deviations: for example, decorations of vegetation and cherub heads take the place of the Corinthian capital in the Capilla del Sagrario of the Segovia Cathedral by Jeronimo de Balbas.
Double Columns
Similar to Baroque styling with its use of double columns, double estipites are a feature of some Churrigueresque buildings.
Alongside other styles
Estipites were used in the period between the Ultra-Baroque and the rise of Neo-Classical styles. Therefore, even though estipites are distinct in style, they are sometimes found alongside Solomonic and classical columns. A good example of this is San Francisco Acatepec in Puebla.
History
Origin
In Richard W. Amero's thesis, The California Building: A Case Of The Misunderstood Baroque, he claims that Michelangelo was the first to use an estipite pilaster, in the Laurentian Library (1526). Meanwhile, John F. Moffitt states in his thesis El Sagrario Metropolitano, Wendel Dietterlin, and The Estipite that Juan de Arfe y Villafane could have been the first known person to mention the estipite, as seen in Arfe's Description de la traza de la custodia de la Iglesia de Sevilla (1587). The origins of the estipite are therefore debated among scholars.
Spain and New Spain
The architect known for making estipites popular is Jose Benito de Churriguera, after whom the Churrigueresque style is named. His first works with estipites were the Capilla del Sagrario for the Segovia Cathedral (1690) and the Convento de San Esteban, Salamanca (1693). Jeronimo de Balbas was a Spanish architect who moved to Mexico (New Spain) in 1717 and introduced the New World to estipites. His Retablo de los Reyes in the Mexico City Metropolitan Cathedral (1718–37) was the first work to showcase estipites in the New World. The era of estipites lasted only until 1783, with the establishment of the Academia de San Carlos, an architecture school in New Spain. However, in the short period between 1736, when the Retablo de los Reyes was completed, and 1783, many buildings in New Spain (Mexico) acquired facades or altars with estipites. Owing to the decline in popularity of the estipite pilaster, Solomonic and classical columns were revived throughout Spain and New Spain, and many estipite-style monuments were destroyed or replaced with classical columns in the last decades of the 1800s.
References
Baroque architectural features
Churrigueresque architecture
Orders of columns
Columns and entablature | Estipite | Technology | 943 |
11,911,464 | https://en.wikipedia.org/wiki/Model%20of%20hierarchical%20complexity | The model of hierarchical complexity (MHC) is a framework for scoring how complex a behavior is, such as verbal reasoning or other cognitive tasks. It quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. This model was developed by Michael Commons and Francis Richards in the early 1980s.
Overview
The model of hierarchical complexity (MHC) is a formal theory and a mathematical psychology framework for scoring how complex a behavior is. Developed by Michael Lamport Commons and colleagues, it quantifies the order of hierarchical complexity of a task based on mathematical principles of how the information is organized, in terms of information science. Its forerunner was the general stage model.
Behaviors that may be scored include those of individual humans or their social groupings (e.g., organizations, governments, societies), animals, or machines. It enables scoring the hierarchical complexity of task accomplishment in any domain. It is based on the very simple notions that higher order task actions:
are defined in terms of the next lower ones (creating hierarchy);
organize the next lower actions;
organize lower actions in a non-arbitrary way (differentiating them from simple chains of behavior).
It is cross-culturally and cross-species valid. The reason it applies cross-culturally is that the scoring is based on the mathematical complexity of the hierarchical organization of information. Scoring does not depend upon the content of the information (e.g., what is done, said, written, or analyzed) but upon how the information is organized.
The MHC is a non-mentalistic model of developmental stages. It specifies 16 orders of hierarchical complexity and their corresponding stages. It is different from previous proposals about developmental stage applied to humans; instead of attributing behavioral changes across a person's age to the development of mental structures or schema, this model posits that sequences of task behaviors form hierarchies that become increasingly complex. Because less complex tasks must be completed and practiced before more complex tasks can be acquired, this accounts for the developmental changes seen in an individual person's performance of complex tasks. For example, a person cannot perform arithmetic until the numeral representations of numbers are learned, and cannot operationally multiply sums of numbers until addition is learned. However much natural intelligence helps humans understand numbers, it does not by itself enable multiplying large numbers without first learning addition.
The creators of the MHC claim that previous theories of stage have confounded the stimulus and response in assessing stage by simply scoring responses and ignoring the task or stimulus. The MHC separates the task or stimulus from the performance. The participant's performance on a task of a given complexity represents the stage of developmental complexity.
Previous stage theories were unsatisfying to Commons and Richards because the theories did not show the existence of the stages more than describing sequential changes in human behavior. This led them to create a list of two concepts they felt a successful developmental theory should address. The two ideas they wanted to study were (1) the hierarchical complexity of the task to be solved and (2) the psychology, sociology, and anthropology of the task performance (and the development of the performance).
Vertical complexity of tasks performed
One major basis for this developmental theory is task analysis. The study of ideal tasks, including their instantiation in the real world, has been the basis of the branch of stimulus control called psychophysics. Tasks are defined as sequences of contingencies, each presenting stimuli and each requiring a behavior or a sequence of behaviors that must occur in some non-arbitrary fashion. The complexity of behaviors necessary to complete a task can be specified using the horizontal complexity and vertical complexity definitions described below. Behavior is examined with respect to the analytically-known complexity of the task.
Tasks are quantal in nature. They are either completed correctly or not completed at all. There is no intermediate state (tertium non datur). For this reason, the model characterizes all stages as P-hard and functionally distinct. The orders of hierarchical complexity are quantized like the electron atomic orbitals around the nucleus: each task difficulty has an order of hierarchical complexity required to complete it correctly, analogous to the atomic Slater determinant. Since tasks of a given quantified order of hierarchical complexity require actions of a given order of hierarchical complexity to perform them, the stage of the participant's task performance is equivalent to the order of complexity of the successfully completed task. The quantal feature of tasks is thus particularly instrumental in stage assessment because the scores obtained for stages are likewise discrete.
Every task contains a multitude of subtasks. When the subtasks are carried out by the participant in a required order, the task in question is successfully completed. Therefore, the model asserts that all tasks fit in some configured sequence of tasks, making it possible to precisely determine the hierarchical order of task complexity. Tasks vary in complexity in two ways: either as horizontal (involving classical information) or as vertical (involving hierarchical information).
Horizontal complexity
Classical information describes the number of "yes–no" questions it takes to do a task. For example, if one asked a person across the room whether a penny came up heads when they flipped it, their saying "heads" would transmit 1 bit of "horizontal" information. If there were 2 pennies, one would have to ask at least two questions, one about each penny. Hence, each additional 1-bit question would add another bit. Let us say they had a four-faced top with the faces numbered 1, 2, 3, and 4. Instead of spinning it, they tossed it against a backboard as one does with dice in a game of craps. Again, there would be 2 bits. One could ask them whether the face had an even number. If it did, one would then ask if it were a 2. Horizontal complexity, then, is the sum of bits required by just such tasks as these.
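In information-theoretic terms, identifying one outcome among n equally likely alternatives takes log2(n) yes–no questions. A minimal sketch of the coin and top examples above (the function name is mine, for illustration):

```python
import math

def horizontal_complexity_bits(num_outcomes: int) -> float:
    # Bits of classical information = number of yes/no questions needed
    # to single out one outcome among equally likely alternatives.
    return math.log2(num_outcomes)

print(horizontal_complexity_bits(2))  # one coin flip             -> 1.0
print(horizontal_complexity_bits(4))  # two coins, or 4-faced top -> 2.0
```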
Vertical complexity
Hierarchical complexity refers to the number of recursions that the coordinating actions must perform on a set of primary elements. Actions at a higher order of hierarchical complexity: (a) are defined in terms of actions at the next lower order of hierarchical complexity; (b) organize and transform the lower-order actions; (c) produce organizations of lower-order actions that are qualitatively new and not arbitrary, and cannot be accomplished by those lower-order actions alone. Once these conditions have been met, we say the higher-order action coordinates the actions of the next lower order.
To illustrate how lower actions get organized into more hierarchically complex actions, let us turn to a simple example. Completing the entire operation 3 × (4 + 1) constitutes a task requiring the distributive act. That act non-arbitrarily orders adding and multiplying to coordinate them. The distributive act is therefore one order more hierarchically complex than the acts of adding and multiplying alone; it indicates the singular proper sequence of the simpler actions. Although simply adding results in the same answer, people who can do both display a greater freedom of mental functioning. Additional layers of abstraction can be applied. Thus, the order of complexity of the task is determined through analyzing the demands of each task by breaking it down into its constituent parts.
The hierarchical complexity of a task refers to the number of concatenation operations it contains, that is, the number of recursions that the coordinating actions must perform. An order-three task has three concatenation operations. A task of order three operates on one or more tasks of vertical order two and a task of order two operates on one or more tasks of vertical order one (the simplest tasks).
Stages of development
Stage theories describe human organismic and/or technological evolution as systems that move through a pattern of distinct stages over time. Here development is described formally in terms of the model of hierarchical complexity (MHC).
Formal definition of stage
Since actions are defined inductively, so is the function $h$, known as the order of the hierarchical complexity. To each action $A$, we wish to associate a notion of that action's hierarchical complexity, $h(A)$. Given a collection of actions $\mathcal{A}$ and a participant $S$ performing $\mathcal{A}$, the stage of performance of $S$ on $\mathcal{A}$ is the highest order of the actions in $\mathcal{A}$ completed successfully at least once; that is, $\operatorname{stage}(S, \mathcal{A}) = \max\{\, h(A) \mid A \in \mathcal{A} \text{ and } A \text{ completed successfully by } S \,\}$. Thus, the notion of stage is discontinuous, having the same transitional gaps as the orders of hierarchical complexity. This is in accordance with previous definitions.
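The definition translates directly into code. A toy sketch (the encoding of tasks as (order, succeeded) pairs is my own illustration, not part of the model's formalism):

```python
def stage_of_performance(performances):
    """Highest order of hierarchical complexity among tasks completed
    successfully at least once; None if no task was completed."""
    completed = [order for order, succeeded in performances if succeeded]
    return max(completed) if completed else None

# A participant succeeding at orders 9 and 10 but failing an order-11
# task performs at stage 10 on this task set.
print(stage_of_performance([(9, True), (10, True), (11, False)]))  # -> 10
```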
Because MHC stages are conceptualized in terms of the hierarchical complexity of tasks rather than in terms of mental representations (as in Piaget's stages), the highest stage represents successful performances on the most hierarchically complex tasks rather than intellectual maturity.
Relationship with Piaget's theory
The MHC builds on Piagetian theory but differs from it in many ways; notably the MHC has additional higher stages. In both theories, one finds:
Higher-order actions defined in terms of lower-order actions. This forces the hierarchical nature of the relations and makes the higher-order tasks include the lower ones and requires that lower-order actions are hierarchically contained within the relative definitions of the higher-order tasks.
Higher-order of complexity actions organize those lower-order actions. This makes them more powerful. Lower-order actions are organized by the actions with a higher order of complexity, i.e., the more complex tasks.
What Commons et al. (1998) have added includes:
Higher-order-of-complexity actions organize those lower-order actions in a non-arbitrary way.
This makes it possible for the model's application to meet real world requirements, including the empirical and analytic. Arbitrary organization of lower order of complexity actions, possible in the Piagetian theory, despite the hierarchical definition structure, leaves the functional correlates of the interrelationships of tasks of differential complexity formulations ill-defined.
Moreover, the model is consistent with the neo-Piagetian theories of cognitive development. According to these theories, progression to higher stages or levels of cognitive development is caused by increases in processing efficiency and working memory capacity. That is, higher-order stages place increasingly higher demands on these functions of information processing, so that their order of appearance reflects the information processing possibilities at successive ages.
The following dimensions are inherent in the application:
Task and performance are separated.
All tasks have an order of hierarchical complexity.
There is only one sequence of orders of hierarchical complexity.
Hence, there is structure of the whole for ideal tasks and actions.
There are transitional gaps between the orders of hierarchical complexity.
Stage is defined as the most hierarchically complex task solved.
There are discrete gaps in Rasch scaled stage of performance.
Performance stage differs from task area to task area.
There is no structure of the whole—horizontal décalage—for performance. It is not inconsistency in thinking within a developmental stage. Décalage is the normal modal state of affairs.
Orders and corresponding stages
The MHC specifies 16 orders of hierarchical complexity and their corresponding stages, positing that each of Piaget's substages is, in fact, a robustly hard stage. The MHC adds five postformal stages to Piaget's developmental trajectory: systematic stage 12, metasystematic stage 13, paradigmatic stage 14, cross-paradigmatic stage 15, and meta-cross-paradigmatic stage 16. It may be that Piaget's consolidated formal stage is the same as the systematic stage. The sequence is as follows: (0) calculatory, (1) automatic, (2) sensory & motor, (3) circular sensory-motor, (4) sensory-motor, (5) nominal, (6) sentential, (7) preoperational, (8) primary, (9) concrete, (10) abstract, (11) formal, and the five postformal: (12) systematic, (13) metasystematic, (14) paradigmatic, (15) cross-paradigmatic, and (16) meta-cross-paradigmatic. The first four stages (0–3) correspond to Piaget's sensorimotor stage at which infants and very young children perform. Adolescents and adults can perform at any of the subsequent stages. MHC stages 4 through 5 correspond to Piaget's pre-operational stage; 6 through 8 correspond to his concrete operational stage; and 9 through 11 correspond to his formal operational stage.
More complex behaviors characterize multiple system models. The four highest stages in the MHC are not represented in Piaget's model. The higher stages of the MHC have extensively influenced the field of positive adult development. Some adults are said to develop alternatives to, and perspectives on, formal operations; they use formal operations within a "higher" system of operations. Some theorists call the more complex orders of cognitive tasks "postformal thought", but other theorists argue that these higher orders cannot exactly be labelled as postformal thought.
Jordan (2018) argued that unidimensional models such as the MHC, which measure level of complexity of some behavior, refer to only one of many aspects of adult development, and that other variables are needed (in addition to unidimensional measures of complexity) for a fuller description of adult development.
Empirical research using the model
The MHC has a broad range of applicability. Its mathematical foundation permits it to be used by anyone examining task performance that is organized into stages. It is designed to assess development based on the order of complexity which the actor utilizes to organize information. The model thus allows for a standard quantitative analysis of developmental complexity in any cultural setting. Other advantages of this model include its avoidance of mentalistic explanations, as well as its use of quantitative principles which are universally applicable in any context.
The following practitioners can use the MHC to quantitatively assess developmental stages:
Cross-cultural developmentalists
Animal developmentalists
Evolutionary psychologists
Organizational psychologists
Developmental political psychologists
Learning theorists
Perception researchers
Historians of science
Educators
Therapists
Anthropologists
List of examples
In one representative study, Commons, Goodheart, and Dawson (1997) found, using Rasch analysis (Rasch, 1980), that hierarchical complexity of a given task predicts stage of a performance, the correlation being r = 0.92. Correlations of similar magnitude have been found in a number of the studies. The following are examples of tasks studied using the model of hierarchical complexity or Kurt W. Fischer's similar skill theory:
Algebra (Commons, Giri, & Harrigan, 2014)
Animal stages (Commons & Miller, 2004)
Atheism (Commons-Miller, 2005)
Attachment and loss (Commons, 1991; Miller & Lee, 2000)
Balance beam and pendulum (Commons, Goodheart, & Bresette, 1995; Commons, Giri, & Harrigan, 2014)
Contingencies of reinforcement (Commons & Giri, 2016)
Counselor stages (Lovell, 2002)
Empathy of hominids (Commons & Wolfsont, 2002)
Epistemology (Kitchener & Fischer, 1990; Kitchener & King, 1990)
Evaluative reasoning (Dawson, 2000)
Four story problem (Commons, Richards & Kuhn, 1982; Kallio & Helkama, 1991)
Good education (Dawson-Tunik, 2004)
Good interpersonal relations (Armon, 1984a; Armon, 1984b; Armon, 1989)
Good work (Armon, 1993)
Honesty and kindness (Lamborn, Fischer & Pipp, 1994)
Informed consent (Commons & Rodriguez, 1990; Commons & Rodriguez, 1993; Commons, Goodheart, Rodriguez, & Gutheil, 2006)
Language stages (Commons et al., 2007)
Leadership before and after crises (Oliver, 2004)
Loevinger's sentence completion task (Cook-Greuter, 1990)
Moral judgment (Armon & Dawson, 1997; Dawson, 2000)
Music (Beethoven) (Funk, 1989)
Physics tasks (Inhelder & Piaget, 1958)
Political development (Sonnert & Commons, 1994)
Report patient's prior crimes (Commons, Lee, Gutheil, et al., 1995)
Social perspective-taking (Commons & Rodriguez, 1990; Commons & Rodriguez, 1993)
Spirituality (Miller & Cook-Greuter, 1994)
Tool making of hominids (Commons & Miller 2002)
Views of the good life (Armon, 1984b; Danaher, 1993; Dawson, 2000; Lam, 1995)
Workplace culture (Commons, Krause, Fayer, & Meaney, 1993)
Workplace organization (Bowman, 1996)
As of 2014, people and institutes from all the major continents of the world, except Africa, have used the model of hierarchical complexity. Because the model is very simple and is based on analysis of tasks and not just performances, it is dynamic. With the help of the model, it is possible to quantify the occurrence and progression of transition processes in task performances at any order of hierarchical complexity.
Criticisms
The descriptions of stages 13–15 have been described as insufficiently precise.
References
Literature
Biggs, J.B. & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy (structure of the observed learning outcome). New York: Academic Press.
Fischer, K.W. (1980). A theory of cognitive development: The control and construction of hierarchies of skills. Psychological Review, 87(6), 477–531.
External links
Behavioral Development Bulletin
Society for Research in Adult Development
Cognition
Management cybernetics
Complex systems theory
Developmental stage theories
Psychophysics | Model of hierarchical complexity | Physics | 3,666 |
4,954,583 | https://en.wikipedia.org/wiki/FreeOTFE | FreeOTFE is a discontinued open source computer program for on-the-fly disk encryption (OTFE). On Microsoft Windows, and Windows Mobile (using FreeOTFE4PDA), it can create a virtual drive within a file or partition, to which anything written is automatically encrypted before being stored on a computer's hard or USB drive. It is similar in function to other disk encryption programs including TrueCrypt and Microsoft's BitLocker.
The author, Sarah Dean, has been absent since 2011. The FreeOTFE website has been unreachable since June 2013, and the domain name is now registered by a domain squatter. The original program can be downloaded from a mirror at SourceForge. In June 2014, a fork of the project named LibreCrypt appeared on GitHub.
Overview
FreeOTFE was initially released by Sarah Dean in 2004, and was the first open-source disk encryption system to provide a modular architecture allowing third parties to implement additional algorithms if needed. Older FreeOTFE licensing required that any modification to the program be placed in the public domain; this does not technically conform to section 3 of the Open Source Definition. Newer program licensing omits this condition. The FreeOTFE license has not been approved by the Open Source Initiative and is not certified to be labeled with the open-source certification mark.
This software is compatible with Linux encrypted volumes (e.g. LUKS, cryptoloop, dm-crypt), allowing data encrypted under Linux to be read (and written) freely. It was the first open source transparent disk encryption system to support Windows Vista and PDAs.
Optional two-factor authentication using smart cards and/or hardware security modules (HSMs, also termed security tokens) was introduced in v4.0, using the PKCS#11 (Cryptoki) standard developed by RSA Laboratories.
FreeOTFE also allows any number of "hidden volumes" to be created, giving plausible deniability and deniable encryption, and also has the option of encrypting full partitions or disks (but not the system partition).
Portable use
FreeOTFE can be used in "portable" (or "traveller") mode, which allows it to be kept on a USB drive or other portable media, together with its encrypted data, and carried around. This allows it to be used under Microsoft Windows without installation of the complete program to "mount" and access the encrypted data through a virtual disk.
The use of this mode requires installing device drivers (at least temporarily) to create virtual disks, and as a consequence administrator rights are needed to start this traveller mode. As with most open source software that uses device drivers, the user must enable test signing when running Windows Vista x64 and Windows 7 x64 systems.
Driverless operation
Packaged with FreeOTFE is another program called "FreeOTFE Explorer", which provides a driverless system that allows encrypted disks to be used without administrator rights.
This allows FreeOTFE encrypted data to be used on (for example) public computers found in libraries or computer kiosks (interactive kiosks), where administrator rights are unavailable.
Unlike FreeOTFE, FreeOTFE Explorer does not provide on-the-fly encryption through a virtual drive. Instead it lets files be stored in and extracted from encrypted disk images, in a similar manner to ZIP and RAR archives, using a Windows Explorer-like interface.
Algorithms implemented
Due to its architecture, FreeOTFE provides great flexibility to the user with its encryption options.
Ciphers
FreeOTFE implements several ciphers, including:
AES
Blowfish
CAST5 / CAST6
DES / Triple DES
MARS
RC6
Serpent
Twofish
It includes all National Institute of Standards and Technology (NIST) Advanced Encryption Standard (AES) finalists, and all ciphers can be used with multiple different key lengths.
Cipher modes
FreeOTFE originally offered encryption using cipher-block chaining (CBC) with encrypted salt-sector initialization vector (ESSIV); from v3.00 it introduced LRW and the more secure XTS mode, which supersedes LRW in the IEEE P1619 standard for disk encryption.
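To make the sector-level picture concrete, here is a sketch of XTS-mode encryption of a single disk sector using Python's cryptography package — not FreeOTFE's own code, and the little-endian sector-number tweak is one common convention, assumed here for illustration:

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xts_encrypt_sector(key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    # XTS takes a 16-byte tweak; disk encryptors derive it from the
    # sector number so each sector encrypts differently under one key.
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = os.urandom(64)       # AES-256-XTS: two 256-bit keys concatenated
sector = os.urandom(512)   # one 512-byte sector of plaintext
ciphertext = xts_encrypt_sector(key, 42, sector)
assert len(ciphertext) == len(sector)  # XTS preserves sector length
```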
Hashes
As with its cipher options, FreeOTFE offers many different hash algorithms:
MD2
MD4
MD5
RIPEMD-128
RIPEMD-160
RIPEMD-224
RIPEMD-320
SHA-1
SHA-224
SHA-256
SHA-384
SHA-512
Tiger
Whirlpool
See also
Disk encryption
Disk encryption software
On-the-fly encryption
Comparison of disk encryption software
References
External links
Cryptographic software
Disk encryption
Free security software
Windows security software
Windows Mobile software
Portable software | FreeOTFE | Mathematics | 983 |
75,282,554 | https://en.wikipedia.org/wiki/Remibrutinib | Remibrutinib is a small molecule drug that acts as a Bruton's tyrosine kinase (BTK) inhibitor. It is in development for the treatment of chronic spontaneous urticaria. In November 2023, Novartis announced that the compound "demonstrated clinically meaningful and statistically significant reduction in urticaria activity vs placebo" in a Phase III trial.
References
Tyrosine kinase inhibitors
Acrylamides
Aminopyrimidines
Benzamides
Fluoroarenes
Ethanolamines
Cyclopropyl compounds | Remibrutinib | Chemistry | 114 |
2,995,958 | https://en.wikipedia.org/wiki/Potential%20isomorphism | In mathematical logic and in particular in model theory, a potential isomorphism is a collection of finite partial isomorphisms between two models which satisfies certain closure conditions. Existence of a partial isomorphism entails elementary equivalence, however the converse is not generally true, but it holds for ω-saturated models.
Definition
A potential isomorphism between two models M and N is a non-empty collection F of finite partial isomorphisms between M and N which satisfies the following two properties:
for all finite partial isomorphisms Z ∈ F and for all x ∈ M, there is a y ∈ N such that Z ∪ {(x,y)} ∈ F
for all finite partial isomorphisms Z ∈ F and for all y ∈ N, there is an x ∈ M such that Z ∪ {(x,y)} ∈ F
The notion of an Ehrenfeucht–Fraïssé game gives an exact characterisation of elementary equivalence, and potential isomorphism can be seen as an approximation of it. Another notion similar to potential isomorphism is that of local isomorphism.
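A standard worked example (the classic back-and-forth argument, stated here for illustration): the dense linear orders without endpoints (ℚ, <) and (ℝ, <) admit a potential isomorphism even though they are not isomorphic, since one is countable and the other is not. Take

```latex
F = \{\, Z : Z \ \text{is a finite partial order-isomorphism from } (\mathbb{Q}, <) \text{ to } (\mathbb{R}, <) \,\}.
```

Given Z ∈ F and a new point x ∈ ℚ, the density and lack of endpoints of ℝ supply a y positioned relative to the range of Z exactly as x is positioned relative to its domain, and symmetrically in the other direction; both closure conditions hold, so F is a potential isomorphism, witnessing the elementary equivalence of the two orders.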
References
Model theory | Potential isomorphism | Mathematics | 221 |
47,825,086 | https://en.wikipedia.org/wiki/Dark%20data | Dark data is data which is acquired through various computer network operations but not used in any manner to derive insights or for decision making. The ability of an organisation to collect data can exceed the throughput at which it can analyse the data. In some cases the organisation may not even be aware that the data is being collected. IBM estimate that roughly 90 percent of data generated by sensors and analog-to-digital conversions never get used.
In an industrial context, dark data can include information gathered by sensors and telematics.
Organizations retain dark data for a multitude of reasons, and it is estimated that most companies are only analyzing 1% of their data. Often it is stored for regulatory compliance and record keeping. Some organizations believe that dark data could be useful to them in the future, once they have acquired better analytic and business intelligence technology to process the information. Because storage is inexpensive, storing data is easy. However, storing and securing the data usually entails greater expenses (or even risk) than the potential return profit.
In academic discourse, the term dark data was essentially coined by Bryan P. Heidorn. He uses it to describe research data, especially from the long tail of science (the many small research projects), which are not or no longer available for research because they disappear into a drawer without adequate data management. Without such management the data become dark; further causes include missing metadata annotation, missing data management plans, and a lack of data curators.
Analysis
The term "dark data" very often refers to data that is not amenable to computer processing. For example, a company might have a great deal of data that exists only as scanned page-images. Even the bare text in such documents is not available without something like Optical character recognition, which can vary greatly in accuracy. Even with OCR, the significance of each part of the data is unavailable. An obvious examples is whether a capitalized word is a name or not, and if so, whether it represents a person, place, organization, or even a work of art. Bibliographic and other references, data within tables (that may be labeled quite adequately for humans, but not for processing), and countless assertions represented with the full complexity and ambiguity of human language.
A lot of unused data is very valuable and would be used if it could be, but it is blocked because it is in formats that are difficult to process, categorise, identify, and analyse. Often the reason that businesses do not use their dark data is the amount of resources it would take and the difficulty of having that data analysed. In other words, the data is "dark" not because it is not used, but because it cannot (feasibly or affordably) be used, given its poor representation.
There are many data representations that can make data much more accessible for automation. However, a great deal of information lacks any such identification of information items or relationships; and much more loses it during "downhill" conversion such as saving to page-oriented representations, printing, scanning, or faxing. The journey back "uphill" can be costly.
According to Computer Weekly, 60% of organisations believe that their own business intelligence reporting capability is "inadequate" and 65% say that they have "somewhat disorganised content management approaches".
Relevance
Useful data may become dark data after it becomes irrelevant because it was not processed fast enough. This is called "perishable insights" in "live flowing data". For example, if the geolocation of a customer is known to a business, the business can make offers based on the location; however, if this data is not processed immediately, it may become irrelevant. According to IBM, about 60 percent of data loses its value immediately.
Storage
According to the New York Times, 90% of energy used by data centres is wasted. If data was not stored, energy costs could be saved. Furthermore, there are costs associated with the underutilisation of information and thus missed opportunities. According to Datamation, "the storage environments of EMEA organizations consist of 54 percent dark data, 32 percent redundant, obsolete and trivial data and 14 percent business-critical data. By 2020, this can add up to $891 billion in storage and management costs that can otherwise be avoided."
The continuous storage of dark data can put an organisation at risk, especially if this data is sensitive. In the case of a breach, this can result in serious repercussions. These can be financial, legal and can seriously hurt an organisation's reputation. For example, a breach of private records of customers could result in the stealing of sensitive information, which could result in identity theft. Another example could be the breach of the company's own sensitive information, for example relating to research and development. These risks can be mitigated by assessing and auditing whether this data is useful to the organisation, employing strong encryption and security and finally, if it is determined to be discarded, then it should be discarded in a way that it becomes unretrievable.
Future
It is generally expected that as more advanced computing systems for data analysis are built, the value of dark data will increase. It has been noted that "data and analytics will be the foundation of the modern industrial revolution"; this includes data that is currently considered "dark" simply because there are not enough resources to process it. All of the data being collected can be used in the future to maximise productivity and organisations' ability to meet consumer demand. Technological advancements are helping to leverage this dark data affordably. Furthermore, many organisations do not yet realise the value of dark data: in healthcare and education, for example, organisations deal with large amounts of data that could create a significant "potential to service students and patients in the manner in which the consumer and financial services pursue their target population".
References
Data analysis
Data collection
Databases
Computer data storage | Dark data | Technology | 1,221 |
22,398,576 | https://en.wikipedia.org/wiki/Neutrino%20Array%20Radio%20Calibration | The Neutrino Array Radio Calibration (NARC) experiment was the successor to the Radio Ice Cherenkov Experiment (RICE) which served as a testbed for future development of an eventual large-scale neutrino radio-detection array. NARC involved detecting ultra high energy electron neutrinos through their interactions with ice molecules in the Antarctic icecap, based on the principle of radio coherence. Experimentally, the goal was to detect and measure long-wavelength (radiofrequency) pulses resulting from this interaction. The experiment ended 2012 (end of data-taking 2010). The experiment is succeeded by the Askaryan Radio Array (ARA) experiment.
External links
Neutrino Array Radio Calibration webpage
IceCube Neutrino Observatory
Science and technology in Antarctica
Neutrino astronomy
Astronomical experiments in the Antarctic | Neutrino Array Radio Calibration | Astronomy | 175 |
6,586,455 | https://en.wikipedia.org/wiki/Lycorine | Lycorine is a toxic crystalline alkaloid found in various Amaryllidaceae species, such as the cultivated bush lily (Clivia miniata), surprise lilies (Lycoris), and daffodils (Narcissus). It may be highly poisonous, or even lethal, when ingested in certain quantities. Regardless, it is sometimes used medicinally, a reason why some groups may harvest the very popular Clivia miniata.
Source
Lycorine is found in different species of Amaryllidaceae which include flowers and bulbs of daffodil, snowdrop (Galanthus) or spider lily (Lycoris). Lycorine is the most frequent alkaloid of Amaryllidaceae.
The earliest diversification of Amaryllidaceae most likely occurred in North Africa and the Iberian peninsula, and lycorine is one of the oldest alkaloids in the Amaryllidaceae alkaloid biosynthetic pathway.
Mechanism of action
There is currently very little known about the mechanism of action of lycorine, although there have been some tentative hypotheses advanced concerning the metabolism of the alkaloid, based on experiments carried out upon beagle dogs.
Lycorine inhibits protein synthesis, and may inhibit ascorbic acid biosynthesis, although studies on the latter are controversial and inconclusive. Presently, it serves some interest in the study of certain yeasts, the principal organism on which lycorine is tested.
It is known that lycorine weakly inhibits acetylcholinesterase (AChE) and ascorbic acid biosynthesis. The IC50 of lycorine was found to vary between the different species it can be found in, but a common deduction from the experiments on lycorine was that it had some effect on inhibiting AChE.
Lycorine exhibits cytostatic effects by targeting the actin cytoskeleton rather than by inducing apoptosis in cancer cells, though lycorine has been found to induce apoptosis or arrest the cell cycle at different points in various cell lines.
Toxicity
Poisoning by lycorine most often occurs through the ingestion of daffodil bulbs.
Daffodil bulbs are sometimes confused with onions, leading to accidental poisoning.
In a dosage study on beagle dogs, the first sign of nausea was observed at a dose as low as 0.5 mg/kg and occurred within 2.5 hours. The effective dose to induce emesis in the dogs was 2.0 mg/kg, with effects lasting no longer than 2.5 hours after administration.
Symptoms
Symptoms of lycorine toxicity are nausea, vomiting, diarrhea, and convulsions.
Current research
Lycorine has shown promising biological and pharmacological activities, such as antibacterial, antiviral, and anti-inflammatory effects, and may have anticancer properties. It has displayed inhibitory properties towards multiple cancer cell lines, including lymphoma, carcinoma, multiple myeloma, melanoma, leukemia, human A549 non-small-cell lung cancer, human OE21 esophageal cancer, and more.
Lycorine has many derivatives used for anti-cancer research such as lycorine hydrochloride (LH) which is a novel anti-ovarian cancer agent, and data has shown that LH effectively inhibited mitotic proliferation of Hey1B cells with very low toxicity. This drug could be used for effective anti-ovarian cancer therapy in the future.
References
External links
Isoquinoline alkaloids
Quinoline alkaloids
Diols
Phenanthridines
Plant toxins
Acetylcholinesterase inhibitors
Protein synthesis inhibitors | Lycorine | Chemistry | 802 |
9,641,508 | https://en.wikipedia.org/wiki/William%20L.%20Jorgensen | William L. Jorgensen (born October 5, 1949, in New York) is a Sterling Professor of Chemistry at Yale University. He is known for his work in the field of computational chemistry. Some of his contributions include the TIP3P, TIP4P, and TIP5P water models, the OPLS force field, free-energy perturbation theory for modelling reactions in solution, protein-ligand binding, and drug design. Jorgensen served as the Editor of the ACS Journal of Chemical Theory and Computation from its founding in 2005 until 2022.
Academic career
Jorgensen earned a bachelor's degree from Princeton University in 1970 and a PhD from Harvard University in 1975 in Chemical Physics while studying under Elias J. Corey. Jorgensen then worked at Purdue University from 1975 to 1990 first as an assistant professor and then later as a Professor. He joined the Yale faculty in 1990 and has remained there since.
Research
Jorgensen's research interests include the calculation of free energies of reactions using quantum mechanics, molecular mechanics, and Metropolis Monte Carlo methods. These methods have application to the calculation of protein-ligand binding affinities. Generally, the research goals involve developing theoretical and computational methods that contribute to the understanding of the structure and reactivity of organic and biomolecular systems. His research group has also pursued de novo drug design, synthesis, and protein crystallography, with the drug design particularly directed towards anti-infective, anti-proliferative, and anti-inflammatory agents. Jorgensen was an early contributor to the use of free-energy perturbation calculations for several applications, including efficient drug lead optimization. Using these methods, he developed improved NNRTIs for HIV treatment. In 2020, Jorgensen's group discovered inhibitors of the main protease of the SARS-CoV-2 virus.
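Free-energy perturbation, as referenced here, rests on the Zwanzig relation: the free-energy difference between two states 0 and 1 is an exponential average of their energy difference over configurations sampled from state 0:

```latex
\Delta A = A_1 - A_0
         = -k_B T \,\ln\!\left\langle \exp\!\left(-\frac{U_1 - U_0}{k_B T}\right)\right\rangle_0
```

In lead optimization, the two states are typically a ligand and a small chemical modification of it; evaluating the difference in both the protein-bound and free states yields the relative binding affinity.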
Awards and honors
Jorgensen's work has been recognized by many awards, including election to the American Academy of Arts and Sciences, the National Academy of Sciences, and the International Academy of Quantum Molecular Science. He has also received the ACS Award for Computers in Chemical and Pharmaceutical Research, the ACS Hildebrand Award, the Tetrahedron Prize, and the Arthur C. Cope Award.
See also
BOSS (molecular mechanics)
OPLS
Sources
External links
William L. Jorgensen Research Group
21st-century American chemists
Living people
1949 births
Yale University faculty
Harvard University alumni
Princeton University alumni
Yale Sterling Professors
Members of the United States National Academy of Sciences
Computational chemists | William L. Jorgensen | Chemistry | 503 |
172,732 | https://en.wikipedia.org/wiki/Glycerol | Glycerol is a simple triol compound. It is a colorless, odorless, viscous liquid that is sweet-tasting and non-toxic. The glycerol backbone is found in lipids known as glycerides. It is also widely used as a sweetener in the food industry and as a humectant in pharmaceutical formulations. Because of its three hydroxyl groups, glycerol is miscible with water and is hygroscopic in nature.
Modern use of the word glycerine (alternatively spelled glycerin) refers to commercial preparations of less than 100% purity, typically 95% glycerol.
Structure
Although achiral, glycerol is prochiral with respect to reactions of one of the two primary alcohols. Thus, in substituted derivatives, the stereospecific numbering labels the molecule with a sn- prefix before the stem name of the molecule.
Production
Natural sources
Glycerol is generally obtained from plant and animal sources where it occurs in triglycerides, esters of glycerol with long-chain carboxylic acids. The hydrolysis, saponification, or transesterification of these triglycerides produces glycerol as well as the fatty acid derivative:
Triglycerides can be saponified with sodium hydroxide to give glycerol and fatty sodium salt or soap.
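In schematic form (an illustrative generic equation, assuming for simplicity three identical fatty-acid chains R):

(RCOO)3C3H5 + 3 NaOH → C3H5(OH)3 + 3 RCOONa

Here (RCOO)3C3H5 is the triglyceride, C3H5(OH)3 is glycerol, and RCOONa is the sodium salt of the fatty acid, i.e. soap.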
Typical plant sources include soybeans or palm. Animal-derived tallow is another source. From 2000 to 2004, approximately 950,000 tons per year were produced in the United States and Europe; 350,000 tons of glycerol were produced in the U.S. alone. Since around 2010, there has been a large surplus of glycerol as a byproduct of biofuel production, driven for example by EU directive 2003/30/EC, which required 5.75% of petroleum fuels to be replaced with biofuel sources across all member states.
Crude glycerol produced from triglycerides is of variable quality, with selling prices as low as US$0.02–0.05 per kilogram as early as 2011. It can be purified in a rather expensive process by treatment with activated carbon to remove organic impurities, alkali to remove unreacted glycerol esters, and ion exchange to remove salts. High-purity glycerol (greater than 99.5%) is obtained by multi-step distillation; a vacuum chamber is necessary due to its high boiling point (290 °C).
Consequently, recycling this glycerol is more of a challenge than producing it; routes include conversion to glycerol carbonate or to synthetic precursors such as acrolein and epichlorohydrin.
Synthetic glycerol
Although usually no longer economical, glycerol can be synthesized by various routes. During World War II, synthetic glycerol processes became a national defense priority because it is a precursor to nitroglycerine. Epichlorohydrin is the most important precursor. Chlorination of propylene gives allyl chloride, which is oxidized with hypochlorite to dichlorohydrin, which reacts with a strong base to give epichlorohydrin. Epichlorohydrin can be hydrolyzed to glycerol. Chlorine-free processes from propylene include the synthesis of glycerol from acrolein and propylene oxide.
Applications
Food industry
In food and beverages, glycerol serves as a humectant, solvent, and sweetener, and may help preserve foods. It is also used as filler in commercially prepared low-fat foods (e.g., cookies), and as a thickening agent in liqueurs. Glycerol and water are used to preserve certain types of plant leaves. As a sugar substitute, it has approximately 27 kilocalories per teaspoon (sugar has 20) and is 60% as sweet as sucrose. It does not feed the bacteria that form dental plaque and cause dental cavities. As a food additive, glycerol is labeled as E number E422. It is added to icing (frosting) to prevent it from setting too hard.
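Taken together, those two figures imply that matching sweetness costs more energy with glycerol than with sugar (an illustrative calculation, not from the source): matching the sweetness of one teaspoon of sugar requires about 1/0.6 ≈ 1.7 teaspoons of glycerol, or roughly 1.7 × 27 ≈ 45 kilocalories, versus 20 kilocalories for the sugar itself.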
As used in foods, glycerol is categorized by the U.S. Academy of Nutrition and Dietetics as a carbohydrate. The U.S. Food and Drug Administration (FDA) carbohydrate designation includes all caloric macronutrients excluding protein and fat. Glycerol has a caloric density similar to table sugar, but a lower glycemic index and different metabolic pathway within the body.
It is also recommended as an additive when polyol sweeteners such as erythritol and xylitol are used, as its heating effect in the mouth will counteract these sweeteners' cooling effect.
Medical
Glycerol is used in medical, pharmaceutical and personal care preparations, often as a means of improving smoothness, providing lubrication, and as a humectant.
Ichthyosis and xerosis have been relieved by the topical use of glycerin. It is found in allergen immunotherapies, cough syrups, elixirs and expectorants, toothpaste, mouthwashes, skin care products, shaving cream, hair care products, soaps, and water-based personal lubricants. In solid dosage forms like tablets, glycerol is used as a tablet holding agent. For human consumption, glycerol is classified by the FDA among the sugar alcohols as a caloric macronutrient. Glycerol is also used in blood banking to preserve red blood cells prior to freezing.
Taken rectally, glycerol functions as a laxative by irritating the anal mucosa and inducing a hyperosmotic effect, expanding the colon by drawing water into it to induce peristalsis resulting in evacuation. It may be administered undiluted either as a suppository or as a small-volume (2–10 ml) enema. Alternatively, it may be administered in a dilute solution, such as 5%, as a high-volume enema.
Taken orally (often mixed with fruit juice to reduce its sweet taste), glycerol can cause a rapid, temporary decrease in the internal pressure of the eye. This can be useful for the initial emergency treatment of severely elevated eye pressure.
In 2017, researchers showed that the probiotic Limosilactobacillus reuteri bacteria can be supplemented with glycerol to enhance its production of antimicrobial substances in the human gut. This was confirmed to be as effective as the antibiotic vancomycin at inhibiting Clostridioides difficile infection without having a significant effect on the overall microbial composition of the gut.
Glycerol has also been incorporated as a component of bio-ink formulations in the field of bioprinting. The glycerol content acts to add viscosity to the bio-ink without adding large protein, saccharide, or glycoprotein molecules.
Botanical extracts
When utilized in "tincture" method extractions, specifically as a 10% solution, glycerol prevents tannins from precipitating in ethanol extracts of plants (tinctures). It is also used as an "alcohol-free" alternative to ethanol as a solvent in preparing herbal extractions. It is less extractive when utilized in a standard tincture methodology. Alcohol-based tinctures can also have the alcohol removed and replaced with glycerol for its preserving properties. Such products are not "alcohol-free" in a scientific or FDA regulatory sense, as glycerol contains three hydroxyl groups. Fluid extract manufacturers often extract herbs in hot water before adding glycerol to make glycerites.
When used as a primary "true" alcohol-free botanical extraction solvent in non-tincture-based methodologies, glycerol has been shown to possess a high degree of extractive versatility for botanicals, including removal of numerous constituents and complex compounds, with an extractive power that can rival that of alcohol and water–alcohol solutions. This high extractive power assumes that glycerol is utilized with dynamic (critical) methodologies, as opposed to standard passive "tincturing" methodologies that are better suited to alcohol. Glycerol does not denature or render a botanical's constituents inert as alcohols (ethanol, methanol, and so on) do. It is a stable preserving agent for botanical extracts that, when utilized in proper concentrations in an extraction solvent base, does not allow inverting or reduction-oxidation of a finished extract's constituents, even over several years. Both glycerol and ethanol are viable preserving agents; glycerol is bacteriostatic in its action, while ethanol is bactericidal.
Electronic cigarette liquid
Glycerin, along with propylene glycol, is a common component of e-liquid, a solution used with electronic vaporizers (electronic cigarettes). This glycerol is heated with an atomizer (a heating coil often made of Kanthal wire), producing the aerosol that delivers nicotine to the user.
Antifreeze
Like ethylene glycol and propylene glycol, glycerol is a non-ionic kosmotrope that forms strong hydrogen bonds with water molecules, competing with water-water hydrogen bonds. This interaction disrupts the formation of ice. The freezing point of the mixture reaches its minimum at about 70% glycerol in water.
Glycerol was historically used as an antifreeze for automotive applications before being replaced by ethylene glycol, which has a lower freezing point. While the minimum freezing point of a glycerol-water mixture is higher than that of an ethylene glycol-water mixture, glycerol is not toxic and is being re-examined for use in automotive applications.
In the laboratory, glycerol is a common component of solvents for enzymatic reagents stored at temperatures below freezing, due to the depression of the freezing temperature. It is also used as a cryoprotectant, where the glycerol is dissolved in water to reduce damage by ice crystals to laboratory organisms that are stored in frozen solutions, such as fungi, bacteria, nematodes, and mammalian embryos. Some organisms, like the moor frog, produce glycerol to survive freezing temperatures during hibernation.
Chemical intermediate
Glycerol is used to produce a variety of useful derivatives.
Nitration gives nitroglycerin, an essential ingredient of various explosives such as dynamite, gelignite, and propellants like cordite. Nitroglycerin under the name glyceryl trinitrate (GTN) is commonly used to relieve angina pectoris, taken in the form of sub-lingual tablets, patches, or as an aerosol spray.
Trifunctional polyether polyols are produced from glycerol and propylene oxide.
Oxidation of glycerol affords mesoxalic acid. Dehydrating glycerol affords hydroxyacetone.
Chlorination of glycerol gives 1-chloropropane-2,3-diol:
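Schematically (an illustrative equation; industrial conditions and catalysts vary):

HOCH2CH(OH)CH2OH + HCl → ClCH2CH(OH)CH2OH + H2O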
The same compound can be produced by hydrolysis of epichlorohydrin.
Epoxidation by reaction with epichlorohydrin and a Lewis acid yields glycerol triglycidyl ether.
Vibration damping
Glycerol is used as fill for pressure gauges to damp vibration. External vibrations, from compressors, engines, pumps, etc., produce harmonic vibrations within Bourdon gauges that can cause the needle to move excessively, giving inaccurate readings. The excessive swinging of the needle can also damage internal gears or other components, causing premature wear. Glycerol, when poured into a gauge to replace the air space, reduces the harmonic vibrations that are transmitted to the needle, increasing the lifetime and reliability of the gauge.
Niche uses
Entertainment industry
Glycerol is used by set decorators when filming scenes involving water to prevent an area meant to look wet from drying out too quickly.
Glycerine is also used in the generation of theatrical smoke and fog as a component of the fluid used in fog machines as a replacement for glycol, which has been shown to be an irritant if exposure is prolonged.
Ultrasonic couplant
Glycerol can sometimes be used as a replacement for water in ultrasonic testing, as it has a favourably higher acoustic impedance (2.42 MRayl versus 1.483 MRayl for water) while being relatively safe, non-toxic, non-corrosive and relatively low cost.
Internal combustion fuel
Glycerol is also used to power diesel generators supplying electricity for the FIA Formula E series of electric race cars.
Research on additional uses
Research continues into potential value-added products of glycerol obtained from biodiesel production. Examples (aside from combustion of waste glycerol):
Hydrogen gas production.
Glycerine acetate is a potential fuel additive.
Additive for starch thermoplastic.
Conversion to various other chemicals:
Propylene glycol
Acrolein
Ethanol
Epichlorohydrin, a raw material for epoxy resins
Metabolism
Glycerol is a precursor for synthesis of triacylglycerols and of phospholipids in the liver and adipose tissue. When the body uses stored fat as a source of energy, glycerol and fatty acids are released into the bloodstream.
Glycerol is mainly metabolized in the liver. Glycerol injections can be used as a simple test for liver damage, as its rate of absorption by the liver is considered an accurate measure of liver health. Glycerol metabolism is reduced in both cirrhosis and fatty liver disease.
Blood glycerol levels are highly elevated in diabetes, and this is believed to be a cause of reduced fertility in patients who suffer from diabetes and metabolic syndrome. Blood glycerol levels in diabetic patients average three times higher than in healthy controls. Direct glycerol treatment of testes has been found to cause significant long-term reduction in sperm count. Further testing on this subject was abandoned due to the unexpected results, as this was not the goal of the experiment.
Circulating glycerol does not glycate proteins as do glucose or fructose, and does not lead to the formation of advanced glycation endproducts (AGEs). In some organisms, the glycerol component can enter the glycolysis pathway directly and, thus, provide energy for cellular metabolism (or, potentially, be converted to glucose through gluconeogenesis).
Before glycerol can enter the pathway of glycolysis or gluconeogenesis (depending on physiological conditions), it must be converted to the intermediate glyceraldehyde 3-phosphate in the following steps:
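The steps below are a standard textbook reconstruction of this conversion, supplied here for clarity; they restate the enzymes named in the next paragraph:

glycerol + ATP → glycerol 3-phosphate + ADP (glycerol kinase)
glycerol 3-phosphate + NAD+ → dihydroxyacetone phosphate + NADH + H+ (glycerol-3-phosphate dehydrogenase)
dihydroxyacetone phosphate ⇌ glyceraldehyde 3-phosphate (triose-phosphate isomerase)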
The enzyme glycerol kinase is present mainly in the liver and kidneys, but also in other body tissues, including muscle and brain. In adipose tissue, glycerol 3-phosphate is obtained from dihydroxyacetone phosphate with the enzyme glycerol-3-phosphate dehydrogenase.
Toxicity and safety
Glycerol has very low toxicity when ingested; its oral LD50 is 12,600 mg/kg for rats and 8,700 mg/kg for mice. It does not appear to cause toxicity when inhaled, although changes in cell maturity occurred in small sections of lung in animals at the highest dose measured. A sub-chronic 90-day nose-only inhalation study in Sprague–Dawley (SD) rats exposed to 0.03, 0.16 and 0.66 mg/L glycerin (per liter of air) for 6-hour continuous sessions revealed no treatment-related toxicity other than minimal metaplasia of the epithelium lining at the base of the epiglottis in rats exposed to 0.66 mg/L glycerin.
Glycerol intoxication
Excessive consumption by children can lead to glycerol intoxication. Symptoms of intoxication include hypoglycemia, nausea and a loss of consciousness. While intoxication as a result of excessive glycerol consumption is rare and its symptoms generally mild, occasional reports of hospitalization have occurred. In the United Kingdom in August 2023, manufacturers of syrup used in slush ice drinks were advised by the Food Standards Agency to reduce the amount of glycerol in their formulations, to reduce the risk of intoxication.
Food Standards Scotland advises that slush ice drinks containing glycerol should not be given to children under the age of 4, owing to the risk of intoxication. It also recommends that businesses do not use free refill offers for the drinks in venues where children under the age of 10 are likely to consume them, and that products should be appropriately labelled to inform consumers of the presence of glycerol.
Historical cases of contamination with diethylene glycol
On 4 May 2007, the FDA advised all U.S. makers of medicines to test all batches of glycerol for diethylene glycol contamination. This followed an occurrence of hundreds of fatal poisonings in Panama resulting from a falsified import customs declaration by Panamanian import/export firm Aduanas Javier de Gracia Express, S. A. The cheaper diethylene glycol was relabeled as the more expensive glycerol. Between 1990 and 1998, incidents of DEG poisoning reportedly occurred in Argentina, Bangladesh, India, and Nigeria, and resulted in hundreds of deaths. In 1937, more than one hundred people died in the United States after ingesting DEG-contaminated elixir sulfanilamide, a drug used to treat infections.
Etymology
The gly- and glu- prefixes for glycols and sugars come from Ancient Greek glukus, meaning "sweet". The name glycérine was coined c. 1811 by Michel Eugène Chevreul to denote what its discoverer, Carl Wilhelm Scheele, had previously called the "sweet principle of fat". It was borrowed into English c. 1838 and was displaced in the 20th century by the 1872 term glycerol, which features the alcohol suffix -ol.
Properties
Table of thermal and physical properties of saturated liquid glycerin:
{|class="wikitable mw-collapsible mw-collapsed"
!Temperature (°C)
!Density (kg/m3)
!Specific heat (kJ/kg·K)
!Kinematic viscosity (m2/s)
!Conductivity (W/m·K)
!Thermal diffusivity (m2/s)
!Prandtl number
!Bulk modulus (K−1)
|-
|0
|1276.03
|2.261
|
|0.282
|
|84700
|
|-
|10
|1270.11
|2.319
|
|0.284
|
|31000
|
|-
|20
|1264.02
|2.386
|
|0.286
|
|12500
|
|-
|30
|1258.09
|2.445
|
|0.286
|
|5380
|
|-
|40
|1252.01
|2.512
|
|0.286
|
|2450
|
|-
|50
|1244.96
|2.583
|
|0.287
|
|1630
|
|}
See also
Dioxalin
Epichlorohydrin
Nitroglycerin
Oleochemicals
Saponification/Soapmaking
Solketal
Transesterification
References
External links
Mass spectrum of glycerol
CDC – NIOSH Pocket Guide to Chemical Hazards – Glycerin (mist)
Alcohol solvents
Biofuels
Commodity chemicals
Cosmetics chemicals
Demulcents
E-number additives
Food additives
Glassforming liquids and melts
Household chemicals
Laxatives
Sugar alcohols
Triols
By-products | Glycerol | Chemistry | 4,255 |
415,167 | https://en.wikipedia.org/wiki/Semitone | A semitone, also called a minor second, half step, or a half tone, is the smallest musical interval commonly used in Western tonal music, and it is considered the most dissonant when sounded harmonically.
It is defined as the interval between two adjacent notes in a 12-tone scale (or half of a whole step), visually seen on a keyboard as the distance between two keys that are adjacent to each other. For example, C is adjacent to C♯; the interval between them is a semitone.
In a 12-note approximately equally divided scale, any interval can be defined in terms of an appropriate number of semitones (e.g. a whole tone or major second is 2 semitones wide, a major third 4 semitones, and a perfect fifth 7 semitones).
In music theory, a distinction is made between a diatonic semitone, or minor second (an interval encompassing two different staff positions, e.g. from C to D♭) and a chromatic semitone or augmented unison (an interval between two notes at the same staff position, e.g. from C to C♯). These are enharmonically equivalent if and only if twelve-tone equal temperament is used; for example, they are not the same thing in meantone temperament, where the diatonic semitone is distinguished from and larger than the chromatic semitone (augmented unison), or in Pythagorean tuning, where the diatonic semitone is smaller instead. See below for more details about this terminology.
In twelve-tone equal temperament all semitones are equal in size (100 cents). In other tuning systems, "semitone" refers to a family of intervals that may vary both in size and name. In Pythagorean tuning, seven semitones out of twelve are diatonic, with ratio 256:243 or 90.2 cents (Pythagorean limma), and the other five are chromatic, with ratio 2187:2048 or 113.7 cents (Pythagorean apotome); they differ by the Pythagorean comma of ratio 531441:524288 or 23.5 cents. In quarter-comma meantone, seven of them are diatonic, and 117.1 cents wide, while the other five are chromatic, and 76.0 cents wide; they differ by the lesser diesis of ratio 128:125 or 41.1 cents. 12-tone scales tuned in just intonation typically define three or four kinds of semitones. For instance, Asymmetric five-limit tuning yields chromatic semitones with ratios 25:24 (70.7 cents) and 135:128 (92.2 cents), and diatonic semitones with ratios 16:15 (111.7 cents) and 27:25 (133.2 cents). For further details, see below.
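All of the cent values quoted in this section follow from the standard conversion cents = 1200 × log2(ratio). The following minimal Python sketch (an illustration added here, not part of the original article) reproduces the figures given above:

from math import log2

def cents(ratio):
    # 1200 cents per octave, so a frequency ratio r spans 1200*log2(r) cents
    return 1200 * log2(ratio)

print(round(cents(256/243), 1))        # 90.2  (Pythagorean limma)
print(round(cents(2187/2048), 1))      # 113.7 (Pythagorean apotome)
print(round(cents(531441/524288), 1))  # 23.5  (Pythagorean comma)
print(round(cents(25/24), 1))          # 70.7  (just chromatic semitone)
print(round(cents(135/128), 1))        # 92.2  (larger chromatic semitone)
print(round(cents(16/15), 1))          # 111.7 (just diatonic semitone)
print(round(cents(27/25), 1))          # 133.2 (larger diatonic semitone)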
The condition of having semitones is called hemitonia; that of having no semitones is anhemitonia. A musical scale or chord containing semitones is called hemitonic; one without semitones is anhemitonic.
Minor second
The minor second occurs in the major scale, between the third and fourth degree, (mi (E) and fa (F) in C major), and between the seventh and eighth degree (ti (B) and do (C) in C major). It is also called the diatonic semitone because it occurs between steps in the diatonic scale. The minor second is abbreviated m2 (or −2). Its inversion is the major seventh (M7 or Ma7).
Here, middle C is followed by D♭, which is a tone 100 cents sharper than C, and then by both tones together.
Melodically, this interval is very frequently used, and is of particular importance in cadences. In the perfect and deceptive cadences it appears as a resolution of the leading-tone to the tonic. In the plagal cadence, it appears as the falling of the subdominant to the mediant. It also occurs in many forms of the imperfect cadence, wherever the tonic falls to the leading-tone.
Harmonically, the interval usually occurs as some form of dissonance or a nonchord tone that is not part of the functional harmony. It may also appear in inversions of a major seventh chord, and in many added tone chords.
In unusual situations, the minor second can add a great deal of character to the music. For instance, Frédéric Chopin's Étude Op. 25, No. 5 opens with a melody accompanied by a line that plays fleeting minor seconds. These are used to humorous and whimsical effect, which contrasts with its more lyrical middle section. This eccentric dissonance has earned the piece its nickname: the "wrong note" étude. This kind of usage of the minor second appears in many other works of the Romantic period, such as Modest Mussorgsky's Ballet of the Unhatched Chicks. More recently, the music to the movie Jaws exemplifies the minor second.
In other temperaments
In just intonation a 16:15 minor second arises in the C major scale between B & C and E & F, and is "the sharpest dissonance found in the [major] scale."
Augmented unison
The augmented unison, the interval produced by the augmentation, or widening by one half step, of the perfect unison, does not occur between diatonic scale steps, but instead between a scale step and a chromatic alteration of the same step. It is also called a chromatic semitone. The augmented unison is abbreviated A1, or aug 1. Its inversion is the diminished octave (d8, or dim 8). The augmented unison is also the inversion of the augmented octave, because the interval of the diminished unison does not exist. This is because a unison is always made larger when one note of the interval is changed with an accidental.
Melodically, an augmented unison very frequently occurs when proceeding to a chromatic chord, such as a secondary dominant, a diminished seventh chord, or an augmented sixth chord. Its use is also often the consequence of a melody proceeding in semitones, regardless of harmonic underpinning, e.g. D, D♯, E, F, F♯. (Restricting the notation to only minor seconds is impractical, as the same example would have a rapidly increasing number of accidentals, written enharmonically as D, E♭, F♭, G♭♭, A♭♭♭).
Harmonically, augmented unisons are quite rare in tonal repertoire. In the example to the right, Liszt had written an E♭ against an E in the bass. Here E♭ was preferred to a D♯ to make the tone's function clear as part of an F dominant seventh chord, and the augmented unison is the result of superimposing this harmony upon an E pedal point.
In addition to this kind of usage, harmonic augmented unisons are frequently written in modern works involving tone clusters, such as Iannis Xenakis' Evryali for piano solo.
History
The semitone appeared in the music theory of Greek antiquity as part of a diatonic or chromatic tetrachord, and it has always had a place in the diatonic scales of Western music since. The various modal scales of medieval music theory were all based upon this diatonic pattern of tones and semitones.
Though it would later become an integral part of the musical cadence, in the early polyphony of the 11th century this was not the case. Guido of Arezzo suggested instead in his Micrologus other alternatives: either proceeding by whole tone from a major second to a unison, or an occursus having two notes at a major third move by contrary motion toward a unison, each having moved a whole tone.
"As late as the 13th century the half step was experienced as a problematic interval not easily understood, as the irrational remainder between the perfect fourth and the ditone ." In a melodic half step, no "tendency was perceived of the lower tone toward the upper, or of the upper toward the lower. The second tone was not taken to be the 'goal' of the first. Instead, the half step was avoided in clausulae because it lacked clarity as an interval."
However, beginning in the 13th century cadences begin to require motion in one voice by half step and the other a whole step in contrary motion. These cadences would become a fundamental part of the musical language, even to the point where the usual accidental accompanying the minor second in a cadence was often omitted from the written score (a practice known as musica ficta). By the 16th century, the semitone had become a more versatile interval, sometimes even appearing as an augmented unison in very chromatic passages. Semantically, in the 16th century the repeated melodic semitone became associated with weeping, see: passus duriusculus, lament bass, and pianto.
By the Baroque era (1600 to 1750), the tonal harmonic framework was fully formed, and the various musical functions of the semitone were rigorously understood. Later in this period the adoption of well temperaments for instrumental tuning and the more frequent use of enharmonic equivalences increased the ease with which a semitone could be applied. Its function remained similar through the Classical period, and though it was used more frequently as the language of tonality became more chromatic in the Romantic period, the musical function of the semitone did not change.
In the 20th century, however, composers such as Arnold Schoenberg, Béla Bartók, and Igor Stravinsky sought alternatives or extensions of tonal harmony, and found other uses for the semitone. Often the semitone was exploited harmonically as a caustic dissonance, having no resolution. Some composers would even use large collections of harmonic semitones (tone clusters) as a source of cacophony in their music (e.g. the early piano works of Henry Cowell). By now, enharmonic equivalence was a commonplace property of equal temperament, and instrumental use of the semitone was not at all problematic for the performer. The composer was free to write semitones wherever he wished.
Semitones in different tunings
The exact size of a semitone depends on the tuning system used. Meantone temperaments have two distinct types of semitones, but in the exceptional case of equal temperament, there is only one. The unevenly distributed well temperaments contain many different semitones. Pythagorean tuning, similar to meantone tuning, has two, but in other systems of just intonation there are many more possibilities.
Meantone temperament
In meantone systems, there are two different semitones. This results because of the break in the circle of fifths that occurs in the tuning system: diatonic semitones derive from a chain of five fifths that does not cross the break, and chromatic semitones come from one that does.
The chromatic semitone is usually smaller than the diatonic. In the common quarter-comma meantone, tuned as a cycle of tempered fifths from E to G, the chromatic and diatonic semitones are 76.0 and 117.1 cents wide respectively.
Extended meantone temperaments with more than 12 notes still retain the same two semitone sizes, but there is more flexibility for the musician about whether to use an augmented unison or minor second. 31-tone equal temperament is the most flexible of these, which makes an unbroken circle of 31 fifths, allowing the choice of semitone to be made for any pitch.
Equal temperament
12-tone equal temperament is a form of meantone tuning in which the diatonic and chromatic semitones are exactly the same, because its circle of fifths has no break. Each semitone is equal to one twelfth of an octave. This is a ratio of 2^(1/12) (approximately 1.05946), or 100 cents, and is 11.7 cents narrower than the 16:15 ratio (its most common form in just intonation, discussed below).
All diatonic intervals can be expressed as an equivalent number of semitones. For instance a major sixth equals nine semitones.
There are many approximations, rational or otherwise, to the equal-tempered semitone. To cite a few:
suggested by Vincenzo Galilei and used by luthiers of the Renaissance,
suggested by Marin Mersenne as a constructible and more accurate alternative,
used by Julián Carrillo as part of a sixteenth-tone system.
For more examples, see Pythagorean and Just systems of tuning below.
Well temperament
There are many forms of well temperament, but the characteristic they all share is that their semitones are of an uneven size. Every semitone in a well temperament has its own interval (usually close to the equal-tempered version of 100 cents), and there is no clear distinction between a diatonic and chromatic semitone in the tuning. Well temperament was constructed so that enharmonic equivalence could be assumed between all of these semitones, and whether they were written as a minor second or augmented unison did not effect a different sound. Instead, in these systems, each key had a slightly different sonic color or character, beyond the limitations of conventional notation.
Pythagorean tuning
Like meantone temperament, Pythagorean tuning is a broken circle of fifths. This creates two distinct semitones, but because Pythagorean tuning is also a form of 3-limit just intonation, these semitones are rational. Also, unlike most meantone temperaments, the chromatic semitone is larger than the diatonic.
The Pythagorean diatonic semitone has a ratio of 256/243, and is often called the Pythagorean limma. It is also sometimes called the Pythagorean minor semitone. It is about 90.2 cents.
It can be thought of as the difference between three octaves and five just fifths, and functions as a diatonic semitone in a Pythagorean tuning.
The Pythagorean chromatic semitone has a ratio of 2187/2048. It is about 113.7 cents. It may also be called the Pythagorean apotome or the Pythagorean major semitone. (See Pythagorean interval.)
It can be thought of as the difference between four perfect octaves and seven just fifths, and functions as a chromatic semitone in a Pythagorean tuning.
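Stated as worked equations (a restatement of the two derivations above, using the octave 2:1 and the just fifth 3:2):

\[
\text{limma} = \frac{2^3}{(3/2)^5} = \frac{256}{243}, \qquad \text{apotome} = \frac{(3/2)^7}{2^4} = \frac{2187}{2048},
\]

and dividing the apotome by the limma gives the Pythagorean comma, 531441:524288.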
The Pythagorean limma and Pythagorean apotome are enharmonic equivalents (chromatic semitones) and only a Pythagorean comma apart, in contrast to diatonic and chromatic semitones in meantone temperament and 5-limit just intonation.
Just 5-limit intonation
A minor second in just intonation typically corresponds to a pitch ratio of 16:15 or 1.0666... (approximately 111.7 cents), called the just diatonic semitone. This is a practical just semitone, since it is the interval that occurs twice within the diatonic scale between a:
major third (5:4) and perfect fourth (4:3) and a
major seventh (15:8) and the perfect octave (2:1)
The 16:15 just minor second arises in the C major scale between B & C and E & F, and is, "the sharpest dissonance found in the scale".
An "augmented unison" (sharp) in just intonation is a different, smaller semitone, with frequency ratio 25:24 () or 1.0416... (approximately 70.7 cents). It is the interval between a major third (5:4) and a minor third (6:5). In fact, it is the spacing between the minor and major thirds, sixths, and sevenths (but not necessarily the major and minor second). Composer Ben Johnston used a sharp () to indicate a note is raised 70.7 cents, or a flat () to indicate a note is lowered 70.7 cents. (This is the standard practice for just intonation, but not for all other microtunings.)
Two other kinds of semitones are produced by 5 limit tuning. A chromatic scale defines 12 semitones as the 12 intervals between the 13 adjacent notes, spanning a full octave (e.g. from one C to the C an octave above). The 12 semitones produced by a commonly used version of 5 limit tuning have four different sizes, and can be classified as follows:
Just chromatic semitone, or smaller or minor chromatic semitone (25:24, about 70.7 cents), between harmonically related flats and sharps, e.g. between E♭ and E (6:5 and 5:4).
Larger chromatic semitone, or major chromatic semitone, larger limma, or major chroma (135:128, about 92.2 cents), e.g. between C and an acute C♯ (a C♯ raised by a syntonic comma) (1:1 and 135:128).
Just diatonic semitone, or smaller or minor diatonic semitone (16:15, about 111.7 cents), e.g. between E and F (5:4 and 4:3).
Larger diatonic semitone, or greater or major diatonic semitone (27:25, about 133.2 cents), e.g. between A and B♭ (5:3 and 9:5), or C and chromatic D♭ (27:25), or F♯ and G (25:18 and 3:2).
The most frequently occurring semitones are the just ones (16:15 and 25:24): the 16:15 diatonic semitone occurs at six of the twelve short intervals, the 25:24 chromatic semitone three times, the 135:128 larger chromatic semitone twice, and the 27:25 larger diatonic semitone at only one interval (if diatonic D♭ replaces chromatic D♭ and sharp notes are not used).
The smaller chromatic and diatonic semitones differ from the larger by the syntonic comma (81:80 or 21.5 cents). The smaller and larger chromatic semitones differ from the respective diatonic semitones by the same 128:125 diesis as the above meantone semitones. Finally, while the inner semitones differ by the diaschisma (2048:2025 or 19.6 cents), the outer differ by the greater diesis (648:625 or 62.6 cents).
Extended just intonations
In 7 limit tuning there is the septimal diatonic semitone of 15:14, available in between the 5 limit major seventh (15:8) and the 7 limit minor seventh / harmonic seventh (7:4). There is also a smaller septimal chromatic semitone of 21:20 between a septimal minor seventh and a fifth (21:8) and an octave and a major third (5:2). Both are more rarely used than their 5 limit neighbours, although the former was often implemented by theorist Henry Cowell, while Harry Partch used the latter as part of his 43 tone scale.
Under 11 limit tuning, there is a fairly common undecimal neutral second (12:11), but it lies on the boundary between the minor and major second (150.6 cents). In just intonation there are infinitely many possibilities for intervals that fall within the range of the semitone (e.g. the Pythagorean semitones mentioned above), but most of them are impractical.
In 13 limit tuning, there is a tridecimal 2/3-tone (13:12 or 138.57 cents) and a tridecimal 1/3-tone (27:26 or 65.34 cents).
In 17 limit just intonation, the major diatonic semitone is 15:14 or 119.4 cents, the minor diatonic semitone is 17:16 or 105.0 cents, and the septendecimal limma is 18:17 or 98.95 cents.
Though the names diatonic and chromatic are often used for these intervals, their musical function is not the same as the meantone semitones. For instance, 15:14 would usually be written as an augmented unison, functioning as the chromatic counterpart to a diatonic 16:15. These distinctions are highly dependent on the musical context, and just intonation is not particularly well suited to chromatic use (diatonic semitone function is more prevalent).
Other equal temperaments
19-tone equal temperament distinguishes between the chromatic and diatonic semitones; in this tuning, the chromatic semitone is one step of the scale (about 63.2 cents), and the diatonic semitone is two (about 126.3 cents). 31-tone equal temperament also distinguishes between these two intervals, which become 2 and 3 steps of the scale, respectively. 53-ET has an even closer match to the two semitones with 3 and 5 steps of its scale, while 72-ET uses 4 (about 66.7 cents) and 7 (about 116.7 cents) steps of its scale.
In general, because the smaller semitone can be viewed as the difference between a minor third and a major third, and the larger as the difference between a major third and a perfect fourth, tuning systems that closely match those just intervals (6/5, 5/4, and 4/3) will also distinguish between the two types of semitones and closely match their just intervals (25/24 and 16/15).
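As a quick check of that statement (a worked example, not from the original article), dividing the just major third by the just minor third, and the just perfect fourth by the just major third, yields exactly the two semitone sizes named:

\[
\frac{5/4}{6/5} = \frac{25}{24}, \qquad \frac{4/3}{5/4} = \frac{16}{15}.
\]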
See also
12-tone equal temperament
List of meantone intervals
List of musical intervals
List of pitch intervals
Approach chord
Major second
Neutral second
Pythagorean interval
Regular temperament
References
Further reading
Grout, Donald Jay, and Claude V. Palisca. A History of Western Music, 6th ed. New York: Norton, 2001. .
Hoppin, Richard H. Medieval Music. New York: W. W. Norton, 1978. .
Minor intervals
Seconds (music)
Units of level | Semitone | Physics,Mathematics | 4,498 |
28,279,387 | https://en.wikipedia.org/wiki/Google%20Nexus | Google Nexus is a discontinued line of consumer electronic mobile devices that ran a stock version of the Android operating system. Google managed the design, development, marketing, and support of these devices, but some development and all manufacturing were carried out by partnering with original equipment manufacturers (OEMs). Alongside the main smartphone products, the line also included tablet computers and streaming media players; the Nexus line started in January 2010 and ended in October 2016, when it was replaced by the Google Pixel family.
Devices in the Nexus line were considered Google's core Android products. They contained little to no manufacturer or wireless carrier modifications to Android (such as custom user interfaces), although devices sold through carriers could be SIM locked, carried some extra branding, and might receive software updates at a slower pace than the unlocked variants. Save for some carrier-specific variants, Nexus devices were often among the first Android devices to receive updates to the operating system. All Nexus devices featured an unlockable bootloader to allow further development and end-user modification. Although Nexus devices were originally produced in small quantities as developer phones, the lack of bloatware and modifications to Android, combined with performance similar to that of more expensive flagship smartphones from OEMs, gained Nexus devices a considerable following. In addition to the Nexus program, Google also sold Google Play editions of OEM devices, which ran the "stock" version of Android without OEM or carrier modifications.
The OEMs that took part in the Nexus program were HTC, Samsung, LG, Motorola, Huawei and Asus. In late 2016, the Nexus lineup was replaced by the Google Pixel, which provides a similar stock Android experience but sold for considerably higher prices, directly competing with flagship smartphones from OEMs. Google stated that they "don't want to close a door completely, but there is no plan right now to do more Nexus devices." In 2017, Google partnered with HMD Global in making new Nokia phones, as part of the Android One program, which some have considered a spiritual successor to the Nexus.
Devices
Phones
Nexus One
The Nexus One was manufactured by HTC and released in January 2010 as the first Nexus phone. It was released with Android 2.1 Eclair, and was updated in May 2010 to be the first phone with Android 2.2 Froyo. It was further updated to Android 2.3 Gingerbread. Google announced that it would cease support for the Nexus One, whose graphics processing unit (GPU) is poor at rendering the new 2D acceleration engine of the UI in Android 4.0 Ice Cream Sandwich. The Nexus S and newer models have hardware designed to handle the new rendering. It was the only Nexus device to have expandable card storage (SD).
Display: 3.7" display with 800×480 pixel resolution
CPU: 1 GHz Qualcomm Scorpion
Storage: 512 MB (expandable)
RAM: 512 MB
GPU: Adreno 200
Camera: 5 MP rear camera
Nexus S
The Nexus S, manufactured by Samsung, was released in December 2010 to coincide with the release of Android 2.3 Gingerbread. In December 2011 it was updated to Android 4.0 Ice Cream Sandwich, with most variants later updatable to Android 4.1 Jelly Bean in July 2012. Support for the device ended after 4.1 Jelly Bean, and it no longer receives updates from Google.
Display: 4.0" display with 800×480 pixel resolution
Chipset: Hummingbird
CPU: 1 GHz single-core ARM Cortex-A8
Storage: 16 GB (Partitioned: 1 GB internal storage and 15 GB USB storage)
RAM: 512 MB
GPU: PowerVR SGX540
Battery: 1500 mAh (replaceable)
Galaxy Nexus
The Galaxy Nexus, again manufactured by Samsung, was released in November 2011 (GSM version; the US version was released on December 15, 2011) to coincide with the release of Android 4.0 Ice Cream Sandwich. Support for the device ended after 4.3 Jelly Bean, and it no longer receives updates from Google. This device is known in Brazil as Galaxy X due to a trademark on the "Nexus" brand. It is also the last Nexus device to have a removable battery.
Display: 4.65" HD Super AMOLED display with 1280×720 pixel resolution
CPU: 1.2 GHz dual-core ARM Cortex A9
Storage: 16 or 32 GB
RAM: 1 GB
Nexus 4
The Nexus 4 smartphone, also known as the LG Nexus 4 or LG Mako, was released in November 2012 and manufactured by LG. It was the first device to run Android 4.2, an updated version of Jelly Bean. The Nexus 4 was also the first Nexus device to have wireless charging capabilities. It was updated to Android 4.3 in June 2013 and to Android 4.4 in November 2013. It can run Android 5.1 as of April 2015.
The Nexus 4 has the following characteristics:
Display: 4.7" Corning Gorilla Glass 2, True HD IPS Plus capacitive touchscreen, 768×1280 pixel resolution, 16M colors
CPU: Quad-core 1.5 GHz Krait
Chipset: Qualcomm Snapdragon APQ8064
Storage: 8 or 16 GB
RAM: 2 GB
GPU: Adreno 320
Battery: Non-removable Li-Po 2100 mAh battery, wireless charging
Camera: 8 MP rear camera with 3264×2448 pixels, autofocus, and LED flash; 1.3 MP front camera
Nexus 5
The Nexus 5 smartphone, again manufactured by LG, went on sale on October 31, 2013, for US$349 at the Google Play store. It was the first device to run Android 4.4 KitKat. The Nexus 5 did not receive an official Android 7.0 Nougat update, meaning that Android 6.0.1 Marshmallow is the last officially supported Android version for the device. The Nexus 5 has the following characteristics:
Display: 4.95" Corning Gorilla Glass 3, IPS LCD touchscreen, 1080×1920 pixel resolution (1080p)
Processor: 2.26 GHz Krait 400 quad-core processor on a Qualcomm Snapdragon 800 SoC
Storage: 16 or 32 GB
RAM: 2 GB
GPU: Adreno 330
Battery: 2,300 mAh lithium polymer, wireless charging
Cameras: 8 MP rear camera with optical image stabilization (OIS); 1.3 MP front camera
Connectivity: 4G LTE, 802.11 a/b/g/n/ac Wi-Fi, Bluetooth 4.0
Colors: Black, White, or Red
Nexus 6
The Nexus 6 is a smartphone developed by Motorola, originally running Android 5.0 Lollipop (upgradeable to Android 7.1.1 Nougat). It was first announced on October 15, 2014 along with the Nexus 9 and the Nexus Player.
Display: 5.96" Quad HD AMOLED PenTile (RGBG) display with 1440×2560 pixel resolution (493 ppi)
Processor: Qualcomm Snapdragon 805 - Quad-core 2.7 GHz
Modem: Qualcomm MDM9625M
Storage: 32 or 64 GB
RAM: 3 GB
GPU: Adreno 420
Battery: 3220 mAh with Turbo Charging technology, non-removable, wired charging
Cameras: 13 MP rear camera with f/2.0 lens featuring OIS; 2 MP front camera
Speakers: Dual front facing stereo
Colors: Midnight Blue and Cloud White
Nexus 5X
The Nexus 5X is a smartphone developed by LG originally running Android 6.0 Marshmallow (upgradeable to Android 8.1.0 Oreo). It was first announced on September 29, 2015, along with the Nexus 6P and several other Google devices (such as the Pixel C tablet).
Display: 5.2" FHD LCD display with 1080×1920 pixel resolution (423ppi)
Processor: Qualcomm Snapdragon 808 - Hexa-core 1.8 GHz
Storage: 16 or 32 GB
RAM: 2 GB LPDDR3
GPU: Adreno 418
Battery: 2700 mAh with rapid charging, non-removable
Cameras: 12.3 MP rear camera with f/2.0 lens and IR laser-assisted autofocus; 5 MP front camera with f/2.0 lens
Speakers: Single front-facing speaker
Colors: Carbon (black), Quartz (white), and Ice (mint)
Nexus 6P
The Nexus 6P is a smartphone developed by Huawei originally running Android 6.0 Marshmallow. It was first announced on September 29, 2015 along with the Nexus 5X and several other Google devices (such as the Pixel C tablet).
Display: 5.7" WQHD AMOLED display with 1440×2560 pixel resolution (518ppi)
Processor: Qualcomm Snapdragon 810 - Octa-core 4 × 1.95 GHz, 4 × 1.55 GHz
Storage: 32, 64, or 128 GB
RAM: 3 GB LPDDR4
GPU: Adreno 430
Battery: 3450 mAh with rapid charging, non-removable
Cameras: 12.3 MP rear camera with f/2.0 lens and IR laser-assisted autofocus; 8 MP front camera with f/2.0 lens
Speakers: Dual front-facing stereo
Colors: Aluminum, Graphite, Frost, or Gold
Tablets
Nexus 7
First generation
On June 27, 2012, at its I/O 2012 keynote presentation, Google introduced the Nexus 7, a 7-inch tablet computer developed with and manufactured by Asus. Released in July 2012, it was the first device to run Android 4.1 Jelly Bean. The latest Android version supported by Google for the device is Android 5.1.1 Lollipop.
Display: 7" display with 1280×800 pixel resolution
SoC: Nvidia Tegra 3
CPU: 1.2 GHz quad-core Cortex-A9
Storage: 8, 16, or 32 GB
RAM: 1 GB
GPU: ULP GeForce
Battery: 4325 mAh (non-removable)
Second generation
On July 24, 2013, at Google's "Breakfast with Sundar Pichai" press conference, Pichai introduced the second generation Nexus 7, again co-developed with Asus. Keeping with Google Nexus tradition, it was simultaneously released with the latest version, Android 4.3 Jelly Bean. It was made available on July 26, 2013 at select retailers and on the Google Play store in the United States. On November 20, 2013, it was available from the Google Play stores in Hong Kong and India. On the same day, the Nexus Wireless Charger was made available in the United States and Canada. In December 2015, Google released Android 6.0.1 Marshmallow for the device. The Nexus 7 (2013) will not receive an official Android 7.0 Nougat update, meaning that Android 6.0.1 Marshmallow is the last officially supported Android version for the tablet.
Display: 7.02" display with 1920×1200 pixel resolution
Chipset: Qualcomm Snapdragon S4Pro
CPU: 1.51 GHz quad-core Krait 300
Storage: 16 or 32 GB
RAM: 2 GB
GPU: 400 MHz quad-core Adreno 320
Battery: 3950 mAh (non-removable)
Nexus 10
The Nexus 10, a 10.1-inch tablet manufactured by Samsung, was revealed in late October 2012 by the Exif data of photos taken by Google executive, Vic Gundotra, along with the leaks of its manual and a comprehensive series of photos. The leaked photos revealed a design similar to the Samsung Galaxy Note 10.1, with a 10.1-inch 2560×1600 display, 16 or 32 GB of storage, Android 4.2, and a dual-core 1.7 GHz Exynos 5 processor. The Nexus 10 was expected to be unveiled officially during a Google press event on October 29, 2012, but the event was postponed due to Hurricane Sandy. The Nexus 10 would not receive any official updates beyond Android 5.1.1.
Display: 10.1" Corning Gorilla Glass 2 with 2560×1600 pixel resolution
CPU: 1.7 GHz dual-core Cortex-A15
Chipset: Samsung Exynos 5250
Storage: 16 or 32 GB
RAM: 2 GB
GPU: Mali-T604 MP4
Nexus 9
The Nexus 9 is an 8.9-inch tablet running Android 5.0 Lollipop, developed in collaboration between Google and HTC. It was first announced on October 15, 2014 along with the Nexus 6 and the Nexus Player.
Display: 8.9" Corning Gorilla Glass 3 with 2048×1536 pixel resolution
CPU: 2.3 GHz dual-core 64-bit Nvidia Tegra K1 "Denver"
Chipset: Nvidia Tegra K1
Storage: 16 or 32 GB
RAM: 2 GB
Dual front-facing speakers featuring HTC BoomSound
Digital media players
Nexus Q
The Nexus Q is a discontinued digital media player that ran Android and integrated with Google Play; it was sold at US$299 in the United States.
After complaints about a lack of features for the price, the Nexus Q was shelved indefinitely; Google said it needed time to make the product "even better". The Nexus Q was unofficially replaced by the Chromecast, and further by the Nexus Player.
Storage: 16 GB
RAM: 1 GB
Nexus Player
The Nexus Player is a streaming media player created in collaboration between Google and Asus. It is the first device running Android TV. It was first announced on October 15, 2014 along with the Nexus 6 and the Nexus 9. On May 24, 2016, Google discontinued sales of the Nexus Player. In March 2018, Google confirmed that the Nexus Player would not receive the upcoming version of Android, Android Pie, and that security updates had also ended for the device.
1.8 GHz quad-core Intel Atom processor
802.11ac 2x2 (MIMO)
HDMI out
Remote control (with 2 AAA batteries)
Gamepad (Purchased separately)
Philip K. Dick estate claim
Upon the announcement of the first Nexus device, the Nexus One, the estate of science fiction author Philip K. Dick claimed that the Nexus One name capitalized on intellectual property from Dick's 1968 novel Do Androids Dream of Electric Sheep? and that the choice of name was a direct reference to the Nexus-6 series of androids in the novel.
See also
Android Dev Phone
Android One
Google Play Edition
Chromebook
List of Google products
References
Android (operating system)
Google hardware
Line of flagship smartphones
Tablet computers | Google Nexus | Technology | 3,045 |
1,687,359 | https://en.wikipedia.org/wiki/Primary%20succession | Primary succession is the beginning step of ecological succession where species known as pioneer species colonize an uninhabited site, which usually occurs in an environment devoid of vegetation and other organisms.
In contrast, secondary succession occurs on substrates that previously supported vegetation before an ecological disturbance. This occurs when smaller disturbances like floods, hurricanes, tornadoes, and fires destroy only the local plant life and leave soil nutrients for immediate establishment by intermediate community species.
Occurrence
In primary succession, pioneer species like lichens, algae and fungi, as well as abiotic factors like wind and water, start to "normalise" the habitat; in other words, they begin to develop soil and the other important mechanisms that allow greater diversity to flourish. Primary succession begins on rock formations, such as volcanoes or mountains, or in a place with no organisms or soil. Primary succession leads to conditions nearer optimum for vascular plant growth; pedogenesis, or the formation of soil, and the increased amount of shade are the most important processes.
These pioneer lichens, algae, and fungi are then dominated and often replaced by plants that are better adapted to less harsh conditions; these plants include vascular plants like grasses and some shrubs that are able to live in thin soils that are often mineral-based. Water and nutrient levels increase with the amount of succession exhibited.
The early stages of primary succession are dominated by species with small propagules (seeds and spores) which can be dispersed long distances. The early colonizers (often algae, fungi, and lichens) stabilize the substrate. Nitrogen supplies are limited in new soils, and nitrogen-fixing species tend to play an important role early in primary succession. Unlike in primary succession, the species that dominate secondary succession are usually present from the start of the process, often in the soil seed bank. In some systems the successional pathways are fairly consistent, and thus, are easy to predict. In others, there are many possible pathways. For example, nitrogen-fixing legumes alter successional trajectories.
Spores of lichen or fungus, being the pioneer species, are spread onto a land of rocks. Then, the rocks are broken down into smaller particles.
Organic matter gradually accumulates, favoring the growth of herbaceous plants like grass, ferns and herbs. These plants further improve the habitat by creating more organic matter when they die, and providing habitats for insects and other small animals.
This leads to the occurrence of larger vascular plants like shrubs, or trees. More animals are then attracted to the area and a climax community is reached.
Species diversity is also a large influence on the stages of succession, and as succession progresses further, species diversity changes with it. For example, there is far less richness and evenness of microorganisms in the very early stages of succession, but late successional stage bacteria are far more even and rich. This again supports the hypothesis that as more resources are present in later stages of succession, there is enough to support a more diverse ecosystem with many different reproductive strategies. A 2000 case study suggests that plant species composition is more important to later-successional species than simply having high plant diversity early on.
Examples
Volcanism
One example of primary succession takes place after a volcano has erupted. The lava flows into the ocean and hardens into new land. The resulting barren land is first colonized by pioneer organisms, like algae, which pave the way for later, less hardy plants, such as hardwood trees, by facilitating pedogenesis, especially through the biotic acceleration of weathering and the addition of organic debris to the surface regolith. An example of this is Surtsey, an island off the south coast of Iceland that formed in 1963 after a volcanic eruption from beneath the sea and is being monitored to observe primary succession in progress. About thirty species of plant had become established by 2008, and more species continue to arrive, at a typical rate of roughly 2–5 new species per year.
A volcanic eruption occurred on Mount St. Helens as well, with primary succession beginning after the destruction of the region's ecosystem. In Mount St. Helens' primary succession, the region was heavily isolated. This type of incident causes the rate of primary succession to be rather low, as many species that excel in establishment lack the ability to effectively disperse into the new frontier. The opposite is true as well, as species that were not very good at establishing could not survive, even with high dispersal rates. The region has almost no organic materials to utilize, which was especially significant at Mount St. Helens, as its isolated location prevented succession to occur at the periphery of the destruction site. Initially effective long distance colonizers are rare, as they are only truly effective after an initial colonizer has helped to change the region into more suitable conditions. This is why primary succession was slow in the destroyed region around Mount St. Helens.
Glacier retreat
Another example is taking place on Signy Island in the South Orkney Islands of Antarctica, due to glacier retreat. Glacier retreat is becoming more normal with the warming climate, and lichens and mosses are the first colonizers. The study, conducted by Favero-Longo et al. found that lichen species diversity varies based on the environmental conditions of the previously existing earth that is first exposed, and the lichens' reproductive patterns.
The characteristics of succession
A case study in Grand Bend, Ontario, gives a full picture of the distinction between primary and secondary succession. Two species, Juniperus virginiana and Quercus prinoides, are quickly reproducing and spreading colonizers associated with primary succession in the dunes of Grand Bend's beaches. They are classified as r-selected species, with high mortality, quick reproduction, and a distinct ability to survive in harsh and nutrient-poor conditions. In contrast, ecological development after primary succession completes often leads to a more heavily k-selected population, which has lower mortality and slower reproduction rates. In Grand Bend, this is shown through the succession of oak-pine forests and the continued reduction of r-selected grasses. The timescale is also relevant, as the secondary succession of oak-pine forests occurs approximately 2,900 years after the initial cases of primary succession, while the end of solely grassland-dominated dunes occurs around 1,600 years after the beginning of primary succession. This is extremely important, as it shows a 1,300-year intermediate period in which primary succession is overtaken by secondary succession. This period is likely characterized by high species diversity, a mix of k- and r-selected species, and high community productivity. It is a well-supported principle that an intermediate between k- and r-dominated populations leads to high productivity and species diversity, while the secondary succession afterwards leads towards climax communities with low species diversity. During this 1,300-year period, it is likely that resources grew into a surplus, which reduced species diversity, resulting in the k-dominated oak-pine forest.
It is very difficult to determine exactly which events will hinder or support the growth of a community, as the following example shows. Very few seedlings survive for long during primary succession: only 1.7% of seedlings on Skeiðarársandur, an outwash plain in southeast Iceland, survived from 2005 to 2007. The rest were replaced by new colonizers, as mortality rates for r-selected species like these are extremely high. This is an important phenomenon: even though population sizes may remain consistent throughout a region's history, many of the r-selected organisms present at any given time are likely to be entirely new individuals. This is one of many factors that make ecological succession highly unpredictable.
See also
Ecological succession
Lithophyte
Pioneer species
Secondary succession
Soil creation
Stability (ecology)
References
Ecological succession
Plants
Ecology terminology | Primary succession | Biology | 1,584 |
27,908,073 | https://en.wikipedia.org/wiki/Triton%20Systems | Triton Systems LLC is a manufacturer of automated teller machines (ATMs). Triton ATMs are built in Long Beach, Mississippi. Triton has been in business since 1979, and has nearly 200,000 installations in over 24 countries.
History
Founded in 1979 by Ernest L. Burdette, Frank J. Wilem, Jr., and Robert E. Sandoz, Triton Systems developed ATMjr, the world's first battery-powered and completely portable device for training bank customers to use what was, at the time, a fairly new banking service, the ATM. Triton followed this product with the development of a Card Activation System that allowed financial institutions to instantly issue ATM cards with customized (often customer chosen) personal identification numbers (PINs).
In the early 1990s, Triton pioneered in-store cash withdrawals with the introduction of the Scrip terminal, a machine that allows a store's customers to use an ATM card to generate a voucher, redeemable for cash at the register.
In 2000 Triton was acquired by Dover Corporation (NYSE-traded DOV), a diversified manufacturer of a wide range of proprietary products and components for industrial and commercial use. In 2004, Fujitsu and Triton entered into a strategic licensing agreement to provide a broader range of solutions for financial institutions and retailers through the deployment of Fujitsu's Windows-based Prism software on Triton ATMs. Later that same year, Triton launched the RT2000, a smaller, low-cost through-the-wall ATM that was easy to install and easy to maintain. 120,000 ATMs were shipped to 17 countries around the world.
In 2005 Hurricane Katrina posed a major challenge for Triton. Its headquarters and manufacturing plant on the Mississippi Gulf Coast were shut down, and the entire coastal area was evacuated. Triton's Long Beach, Mississippi administrative, manufacturing and production facilities were back online within two weeks.
Triton opened a Memphis manufacturing and service facility in July 2006. In 2008 Triton launched the RL2000, a stand-alone ATM. Also that year, Triton's subsidiary, Calypso, began operations in Australia. On April 14, 2008, Calypso successfully conducted the largest migration of ATMs to be completed in a single day — 2,808 ATMs.
In March 2009, Triton introduced the RL1600, a new off-premises ATM. The RL1600 was named the Convenience Store and Petroleum (CSP) magazine Product of the Year for 2009. Also in March 2009, Triton made the decision to sell its Calypso processing operation in order to focus on ATM manufacturing, software development and support.
In September 2009, the company launched ATMGurus. ATMGurus provides customers with multi-brand parts, repair and training support for their ATM estates.
In July 2008, Nautilus Hyosung offered to acquire Triton from its parent company Dover Corporation for US$63 million. However, in May 2009, citing antitrust scrutiny from regulators, the acquisition was cancelled. Subsequently, in March 2010, Dover completed the sale of Triton to a group of private investors. The company is currently privately held.
ATMGurus
ATMGurus is a division of Triton Systems of Delaware Inc. and provides parts, repair, and training for a variety of retail ATM brands.
Memberships
Triton has active membership in the following industry associations:
ATM Industry Association (ATMIA)
NAAIO (National Association of ATM ISOs and Operators)
FSPA (Financial and Security Products Association)
ICBA (Independent Community Banking Association)
References
Triton RL1600, Convenience Store and Petroleum Product of the Year for 2009
External links
Triton Systems Website
ATMs Location Information
Independent Community Banking Association
National Association of ATM ISOs and Operators
Manufacturing companies established in 1979
1979 establishments in Mississippi
Automated teller machines
Manufacturing companies based in Mississippi
Harrison County, Mississippi | Triton Systems | Engineering | 804 |
10,879,760 | https://en.wikipedia.org/wiki/1%2C3-Dichloropropene | 1,3-Dichloropropene, sold under diverse trade names, is an organochlorine compound with the formula ClCH=CHCH2Cl. It is a colorless liquid with a sweet smell. It is feebly soluble in water and evaporates easily. It is used mainly in farming as a pesticide, specifically as a preplant fumigant and nematicide. It acts non-specifically and is in IRAC class 8A. It is widely used in the US and other countries, but is banned in 34 countries (including the European Union).
Production, chemical properties, biodegradation
It is a byproduct in the chlorination of propene to make allyl chloride.
It is usually obtained as a mixture of the geometric isomers, called (Z)-1,3-dichloropropene, and (E)-1,3-dichloropropene. Although it was first applied in agriculture in the 1950s, at least two biodegradation pathways have evolved. One pathway degrades the chlorocarbon to acetaldehyde via chloroacrylic acid.
Safety
The TLV-TWA for 1,3-dichloropropene (DCP) is 1 ppm. It is a contact irritant. A wide range of complications have been reported.
Carcinogenicity
Evidence for the carcinogenicity of 1,3-dichloropropene in humans is inadequate, but results from several cancer bioassays provide adequate evidence of carcinogenicity in animals. In the US, the Department of Health and Human Services (DHHS) has determined that 1,3-dichloropropene may reasonably be anticipated to be a carcinogen. In California, the Office of Environmental Health Hazard Assessment has determined that 1,3-dichloropropene is a carcinogen, and in 2022 established a No Significant Risk Level (NSRL) of 3.7 micrograms/day. The International Agency for Research on Cancer (IARC) has determined that 1,3-dichloropropene is possibly carcinogenic to humans. The EPA has classified 1,3-dichloropropene as a probable human carcinogen.
Use
1,3-Dichloropropene is used as a pesticide on a variety of crops.
Contamination
The ATSDR has extensive contamination information available.
Market history
Under the brand name Telone, 1,3-D was one of Dow AgroSciences's products until the merger into DowDuPont. It was then spun off with Corteva; it has since been licensed to Telos Ag Solutions and is no longer a Corteva product.
References
ATSDR ToxFAQs: Dichloropropenes
USGS Pesticide National Synthesis Project – Crop & Compound
Further reading
ATSDR Toxicological Profile (9.2 MB)
CDC – NIOSH Pocket Guide to Chemical Hazards
Pesticides
Chloroalkenes
IARC Group 2B carcinogens
Fumigants
Sweet-smelling chemicals | 1,3-Dichloropropene | Biology,Environmental_science | 647 |
70,126,157 | https://en.wikipedia.org/wiki/Calibrated%20automated%20thrombogram | The calibrated automated thrombogram (CAT or CT) is a thrombin generation assay (TGA) and global coagulation assay (GCA) which can be used as a coagulation test to assess thrombotic risk. It is the most widely used TGA. The CAT is a semi-automated test performed in a 96-well plate and requires specialized technologists to be performed. As a result, it has seen low implementation in routine laboratories and has been more limited to research settings. Lack of standardization with the CAT has also led to difficulties in study-to-study comparisons in research. However, efforts have recently been made towards standardization of the assay. An example of a specific commercial CAT is the Thrombinoscope by Thrombinoscope BV (now owned by Diagnostica Stago).
The CAT can be used to measure thrombogram parameters such as the endogenous thrombin potential (ETP) and to assess activated protein C resistance (APCR). The CAT ETP-based APC resistance test is especially sensitive to estrogen-induced procoagulation, such as with combined oral contraceptives.
In 2018, a commercial fully-automated TGA system and alternative to the CAT called the ST Genesia debuted. It has been said that this system should allow for more widespread adoption of TGAs in clinical laboratories. The ST Genesia system also shows improved reproducibility compared to the CAT.
References
Blood tests
Coagulation system
Medical signs | Calibrated automated thrombogram | Chemistry | 315 |
11,420,730 | https://en.wikipedia.org/wiki/Citrus%20tristeza%20virus%20replication%20signal | The Citrus tristeza virus replication signal is a regulatory element involved in a viral replication signal which is highly conserved in citrus tristeza viruses. Replication signals are required for viral replication and are usually found near the 5' and 3' termini of protein coding genes. This element is predicted to form ten stem loop structures some of which are essential for functions that provide for efficient viral replication.
See also
Cardiovirus cis-acting replication element (CRE)
Coronavirus SL-III cis-acting replication element (CRE)
Heron HBV RNA encapsidation signal epsilon
References
External links
Cis-regulatory RNA elements | Citrus tristeza virus replication signal | Chemistry | 127 |
345,758 | https://en.wikipedia.org/wiki/Solar%20constant | The solar constant (GSC) measures the amount of energy received by a given area one astronomical unit away from the Sun. More specifically, it is a flux density measuring mean solar electromagnetic radiation (total solar irradiance) per unit area. It is measured on a surface perpendicular to the rays, one astronomical unit (au) from the Sun (roughly the distance from the Sun to the Earth).
The solar constant includes radiation over the entire electromagnetic spectrum. It is measured by satellite as being 1.361 kilowatts per square meter (kW/m2) at solar minimum (the time in the 11-year solar cycle when the number of sunspots is minimal) and approximately 0.1% greater (roughly 1.362 kW/m2) at solar maximum.
The solar "constant" is not a physical constant in the modern CODATA scientific sense; that is, it is not like the Planck constant or the speed of light which are absolutely constant in physics. The solar constant is an average of a varying value. In the past 400 years it has varied less than 0.2 percent. Billions of years ago, it was significantly lower.
This constant is used in the calculation of radiation pressure, which aids in the calculation of a force on a solar sail.
Calculation
Solar irradiance is measured by satellites above Earth's atmosphere, and is then adjusted using the inverse square law to infer the magnitude of solar irradiance at one Astronomical Unit (au) to evaluate the solar constant. The approximate average value cited, 1.3608 ± 0.0005 kW/m2, which is 81.65 kJ/m2 per minute, is equivalent to approximately 1.951 calories per minute per square centimeter, or 1.951 langleys per minute.
Solar output is nearly, but not quite, constant. Variations in total solar irradiance (TSI) were small and difficult to detect accurately with technology available before the satellite era (±2% in 1954). Total solar output is now measured as varying (over the last three 11-year sunspot cycles) by approximately 0.1%; see solar variation for details.
For extrasolar planets
The irradiance produced by any star follows the same inverse-square law: the star's luminosity $L$ spreads over a sphere of area $4\pi d^2$ at distance $d$. Therefore:
$$f = \frac{L}{4\pi d^2},$$
where $f$ is the irradiance of the star at the extrasolar planet at distance $d$.
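As an illustration, the inverse-square relationship above can be checked numerically. The sketch below is only an example, not taken from any cited source: the IAU nominal solar luminosity and Mars' mean orbital distance are standard reference values used here as inputs.

```python
from math import pi

AU = 1.495979e11      # astronomical unit in metres
L_SUN = 3.828e26      # nominal solar luminosity in watts (IAU 2015 value)

def irradiance(luminosity_w, distance_m):
    """Flux density f = L / (4*pi*d^2) on a sphere of radius d around the star."""
    return luminosity_w / (4 * pi * distance_m**2)

print(irradiance(L_SUN, AU))          # ~1361 W/m^2 -- the solar constant
print(irradiance(L_SUN, 1.524 * AU))  # ~586 W/m^2 -- at Mars' mean distance
```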
Historical measurements
In 1838, Claude Pouillet made the first estimate of the solar constant. Using a very simple pyrheliometer he developed, he obtained a value of 1.228 kW/m2, close to the current estimate.
In 1875, Jules Violle resumed the work of Pouillet and offered a somewhat larger estimate of 1.7 kW/m2 based, in part, on a measurement that he made from Mont Blanc in France.
In 1884, Samuel Pierpont Langley attempted to estimate the solar constant from Mount Whitney in California. By taking readings at different times of day, he tried to correct for effects due to atmospheric absorption. However, the final value he proposed, 2.903 kW/m2, was much too large.
Between 1902 and 1957, measurements by Charles Greeley Abbot and others at various high-altitude sites found values between 1.322 and 1.465 kW/m2. Abbot showed that one of Langley's corrections was erroneously applied. Abbot's results varied between 1.89 and 2.22 calories (1.318 to 1.548 kW/m2), a variation that appeared to be due to the Sun and not the Earth's atmosphere.
In 1954 the solar constant was evaluated as 2.00 cal/min/cm2 ± 2%. Current results are about 2.5 percent lower.
Relationship to other measurements
Solar irradiance
The actual direct solar irradiance at the top of the atmosphere fluctuates by about 6.9% during a year (from 1.412 kW/m2 in early January to 1.321 kW/m2 in early July) due to the Earth's varying distance from the Sun, and typically by much less than 0.1% from day to day. Thus, for the whole Earth (which has a cross section of 127,400,000 km2), the power is 1.730×10^17 W (or 173,000 terawatts), plus or minus 3.5% (half the approximately 6.9% annual range). The solar constant does not remain constant over long periods of time (see Solar variation), but over a year the solar constant varies much less than the solar irradiance measured at the top of the atmosphere. This is because the solar constant is evaluated at a fixed distance of 1 astronomical unit (au) while the solar irradiance is affected by the eccentricity of the Earth's orbit. The Earth's distance to the Sun varies annually between 147.1×10^6 km at perihelion and 152.1×10^6 km at aphelion. In addition, several long-term (tens to hundreds of millennia) cycles of subtle variation in the Earth's orbit (Milankovitch cycles) affect the solar irradiance and insolation (but not the solar constant).
The Earth receives a total amount of radiation determined by its cross section (π·RE2), but as it rotates this energy is distributed across the entire surface area (4·π·RE2). Hence the average incoming solar radiation, taking into account the angle at which the rays strike and that at any one moment half the planet does not receive any solar radiation, is one-fourth the solar constant (approximately 340 W/m2). The amount reaching the Earth's surface (as insolation) is further reduced by atmospheric attenuation, which varies. At any given moment, the amount of solar radiation received at a location on the Earth's surface depends on the state of the atmosphere, the location's latitude, and the time of day.
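Making the factor of four explicit, with the satellite-era value of roughly 1.361 kW/m2, recovers the quoted average:

$$\bar{Q} = \frac{G_{SC}\,\pi R_E^2}{4\pi R_E^2} = \frac{G_{SC}}{4} \approx \frac{1361\ \mathrm{W/m^2}}{4} \approx 340\ \mathrm{W/m^2}$$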
Apparent magnitude
The solar constant includes all wavelengths of solar electromagnetic radiation, not just the visible light (see Electromagnetic spectrum). It is positively correlated with the apparent magnitude of the Sun which is −26.8. The solar constant and the magnitude of the Sun are two methods of describing the apparent brightness of the Sun, though the magnitude is based on the Sun's visual output only.
The Sun's total radiation
The angular diameter of the Earth as seen from the Sun is approximately 1/11,700 radians (about 18 arcseconds), meaning the solid angle of the Earth as seen from the Sun is approximately 1/175,000,000 of a steradian. Thus the Sun emits about 2.2 billion times the amount of radiation that is caught by Earth, in other words about 3.846×10^26 watts.
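As a consistency check, the quoted total can be recovered from the irradiance at the Earth's distance; using the older canonical solar-constant value of about 1367 W/m2 (the satellite-era 1361 W/m2 gives approximately 3.83×10^26 W):

$$L_\odot = 4\pi d^2\, G_{SC} \approx 4\pi \left(1.496\times10^{11}\ \mathrm{m}\right)^2 \times 1367\ \mathrm{W/m^2} \approx 3.85\times10^{26}\ \mathrm{W}$$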
Past variations in solar irradiance
Space-based observations of solar irradiance started in 1978. These measurements show that the solar constant is not constant. It varies with the 11-year sunspot solar cycle.
When going further back in time, one has to rely on irradiance reconstructions, using sunspots for the past 400 years or cosmogenic radionuclides for going back 10,000 years.
Such reconstructions show that solar irradiance varies with distinct periodicities. These cycles are: 11 years (Schwabe cycle), 88 years (Gleissberg cycle), 208 years (de Vries cycle) and 1,000 years (Eddy cycle).
Over billions of years, the Sun is gradually expanding, and emitting more energy from the resultant larger surface area. The unsolved question of how to account for the clear geological evidence of liquid water on the Earth billions of years ago, at a time when the sun's luminosity was only 70% of its current value, is known as the faint young Sun paradox.
Variations due to atmospheric conditions
At most about 75% of the solar energy actually reaches the earth's surface, as even with a cloudless sky it is partially reflected and absorbed by the atmosphere. Even light cirrus clouds reduce this to 50%, stronger cirrus clouds to 40%. Thus the solar energy arriving at the surface with the sun directly overhead can vary from 550 W/m2 with cirrus clouds to 1025 W/m2 with a clear sky.
See also
References
Atmospheric radiation
Photovoltaics
Radiometry
Sun | Solar constant | Engineering | 1,705 |
53,153,455 | https://en.wikipedia.org/wiki/Maximal%20entropy%20random%20walk | Maximal entropy random walk (MERW) is a popular type of biased random walk on a graph, in which transition probabilities are chosen accordingly to the principle of maximum entropy, which says that the probability distribution which best represents the current state of knowledge is the one with largest entropy. While standard random walk chooses for every vertex uniform probability distribution among its outgoing edges, locally maximizing entropy rate, MERW maximizes it globally (average entropy production) by assuming uniform probability distribution among all paths in a given graph.
MERW is used in various fields of science. A direct application is choosing probabilities to maximize transmission rate through a constrained channel, analogously to Fibonacci coding. Its properties also made it useful for example in analysis of complex networks, like link prediction, community detection,
robust transport over networks and centrality measures. Also in image analysis, for example for detecting visual saliency regions, object localization, tampering detection or tractography problem.
Additionally, it recreates some properties of quantum mechanics, suggesting a way to repair the discrepancy between diffusion models and quantum predictions, like Anderson localization.
Basic model
Consider a graph with $n$ vertices, defined by an adjacency matrix $A$: $A_{ij} = 1$ if there is an edge from vertex $i$ to $j$, 0 otherwise. For simplicity assume it is an undirected graph, which corresponds to a symmetric $A$; however, MERW can also be generalized for directed and weighted graphs (for example Boltzmann distribution among paths instead of uniform).
We would like to choose a random walk as a Markov process on this graph: for every vertex $i$ and its outgoing edge to $j$, choose the probability $S_{ij}$ of the walker randomly using this edge after visiting $i$. Formally, find a stochastic matrix $S$ (containing the transition probabilities of a Markov chain) such that
$$0 \le S_{ij} \le A_{ij} \quad \text{for all } i, j, \quad \text{and}$$
$$\sum_{j=1}^n S_{ij} = 1 \quad \text{for all } i.$$
Assuming this graph is connected and not periodic, ergodic theory says that evolution of this stochastic process leads to some stationary probability distribution $\rho$ such that $\rho S = \rho$.
Using Shannon entropy for every vertex and averaging over the probability of visiting this vertex (to be able to use its entropy), we get the following formula for the average entropy production (entropy rate) of the stochastic process:
$$H(S) = \sum_{i=1}^n \rho_i \sum_{j=1}^n S_{ij} \log \frac{1}{S_{ij}}$$
This definition turns out to be equivalent to the asymptotic average entropy (per length) of the probability distribution in the space of paths for this stochastic process.
In the standard random walk, referred to here as generic random walk (GRW), we naturally choose that each outgoing edge is equally probable:
$$S_{ij} = \frac{A_{ij}}{k_i}, \qquad k_i = \sum_{j=1}^n A_{ij}.$$
For a symmetric $A$ it leads to a stationary probability distribution with
$$\rho_i = \frac{k_i}{\sum_{j=1}^n k_j}.$$
It locally maximizes entropy production (uncertainty) for every vertex, but usually leads to a suboptimal averaged global entropy rate $H(S)$.
MERW chooses the stochastic matrix which maximizes $H(S)$, or equivalently assumes a uniform probability distribution among all paths in a given graph. Its formula is obtained by first calculating the dominant eigenvalue $\lambda$ and the corresponding eigenvector $\psi$ of the adjacency matrix, i.e. the largest $\lambda$ with corresponding $\psi$ such that $A\psi = \lambda\psi$. Then the stochastic matrix is given by
$$S_{ij} = \frac{A_{ij}}{\lambda} \frac{\psi_j}{\psi_i},$$
for which every possible path of length $l$ from the $i$-th to the $j$-th vertex has probability
$$\frac{1}{\lambda^l} \frac{\psi_j}{\psi_i}.$$
Its entropy rate is $\log(\lambda)$ and the stationary probability distribution is
$$\rho_i = \frac{\psi_i^2}{\sum_{j=1}^n \psi_j^2}.$$
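These formulas translate directly into a few lines of linear algebra. The following is a minimal sketch, not drawn from the cited literature; the function name merw and the toy path graph are illustrative choices, and a symmetric adjacency matrix is assumed so that np.linalg.eigh applies.

```python
import numpy as np

def merw(A):
    """MERW from a symmetric 0/1 adjacency matrix A.

    Returns the transition matrix S, stationary distribution rho,
    and entropy rate log(lambda)."""
    eigvals, eigvecs = np.linalg.eigh(A)               # symmetric -> real spectrum
    lam = eigvals[-1]                                   # dominant eigenvalue
    psi = eigvecs[:, -1]
    psi = psi * np.sign(psi[np.argmax(np.abs(psi))])    # make the Perron vector positive
    S = (A / lam) * np.outer(1.0 / psi, psi)            # S_ij = (A_ij / lam) * psi_j / psi_i
    rho = psi**2 / np.sum(psi**2)                       # stationary distribution
    return S, rho, np.log(lam)

# Tiny example: path graph on 4 vertices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
S, rho, H = merw(A)
assert np.allclose(S.sum(axis=1), 1.0)  # rows are probability distributions
assert np.allclose(rho @ S, rho)        # rho is indeed stationary
```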
In contrast to GRW, the MERW transition probabilities generally depend on the structure of the entire graph (are nonlocal). Hence, they should not be imagined as directly applied by the walker – if random-looking decisions are made based on the local situation, like a person would make, the GRW approach is more appropriate. MERW is based on the principle of maximum entropy, making it the safest assumption when we don't have any additional knowledge about the system. For example, it would be appropriate for modelling our knowledge about an object performing some complex dynamics – not necessarily random, like a particle.
Sketch of derivation
Assume for simplicity that the considered graph is undirected, connected and aperiodic, allowing one to conclude from the Perron–Frobenius theorem that the dominant eigenvector $\psi$ is unique. Hence $A^l$ can be asymptotically ($l \to \infty$) approximated by $\lambda^l \psi \psi^T$ (or $\lambda^l |\psi\rangle\langle\psi|$ in bra–ket notation).
MERW requires a uniform distribution along paths. The number of paths of length $2l$ with the $i$-th vertex in the center is
$$\sum_{j,k} (A^l)_{ji} (A^l)_{ik} \approx \lambda^{2l} \psi_i^2 \Big(\sum_j \psi_j\Big)^2,$$
hence for all $i$,
$$\rho_i \propto \psi_i^2, \qquad \text{i.e.} \qquad \rho_i = \frac{\psi_i^2}{\sum_j \psi_j^2}.$$
Analogously calculating the probability distribution for two succeeding vertices, one obtains that the probability of being at the $i$-th vertex and next at the $j$-th vertex is
$$\frac{\psi_i A_{ij} \psi_j}{\lambda \sum_k \psi_k^2}.$$
Dividing by the probability of being at the $i$-th vertex, i.e. $\rho_i$, gives for the conditional probability of the $j$-th vertex being next after the $i$-th vertex
$$S_{ij} = \frac{A_{ij}}{\lambda} \frac{\psi_j}{\psi_i},$$
as before.
Weighted MERW: Boltzmann path ensemble
We have assumed that $A_{ij} \in \{0,1\}$ for MERW corresponding to the uniform ensemble among paths. However, the above derivation works for any real nonnegative $A$. Parametrizing $A_{ij} = \exp(-E_{ij})$ and asking for the probability of a length-$l$ path $(\gamma_0, \gamma_1, \ldots, \gamma_l)$, we get:
$$\Pr(\gamma_0 \gamma_1 \ldots \gamma_l) = \rho_{\gamma_0} S_{\gamma_0 \gamma_1} \cdots S_{\gamma_{l-1} \gamma_l} = \frac{\psi_{\gamma_0} \psi_{\gamma_l}}{\lambda^l \sum_k \psi_k^2}\, e^{-(E_{\gamma_0\gamma_1} + \ldots + E_{\gamma_{l-1}\gamma_l})},$$
as in a Boltzmann distribution of paths for energy defined as the sum of $E_{ij}$ over the given path. For example, this allows one to calculate the probability distribution of patterns in the Ising model.
Examples
Let us first look at a simple nontrivial situation: Fibonacci coding, where we want to transmit a message as a sequence of 0s and 1s, but not using two successive 1s: after a 1 there has to be a 0. To maximize the amount of information transmitted in such a sequence, we should assume a uniform probability distribution in the space of all possible sequences fulfilling this constraint. To practically use such long sequences, after 1 we have to use 0, but there remains the freedom of choosing the probability of 0 after 0. Let us denote this probability by $q$; then entropy coding would allow encoding a message using this chosen probability distribution. The stationary probability distribution of symbols for a given $q$ turns out to be $\Pr(0) = \frac{1}{2-q}$, $\Pr(1) = \frac{1-q}{2-q}$. Hence, entropy production is $H(q) = \frac{-q \log q - (1-q) \log(1-q)}{2-q}$, which is maximized for $q = (\sqrt{5}-1)/2 \approx 0.618$, the reciprocal of the golden ratio. In contrast, a standard random walk would choose the suboptimal $q = 0.5$. While choosing a larger $q$ reduces the amount of information produced after 0, it also reduces the frequency of 1, after which we cannot write any information.
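The optimum can be verified numerically. Below is an illustrative sketch (the helper name entropy_rate and the grid resolution are arbitrary choices) that scans $q$ and recovers both the optimum near 0.618 and the maximal rate $\log\varphi \approx 0.4812$ nats per symbol:

```python
import numpy as np

def entropy_rate(q):
    """Entropy rate (nats/symbol) of the 'no 11' chain, where q = Pr(0 -> 0).
    Only state 0 offers a choice; it is visited with probability 1/(2-q)."""
    h = -q * np.log(q) - (1 - q) * np.log(1 - q)   # entropy of the choice after a 0
    return h / (2 - q)

qs = np.linspace(0.01, 0.99, 9801)
q_best = qs[np.argmax(entropy_rate(qs))]
phi = (1 + 5**0.5) / 2
print(q_best, 1 / phi)            # both ~0.618
print(entropy_rate(q_best))       # ~log(phi) ~ 0.4812 nats/symbol
```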
A more complex example is a defected one-dimensional cyclic lattice: say, 1000 nodes connected in a ring, of which all nodes but the defects have an additional self-loop (edge to itself). In the standard random walk (GRW) the stationary probability distribution would have the defect probability being 2/3 of the probability of the non-defect vertices – there is nearly no localization, and analogously for standard diffusion, which is the infinitesimal limit of GRW. For MERW we have to first find the dominant eigenvector of the adjacency matrix – maximizing $\lambda$ in:
$$\lambda \psi_x = \psi_{x-1} + (1 - V_x)\psi_x + \psi_{x+1}$$
for all positions $x$, where $V_x = 1$ for defects, 0 otherwise. Substituting $E = 3 - \lambda$ and multiplying the equation by −1 we get:
$$E \psi_x = -\left(\psi_{x-1} - 2\psi_x + \psi_{x+1}\right) + V_x \psi_x$$
where $E$ is minimized now, becoming the analog of energy. The formula inside the bracket is the discrete Laplace operator, making this equation a discrete analogue of the stationary Schrödinger equation. As in quantum mechanics, MERW predicts that the probability distribution should lead exactly to the one of the quantum ground state: $\rho_x \propto \psi_x^2$, with its strongly localized density (in contrast to standard diffusion). Taking the infinitesimal limit, one gets the standard continuous stationary (time-independent) Schrödinger equation ($E\psi = -C\psi_{xx} + V\psi$ for a constant $C$).
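A rough numerical sketch of this comparison follows (illustrative only; the graph size of 1000 nodes matches the example above, while the defect spacing of 100 sites is an arbitrary assumption):

```python
import numpy as np

n = 1000
A = np.zeros((n, n))
for x in range(n):                        # cyclic lattice (ring)
    A[x, (x + 1) % n] = A[(x + 1) % n, x] = 1
defects = set(range(0, n, 100))           # hypothetical: one defect every 100 sites
for x in range(n):
    if x not in defects:
        A[x, x] = 1                       # self-loop everywhere except at defects

# GRW: stationary probability proportional to degree -> almost flat.
deg = A.sum(axis=1)
rho_grw = deg / deg.sum()

# MERW: stationary probability proportional to psi^2 -> strongly localized
# between defects, like a quantum ground state in a potential well.
w, v = np.linalg.eigh(A)
psi = np.abs(v[:, -1])                    # dominant (Perron) eigenvector
rho_merw = psi**2 / (psi**2).sum()

print(rho_grw.max() / rho_grw.min())      # 1.5: defects get 2/3 the probability
print(rho_merw.max() / rho_merw.min())    # many orders of magnitude: localization
```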
See also
Principle of maximum entropy
Eigenvector centrality
Markov chain
Anderson localization
References
External links
Gábor Simonyi, Y. Lin, Z. Zhang, "Mean first-passage time for maximal-entropy random walks in complex networks". Scientific Reports, 2014.
Electron Conductance Models Using Maximal Entropy Random Walks Wolfram Demonstration Project
Network theory
Diffusion
Information theory
Quantum mechanics | Maximal entropy random walk | Physics,Chemistry,Mathematics,Technology,Engineering | 1,553 |
3,232,460 | https://en.wikipedia.org/wiki/List%20of%20exceptional%20set%20concepts | This is a list of exceptional set concepts. In mathematics, and in particular in mathematical analysis, it is very useful to be able to characterise subsets of a given set X as 'small', in some definite sense, or 'large' if their complement in X is small. There are numerous concepts that have been introduced to study 'small' or 'exceptional' subsets. In the case of sets of natural numbers, it is possible to define more than one concept of 'density', for example. See also list of properties of sets of reals.
Almost all
Almost always
Almost everywhere
Almost never
Almost surely
Analytic capacity
Closed unbounded set
Cofinal (mathematics)
Cofinite
Dense set
IP set
2-large
Large set (Ramsey theory)
Meagre set
Measure zero
Natural density
Negligible set
Nowhere dense set
Null set, conull set
Partition regular
Piecewise syndetic set
Schnirelmann density
Small set (combinatorics)
Stationary set
Syndetic set
Thick set
Thin set (Serre)
Exceptional
47,277,557 | https://en.wikipedia.org/wiki/Pivaldehyde | Pivaldehyde is an organic compound, more specifically an aldehyde. Its systematic name, 2,2-dimethylpropanal, is based on the longest carbon chain (three carbon atoms), ending in "-al" to indicate the aldehyde functionality; another descriptive synonym is trimethylacetaldehyde. Pivaldehyde is an example of an aldehyde with a sterically bulky R group, the tert-butyl group (bearing three methyl groups), attached to the carbonyl, >C=O. By definition, the other "group", R', is a hydrogen (H) atom.
See also
Pivalic acid - corresponding carboxylic acid
Pivalamide - corresponding amide
Pinacolone - corresponding methyl ketone
References
Alkanals
Tert-butyl compounds | Pivaldehyde | Chemistry | 207 |
49,595,958 | https://en.wikipedia.org/wiki/Dmitri%20Leonidovich%20Romanowsky | Dmitri Leonidovich Romanowsky (sometimes spelled Dmitry and Romanowski, ; 1861–1921) was a Russian physician who is best known for his invention of an eponymous histological stain called Romanowsky stain. It paved the way for the discovery and diagnosis of microscopic pathogens, such as malarial parasites, and later developments of new histological stains that became fundamental to microbiology and physiology.
While working on his doctoral research, Romanowsky developed the first effective staining method for the malarial parasite in 1890. Using a specific mixture of mouldy methylene blue and eosin, he found that malarial parasites could be distinctively identified from other blood cells and within the red blood cells. The chemical reaction behind such staining is known in chemistry as the "Romanowsky effect". The method became the gold standard in malaria detection by microscopy and in general immunohistochemistry. The British zoologist and science historian Francis Edmund Gabriel Cox described the discovery as a serendipitous case that became "one of the most significant technical advances in the history of parasitology."
Biography
Romanowsky was born in 1861 in Pskov Governorate, Russia. He attended the 6th Saint Petersburg Gymnasium. In 1880, he enrolled at St. Petersburg University in two courses: natural science (physics and mathematics) and medicine. From 1882 he concentrated on medicine, in a preparatory course for the Military Medical Academy. He graduated with honors in 1886. On 30 November 1886, he was appointed as a junior resident of the Ivangorod military hospital. After one month, he was transferred to the Revel local infirmary as an associate doctor. In 1889, he was attached to the Saint Petersburg Nikolaevsky Military Hospital. He initially worked in the clinical department, and from May 1890 he was the head of the eye department. He obtained his medical degree in 1891 with the thesis "On the question of parasitology and therapy of malaria."
Romanowsky died in 1921 in Kislovodsk in North Caucasus.
Invention of histological stain
Background
Romanowsky's research for his medical degree in the 1880s was mainly on the identification of the malarial parasite (Plasmodium). Until that time malarial infection was difficult to confirm, as the parasites were hard to distinguish from blood cells and cell organelles. Pigmented blood cells were often linked to malarial infection, but the pigments are not always visible. When French physician Charles Louis Alphonse Laveran discovered and described the malarial protozoan (later named Plasmodium falciparum) in 1880, the finding was not accepted, as no protozoan had ever been seen in blood cells or associated with malaria.
In 1871, German chemist Adolf von Baeyer synthesised a red dye called eosin (Greek word for "morning red"), which in 1876 was found to be useful for staining tissues. Another German chemist Heinrich Caro synthesised a blue dye named methylene blue in 1876, which was first used as a cell stain by Robert Koch. In 1882, using methylene blue Koch discovered the causative bacterium of tuberculosis, tubercle bacillus (now Mycobacterium tuberculosis). The two stains remain among the fundamental stains used in general cell and tissue staining, as well as in clinical diagnosis.
Romanowsky stain
Romanowsky was the first to realise the differences in the staining abilities of eosin and methylene blue. The individual stains (monochromatic staining) were good only for general colouring of tissue or cell, but not for contrasting the different components. By mixing specific amount of eosin and methylene blue, Romanowsky found that the mixture gave images of contrasting clarity that helped to visualise different parts and components of cells. This mixture method, polychromatic staining or polychromy, with various modifications became the most efficient way of staining cells for identifying cellular components. The chemical phenomenon by which a mixture of stains produces vibrant cell images is known as "Romanowsky effect".
In December 1890, Romanowsky published his invention in the journal Vrach, as a preliminary report of the major work for his doctoral thesis, under the title "On the question of the structure of malaria parasites" (as translated into English). Books and journals more often record, incorrectly, that Romanowsky published his findings in 1891; this error fed a priority controversy, as Ernst Malachowsky independently developed the technique and published his research in August 1891.
Romanowsky discovered that instead of fresh methylene blue, an aged and mouldy solution gave the best result, while eosin should be free of any contamination. He described:For staining [blood sample having malarial infection] the following mixture is used, as discovered by me, which is best when freshly prepared: 2 volumes of a filtered saturated aqueous solution of methylene blue plus 5 volumes of a 1% aqueous eosin solution... In my preparations I always obtain the following picture. Red cells are stained in a pink color. Cytoplasm in eosinophils is saturated-pink, whilst that in the malaria parasite and lymphocytes is light blue. Blood platelets and the nuclei of white cell are dark-violet, whilst the nuclei of malaria parasites are purple-violet. The cytoplasm of leukocytes is pale-violet, with transitional colors between the light blue protoplasm of lymphocytes to violet leukocytes.
Within red cells the malaria parasite may be hardly noticeable or may occupy the whole cell. In any event, the violet nucleus, surrounded by a colorless rim, is always clearly distinguishable.
Romanowsky gave an elaborate description of the new technique in his thesis, submitted in June 1891. The staining method remains the "gold standard" for visualising blood samples, especially for malarial infection, and in immunohistochemical studies.
References
1861 births
1921 deaths
Saint Petersburg State University alumni
Ophthalmologists from the Russian Empire
19th-century physicians from the Russian Empire
Romanowsky stains
Microbiology
Cytopathology
Malaria | Dmitri Leonidovich Romanowsky | Chemistry,Biology | 1,257 |
28,042,707 | https://en.wikipedia.org/wiki/Cyprus%20Safer%20Internet%20Helpline | The Cyprus Safer Internet Helpline is a service provided by the Cyprus Safer Internet Center project, coordinated by the Cyprus Neuroscience and Technology Institute (CNTI). The Helpline ensures that not only children and adolescents but also adults have the opportunity to converse with experts in case they experience something negative on the Internet. Educated psychologists provide support and essential advice so that the crisis is overcome and the situation is confronted. Members of the public can reach the helpline at the number 7000 0 116. The communication is completely confidential and anonymous.
The need for the operation of a helpline has been stressed by the European Commission, which holds that the growth in Internet use has outpaced education in its correct and moral use. Consequently, many children come into contact with pages containing inappropriate content or with individuals who want to exploit them. Such cases usually cause fear and distress, which must be addressed.
The Helpline is a member of the Insafe European network of Awareness Centres that promote the safe and responsible use of the Internet and mobile devices to young people. The mission of the Insafe cooperation network is to empower citizens to use the Internet, the mobile phone, and other online technologies positively, safely and effectively. The network calls for shared responsibility for the protection of the rights and needs of citizens, in particular children and youngsters, by government, educators, parents, media, industry and all other relevant actors.
History
The service was first established in 2009, through the Cyprus Internet Awareness Center, and is co-funded by the Safer Internet Plus Program of the European Commission, under Grant Number SIP-2008-CNH-143-802.
The Safer Internet Program of the European Commission has been instrumental in developing the Helpline network in Europe.
Visibility
The Helpline has run visibility events to promote Internet safety issues and has participated in and contributed to various forums with a view to developing safer internet initiatives. It has also provided support and speakers for events run by educational organisations, industry associations and child welfare organisations. Interviews regarding the work of the Helpline are regularly given on TV, on radio and in the written press. The Cyprus Internet Helpline is also an active participant in the organisation of various events and activities to raise awareness in the context of the annual celebration of the International Safer Internet Day.
Related projects
In addition to the Helpline, the Cyprus Safer Internet Center also operates the Cyprus Safer Internet Hotline.
Sources
EC reference to helplines: http://ec.europa.eu/information_society/activities/sip/projects/centres/index_en.htm#awareness_insafe
EC reference to the Cyprus Internet Helpline: http://ec.europa.eu/information_society/apps/projects/factsheet/index.cfm?project_ref=SIP-2008-CNH-143802
Insafe reference to the Cyprus Internet Helpline: http://www.saferinternet.org/web/guest/centre/-/centre/cyprus
References
External links
The Cyprus Internet Helpline, https://archive.today/20120803204314/http://www.helpline.cyberethics.info/
INSAFE, http://www.saferinternet.org
European Union’s Safer Internet plus Programme, http://ec.europa.eu/information_society/activities/sip/programme/index_en.htm
Internet in Cyprus
Internet safety
Ethics of science and technology
Crisis hotlines | Cyprus Safer Internet Helpline | Technology | 717 |
37,208 | https://en.wikipedia.org/wiki/Landslide | Landslides, also known as landslips, or rockslides, are several forms of mass wasting that may include a wide range of ground movements, such as rockfalls, mudflows, shallow or deep-seated slope failures and debris flows. Landslides occur in a variety of environments, characterized by either steep or gentle slope gradients, from mountain ranges to coastal cliffs or even underwater, in which case they are called submarine landslides.
Gravity is the primary driving force for a landslide to occur, but there are other factors affecting slope stability that produce specific conditions that make a slope prone to failure. In many cases, the landslide is triggered by a specific event (such as a heavy rainfall, an earthquake, a slope cut to build a road, and many others), although this is not always identifiable.
Landslides are frequently made worse by human development (such as urban sprawl) and resource exploitation (such as mining and deforestation). Land degradation frequently leads to less stabilization of soil by vegetation. Additionally, global warming caused by climate change and other human impact on the environment, can increase the frequency of natural events (such as extreme weather) which trigger landslides. Landslide mitigation describes the policy and practices for reducing the risk of human impacts of landslides, reducing the risk of natural disaster.
Causes
Landslides occur when the slope (or a portion of it) undergoes some processes that change its condition from stable to unstable. This is essentially due to a decrease in the shear strength of the slope material, an increase in the shear stress borne by the material, or a combination of the two. A change in the stability of a slope can be caused by a number of factors, acting together or alone. Natural causes of landslides include:
increase in water content (loss of suction) or saturation by rain water infiltration, snow melting, or glaciers melting;
rising of groundwater or increase of pore water pressure (e.g. due to aquifer recharge in rainy seasons, or by rain water infiltration);
increase of hydrostatic pressure in cracks and fractures;
loss or absence of vertical vegetative structure, soil nutrients, and soil structure (e.g. after a wildfire);
erosion of the top of a slope by rivers or sea waves;
physical and chemical weathering (e.g. by repeated freezing and thawing, heating and cooling, salt leaking in the groundwater or mineral dissolution);
ground shaking caused by earthquakes, which can destabilize the slope directly (e.g., by inducing soil liquefaction) or weaken the material and cause cracks that will eventually produce a landslide;
volcanic eruptions;
changes in pore fluid composition;
changes in temperature (seasonal or induced by climate change).
Landslides are aggravated by human activities, such as:
deforestation, cultivation and construction;
vibrations from machinery or traffic;
blasting and mining;
earthwork (e.g. by altering the shape of a slope, or imposing new loads);
in shallow soils, the removal of deep-rooted vegetation that binds colluvium to bedrock;
agricultural or forestry activities (logging), and urbanization, which change the amount of water infiltrating the soil.
temporal variation in land use and land cover (LULC): it includes the human abandonment of farming areas, e.g. due to the economic and social transformations which occurred in Europe after the Second World War. Land degradation and extreme rainfall can increase the frequency of erosion and landslide phenomena.
Types
Hungr-Leroueil-Picarelli classification
In traditional usage, the term landslide has at one time or another been used to cover almost all forms of mass movement of rocks and regolith at the Earth's surface. In 1978, geologist David Varnes noted this imprecise usage and proposed a new, much tighter scheme for the classification of mass movements and subsidence processes. This scheme was later modified by Cruden and Varnes in 1996, and refined by Hutchinson (1988), Hungr et al. (2001), and finally by Hungr, Leroueil and Picarelli (2014). The classification resulting from the latest update is provided below.
Under this classification, six types of movement are recognized. Each type can be seen both in rock and in soil. A fall is a movement of isolated blocks or chunks of soil in free-fall. The term topple refers to blocks coming away by rotation from a vertical face. A slide is the movement of a body of material that generally remains intact while moving over one or several inclined surfaces or thin layers of material (also called shear zones) in which large deformations are concentrated. Slides are also sub-classified by the form of the surface(s) or shear zone(s) on which movement happens. The planes may be broadly parallel to the surface ("planar slides") or spoon-shaped ("rotational slides"). Slides can occur catastrophically, but movement on the surface can also be gradual and progressive. Spreads are a form of subsidence, in which a layer of material cracks, opens up, and expands laterally. Flows are the movement of fluidised material, which can be both dry or rich in water (such as in mud flows). Flows can move imperceptibly for years, or accelerate rapidly and cause disasters. Slope deformations are slow, distributed movements that can affect entire mountain slopes or portions of it. Some landslides are complex in the sense that they feature different movement types in different portions of the moving body, or they evolve from one movement type to another over time. For example, a landslide can initiate as a rock fall or topple and then, as the blocks disintegrate upon the impact, transform into a debris slide or flow. An avalanching effect can also be present, in which the moving mass entrains additional material along its path.
Flows
Slope material that becomes saturated with water may produce a debris flow or mud flow. However, also dry debris can exhibit flow-like movement. Flowing debris or mud may pick up trees, houses and cars, and block bridges and rivers causing flooding along its path. This phenomenon is particularly hazardous in alpine areas, where narrow gorges and steep valleys are conducive of faster flows. Debris and mud flows may initiate on the slopes or result from the fluidization of landslide material as it gains speed or incorporates further debris and water along its path. River blockages as the flow reaches a main stream can generate temporary dams. As the impoundments fail, a domino effect may be created, with a remarkable growth in the volume of the flowing mass, and in its destructive power.
An earthflow is the downslope movement of mostly fine-grained material. Earthflows can move at speeds within a very wide range, from as low as 1 mm/yr to many km/h. Though they are much like mudflows, earthflows are overall slower-moving and are covered with solid material carried along by the flow from within. Clay, fine sand and silt, and fine-grained pyroclastic material are all susceptible to earthflows. These flows are usually controlled by the pore water pressures within the mass, which must be high enough to produce a low shearing resistance. On the slopes, some earthflows may be recognized by their elongated shape, with one or more lobes at their toes. As these lobes spread out, drainage of the mass increases and the margins dry out, lowering the overall velocity of the flow. This process also causes the flow to thicken. Earthflows occur more often during periods of high precipitation, which saturates the ground and builds up water pressures. However, earthflows that keep advancing also during dry seasons are not uncommon. Fissures may develop during the movement of clayey materials, which facilitate the intrusion of water into the moving mass and produce faster responses to precipitation.
A rock avalanche, sometimes referred to as sturzstrom, is a large and fast-moving landslide of the flow type. It is rarer than other types of landslides but it is often very destructive. It exhibits typically a long runout, flowing very far over a low-angle, flat, or even slightly uphill terrain. The mechanisms favoring the long runout can be different, but they typically result in the weakening of the sliding mass as the speed increases. The causes of this weakening are not completely understood. Especially for the largest landslides, it may involve the very quick heating of the shear zone due to friction, which may even cause the water that is present to vaporize and build up a large pressure, producing a sort of hovercraft effect. In some cases, the very high temperature may even cause some of the minerals to melt. During the movement, the rock in the shear zone may also be finely ground, producing a nanometer-size mineral powder that may act as a lubricant, reducing the resistance to motion and promoting larger speeds and longer runouts. The weakening mechanisms in large rock avalanches are similar to those occurring in seismic faults.
Slides
Slides can occur in any rock or soil material and are characterized by the movement of a mass over a planar or curvilinear surface or shear zone.
A debris slide is a type of slide characterized by the chaotic movement of material mixed with water and/or ice. It is usually triggered by the saturation of thickly vegetated slopes which results in an incoherent mixture of broken timber, smaller vegetation and other debris. Debris flows and avalanches differ from debris slides because their movement is fluid-like and generally much more rapid. This is usually a result of lower shear resistances and steeper slopes. Typically, debris slides start with the detachment of large rock fragments high on the slopes, which break apart as they descend.
Clay and silt slides are usually slow but can experience episodic acceleration in response to heavy rainfall or rapid snowmelt. They are often seen on gentle slopes and move over planar surfaces, such as over the underlying bedrock. Failure surfaces can also form within the clay or silt layer itself; these usually have concave shapes, resulting in rotational slides.
Shallow and deep-seated landslides
Slope failure mechanisms often contain large uncertainties and can be significantly affected by the heterogeneity of soil properties. A landslide in which the sliding surface is located within the soil mantle or weathered bedrock (typically at a depth from a few decimeters to some meters) is called a shallow landslide. Debris slides and debris flows are usually shallow. Shallow landslides often happen on slopes where highly permeable soils sit on top of low-permeability soils. The low-permeability soil traps water in the shallower soil, generating high water pressures. As the top soil becomes saturated, it can become unstable and slide downslope.
Deep-seated landslides are those in which the sliding surface is mostly deeply located, for instance well below the maximum rooting depth of trees. They usually involve deep regolith, weathered rock, and/or bedrock and include large slope failures associated with translational, rotational, or complex movements. They tend to form along a plane of weakness such as a fault or bedding plane. They can be visually identified by concave scarps at the top and steep areas at the toe. Deep-seated landslides also shape landscapes over geological timescales and produce sediment that strongly alters the course of fluvial streams.
Related phenomena
An avalanche, similar in mechanism to a landslide, involves a large amount of ice, snow and rock falling quickly down the side of a mountain.
A pyroclastic flow is caused by a collapsing cloud of hot ash, gas and rocks from a volcanic explosion that moves rapidly down an erupting volcano.
Extreme precipitation and flow can cause gully formation in flatter environments not susceptible to landslides.
Resulting tsunamis
Landslides that occur undersea, or have impact into water e.g. significant rockfall or volcanic collapse into the sea, can generate tsunamis. Massive landslides can also generate megatsunamis, which are usually hundreds of meters high. In 1958, one such tsunami occurred in Lituya Bay in Alaska.
Landslide prediction mapping
Landslide hazard analysis and mapping can provide useful information for catastrophic loss reduction, and assist in the development of guidelines for sustainable land-use planning. The analysis is used to identify the factors that are related to landslides, estimate the relative contribution of factors causing slope failures, establish a relation between the factors and landslides, and predict the landslide hazard in the future based on such a relationship. The factors that have been used for landslide hazard analysis can usually be grouped into geomorphology, geology, land use/land cover, and hydrogeology. Since many factors are considered for landslide hazard mapping, GIS is an appropriate tool because it has functions of collection, storage, manipulation, display, and analysis of large amounts of spatially referenced data which can be handled fast and effectively. Cardenas reported evidence on the exhaustive use of GIS in conjunction with uncertainty modelling tools for landslide mapping. Remote sensing techniques are also highly employed for landslide hazard assessment and analysis. Before-and-after aerial photographs and satellite imagery are used to gather landslide characteristics, like distribution and classification, and factors like slope, lithology, and land use/land cover to be used to help predict future events. Before-and-after imagery also helps to reveal how the landscape changed after an event, what may have triggered the landslide, and shows the process of regeneration and recovery.
Using satellite imagery in combination with GIS and on-the-ground studies, it is possible to generate maps of likely occurrences of future landslides. Such maps should show the locations of previous events as well as clearly indicate the probable locations of future events. In general, to predict landslides, one must assume that their occurrence is determined by certain geologic factors, and that future landslides will occur under the same conditions as past events. Therefore, it is necessary to establish a relationship between the geomorphologic conditions in which the past events took place and the expected future conditions.
Natural disasters are a dramatic example of people living in conflict with the environment. Early predictions and warnings are essential for the reduction of property damage and loss of life. Because landslides occur frequently and can represent some of the most destructive forces on earth, it is imperative to have a good understanding of what causes them and of how people can either help prevent them from occurring or simply avoid them when they do occur. Sustainable land management and development is also an essential key to reducing the negative impacts felt by landslides.
GIS offers a superior method for landslide analysis because it allows one to capture, store, manipulate, analyze, and display large amounts of data quickly and effectively. Because so many variables are involved, it is important to be able to overlay the many layers of data to develop a full and accurate portrayal of what is taking place on the Earth's surface. Researchers need to know which variables are the most important factors that trigger landslides in any given location. Using GIS, extremely detailed maps can be generated to show past events and likely future events which have the potential to save lives, property, and money.
Since the 1990s, GIS has also been successfully used in conjunction with decision support systems, to show on a map real-time risk evaluations based on monitoring data gathered in the area of the Val Pola disaster (Italy).
Prehistoric landslides
Storegga Slide, some 8,000 years ago off the western coast of Norway. Caused massive tsunamis in Doggerland and other areas connected to the North Sea. An estimated 3,500 km3 of debris was involved, comparable to an area the size of Iceland covered to a thickness of about 34 m. The landslide is thought to be among the largest in history.
Landslide which moved Heart Mountain to its current location, the largest continental landslide discovered so far. In the 48 million years since the slide occurred, erosion has removed most of the slide material.
Flims Rockslide, Switzerland, some 10,000 years ago in the post-glacial Pleistocene/Holocene; the largest so far described in the Alps and on dry land that can be easily identified in a modestly eroded state.
The landslide around 200 BC which formed Lake Waikaremoana on the North Island of New Zealand, where a large block of the Ngamoko Range slid and dammed a gorge of the Waikaretaheke River, forming a deep natural reservoir.
Cheekye Fan, British Columbia, Canada, Late Pleistocene in age.
The Manang-Braga rock avalanche/debris flow may have formed Marsyangdi Valley in the Annapurna Region, Nepal, during an interstadial period belonging to the last glacial period. An enormous volume of material is estimated to have been moved in a single event, making it one of the largest continental landslides.
Tsergo Ri landslide, a massive slope failure north of Kathmandu, Nepal. Prior to this landslide the mountain may have been the world's 15th mountain above 8,000 m.
Historical landslides
The 1806 Goldau landslide on 2 September 1806
The Cap Diamant Québec rockslide on 19 September 1889
Frank Slide, Turtle Mountain, Alberta, Canada, on 29 April 1903
Khait landslide, Khait, Tajikistan, Soviet Union, on 10 July 1949
A magnitude 7.5 earthquake in Yellowstone Park (17 August 1959) caused a landslide that blocked the Madison River, and created Quake Lake.
Monte Toc landslide (about 260 million m3) falling into the Vajont Dam basin in Italy, causing a megatsunami and about 2,000 deaths, on 9 October 1963
Hope Slide landslide (about 47 million m3) near Hope, British Columbia on 9 January 1965.
The 1966 Aberfan disaster
Tuve landslide in Gothenburg, Sweden on 30 November 1977.
The 1979 Abbotsford landslip, Dunedin, New Zealand on 8 August 1979.
The eruption of Mount St. Helens (18 May 1980) caused an enormous landslide when the top 1300 feet of the volcano suddenly gave way.
Val Pola landslide during Valtellina disaster (1987) Italy
Thredbo landslide, Australia on 30 July 1997, destroyed hostel.
Vargas mudslides, due to heavy rains in Vargas State, Venezuela, in December 1999, causing tens of thousands of deaths.
2005 La Conchita landslide in Ventura, California causing 10 deaths.
2006 Southern Leyte mudslide in Saint Bernard, Southern Leyte, causing 1,126 deaths and buried the village of Guinsaugon.
2007 Chittagong mudslide, in Chittagong, Bangladesh, on 11 June 2007.
2008 Cairo landslide on 6 September 2008.
The 2009 Peloritani Mountains disaster caused 37 deaths, on October 1.
The 2010 Uganda landslide caused over 100 deaths following heavy rain in Bududa region.
Zhouqu county mudslide in Gansu, China on 8 August 2010.
Devil's Slide, an ongoing landslide in San Mateo County, California
2011 Rio de Janeiro landslide in Rio de Janeiro, Brazil on 11 January 2011, causing 610 deaths.
2014 Pune landslide, in Pune, India.
2014 Oso mudslide, in Oso, Washington
2017 Mocoa landslide, in Mocoa, Colombia
2022 Ischia landslide
2024 Gofa landslides, in Gofa, Ethiopia
2024 Wayanad landslides, in Wayanad, Kerala, India
Extraterrestrial landslides
Evidence of past landslides has been detected on many bodies in the solar system, but since most observations are made by probes that only observe for a limited time, and since most bodies in the solar system appear to be geologically inactive, few landslides are known to have happened in recent times. Both Venus and Mars have been subject to long-term mapping by orbiting satellites, and examples of landslides have been observed on both planets.
Landslide mitigation
Landslide monitoring
The monitoring of landslides is essential for assessing dangerous situations, making it possible to issue alerts on time, to avoid losses of lives and property, and to have proper planning and risk-reduction measures in place. Currently, there exist different types of techniques aimed at monitoring landslides:
Remote sensing techniques
InSAR (Interferometric Synthetic Aperture Radar): This remote sensing technique measures ground displacement over time with high precision. It is ideal for large-scale monitoring.
LiDAR (Light Detection and Ranging): Provides detailed 3D models of terrain to detect changes by comparing point clouds acquired at different times.
Optical satellite imagery: Useful for identifying surface changes, geomorphological features (e.g. cracks and scarps) and mapping landslide-prone areas.
UAVs (Unmanned Aerial Vehicles): This technique captures high-resolution images and topographic data in inaccessible areas.
Thermal imaging: Thermal images enable the detection of temperature variations that may indicate water movement or stress in the slope.
Ground-based techniques
GPS (Global Positioning System): Tracks ground movements at specific points over time using a constellation of satellites orbiting the Earth.
Topographic surveys: Measure displacements of marked targets on a slope.
Ground-based radar (GB-SAR): Continuously monitors surface deformation using a SAR sensor and detects movement in real time. It follows the same principle as InSAR.
Geotechnical instrumentation
Piezometers: Monitor groundwater levels and pore water pressure, which are critical triggers for landslides.
Load cells: Measure stress changes in retaining structures or anchors.
Tiltmeters: Detect small angular changes in the slope surface or retaining walls.
Extensometers: Measure displacement along cracks or tension zones.
Inclinometers: Detect subsurface movements by monitoring changes in the inclination of a borehole.
Seismic techniques
Geophones and accelerometers: Detect seismic vibrations or movements that might indicate slope instability.
Climate-change impact on landslides
Climate-change impact on temperature, both average rainfall and rainfall extremes, and evapotranspiration may affect landslide distribution, frequency and intensity (62). However, this impact shows strong variability in different areas (63). Therefore, the effects of climate change on landslides need to be studied on a regional scale.
Climate change can have both positive and negative impacts on landslides:
Temperature rise may increase evapotranspiration, leading to a reduction in soil moisture and stimulating vegetation growth (the latter also promoted by a CO2 increase in the atmosphere). Both effects may reduce landslides in some conditions.
On the other hand, temperature rise causes an increase of landslides due to:
the acceleration of snowmelt and an increase of rain on snow during spring, leading to strong infiltration events (64).
Permafrost degradation that reduces the cohesion of soils and rock masses due to the loss of interstitial ice (65). This mainly occurs at high elevation.
Glacier retreat that has the dual effect of relieving mountain slopes and increasing their steepness.
Since the average precipitation is expected to decrease or increase regionally (63), rainfall induced landslides may change accordingly, due to changes in infiltration, groundwater levels and river bank erosion.
Weather extremes, including heavy precipitation, are expected to increase due to climate change (63). This increases landslide activity due to focused infiltration in soil and rock (66) and to an increase of runoff events, which may trigger debris flows.
See also
Avalanche
California landslides
Deformation monitoring
Earthquake engineering
Geotechnical engineering
Huayco
Landslide dam
Natural disaster
Railway slide fence
Rockslide
Sector collapse
Slump (geology)
Urban search and rescue
Washaway
References
External links
United States Geological Survey site (archived 25 March 2002)
British Geological Survey landslides site
British Geological Survey National Landslide Database
International Consortium on Landslides
Environmental soil science
Hazards of outdoor recreation
Natural disasters
| Landslide | Physics,Environmental_science | 4,837 |
4,087,321 | https://en.wikipedia.org/wiki/Latent%20learning | Latent learning is the subconscious retention of information without reinforcement or motivation. In latent learning, one changes behavior only once sufficient motivation arises, which may be long after the information was subconsciously retained.
Latent learning occurs when the observation of something, rather than direct experience of it, affects later behavior. Observational learning can take many forms: a human observes a behavior and later repeats it at another time (not direct imitation), even though no one rewarded them for that behavior.
In the social learning theory, humans observe others receiving rewards or punishments, which invokes feelings in the observer and motivates them to change their behavior.
In latent learning particularly, there is no observation of a reward or punishment. Latent learning is simply animals observing their surroundings with no particular motivation to learn the geography of it; however, at a later date, they are able to exploit this knowledge when there is motivation - such as the biological need to find food or escape trouble.
The lack of reinforcement, associations, or motivation with a stimulus is what differentiates this type of learning from the other learning theories such as operant conditioning or classical conditioning.
Comparison to other types of learning
Classical conditioning
Classical conditioning is when an animal eventually subconsciously anticipates a biological stimulus such as food when they experience a seemingly random stimulus, due to a repeated experience of their association. One significant example of classical conditioning is Ivan Pavlov's experiment in which dogs showed a conditioned response to a bell the experimenters had purposely tried to associate with feeding time. After the dogs had been conditioned, the dogs no longer only salivated for the food, which was a biological need and therefore an unconditioned stimulus. The dogs began to salivate at the sound of a bell, the bell being a conditioned stimulus and the salivating now being a conditioned response to it. They salivated at the sound of a bell because they were anticipating food.
On the other hand, latent learning is when an animal learns something even though it has no motivation or stimulus associating a reward with learning it. Animals can therefore simply be exposed to information with no immediate use for it and still retain it. One significant example of latent learning is rats subconsciously creating mental maps and later using that information to find a biological stimulus such as food faster when a reward is present. These rats already knew the map of the maze, even though there was no motivation to learn the maze before the food was introduced.
Operant conditioning
Operant conditioning is the ability to tailor an animal's behavior using rewards and punishments. Latent learning is tailoring an animal's behavior by giving it time to create a mental map before a stimulus is introduced.
Social learning theory
Social learning theory suggests that behaviors can be learned through observation, but actively cognizant observation. In this theory, observation leads to a change in behavior more often when rewards or punishments associated with specific behaviors are observed. Latent learning theory is similar in the observation aspect, but again it is different due to the lack of reinforcement needed for learning.
Early studies
In a classic study by Edward C. Tolman, three groups of rats were placed in mazes and their behavior observed each day for more than two weeks. The rats in Group 1 always found food at the end of the maze; the rats in Group 2 never found food; and the rats in Group 3 found no food for 10 days, but then received food on the eleventh. The Group 1 rats quickly learned to rush to the end of the maze; Group 2 rats wandered in the maze but did not preferentially go to the end. Group 3 acted the same as the Group 2 rats until food was introduced on Day 11; then they quickly learned to run to the end of the maze and did as well as the Group 1 rats by the next day. This showed that the Group 3 rats had learned about the organisation of the maze, but without the reinforcement of food. Until this study, it was largely believed that reinforcement was necessary for animals to learn such tasks. Other experiments showed that latent learning can happen in shorter durations of time, e.g. 3–7 days. Among other early studies, it was also found that animals allowed to explore the maze and then detained for one minute in the empty goal box learned the maze much more rapidly than groups not given such goal orientation.
In 1949, John Seward conducted studies in which rats were placed in a T-maze with one arm coloured white and the other black. One group of rats had 30 minutes to explore this maze with no food present, and the rats were not removed as soon as they had reached the end of an arm. Seward then placed food in one of the two arms. Rats in this exploratory group learned to go down the rewarded arm much faster than another group of rats that had not previously explored the maze. Similar results were obtained by Bendig in 1952, where rats were trained to escape from water in a modified T-maze with food present while satiated for food, then tested while hungry. Upon being returned to the maze while food-deprived, the rats learned where the food was located at a rate that increased with the number of pre-exposures given to the rat in the training phase. This indicated varying levels of latent learning.
Most early studies of latent learning were conducted with rats, but a study by Stevenson in 1954 explored this method of learning in children. Stevenson required children to explore a series of objects to find a key, and then he determined the knowledge the children had about various non-key objects in the set-up. The children found non-key objects faster if they had previously seen them, indicating they were using latent learning. Their ability to learn in this way increased as they became older.
In 1982, Wirsig and co-researchers used the taste of sodium chloride to explore which parts of the brain are necessary for latent learning in rats. Decorticate rats were just as able as normal rats to accomplish the latent learning task.
More recent studies
Latent learning in infants
The human ability to perform latent learning appears to be a major reason why infants can later use knowledge they acquired before they had the skills to act on it. For example, infants do not gain the ability to imitate until they are 6 months old. In one experiment, one group of infants was exposed to hand puppets A and B simultaneously at the age of three months. Another control group, the same age, was presented with only puppet A. All of the infants were then periodically presented with puppet A until six months of age. At six months of age, the experimenters performed a target behavior on the first puppet while all the infants watched. Then, all the infants were presented with puppets A and B. The infants that had seen both puppets at three months of age imitated the target behavior on puppet B at a significantly higher rate than the control group, which had not seen the two puppets paired. This suggests that the pre-exposed infants had formed an association between the puppets without any reinforcement. This exhibits latent learning in infants, showing that infants can learn by observation even when they do not show any indication of learning until they are older.
The impact of different drugs on latent learning
Many drugs abused by humans imitate dopamine, the neurotransmitter that gives humans motivation to seek rewards. It has been shown that zebrafish lacking dopamine can still latently learn about rewards if they are given caffeine: fish given caffeine before learning could later use the knowledge they had acquired to find the reward once they were given dopamine.
Alcohol may impede latent learning. Some zebrafish were exposed to alcohol before exploring a maze, then continued to be exposed to alcohol when a reward was introduced to the maze. It took these zebrafish much longer to find the reward in the maze than the control group that had not been exposed to alcohol, even though they showed the same amount of motivation. However, the longer the zebrafish had been exposed to alcohol, the smaller the effect on their latent learning. Another experimental group was zebrafish representing alcohol withdrawal. The zebrafish that performed worst were those that had been exposed to alcohol for a long period and then had it removed before the reward was introduced. These fish lacked motivation, showed motor dysfunction, and seemed not to have latently learned the maze.
Other factors impacting latent learning
Though the specific area of the brain responsible for latent learning may not have been pinpointed, it was found that patients with medial temporal amnesia had particular difficulty with a latent learning task which required representational processing.
Another study, conducted with mice, found intriguing evidence that the absence of a prion protein disrupts latent learning and other memory functions in the water maze latent learning task. Phencyclidine was also found to impair latent learning in a water finding task.
References
Ethology
Learning methods
Memory | Latent learning | Biology | 1,853 |
28,219,858 | https://en.wikipedia.org/wiki/Marlborough-Blenheim%20Hotel | The Marlborough-Blenheim Hotel was a historic resort hotel property in Atlantic City, New Jersey, built in 1902–1906, and demolished in October 1978.
History
In 1900, Josiah White III bought a parcel of land between Ohio Avenue and Park Place on the Boardwalk, and built the Queen Anne style Marlborough House. The hotel was financially successful and in 1905, he chose to expand. White hired Philadelphia architect William Lightfoot Price of Price and McLanahan to design a new, separate tower to be called the Blenheim.
"Blenheim" refers to Blenheim Palace in England, the ancestral home of Sir Winston Churchill, a grandson of the 7th Duke of Marlborough.
Recent hotel fires in and around Atlantic City, Price's recent experience designing the all-concrete Jacob Reed store in Philadelphia, and a steel strike in the fall of 1905 influenced Price's choice of reinforced concrete for the tower. It opened in 1906.
It was not the first reinforced concrete hotel in the world, as French concrete pioneer François Hennebique had designed the Imperial Palace Hotel in Nice five years previously. But it was the largest reinforced concrete building in the world. The hotel's Spanish and Moorish themes, capped off with its signature dome and chimneys, represented a step forward from other hotels that had a classically designed influence.
In 1916, Winston Churchill was a guest of the hotel.
In 1946, President General May Erwin Talmadge held the 55th Continental Congress of the Daughters of the American Revolution at the hotel.
On March 14, 1977, Reese Palley and local attorney and businessman Martin Blatt purchased the Marlborough-Blenheim from the White family. They intended to spend $35 million on renovations, preserving the Blenheim wing, while razing the Marlborough to make way for a modern casino hotel. In June 1977, Bally Manufacturing, the world's largest producer of slot machines, leased the Marlborough-Blenheim from Palley and Blatt for 40 years, with an option for a further 100 years. On August 17, 1977, Bally announced that it had purchased the neighboring Dennis Hotel for $4 million from the First National Bank of South Jersey. On October 25, 1977, Josiah White IV, grandson of the Marlborough-Blenheim's founder, presided over the closure of the hotel, locking its front door.
After Bally took control of the two properties, it announced plans to raze all three hotel buildings - the Marlborough, the Blenheim, and the Dennis, despite protests, to make way for the new "Bally's Park Place Casino and Hotel", an $83 million casino/hotel designed by California-based Maxwell Starkman Associates. The new resort was to have a 39-story, octagonal hotel tower and a huge three-level podium, containing a casino, along with other resort and convention facilities. However, in an effort to offset costs and open the casino as soon as possible, the Dennis Hotel was retained to serve as the temporary hotel for Bally's until a new tower could be built.
Bally demolished the wood-framed Marlborough with the conventional wrecking ball. For the Blenheim the company hired Controlled Demolition, Inc. (CDI) and Winzinger Incorporated of Hainesport, New Jersey, which had taken down the Traymore Hotel, to implode the structure. A preservation group which had sought historic status for the building won a stay of execution for the Blenheim's rotunda portion on the Boardwalk. It was separated from the rest of the hotel, which was imploded in the fall of 1978. Several months later its historic status was denied, the stay was lifted, and CDI finished the demolition on January 4, 1979. It is not known if they sold the name Marlborough-Blenheim as well.
Bally's Park Place now stands at this location.
In culture
The hotel, here renamed the "Essex-Carlton", features prominently in the 1972 Bob Rafelson film The King of Marvin Gardens, starring Jack Nicholson, Bruce Dern and Ellen Burstyn.
In the Garry Marshall film Beaches, a young Hillary Whitney stays with her family at the hotel, where she treats a young C. C. Bloom to chocolate sodas in the Garden Court. The scene was filmed at the Ambassador Hotel (Los Angeles), which itself was torn down in 2005.
In the HBO television show Boardwalk Empire, the fictionalized Nucky Thompson lives on the 8th floor of a Ritz-Carlton whose architecture is based on the Marlborough-Blenheim's, rather than that of the actual Ritz-Carlton in Atlantic City that the real Nucky Johnson had lived in. The Blenheim hotel is mentioned throughout the series.
A clip of the demolition of the main dome of the hotel is featured in the video for Bruce Springsteen's song "Atlantic City."
The second act of the 1925 Broadway musical comedy "No, No, Nanette" is set in the Marlborough-Blenheim and the song "Peach of the Beach" contains the lyric: "You can bet Nanette is the prize and pet of the Marlborough-Blenheim Hotel."
See also
List of tallest buildings in Atlantic City
Winzinger Inc. of Hainesport, New Jersey was the demolition contractor of the hotels along with CDI who controlled and planned the explosives. Heidi Winzinger's song "Queen of Atlantic City" is a folk rock song dedicated to the Blenheim Hotel's memory.
References
Hotel buildings completed in 1906
Buildings and structures demolished in 1979
Buildings and structures demolished by controlled implosion
Skyscraper hotels in Atlantic City, New Jersey
Demolished hotels in New Jersey
Art Nouveau architecture in the United States
Art Nouveau hotels
1906 establishments in New Jersey
1977 disestablishments in New Jersey | Marlborough-Blenheim Hotel | Engineering | 1,177 |
75,914,024 | https://en.wikipedia.org/wiki/Einsteinium%28II%29%20iodide | Einsteinium(II) iodide is a binary inorganic chemical compound of einsteinium and iodide with the chemical formula .
Synthesis
The compound can be prepared via a reaction of and .
Physical properties
The compound forms a solid.
References
Einsteinium compounds
Iodides
Actinide halides | Einsteinium(II) iodide | Chemistry | 60 |
3,841,160 | https://en.wikipedia.org/wiki/Constant%20fraction%20discriminator | A constant fraction discriminator (CFD) is an electronic signal processing device, designed to mimic the mathematical operation of finding a maximum of a pulse by finding the zero of its slope. Some signals do not have a sharp maximum, but they do have short rise times.
Typical input signals for CFDs are pulses from plastic scintillation counters, such as those used for lifetime measurement in positron annihilation experiments. The scintillator pulses have identical rise times that are much longer than the desired temporal resolution. This forbids simple threshold triggering, which causes a dependence of the trigger time on the signal's peak height, an effect called time walk (see diagram). Identical rise times and peak shapes permit triggering not on a fixed threshold but on a constant fraction of the total peak height, yielding trigger times independent from peak heights.
From another point of view
A time-to-digital converter assigns timestamps. The time-to-digital converter needs fast rising edges with normed height. The plastic scintillation counter delivers fast rising edge with varying heights. Theoretically, the signal could be split into two parts. One part would be delayed and the other low pass filtered, inverted and then used in a variable-gain amplifier to amplify the original signal to the desired height. Practically, it is difficult to achieve a high dynamic range for the variable-gain amplifier, and analog computers have problems with the inverse value.
Principle of operation
The incoming signal is split into three components.
One component is delayed by a time and may be multiplied by a small factor to put emphasis on the leading edge of the pulse; it is connected to the noninverting input of a comparator. The second component is connected to the inverting input of this comparator. The third component is connected to the noninverting input of another comparator, and a threshold value is connected to the inverting input of that comparator. The output of both comparators is fed through an AND gate. A discriminator without that constant fraction would just be a comparator.
In radio engineering, the word discriminator is also used for something entirely different, namely an FM demodulator.
Often the logic levels are shifted from -15 V < low < 0 < high < 15 V delivered by the comparator to 0 V < low < 1.5 V < high < 3.3 V needed by CMOS logic.
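The zero-crossing principle can be illustrated numerically. The following Python sketch is an illustrative model only, not a description of any particular instrument; the pulse shape, the 3 ns delay, and the fraction of 0.3 are assumed values. It forms the bipolar signal as an attenuated prompt copy minus a delayed copy of the pulse and locates its zero crossing, which does not depend on the pulse height:

import numpy as np

def cfd_time(t, pulse, delay, fraction=0.3):
    # bipolar CFD signal: attenuated prompt pulse minus delayed pulse
    d = int(round(delay / (t[1] - t[0])))
    delayed = np.zeros_like(pulse)
    delayed[d:] = pulse[:-d]
    bipolar = fraction * pulse - delayed
    i = np.argmax(bipolar)               # peak of the positive lobe
    j = i + np.argmax(bipolar[i:] < 0)   # first sample below zero
    y0, y1 = bipolar[j - 1], bipolar[j]  # interpolate the crossing
    return t[j - 1] + (t[j] - t[j - 1]) * y0 / (y0 - y1)

t = np.linspace(0, 50, 5001)                     # time axis in ns
shape = (1 - np.exp(-t / 2)) * np.exp(-t / 10)   # fixed rise and decay
for amplitude in (0.1, 1.0, 10.0):
    print(cfd_time(t, amplitude * shape, delay=3.0))

All three amplitudes yield the same trigger time, whereas a fixed threshold would fire earlier on the larger pulses (time walk).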
Applications
If the discriminator triggers a sampler with a following comparator this is called a single channel analyzer (SCA).
If an Analog-to-digital converter is used, this is called a multi channel analyzer (MCA).
References
Beuzekom, M. (2006). "Identifying fast hadrons with silicon detectors", Appendix A, University of Groningen Faculty of Mathematics and Natural Sciences Dissertation
Signal processing
Measuring instruments | Constant fraction discriminator | Technology,Engineering | 590 |
12,637,298 | https://en.wikipedia.org/wiki/Dimethyltubocurarinium%20chloride | Dimethyltubocurarinium chloride (INN; also known as metocurine chloride (USAN) and dimethyltubocurarine chloride) is a non-depolarizing nicotinic acetylcholine receptor antagonist used as a muscle relaxant.
References
Quaternary ammonium compounds
Nicotinic antagonists
Norsalsolinol ethers
Pyrogallol ethers
Macrocycles
Cyclophanes
Methoxy compounds
Cyclic ethers
Heterocyclic compounds with 7 or more rings | Dimethyltubocurarinium chloride | Chemistry | 113 |
72,918,290 | https://en.wikipedia.org/wiki/Magnesium%20laurate | Magnesium laurate is a metal-organic compound with the chemical formula . The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid).
Physical properties
Soluble in water.
Uses
Magnesium laurate is used in the food industry as a binder, emulsifier, and anticaking agent.
References
Laurates
Magnesium compounds | Magnesium laurate | Chemistry | 78 |
76,167,693 | https://en.wikipedia.org/wiki/Confluentic%20acid | Confluentic acid is an organic compound belonging to the chemical class known as depsides. It serves as a secondary metabolite in certain lichens and plays a role in distinguishing closely related species within the genus Porpidia. In 1899, Friedrich Wilhelm Zopf isolated a compound from Lecidea confluens, which he initially named confluentin and noted for its melting point of 147–148 °C. This substance demonstrated the ability to turn litmus paper red and, when interacting with alkali, decomposed into carbon dioxide and phenol-like compounds. Zopf subsequently revised the chemical formula and melting point of the compound. Siegfried Huneck renamed it confluentinic acid in 1962, characterising it as optically inactive, with distinct colour reactions and solubility properties, and determined its molecular formula as C28H36O8.
Researchers typically identify the presence of confluentic acid using methods such as thin-layer chromatography and high-performance liquid chromatography. Additionally, an alternative visual detection method involves examining the lichen's thallus or apothecium (fruiting body) under a microscope on a slide treated with potassium hydroxide, which reveals oil droplets indicative of confluentic acid. Several structural analogues of confluentic acid have been isolated from a variety of lichen species.
History
In 1899, Friedrich Wilhelm Zopf reported isolating a substance from Lecidea confluens, which he named confluentin, characterised by a melting point of . He also found that this substance turns litmus paper red, reacts with FeCl3 to produce a red-brown colour, and decomposes into carbon dioxide, a volatile substance, and a phenol-like compound with a melting point of 52 °C upon interaction with alkali. Zopf initially proposed the formula C37H50O10 for this compound before revising it to C26H36O7, noting the updated melting point as .
In his 1962 report of his chemical investigations into the substance, German chemist Siegfried Huneck proposed naming it 'confluentinic acid' due to the presence of the carboxylic acid functional group, aligning with the naming conventions of other lichen products. Huneck described the substance as optically inactive and noted its poor solubility in petroleum ether, ethyl acetate, and acetone, but found it readily soluble in ether, benzene, and methanol. He noted the following colour reactions: weak brownish with alcoholic FeCl3 solution, blue, green, and finally violet with potassium hydroxide and chloroform upon heating, orange to orange-red with tetrazotised benzidine, and gray-violet with p-phenylenediamine; no colouration was observed with barium hydroxide. Huneck used elemental analysis and molecular weight determination by titration to determine the molecular formula of confluentinic acid as C28H36O8. The Zeisel determination for methoxyl group analysis indicated two methoxyl groups per molecule.
John Elix and Brian Ferguson's proposal for the total synthesis of confluentic acid in 1978 marked a significant advancement in understanding of this lichen substance, enabling scientists to better study and understand the compound's structure and biological activity without relying solely on natural extraction. The synthesis began with the direct condensation of suitably substituted aromatic carboxylic acids and phenols, using dicyclohexylcarbodiimide. Key precursors involved were specially prepared benzoic acids, with protective measures for reactive groups. The process included steps like bromination, alkylation, and the strategic use of protecting groups for the phenol and carboxyl functionalities. The synthesis culminated in the removal of protecting groups and hydrogenolysis over palladised carbon to yield the desired depsides including confluentic acid. In 1993, G. Fegie and colleagues introduced a standardised high-performance liquid chromatographic method that enabled the separation and detection of hundreds of lichen products, confluentic acid included.
Properties
Confluentic acid is a member of the class of chemical compounds called depsides. Its IUPAC name is 4-[2-hydroxy-4-methoxy-6-(2-oxoheptyl)benzoyl]oxy-2-methoxy-6-pentylbenzoic acid. The ultraviolet absorbance spectrum has two maxima (λmax), at 268 and 304 nm. In the infrared spectrum, significant peaks indicative of the carboxylic acid functional group occur at 1700 cm−1 (C=O stretching in carbonyl groups) and within the broad range of 2600 to 3100 cm−1 (O-H stretching). The broad band at 3100 cm−1 is due to hydrogen bonding, while the peak at 3500 cm−1 is the COOH stretching band. Confluentic acid's molecular formula is C28H36O8; it has a molecular mass of 500.57 grams per mole. In its purified form, it exists as crystalline needles with a melting point of .
Occurrence
The mycobiont (fungal partner) of the lichen Lecidea tessellata has been shown to produce confluentic acid when cultured without its algal partner. Confluentic acid has also been reported from mycobiont cultures of Parmelina carporrhizans. Confluentic acid is produced by almost all species of the genus Immersaria, which is usually accompanied by 2'-O-methylmicrophyllinic acid. The absence of confluentic acid distinguishes Inoderma nipponicum from others in the genus Inoderma, which typically contain this chemical. The only character reliably distinguishing Porpidia contraponenda and the morphologically similar Porpidia cinereoatra is their secondary chemistry: the former contains 2'-O-methylmicrophyllinate and the latter has confluentic acid. A chemosyndrome is a set of biosynthetically related compounds produced by a lichen. The confluentic acid chemosyndrome was identified in several lichens in the family Lecideaceae; it contains confluentic acid as the major metabolite, and minor amounts of 2'-O-methylperlatolic acid, olivetonide monomethyl ether, and 2'-O-methylmicrophyllinic acid.
Not just limited to lichen-forming fungi, confluentic acid has also been reported from the Brazilian plant Himatanthus sucuuba, highlighting the compound's broader biological distribution.
A study on Cryptothecia rubrocincta reveals distinct biochemical compositions in various parts of its thallus, suggesting specialised roles for the compounds present. Specifically, confluentic acid was found exclusively in localised brown flecks within the red and pink zones of the thallus, alongside calcium oxalate monohydrate. This distribution is in contrast to other thallus areas, such as the white zone containing only calcium oxalate dihydrate and the dark red zone with chiodectonic acid, chlorophyll, beta-carotene, and additional calcium oxalate dihydrate in the pink sub-zone. The presence of confluentic acid in specific areas without beta-carotene and chiodectonic acid—both known UV protectants—suggests that confluentic acid plays a different role in the lichen's survival strategy. While the exact function of confluentic acid in these localised brown flecks remains unclear, it is indicated that it is not required for radiation protection. The study also highlights a transition within the lichen from calcium oxalate dihydrate to the more stable monohydrate form, associated with the ageing process and possibly the metabolic activities involving confluentic acid.
Detection
Alan Fryday (1991) outlined a technique for the detection of confluentic acid in lichen samples. This method involves placing a section of the lichen's thallus or apothecium (fruiting body) on a microscope slide, which is then saturated with a 10% potassium hydroxide (KOH) solution. When examined under a compound microscope at 40x magnification, a distinctive 'halo' of small oil droplets or bubbles emanating from the tissue section indicates the presence of confluentic acid. The oil droplets generated during this detection process consist of 4-O-methylolivetonide, a compound that is insoluble in potassium hydroxide solution. This substance forms as a result of confluentic acid undergoing hydrolysis in the presence of potassium hydroxide. This test is particularly useful in distinguishing between morphologically similar yet chemically distinct species within the genus Porpidia, aiding accurate identification and study.
Related compounds
The chemical diversity within lichens includes a variety of compounds related to confluentic acid, reflecting the complex biosynthetic capabilities of these symbiotic organisms and their significance in lichen taxonomy and ecology. In 1987, Chicita Culberson and colleagues reported the use of high-performance liquid chromatography to isolate and identify additional higher-carbon analogue substances in the "confluentic series", including hyperconfluentic acid, superconfluentic acid, and subconfluentic acid. These substances were isolated from the lichen Pseudobaeomyces pachycarpa. The structure of subconfluentic acid (4-[2'-hydroxy-4'-methoxy-6'-(2"-oxopentyl)benzoyloxy]-2-methoxy-6-pentylbenzoic acid) was later established by synthesis. The compound 4-O-demethylsuperconfluentic acid, structurally similar to confluentic acid, was isolated from Stirtonia ramosa. Another analogue, 2-O-methylconfluentic acid, was identified from Lecidea fuscoatra.
Gowan (1989) suggested a close chemical and biosynthetic relationship between methyl 2'-O-methylmicrophyllinate and confluentic acid, noting that the biosynthetic pathways leading to these compounds primarily differ in the length of the acetyl-polymalonyl segment. This means that the two compounds are synthesised through similar processes, differing mainly in the size of a specific chain within the molecule. Additionally, there is only a minor variation in their methylation patterns. Gowan further suggested that methyl 2'-O-methylmicrophyllinate likely originated from an ancestor that already produced confluentic acid.
Notes
References
Lichen products
Organic acids
Polyphenols
Methoxy compounds
Ketones | Confluentic acid | Chemistry | 2,266 |
55,907,806 | https://en.wikipedia.org/wiki/IEEE%201849 | The IEEE STANDARD 1849-2016, IEEE Standard for eXtensible Event Stream (XES) for Achieving Interoperability in Event Logs and Event Streams, is a technical standard developed by the IEEE Standards Association. It standardizes "a language to transport, store, and exchange (possibly large volumes of) event data (e.g., for process mining)". In 2023, the standard was revised and superseded by IEEE Standard 1849-2023.
Process mining aims to discover, monitor and improve processes by extracting knowledge from event logs representing actual process executions in a given setting. Process mining depends on the availability of accurate and unambiguous event logs, according to established standards. The purpose of this standard is to provide a generally acknowledged (W3C) XML format for the interchange of event data between information systems in many applications domains on the one hand and analysis tools for such data on the other hand. As such, this standard aims to fix the syntax and the semantics of the event data which, for example, is being transferred from the site generating this data to the site analyzing this data. As a result of this standard, if the event data is transferred using the syntax as described by this standard, its semantics will be well understood and clear at both sites.
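To illustrate the format's structure (this is an illustrative sketch, not normative text from the standard; the case name, activity name, and timestamp are invented), the following Python snippet assembles a minimal XES document, in which a log element holds traces, each trace holds events, and all data are typed key-value attributes whose keys are declared by extensions:

import xml.etree.ElementTree as ET

# minimal event log: one trace ("case") holding one event
log = ET.Element("log", {"xes.version": "1.0"})
for name, prefix, uri in [
        ("Concept", "concept", "http://www.xes-standard.org/concept.xesext"),
        ("Time", "time", "http://www.xes-standard.org/time.xesext")]:
    ET.SubElement(log, "extension", {"name": name, "prefix": prefix, "uri": uri})
trace = ET.SubElement(log, "trace")
ET.SubElement(trace, "string", {"key": "concept:name", "value": "order-1"})
event = ET.SubElement(trace, "event")
ET.SubElement(event, "string", {"key": "concept:name", "value": "create order"})
ET.SubElement(event, "date", {"key": "time:timestamp",
                              "value": "2016-11-25T14:00:00.000+00:00"})
print(ET.tostring(log, encoding="unicode"))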
IEEE 1849 was the second IEEE Standard Sponsored by the IEEE Computational Intelligence Society. The first was IEEE 1855.
IEEE Standard 1849-2023
The 2023 revision of the standard has been approved on the 5th of June 2023 and introduces the following changes:
new Micro, Software Event, Software Communication, Software Telemetry, and Artifact Lifecycle extensions
updated lists of tools supporting the standard, event logs using the standard, and publications that mention the standard
updated XES Schema definition, fixing a flaw related to the position of the log attributes
updated bibliography
References
External links
IEEE standards
Data management | IEEE 1849 | Technology | 379 |
1,307,896 | https://en.wikipedia.org/wiki/Chemically%20peculiar%20star | In astrophysics, chemically peculiar stars (CP stars) are stars with distinctly unusual metal abundances, at least in their surface layers.
Classification
Chemically peculiar stars are common among hot main-sequence (hydrogen-burning) stars. These hot peculiar stars have been divided into 4 main classes on the basis of their spectra, although two classification systems are sometimes used:
non-magnetic metallic-lined (Am, CP1)
magnetic (Ap, CP2)
non-magnetic mercury-manganese (HgMn, CP3)
helium-weak (He-weak, CP4).
The class names provide a good idea of the peculiarities that set them apart from other stars on or near the main sequence.
The Am stars (CP1 stars) show weak lines of singly ionized Ca and/or Sc, but show enhanced abundances of heavy metals. They also tend to be slow rotators and have an effective temperature between 7000 and .
The Ap stars (CP2 stars) are characterized by strong magnetic fields, enhanced abundances of elements such as Si, Cr, Sr and Eu, and are also generally slow rotators. The effective temperature of these stars is stated to be between 8000 and , but the issue of calculating effective temperatures in such peculiar stars is complicated by atmospheric structure.
The HgMn stars (CP3 stars) are also classically placed within the Ap category, but they do not show the strong magnetic fields associated with classical Ap stars. As the name implies, these stars show increased abundances of singly ionized mercury and manganese. These stars are also very slow rotators, even by the standards of CP stars. The effective temperature range for these stars is quoted at between and .
The He-weak stars (CP4 stars) show weaker He lines than would be expected classically from their observed Johnson UBV colours. A rare class of He-weak stars are, paradoxically, the helium-rich stars, with temperatures of –.
Cause of the peculiarities
It is generally thought that the peculiar surface compositions observed in these hot main-sequence stars have been caused by processes that happened after the star formed, such as diffusion or magnetic effects in the outer layers of the stars. These processes cause some elements, particularly He, N and O, to "settle" out in the atmosphere into the layers below, while other elements such as Mn, Sr, Y and Zr are "levitated" out of the interior to the surface, resulting in the observed spectral peculiarities. It is assumed that the centers of the stars, and the bulk compositions of the entire star, have more normal chemical abundance mixtures which reflect the compositions of the gas clouds from which they formed. In order for such diffusion and levitation to occur and the resulting layers to remain intact, the atmosphere of such a star must be sufficiently stable against convection that convective mixing does not occur. The proposed mechanism causing this stability is the unusually large magnetic field that is generally observed in stars of this type.
Approximately 5–10% of hot main sequence stars show chemical peculiarities. Of these, the vast majority are Ap (or Bp) stars with strong magnetic fields. Non-magnetic, or only weakly magnetic, chemically peculiar stars mostly fall into the Am or HgMn categories. A much smaller percentage show stronger peculiarities, such as the dramatic under-abundance of iron peak elements in λ Boötis stars.
sn stars
Another group of stars sometimes considered to be chemically peculiar are the 'sn' stars. These hot stars, usually of spectral classes B2 to B9, show Balmer lines with sharp (s) cores, sharp metallic absorption lines, and contrasting broad (nebulous, n) neutral helium absorption lines. These may be combined with the other chemical peculiarities more commonly seen in B-type stars.
It was originally proposed that the unusual helium lines were created in a weak shell of material around the star, but are now thought to be caused by the Stark effect.
Other stars
There are also classes of chemically peculiar cool stars (that is, stars with spectral type G or later), but these stars are typically not main-sequence stars. These are usually identified by the name of their class or some further specific label. The phrase chemically peculiar star without further specification usually means a member of one of the hot main sequence types described above. Many of the cooler chemically peculiar stars are the result of the mixing of nuclear fusion products from the interior of the star to its surface; these include most of the carbon stars and S-type stars. Others are the result of mass transfer in a binary star system; examples of these include the barium stars and some S stars.
Companions
There are very few reports of exoplanets whose host stars are chemically peculiar stars. The young variable star HR 8799, which hosts four directly imaged massive planets, belongs to the group of λ Boötis stars. Similarly, the binary star HIP 79098, whose primary is a mercury-manganese star, was found via direct imaging to have a substellar companion, possibly a brown dwarf or a gas giant.
See also
List of stars that have unusual dimming periods
Przybylski's Star
References
Star types | Chemically peculiar star | Astronomy | 1,069 |
37,842,413 | https://en.wikipedia.org/wiki/NGC%202467 | NGC 2467, nicknamed the "Skull and Crossbones Nebula", is a star-forming region whose appearance has occasionally also been likened to that of a colorful mandrill. It includes areas where large clouds of hydrogen gas incubate new stars. This region was one of the areas featured in the book Hubble's Universe: Greatest Discoveries and Latest Images by Terence Dickinson.
Discussion
NGC 2467 had long been considered to be the nucleus of the Puppis I association. However, NGC 2467 does not represent a distinct open cluster; rather, it represents a superimposition of several stellar groups along the same approximate line of sight that have distinctly different distances and distinctly different radial velocities. One of these is a young and very distant group beyond Puppis OB2, while another, nearer group with later-type stars lies at a similar distance as Puppis OB1.
The region is dominated by a massive young star, HD 64315 (annotated in Commons, below and left of center), of spectral type O6. Two stellar clusters also exist in the area, Haffner 19 (H19, annotated) and Haffner 18 (H18, annotated). H19 is a compact cluster containing a Strömgren sphere which is ionized by a hot B0 V-type star. H18 contains a very young star, FM3060a (annotated), that has just come into existence and still surrounded by its birth cocoon of gas. The age of H19 is estimated to be 2 Myr, while the age H18 is somewhat controversial, some considering it to be as young as only 1 Myr. The field contains other early-type stars such as HD 64568 (annotated, upper right) whose relationship with the clusters is unclear.
The H II region of NGC 2467 has been the target of various investigations to elucidate the process of star formation. Unresolved questions include understanding the degree to which the stars already formed in such regions, especially the massive O or B stars, can affect the future formation of stars in the region: Do these pre-existing stars trigger the formation of others? One such investigation was conducted using the Spitzer Space Telescope, which discovered 45 young stellar objects (YSOs), or protostars, in the region during its "cold" mission, i.e. before its supply of liquid helium ran out. The YSOs are mostly along the edge of the HII region. The concentrated distribution of these objects spatially correlated with the ionization fronts provides evidence for triggered star formation. The newly forming protostars are concentrated in areas where the shock front driven in advance of the ionization front compresses the molecular gas.
It has been estimated that H19, H18, and the S311 nebula (in which lies HD 64315) are about , , and away, placing them in the Perseus Arm of the Milky Way. A significant discrepancy has existed between the distances to these features estimated kinematically versus distances estimated photometrically. Regardless of these discrepancies, H19 and H18 may be considered to be a binary cluster.
Gallery
Footnotes
References
External links
NGC 2467
Star Birth NGC 2467 (Gemini Observatory)
2467
Star-forming regions
Sharpless objects
Puppis | NGC 2467 | Astronomy | 690 |
7,902,939 | https://en.wikipedia.org/wiki/J-coupling | In nuclear chemistry and nuclear physics, J-couplings (also called spin-spin coupling or indirect dipole–dipole coupling) are mediated through chemical bonds connecting two spins. It is an indirect interaction between two nuclear spins that arises from hyperfine interactions between the nuclei and local electrons. In NMR spectroscopy, J-coupling contains information about relative bond distances and angles. Most importantly, J-coupling provides information on the connectivity of chemical bonds. It is responsible for the often complex splitting of resonance lines in the NMR spectra of fairly simple molecules.
J-coupling is a frequency difference that is not affected by the strength of the magnetic field, so is always stated in Hz.
Vector model and manifestations for chemical structure assignments
The origin of J-coupling can be visualized by a vector model for a simple molecule such as hydrogen fluoride (HF). In HF, the two nuclei have spin . Four states are possible, depending on the relative alignment of the H and F nuclear spins with the external magnetic field. The selection rules of NMR spectroscopy dictate that ΔI = 1, which means that a given photon (in the radio frequency range) can affect ("flip") only one of the two nuclear spins.
J-coupling provides three parameters: the multiplicity (the "number of lines"), the magnitude of the coupling (strong, medium, weak), and the sign of the coupling.
Multiplicity
The multiplicity provides information on the number of centers coupled to the signal of interest, and their nuclear spin. For simple systems, as in 1H–1H coupling in NMR spectroscopy, the multiplicity is one more than the number of adjacent protons which are magnetically nonequivalent to the protons of interest. For ethanol, each methyl proton is coupled to the two methylene protons, so the methyl signal is a triplet, while each methylene proton is coupled to the three methyl protons, so the methylene signal is a quartet.
Nuclei with spins greater than , which are called quadrupolar, can give rise to greater splitting, although in many cases coupling to quadrupolar nuclei is not observed. Many elements consist of nuclei with nuclear spin and without. In these cases, the observed spectrum is the sum of spectra for each isotopomer. One of the great conveniences of NMR spectroscopy for organic molecules is that several important lighter spin nuclei are either monoisotopic, e.g. 31P and 19F, or have very high natural abundance, e.g. 1H. An additional convenience is that 12C and 16O have no nuclear spin so these nuclei, which are common in organic molecules, do not cause splitting patterns in NMR.
Magnitude of J-coupling
For 1H–1H coupling, the magnitude of J decreases rapidly with the number of bonds between the coupled nuclei, especially in saturated molecules. Generally speaking two-bond coupling (i.e. 1H–C–1H) is stronger than three-bond coupling (1H–C–C–1H). The magnitude of the coupling also provides information on the dihedral angles relating the coupling partners, as described by the Karplus equation for three-bond coupling constants.
For heteronuclear coupling, the magnitude of J is related to the nuclear magnetic moments of the coupling partners. 19F, with a high nuclear magnetic moment, gives rise to large coupling to protons. 103Rh, with a very small nuclear magnetic moment, gives only small couplings to 1H. To correct for the effect of the nuclear magnetic moment (or equivalently the gyromagnetic ratio γ), the "reduced coupling constant" K is often discussed, where
K = 4π²J / (hγ1γ2), where γ1 and γ2 are the gyromagnetic ratios of the two coupled nuclei.
For coupling of a 13C nucleus and a directly bonded proton, the dominant term in the coupling constant JC–H is the Fermi contact interaction, which is a measure of the s-character of the bond at the two nuclei.
Where the external magnetic field is very low, e.g. in Earth's field NMR, J-coupling signals of the order of hertz usually dominate chemical shifts, which are of the order of millihertz and are not normally resolvable.
Sign of J-coupling
The value of each coupling constant also has a sign, and coupling constants of comparable magnitude often have opposite signs. If the coupling constant between two given spins is negative, the energy is lower when these two spins are parallel, and conversely if their coupling constant is positive. For a molecule with a single J-coupling constant, the appearance of the NMR spectrum is unchanged if the sign of the coupling constant is reversed, although spectral lines at given positions may represent different transitions. The simple NMR spectrum therefore does not indicate the sign of the coupling constant, which there is no simple way of predicting.
However for some molecules with two distinct J-coupling constants, the relative signs of the two constants can be experimentally determined by a double resonance experiment. For example in the diethylthallium ion (C2H5)2Tl+, this method showed that the methyl-thallium (CH3-Tl) and methylene-thallium (CH2-Tl) coupling constants have opposite signs.
The first experimental method to determine the absolute sign of a J-coupling constant was proposed in 1962 by Buckingham and Lovering, who suggested the use of a strong electric field to align the molecules of a polar liquid. The field produces a direct dipolar coupling of the two spins, which adds to the observed J-coupling if their signs are parallel and subtracts from the observed J-coupling if their signs are opposed (Buckingham, A. D. and Lovering, E. G., "Effects of a strong electric field on NMR spectra. The absolute sign of the spin coupling constant", Transactions of the Faraday Society, 58, 2077–2081 (1962), https://doi.org/10.1039/TF9625802077). This method was first applied to 4-nitrotoluene, for which the J-coupling constant between two adjacent (or ortho) ring protons was shown to be positive because the splitting of the two peaks for each proton decreases with the applied electric field.
Another way to align molecules for NMR spectroscopy is to dissolve them in a nematic liquid crystal solvent. This method has also been used to determine the absolute sign of J-coupling constants.
J-coupling Hamiltonian
The Hamiltonian of a molecular system may be taken as:
H = D1 + D2 + D3, where
D1 = electron orbital–orbital, spin–orbital, spin–spin and electron-spin–external-field interactions
D2 = magnetic interactions between nuclear spin and electron spin
D3 = direct interaction of nuclei with each other
For a singlet molecular state and frequent molecular collisions, D1 and D3 are almost zero. The full form of the J-coupling interaction between spins Ij and Ik on the same molecule is:
H = 2π Ij · Jjk · Ik
where Jjk is the J-coupling tensor, a real 3 × 3 matrix. It depends on molecular orientation, but in an isotropic liquid it reduces to a number, the so-called scalar coupling. In 1D NMR, the scalar coupling leads to oscillations in the free induction decay as well as splittings of lines in the spectrum.
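The effect of the scalar coupling on a spectrum can be made concrete with a small numerical model. The following Python sketch uses illustrative offsets and coupling constant, not values from any experiment described here: it builds the two-spin Hamiltonian H = ν1 Iz1 + ν2 Iz2 + J I1·I2 for two spin-1/2 nuclei, diagonalizes it, and prints the observable single-quantum transition frequencies:

import numpy as np

# spin-1/2 operators in units of hbar
Ix = np.array([[0, 0.5], [0.5, 0]], dtype=complex)
Iy = np.array([[0, -0.5j], [0.5j, 0]])
Iz = np.array([[0.5, 0], [0, -0.5]], dtype=complex)
E = np.eye(2)

nu1, nu2, J = 100.0, 500.0, 7.0   # assumed offsets and coupling, in Hz
H = (nu1 * np.kron(Iz, E) + nu2 * np.kron(E, Iz)
     + J * (np.kron(Ix, Ix) + np.kron(Iy, Iy) + np.kron(Iz, Iz)))
evals, vecs = np.linalg.eigh(H)

# transitions with a nonzero matrix element of total Ix are observable
Ix_tot = vecs.conj().T @ (np.kron(Ix, E) + np.kron(E, Ix)) @ vecs
for i in range(4):
    for j in range(i + 1, 4):
        if abs(Ix_tot[i, j]) > 1e-6:
            print(round(abs(evals[j] - evals[i]), 2), "Hz")

The output is two doublets, near 100 Hz and near 500 Hz, each split by approximately J = 7 Hz, which is the splitting described above.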
Decoupling
By selective radio frequency irradiation, NMR spectra can be fully or partially decoupled, eliminating or selectively reducing the coupling effect. Carbon-13 NMR spectra are often recorded with proton decoupling.
History
In September 1951, H. S. Gutowsky, D. W. McCall, and C. P. Slichter reported experiments on HPF6, CH3OPF2, and POCl2F, where they explained the presence of multiple resonance lines with an interaction of the form I1·I2.
Independently, in October 1951, E. L. Hahn and D. E. Maxwell reported a spin echo experiment which indicates the existence of an interaction between two protons in dichloroacetaldehyde. In the echo experiment, two short, intense pulses of radiofrequency magnetic field are applied to the spin ensemble at the nuclear resonance condition and are separated by a time interval of τ. The echo appears with a given amplitude at time 2τ. For each setting of τ, the maximum value of the echo signal is measured and plotted as a function of τ. If the spin ensemble consists of a magnetic moment, a monotonic decay in the echo envelope is obtained. In the Hahn–Maxwell experiment, the decay was modulated by two frequencies: one frequency corresponded with the difference in chemical shift between the two non-equivalent spins and a second frequency, J, that was smaller and independent of magnetic field strength (J = 0.7 Hz).
Such interaction came as a great surprise. The direct interaction between two magnetic dipoles depends on the relative position of two nuclei in such a way that when averaged over all possible orientations of the molecule it equals to zero.
In November 1951, N. F. Ramsey and E. M. Purcell proposed a mechanism that explained the observation and gave rise to an interaction of the form I1·I2. The mechanism is the magnetic interaction between each nucleus and the electron spin of its own atom together with the exchange coupling of the electron spins with each other.
In the 1990s, direct evidence was found for the presence of J-couplings between magnetically active nuclei on both sides of the hydrogen bond. Initially, it was surprising to observe such couplings across hydrogen bonds since J-couplings are usually associated with the presence of purely covalent bonds. However, it is now well established that the H-bond J-couplings follow the same electron-mediated polarization mechanism as their covalent counterparts.
The spin–spin coupling between nonbonded atoms in close proximity has sometimes been observed between fluorine, nitrogen, carbon, silicon and phosphorus atoms.
See also
Earth's field NMR (EFNMR)
Exclusive correlation spectroscopy (ECOSY)
Magnetic dipole–dipole interaction (dipolar coupling)
Nuclear magnetic resonance (NMR)
Nuclear magnetic resonance spectroscopy of carbohydrates
Nuclear magnetic resonance spectroscopy of nucleic acids
Nuclear magnetic resonance spectroscopy of proteins
Proton NMR
Relaxation (NMR)
Residual dipolar coupling
References
Nuclear magnetic resonance | J-coupling | Physics,Chemistry | 2,151 |
56,284,478 | https://en.wikipedia.org/wiki/QuEST | Quantum Entanglement Science and Technology (QuEST) is a research program, announced by the DARPA Microsystems Technology Office (MTO) in 2008. As a follow-on to the QuIST Program, its goal was to further accelerate development in the field of quantum information science.
Example areas under investigation included:
Shor's factoring algorithm,
Quantum machine learning,
Quantum game theory,
Secure quantum communications,
Quantum ghost imaging and interaction-free measurement, quantum image processing,
Remote sensing, quantum radar and quantum metrology, e.g. entanglement-assisted gravitomagnetic interferometry.
See also
IARPA – Intelligence Advanced Research Projects Agency
References
External links
QuEST Program overview (archived web page)
DARPA projects | QuEST | Physics | 152 |
65,888,580 | https://en.wikipedia.org/wiki/Multifit%20algorithm | The multifit algorithm is an algorithm for multiway number partitioning, originally developed for the problem of identical-machines scheduling. It was developed by Coffman, Garey and Johnson. Its novelty comes from the fact that it uses an algorithm for another famous problem - the bin packing problem - as a subroutine.
The algorithm
The input to the algorithm is a set S of numbers, and a parameter n. The required output is a partition of S into n subsets, such that the largest subset sum (also called the makespan) is as small as possible.
The algorithm uses, as a subroutine, an algorithm called first-fit-decreasing bin packing (FFD). The FFD algorithm takes as input the same set S of numbers, and a bin capacity C. It heuristically packs numbers into bins such that the sum of numbers in each bin is at most C, aiming to use as few bins as possible. Multifit runs FFD multiple times, each time with a different capacity C, until it finds some C such that FFD with capacity C packs S into at most n bins. To find it, it uses binary search as follows.
Let L := max ( sum(S) / n, max(S) ). Note, with bin-capacity smaller than L, every packing must use more than n bins.
Let U := max ( 2 sum(S) / n, max(S) ). Note, with bin-capacity at least U, FFD uses at most n bins. Proof: suppose by contradiction that, when FFD runs with some capacity C ≥ U, some input si did not fit into any of the first n bins. Clearly this is possible only if i ≥ n+1. If si > C/2, then, since the inputs are ordered in descending order, the same inequality holds for all the first n+1 inputs in S. This means that sum(S) > (n+1)C/2 > n U/2, a contradiction to the definition of U. Otherwise, si ≤ C/2. Then the free space in each of the first n bins is less than si, so the sum in each of the first n bins is more than C/2. This again implies sum(S) > n C/2 ≥ n U/2, a contradiction.
Iterate k times (where k is a precision parameter):
Let C := (L+U)/2. Run FFD on S with capacity C.
If FFD needs at most n bins, then decrease U by letting U := C.
If FFD needs more than n bins, then increase L by letting L := C.
Finally, run FFD with capacity U. It is guaranteed to use at most n bins. Return the resulting scheduling.
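The procedure above can be rendered in a few lines of Python. The sketch below is ours (the names ffd and multifit are not from any published implementation; the prtpy package mentioned under Implementations provides a production version):

```python
def ffd(items, capacity):
    """First-fit-decreasing: pack items into bins of the given capacity."""
    bins = []
    for x in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + x <= capacity:   # first open bin where x still fits
                b.append(x)
                break
        else:
            bins.append([x])             # no bin fits: open a new one
    return bins


def multifit(items, n, k=10):
    """Partition items into n subsets, minimizing the largest subset sum."""
    lower = max(sum(items) / n, max(items))      # below this, no packing exists
    upper = max(2 * sum(items) / n, max(items))  # here, FFD surely uses <= n bins
    for _ in range(k):                           # binary search on the capacity
        c = (lower + upper) / 2
        if len(ffd(items, c)) <= n:
            upper = c                            # FFD succeeded: tighten from above
        else:
            lower = c                            # FFD failed: tighten from below
    return ffd(items, upper)


print(multifit([9, 7, 6, 5, 5] + [4] * 9, n=4))
```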
Performance
Multifit is a constant-factor approximation algorithm. It always finds a partition in which the makespan is at most a constant factor larger than the optimal makespan. To find this constant, we must first analyze FFD. While the standard analysis of FFD considers approximation w.r.t. number of bins when the capacity is constant, here we need to analyze approximation w.r.t. capacity when the number of bins is constant. Formally, for every input set S and integer n, let OPT(S, n) be the smallest capacity such that S can be packed into n bins of this capacity. Note that OPT(S, n) is the value of the optimal solution to the original scheduling instance.
Let r_n be the smallest real number such that, for every input S, FFD with capacity r_n · OPT(S, n) uses at most n bins.
Upper bounds
Coffman, Garey and Johnson prove the following upper bounds on r_n:
r_2 ≤ 8/7 ≈ 1.14 for n = 2;
r_3 ≤ 15/13 ≈ 1.15 for n = 3;
r_n ≤ 20/17 ≈ 1.18 for n = 4, 5, 6, 7;
r_n ≤ 122/100 = 1.22 for all n ≥ 8.
During the MultiFit algorithm, the lower bound L is always a capacity with which FFD fails to pack S into n bins. Therefore, L < r_n · OPT(S, n). Initially, the difference U − L is at most sum(S) / n, which is at most OPT(S, n). After the MultiFit algorithm runs for k iterations, the difference shrinks k times by half, so U − L ≤ OPT(S, n) / 2^k. Therefore, U < (r_n + 2^−k) · OPT(S, n), and the scheduling returned by MultiFit has makespan at most (r_n + 2^−k) times the optimal makespan. When k is sufficiently large, the approximation factor of MultiFit can be made arbitrarily close to r_n, which is at most 1.22.
Later papers performed a more detailed analysis of MultiFit, and proved that its approximation ratio is at most 6/5 = 1.2, and later, at most 13/11 ≈ 1.182. The original proof of the 13/11 bound missed some cases; a later paper presented a complete and simpler proof. The 13/11 cannot be improved: see the lower bound below.
Lower bounds
For n=4: the following shows that r_4 ≥ 20/17, which is tight. The inputs are 9,7,6,5,5, 4,4,4,4,4,4,4,4,4. They can be packed into 4 bins of capacity 17 as follows:
9, 4, 4
7, 6, 4
5, 4, 4, 4
5, 4, 4, 4
But if we run FFD with bin capacity smaller than 20, then the filled bins are:
9,7 [4 does not fit]
6,5,5 [4 does not fit]
4,4,4,4 [4 does not fit]
4,4,4,4
4
Note that the sum in each of the first 4 bins is 16, so we cannot put another 4 inside it. Therefore, 4 bins are not sufficient.
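Assuming the ffd helper from the sketch in the Algorithm section, this instance can be checked mechanically; capacities 17 and 19 both force a fifth bin, while 20 suffices:

```python
items = [9, 7, 6, 5, 5] + [4] * 9
for c in (17, 19, 20):
    print(c, len(ffd(items, c)))
# prints: 17 -> 5 bins, 19 -> 5 bins, 20 -> 4 bins,
# even though the hand-made packing above fits in 4 bins of capacity 17.
```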
For n=13: the following shows that r_13 ≥ 13/11, which is tight. The inputs can be packed into 13 bins of capacity 66 as follows:
40,13,13 {8 times}
25,25,16 {3 times}
25,24,17 {2 times}
But if we run FFD with bin capacity smaller than 66*13/11 = 78, then the filled bins are:
40,25 {8 times}
24, 24, 17
17, 16, 16, 16
13, 13, 13, 13, 13 {3 times}
13
Note that the sum in each of the first 13 bins is 65, so we cannot put another 13 inside it. Therefore, 13 bins are not sufficient.
Performance with uniform machines
MultiFit can also be used in the more general setting called uniform-machines scheduling, where machines may have different processing speeds. Tight worst-case approximation factors are known for the case of two uniform machines, and combining MultiFit with the LPT algorithm improves the ratio further.
Performance for maximizing the smallest sum
A dual goal to minimizing the largest sum (makespan) is maximizing the smallest sum. Deuermeyer, Friesen and Langston claim that MultiFit does not have a good approximation factor for this problem: "In the solution of the makespan problem using MULTIFIT, it is easy to construct examples where one processor is never used. Such a solution is tolerable for the makespan problem, but is totally unacceptable for our problem [since the smallest sum is 0]. Modifications of MULTIFIT can be devised which would be more suitable for our problem, but we could find none which produces a better worst-case bound than that of LPT."
Proof idea
Minimal counterexamples
The upper bounds on r_n are proved by contradiction. For any integers p ≥ q, if r_n > p/q, then there exists a (p/q)-counterexample, defined as an instance S and a number n of bins such that
S can be packed into n bins with capacity q;
FFD does not manage to pack S into n bins with capacity p.
If there exists such a counterexample, then there also exists a minimal (p/q)-counterexample, which is a (p/q)-counterexample with a smallest number of items in S and a smallest number of bins n. In a minimal (p/q)-counterexample, FFD packs all items in S except the last (smallest) one into n bins with capacity p. Given a minimal (p/q)-counterexample, denote by P1,...,Pn the (incomplete) FFD packing into these n bins with capacity p, by Pn+1 the bin containing the single smallest item, and by Q1,...,Qn the (complete) optimal packing into n bins with capacity q. The following lemmas can be proved:
No union of k subsets from {Q1,...,Qn} is dominated by a union of k subsets from {P1,...,Pn+1} ("dominated" means that each item in the dominated subset is mapped to a weakly-larger item in the dominating subset). Otherwise we could get a smaller counterexample as follows. [1] Delete all items in the k dominating subsets Pi. Clearly, the incomplete FFD packing now needs n−k bins, and still the smallest item (or an entire bin) remains unpacked. [2] In the optimal packing, exchange each item in the k dominated subsets Qi with its dominating item. Now, the k subsets Qi are larger (possibly larger than q), but all other n−k subsets are smaller (in particular, at most q). Therefore, after deleting all items in the Pi, the remaining items can be packed into at most n−k bins of size q.
Each of Q1,...,Qn contains at least 3 items. Otherwise there would be domination, and by the previous lemma we could get a smaller counterexample. This is because [a] each Qi with a single item is dominated by the Pj that contains that item; [b] for each Qi with two items x and y, if both x and y are in the same Pj, then Qi is dominated by this Pj; [c] suppose x ≥ y, x is in some Pj, and y is in some Pk to its right. This means that y did not fit into Pj. But x + y ≤ q, so besides x, Pj must contain some item z ≥ y. So Qi is dominated by Pj. [d] Suppose x ≥ y, x is in some Pj, and y is in some Pk to its left. Then Pk must contain a previously-placed item z ≥ x. So Qi is dominated by Pk.
Each of P1,...,Pn contains at least 2 items. This is because, if some Pi contains only a single item, this implies that the last (smallest) item does not fit into it. This means that this single item must be alone in an optimal bundle, contradicting the previous lemma.
Let s be the size of the smallest item. Then s > p − q. Proof: since s does not fit into any of the first n bins, the sum in each of them is more than p − s, so sum(S) > n(p − s). On the other hand, since all items fit into n bins of capacity q, we have sum(S) ≤ nq. Combining the two inequalities gives nq > n(p − s), hence q > p − s, hence s > p − q.
The size of every item is at most q − 2s. This is because there are at least 3 items in each optimal bin (with capacity q), so each item shares its optimal bin with at least two other items of size at least s each.
The sum of items in every bin P1,...,Pn is larger than p − s; otherwise we could add the smallest item to that bin.
5/4 Upper bound
From the above lemmas, it is already possible to prove a loose upper bound r_n ≤ 5/4. Proof. Let S, n be a minimal (5/4)-counterexample (so p = 5 and q = 4). The above lemmas imply that:
s > p − q = 1. Since every item is larger than 1 and the optimal capacity is 4, no optimal bin can contain 4 or more items. Therefore, each optimal bin must contain at most 3 items, and the number of items is at most 3n.
The size of each item is at most q − 2s = 4 − 2s, and the sum of each FFD bin is more than p − s = 5 − s. If some FFD bin contained only two items, its sum would be at most 2(4 − 2s) = 8 − 4s, which is smaller than 5 − s since s > 1; so each FFD bin must contain at least 3 items. But then the n FFD bins contain at least 3n items, which is all of S, so FFD yields exactly n bins - a contradiction.
Structure of FFD packing
To prove tighter bounds, one needs to take a closer look at the FFD packing of the minimal (p/q)-counterexample. The items and FFD bins P1,...,Pn are termed as follows:
A regular item is an item added to some bin Pi, before the next bin Pi+1 was opened. Equivalently, a regular item is an item in Pi which is at least as large as every item in every bin Pj for j>i.
A fallback item is an item added to some bin Pi, after the next bin Pi+1 was opened. Equivalently, a fallback item is an item in Pi which is smaller than the largest item in Pi+1.
A regular k-bin is a bin that contains k regular items and no fallback items.
A fallback k-bin is a bin that contains k regular items and some fallback items.
The following lemmas follow immediately from these definitions and the operation of FFD.
If k1<k2, then all k1-bins are to the left of all k2-bins. This is because all bins have the same capacity, so if more regular items fit into a bin, these items must be smaller, so they must be allocated later.
If Pi is a k-bin, then the sum of the k regular items in Pi is larger than k·p/(k+1), since otherwise we could add another item before opening a new bin.
If Pi and Pi+1 are both k-bins, then the sum of the k regular items in Pi is at least as large as in Pi+1 (this is because the items are ordered by decreasing size).
All regular k-bins are to the left of all fallback k-bins. This is because all bins have the same capacity, so if more fallback items fit into a bin, these items must be smaller, so they must be allocated later.
In a minimal counterexample, there are no regular 1-bins (since each bin contains at least 2 items), so by the above lemmas, the FFD bins P1,...,Pn are ordered by type:
Zero or more fallback 1-bins;
Then, zero or more regular 2-bins;
Then, zero or more fallback 2-bins;
Then, zero or more regular 3-bins;
Then, zero or more fallback 3-bins;
and so on.
1.22 upper bound
The upper bound r_n ≤ 122/100 is proved by assuming a minimal (122/100)-counterexample (so p = 122 and q = 100). Each item is given a weight based on its size and its bin in the FFD packing. The weights are determined such that the total weight in each FFD bin is at least x, and the total weight in almost each optimal bin is at most x (for some predetermined x). This implies that the number of FFD bins is at most the number of optimal bins, which contradicts the assumption that it is a counterexample.
By the lemmas above, we know that:
The size of the smallest item satisfies s > p-q = 22, so s = 22+D for some D>0.
Each optimal bin contains at most 4 items (floor(100/22)), and each FFD bin contains at most 5 items (floor(122/22)).
The size of every item is at most q-2s = 56-2D.
The sum in each FFD bin is larger than p-s = 100-D.
There are no 1-bins, since in a 1-bin, the size of the regular item must be at least p/2=61, while here the size of every item is less than 56.
If D>4, the size of each item is larger than 26, so each optimal bin (with capacity 100) must contain at most 3 items. Each item is smaller than 56-2D and each FFD bin has a sum larger than 100-D, so each FFD bin must contain at least 3 items. Therefore, there are at most n FFD bins - contradiction. So from now on, we assume D≤4. The items are assigned types and weights as follows.
The two items in each regular 2-bin except maybe the last one have a size larger than (100-D)/2 each. All such items are called type-X2, and assigned a weight of (100-D)/2. The last 2-regular bin is a special case: if both its items have a size larger than (100-D)/2, then they are type-X2 too; otherwise, they are called type-Z, and their weight equals their size.
The two regular items in each fallback 2-bin have a total size larger than 2*122/3; they are called type-Y2, and their weight equals their size minus D.
The three items in each regular 3-bin except maybe the last one have a size larger than (100-D)/3 each. All such items are called type-X3, and assigned a weight of (100-D)/3. The last 3-regular bin is a special case: if all items in it have a size larger than (100-D)/3, then they are type-X3 too; otherwise, they are called type-Z and their weight equals their size.
The three regular items in each fallback 3-bin have a total size larger than 3*122/4; they are called type-Y3, and their weight equals their size minus D.
The four items in each regular 4-bin except maybe the last one have a size larger than (100-D)/4 each. All such items are called type-X4, and assigned a weight of (100-D)/4. The last 4-regular bin is a special case: if all items in it have a size larger than (100-D)/4, then they are type-X4 too; otherwise, they are called type-Z and their weight equals their size.
The remaining items (including all fallback items in fallback 2-bins and 3-bins, all fallback 4-bins, and all other 5-item bins) are all called type-X5, and their weight equals 22 (if D ≤ 12/5) or (100-D)/4 (otherwise). The threshold 12/5 was computed such that the weight is always at most 22+D, so that the weight is always smaller than the size.
Note that the weight of each item is at most its size (the weight can be seen as the size "rounded down"). Still, the total weight of items in every FFD bin is at least 100-D:
For regular 2-bins, regular 3-bins and regular 4-bins:
For the non-last ones, this is immediate.
The last such bins contain only Z-type items, whose weight equals their size, so the total weight of these bins equals their total size, which is more than 100-D.
Fallback 2-bins contain two type-Y2 items with total weight larger than 2*122/3-2D, plus at least one type-X5 item with weight at least 22 (if D ≤ 12/5) or (100-D)/4 (otherwise). In both cases the total weight is more than 100-D.
Fallback 3-bins contain three type-Y3 items with total weight larger than 3*122/4-3D, plus at least one type-X5 item with weight at least 22. So the total weight is more than 3*122/4+22-3D = 113.5-3D ≥ 105.5-D > 100-D, since D≤4.
5-item bins contain 5 items with size at least 22+D and weight at least 22, so their total weight is obviously more than 100-D.
The total weight of items in most optimal bins is at most 100-D:
This is clear for any optimal bin containing a type-Y2 item or a type-Y3 item, since their weight is their size minus D, the weights of other items is at most their size, and the total size of an optimal bin is at most 100.
For optimal bins containing only type-X2, type-X3, type-X4 and type-X5 items, it is possible to check all possible configurations (all combinations that fit into an optimal bin of size 100), and verify that the total weight in each configuration is at most 100-D.
Optimal bins containing type-Z items might have a total weight larger than 100-D. Since the total weight is at most 100, there is an "excess weight" of at most D for each such bin. However, the number of type-Z items is limited:
If D > 12/5, then there are at most 5 type-Z items (2 in the last regular 2-bin and 3 in the last regular 3-bin; the items in the last regular 4-bin are all type-X4). Therefore, the excess weight is at most 5D. Comparing the total weight of FFD vs. optimal bins yields s < 5D ≤ 20 < 22, a contradiction.
Otherwise, there are at most 9 type-Z items (2+3+4). Therefore, the excess weight is at most 9D. Comparing the total weight of FFD vs. optimal bins yields s < 9D ≤ 108/5 < 22, a contradiction.
13/11 upper bound
The upper bound r_n ≤ 13/11 is proved by assuming a minimal ((120-3d)/100)-counterexample, for some d < 20/33 (as d approaches 20/33, the bound (120-3d)/100 approaches 13/11), and deriving a contradiction.
Non-monotonicity
MultiFit is not monotone in the following sense: it is possible that an input decreases while the max-sum in the partition returned by MultiFit increases. As an example, suppose n=3 and the input numbers are: 44, 24, 24, 22, 21, 17, 8, 8, 6, 6. FFD packs these inputs into 3 bins of capacity 60 (which is optimal):
44, 8, 8;
24, 24, 6, 6;
22, 21, 17.
But if the "17" becomes "16", then FFD with capacity 60 needs 4 bins:
44, 16;
24, 24, 8;
22, 21, 8, 6;
6.
so MultiFit must increase the capacity, for example, to 62:
44, 16;
24, 24, 8, 6;
22, 21, 8, 6.
This is in contrast to other number partitioning algorithms - List scheduling and Longest-processing-time-first scheduling - which are monotone.
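Again assuming the ffd helper from the sketch in the Algorithm section, the whole example can be reproduced in a few lines:

```python
original  = [44, 24, 24, 22, 21, 17, 8, 8, 6, 6]
decreased = [44, 24, 24, 22, 21, 16, 8, 8, 6, 6]   # the 17 became 16

print(ffd(original, 60))    # 3 bins: [44,8,8], [24,24,6,6], [22,21,17]
print(ffd(decreased, 60))   # 4 bins: the smaller item breaks the layout
print(ffd(decreased, 62))   # 3 bins again, at the larger capacity 62
```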
Generalization: fair allocation of chores
Multifit has been extended to the more general problem of maximin-share allocation of chores. In this problem, S is a set of chores, and there are n agents who assign potentially different valuations to the chores. The goal is to give each agent i a set of chores worth at most r times the optimal makespan computed according to i's own valuations. A naive approach is to let each agent in turn use the MultiFit algorithm to calculate the threshold, and then use the algorithm where each agent uses his own threshold. If this approach worked, we would get an approximation of 13/11. However, this approach fails due to the non-monotonicity of FFD, as the example below shows.
Example
Here is an example. Suppose there are four agents with valuations of two types, A and B. Under both types of valuations, the chores can be partitioned into 4 parts of total value 75 each. Type A:
51, 12, 12
27.5, 27.5, 10, 10
27.5, 27.5, 10, 10
25, 10, 10, 10, 10, 10
Type B:
51, 24
27.5, 27.5, 20
27.5, 27.5, 20
8.33 {9 times}
If all four agents are of the same type, then FFD with threshold 75 fills the 4 optimal bins. But suppose there is one agent of type B, and the others are of type A. Then, in the first round, the agent of type B takes the bundle 51, 24 (the other agents cannot take it, since for them the values are 51, 25, whose sum is more than 75). In the following rounds, the following bundles are filled for the type A agents:
27.5, 27.5, 12 [the sum is 67 - there is no room for another 10]
27.5, 27.5, 12 [the sum is 67 - there is no room for another 10]
10, 10, 10, 10, 10, 10, 10 [the sum is 70 - there is no room for another 10]
so the last two chores remain unallocated.
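The failure can be reproduced with a small simulation. The encoding of chores as (type-A value, type-B value) pairs and all helper names below are ours, and the type-B agent is assumed to take the first FFD bundle computed with his own valuations and threshold 75:

```python
# Each chore is a pair (value to type A, value to type B), read off the tables above.
chores = ([(51, 51), (25, 24), (12, 20), (12, 20)]
          + [(27.5, 27.5)] * 4 + [(10, 8.33)] * 9)
A = lambda c: c[0]
B = lambda c: c[1]

def first_ffd_bundle(chores, value, threshold):
    """The first FFD bin by one agent's valuation: scan chores in
    decreasing order of value and take everything that still fits."""
    bundle, total = [], 0.0
    for c in sorted(chores, key=value, reverse=True):
        if total + value(c) <= threshold:
            bundle.append(c)
            total += value(c)
    return bundle

def ffd_by(chores, value, capacity):
    """Full first-fit-decreasing packing by one agent's valuation."""
    bins = []
    for c in sorted(chores, key=value, reverse=True):
        for b in bins:
            if sum(value(x) for x in b) + value(c) <= capacity:
                b.append(c)
                break
        else:
            bins.append([c])
    return bins

taken = first_ffd_bundle(chores, B, 75)        # the type-B agent picks {51, 24}
rest = [c for c in chores if c not in taken]   # safe here: the taken chores are unique
print(len(ffd_by(rest, A, 75)))                # -> 4 bundles for only 3 type-A agents
```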
Optimal value guarantee
Using a more sophisticated threshold calculation, it is possible to guarantee to each agent at most 11/9≈1.22 of his optimal value if the optimal value is known, and at most 5/4≈1.25 of his optimal value (using a polynomial time algorithm) if the optimal value is not known.
Using more elaborate arguments, it is possible to guarantee to each agent the same ratio as MultiFit.
Implementations
Python: The prtpy package contains an implementation of multifit.
References
Number partitioning
Optimal scheduling
Bin packing | Multifit algorithm | Mathematics,Engineering | 5,358 |
56,284,003 | https://en.wikipedia.org/wiki/QuIST | The Quantum Information Science and Technology Program (abbreviated as QUIST or QuIST) was a five-year, $100M DARPA research program that ran from FY 2001 – 2005. The initiative was jointly created by the Defense Sciences Office (DSO) and the Information Technology Office (ITO) to accelerate development in the field of quantum computing, quantum communications, quantum algorithms, and other high-priority quantum information applications. As a completed program, QuIST received an award from DARPA in 2008 for scientific breakthroughs previously conducted under its support.
Research
In 2004, QuIST-funded researchers demonstrated the DARPA Quantum Network, the first working quantum key distribution network. At its start, it employed coherent laser pulses over optical fiber media, sending unconditionally-secure messages between Harvard University, Boston University and BBN Technologies in Cambridge, Massachusetts. It later grew to a fully operational, 10 node network, conveying key material both through telecom fiber and the atmosphere. The work was given a DARPA award four years later.
See also
IARPA – Intelligence Advanced Research Projects Agency
QuEST – Quantum Entanglement Science and Technology
References
External links
DARPA
DARPA projects | QuIST | Physics | 236 |
57,406,626 | https://en.wikipedia.org/wiki/Tibric%20acid | Tibric acid is a sulfamylbenzoic acid that acts as a hypolipidemic agent. Although it was found to be more powerful than clofibrate in lowering lipid levels, it was found to cause liver cancer in mice and rats, and so was not introduced as a human drug. In rats it causes an increase in peroxisomes, and liver enlargement, and then liver cancer. However the peroxisome changes do not occur in humans, and it is not likely to cause liver cancer in humans.
Synthesis
Tibric acid can be made in a multi-step process. First, 2-chlorobenzoic acid is reacted with chlorosulfonic acid to add a chlorosulfonyl group at the 5-position. The product then reacts with 3,5-dimethylpiperidine to yield tibric acid.
References
Hypolipidemic agents
Sulfonamides
Abandoned drugs
Benzoic acids | Tibric acid | Chemistry | 202 |
11,275,526 | https://en.wikipedia.org/wiki/Audio%20router | An audio router is a device that transports audio signals from inputs to outputs.
Inputs and Outputs
The number of inputs and outputs varies dramatically. Routers are normally described as the number of inputs by the number of outputs, e.g. 2×1, 256×256.
Signals
The signals transported and switched can be analogue (analog) audio signals or digital. Digital audio is usually in the AES/EBU standard for broadcast use. Broadband routers can route more than one signal type, e.g. analogue plus one or more digital formats.
Crosspoints
Because any of the inputs can be routed to any output, the internal arrangement of the router is arranged as a number of crosspoints which can be activated to pass the corresponding signal to the desired output.
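As an illustration, a crosspoint matrix can be modeled as a mapping from each output to its currently selected input. The toy Python class below is ours and does not correspond to any particular product's control API:

```python
class CrosspointRouter:
    """Toy model of an N-input x M-output audio router's crosspoint matrix.
    Activating crosspoint (inp, out) routes input inp to output out;
    each output listens to at most one input at a time."""

    def __init__(self, n_inputs, n_outputs):
        self.n_inputs, self.n_outputs = n_inputs, n_outputs
        self.route_for_output = {}           # output -> selected input

    def set_crosspoint(self, inp, out):
        assert 0 <= inp < self.n_inputs and 0 <= out < self.n_outputs
        self.route_for_output[out] = inp     # replaces any previous route

    def outputs_fed_by(self, inp):
        return [o for o, i in self.route_for_output.items() if i == inp]


router = CrosspointRouter(256, 256)   # a "256x256" router
router.set_crosspoint(3, 10)          # input 3 -> output 10
router.set_crosspoint(3, 11)          # one input can feed many outputs
print(router.outputs_fed_by(3))       # [10, 11]
```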
Some manufacturers of audio routers
Lawo
Datavideo
Imagine Communications
AEQ
FOR-A
Klotz Digital
NVISION
Panasonic
Philips
Ross Video
Snell & Wilcox
Sony
Thomson Grass Valley
Utah Scientific
Matrix Switch Corporation
See also
Video router
Vision mixer
Television technology
Television terminology | Audio router | Technology | 213 |
4,739,349 | https://en.wikipedia.org/wiki/STED%20microscopy | Stimulated emission depletion (STED) microscopy is one of the techniques that make up super-resolution microscopy. It creates super-resolution images by the selective deactivation of fluorophores, minimizing the area of illumination at the focal point, and thus enhancing the achievable resolution for a given system. It was developed by Stefan W. Hell and Jan Wichmann in 1994, and was first experimentally demonstrated by Hell and Thomas Klar in 1999. Hell was awarded the Nobel Prize in Chemistry in 2014 for its development. In 1986, V.A. Okhonin (Institute of Biophysics, USSR Academy of Sciences, Siberian Branch, Krasnoyarsk) had patented the STED idea. This patent was unknown to Hell and Wichmann in 1994.
STED microscopy is one of several super-resolution microscopy techniques that have recently been developed to bypass the diffraction limit of light microscopy and increase resolution. STED is a deterministic functional technique that exploits the non-linear response of fluorophores commonly used to label biological samples in order to achieve an improvement in resolution; that is to say, STED allows images to be taken at resolutions below the diffraction limit. This differs from stochastic functional techniques such as photoactivated localization microscopy (PALM) and stochastic optical reconstruction microscopy (STORM), as these methods use mathematical models to reconstruct a sub-diffraction-limit image from many sets of diffraction-limited images.
Background
In traditional microscopy, the resolution that can be obtained is limited by the diffraction of light. Ernst Abbe developed an equation to describe this limit. The equation is:
D = λ / (2·NA) = λ / (2·n·sin α)
where D is the diffraction limit, λ is the wavelength of the light used to excite the specimen, and NA is the numerical aperture: the refractive index n of the medium multiplied by sin α, where α is the half-angle over which the objective gathers light from the specimen. To obtain high resolution (i.e. small D values), short wavelengths and high NA values are optimal. This diffraction limit is the standard by which all super resolution methods are measured. Because STED selectively deactivates the fluorescence, it can achieve resolution better than traditional confocal microscopy. Normal fluorescence occurs by exciting an electron from the ground state into an excited electronic state of a different fundamental energy level (S0 goes to S1) which, after relaxing back to the vibrational ground state (of S1), emits a photon by dropping from S1 to a vibrational energy level on S0. STED interrupts this process before the photon is released. The excited electron is forced to relax into a higher vibrational state than the fluorescence transition would enter, causing the photon to be released to be red-shifted. Because the electron is going to a higher vibrational state, the energy difference of the two states is lower than the normal fluorescence difference. This lowering of energy raises the wavelength, and causes the photon to be shifted farther into the red end of the spectrum. This shift differentiates the two types of photons, and allows the stimulated photon to be ignored.
To force this alternative emission to occur, an incident photon must strike the fluorophore. This need to be struck by an incident photon has two implications for STED. First, the number of incident photons directly impacts the efficiency of this emission, and, secondly, with sufficiently large numbers of photons fluorescence can be completely suppressed. To achieve the large number of incident photons needed to suppress fluorescence, the laser used to generate the photons must be of a high intensity. Unfortunately, this high intensity laser can lead to the issue of photobleaching the fluorophore. Photobleaching is the name for the destruction of fluorophores by high intensity light.
Process
STED functions by depleting fluorescence in specific regions of the sample while leaving a center focal spot active to emit fluorescence. This focal area can be engineered by altering the properties of the pupil plane of the objective lens. The most common early example of these diffractive optical elements, or DOEs, is a torus shape used in two-dimensional lateral confinement shown below. The red zone is depleted, while the green spot is left active. This DOE is generated by a circular polarization of the depletion laser, combined with an optical vortex. The lateral resolution of this DOE is typically between 30 and 80 nm. However, values down to 2.4 nm have been reported. Using different DOEs, axial resolution on the order of 100 nm has been demonstrated. A modified Abbe's equation describes this sub diffraction resolution as:
d ≈ λ / (2·n·sin α·√(1 + I/Isat))
where n is the refractive index of the medium, I is the maximum intensity of the depletion laser and Isat is the saturation intensity. The saturation factor ζ = I/Isat expresses the ratio of the applied (maximum) STED intensity to the saturation intensity; the larger it is, the smaller the effective focal spot.
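As a rough numerical illustration of the formulas above (the formula as reconstructed here and all numeric values are ours, chosen only for illustration):

```python
import math

def sted_resolution(wavelength_nm, na, saturation_factor):
    """Reconstructed STED resolution: d = lambda / (2*NA*sqrt(1 + I/Isat))."""
    return wavelength_nm / (2 * na * math.sqrt(1 + saturation_factor))

# Illustrative values: 592 nm depletion-relevant wavelength, oil objective NA 1.4.
for zeta in (0, 10, 100):
    d = sted_resolution(592, 1.4, zeta)
    print(f"I/Isat = {zeta:>3}: d ~ {d:.0f} nm")
# zeta = 0 recovers the ordinary diffraction limit (~211 nm here);
# increasing the depletion intensity shrinks the effective focal spot.
```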
To optimize the effectiveness of STED, the destructive interference in the center of the focal spot needs to be as close to perfect as possible. That imposes certain constraints on the optics that can be used.
Dyes
Early on in the development of STED, the number of dyes that could be used in the process was very limited. Rhodamine B was named in the first theoretical description of STED. As a result, the first dyes used were laser dyes emitting in the red spectrum. To allow for STED analysis of biological systems, the dyes and laser sources must be tailored to the system. This desire for better analysis of these systems has led to living cell STED and multicolor STED, but it has also demanded more and more advanced dyes and excitation systems to accommodate the increased functionality.
One such advancement was the development of immunolabeled cells. These cells are STED fluorescent dyes bound to antibodies through amide bonds. The first use of this technique coupled MR-121SE, a red dye, with a secondary anti-mouse antibody. Since that first application, this technique has been applied to a much wider range of dyes including green emitting, Atto 532, and yellow emitting, Atto 590, as well as additional red emitting dyes. In addition, Atto 647N was first used with this method to produce two-color STED.
Applications
Over the last several years, STED has developed from a complex and highly specific technique to a general fluorescence method. As a result, a number of methods have been developed to expand the utility of STED and to allow more information to be provided.
Structural analysis
From the beginning of the process, STED has allowed fluorescence microscopy to perform tasks that had been possible only with electron microscopy. As an example, STED was used for the analysis of protein structure at a sub-organelle level. The common proof of this level of study is the observation of cytoskeletal filaments. In addition, neurofilaments, actin, and tubulin are often used to compare the resolving power of STED and confocal microscopes.
Using STED, a lateral resolution of 70 – 90 nm has been achieved while examining SNAP25, a human protein that regulates membrane fusion. This observation has shown that SNAP25 forms clusters independently of the SNARE motif's functionality, and binds to clustered syntaxin. Studies of complex organelles, like mitochondria, also benefit from STED microscopy for structural analysis. Using custom-made STED microscopes with a lateral resolution of less than 50 nm, mitochondrial proteins Tom20, VDAC1, and COX2 were found to distribute as nanoscale clusters. Another study, using a home-built STED microscope and a DNA-binding fluorescent dye, measured lengths of DNA fragments much more precisely than conventional measurement with confocal microscopy.
Correlative methods
Due to its function, STED microscopy can often be used with other high-resolution methods. The resolution of both electron and atomic force microscopy is even better than STED resolution, but by combining atomic force with STED, Shima et al. were able to visualize the actin cytoskeleton of human ovarian cancer cells while observing changes in cell stiffness.
Multicolor
Multicolor STED was developed in response to a growing problem in using STED to study the dependency between structure and function in proteins. To study this type of complex system, at least two separate fluorophores must be used. Using two fluorescent dyes and beam pairs, colocalized imaging of synaptic and mitochondrial protein clusters is possible with a resolution down to 5 nm [18]. Multicolor STED has also been used to show that different populations of synaptic vesicle proteins do not mix or escape synaptic boutons. By using two-color STED with multi-lifetime imaging, three-channel STED is possible.
Live-cell
Early on, STED was thought to be a useful technique for working with living cells. Unfortunately, the only way for cells to be studied was to label the plasma membrane with organic dyes. Combining STED with fluorescence correlation spectroscopy showed that cholesterol-mediated molecular complexes trap sphingolipids, but only transiently. However, only fluorescent proteins provide the ability to visualize any organelle or protein in a living cell. This method was shown to work at 50 nm lateral resolution within Citrine-tubulin expressing mammalian cells. In addition to detecting structures in mammalian cells, STED has allowed for the visualization of clustering YFP tagged PIN proteins in the plasma membrane of plant cells.
Recently, multicolor live-cell STED was performed using a pulsed far-red laser and CLIPf-tag and SNAPf-tag expression.
In the brain of intact animals
Superficial layers of mouse cortex can be repetitively imaged through a cranial window. This allows following the fate and shape of individual dendritic spines for many weeks. With two-color STED, it is even possible to resolve the nanostructure of the postsynaptic density in live animals.
STED at video rates and beyond
Super-resolution requires small pixels, which means more points to acquire from in a given sample, which leads to a longer acquisition time. However, the focal spot size depends on the intensity of the laser being used for depletion. As a result, this spot size can be tuned, trading resolution against imaging speed. A compromise can then be reached between these two factors for each specific imaging task. Rates of 80 frames per second have been recorded, with focal spots around 60 nm. Up to 200 frames per second can be reached for small fields of view.
Problems
Photobleaching can occur either from excitation into an even higher excited state, or from excitation in the triplet state. To prevent the excitation of an excited electron into another, higher excited state, the energy of the photon needed to trigger the alternative emission should not overlap the energy of the excitation from one excited state to another. This will ensure that each laser photon that contacts the fluorophores will cause stimulated emission, and not cause the electron to be excited to another, higher energy state. Triplet states are much longer lived than singlet states, and to prevent triplet states from exciting, the time between laser pulses needs to be long enough to allow the electron to relax through another quenching method, or a chemical compound should be added to quench the triplet state.
See also
Confocal microscopy
Fluorescence
Fluorescence microscope
Ground state depletion microscopy
Laser scanning confocal microscopy
Optical microscope
Photoactivated localization microscopy
Stochastic optical reconstruction microscopy
Super-resolution microscopy
References
External links
Overview at the Department of NanoBiophotonics at the Max Planck Institute for Biophysical Chemistry.
Brief summary of the RESOLFT equations developed by Stefan Hell.
Stefan Hell lecture: Super-Resolution: Overview and Stimulated Emission Depletion (STED) Microscopy
Light Microscopy: An ongoing contemporary revolution (Introductory Review)
Cell imaging
Diffraction
Laboratory equipment
Optical microscopy techniques | STED microscopy | Physics,Chemistry,Materials_science,Biology | 2,529 |
64,099,734 | https://en.wikipedia.org/wiki/Scardovia%20wiggsiae | Scardovia wiggsiae is a species of bacterium in the family Bifidobacteriaceae. In 2011, a study carried out using anaerobic culture conditions allowed the identification of a newly named species, Scardovia wiggsiae, which was significantly associated with severe ECC (early childhood caries, a particularly severe manifestation of carious pathology affecting children between birth and 71 months of age). The paper of Bossù et al. 2020 shows that S. wiggsiae forms biofilm and illustrates for the first time, with high-resolution scanning electron microscopy images, the morphology of this bacterium and its biofilm. Images were obtained using an original scanning electron microscopy protocol, the OsO4-RR-TA-IL treatment. The biofilm had an intricate three-dimensional architecture made of EPS trabeculae, within which a complex micro-canalicular system developed. S. wiggsiae has the aspect of an elongated bacterium, without pili or fimbriae. It forms clusters of bacteria embedded in the EPS scaffold.
References
Bacteria described in 2011
Bifidobacteriales | Scardovia wiggsiae | Biology | 232 |
6,870,342 | https://en.wikipedia.org/wiki/Multi-document%20summarization | Multi-document summarization is an automatic procedure aimed at extraction of information from multiple texts written about the same topic. The resulting summary report allows individual users, such as professional information consumers, to quickly familiarize themselves with information contained in a large cluster of documents. In this way, multi-document summarization systems complement news aggregators, performing the next step down the road of coping with information overload.
Key benefits and difficulties
Multi-document summarization creates information reports that are both concise and comprehensive.
With different opinions being put together & outlined, every topic is described from multiple perspectives within a single document.
While the goal of a brief summary is to simplify information search and cut the time by pointing to the most relevant source documents, comprehensive multi-document summary should in theory contain the required information, hence limiting the need for accessing original files to cases when refinement is required. In practice, it is hard to summarize multiple documents with conflicting views and biases. In fact, it is almost impossible to achieve clear extractive summarization of documents with conflicting views. Abstractive summarization is the preferred venue in this case.
Automatic summaries present information extracted from multiple sources algorithmically, without any editorial touch or subjective human intervention, thus making it completely unbiased. The difficulties remain, if doing automatic extractive summaries of documents with conflicting views.
Technological challenges
The multi-document summarization task is more complex than summarizing a single document, even a long one. The difficulty arises from thematic diversity within a large set of documents. A good summarization technology aims to combine the main themes with completeness, readability, and concision. The Document Understanding Conferences, conducted annually by NIST, have developed sophisticated evaluation criteria for techniques accepting the multi-document summarization challenge.
An ideal multi-document summarization system not only shortens the source texts, but also presents information organized around the key aspects to represent diverse views. Success produces an overview of a given topic. Such text compilations should also follow basic requirements for an overview text compiled by a human. The multi-document summary quality criteria are as follows:
clear structure, including an outline of the main content, from which it is easy to navigate to the full text sections
text within sections is divided into meaningful paragraphs
gradual transition from more general to more specific thematic aspects
good readability.
The latter point deserves an additional note. Care is taken to ensure that the automatic overview shows:
no paper-unrelated "information noise" from the respective documents (e.g., web pages)
no dangling references to what is not mentioned or explained in the overview
no text breaks across a sentence
no semantic redundancy.
Real-life systems
The multi-document summarization technology is now coming of age - a view supported by a choice of advanced web-based systems that are currently available.
ReviewChomp presents summaries of customer reviews for any given product or service. Some products have thousands of online reviews which renders the reviews unreadable by humans in real time. Search for the product or service is performed by the website.
Ultimate Research Assistant - performs text mining on Internet search results to help summarize and organize them and make it easier for the user to perform online research. Specific text mining techniques used by the tool include concept extraction, text summarization, hierarchical concept clustering (e.g., automated taxonomy generation), and various visualization techniques, including tag clouds and mind maps.
iResearch Reporter - a commercial text extraction and text summarization system. The free demo site accepts a user-entered query, passes it on to the Google search engine, retrieves multiple relevant documents, and produces categorized, easily readable natural-language summary reports covering the retrieved set, with all extracts linked to the original documents on the Web. Its toolset covers post-processing, entity extraction, event and relationship extraction, extract clustering, and linguistic analysis.
Newsblaster is a system that helps users find news that is of the most interest to them. The system automatically collects, clusters, categorizes, and summarizes news from several sites on the web (CNN, Reuters, Fox News, etc.) on a daily basis, and it provides users an interface to browse the results.
NewsInEssence may be used to retrieve and summarize a cluster of articles from the web. It can start from a URL and retrieve documents that are similar, or it can retrieve documents that match a given set of keywords. NewsInEssence also downloads news articles daily and produces news clusters from them.
NewsFeed Researcher is a news portal performing continuous automatic summarization of documents initially clustered by the news aggregators (e.g., Google News). NewsFeed Researcher is backed by a free online engine covering major events related to business, technology, U.S. and international news. This tool is also available in on-demand mode allowing a user to build a summaries on selected topics.
Scrape This is like a search engine, but instead of providing links to the most relevant websites based on a query, it scrapes the pertinent information off of the relevant websites and provides the user with a consolidated multi-document summary, along with dictionary definitions, images, and videos.
JistWeb is a query specific multiple document summariser.
As auto-generated multi-document summaries increasingly resemble the overviews written by a human, their use of extracted text snippets may one day face copyright issues in relation to the fair use copyright concept.
Bibliography
Dragomir R. Radev, Hongyan Jing, Malgorzata Styś, and Daniel Tam. Centroid-based summarization of multiple documents. Information Processing and Management, 40:919–938, December 2004.
Kathleen R. McKeown and Dragomir R. Radev. Generating summaries of multiple news articles. In Proceedings, ACM Conference on Research and Development in Information Retrieval SIGIR'95, pages 74–82, Seattle, Washington, July 1995.
C.-Y. Lin, E. Hovy, "From single to multi-document summarization: A prototype system and its evaluation", In "Proceedings of the ACL", pp. 457–464, 2002
Kathleen McKeown, Rebecca J. Passonneau, David K. Elson, Ani Nenkova, Julia Hirschberg, "Do Summaries Help? A Task-Based Evaluation of Multi-Document Summarization", SIGIR’05, Salvador, Brazil, August 15–19, 2005
R. Barzilay, N. Elhadad, K. R. McKeown, "Inferring strategies for sentence ordering in multidocument news summarization", Journal of Artificial Intelligence Research, v. 17, pp. 35–55, 2002
M. Soubbotin, S. Soubbotin, "Trade-Off Between Factors Influencing Quality of the Summary", Document Understanding Workshop (DUC), Vancouver, B.C., Canada, October 9–10, 2005
C Ravindranath Chowdary, and P. Sreenivasa Kumar. "Esum: an efficient system for query-specific multi-document summarization." In ECIR (Advances in Information Retrieval), pp. 724–728. Springer Berlin Heidelberg, 2009.
See also
Automatic summarization
Text mining
News aggregators
References
External links
Document Understanding Conferences
Columbia NLP Projects
NewsInEssence: Web-based News Summarization
ReviewChomp
Natural language processing
Information retrieval genres | Multi-document summarization | Technology | 1,594 |
40,106,773 | https://en.wikipedia.org/wiki/The%20Lebanese%20Rocket%20Society%20%28film%29 | The Lebanese Rocket Society is a 2012 Franco-Lebanese documentary film directed by Joana Hadjithomas and Khalil Joreige and released theatrically on 1 May 2013.
Synopsis
In the 1960s, Lebanon was the first Arab country to start sending rockets into the sky. Led by Manoug Manougian, their physics teacher, a small group of students from Haigazian University (called Haigazian College at the time) began tests and launched their first rockets to conquer space under the name "Lebanese Rocket Society". Their work was briefly a source of national pride.
Cast and crew
Director: Joana Hadjithomas and Khalil Joreige
Production: Edouard Mauriat (Mille et une productions), and Georges Shoucair (Abbout Productions)
France Distribution: Urban Distribution
Photography: Jeanne Lapoirie and Khalil Joreige
Animation: Ghassan Halawani
Editing: Tina Baz
Music: Scrambled Eggs
Genre: Documentary
Country of origin: France, Lebanon
Format: DCP
Selections
Official selection at the Doha Tribeca Film Festival 2012
Official selection at the Toronto International Film Festival 2012
Official selection at Cinéma du réel
See also
Lebanese space program
References
2012 documentary films
2012 films
Documentary films about outer space
French documentary films
Lebanese documentary films
2010s French films | The Lebanese Rocket Society (film) | Astronomy | 267 |
15,358,713 | https://en.wikipedia.org/wiki/Cilomilast | Cilomilast (INN, codenamed SB-207,499, proposed trade name Ariflo) is a drug which was developed for the treatment of respiratory disorders such as asthma and chronic obstructive pulmonary disease (COPD). It is orally active and acts as a selective phosphodiesterase-4 inhibitor.
Phosphodiesterase (PDE) inhibitors, such as theophylline, have been used to treat COPD for decades; however, the clinical benefits of these agents have never been shown to outweigh the risks of their numerous adverse effects. Four clinical trials were identified evaluating the efficacy of cilomilast; the usual randomized, double-blind, placebo-controlled protocols were used. It showed reasonable efficacy for treating COPD, but side effects were problematic, and it is unclear whether cilomilast will be marketed or merely used in the development of newer drugs.
Cilomilast is a second-generation PDE4 inhibitor with anti-inflammatory effects that target bronchoconstriction, mucus hypersecretion, and airway remodeling associated with COPD.
History
GlaxoSmithKline (GSK) filed for drug approval with the U.S. FDA at the end of 2002 and in January 2003 with the European Medicines Evaluation Agency (EMEA). In October 2003, the FDA issued an approvable letter for use of cilomilast in maintenance of lung function in COPD patients poorly responsive to salbutamol, despite an earlier decision by the FDA advisory panel to reject approval. The rejection was based on concerns over the efficacy of the agent, as well as gastrointestinal side effects. Before issuing final approval, however, the FDA requested additional efficacy and safety data. The development of the drug was finally abandoned by GSK.
Synthesis
References
Abandoned drugs
Carboxylic acids
Nitriles
Catechol ethers
PDE4 inhibitors
Cyclopentanes
Cyclohexanes | Cilomilast | Chemistry | 417 |
54,903,105 | https://en.wikipedia.org/wiki/Genetically%20modified%20food%20in%20Asia | India and China are the two largest producers of genetically modified products in Asia. India currently only grows GM cotton, while China produces GM varieties of cotton, poplar, petunia, tomato, papaya and sweet pepper. Cost of enforcement of regulations in India are generally higher, possibly due to the greater influence farmers and small seed firms have on policy makers, while the enforcement of regulations was more effective in China. Other Asian countries that grew GM crops in 2011 were Pakistan, the Philippines and Myanmar. GM crops were approved for commercialisation in Bangladesh in 2013 and in Vietnam and Indonesia in 2014.
China
GM crops in China go through three phases of field trials (pilot field testing, environmental release testing, and preproduction testing) before they are submitted to the Office of Agricultural Genetic Engineering Biosafety Administration (OAGEBA) for assessment. Producers must apply to OAGEBA at each stage of the field tests. The Chinese Ministry of Science and Technology developed the first biosafety regulations for GM products in 1993 and they were updated in 2001. The 75 member National Biosafety Committee evaluates all applications, although OAGEBA has the final decision. Most of the National Biosafety Committee are involved in biotechnology leading to criticisms that they do not represent a wide enough range of public concerns.
India
The release of transgenic crops in India is governed by the Indian Environment Protection Act, which was enacted in 1986. The Institutional Biosafety Committee (IBSC), Review Committee on Genetic Manipulation (RCGM) and Genetic Engineering Approval Committee (GEAC) all review any genetically modified organism to be released, with transgenic crops also needing permission from the Ministry of Agriculture. Indian regulators cleared the Bt brinjal, a genetically modified eggplant, for commercialisation in October 2009. Following opposition from some scientists, farmers and environmental groups, a moratorium was imposed on its release in February 2010.
Official Reports on GMO
There have been four official reports on GMOs in India till August 2013:
The ‘Jairam Ramesh Report’ - February 2010, imposing an indefinite moratorium on Bt Brinjal
The Sopory Committee Report - August 2012
The Parliamentary Standing Committee (PSC) Report on GM crops - August 2012
Final Report of The Technical Expert Committee established by Supreme Court - July 2013
Japan
Two laws regulate food safety and food quality in Japan, the Food Sanitation Law passed in 1947 and the Law Concerning Standardization and Proper Labeling of Agricultural and Forestry Products passed in 1950. The Food Sanitation Law has been amended and updated many times; an amendment dealing with pre-market approval and labeling of GMOs was passed in 2000 and came into effect in 2001. Japan passed laws to implement the Cartagena Protocol on Biosafety in September 2003 which came into effect in February 2004 - the Law Concerning the Conservation and Sustainable Use of Biological Diversity through Regulations on the Use of Living Modified Organisms (Law No. 97 of 2003).
Authority for approvals for various uses of genetically modified organisms is divided in Japan. The Ministry of the Environment has final approval for all uses of GMOs, but crops for commercial use and live vaccines for animals first go through the Ministry of Agriculture, Forestry and Fisheries; viruses for gene therapy and other medical applications first go through the Ministry of Health, Labor and Welfare; field trials of GM crops and recombinant DNA used in biotechnology research first go through the Ministry of Education, Culture, Sports, Science and Technology; and uses in the production of industrial enzymes, etc. go through the Ministry of Economy, Trade and Industry.
Japan has not approved any commodity GM crops to be grown in Japan, but does allow import of agricultural products made from GM crops and food made of imported GM ingredients. Japan does however allow cultivation of GM flowers (e.g. Blue roses).
GM foods must undergo a safety assessment prior to being awarded certification for distribution to the domestic market. The Food Safety Commission (FSC) performs food and feed safety risk assessments.
Certain GM food must be labeled, but this is limited to designated genetically modified agricultural products, which are soybean, corn, potato, rapeseed, cottonseed, alfalfa and beet, and is limited to 32 processed foods which contain soybean, corn and potato, alfalfa and beet, in which recombinant DNA or the resulting protein still exists even after processing. However, processed food in which recombinant DNA or protein is dissolved in or removed during processing, such as soy sauce, soybean oil, corn flakes, millet jelly, corn oil, rapeseed oil, cottonseed oil, and others, do not have to be labeled.
Japan does not require traceability, and allows negative labeling ("GMO-free" and the like).
Philippines
The Philippines bans all GMOs, recently overturning existing Department of Agriculture regulations. A petition filed on May 17, 2013 by environmental group Greenpeace Southeast Asia and farmer-scientist coalition Masipag (Magsasaka at Siyentipiko sa Pagpapaunlad ng Agrikultura) asked the appellate court to stop the planting of Bt eggplant in test fields, saying the impacts of such an undertaking on the environment, native crops and human health are still unknown. The Court of Appeals granted the petition, citing the precautionary principle, stating "when human activities may lead to threats of serious and irreversible damage to the environment that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish the threat." Respondents filed a motion for reconsideration in June 2013, and on September 20, 2013 the Court of Appeals chose to uphold its May decision, saying the Bt talong field trials violate the people's constitutional right to a "balanced and healthful ecology." On Tuesday, December 8, 2015, the Supreme Court permanently stopped the field testing of Bt (Bacillus thuringiensis) talong (eggplant), upholding the decision of the Court of Appeals which stopped the field trials for the genetically modified eggplant. The Philippine Supreme Court also took the unprecedented step of invalidating the Department of Agriculture administrative order allowing the field testing, propagation and commercialization, and importation of GMOs.
References
Genetic engineering by country | Genetically modified food in Asia | Engineering,Biology | 1,282 |
78,909,914 | https://en.wikipedia.org/wiki/1%2C3-Bis%283-%28dimethylamino%29propyl%29urea | 1,3-Bis(3-(dimethylamino)propyl)urea is an aliphatic organic chemical principally used as a curing agent in epoxy chemistry and a blowing agent in the polyurethane foam industry. It has the formula C11H26N4O. It is listed on TSCA and on EINECS, and is thus by definition REACH-registered, with EC number 257-861-2. The CAS number is 52338-87-1.
Uses
As the material has tertiary amine functionality, it finds use as a catalyst for polyurethane foam production.
The molecule has two secondary amines and thus can be used to cure epoxy resin based materials. Other uses include as a propellant and as a blowing agent. The amine functionality allows it to be used as an intermediate in the synthesis of other compounds.
References
Ureas
Tertiary amines
Catalysts
Secondary amines
Dimethylamino compounds | 1,3-Bis(3-(dimethylamino)propyl)urea | Chemistry | 198 |
38,360,352 | https://en.wikipedia.org/wiki/Truncated%20order-5%20pentagonal%20tiling | In geometry, the truncated order-5 pentagonal tiling is a uniform tiling of the hyperbolic plane. It has Schläfli symbol of t0,1{5,5}, constructed from one pentagon and two decagons around every vertex.
Related tilings
See also
Square tiling
Uniform tilings in hyperbolic plane
List of regular polytopes
References
John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 19, The Hyperbolic Archimedean Tessellations)
External links
Hyperbolic and Spherical Tiling Gallery
KaleidoTile 3: Educational software to create spherical, planar and hyperbolic tilings
Hyperbolic Planar Tessellations, Don Hatch
Hyperbolic tilings
Isogonal tilings
Order-5 tilings
Pentagonal tilings
Truncated tilings
Uniform tilings | Truncated order-5 pentagonal tiling | Physics | 179 |
37,422,023 | https://en.wikipedia.org/wiki/HP%20Flexible%20Data%20Center | HP Flexible Data Center, also termed FlexDC, is a modular data center built from prefabricated components by Hewlett-Packard and introduced in 2010. It is housed in five large buildings that form the shape of a butterfly. The Flexible DC looks like a traditional building, but it is fabricated off-site in order to circumvent the two years it often takes for traditional building construction. The building consists of a central admin area (the Core), surrounded by 1-4 data halls (the Quadrants). FDC offers cooling options that are optimal for each type of climate.
The FlexDC product line follows from HP's acquisition of EYP Mission Critical Facilities in November 2007. HP currently positions FlexDC at the top end of their modular datacenter product line (above their PODs, which are housed in shipping containers), up to 3.6MW in capacity per facility.
References
External links
Press Release: HP Flexible Data Center
Data centers
Flexible Data Center
Modular datacenter | HP Flexible Data Center | Technology | 207 |
3,186,372 | https://en.wikipedia.org/wiki/Human%E2%80%93robot%20interaction | Human–robot interaction (HRI) is the study of interactions between humans and robots. Human–robot interaction is a multidisciplinary field with contributions from human–computer interaction, artificial intelligence, robotics, natural language processing, design, psychology and philosophy. A subfield known as physical human–robot interaction (pHRI) has tended to focus on device design to enable people to safely interact with robotic systems.
Origins
Human–robot interaction has been a topic of both science fiction and academic speculation even before any robots existed. Because much of active HRI development depends on natural language processing, many aspects of HRI are continuations of human communications, a field of research which is much older than robotics.
The origin of HRI as a discrete problem was stated by 20th-century author Isaac Asimov in 1942, in his short story "Runaround" (later collected in I, Robot). Asimov coined Three Laws of Robotics, namely:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
These three laws provide an overview of the goals engineers and researchers hold for safety in the HRI field, although the fields of robot ethics and machine ethics are more complex than these three principles. However, generally human–robot interaction prioritizes the safety of humans that interact with potentially dangerous robotics equipment. Solutions to this problem range from the philosophical approach of treating robots as ethical agents (individuals with moral agency), to the practical approach of creating safety zones. These safety zones use technologies such as lidar to detect human presence or physical barriers to protect humans by preventing any contact between machine and operator.
Although robots in the human–robot interaction field initially required some human intervention to function, research has expanded autonomy to the extent that fully autonomous systems are now far more common than in the early 2000s. Autonomous systems range from simultaneous localization and mapping (SLAM) systems, which provide intelligent robot movement, to natural-language processing and natural-language generation systems, which allow for natural, human-like interaction that meets well-defined psychological benchmarks.
Anthropomorphic robots (machines which imitate human body structure) are better described by the biomimetics field, but overlap with HRI in many research applications. Examples of robots which demonstrate this trend include Willow Garage's PR2 robot, the NASA Robonaut, and Honda ASIMO. However, robots in the human–robot interaction field are not limited to human-like robots: Paro and Kismet are both robots designed to elicit emotional response from humans, and so fall into the category of human–robot interaction.
Goals in HRI range from industrial manufacturing through Cobots, medical technology through rehabilitation, autism intervention, and elder care devices, entertainment, human augmentation, and human convenience. Future research therefore covers a wide range of fields, much of which focuses on assistive robotics, robot-assisted search-and-rescue, and space exploration.
The goal of friendly human–robot interactions
Robots are artificial agents with capacities of perception and action in the physical world, often referred to by researchers as the workspace. Their use has become widespread in factories, but nowadays they also tend to be found in the most technologically advanced societies in such critical domains as search and rescue, military battle, mine and bomb detection, scientific exploration, law enforcement, entertainment and hospital care.
These new domains of application imply a closer interaction with the user. The concept of closeness is to be taken in its full meaning: robots and humans share the workspace but also share goals in terms of task achievement. This close interaction needs new theoretical models, on one hand for the robotics scientists who work to improve the robots' utility and safety, and on the other hand to evaluate the risks and benefits of this new "friend" for our modern society. The subfield of physical human–robot interaction (pHRI) has largely focused on device design to enable people to safely interact with robotic systems, but is increasingly developing algorithmic approaches in an attempt to support fluent and expressive interactions between humans and robotic systems.
With advances in AI, research is focusing on the safest possible physical interaction, but also on socially correct interaction, which depends on cultural criteria. The goal is to build intuitive and easy communication with the robot through speech, gestures, and facial expressions.
Kerstin Dautenhahn refers to friendly human–robot interaction as "Robotiquette", defining it as the "social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans". The robot has to adapt itself to our way of expressing desires and orders, and not the contrary. But everyday environments such as homes have much more complex social rules than those implied by factories or even military environments. Thus, the robot needs perceiving and understanding capacities to build dynamic models of its surroundings: it needs to categorize objects, recognize and locate humans, and further recognize their emotions. The need for dynamic capacities pushes forward every sub-field of robotics.
Furthermore, by understanding and perceiving social cues, robots can enable collaborative scenarios with humans. For example, with the rapid rise of personal fabrication machines such as desktop 3D printers and laser cutters entering our homes, scenarios may arise where robots collaboratively share control, coordinate, and achieve tasks together. Industrial robots have already been integrated into industrial assembly lines and work collaboratively with humans. The social impact of such robots has been studied, and the findings indicate that workers treat robots as social entities and rely on social cues to understand and work with them.
At the other end of HRI research, the cognitive modelling of the "relationship" between humans and robots benefits both psychologists and robotics researchers: user studies are often of interest to both sides. This research endeavour touches a growing part of human society. For effective human–humanoid robot interaction, numerous communication skills and related features should be implemented in the design of such artificial agents/systems.
General HRI research
HRI research spans a wide range of fields, some general to the nature of HRI.
Methods for perceiving humans
Methods for perceiving humans in the environment are based on sensor information. Research on sensing components and software, led by Microsoft, has provided useful results for extracting human kinematics (see Kinect). An example of an older technique is to use colour information, for instance the fact that for light-skinned people the hands are lighter than the clothes worn. In any case, a human model established a priori can then be fitted to the sensor data. The robot builds or has (depending on its level of autonomy) a 3D map of its surroundings to which the humans' locations are assigned.
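As a toy illustration of the colour-based idea (a sketch of my own with illustrative thresholds, not code from any cited system), a fixed RGB rule can flag candidate skin pixels; real systems use learned colour models and depth data:
import numpy as np
def skin_mask(rgb):
    # Very naive fixed-threshold skin detector over an (H, W, 3) uint8 image;
    # the threshold values are assumptions for illustration only.
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
print(float(skin_mask(frame).mean()))  # fraction of pixels flagged as skin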
Most methods intend to build a 3D model of the environment through vision. Proprioceptive sensors give the robot information about its own state, relative to a reference. Theories of proxemics may be used to perceive and plan around a person's personal space.
A speech-recognition system is used to interpret human desires or commands. By combining the information inferred from proprioception, sensing, and speech, the robot can determine the human's position and state (standing, seated). In this matter, natural-language processing is concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural-language data; for instance, neural-network architectures and learning algorithms can be applied to natural-language processing tasks including part-of-speech tagging, chunking, named-entity recognition, and semantic role labeling.
Methods for motion planning
Motion planning in dynamic environments is a challenge that can at the moment only be achieved for robots with 3 to 10 degrees of freedom. Humanoid robots, or even two-armed robots, which can have up to 40 degrees of freedom, are unsuited to dynamic environments with today's technology. However, lower-dimensional robots can use the potential field method to compute trajectories that avoid collisions with humans.
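A minimal 2D sketch of the potential field method (my own illustrative code; the gains, influence radius and positions are made-up assumptions): the robot descends the gradient of an attractive potential toward the goal plus a repulsive potential away from an obstacle such as a human.
import numpy as np
def potential_step(pos, goal, obstacle, k_att=1.0, k_rep=0.5, influence=2.0, lr=0.05):
    # One gradient-descent step on an attractive-plus-repulsive potential.
    grad = k_att * (pos - goal)          # attractive term pulls toward the goal
    diff = pos - obstacle
    d = np.linalg.norm(diff)
    if d < influence:                    # repulsion acts only inside the influence radius
        grad += -k_rep * (1.0 / d - 1.0 / influence) * diff / d**3
    return pos - lr * grad
pos = np.array([0.0, 0.0])
goal = np.array([5.0, 5.0])
human = np.array([2.5, 2.6])             # obstacle to skirt around
for _ in range(500):
    pos = potential_step(pos, goal, human)
print(np.round(pos, 2))                  # ends near the goal while avoiding the human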
Cognitive models and theory of mind
Humans exhibit negative social and emotional responses, as well as decreased trust, toward some robots that closely but imperfectly resemble humans; this phenomenon has been termed the "uncanny valley". However, recent research on telepresence robots has established that mimicking human body postures and expressive gestures makes the robots likeable and engaging in a remote setting. Further, the presence of a human operator was felt more strongly when tested with an android or humanoid telepresence robot than with normal video communication through a monitor.
While there is a growing body of research about users' perceptions and emotions towards robots, we are still far from a complete understanding. Only additional experiments will determine a more precise model.
Based on past research, we have some indications about current user sentiment and behavior around robots:
During initial interactions, people are more uncertain, anticipate less social presence, and have fewer positive feelings when thinking about interacting with robots, and prefer to communicate with a human. This finding has been called the human-to-human interaction script.
It has been observed that when the robot performs a proactive behaviour and does not respect a "safety distance" (by penetrating the user space) the user sometimes expresses fear. This fear response is person-dependent.
It has also been shown that when a robot has no particular use, negative feelings are often expressed. The robot is perceived as useless and its presence becomes annoying.
People have also been shown to attribute personality characteristics to the robot that were not implemented in software.
People similarly infer the mental states of both humans and robots, except for when robots and humans use non-literal language (such as sarcasm or white lies).
In line with the contact hypothesis, supervised exposure to a social robot can decrease uncertainty and increase willingness to interact with the robot, compared to pre-exposure attitudes toward robots as a class of agents.
Interacting with a robot by looking at or touching the robot can reduce negative feelings that some people have about robots before interacting with them. Even imagined interaction can reduce negative feelings. However, in some cases, interacting with a robot can increase negative feelings for people with strong pre-existing negative sentiments towards robots.
Methods for human–robot coordination
A large body of work in the field of human–robot interaction has looked at how humans and robots may better collaborate. The primary social cue for humans while collaborating is the shared perception of an activity; to this end, researchers have investigated anticipatory robot control through various methods, including monitoring the behaviors of human partners using eye tracking, making inferences about human task intent, and proactive action on the part of the robot. The studies revealed that anticipatory control helped users perform tasks faster than reactive control alone.
A common approach to program social cues into robots is to first study human–human behaviors and then transfer the learning. For example, coordination mechanisms in human–robot collaboration are based on work in neuroscience which examined how to enable joint action in human–human configurations by studying perception and action in a social context rather than in isolation. These studies have revealed that maintaining a shared representation of the task is crucial for accomplishing tasks in groups. For example, the authors examined the task of driving together by separating the responsibilities of acceleration and braking, i.e., one person is responsible for accelerating and the other for braking; the study revealed that pairs reached the same level of performance as individuals only when they received feedback about the timing of each other's actions. Similarly, researchers have studied human–human handovers in household scenarios, like passing dining plates, in order to enable adaptive control of the same in human–robot handovers. Another study, in the domain of human factors and ergonomics, of human–human handovers in warehouses and supermarkets reveals that givers and receivers perceive handover tasks differently, which has significant implications for designing user-centric human–robot collaborative systems. Most recently, researchers have studied a system that automatically distributes assembly tasks among co-located workers to improve coordination.
Robots used for research in HRI
Some research involves designing new robots, while other work uses available robots. Commonly used robots include Nao, a programmable humanoid robot; Pepper, another social humanoid robot; and Misty, a programmable companion robot.
Color
The majority of robots are of a white color, stemming from a bias against robots of other colors.
Application areas
The application areas of human–robot interaction include robotic technologies that are used by humans for industry, medicine, and companionship, among other purposes.
Industrial robots
Industrial robots have been implemented to collaborate with humans to perform industrial manufacturing tasks. While humans have the flexibility and intelligence to consider different approaches to solving a problem, choose the best option, and then command robots to perform assigned tasks, robots are able to be more precise and more consistent in performing repetitive and dangerous work. Together, the collaboration of industrial robots and humans demonstrates that robots can help ensure the efficiency of manufacturing and assembly. However, there are persistent concerns about the safety of human–robot collaboration, since industrial robots can move heavy objects and operate dangerous, sharp tools quickly and with force. As a result, they present a potential threat to the people who work in the same workspace. Therefore, the planning of safe and effective layouts for collaborative workplaces is one of the most challenging topics that research faces.
Medical robots
Rehabilitation
A rehabilitation robot is an example of a robot-aided system implemented in health care. This type of robot aids stroke survivors or individuals with neurological impairment to recover their hand and finger movements. In the past few decades, how humans and robots interact with each other has been a factor widely considered in the design of rehabilitation robots. For instance, human–robot interaction plays an important role in the design of exoskeleton rehabilitation robots, since the exoskeleton system makes direct contact with the human body.
Elder care and companion robot
Nursing robots aim to provide assistance to elderly people who may have experienced a decline in physical and cognitive function and, consequently, developed psychosocial issues. By assisting in daily physical activities, physical assistance from the robots allows the elderly to keep a sense of autonomy and to feel that they are still able to take care of themselves and stay in their own homes.
Long-term research on human–robot interaction has shown that residents of care homes are willing to interact with humanoid robots and benefit from cognitive and physical activation led by the robot Pepper. Another long-term study in a care home showed that people working in the care sector are willing to use robots in their daily work with the residents. However, it also revealed that even though the robots are ready to be used, they still need human assistants; they cannot replace the human workforce, but they can assist it and open up new possibilities.
Social robots
Autism intervention
Over the past decade, human–robot interaction has shown promising outcomes in autism intervention. Children with autism spectrum disorders (ASD) are more likely to connect with robots than with humans, and using social robots is considered a beneficial approach to helping these children with ASD.
However, social robots used to intervene in children's ASD are not viewed as a viable treatment by clinical communities, because studies of social robots in ASD intervention often do not follow standard research protocols. In addition, the outcomes of the research have not demonstrated a consistent positive effect that could be considered evidence-based practice (EBP) under clinical systematic evaluation. As a result, researchers have started to establish guidelines suggesting how to conduct studies with robot-mediated intervention and hence produce reliable data that could be treated as EBP, allowing clinicians to choose to use robots in ASD intervention.
Education robots
Robots can become tutors or peers in the classroom. When acting as a tutor, a robot can provide instruction, information, and individual attention to students. When acting as a peer learner, a robot can enable "learning by teaching" for students.
Rehabilitation
Robots can be configured as collaborative robots and used for the rehabilitation of users with motor impairment. Using various interactive technologies, such as automatic speech recognition and eye-gaze tracking, users with motor impairment can control robotic agents and use them for rehabilitation activities such as powered-wheelchair control and object manipulation.
Automatic driving
A specific example of human–robot interaction is human–vehicle interaction in automated driving. The goal of human–vehicle cooperation is to ensure safety, security, and comfort in automated driving systems. Continued improvement of these systems, and progress towards highly and fully automated vehicles, aim to make the driving experience safer and more efficient, so that humans do not need to intervene in the driving process even in unexpected driving conditions, such as a pedestrian crossing the street where they are not supposed to.
Search and rescue
Unmanned aerial vehicles (UAV) and unmanned underwater vehicles (UUV) have the potential to assist search and rescue work in wilderness areas, such as locating a missing person remotely from the evidence that they left in surrounding areas. The system integrates autonomy and information, such as coverage maps, GPS information and quality search video, to support humans performing the search and rescue work efficiently in the given limited time.
Space exploration
Humans have been working on achieving the next breakthrough in space exploration, such as a crewed mission to Mars. This challenge identified the need for developing planetary rovers that are able to assist astronauts and support their operations during their mission. The collaboration between rovers, UAVs, and humans enables leveraging capabilities from all sides and optimizes task performance.
Agricultural robots
Human labor has long been used extensively in agriculture, but agricultural robots, such as milking robots, have been adopted in large-scale farming. Hygiene is a main issue in the agri-food sector, and the invention of this technology has widely impacted agriculture. Robots can also be used for tasks that might be hazardous to human health, such as the application of chemicals to plants.
See also
Robotics
Autonomous robots
Cobots
Gesture recognition
Humanoid robots
Human–robot collaboration
Mobile robots
Motion planning
Personal robot
Robot simulations
Robot teams
Social robot
Technology
Artificial intelligence
CAPTCHA
Computer supported collaborative work
Dialog management
Face detection
Haptic technology
Human–computer interaction
Interactive Systems Engineering
Multimodal interaction
Natural-language understanding
Telematics
Face recognition
Human sensing
Psychology
Anthropomorphism and the uncanny valley
Properties
Bartneck and Okada suggest that a robotic user interface can be described by the following four properties (see the sketch after this list):
Tool – toy scale
Is the system designed to solve a problem effectively or is it just for entertainment?
Remote control – autonomous scale
Does the robot require remote control or is it capable of action without direct human influence?
Reactive – dialogue scale
Does the robot rely on a fixed interaction pattern or is it able to have dialogue — exchange of information — with a human?
Anthropomorphism scale
Does it have the shape or properties of a human?
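A minimal sketch encoding the four properties as a data structure (purely illustrative; the field names and the 0-to-1 scoring are my assumptions, not part of Bartneck and Okada's formulation):
from dataclasses import dataclass
@dataclass
class RobotUIProfile:
    # Each scale scored 0.0-1.0; the endpoints follow the list above.
    tool_vs_toy: float           # 0 = pure tool, 1 = pure toy
    remote_vs_autonomous: float  # 0 = remote-controlled, 1 = fully autonomous
    reactive_vs_dialogue: float  # 0 = fixed reactions, 1 = full dialogue
    anthropomorphism: float      # 0 = machine-like, 1 = human-like
vacuum_bot = RobotUIProfile(0.0, 0.9, 0.1, 0.0)
print(vacuum_bot)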
Conferences
ACE – International Conference on Future Applications of AI, Sensors, and Robotics in Society
The International Conference on Future Applications of AI, Sensors, and Robotics in Society explores state-of-the-art research, highlighting future challenges as well as the hidden potential behind the technologies. Accepted contributions to this conference are published annually in a special edition of the Journal of Future Robot Life.
International Conference on Social Robotics
The International Conference on Social Robotics is a conference for scientists, researchers, and practitioners to report and discuss the latest progress of their forefront research and findings in social robotics, as well as interactions with human beings and integration into our society.
ICSR2009, Incheon, Korea in collaboration with the FIRA RoboWorld Congress
ICSR2010, Singapore
ICSR2011, Amsterdam, Netherlands
International Conference on Human–Robot Personal Relationships
HRPR2008, Maastricht
HRPR 2009, Tilburg. Keynote speaker was Hiroshi Ishiguro.
HRPR2010, Leiden. Keynote speaker was Kerstin Dautenhahn.
International Congress on Love and Sex with Robots
The International Congress on Love and Sex with Robots is an annual congress that invites and encourages a broad range of topics, such as AI, Philosophy, Ethics, Sociology, Engineering, Computer Science, Bioethics.
The earliest academic papers on the subject were presented at the 2006 E.C. Euron Roboethics Atelier, organized by the School of Robotics in Genoa, followed a year later by the first book – "Love and Sex with Robots" – published by Harper Collins in New York. Since that initial flurry of academic activity in this field the subject has grown significantly in breadth and worldwide interest. Three conferences on Human–Robot Personal Relationships were held in the Netherlands during the period 2008–2010, in each case the proceedings were published by respected academic publishers, including Springer-Verlag. After a gap until 2014 the conferences were renamed as the "International Congress on Love and Sex with Robots", which have previously taken place at the University of Madeira in 2014; in London in 2016 and 2017; and in Brussels in 2019. Additionally, the Springer-Verlag "International Journal of Social Robotics", had, by 2016, published articles mentioning the subject, and an open access journal called "Lovotics" was launched in 2012, devoted entirely to the subject. The past few years have also witnessed a strong upsurge of interest by way of increased coverage of the subject in the print media, TV documentaries and feature films, as well as within the academic community.
The International Congress on Love and Sex with Robots provides an excellent opportunity for academics and industry professionals to present and discuss their innovative work and ideas in an academic symposium.
2020, Berlin, Germany
2019, Brussels, Belgium
2017, London, United Kingdom
2016, London, United Kingdom
2014, Madeira, Portugal
International Symposium on New Frontiers in Human–Robot Interaction
This symposium is organized in collaboration with the Annual Convention of the Society for the Study of Artificial Intelligence and Simulation of Behaviour.
2015, Canterbury, United Kingdom
2014, London, United Kingdom
2010, Leicester, United Kingdom
2009, Edinburgh, United Kingdom
IEEE International Symposium in Robot and Human Interactive Communication
The IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) was founded in 1992 by Profs. Toshio Fukuda, Hisato Kobayashi, Hiroshi Harashima and Fumio Hara. Early workshop participants were mostly Japanese, and the first seven workshops were held in Japan. Since 1999, workshops have been held in Europe and the United States as well as Japan, and participation has been of international scope.
ACM/IEEE International Conference on Human–Robot Interaction
This conference is amongst the best conferences in the field of HRI and has a very selective reviewing process. The average acceptance rate is 26% and the average attendance is 187. Around 65% of the contributions to the conference come from the US, and the high quality of the submissions is reflected in the average of 10 citations that HRI papers have attracted so far.
HRI 2006 in Salt Lake City, Utah, USA, Acceptance Rate: 0.29
HRI 2007 in Washington, D.C., USA, Acceptance Rate: 0.23
HRI 2008 in Amsterdam, Netherlands, Acceptance Rate: 0.36 (0.18 for oral presentations)
HRI 2009 in San Diego, CA, USA, Acceptance Rate: 0.19
HRI 2010 in Osaka, Japan, Acceptance Rate: 0.21
HRI 2011 in Lausanne, Switzerland, Acceptance Rate: 0.22 for full papers
HRI 2012 in Boston, Massachusetts, USA, Acceptance Rate: 0.25 for full papers
HRI 2013 in Tokyo, Japan, Acceptance Rate: 0.24 for full papers
HRI 2014 in Bielefeld, Germany, Acceptance Rate: 0.24 for full papers
HRI 2015 in Portland, Oregon, USA, Acceptance Rate: 0.25 for full papers
HRI 2016 in Christchurch, New Zealand, Acceptance Rate: 0.25 for full papers
HRI 2017 in Vienna, Austria, Acceptance Rate: 0.24 for full papers
HRI 2018 in Chicago, USA, Acceptance Rate: 0.24 for full papers
HRI 2021 in Boulder, USA, Acceptance Rate: 0.23 for full papers
International Conference on Human–Agent Interaction
HAI 2013 in Sapporo, Japan
HAI 2014 in Tsukuba, Japan
HAI 2015 in Daegu, Korea
HAI 2016 in Singapore
HAI 2017 in Bielefeld, Germany
Related conferences
There are many conferences that are not exclusively HRI, but deal with broad aspects of HRI, and often have HRI papers presented.
IEEE-RAS/RSJ International Conference on Humanoid Robots (Humanoids)
Ubiquitous Computing (UbiComp)
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Intelligent User Interfaces (IUI)
Computer Human Interaction (CHI)
American Association for Artificial Intelligence (AAAI)
INTERACT
Journals
There are currently two dedicated HRI journals:
ACM Transactions on Human–Robot Interaction (Originally Journal of Human–Robot Interaction)
International Journal of Social Robotics
and there are several more general journals in which one will find HRI articles.
International Journal of Humanoid Robotics
‘Entertainment robotics’ section of the Entertainment Computing Journal
Interaction Studies Journal
Artificial Intelligence
Systems, Man and Cybernetics
Books
There are several books available that specialise in human–robot interaction. While there are several edited books, only a few dedicated texts are available.
Courses
Many universities offer courses in Human–Robot Interaction.
University Courses and Degrees
Tufts University, Medford, MA, USA, MS and PhD programs in Human–Robot Interaction
University of Waterloo, Canada, Kerstin Dautenhahn, Social Robotics – Foundations, Technology and Applications of Human-Centered Robotics
National Taipei University in Taiwan, Taiwan, Hooman Samani, M5226 Advanced Robotics
Ontario Tech University, Canada, Patrick C. K. Hung, BUSI4590U Topics in Technology Management & INFR 4599U Service Robots Innovation for Commerce
The Colorado School of Mines, USA, Tom Williams, CSCI 436 / 536: Human–Robot Interaction
Heriot-Watt University, UK, Lynne Baillie, F21HR Human Robot Interaction
Uppsala University, Sweden, Filip Malmberg, UU-61611 Social Robotics and Human–Robot Interaction
Skövde University, Sweden, MSc Human–Robot Interaction program
Indiana University, Bloomington, USA, Selma Sabanovic, INFO-I 440 Human–Robot Interaction
Ghent University, Belgium, Tony Belpaeme, E019370A Robotics module
Bielefeld University, Germany, Frederike Eyssel, 270037 Sozialpsychologische Aspekte der Mensch-Maschine Interaktion
Kyoto University, Japan, Takayuki Kanda, 3218000 Human–Robot Interaction (ヒューマンロボットインタラクション)
KTH Royal Institute of Technology, Sweden, Iolanda Leite, DD2413 Social Robotics
Chalmers University of Technology, Sweden, Mohammad Obaid, DAT545 Human-Robot Interaction Design
Online Courses and Degrees
There are also online courses available, such as MOOCs:
University of Canterbury (UCx) – edX program
Professional Certificate in Human–Robot Interaction
Introduction to Human–Robot Interaction
Methods and Application in Human–Robot Interaction
Footnotes
References
External resources
Human communication
Multimodal interaction
Robotics
Robotics engineering | Human–robot interaction | Technology,Engineering,Biology | 5,579 |
4,989,591 | https://en.wikipedia.org/wiki/Psi5%20Aurigae | Psi5 Aurigae (ψ5 Aur, ψ5 Aurigae) is a star in the northern constellation of Auriga. It is faintly visible to the naked eye with an apparent visual magnitude of 5.25. Based upon parallax measurements made during the Hipparcos mission, this star is approximately distant from Earth. There is an optical companion 36 arcseconds away with an apparent magnitude of +8.4.
It was formerly considered part of the larger constellation Telescopium Herschelii, which is no longer recognized by the International Astronomical Union (IAU).
Characteristics
The spectrum of this star shows it to be a G-type main sequence star with a stellar classification of G0 V. Thought to be around 4 billion years old, it is similar in size, mass, and composition to the Sun, making this a solar analog. It is radiating energy into space at an effective temperature of 5,989 K, giving it the golden-hued glow of a G-type star.
Debris disk
Observation in the infrared shows an excess emission that suggests the presence of a circumstellar disk of dust, known as a debris disk. This material has a mean temperature of 60 K, indicating that it is orbiting at a distance of about 29 astronomical units from the host star. The dust has about half the mass of the Moon and is around 600 million years old. The star is being examined for evidence of extrasolar planets, but none have been found so far.
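As a rough cross-check (a back-of-the-envelope estimate of my own, not a figure from the source), dust in thermal equilibrium around a Sun-like star has a blackbody temperature of roughly T ≈ 278 K · (L/L☉)^(1/4) / √(a/AU), so solving for the orbital distance at T = 60 K and L ≈ L☉ gives a ≈ (278/60)² ≈ 21 AU. Real grains emit less efficiently than blackbodies and therefore sit farther from the star for a given temperature, which is consistent with the quoted 29 astronomical units.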
See also
Psi Aurigae
References
External links
HR 2483
CCDM J06467+4335
Image Psi5 Aurigae
Aurigae, Psi05
Aurigae, 56
048682
032480
Auriga
Double stars
G-type main-sequence stars
Circumstellar disks
Aurigae, 56
2483
Durchmusterung objects
0245 | Psi5 Aurigae | Astronomy | 420 |
15,226,993 | https://en.wikipedia.org/wiki/P2RX6 | P2X purinoceptor 6 is a protein that in humans is encoded by the P2RX6 gene.
The protein encoded by this gene belongs to the family of P2X receptors, which are ATP-gated ion channels that mediate rapid and selective permeability to cations. This gene is predominantly expressed in skeletal muscle and is regulated by p53. The encoded protein is associated with VE-cadherin at the adherens junctions of human umbilical vein endothelial cells.
See also
Purinergic receptor
References
Further reading
External links
Ion channels | P2RX6 | Chemistry | 121 |
371,700 | https://en.wikipedia.org/wiki/Locale%20%28computer%20software%29 | In computing, a locale is a set of parameters that defines the user's language, region and any special variant preferences that the user wants to see in their user interface. Usually a locale identifier consists of at least a language code and a country/region code.
Locale is an important aspect of internationalization (i18n).
General locale settings
These settings usually include the following display (output) format settings:
Number format setting (LC_NUMERIC, C/C++)
Character classification, case conversion settings (LC_CTYPE, C/C++)
Date-time format setting (LC_TIME, C/C++)
String collation setting (LC_COLLATE, C/C++)
Currency format setting (LC_MONETARY, C/C++)
Paper size setting (LC_PAPER, ISO 30112)
Color setting
UI font setting (especially for CJKV language)
Location setting (country or region)
ANSI character set setting (for Microsoft Windows)
The locale settings are about formatting output given a locale. So, the time zone information and daylight saving time are not usually part of the locale settings.
Less usual is the input format setting, which is mostly defined on a per-application basis.
Programming and markup language support
In the following environments,
C
C++
Eiffel
Java
.NET Framework
REBOL
Ruby
Perl
PHP
Python
XML
JSP
JavaScript
and other (nowadays) Unicode-based environments, locale identifiers are defined in a format similar to BCP 47. They are usually defined with just ISO 639 (language) and ISO 3166-1 alpha-2 (2-letter country) codes.
International standards
In standard C and C++, locale is defined in "categories": LC_COLLATE (text collation), LC_CTYPE (character class), LC_MONETARY (currency format), LC_NUMERIC (number format), and LC_TIME (time format). The special category LC_ALL can be used to set all locale settings at once.
There are no standard locale names associated with the C and C++ standards besides the "minimal locale" named "C", although the POSIX format is a commonly used baseline.
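Python's locale module wraps these same C categories; a minimal sketch (assuming the cs_CZ.UTF-8 locale from the example below is installed, otherwise it falls back to the minimal "C" locale):
import locale
# Assumes cs_CZ.UTF-8 is installed; falls back to the minimal "C" locale.
try:
    locale.setlocale(locale.LC_ALL, "cs_CZ.UTF-8")
except locale.Error:
    locale.setlocale(locale.LC_ALL, "C")
# LC_NUMERIC controls digit grouping and the decimal separator.
print(locale.format_string("%.2f", 1234567.891, grouping=True))
# Inspect the numeric conventions currently in effect.
conv = locale.localeconv()
print(conv["decimal_point"], conv["thousands_sep"])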
POSIX platforms
On POSIX platforms such as Unix, Linux and others, locale identifiers are defined in a way similar to the BCP 47 definition of language tags, but the locale variant modifier is defined differently, and the character set is optionally included as a part of the identifier. The POSIX or "XPG" format is language[_territory][.codeset][@modifier]. (For example, Australian English using the UTF-8 encoding is en_AU.UTF-8.) Separately, ISO/IEC 15897 describes a different form, though it is highly dubious whether that form is used at all.
The following example shows the output of the locale command for the Czech language (cs) in the Czech Republic (CZ) with explicit UTF-8 encoding:
$ locale
LANG=cs_CZ.UTF-8
LC_CTYPE="cs_CZ.UTF-8"
LC_NUMERIC="cs_CZ.UTF-8"
LC_TIME="cs_CZ.UTF-8"
LC_COLLATE="cs_CZ.UTF-8"
LC_MONETARY="cs_CZ.UTF-8"
LC_MESSAGES="cs_CZ.UTF-8"
LC_PAPER="cs_CZ.UTF-8"
LC_NAME="cs_CZ.UTF-8"
LC_ADDRESS="cs_CZ.UTF-8"
LC_TELEPHONE="cs_CZ.UTF-8"
LC_MEASUREMENT="cs_CZ.UTF-8"
LC_IDENTIFICATION="cs_CZ.UTF-8"
LC_ALL=
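A minimal parser for identifiers in this POSIX format (an illustrative sketch of my own, not a standard API; the group names are assumptions):
import re
# language[_territory][.codeset][@modifier], e.g. "cs_CZ.UTF-8"
LOCALE_RE = re.compile(
    r"^(?P<language>[a-z]+)"
    r"(?:_(?P<territory>[A-Z]+))?"
    r"(?:\.(?P<codeset>[-\w]+))?"
    r"(?:@(?P<modifier>\w+))?$"
)
print(LOCALE_RE.match("cs_CZ.UTF-8").groupdict())
# {'language': 'cs', 'territory': 'CZ', 'codeset': 'UTF-8', 'modifier': None}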
Specifics for Microsoft platforms
Windows uses specific language and territory strings.
The locale identifier (LCID) for unmanaged code on Microsoft Windows is a number such as 1033 for English (United States), or 2057 for English (United Kingdom), or 1041 for Japanese (Japan). These numbers consist of a language code (lower 10 bits) and a culture code (upper bits), and are therefore often written in hexadecimal notation, such as 0x0409, 0x0809 or 0x0411.
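A sketch of that bit layout (the masks follow the MS-LCID reference listed under external links; treat the exact constants as assumptions):
def split_lcid(lcid: int):
    langid = lcid & 0xFFFF    # the low word of the LCID is the LANGID
    primary = langid & 0x3FF  # lower 10 bits: primary language
    sublang = langid >> 10    # upper bits: sublanguage/region
    return primary, sublang
# 0x0409 = English (US), 0x0809 = English (UK), 0x0411 = Japanese (Japan)
for lcid in (0x0409, 0x0809, 0x0411):
    p, s = split_lcid(lcid)
    print(f"{lcid:#06x}: primary={p:#x} sublang={s:#x}")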
Microsoft is starting to introduce managed code application programming interfaces (APIs) for .NET that use this format. One of the first to be generally released is a function to mitigate issues with internationalized domain names, but more are in Windows Vista Beta 1.
Starting with Windows Vista, new functions that use BCP 47 locale names have been introduced to replace nearly all LCID-based APIs.
A POSIX-like locale name format is available in the UCRT (Universal C Run Time) of Windows 10 and 11.
See also
Internationalization and localization
ISO 639 language codes
ISO 3166-1 alpha-2 region codes
ISO 15924 script codes
IETF language tag
C localization functions
CCSID
Code page
Common Locale Data Repository
Date and time representation by country
AppLocale
References
External links
BCP 47
Language Subtag Registry
Common Locale Data Repository
Javadoc API documentation
Locale and Language information from Microsoft
MS-LCID: Windows Language Code Identifier (LCID) Reference from Microsoft
Microsoft LCID list
Microsoft LCID chart with decimal equivalents
POSIX Environment Variables
Low Level Technical details on defining a POSIX locale
ICU Locale Explorer
Debian Wiki on Locales
Article "The Standard C++ Locale" by Nathan C. Myers
locale(7): Description of multi-language support - Linux man page
Apache C++ Standard Library Locale User's Guide
Sort order charts for various operating system locales and database collations
NATSPEC Library
Description of locale-related UNIX environment variables in Debian Linux Reference Manual
Guides to locales and locale creation on various platforms
Unix user management and support-related utilities
Unix SUS2008 utilities
Internationalization and localization | Locale (computer software) | Technology | 1,273 |
155,192 | https://en.wikipedia.org/wiki/Chimera%20%28genetics%29 | A genetic chimerism or chimera is a single organism composed of cells with more than one distinct genotype. Animal chimeras can be produced by the fusion of two (or more) embryos. In plants and some animal chimeras, mosaicism involves distinct types of tissue that originated from the same zygote but differ due to mutation during ordinary cell division.
Normally, genetic chimerism is not visible on casual inspection; however, it has been detected in the course of proving parentage. More practically, in agronomy "chimera" indicates a plant, or portion of a plant, whose tissues are made up of two or more types of cells with different genetic makeup. It can derive from a bud mutation or, more rarely, from the concrescence of cells of the two bionts at the grafting point; in the latter case it is commonly referred to as a "graft hybrid", although it is not a hybrid in the genetic sense of "hybrid".
In contrast, an individual where each cell contains genetic material from two organisms of different breeds, varieties, species or genera is called a hybrid.
Another way that chimerism can occur in animals is by organ transplantation, giving one individual tissues that developed from a different genome. For example, transplantation of bone marrow often determines the recipient's ensuing blood type.
Classifications
Natural chimerism
Some level of chimerism occurs naturally in the wild in many animal species, and in some cases may be a required (obligate) part of their life cycle.
Symbiotic chimerism in anglerfish
Chimerism occurs naturally in adult Ceratioid anglerfish and is in fact a natural and essential part of their life cycle. Once the male achieves adulthood, it begins its search for a female. Using strong olfactory (or smell) receptors, the male searches until it locates a female anglerfish. The male, less than an inch in length, bites into her skin and releases an enzyme that digests the skin of both his mouth and her body, fusing the pair down to the blood-vessel level. While this attachment has become necessary for the male's survival, it will eventually consume him, as both anglerfish fuse into a single hermaphroditic individual. Sometimes in this process, more than one male will attach to a single female as a symbiote. In this case, they will all be consumed into the body of the larger female angler. Once fused to a female, the males will reach sexual maturity, developing large testicles as their other organs atrophy. This process allows for sperm to be in constant supply when the female produces an egg, so that the chimeric fish is able to have a greater number of offspring.
Sponges
Chimerism has been found in some species of marine sponges. Four distinct genotypes have been found in a single individual, and there is potential for even greater genetic heterogeneity. Each genotype functions independently in terms of reproduction, but the different intra-organism genotypes behave as a single large individual in terms of ecological responses like growth.
In obligates
Male yellow crazy ants have been shown to be obligate chimeras, the first known such case. In this species, queens arise from fertilized eggs with an RR genotype (Reproductive × Reproductive), and the sterile female workers show an RW arrangement (Reproductive × Worker). The males, instead of being haploid as is usually the case for ants, also display an RW genotype, but in them the R egg and the W sperm do not fuse, so they develop as chimeras with some cells carrying an R genome and others a W genome.
Artificial chimerism
Artificial chimerism refers to examples of chimerism produced by humans, either for research or commercial purposes.
Tetragametic chimerism
Tetragametic chimerism is a form of congenital chimerism. This condition occurs through the fertilization of two separate ova by two sperm, followed by aggregation of the two at the blastocyst or zygote stage. This results in the development of an organism with intermingled cell lines. Put another way, the chimera is formed from the merging of two nonidentical twins. As such, they can be male, female, or intersex.
The tetragametic state has important implications for organ or stem cell transplantation. Chimeras typically have immunologic tolerance to both cell lines.
Microchimerism
Microchimerism is the presence of a small number of cells that are genetically distinct from those of the host individual. Most people are born with a few cells genetically identical to their mothers' and the proportion of these cells goes down in healthy individuals as they get older. People who retain higher numbers of cells genetically identical to their mother's have been observed to have higher rates of some autoimmune diseases, presumably because the immune system is responsible for destroying these cells and a common immune defect prevents it from doing so and also causes autoimmune problems.
The higher rates of autoimmune disease associated with maternally derived cells explain why, in a 2010 study of a 40-year-old man with scleroderma-like disease (an autoimmune rheumatic disease), the female cells detected in his bloodstream via FISH (fluorescence in situ hybridization) were initially thought to be maternally derived. However, his form of microchimerism was found to be due to a vanished twin, and it is unknown whether microchimerism from a vanished twin might predispose individuals to autoimmune diseases as well. Mothers often also have a few cells genetically identical to those of their children, and some people also have some cells genetically identical to those of their siblings (maternal siblings only, since these cells are passed to them because their mother retained them).
Germline chimerism
Germline chimerism occurs when the germ cells (for example, sperm and egg cells) of an organism are not genetically identical to its own. It has been recently discovered that marmosets can carry the reproductive cells of their (fraternal) twin siblings due to placental fusion during development. (Marmosets almost always give birth to fraternal twins.)
Types
Animals
As the organism develops, it can come to possess organs that have different sets of chromosomes. For example, the chimera may have a liver composed of cells with one set of chromosomes and have a kidney composed of cells with a second set of chromosomes. This has occurred in humans, and at one time was thought to be extremely rare although more recent evidence suggests that this is not the case.
This is particularly true for the marmoset. Recent research shows most marmosets are chimeras, sharing DNA with their fraternal twins. 95% of marmoset fraternal twins trade blood through chorionic fusions, making them hematopoietic chimeras.
In the budgerigar, due to the many existing plumage colour variations, tetragametic chimeras can be very conspicuous, as the resulting bird will have an obvious split between two colour types often divided bilaterally down the centre. These individuals are known as half-sider budgerigars.
An animal chimera is a single organism that is composed of two or more different populations of genetically distinct cells that originated from different zygotes involved in sexual reproduction. If the different cells have emerged from the same zygote, the organism is called a mosaic. Innate chimeras are formed from at least four parent cells (two fertilised eggs or early embryos fused together). Each population of cells keeps its own character and the resulting organism is a mixture of tissues. Cases of human chimeras have been documented.
Chimerism in humans
Some consider mosaicism to be a form of chimerism, while others consider them to be distinct. Mosaicism involves a mutation of the genetic material in a cell, giving rise to a subset of cells that are different from the rest. Natural chimerism is the fusion of more than one fertilized zygote in the early stages of prenatal development. It is much rarer than mosaicism.
In artificial chimerism, an individual has one cell lineage that was inherited genetically at the time of the formation of the human embryo and the other that was introduced through a procedure, including organ transplantation or blood transfusion. Specific types of transplants that could induce this condition include bone marrow transplants and organ transplants, as the recipient's body essentially works to permanently incorporate the new blood stem cells into it.
Boklage argues that many human 'mosaic' cell lines will be "found to be chimeric if properly tested".
In contrast, a human where each cell contains genetic material from two organisms of different breeds, varieties, species or genera is called a human–animal hybrid.
While German dermatologist Alfred Blaschko described Blaschko's lines in 1901, the genetic science took until the 1930s to approach a vocabulary for the phenomenon. The term genetic chimera has been used at least since the 1944 article of Belgovskii.
This condition is either innate or it is synthetic, acquired for example through the infusion of allogeneic blood cells during transplantation or transfusion.
In nonidentical twins, innate chimerism occurs by means of blood vessel anastomoses. The likelihood of offspring being a chimera is increased if it is created via in vitro fertilisation. Chimeras can often breed, but the fertility and type of offspring depend on which cell line gave rise to the ovaries or testes; varying degrees of intersex differences may result if one set of cells is genetically female and another genetically male.
On January 22, 2019, the National Society of Genetic Counselors released an article Chimerism Explained: How One Person Can Unknowingly Have Two Sets of DNA, where they state, "where a twin pregnancy evolves into one child, is currently believed to be one of the rarer forms. However, we know that 20 to 30% of singleton pregnancies were originally a twin or a multiple pregnancy".
Most human chimeras will go through life without realizing they are chimeras. The difference in phenotypes may be subtle (e.g., having a hitchhiker's thumb and a straight thumb, eyes of slightly different colors, differential hair growth on opposite sides of the body, etc.) or completely undetectable. Chimeras may also show, under a certain spectrum of UV light, distinctive marks on the back resembling that of arrow points pointing downward from the shoulders down to the lower back; this is one expression of pigment unevenness called Blaschko's lines.
One documented case was that of Karen Keegan, who was initially suspected of not being her children's biological mother after DNA tests on her adult sons, performed for a kidney transplant she needed, seemed to show that she was not their mother.
Plants
Structure
The distinction between sectorial, mericlinal and periclinal plant chimeras is widely used. Periclinal chimeras involve a genetic difference that persists in the descendant cells of a particular meristem layer. This type of chimera is more stable than mericlinal or sectoral mutations that affect only later generations of cells.
Graft chimeras
These are produced by grafting genetically different parents, different cultivars or different species (which may belong to different genera). The tissues may be partially fused together following grafting to form a single growing organism that preserves both types of tissue in a single shoot. Just as the constituent species are likely to differ in a wide range of features, so the behavior of their periclinal chimeras is likely to be highly variable. The first known such chimera was probably the Bizzarria, which is a fusion of the Florentine citron and the sour orange. Well-known examples of a graft-chimera are Laburnocytisus 'Adamii', caused by a fusion of a Laburnum and a broom, and "Family" trees, where multiple varieties of apple or pear are grafted onto the same tree. Many fruit trees are cultivated by grafting the body of a sapling onto a rootstock.
Chromosomal chimeras
These are chimeras in which the layers differ in their chromosome constitution. Occasionally, chimeras arise from loss or gain of individual chromosomes or chromosome fragments owing to misdivision. More commonly, cytochimeras have a simple multiple of the normal chromosome complement in the changed layer. There are various effects on cell size and growth characteristics.
Nuclear gene-differential chimeras
These chimeras arise by spontaneous or induced mutation of a nuclear gene to a dominant or recessive allele. As a rule, one character is affected at a time in the leaf, flower, fruit, or other parts.
Plastid gene-differential chimeras
These chimeras arise by spontaneous or induced mutation of a plastid gene, followed by the sorting-out of two kinds of plastid during vegetative growth. Alternatively, after selfing or nucleic acid thermodynamics, plastids may sort-out from a mixed egg or mixed zygote respectively. This type of chimera is recognized at the time of origin by the sorting-out pattern in the leaves. After sorting-out is complete, periclinal chimeras are distinguished from similar looking nuclear gene-differential chimeras by their non-mendelian inheritance. The majority of variegated-leaf chimeras are of this kind.
All plastid gene- and some nuclear gene-differential chimeras affect the color of the plastids within the leaves, and these are grouped together as chlorophyll chimeras, or preferably as variegated-leaf chimeras. For most variegation, the mutation involved is the loss of the chloroplasts in the mutated tissue, so that part of the plant tissue has no green pigment and no photosynthetic ability. This mutated tissue is unable to survive on its own, but it is kept alive by its partnership with normal photosynthetic tissue. Sometimes chimeras are also found with layers differing in respect of both their nuclear and their plastid genes.
Origins
There are multiple reasons to explain the occurrence of plant chimera during the plant recovery stage:
The process of shoot organogenesis starts from the multicellular origin.
The endogenous tolerance leads to the ineffectiveness of the weak selective agents.
A self-protection mechanism (cross protection). Transformed cells serve as guards to protect the untransformed ones.
The observable characteristic of transgenic cells may be a transient expression of the marker gene, or it may be due to the presence of Agrobacterium cells.
Detection
Untransformed cells should be easy to detect and remove to avoid chimeras, because it is important to maintain the stable ability of the transgenic plants across different generations. Reporter genes such as GUS and green fluorescent protein (GFP) are used in combination with plant selection markers (herbicide resistance, antibiotic resistance, etc.). However, GUS expression depends on the plant's developmental stage, and GFP may be confounded by the autofluorescence of green tissue. Quantitative PCR could be an alternative method for chimera detection.
Viruses
In 2012, the first example of a naturally occurring RNA–DNA hybrid virus was unexpectedly discovered during a metagenomic study of the acidic extreme environment of Boiling Springs Lake in Lassen Volcanic National Park, California. The virus was named BSL-RDHV (Boiling Springs Lake RNA-DNA Hybrid Virus). Its genome is related to that of a DNA circovirus, which usually infects birds and pigs, and to that of an RNA tombusvirus, which infects plants. The study surprised scientists, because DNA and RNA viruses evolve separately and the way the chimera came together was not understood.
Other viral chimeras have also been found, and the group is known as the CHIV viruses ("chimeric viruses").
Research
The first known primate chimeras are the rhesus monkey twins Roku and Hex, each having six genomes. They were created by mixing cells from totipotent four-cell morulas; although the cells never fused, they worked together to form organs. It was discovered that one of these primates, Roku, was a sexual chimera, as four percent of Roku's blood cells contained two X chromosomes.
A major milestone in chimera experimentation occurred in 1984 when a chimeric sheep–goat was produced by combining embryos from a goat and a sheep, and survived to adulthood.
To research the developmental biology of the bird embryo, researchers produced artificial quail-chick chimeras in 1987. By using transplantation and ablation in the chick embryo stage, the neural tube and the neural crest cells of the chick were ablated, and replaced with the same parts from a quail. Once hatched, the quail feathers were visibly apparent around the wing area, whereas the rest of the chick's body was made of its own chicken cells.
In August 2003, researchers at the Shanghai Second Medical University in China reported that they had successfully fused human skin cells and rabbit ova to create the first human chimeric embryos. The embryos were allowed to develop for several days in a laboratory setting, and then destroyed to harvest the resulting stem cells. In 2007, scientists at the University of Nevada School of Medicine created a sheep whose blood contained 15% human cells and 85% sheep cells.
In 2023, a study reported the first chimeric monkey generated using embryonic stem cell lines. It was the only live birth from 12 pregnancies resulting from 40 implanted embryos of the crab-eating macaque. An average of 67%, and a maximum of 92%, of the cells across the 26 tested tissues were descendants of the donor stem cells, against 0.1–4.5% in previous experiments on chimeric monkeys.
Work with mice
Chimeric mice are important animals in biological research, as they allow for the investigation of a variety of biological questions in an animal that has two distinct genetic pools within it. These include insights into problems such as the tissue specific requirements of a gene, cell lineage, and cell potential.
The general methods for creating chimeric mice can be summarized either by injection or aggregation of embryonic cells from different origins. The first chimeric mouse was made by Beatrice Mintz in the 1960s through the aggregation of eight-cell-stage embryos. Injection on the other hand was pioneered by Richard Gardner and Ralph Brinster who injected cells into blastocysts to create chimeric mice with germ lines fully derived from injected embryonic stem cells (ES cells). Chimeras can be derived from mouse embryos that have not yet implanted in the uterus as well as from implanted embryos. ES cells from the inner cell mass of an implanted blastocyst can contribute to all cell lineages of a mouse including the germ line. ES cells are a useful tool in chimeras because genes can be mutated in them through the use of homologous recombination, thus allowing gene targeting. Since this discovery occurred in 1988, ES cells have become a key tool in the generation of specific chimeric mice.
Underlying biology
The ability to make mouse chimeras comes from an understanding of early mouse development. Between the stages of fertilization of the egg and the implantation of a blastocyst into the uterus, different parts of the mouse embryo retain the ability to give rise to a variety of cell lineages. Once the embryo has reached the blastocyst stage, it is composed of several parts, mainly the trophectoderm, the inner cell mass, and the primitive endoderm. Each of these parts of the blastocyst gives rise to different parts of the embryo; the inner cell mass gives rise to the embryo proper, while the trophectoderm and primitive endoderm give rise to extra embryonic structures that support growth of the embryo. Two- to eight-cell-stage embryos are competent for making chimeras, since at these stages of development, the cells in the embryos are not yet committed to give rise to any particular cell lineage, and could give rise to the inner cell mass or the trophectoderm. In the case where two diploid eight-cell-stage embryos are used to make a chimera, chimerism can be later found in the epiblast, primitive endoderm, and trophectoderm of the mouse blastocyst.
It is possible to dissect the embryo at other stages so as to accordingly give rise to one lineage of cells from an embryo selectively and not the other. For example, subsets of blastomeres can be used to give rise to chimera with specified cell lineage from one embryo. The Inner Cell Mass of a diploid blastocyst, for example, can be used to make a chimera with another blastocyst of eight-cell diploid embryo; the cells taken from the inner cell mass will give rise to the primitive endoderm and to the epiblast in the chimera mouse.
From this knowledge, ES cell contributions to chimeras have been developed. ES cells can be used in combination with eight-cell- and two-cell-stage embryos to make chimeras in which they exclusively give rise to the embryo proper. Embryos that are to be used in chimeras can be further genetically altered to contribute to only one part of the chimera. An example is the chimera built from ES cells and tetraploid embryos, the latter made artificially by electrofusion of two two-cell diploid embryos. The tetraploid embryo will exclusively give rise to the trophectoderm and primitive endoderm in the chimera.
Methods of production
There are a variety of combinations that can give rise to a successful chimeric mouse, and an appropriate cell and embryo combination can be picked according to the goal of the experiment; these are generally, but not limited to, diploid embryo and ES cells, diploid embryo and diploid embryo, ES cells and tetraploid embryo, diploid embryo and tetraploid embryo, and ES cells and ES cells. The combination of embryonic stem cells and a diploid embryo is a common technique for making chimeric mice, since gene targeting can be done in the embryonic stem cells. These kinds of chimeras can be made through either aggregation of stem cells with the diploid embryo or injection of the stem cells into the diploid embryo. If embryonic stem cells are to be used for gene targeting to make a chimera, the following procedure is common: a construct for homologous recombination targeting the gene of interest is introduced into cultured mouse embryonic stem cells from the donor mouse by electroporation; cells positive for the recombination event carry antibiotic resistance, provided by the insertion cassette used in the gene targeting, and can be positively selected for. ES cells with the correctly targeted gene are then injected into a diploid host mouse blastocyst. These injected blastocysts are implanted into a pseudopregnant female surrogate mouse, which brings the embryos to term and gives birth to a mouse whose germline is derived from the donor mouse's ES cells. The same result can be achieved through aggregation of ES cells and diploid embryos: diploid embryos are cultured in aggregation plates, in wells sized to fit single embryos; ES cells are added to these wells; and the aggregates are cultured until a single embryo forms and progresses to the blastocyst stage, at which point it can be transferred to the surrogate mouse.
Ethics and legislation
The US and Western Europe have strict codes of ethics and regulations in place that expressly forbid certain subsets of experimentation using human cells, though there are vast differences between their regulatory frameworks. Through the creation of human chimeras comes the question: where does society now draw the line of humanity? This question poses serious legal and moral issues, along with creating controversy. Chimpanzees, for example, are not offered any legal standing, and are put down if they pose a threat to humans. If a chimpanzee is genetically altered to be more similar to a human, it may blur the ethical line between animal and human. Legal debate would be the next step in the process to determine whether certain chimeras should be granted legal rights. Along with issues regarding the rights of chimeras, individuals have expressed concern about whether or not creating human chimeras diminishes the "dignity" of being human.
See also
46,XX/46,XY
Genetic chimerism in fiction
Retron
Vanishing twin
X-inactivation (lyonization)
Polycephaly
References
Further reading
Appel, Jacob M. "The Monster's Law", Genewatch, Volume 19, Number 2, March–April 2007.
Nelson, J. Lee (Scientific American, February 2008). Your Cells Are My Cells
Weiss, Rick (August 14, 2003). Cloning yields human-rabbit hybrid embryo. The Washington Post.
Weiss, Rick (February 13, 2005). U.S. Denies Patent for a too-human hybrid. The Washington Post.
External links
"Chimerism Explained"
Chimerism and cellular mosaicism, Genetic Home Reference, U.S. National Library of Medicine, National Institute of Health.
Chimera: Apical Origin, Ontogeny and Consideration in Propagation
Plant Chimeras in Tissue Culture
Ainsworth, Claire (November 15, 2003). "The Stranger Within". New Scientist.
Embryogenesis of chimeras, twins and anterior midline asymmetries
Natural human chimeras: A review
Reproduction
Intersex healthcare
Genetic anomalies
Twin | Chimera (genetics) | Biology | 5,441 |
5,750,289 | https://en.wikipedia.org/wiki/Self%20accelerating%20decomposition%20temperature | The self-accelerating decomposition temperature (SADT) is the lowest temperature at which an organic peroxide in a typical vessel or shipping package will undergo a self-accelerating decomposition within one week. The SADT is the point at which the heat evolution from the decomposition reaction and the heat removal rate from the package of interest become unbalanced. When the heat removal is too low, the temperature in the package increases and the rate of decomposition increases in an uncontrollable manner. The result is therefore dependent on the formulation and the package characteristics.
A self-accelerating decomposition occurs when the rate of peroxide decomposition is sufficient to generate heat at a faster rate than it can be dissipated to the environment. Temperature is the main factor in determining the decomposition rate, although the size of the package is also important since its dimensions will determine the ability to dissipate heat to the environment.
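This balance between heat generation and heat loss can be written compactly. The following is a minimal sketch in the spirit of the Semenov model of thermal runaway; the symbols (heat of decomposition, pre-exponential factor and activation energy of a first-order rate law, overall heat-transfer coefficient U, package surface area S) are generic placeholders rather than quantities from any specific test standard.

```latex
% Heat generated by first-order Arrhenius decomposition of peroxide
% at concentration c in a package of volume V:
\dot{Q}_{\mathrm{gen}} = V\, c\, \Delta H_{d}\, k_{0}\, e^{-E_{a}/(R T)}

% Heat lost through the package surface S to surroundings at ambient
% temperature T_{a}, with overall heat-transfer coefficient U:
\dot{Q}_{\mathrm{loss}} = U S\, (T - T_{a})

% Decomposition self-accelerates once heat generation outruns removal:
\dot{Q}_{\mathrm{gen}} > \dot{Q}_{\mathrm{loss}}
```

Because the generation term scales with package volume while the loss term scales with surface area, the balance tips toward runaway at lower temperatures for larger packages, which is why the SADT depends on package size.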
All peroxides contain an oxygen-oxygen bond that, on heating, can break apart homolytically to generate two radicals. As mentioned previously, this decomposition also generates heat. But the stability of the oxygen-oxygen bond is dependent on what else is present in the molecule. Some peroxides, due to their chemical make-up, are very unstable and need to be refrigerated to avoid a self-accelerating decomposition. Others, particularly those used for crosslinking purposes, are much more stable and can be stored at normal ambient temperatures without risk of self-acceleration. Due to the large variations in the stabilities of peroxides, each is tested to determine the safe maximum temperature for which the peroxide may be stored, shipped, and handled. The result of this test is the self-accelerating decomposition temperature (SADT).
Although a number of organic peroxides can safely be stored at room temperature, most require some form of temperature control. For long storage periods, the organic peroxide is usually kept at a lower temperature than the maximum safe storage temperature as determined by the SADT.
The SADT for an organic peroxide formulation is usually lower for more concentrated formulations. Dilution with a compatible, high-boiling-point diluent will usually raise the SADT, since the peroxide is less concentrated and the diluent can absorb much of the heat, minimizing the rise in temperature. For a given formulation, larger packages generally have a lower SADT because their lower surface-area-to-volume ratio makes heat dissipation poorer.
Most organic peroxides react to some extent with their decomposition products during thermal decomposition. This autocatalysis often increases the rate: as decomposition products accumulate, the decomposition proceeds more and more rapidly.
The SADT measurement is made as follows (a sketch of the decision logic follows the list):
The package containing the peroxide is placed in an oven set to the test temperature.
The timer starts when the product reaches 2 °C below the intended test temperature.
The oven is held at constant temperature for up to one week, or until a runaway event occurs.
The test "passes" if the product does not exceed the test (oven) temperature by 6 °C within one week.
The test "fails" if the product exceeds the test temperature by 6 °C within one week.
The test is repeated in 5 °C increments until a failure is reached.
The fail temperature is reported as the SADT for that package and formulation.
Secondary information about the violence of the decomposition can also be recorded
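A minimal Python sketch of this pass/fail logic follows. The function names, the recorded temperature trace, and the 95 °C search ceiling are illustrative assumptions; real determinations follow the UN test protocols rather than this toy loop.

```python
def run_passes(temperature_trace_c, oven_temp_c, margin_c=6.0):
    """One oven run passes if the product temperature never exceeds
    the oven temperature by more than `margin_c` during the hold."""
    return all(t <= oven_temp_c + margin_c for t in temperature_trace_c)

def find_sadt(run_test, start_temp_c, step_c=5.0, max_temp_c=95.0):
    """Repeat the oven test in 5 degC increments; the first failing
    oven temperature is reported as the SADT for that package.
    `run_test(temp)` performs one run and returns True on a pass."""
    temp = start_temp_c
    while temp <= max_temp_c:
        if not run_test(temp):
            return temp          # first failure -> reported SADT
        temp += step_c
    return None                  # no runaway observed up to max_temp_c
```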
As an alternative to the oven test the SADT for larger packages can be determined by substituting a Dewar flask for the package. The heat transfer of the Dewar flask can be matched to the heat transfer of a larger package size. This test is called the Heat Accumulation Storage Test (HAST).
Application to polymerizable mixtures
Some mixtures containing peroxides and polymerizable monomers may also exhibit SADTs. For example, mixtures of vinyltrimethoxysilane, peroxides and stabilizers are used commercially for cross-linking polyethylene to make PEX pipe. These mixtures are typically liquid solutions that are shipped to where they are used to graft alkoxysilane groups to polyethylene. In such mixtures decomposition of the peroxide can initiate exothermic radical polymerization of the vinyltrimethoxysilane. At low temperature the decomposition rate is slow enough that the stabilizers quench the polymerization before much heat is generated and the container dissipates what heat is produced. At higher temperatures peroxide decomposition is faster, more polymerization occurs to heat the mixture, which in turn increases peroxide decomposition and polymerizes the monomer even faster. The container dissipates heat more slowly in a higher-temperature environment, so at some critical temperature heat is generated by polymerization faster than the container can dissipate it and the reaction self-accelerates. Thus such a mixture has a SADT that depends on container size exactly as in the case of a pure organic peroxide.
Results
When thermal decomposition occurs, some organic peroxide formulations release considerable amounts of gases and/or mists. Some, but not all, of these gases may be flammable. For example, carbon dioxide, a common gaseous decomposition product of diacyl peroxides and peresters, is not flammable.
The decomposition may include small organic fragments, such as methane or acetone, which are flammable. When flammable gases or mists are released as part of the decomposition, there is always the potential danger of a fire or vapor phase explosion. Therefore, the risk of vapor phase explosion should be kept in mind when designing storage structures. These types of materials may be released at low rates during storage and at quite high rates in the event of an upset, due to failure to control the storage temperature, or in the event of a fire in the storage area.
It is the ease of splitting the peroxy group to give two free radicals that makes organic peroxides so useful. However, the presence of energetic free radicals during decomposition, particularly in hot gases or mists, can cause auto-ignition to occur at a lower temperature than would otherwise be normal for a similar chemical structure without the peroxy functional group. Organic peroxides do not usually produce oxygen as part of the decomposition process, so there is little risk of enhanced burning rates due to oxygen enrichment. This is unlike the decomposition of hydrogen peroxide and solid oxidizers that can liberate oxygen.
References
Organic peroxides | Self accelerating decomposition temperature | Chemistry | 1,295 |
17,508,125 | https://en.wikipedia.org/wiki/XF-73 | XF-73 (Exeporfinium chloride) is an experimental drug candidate. It is an anti-microbial that works via weakening bacteria cell walls. It is a potential treatment for methicillin-resistant Staphylococcus aureus (MRSA) and possibly Clostridioides difficile. It is being developed by Destiny Pharma Ltd.
Structurally, it is a dicationic porphyrin.
It has completed a phase I clinical trial for nasal decolonisation of MRSA, in which it was tested against five bacterial strains. It appears unlikely to cause MRSA to develop resistance to it.
In 2014, a phase 1 clinical trial for nasal administration was run.
Another phase 1 clinical trial (for nasal administration) completed recruiting in 2016, but no results have been posted.
References
Antimicrobials
Tetrapyrroles | XF-73 | Chemistry,Biology | 176 |
24,009,393 | https://en.wikipedia.org/wiki/Haemolysin%20E | Haemolysin E (HlyE) is a protein family that consists of several enterobacterial haemolysin (HlyE) proteins. Hemolysin E (HlyE) is a novel pore-forming toxin of Escherichia coli, Salmonella typhi, and Shigella flexneri.
HlyE is unrelated to the well characterised pore-forming E. coli hemolysins of the RTX family, haemolysin A.
HlyE is a protein of 34 kDa that is expressed during anaerobic growth of E. coli. Anaerobic expression is controlled by the transcription factor FNR, such that, upon ingestion and entry into the anaerobic mammalian intestine, HlyE is produced and may then contribute to the colonisation of the host.
References
Protein domains
Bacterial toxins | Haemolysin E | Biology | 188 |
74,771,915 | https://en.wikipedia.org/wiki/Illegal%20character | In computer science, an illegal character is a character that is not allowed by a certain programming language, protocol, or program. To avoid illegal characters, some languages may use an escape character which is a backslash followed by another character.
Examples
Windows
In the Windows operating system, illegal characters in file and folder names include angle brackets (< and >), colons, double quotation marks, forward slashes, backslashes, vertical bars, question marks, asterisks, and null characters.
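As an illustration, here is a minimal Python sketch that replaces the Windows-reserved characters in a candidate filename; the function name and the underscore replacement policy are arbitrary choices for the example, and it does not handle reserved device names such as CON or trailing dots.

```python
import re

# Characters Windows reserves in file and folder names, plus NUL.
_WINDOWS_ILLEGAL = re.compile(r'[<>:"/\\|?*\x00]')

def sanitize_windows_filename(name: str, replacement: str = "_") -> str:
    """Replace every character Windows disallows in filenames."""
    return _WINDOWS_ILLEGAL.sub(replacement, name)

print(sanitize_windows_filename('report: draft?.txt'))  # 'report_ draft_.txt'
```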
References
Character encoding | Illegal character | Technology | 80 |
7,013,274 | https://en.wikipedia.org/wiki/Prajmaline | Prajmaline (Neo-gilurythmal) is a class Ia antiarrhythmic agent which has been available since the 1970s. Class Ia drugs increase the time one action potential lasts in the heart. Prajmaline is a semi-synthetic propyl derivative of ajmaline, with a higher bioavailability than its predecessor. It acts to stop arrhythmias of the heart through a frequency-dependent block of cardiac sodium channels.
Mechanism
Prajmaline causes a resting block in the heart. A resting block is the depression of the maximal upstroke velocity of the action potential (Vmax) after a resting period. This effect is seen more in the atrium than in the ventricle. The effects of some class I antiarrhythmics are only seen in patients with a normal heart rate (~1 Hz). This is due to a phenomenon called reverse use dependence: the higher the heart rate, the less effect prajmaline will have.
Uses
The drug Prajmaline has been used to treat a number of cardiac disorders. These include: coronary artery disease, angina, paroxysmal tachycardia and Wolff–Parkinson–White syndrome. Prajmaline has been indicated in the treatment of certain disorders where other antiarrhythmic drugs were not effective.
Administration
Prajmaline can be administered orally, parenterally or intravenously. Only a limited effect has been observed three days after the last dose; it has therefore been suggested that treatment of arrhythmias with prajmaline must be continuous to achieve acceptable results.
Pharmacokinetics
The main metabolites of Prajmaline are: 21-carboxyprajmaline and hydroxyprajmaline. Twenty percent of the drug is excreted in the urine unchanged.
Daily therapeutic dose is 40–80 mg.
Distribution half-life is 10 minutes.
Plasma protein binding is 60%.
Oral bioavailability is 80%.
Elimination half-life is 6 hours.
Volume of distribution is 4-5 L/kg.
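From the half-life and volume of distribution listed above, standard one-compartment pharmacokinetic relationships give the elimination rate constant and clearance. The short Python sketch below illustrates the arithmetic; the 70 kg body weight and the midpoint of the 4-5 L/kg range are assumptions made for the example, not reported values.

```python
import math

half_life_h = 6.0        # elimination half-life from the list above
vd_l_per_kg = 4.5        # assumed midpoint of the 4-5 L/kg range
body_weight_kg = 70.0    # assumed body weight for illustration

k = math.log(2) / half_life_h          # elimination rate constant, 1/h
vd_total_l = vd_l_per_kg * body_weight_kg
clearance_l_per_h = k * vd_total_l     # CL = k * Vd

print(f"k = {k:.3f} /h, Vd = {vd_total_l:.0f} L, CL = {clearance_l_per_h:.1f} L/h")
# k = 0.116 /h, Vd = 315 L, CL = 36.4 L/h
```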
Side Effects
There are no significant adverse side effects of prajmaline when it is taken alone at a proper dosage. Patients taking other treatments for their symptoms (e.g. beta blockers and nifedipine) have developed minor transient conduction defects when given prajmaline.
Overdose
An overdose of Prajmaline is possible. The range of symptoms seen during a Prajmaline overdose include: no symptoms, nausea/vomiting, bradycardia, tachycardia, hypotension, and death.
Other Potential Uses
Due to prajmaline's sodium channel-blocking properties, it has been shown to protect rat white matter from anoxia (82 ± 15%). The concentration used causes little suppression of the preanoxic response.
References
Alkaloids
Sodium channel blockers
Secondary alcohols | Prajmaline | Chemistry | 610 |
1,039,393 | https://en.wikipedia.org/wiki/Andrewsarchus | Andrewsarchus (), meaning "Andrews' ruler", is an extinct genus of artiodactyl that lived during the Middle Eocene in what is now China. The genus was first described by Henry Fairfield Osborn in 1924 with the type species A. mongoliensis based on a largely complete cranium. A second species, A. crassum, was described in 1977 based on teeth. A mandible, formerly described as Paratriisodon, does probably belong to Andrewsarchus as well. The genus has been historically placed in the families Mesonychidae or Arctocyonidae, or was considered to be a close relative of whales. It is now regarded as the sole member of its own family, Andrewsarchidae, and may have been related to entelodonts. Fossils of Andrewsarchus have been recovered from the Middle Eocene Irdin Manha, Lushi, and Dongjun Formations of Inner Mongolia, each dated to the Irdinmanhan Asian land mammal age (Lutetian–Bartonian stages, 48–38 million years ago).
Andrewsarchus has historically been reputed as the largest terrestrial, carnivorous mammal given its skull length of , though its overall body size was probably overestimated due to inaccurate comparisons with mesonychids. Its incisors are arranged in a semicircle, similar to entelodonts, with the second rivalling the canine in size. The premolars are again similar to entelodonts in having a single cusp. The crowns of the molars are wrinkled, suggesting it was omnivorous or a scavenger. Unlike many modern scavengers, a reduced sagittal crest and flat mandibular fossa suggest that Andrewsarchus likely had a fairly weak bite force.
Taxonomy
Early history
The holotype of Andrewsarchus mongoliensis is a mostly complete cranium (specimen number AMNH-VP 20135). It was recovered from the lower Irdin Manha Formation of Inner Mongolia during a 1923 palaeontological expedition conducted by the American Museum of Natural History of New York. Its discoverer was a local assistant, Kan Chuen-pao, also known as "Buckshot". It was initially identified by Walter W. Granger as the skull of an Entelodon. A drawing of the skull was sent to the museum, where it was identified by William Diller Matthew as belonging to "the primitive Creodonta of the family Mesonychidae". The specimen itself arrived at the museum and was described by Osborn in 1924. Its generic name honours Roy Chapman Andrews, the leader of the expedition, with the Ancient Greek archos (ἀρχός, "ruler") added to his surname.
A second species of Andrewsarchus, A. crassum, was named by Ding Suyin and colleagues in 1977 on the basis of IVPP V5101, a pair of teeth (the second and third lower premolars) recovered from the Dongjun Formation of Guangxi.
In 1957, Zhou Mingzhen and colleagues recovered a mandible, a fragmentary maxilla, and several isolated teeth from the Lushi Formation of Henan, China, which correlates to the Irdin Manha Formation. The maxilla belonged to a skull that was crushed beyond recognition; it is likely from the same individual as the mandible. Zhou described it in 1959 as Paratriisodon henanensis, and assigned it to Arctocyonidae. He further classified it as part of the subfamily Triisodontinae (now the family Triisodontidae) based on close similarities of the molars and premolars to those of Triisodon. A second species, P. gigas, was named by Zhou and colleagues in 1973 for a molar also from the Lushi Formation. Three molars and an incisor from the Irdin Manha Formation were later referred to P. gigas. Comparisons between the two genera were drawn as far back as 1969, when Frederick Szalay suggested that they either evolved from the same arctocyonid ancestors or that they were an example of convergent evolution. Paratriisodon was first properly synonymised with Andrewsarchus by Leigh Van Valen in 1978, who did so without explanation. Regardless, their synonymy was upheld by Maureen O'Leary in 1998, based on similarities between the molars and premolars of the two genera and their comparable body sizes.
Classification
Andrewsarchus was initially regarded as a mesonychid, and Paratriisodon as an arctocyonid. In 1995, the former became the sole member of its own subfamily, Andrewsarchinae, within Mesonychia. The subfamily was elevated to family level by Philip D. Gingerich in 1998, who tentatively assigned Paratriisodon to it. In 1988, Donald Prothero and colleagues recovered Andrewsarchus as the sister taxon to whales. It has since been recovered as a more basal member of Cetancodontamorpha, most closely related to entelodonts, hippos, and whales. In 2023, Yu and colleagues conducted a phylogenetic analysis of ungulates, with a particular focus on entelodontid artiodactyls. Andrewsarchus was recovered as part of a clade consisting of itself, Achaenodon, Erlianhyus, Protentelodon, Wutuhyus, and Entelodontidae. It was found to be most closely related to Achaenodon and Erlianhyus, with which it formed a polytomy. A cladogram based on their phylogeny is reproduced below:
Description
When first describing Andrewsarchus, Osborn believed it to be the largest terrestrial, carnivorous mammal. Based on the length of the A. mongoliensis holotype skull, and using the proportions of Mesonyx, he estimated a total body length of and a body height of . However, considering cranial and dental similarities with entelodonts, Frederick Szalay and Stephen Jay Gould proposed that it had proportions less like mesonychids and more like them, and thus that Osborn's estimates were likely inaccurate.
Skull
The holotype skull of Andrewsarchus has a total length of , and is wide at the zygomatic arches. The snout is greatly elongated, measuring one-and-a-half times the length of the basicranium, and the portion of the snout in front of the canines resembles that of entelodonts. Unlike entelodonts, however, the postorbital bar is incomplete. The sagittal crest is reduced, and the mandibular fossa is relatively flat. Together, these attributes suggest a weak temporalis muscle and a fairly weak bite force. The hard palate is long and narrow. The mandibular fossa is also offset laterally and ventrally from the basicranium, similar to the condition seen in mesonychids. The mandible itself is long and shallow, characterised by a straight and relatively shallow horizontal ramus. The masseteric fossa, the depression on the mandible to which the masseter attaches, is shallow. Symphyseal contact between the two mandibles is limited.
Dentition
The holotype cranium of Andrewsarchus demonstrates the typical placental tooth formula of three incisors, one canine, four premolars and three molars per side, though it is not clear whether the same applies to the mandible. The upper incisors are arranged in a semicircle in front of the canines, a trait that is shared with entelodonts. The second incisor is enlarged, and is almost the size of the canines. This is partly because, while the canines were originally described as being "of enormous size", they are relatively small in proportion to the rest of the dentition. The upper premolars are elongate and consist of a single cusp, resembling those of entelodonts. The fourth premolar retains the protocone, though in a vestigial form. Their roots are not confluent and lack a dentine platform, which are both likely to be adaptations to prolong the tooth's functional life after crown abrasion. The first molar is the smallest. The second is the widest, but has been heavily worn since fossilisation. The third has largely avoided that wear. The premolars and molars have wrinkled crowns, similar to the condition seen in suids and other omnivorous artiodactyls. The tooth structure of the mandible (IVPP V5101) is difficult to determine, as nearly all of the teeth are worn or broken. All of the right mandible's teeth are preserved save for the first premolar, which is instead preserved on the left mandible. The lower canine and the first premolar both point forwards. The third molar is large, with talonids that have two cusps.
Diet
In his paper describing Andrewsarchus, Osborn suggested that it may have been omnivorous based on comparisons with entelodonts. This conclusion was supported by Szalay and Gould, who use the heavily wrinkled crowns of the molars and premolars as supporting evidence, as well as the close phylogenetic relationship between Andrewsarchus and entelodonts. R.M. Joeckel, in 1990, suggested that it was likely an "omnivore-scavenger", and that it was an ecological analogue to entelodonts. Lars Werdelin further suggested that it was a scavenger, or that it might have preyed on brontotheres.
Palaeoecology
For much of the Eocene, a hothouse climate prevailed, with humid, tropical environments and consistently high precipitation. Modern mammalian orders, including the Perissodactyla, Artiodactyla, and Primates (or the suborder Euprimates), had already appeared by the Early Eocene, diversifying rapidly and developing dentitions specialized for folivory. The omnivorous forms mostly either switched to folivorous diets or went extinct by the Middle Eocene (Lutetian–Bartonian, 48–38 million years ago), along with the archaic "condylarths". By the Late Eocene (Priabonian, 38–34 million years ago), most ungulate dentitions had shifted from bunodont cusps to cutting ridges (i.e. lophs) suited to folivorous diets.
The Irdin Manha Formation, from which the holotype of Andrewsarchus was recovered, consists of Irdinmanhan strata dated to the Middle Eocene. Andrewsarchus mongoliensis comes from the IM-1 locality, dated to the lower Irdinmanhan, from which the hyaenodontine Propterodon, the mesonychid Harpagolestes, at least three unnamed mesonychids, the artiodactyl Erlianhyus, the perissodactyls Deperetella and Lophialetes, the omomyid Tarkops, the glirian Gomphos, the rodent Tamquammys, and various indeterminate glirians are also known. The Lushi Formation, from which the Paratriisodon henanensis specimen was recovered, was deposited at around the same time as the Irdin Manha Formation. The mesonychid Mesonyx, the pantodont Eudinoceras, the dichobunid Dichobune, the helohyid Gobiohyus, the brontotheres Rhinotitan and Microtitan, the perissodactyls Amynodon and Lophialetes, the ctenodactylid Tsinlingomys, and the lagomorph Lushilagus have been identified from the Lushi Formation. The Dongjun Formation, from which A. crassum originates, is similarly Middle Eocene. It preserves the nimravid Eusmilus, the anthracotheriid Probrachyodus, the pantodont Eudinoceras, the brontotheres Metatelmatherium and cf. Protitan, the deperetellids Deperetella and Teleolophus, the hyracodontid Forstercooperia, the rhinocerotids Ilianodon and Prohyracodon, and the amynodonts Amynodon, Gigantamynodon, and Paramnyodon.
References
Cetancodontamorpha
Eocene Artiodactyla
Enigmatic mammal taxa
Eocene mammals of Asia
Lutetian genus first appearances
Priabonian genus extinctions
Fossil taxa described in 1924
Taxa named by Henry Fairfield Osborn
Prehistoric Artiodactyla genera | Andrewsarchus | Biology | 2,669 |
74,992,659 | https://en.wikipedia.org/wiki/Xiaomi%20Mi%205s%20Plus | Xiaomi Mi 5s Plus is a flagship smartphone from the Chinese company Xiaomi, which is a modification of the Xiaomi Mi 5. It was presented on September 27, 2016, with Xiaomi Mi 5s. This is the first smartphone of the Mi series to receive a dual main camera setup.
Design
The screen is made of glass. The body of the smartphone is made of polished aluminum.
At the bottom there are a USB-C connector, a speaker and a microphone styled to match the speaker. On top are a 3.5 mm audio jack, a second microphone and an IR blaster. On the left side of the smartphone there is a slot for 2 SIM cards. On the right side are the volume buttons and the lock button. The fingerprint scanner is located on the back panel.
Xiaomi Mi 5s Plus was sold in 4 colors: gray, silver, gold and Rose Gold.
Specifications
Platform
The Mi 5s Plus has a higher-clocked Qualcomm Snapdragon 821 processor (2×2.35 GHz Kryo & 2×2.2 GHz Kryo) paired with the Adreno 530 GPU.
Battery
The battery has a capacity of 3800 mAh and supports 18 W Quick Charge 3.0 fast charging.
Camera
The smartphone has a dual 13 MP main camera (f/2.0 color + f/2.0 monochrome) with phase-detection autofocus and the ability to record video at 4K@30fps. The front camera has a resolution of 4 MP, an aperture of f/2.0 and the ability to record video at 1080p@30fps.
Screen
The screen is a 5.7-inch IPS panel with Full HD (1920 × 1080) resolution, a 16:9 aspect ratio and a pixel density of 386 ppi.
Memory
The smartphone was sold in 4 GB RAM / 64 GB storage and 6 GB RAM / 128 GB storage configurations.
Software
Xiaomi Mi 5s Plus was launched on MIUI 8 based on Android 6.0 Marshmallow. The global version of the firmware has been updated to MIUI 10 and the Chinese version to MIUI 11. Both are based on Android 8.0 Oreo.
Controversy
The rear fingerprint sensor and the lack of an integrated metal unibody design on the Mi 5s Plus were criticized by many netizens. Some technology media even remarked that, comparing the Mi 5s and Mi 5s Plus side by side, it is hard to believe they belong to the same series: despite being named as the "Plus" version of the Mi 5s, the phone shares almost nothing with it in appearance and design, which is its most controversial aspect.
The dual-camera image quality of the Mi 5s Plus dropped significantly compared to the Mi 5 and Mi 5s. DxOMark, a well-known French image-evaluation outlet, gave the Mi 5s Plus an overall score of 78 points, comprising 80 points for still photography and 74 points for video.
The Mi 5s Plus also launched without its NFC-based transit-card emulation service activated, prompting complaints from netizens; in addition, due to system scheduling issues, lag is more noticeable in high-load scenarios such as games.
References
Xiaomi smartphones
Mobile phones introduced in 2016
Android (operating system) devices
Mobile phones with multiple rear cameras
Mobile phones with 4K video recording
Discontinued flagship smartphones | Xiaomi Mi 5s Plus | Technology | 695 |
29,954 | https://en.wikipedia.org/wiki/Topology | Topology (from the Greek words , and ) is the branch of mathematics concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling, and bending; that is, without closing holes, opening holes, tearing, gluing, or passing through itself.
A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of topological spaces, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. The following are basic examples of topological properties: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles.
The ideas underlying topology go back to Gottfried Wilhelm Leibniz, who in the 17th century envisioned the geometria situs and the analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed.
Motivation
The motivating insight behind topology is that some geometric problems depend not on the exact shape of the objects involved, but rather on the way they are put together. For example, the square and the circle have many properties in common: they are both one dimensional objects (from a topological point of view) and both separate the plane into two parts, the part inside and the part outside.
In one of the first papers in topology, Leonhard Euler demonstrated that it was impossible to find a route through the town of Königsberg (now Kaliningrad) that would cross each of its seven bridges exactly once. This result did not depend on the lengths of the bridges or on their distance from one another, but only on connectivity properties: which bridges connect to which islands or riverbanks. This Seven Bridges of Königsberg problem led to the branch of mathematics known as graph theory.
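Euler's criterion is easy to state computationally: a connected multigraph admits a walk crossing every edge exactly once iff it has zero or two vertices of odd degree. A minimal Python sketch follows; the function name and the vertex labels for the banks and islands are one illustrative encoding.

```python
from collections import Counter

def has_eulerian_path(edges):
    """A connected undirected multigraph has a walk traversing every
    edge exactly once iff it has zero or two odd-degree vertices.
    (Connectivity is assumed and not checked here.)"""
    degree = Counter()
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

# The seven bridges of Konigsberg: banks A and B, islands C and D.
bridges = [("A", "C"), ("A", "C"), ("A", "D"),
           ("B", "C"), ("B", "C"), ("B", "D"), ("C", "D")]
print(has_eulerian_path(bridges))  # False: all four vertices have odd degree
```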
Similarly, the hairy ball theorem of algebraic topology says that "one cannot comb the hair flat on a hairy ball without creating a cowlick." This fact is immediately convincing to most people, even though they might not recognize the more formal statement of the theorem, that there is no nonvanishing continuous tangent vector field on the sphere. As with the Bridges of Königsberg, the result does not depend on the shape of the sphere; it applies to any kind of smooth blob, as long as it has no holes.
To deal with these problems that do not rely on the exact shape of the objects, one must be clear about just what properties these problems do rely on. From this need arises the notion of homeomorphism. The impossibility of crossing each bridge just once applies to any arrangement of bridges homeomorphic to those in Königsberg, and the hairy ball theorem applies to any space homeomorphic to a sphere.
Intuitively, two spaces are homeomorphic if one can be deformed into the other without cutting or gluing. A famous example, known as the "Topologist's Breakfast", is that a topologist cannot distinguish a coffee mug from a doughnut; a sufficiently pliable doughnut could be reshaped to a coffee cup by creating a dimple and progressively enlarging it, while shrinking the hole into a handle.
Homeomorphism can be considered the most basic topological equivalence. Another is homotopy equivalence. This is harder to describe without getting technical, but the essential notion is that two objects are homotopy equivalent if they both result from "squishing" some larger object.
History
Topology, as a well-defined mathematical discipline, originates in the early part of the twentieth century, but some isolated results can be traced back several centuries. Among these are certain questions in geometry investigated by Leonhard Euler. His 1736 paper on the Seven Bridges of Königsberg is regarded as one of the first practical applications of topology. On 14 November 1750, Euler wrote to a friend that he had realized the importance of the edges of a polyhedron. This led to his polyhedron formula, V − E + F = 2 (where V, E, and F respectively indicate the number of vertices, edges, and faces of the polyhedron). Some authorities regard this analysis as the first theorem, signaling the birth of topology.
Further contributions were made by Augustin-Louis Cauchy, Ludwig Schläfli, Johann Benedict Listing, Bernhard Riemann and Enrico Betti. Listing introduced the term "Topologie" in Vorstudien zur Topologie, written in his native German, in 1847, having used the word for ten years in correspondence before its first appearance in print. The English form "topology" was used in 1883 in Listing's obituary in the journal Nature to distinguish "qualitative geometry from the ordinary geometry in which quantitative relations chiefly are treated".
Their work was corrected, consolidated and greatly extended by Henri Poincaré. In 1895, he published his ground-breaking paper on Analysis Situs, which introduced the concepts now known as homotopy and homology, which are now considered part of algebraic topology.
Unifying the work on function spaces of Georg Cantor, Vito Volterra, Cesare Arzelà, Jacques Hadamard, Giulio Ascoli and others, Maurice Fréchet introduced the metric space in 1906. A metric space is now considered a special case of a general topological space, with any given topological space potentially giving rise to many distinct metric spaces. In 1914, Felix Hausdorff coined the term "topological space" and gave the definition for what is now called a Hausdorff space. Currently, a topological space is a slight generalization of Hausdorff spaces, given in 1922 by Kazimierz Kuratowski.
Modern topology depends strongly on the ideas of set theory, developed by Georg Cantor in the later part of the 19th century. In addition to establishing the basic ideas of set theory, Cantor considered point sets in Euclidean space as part of his study of Fourier series. For further developments, see point-set topology and algebraic topology.
The 2022 Abel Prize was awarded to Dennis Sullivan "for his groundbreaking contributions to topology in its broadest sense, and in particular its algebraic, geometric and dynamical aspects".
Concepts
Topologies on sets
The term topology also refers to a specific mathematical idea central to the area of mathematics called topology. Informally, a topology describes how elements of a set relate spatially to each other. The same set can have different topologies. For instance, the real line, the complex plane, and the Cantor set can be thought of as the same set with different topologies.
Formally, let X be a set and let τ be a family of subsets of X. Then τ is called a topology on X if:
Both the empty set and X are elements of τ.
Any union of elements of τ is an element of τ.
Any intersection of finitely many elements of τ is an element of τ.
If τ is a topology on X, then the pair (X, τ) is called a topological space. The notation Xτ may be used to denote a set X endowed with the particular topology τ. By definition, every topology is a π-system.
The members of τ are called open sets in X. A subset of X is said to be closed if its complement is in τ (that is, its complement is open). A subset of X may be open, closed, both (a clopen set), or neither. The empty set and X itself are always both closed and open. An open subset of X which contains a point x is called an open neighborhood of x.
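On a finite set, these axioms can be checked directly, since for a finite family closure under pairwise unions and intersections implies the full axioms. A small Python sketch, with an illustrative example; the function name is arbitrary.

```python
def is_topology(X, T):
    """Check the topology axioms for a family T of subsets of a finite
    set X. On a finite family, closure under pairwise unions and
    pairwise intersections suffices."""
    X = frozenset(X)
    T = {frozenset(s) for s in T}
    if frozenset() not in T or X not in T:
        return False                    # the empty set and X belong to T
    for A in T:
        for B in T:
            if A | B not in T:          # closure under unions
                return False
            if A & B not in T:          # closure under finite intersections
                return False
    return True

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))   # True
print(is_topology(X, [set(), {1}, {2}, X]))      # False: {1} | {2} is missing
```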
Continuous functions and homeomorphisms
A function or map from one topological space to another is called continuous if the inverse image of any open set is open. If the function maps the real numbers to the real numbers (both spaces with the standard topology), then this definition of continuous is equivalent to the definition of continuous in calculus. If a continuous function is one-to-one and onto, and if the inverse of the function is also continuous, then the function is called a homeomorphism and the domain of the function is said to be homeomorphic to the range. Another way of saying this is that the function has a natural extension to the topology. If two spaces are homeomorphic, they have identical topological properties, and are considered topologically the same. The cube and the sphere are homeomorphic, as are the coffee cup and the doughnut. However, the sphere is not homeomorphic to the doughnut.
Manifolds
While topological spaces can be extremely varied and exotic, many areas of topology focus on the more familiar class of spaces known as manifolds. A manifold is a topological space that resembles Euclidean space near each point. More precisely, each point of an n-dimensional manifold has a neighborhood that is homeomorphic to the Euclidean space of dimension n. Lines and circles, but not figure eights, are one-dimensional manifolds. Two-dimensional manifolds are also called surfaces, although not all surfaces are manifolds. Examples include the plane, the sphere, and the torus, which can all be realized without self-intersection in three dimensions, and the Klein bottle and real projective plane, which cannot (that is, all their realizations are surfaces that are not manifolds).
Topics
General topology
General topology is the branch of topology dealing with the basic set-theoretic definitions and constructions used in topology. It is the foundation of most other branches of topology, including differential topology, geometric topology, and algebraic topology. Another name for general topology is point-set topology.
The basic object of study is topological spaces, which are sets equipped with a topology, that is, a family of subsets, called open sets, which is closed under finite intersections and (finite or infinite) unions. The fundamental concepts of topology, such as continuity, compactness, and connectedness, can be defined in terms of open sets. Intuitively, continuous functions take nearby points to nearby points. Compact sets are those that can be covered by finitely many sets of arbitrarily small size. Connected sets are sets that cannot be divided into two pieces that are far apart. The words nearby, arbitrarily small, and far apart can all be made precise by using open sets. Several topologies can be defined on a given space. Changing a topology consists of changing the collection of open sets. This changes which functions are continuous and which subsets are compact or connected.
Metric spaces are an important class of topological spaces where the distance between any two points is defined by a function called a metric. In a metric space, an open set is a union of open disks, where an open disk of radius r centered at x is the set of all points whose distance to x is less than r. Many common spaces are topological spaces whose topology can be defined by a metric. This is the case for the real line, the complex plane, real and complex vector spaces and Euclidean spaces. Having a metric simplifies many proofs.
Algebraic topology
Algebraic topology is a branch of mathematics that uses tools from algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence.
The most important of these invariants are homotopy groups, homology, and cohomology.
Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group.
Differential topology
Differential topology is the field dealing with differentiable functions on differentiable manifolds. It is closely related to differential geometry and together they make up the geometric theory of differentiable manifolds.
More specifically, differential topology considers the properties and structures that require only a smooth structure on a manifold to be defined. Smooth manifolds are "softer" than manifolds with extra geometric structures, which can act as obstructions to certain types of equivalences and deformations that exist in differential topology. For instance, volume and Riemannian curvature are invariants that can distinguish different geometric structures on the same smooth manifold; that is, one can smoothly "flatten out" certain manifolds, but it might require distorting the space and affecting the curvature or volume.
Geometric topology
Geometric topology is a branch of topology that primarily focuses on low-dimensional manifolds (that is, spaces of dimensions 2, 3, and 4) and their interaction with geometry, but it also includes some higher-dimensional topology. Some examples of topics in geometric topology are orientability, handle decompositions, local flatness, crumpling and the planar and higher-dimensional Schönflies theorem.
In high-dimensional topology, characteristic classes are a basic invariant, and surgery theory is a key theory.
Low-dimensional topology is strongly geometric, as reflected in the uniformization theorem in 2 dimensions – every surface admits a constant curvature metric; geometrically, it has one of 3 possible geometries: positive curvature/spherical, zero curvature/flat, and negative curvature/hyperbolic – and the geometrization conjecture (now theorem) in 3 dimensions – every 3-manifold can be cut into pieces, each of which has one of eight possible geometries.
2-dimensional topology can be studied as complex geometry in one variable (Riemann surfaces are complex curves) – by the uniformization theorem every conformal class of metrics is equivalent to a unique complex one, and 4-dimensional topology can be studied from the point of view of complex geometry in two variables (complex surfaces), though not every 4-manifold admits a complex structure.
Generalizations
Occasionally, one needs to use the tools of topology but a "set of points" is not available. In pointless topology one considers instead the lattice of open sets as the basic notion of the theory, while Grothendieck topologies are structures defined on arbitrary categories that allow the definition of sheaves on those categories, and with that the definition of general cohomology theories.
Applications
Biology
Topology has been used to study various biological systems including molecules and nanostructure (e.g., membraneous objects). In particular, circuit topology and knot theory have been extensively applied to classify and compare the topology of folded proteins and nucleic acids. Circuit topology classifies folded molecular chains based on the pairwise arrangement of their intra-chain contacts and chain crossings. Knot theory, a branch of topology, is used in biology to study the effects of certain enzymes on DNA. These enzymes cut, twist, and reconnect the DNA, causing knotting with observable effects such as slower electrophoresis.
Computer science
Topological data analysis uses techniques from algebraic topology to determine the large-scale structure of a set (for instance, determining if a cloud of points is spherical or toroidal). The main method used by topological data analysis is as follows (a toy sketch follows the list):
Replace a set of data points with a family of simplicial complexes, indexed by a proximity parameter.
Analyse these topological complexes via algebraic topology – specifically, via the theory of persistent homology.
Encode the persistent homology of a data set in the form of a parameterized version of a Betti number, which is called a barcode.
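As a toy illustration of the lowest-dimensional piece of this pipeline, the sketch below computes the number of connected components (the 0th Betti number) of a point cloud at one proximity parameter, using a union-find structure; tracking how this count changes as the parameter grows yields the dimension-0 barcode. The function name is illustrative, and real analyses use dedicated persistent-homology libraries.

```python
import math

def betti0(points, epsilon):
    """0th Betti number (number of connected components) of the graph
    joining points at distance <= epsilon, via union-find."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= epsilon:
                parent[find(i)] = find(j)   # merge the two components
    return len({find(i) for i in range(len(points))})

points = [(0, 0), (0.5, 0), (5, 5)]
print(betti0(points, 1.0))  # 2: two nearby points merge, the third is isolated
```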
Several branches of programming language semantics, such as domain theory, are formalized using topology. In this context, Steve Vickers, building on work by Samson Abramsky and Michael B. Smyth, characterizes topological spaces as Boolean or Heyting algebras over open sets, which are characterized as semidecidable (equivalently, finitely observable) properties.
Physics
Topology is relevant to physics in areas such as condensed matter physics, quantum field theory and physical cosmology.
The topological dependence of mechanical properties in solids is of interest in disciplines of mechanical engineering and materials science. Electrical and mechanical properties depend on the arrangement and network structures of molecules and elementary units in materials. The compressive strength of crumpled topologies is studied in attempts to understand the high strength to weight of such structures that are mostly empty space. Topology is of further significance in Contact mechanics where the dependence of stiffness and friction on the dimensionality of surface structures is the subject of interest with applications in multi-body physics.
A topological quantum field theory (or topological field theory or TQFT) is a quantum field theory that computes topological invariants.
Although TQFTs were invented by physicists, they are also of mathematical interest, being related to, among other things, knot theory, the theory of four-manifolds in algebraic topology, and to the theory of moduli spaces in algebraic geometry. Donaldson, Jones, Witten, and Kontsevich have all won Fields Medals for work related to topological field theory.
The topological classification of Calabi–Yau manifolds has important implications in string theory, as different manifolds can sustain different kinds of strings.
In cosmology, topology can be used to describe the overall shape of the universe. This area of research is commonly known as spacetime topology.
In condensed matter, a relevant application of topological physics comes from the possibility of obtaining one-way currents, which are currents protected from backscattering. This was first discovered in electronics with the famous quantum Hall effect, and then generalized to other areas of physics, for instance in photonics by F. D. M. Haldane.
Robotics
The possible positions of a robot can be described by a manifold called configuration space. In the area of motion planning, one finds paths between two points in configuration space. These paths represent a motion of the robot's joints and other parts into the desired pose.
Games and puzzles
Disentanglement puzzles are based on topological aspects of the puzzle's shapes and components.
Fiber art
In order to create a continuous join of pieces in a modular construction, it is necessary to create an unbroken path in an order which surrounds each piece and traverses each edge only once. This process is an application of the Eulerian path.
Resources and research
Major journals
Geometry & Topology - a mathematics research journal focused on geometry and topology, and their applications, published by Mathematical Sciences Publishers.
Journal of Topology - a scientific journal which publishes papers of high quality and significance in topology, geometry, and adjacent areas of mathematics.
Major books
Munkres, James R. (2000). Topology (2nd ed.). Upper Saddle River, NJ: Prentice Hall.
Willard, Stephen (2016). General topology. Dover books on mathematics. Mineola, N.Y: Dover publications.
Armstrong, M. A. (1983). Basic topology. Undergraduate texts in mathematics. New York: Springer-Verlag.
See also
Characterizations of the category of topological spaces
Equivariant topology
List of algebraic topology topics
List of examples in general topology
List of general topology topics
List of geometric topology topics
List of topology topics
Publications in topology
Topoisomer
Topology glossary
Topological Galois theory
Topological geometry
Topological order
References
Citations
Bibliography
Further reading
Ryszard Engelking, General Topology, Heldermann Verlag, Sigma Series in Pure Mathematics, December 1989, .
Bourbaki; Elements of Mathematics: General Topology, Addison–Wesley (1966).
(Provides a well motivated, geometric account of general topology, and shows the use of groupoids in discussing van Kampen's theorem, covering spaces, and orbit spaces.)
Wacław Sierpiński, General Topology, Dover Publications, 2000,
(Provides a popular introduction to topology and geometry)
External links
Elementary Topology: A First Course, by Viro, Ivanov, Netsvetaev and Kharlamov.
The Topological Zoo at The Geometry Center.
Topology Atlas
Topology Course Lecture Notes Aisling McCluskey and Brian McMaster, Topology Atlas.
Topology Glossary
Moscow 1935: Topology moving towards America, a historical essay by Hassler Whitney.
Mathematical structures | Topology | Physics,Mathematics | 4,093 |
420,764 | https://en.wikipedia.org/wiki/Integrated%20pest%20management | Integrated pest management (IPM), also known as integrated pest control (IPC) that integrates both chemical and non-chemical practices for economic control of pests. The UN's Food and Agriculture Organization defines IPM as "the careful consideration of all available pest control techniques and subsequent integration of appropriate measures that discourage the development of pest populations and keep pesticides and other interventions to levels that are economically justified and reduce or minimize risks to human health and the environment. IPM emphasizes the growth of a healthy crop with the least possible disruption to agro-ecosystems and encourages natural pest control mechanisms." Entomologists and ecologists have urged the adoption of IPM pest control since the 1970s. IPM is a safer pest control framework than reliance on the use of chemical pesticides, mitigating risks such as: insecticide-induced resurgence, pesticide resistance and (especially food) crop residues.
History
Shortly after World War II, when synthetic insecticides were introduced, entomologists in California developed the concept of "supervised insect control". Around the same time, entomologists in the US Cotton Belt were advocating a similar approach. Under this scheme, insect control was "supervised" by qualified entomologists and insecticide applications were based on conclusions reached from periodic monitoring of pest and natural-enemy populations. This was viewed as an alternative to calendar-based programs. Supervised control was based on knowledge of the ecology and analysis of projected trends in pest and natural-enemy populations.
Supervised control formed much of the conceptual basis for the "integrated control" that University of California entomologists articulated in the 1950s. Integrated control sought to identify the best mix of chemical and biological controls for a given insect pest. Chemical insecticides were to be used in the manner least disruptive to biological control. The term "integrated" was thus synonymous with "compatible." Chemical controls were to be applied only after regular monitoring indicated that a pest population had reached a level that required treatment (the economic threshold) to prevent the population from reaching a level at which economic losses would exceed the cost of the control measures (the economic injury level).
IPM extended the concept of integrated control to all classes of pests and was expanded to include all tactics. Controls such as pesticides were to be applied as in integrated control, but these now had to be compatible with tactics for all classes of pests. Other tactics, such as host-plant resistance and cultural manipulations, became part of the IPM framework. IPM combined entomologists, plant pathologists, nematologists and weed scientists.
In the United States, IPM was formulated into national policy in February 1972 as directed by President Richard Nixon. In 1979, President Jimmy Carter established an interagency IPM Coordinating Committee to ensure development and implementation of IPM practices.
Perry Adkisson and Ray F. Smith received the 1997 World Food Prize for encouraging the use of IPM.
Applications
IPM is used in agriculture, horticulture, forestry, human habitations, preventive conservation of cultural property and general pest control, including structural pest management, turf pest management and ornamental pest management. IPM practices help to prevent and slow the development of resistance, known as resistance management.
Principles
An American IPM system is designed around six basic components:
Acceptable pest levels—The emphasis is on control, not eradication. IPM holds that wiping out an entire pest population is often impossible, and the attempt can be expensive and unsafe. IPM programmes first work to establish acceptable pest levels, called action thresholds, and apply controls if those thresholds are crossed. These thresholds are pest and site specific, meaning that it may be acceptable at one site to have a weed such as white clover, but not at another site. Allowing a pest population to survive at a reasonable threshold reduces selection pressure. This lowers the rate at which a pest develops resistance to a control, because if almost all pests are killed then those that have resistance will provide the genetic basis of the future population. Retaining a significant number of unresistant specimens dilutes the prevalence of any resistant genes that appear. Similarly, the repeated use of a single class of controls will create pest populations that are more resistant to that class, whereas alternating among classes helps prevent this.
Preventive cultural practices—Selecting varieties best for local growing conditions and maintaining healthy crops is the first line of defense. Plant quarantine and 'cultural techniques' such as crop sanitation are next, e.g., removal of diseased plants, and cleaning pruning shears to prevent spread of infections. Beneficial fungi and bacteria are added to the potting media of horticultural crops vulnerable to root diseases, greatly reducing the need for fungicides.
Monitoring—Regular observation is critically important. Observation is broken into inspection and identification. Visual inspection, insect and spore traps, and other methods are used to monitor pest levels. Record-keeping is essential, as is a thorough knowledge of target pest behavior and reproductive cycles. Since insects are cold-blooded, their physical development is dependent on area temperatures. Many insects have had their development cycles modeled in terms of degree-days (see the degree-day sketch after this list). The degree-days of an environment determine the optimal time for a specific insect outbreak. Plant pathogens follow similar patterns of response to weather and season. Automated systems based on AI have been developed to identify and monitor flies using e-trapping devices.
Mechanical controls—Should a pest reach an unacceptable level, mechanical methods are the first options. They include simple hand-picking, barriers, traps, vacuuming and tillage to disrupt breeding.
Biological controls—Natural biological processes and materials can provide control, with acceptable environmental impact, and often at lower cost. The main approach is to promote beneficial insects that eat or parasitize target pests. Biological insecticides, derived from naturally occurring microorganisms (e.g. Bt, entomopathogenic fungi and entomopathogenic nematodes), also fall in this category. Further 'biology-based' or 'ecological' techniques are under evaluation.
Responsible use—Synthetic pesticides are used as required and often only at specific times in a pest's life cycle. Many newer pesticides are derived from plants or naturally occurring substances (e.g. nicotine, pyrethrum and insect juvenile hormone analogues), but the toxophore or active component may be altered to provide increased biological activity or stability. Applications of pesticides must reach their intended targets. Matching the application technique to the crop, the pest, and the pesticide is critical; for example, the use of low-volume spray equipment can considerably reduce overall pesticide use and operational costs.
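A minimal sketch of the degree-day bookkeeping mentioned under Monitoring above. The base temperature of 10 °C, the 250 degree-day requirement, and the daily records are all hypothetical values for illustration; real thresholds are species-specific and taken from published development models.

```python
def degree_days(t_min, t_max, base_temp):
    """Single-day degree-day contribution using the simple averaging
    method: development accrues only when the mean daily temperature
    exceeds the insect's lower developmental (base) temperature."""
    return max(0.0, (t_min + t_max) / 2.0 - base_temp)

# Hypothetical daily (min, max) temperatures in degrees Celsius.
daily_temps = [(10, 24), (12, 26), (9, 21), (14, 28)]
BASE, REQUIRED = 10.0, 250.0  # assumed thresholds, species-specific in practice

total = 0.0
for tmin, tmax in daily_temps:
    total += degree_days(tmin, tmax, BASE)
print(f"accumulated: {total:.1f} DD of the {REQUIRED} DD outbreak requirement")
```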
Although originally developed for agricultural pest management, IPM programmes now encompass diseases, weeds and other pests that interfere with management objectives for sites such as residential and commercial structures, lawn and turf areas, and home and community gardens. Predictive models have proved to be suitable tools supporting the implementation of IPM programmes.
Process
IPM is the selection and use of pest control actions that will ensure favourable economic, ecological and social consequences; it is applicable to most agricultural, public health and amenity pest management situations. The IPM process starts with monitoring, which includes inspection and identification, followed by the establishment of economic injury levels. The economic injury level is the pest population level at which crop damage exceeds the cost of treating the pest, and it sets the economic threshold level. An action threshold can serve the same purpose where the unacceptable level is not tied to economic injury; action thresholds are more common in structural pest management, and economic injury levels in classic agricultural pest management. For example, one fly in a hospital operating room is not acceptable, whereas one fly in a pet kennel would be.

Once the pest population crosses a threshold, action steps need to be taken to reduce and control the pest. Integrated pest management employs a variety of actions, including cultural controls such as physical barriers, biological controls such as adding and conserving natural predators and enemies of the pest, and finally chemical controls or pesticides. Reliance on knowledge, experience, observation and integration of multiple techniques makes IPM appropriate for organic farming (excluding synthetic pesticides); the materials used may or may not be listed by the Organic Materials Review Institute (OMRI). Although the pesticides, and particularly the insecticides, used in organic farming and organic gardening are generally safer than synthetic pesticides, they are not always safer or more environmentally friendly and can cause harm. For conventional farms, IPM can reduce human and environmental exposure to hazardous chemicals and potentially lower overall costs.
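A minimal sketch of the threshold logic described above. The weekly density readings and the threshold of 5 pests per plant are hypothetical; in practice the economic threshold is derived from crop value, control costs and pest biology.

```python
def should_intervene(pest_density, economic_threshold):
    """Action is taken only when the monitored pest density crosses the
    economic threshold, which is set below the economic injury level so
    that controls take effect before damage exceeds its own cost."""
    return pest_density >= economic_threshold

# Hypothetical monitoring record, in pests per plant.
for week, density in enumerate([1.2, 2.8, 4.1, 5.6], start=1):
    if should_intervene(density, economic_threshold=5.0):
        print(f"week {week}: density {density} - apply controls")
    else:
        print(f"week {week}: density {density} - keep monitoring")
```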
Risk assessment usually includes four issues: 1) characterization of biological control agents, 2) health risks, 3) environmental risks and 4) efficacy.
Mistaken identification of a pest may result in ineffective actions. For example, plant damage due to over-watering could be mistaken for a fungal infection, since many fungal and viral infections arise under moist conditions.
Monitoring begins immediately, before the pest's activity becomes significant. Monitoring of agricultural pests includes tracking soil/planting media fertility and water quality. Overall plant health and resistance to pests are greatly influenced by pH, alkalinity, dissolved minerals and oxidation–reduction potential. Many diseases are waterborne, spread directly by irrigation water and indirectly by splashing.
Once the pest is known, knowledge of its lifecycle provides the optimal intervention points. For example, weeds reproducing from last year's seed can be prevented with mulches and pre-emergent herbicide.
Pest-tolerant crops such as soybeans may not warrant interventions unless the pests are numerous or rapidly increasing. Intervention is warranted if the expected cost of damage by the pest is more than the cost of control. Health hazards may require intervention that is not warranted by economic considerations.
Specific sites may also have varying requirements. For example, white clover may be acceptable on the sides of a tee box on a golf course, but unacceptable in the fairway where it could confuse the field of play.
Possible interventions include mechanical/physical, cultural, biological and chemical. Mechanical/physical controls include picking pests off plants, or using netting or other material to exclude pests such as birds from grapes or rodents from structures. Cultural controls include keeping an area free of conducive conditions by removing waste or diseased plants, flooding, sanding, and the use of disease-resistant crop varieties. Biological controls are numerous; they include the conservation or augmentation of natural predators and the sterile insect technique (SIT).
Augmentation, inoculative release and inundative release are different methods of biological control that affect the target pest in different ways. Augmentative control includes the periodic introduction of predators. With inundative release, predators are collected, mass-reared and periodically released in large numbers into the pest area. This is used for an immediate reduction in host populations, generally for annual crops, but is not suitable for long-run use. With inoculative release, a limited number of beneficial organisms are introduced at the start of the growing season. This strategy offers long-term control, as the organism's progeny affect pest populations throughout the season, and is common in orchards. With seasonal inoculative release, the beneficials are collected, mass-reared and released seasonally to maintain the beneficial population. This is commonly used in greenhouses. In America and other western countries, inundative releases are predominant, while Asia and eastern Europe more commonly use inoculation and occasional introductions.
The sterile insect technique (SIT) is an area-wide IPM program that introduces sterile male pests into the pest population to trick females into (unsuccessful) breeding encounters, providing a form of birth control and reducing reproduction rates. The biological controls mentioned above are only appropriate in extreme cases, because the introduction of new species, or the supplementation of naturally occurring species, can have detrimental ecosystem effects. Biological controls can be used to stop invasive species or pests, but they can become an introduction path for new pests.
Chemical controls include horticultural oils or the application of insecticides and herbicides. A green pest management IPM program uses pesticides derived from plants, such as botanicals, or other naturally occurring materials.
Pesticides can be classified by their modes of action. Rotating among materials with diverse modes of action minimizes pest resistance.
Evaluation is the process of assessing whether the intervention was effective, whether it produced unacceptable side effects, and whether to continue, revise or abandon the program.
Southeast Asia
The Green Revolution of the 1960s and '70s introduced sturdier plants that could support the heavier grain loads resulting from intensive fertilizer use. Pesticide imports by 11 Southeast Asian countries grew nearly sevenfold in value between 1990 and 2010, according to FAO statistics, with disastrous results. Rice farmers became accustomed to spraying soon after planting, triggered by signs of the leaf folder moth, which appears early in the growing season but causes only superficial damage and does not reduce yields. In 1986, Indonesia banned 57 pesticides and completely stopped subsidizing their use. Progress was reversed in the 2000s, when growing production capacity, particularly in China, reduced prices. Rice production in Asia more than doubled, but the boom left farmers believing that more is better, whether of seed, fertilizer or pesticides.
The brown planthopper, Nilaparvata lugens, the farmers' main target, has become increasingly resistant. Since 2008, outbreaks have devastated rice harvests throughout Asia, but not in the Mekong Delta. Reduced spraying allowed natural predators to neutralize planthoppers in Vietnam. In 2010 and 2011, massive planthopper outbreaks hit 400,000 hectares of Thai rice fields, causing losses of about $64 million. The Thai government is now pushing the "no spray in the first 40 days" approach.
By contrast, early spraying kills the frogs, spiders, wasps and dragonflies that prey on the later-arriving and more dangerous planthopper, and it has produced resistant strains. Planthoppers now require pesticide doses 500 times greater than originally. Overuse indiscriminately kills beneficial insects and decimates bird and amphibian populations. Pesticides are suspected of harming human health and became a common means of suicide in rural Asia.
In 2001, 950 Vietnamese farmers tried IPM. In one plot, each farmer grew rice using their usual amounts of seed and fertilizer, applying pesticide as they chose. In a nearby plot, less seed and fertilizer were used and no pesticides were applied for 40 days after planting. Yields from the experimental plots were as good or better and costs were lower, generating 8% to 10% more net income. The experiment led to the "three reductions, three gains" campaign, claiming that cutting the use of seed, fertilizer and pesticide would boost yield, quality and income. The campaign used posters, leaflets, TV commercials and a 2004 radio soap opera that featured a rice farmer who gradually accepted the changes. It did not hurt that a 2006 planthopper outbreak hit farmers using insecticides harder than those who did not. Mekong Delta farmers cut insecticide spraying from five times per crop cycle to zero to one.
The Plant Protection Center and the International Rice Research Institute (IRRI) have been encouraging farmers to grow flowers, okra, and beans on rice paddy banks, instead of stripping vegetation, as was typical. The plants attract bees and wasps that eat planthopper eggs, while the vegetables diversify farm incomes.
Agriculture companies offer bundles of pesticides with seeds and fertilizer, with incentives for volume purchases. A proposed law in Vietnam requires licensing pesticide dealers and government approval of advertisements to prevent exaggerated claims. Insecticides that target other pests, such as Scirpophaga incertulas (the stem borer, a moth whose larvae feed on rice plants), allegedly yield gains of 21% with proper use.
See also
References
Further reading
Jahn, GC, PG Cox., E Rubia-Sanchez, and M Cohen 2001. The quest for connections: developing a research agenda for integrated pest and nutrient management. pp. 413–430, In S. Peng and B. Hardy [eds.] "Rice Research for Food Security and Poverty Alleviation." Proceedings of the International Rice Research Conference, 31 March – 3 April 2000, Los Baños, Philippines. Los Baños (Philippines): International Rice Research Institute. 692 p.
Jahn, GC, B. Khiev, C Pol, N. Chhorn and V Preap 2001. Sustainable pest management for rice in Cambodia. In P. Cox and R Chhay [eds.] "The Impact of Agricultural Research for Development in Southeast Asia" Proceedings of an International Conference held at the Cambodian Agricultural Research and Development Institute, Phnom Penh, Cambodia, 24-26 Oct. 2000, Phnom Penh (Cambodia): CARDI.
Nonveiller, Guido 1984. Catalogue commenté et illustré des insectes du Cameroun d'intérêt agricole : (apparitions, répartition, importance) / University of Belgrade/Institut pour la protection des plantes
Regnault-Roger, Catherine; Philogene, Bernard JR (2008) Past and Current Prospects for the use of Botanicals and Plant allelochemicals in Integrated Pest Management. Pharm. Bio. 46(1–2): 41–52
Acosta, EW (2006) (IPM). Biocontrol Reference Center.
Surendra K Dara, "The New Integrated Pest Management Paradigm for the Modern Age", Journal of Integrated Pest Management, Volume 1, Issue 1, 2019, 12.
External links
Introduction to Integrated Pest Management via EPA
Agronomy
Biological pest control
Pest control techniques
Phytopathology
Soil chemistry | Integrated pest management | Chemistry | 3,648 |
3,074,135 | https://en.wikipedia.org/wiki/Orbital%20plane | The orbital plane of a revolving body is the geometric plane in which its orbit lies. Three non-collinear points in space suffice to determine an orbital plane. A common example would be the positions of the centers of a massive body (host) and of an orbiting celestial body at two different times/points of its orbit.
The orbital plane is defined in relation to a reference plane by two parameters: inclination (i) and longitude of the ascending node (Ω).
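A small illustration of how these two parameters fix the plane, written as a sketch rather than taken from the article's sources. It assumes the standard frame convention: the xy-plane is the reference plane and the x-axis points toward the reference direction.

```python
import numpy as np

def orbital_plane_normal(inclination_deg, node_deg):
    """Unit normal of an orbital plane given the inclination i and the
    longitude of the ascending node Omega. The normal is the direction
    of the orbital angular momentum: tilt the reference-plane normal
    (the z-axis) by i about the line of nodes set by Omega."""
    i = np.radians(inclination_deg)
    node = np.radians(node_deg)
    return np.array([np.sin(i) * np.sin(node),
                     -np.sin(i) * np.cos(node),
                     np.cos(i)])

print(orbital_plane_normal(0, 0))   # equatorial orbit: normal equals the z-axis
print(orbital_plane_normal(90, 0))  # polar orbit: normal lies in the reference plane
```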
The reference plane for the Solar System is usually considered to be Earth's orbital plane, which defines the ecliptic, the circular path on the celestial sphere that the Sun appears to follow over the course of a year.
In other cases, for instance for a moon or artificial satellite orbiting another planet, it is convenient to define the inclination of its orbit as the angle between its orbital plane and the planet's equatorial plane.
The coordinate system that uses the orbital plane as its fundamental plane is known as the perifocal coordinate system.
Artificial satellites around the Earth
For launch vehicles and artificial satellites, the orbital plane is a defining parameter of an orbit; as in general, it will take a very large amount of propellant to change the orbital plane of an object. Other parameters, such as the orbital period, the eccentricity of the orbit and the phase of the orbit are more easily changed by propulsion systems.
Orbital planes of satellites are perturbed by the non-spherical nature of the Earth's gravity. This causes the orbital plane of the satellite's orbit to slowly rotate around the Earth, depending on the angle the plane makes with the Earth's equator. For planes that are at a critical angle this can mean that the plane will track the Sun around the Earth, forming a Sun-synchronous orbit.
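A back-of-the-envelope sketch of this precession, assuming the standard first-order J2 secular rate and nominal Earth constants; this is an illustrative model, not a flight-dynamics tool, and the values are not taken from the article itself.

```python
import numpy as np

# Nominal Earth constants
MU = 3.986004418e14   # gravitational parameter, m^3/s^2
RE = 6378137.0        # equatorial radius, m
J2 = 1.08263e-3       # oblateness coefficient

def nodal_precession(a, e, i):
    """Secular rotation rate of the orbital plane (rad/s) caused by the
    J2 term of Earth's non-spherical gravity field."""
    n = np.sqrt(MU / a**3)  # mean motion
    return -1.5 * J2 * n * (RE / a)**2 * np.cos(i) / (1 - e**2)**2

# A Sun-synchronous plane must precess once per year (~0.9856 deg/day).
target = 2 * np.pi / (365.2422 * 86400.0)
a = RE + 700e3  # circular orbit at 700 km altitude
i = np.arccos(target / (-1.5 * J2 * np.sqrt(MU / a**3) * (RE / a)**2))
print(np.degrees(i))  # ~98 degrees: a retrograde, near-polar orbit
```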
A launch vehicle's launch window is usually determined by the times when the target orbital plane intersects the launch site.
See also
Earth-centered inertial coordinate system
ECEF, Earth-Centered Earth-fixed coordinate system
Invariable plane, a weighted average of all orbital planes in a system
Orbital elements
Orbital state vectors
Perifocal coordinate system
References
Plane
Planes (geometry) | Orbital plane | Mathematics | 448 |
13,615,347 | https://en.wikipedia.org/wiki/Latent%20semantic%20structure%20indexing | Latent semantic structure indexing (LaSSI) is a technique for calculating chemical similarity derived from latent semantic analysis (LSA).
LaSSI was developed at Merck & Co. and patented in 2007 by Richard Hull, Eugene Fluder, Suresh Singh, Robert Sheridan, Robert Nachbar and Simon Kearsley.
Overview
LaSSI is similar to LSA in that it involves the construction of an occurrence matrix from a corpus of items and the application of singular value decomposition to that matrix to derive latent features. What differs is that the occurrence matrix represents the frequency of two- and three-dimensional chemical descriptors (rather than natural language terms) found within a chemical database of chemical structures. This process derives latent chemical structure concepts that can be used to calculate chemical similarities and structure–activity relationships for drug discovery.
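A toy sketch of the pipeline described above, using NumPy. The 4x4 occurrence matrix and the choice of two latent dimensions are invented for illustration; real descriptor-by-compound matrices are far larger and sparser, and the descriptors are structural features rather than the arbitrary counts used here.

```python
import numpy as np

# Hypothetical occurrence matrix: rows are chemical descriptors, columns
# are compounds, entries are descriptor frequencies.
occ = np.array([[3, 0, 1, 0],
                [0, 2, 0, 2],
                [1, 0, 4, 0],
                [0, 1, 0, 3]], dtype=float)

# Truncated SVD projects compounds into a low-rank latent-structure space.
U, s, Vt = np.linalg.svd(occ, full_matrices=False)
k = 2                                    # latent dimensions kept
compounds = (np.diag(s[:k]) @ Vt[:k]).T  # one latent vector per compound

def similarity(a, b):
    """Cosine similarity between two compounds in latent space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(similarity(compounds[0], compounds[2]))  # descriptor profiles alike: high
print(similarity(compounds[0], compounds[1]))  # disjoint profiles: low
```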
References
Hull, R.D., Fluder, E.M., Singh, S.B., Nachbar, R.B., Sheridan, R.P. and Kearsley, S.K. (2001) "Latent semantic structure indexing (LaSSI) for defining chemical similarity." J Med Chem, 2001 Apr 12;44(8):1177–84.
Hull, R.D., Singh, S.B., Nachbar, R.B., Sheridan, R.P., Kearsley, S.K. and Fluder, E.M. (2001) "Chemical similarity searches using latent semantic structure indexing (LaSSI) and comparison to TOPOSIM." J Med Chem, 2001 Apr 12;44(8):1185–91.
Singh, S.B., Sheridan, R.P., Fluder, E.M. and Hull, R.D. (2001) "Mining the chemical quarry with joint chemical probes: an application of latent semantic structure indexing (LaSSI) and TOPOSIM (Dice) to chemical database mining." J Med Chem, 2001 May 10;44(10):1564–75.
Cheminformatics
Drug discovery | Latent semantic structure indexing | Chemistry,Biology | 440 |
13,721,358 | https://en.wikipedia.org/wiki/The%20Handle | The Handle is an electric guitar created by designer Peter Solomon and produced by the company XOX Audio Tools.
The Handle is characterized by its hollow shell, whose construction favors direct transmission of acoustic vibrations and creates a resonance chamber similar to that of a semi-acoustic guitar.
The Handle is made from carbon fiber.
A baritone version has also been produced, called the "Billytone" after Billy Sheehan.
See also
Electric guitar
Industrial design
References
Product design
Electric guitars | The Handle | Engineering | 92 |
59,579,913 | https://en.wikipedia.org/wiki/Cell%20unroofing | Cell unroofing is any of various methods to isolate and expose the cell membrane of cells. Differently from the more common membrane extraction protocols performed with multiple steps of centrifugation (which goal is to separate the membrane fraction from a cell lysate), in cell unroofing the aim is to tear and preserve patches of the plasma membrane in order to perform in situ experiments using (microscopy and biomedical spectroscopy).
History
The first observation of the bilayer cell membrane was made in 1959 on a section of a cell using the electron microscope.
The first micrograph of the internal side of a cell, however, dates to 1977 and is due to M.V. Nermut. Professor John Heuser made substantial contributions to the field, imaging the detailed internal structure of the membrane and the cytoskeleton bound to it with extensive use of the electron microscope.
It was only after the development of the atomic force microscope operated in liquid that it became possible to image cell membranes in near-physiological conditions and to test their mechanical properties.
Methods
Freeze-fracturing of monolayers
Quick-freeze deep-etch electron microscopy and cryofixation
Sonication for atomic force microscopy
Single-cell unroofing
See also
Sonoporation
Lysis
References
Cell biology
Scientific techniques | Cell unroofing | Biology | 259 |
1,819,715 | https://en.wikipedia.org/wiki/Marine%20geology | Marine geology or geological oceanography is the study of the history and structure of the ocean floor. It involves geophysical, geochemical, sedimentological and paleontological investigations of the ocean floor and coastal zone. Marine geology has strong ties to geophysics and to physical oceanography.
Marine geological studies were of extreme importance in providing the critical evidence for sea floor spreading and plate tectonics in the years following World War II. The deep ocean floor is the last essentially unexplored frontier, and detailed mapping is pursued in support of economic objectives (petroleum and metal mining), natural disaster mitigation, and academic research.
History
The study of marine geology dates back to the late 1800s during the 4-year HMS Challenger expedition. HMS Challenger hosted nearly 250 people, including sailors, engineers, carpenters, marines, officers, and a 6-person team of scientists, led by Charles Wyville Thomson. The scientists' goal was to prove that there was life in the deepest parts of the ocean. Using a sounding rope, dropped over the edge of the ship, the team was able to capture ample amounts of data. Part of their discovery was that the deepest part of the ocean was not in the middle. These were some of the first records of the mid-ocean ridge system.
Prior to World War II, marine geology grew as a scientific discipline. During the early 20th century, organizations such as the Scripps Institution of Oceanography and the Woods Hole Oceanographic Institution (WHOI) were created to support efforts in the field. With Scripps being located on the west coast of North America and WHOI on the east coast, the study of marine geology became much more accessible.
In the 1950s, marine geology saw one of its most significant discoveries: the mid-ocean ridge system. After ships were equipped with sonar sensors, they travelled back and forth across the Atlantic Ocean collecting observations of the sea floor. In 1953, the cartographer Marie Tharp generated the first three-dimensional relief map of the ocean floor, which showed an underwater mountain range in the middle of the Atlantic: the Mid-Atlantic Ridge. The survey data was a large step towards many more discoveries about the geology of the sea.
In 1960, the American geophysicist Harry H. Hess hypothesized that the seafloor was spreading from the mid-ocean ridge system. With support from the maps of the sea floor and the developing theory of plate tectonics and continental drift, Hess was able to show that the Earth's mantle continuously released molten rock at the mid-ocean ridge, and that the molten rock then solidified, causing the boundary between the two tectonic plates to diverge. A geomagnetic survey supported this theory: scientists used magnetometers to measure the magnetism of the basalt rock protruding from the mid-ocean ridge, and discovered symmetrical "stripes" on either side of the ridge, recording reversals of the planet's magnetic polarity over time. This proved that seafloor spreading existed. In later years, newer technology made it possible to date the rocks, showing that rocks closest to the ridge were younger than the rocks near the continental coasts of the Western and Eastern Hemispheres.
At present, marine geology focuses on geological hazards, environmental conditions, habitats, natural resources, and energy and mining projects.
Methods
There are multiple methods for collecting data from the sea floor without physically dispatching humans or machines to the bottom of the ocean.
Side-scan sonar
A common method of collecting imagery of the sea floor is side-scan sonar. Developed in the late 1960s, this survey method uses active sonar systems to detect and image objects on the sea floor. The physical sensors of the sonar device, known as a transducer array, are mounted onto the hull of a vessel; they send acoustic pulses that reflect off the seafloor and are received back by the sensors. The imaging can help determine the seafloor's composition, as harder objects generate a stronger reflection and appear dark on the returned image. Softer materials such as sand and mud reflect the array's pulses poorly, so they appear lighter on the image. This information can be analyzed by specialists to determine outcrops of rock beneath the surface of the water.
This method is less expensive than releasing a vehicle to take photographs of the sea floor and requires less time. Side-scan sonar is useful to scientists as a quick and efficient way of collecting imagery of the sea floor, but it cannot measure other factors, such as depth. Therefore, other depth-measuring sonar devices typically accompany the side-scan sonar to generate a more detailed survey.
Multibeam bathymetry
Similarly to side-scan sonar, multibeam bathymetry uses a transducer array to send and receive sound waves in order to detect objects located on the sea floor. Unlike side-scan sonar, scientists are able to derive multiple types of measurements from the recordings and form hypotheses from the data collected. By knowing the speed at which sound travels in the water, scientists can calculate the two-way travel time from the ship's sensor to the seafloor and back. These calculations determine the depth of the sea floor in that area.
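A minimal sketch of that depth calculation. The 1500 m/s sound speed is a nominal assumption used here for illustration; real surveys use measured sound-velocity profiles, which vary with temperature, salinity and pressure.

```python
def depth_from_twtt(two_way_travel_time_s, sound_speed_m_s=1500.0):
    """Depth below the transducer from the two-way travel time of a
    sonar ping: the pulse covers the distance twice (down and back),
    so the one-way distance is half the total path."""
    return sound_speed_m_s * two_way_travel_time_s / 2.0

print(depth_from_twtt(4.0))  # a 4 s echo corresponds to ~3000 m of water
```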
Backscatter is another measurement, used to determine the intensity of the sound that is returned to the sensor. This information can provide insight into the geological makeup of the sea floor as well as objects located within the water column. Objects in the water column can include structures from shipwrecks, dense biology, and bubble plumes. Their importance to marine geology lies in identifying specific features: bubble plumes, for instance, can indicate the presence of hydrothermal vents and cold seeps.
There are limitations to this technique. The distance between the sea floor and the sensor is related to the resolution of the map being created: the closer the sensor is to the sea floor, the higher the resolution, and the farther away it is, the lower the resolution. Therefore, it is common for remotely operated vehicles (ROVs) and autonomous underwater vehicles (AUVs) to be equipped with the multibeam sensor, or for the sensor to be towed by the ship itself. This ensures that the resolution of the collected data will be high enough for proper analysis.
Sub-bottom profiler
A sub-bottom profiler is another sonar system used in geophysical surveys of the sea floor to not only map depth, but also to map beneath the sea floor. Mounted to the hull of a ship, the system releases low-frequency pulses which penetrate the surface of the sea floor and are reflected by sediments in the sub-surface. Some sensors can reach over 1000 meters below the surface of the sea floor, giving hydrographers a detailed view of the marine geological environment.
Many sub-bottom profilers can emit multiple frequencies of sound to record data on a multitude of sediments and objects on and below the sea floor. The returned data are collected by computers and, with aid from hydrographers, can be used to create cross-sections of the terrain below the sea floor. The resolution of the data also allows scientists to identify geological features such as volcanic ridges, underwater landslides, ancient river beds, and other features.
The benefit of the sub-bottom profiler is its capability to record information both at the surface of and below the seafloor. When accompanied by geophysical data from multibeam sonar and physical data from rock and core samples, the sub-bottom profiler delivers insights on the location and morphology of submarine landslides, shows how oceanic gases travel through the subsurface, helps discover artifacts of cultural heritage, clarifies sediment deposition, and more.
Marine magnetometry
Magnetometry is the process of measuring changes in the Earth's magnetic field. The outer layer of the Earth's core is liquid and mostly made up of magnetic iron and nickel. As the Earth turns on its axis, motions of this liquid metal generate electrical currents, which in turn produce magnetic fields. These fields can be measured to reveal geological subseafloor structures. This method is especially useful in marine exploration and geology, as it can not only characterize geological features on the seafloor but also locate aircraft and shipwrecks deep under the sea.
A magnetometer is the main piece of equipment deployed; it is typically towed behind a vessel or mounted on an AUV. It measures changes in the magnetic field together with the corresponding geolocation to create maps. The magnetometer generally samples the field every second (one hertz), but can be calibrated to measure at different rates depending on the study. The readings remain consistent until the device detects ferrous material, which could range from a ship's hull to ferrous basalt at the seafloor. The sudden change in magnetism can then be analyzed on the magnetometer's display.
The benefit of a magnetometer compared to sonar devices is its ability to detect artifacts and geological features both on top of and beneath the seafloor. Because the magnetometer is a passive sensor and does not emit waves, its exploration depth is unlimited, although in most studies the resolution and certainty of the data depend on the distance from the device: the closer the device is to a ferrous object, the better the data collected.
Plate tectonics
Plate tectonics is a scientific theory developed in the 1960s that explains major landform events, such as mountain building, volcanoes, earthquakes, and mid-ocean ridge systems. The idea is that Earth's outermost layer, known as the lithosphere and made up of the crust and uppermost mantle, is divided into extensive plates of rock. These plates sit on top of a partially molten layer of rock known as the asthenosphere and move relative to each other due to convection between the asthenosphere and lithosphere. The speed at which the plates move ranges between 2 and 15 centimeters per year. The theory is significant because the interactions between tectonic plates explain many geological formations. With regard to marine geology, the movement of the plates explains seafloor spreading and mid-ocean ridge systems, subduction zones and trenches, volcanism and hydrothermal vents, and more.
There are three major types of tectonic plate boundaries: divergent, convergent, and transform boundaries. Divergent plate boundaries are when two tectonic plates move away from each other, convergent plate boundaries are when two plates move towards each other, and transform plate boundaries are when two plates slide sideways past each other. Each boundary type is associated with different geological marine features. Divergent plates are the cause of mid-ocean ridge systems, while convergent plates are responsible for subduction zones and the creation of deep ocean trenches. Transform boundaries cause earthquakes, displacement of rock, and crustal deformation.
Mid-ocean ridge system
Divergent plates are directly responsible for the largest mountain range on Earth, known as the mid-ocean ridge system. At nearly 60,000 km long, the mid-ocean ridge is an extensive chain of underwater volcanic mountains that spans the globe. Centralized in the oceans, this unique geological formation houses a collection of ridges, rifts, fault zones, and other geological features.
The Mid-Atlantic Ridge is a consequence of the divergence of the North American from the Eurasian Plate and of the South American from the African Plate. It began forming over 200 million years ago, when the American, African and European continents were still connected as Pangaea. After continental drift, the ridge system became more defined, and over the last 75 years it has been intensely studied. The Mid-Atlantic Ridge also served as the birthplace of the discovery of seafloor spreading: as volcanic activity produces new oceanic crust along the ridge, the two plates diverge from each other, pulling the new ocean floor up from below the crust. Along the ocean–continent borders of the tectonic plates, the oceanic plates subduct beneath the continental plates, creating some of the deepest marine trenches in the world.
Subduction zones
Subduction zones are caused when two tectonic plates converge on each other and one plate is pushed beneath the other. In a marine setting, this typically occurs when the oceanic crust subducts below the continental crust, resulting in volcanic activity and the development of deep ocean trenches. Marine geology focuses on mapping and understanding how these processes function. Renowned geological features created through subduction zones include the Mariana Trench and the Ring of Fire.
Mariana Trench
The Mariana Trench is the deepest known submarine trench, and the deepest location in the Earth's crust itself. It is a subduction zone where the Pacific Plate is being subducted under the Mariana Plate. At the deepest point, the trench is nearly 11,000 m deep (almost 36,000 feet). This is further below sea level than Mount Everest is above sea level, by over 2 kilometers.
Ring of Fire
The Ring of Fire is situated around the Pacific Ocean, created from several converging plate boundaries. Its intense volcanism and seismic activity pose a major threat of disastrous earthquakes, tsunamis, and volcanic eruptions. Any early warning systems and mitigation techniques for these disastrous events will require marine geological study of coastal and island-arc environments to predict events.
Economic benefits
Resource exploration
Marine geology offers several methods of detecting geological features below the sea. One of the economic benefits of geological surveying of the seafloor is locating valuable resources that can be extracted. The two major resources mined at sea are oil and minerals. Over the last 30 years, deep-sea mining has generated between $9 and $11 billion in the United States. Although this sector seems profitable, it is a high-risk, high-reward industry with many harmful environmental impacts.
Some of the major minerals extracted from the sea include nickel, copper, cobalt, manganese, zinc, gold, and other metals. These minerals commonly form around volcanic activity, more specifically hydrothermal vents and polymetallic nodules. The vents emit large volumes of super-heated, metal-infused fluids that rise and rapidly cool when mixed with the cold seawater. The chemical reaction causes sulfur and minerals to precipitate and form chimneys, towers, and mineral-rich deposits on the sea floor. Polymetallic nodules, also known as manganese nodules, are rounded ores formed over millions of years from metals precipitating out of seawater and sediment pore water. They are typically found unattached, spread across the abyssal seafloor, and contain metals crucial for building batteries and touch screens, including cobalt, nickel, copper, and manganese.
A popular area for deep-sea mining is the Clarion–Clipperton Zone (CCZ), located in the Pacific Ocean. The CCZ covers approximately 4,500,000 square kilometers bounded by submarine fracture zones. It has been divided into 16 mining claims and 9 sections dedicated to conservation. According to the International Seabed Authority (ISA), it holds an estimated 21 billion tons (Bt) of nodules, containing 5.95 Bt of manganese, 0.27 Bt of nickel, 0.23 Bt of copper, and 0.05 Bt of cobalt. It is a highly sought-after area for mining because of the yield of minerals it possesses.
Offshore energy development
Marine geology also has many applications in offshore energy development. Offshore energy is the generation of electricity using ocean-based resources, including wind, thermal, wave, and tidal movement. Understanding the seafloor and its geological features can help develop the infrastructure to support these renewable energy sources. Underwater geological features can dictate ocean properties, such as currents and temperatures, which are crucial for the placement of the infrastructure needed to produce energy.
The stability of the seafloor is important for the creation of offshore wind turbines. Most turbines are secured to the seafloor using monopiles if the water depth is greater than 15 meters. These must be installed in areas not at risk of sediment deposition, erosion, or tectonic activity, so surveying the geological area before development is needed to ensure proper support of the turbines against the forces applied to them. Another example of why marine geology is needed for future energy projects is the understanding of wave and current patterns. Analyzing the effects that the seafloor has on water movement can support planning and location selection of offshore generators and optimize energy farming.
Environmental impacts and mitigation
Habitat mapping and conservation
Marine geology has a key role in habitat mapping and conservation. With global activities such as deep-sea mining and bottom trawling causing potentially irreversible damage to sea habitats, marine geology can help us study and mitigate their effects.
The CCZ has been surveyed and mapped to designate specific areas for mining and for conservation. The International Seabed Authority has set aside approximately 160,000 square kilometers of seabed within the CCZ because the area is rich with biodiversity and habitats. The zone houses over 5,000 species, including sea cucumbers, corals, crabs, shrimps, glass sponges, and relatives of spiders, and it has been an area where new species of sea worms have been discovered. Furthermore, 90% of the species have yet to be identified. Proper marine survey techniques have protected thousands of habitats and species by dedicating these areas to conservation.
Bottom trawling also has detrimental effects on the sea, and marine geology techniques can be helpful in mitigating them. Bottom trawling, generally a commercial fishing technique, involves dragging a large net that herds and captures a target species, such as fish or crabs. During this process, the net damages the seafloor by scraping and removing animals and vegetation living on the seabed, including coral reefs, sharks, and sea turtles. It can tear up root systems and animal burrows, which can directly affect sediment distribution and lead to changes in the chemistry and nutrient levels of the sea water. Marine geology can determine areas which have been damaged so that habitat restoration techniques can be employed. It can also help identify areas that have not been affected by bottom trawling and place them under conservation protection.
Sediment transportation and coastal erosion
Sediment transportation and coastal erosion are complex subjects that must be understood to protect infrastructure and the environment. Coastal erosion is the process of sediment and materials breaking down and being transported due to the effects of the sea. This can lead to the destruction of animal habitats, fishing industries, and infrastructure. In the United States, damage to property and infrastructure costs approximately $500 million per year, and an additional $150 million a year from the U.S. federal government is dedicated to mitigation. Marine geology supports the study of sediment types, current patterns, and ocean topography to predict erosional trends, which can protect these environments.
Natural hazard assessment
Earthquakes are one of the most common natural disasters. Furthermore, they can cause other disasters, such as tsunamis and landslides. For example, the 2004 underwater earthquake in the Indian Ocean, of magnitude 9.1, triggered a tsunami with waves at least 30 ft high that killed approximately 230,000 people in 13 different countries. Marine geology and the understanding of plate boundaries support the development of early warning systems and other mitigation techniques to protect the people and environments that may be susceptible to natural disasters. Many earthquake early warning systems (EEWS) are in place and more are being developed.
Future research
Seafloor mapping and bathymetry
Many sections of the ocean are permanently dark and cold and are under extreme pressure, making them difficult to observe. According to the National Oceanic and Atmospheric Administration (NOAA), only 23% of the seafloor has been mapped in detail, and one of the leading projects in exploration is developing high-resolution maps of the seafloor. The Okeanos Explorer, a vessel owned by NOAA, has mapped over 2 million km2 of the seafloor using multibeam sonar since 2008, but this technique has proved to be too time-consuming.
The importance of mapping the seafloor has been recognized by governments and scientists alike. Because of this, an international collaboration to create a high-definition map of the entire seafloor was launched, called the Nippon Foundation–GEBCO Seabed 2030 Project. The committee has set a goal of finishing the project by 2030. To reach it, they are equipping old, new, and autonomous vehicles with sonar, sensors, and other GIS-based technology.
See also
Geology portal
Oceans portal
Bathymetric chart
Hawaiian-Emperor seamount chain
Hydrogeology
Pelagic sediments
Seafloor mapping
References
Sources
Erickson, Jon, 1996, Marine Geology: Undersea Landforms and Life Forms, Facts on File
"What is the Ring of Fire? : Ocean Exploration Facts: NOAA Office of Ocean Exploration and Research". oceanexplorer.noaa.gov. Retrieved 2023-02-10.
Atwood, Trisha B.; Witt, Andrew; Mayorga, Juan; Hammill, Edd; Sala, Enric (2020). "Global Patterns in Marine Sediment Carbon Stocks". Frontiers in Marine Science. 7. doi:10.3389/fmars.2020.00165. ISSN 2296-7745.
Merino, Nancy; Aronson, Heidi S.; Bojanova, Diana P.; Feyhl-Buska, Jayme; Wong, Michael L.; Zhang, Shu; Giovannelli, Donato (2019). "Living at the Extremes: Extremophiles and the Limits of Life in a Planetary Context". Frontiers in Microbiology. 10. doi:10.3389/fmicb.2019.00780. ISSN 1664-302X.
External links
Soundwaves Coastal Science & Research News from Across the USGS
Marine Geology and Geophysics – NOAA
Pacific Seafloor Mapping Project – USGS
Marine Geology and Geophysics at MIT
Ocean Drilling Program
Oceanography
Subfields of geology | Marine geology | Physics,Environmental_science | 4,502 |
46,833,807 | https://en.wikipedia.org/wiki/Merit%20good | The economics concept of a merit good, originated by Richard Musgrave (1957, 1959), is a commodity which is judged that an individual or society should have on the basis of some concept of benefit, rather than ability and willingness to pay. The term is, perhaps, less often used presently than it was during the 1960s to 1980s but the concept still motivates many economic actions by governments. Examples include in-kind transfers such as the provision of food stamps to assist nutrition, the delivery of health services to improve quality of life and reduce morbidity, and subsidized housing and education.
Definition
A merit good can be defined as a good which would be under-consumed (and under-produced) in a free market economy, for two main reasons:
When consumed, a merit good creates positive externalities (an externality being a third-party or spill-over effect of the consumption or production of a good or service). This means that there is a divergence between private benefit and public benefit when a merit good is consumed (i.e. the public benefit is greater than the private benefit); this divergence is sketched formally after this list. However, as consumers only take into account private benefits when consuming most goods, merit goods are under-consumed (and so under-produced).
Individuals are short-term utility maximisers and so do not take into account the long-term benefits of consuming a merit good; hence such goods are under-consumed.
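In standard textbook notation, the first reason can be written as follows; the symbols are chosen here for illustration and are not taken from Musgrave:

\[
MSB(q) = MPB(q) + MEB(q), \qquad MEB(q) > 0,
\]
\[
\text{market: } MPB(q_m) = MC(q_m), \qquad \text{optimum: } MSB(q^{\ast}) = MC(q^{\ast}) \;\Rightarrow\; q_m < q^{\ast},
\]

where \(MSB\), \(MPB\) and \(MEB\) are marginal social, private and external benefit, \(MC\) is marginal cost, \(q_m\) is the market quantity and \(q^{\ast}\) the socially optimal quantity. Because consumers equate private benefit rather than social benefit with cost, the market quantity falls short of the optimum.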
Justification
In many cases, merit goods provide services which should apply universally to everyone in a particular situation, an opinion that is similar to that of the concept of primary goods found in work by philosopher John Rawls or discussions about social inclusion. Lester Thurow claims that merit goods (and in-kind transfers) are justified based on "individual-societal preferences": just as we, as a society, permit each adult citizen an equal vote in elections, we should also entitle each person an equal right to life, and hence an equal right to life-saving medical care.
On the supply side, it is sometimes suggested that there will be more endorsement in society for implicit redistribution via the provision of certain kinds of goods and services, rather than explicit redistribution through income.
It is sometimes suggested that society in general may be in a better position to determine what individuals need, since individuals might act irrationally (for example, poor people receiving monetary transfers might use them to buy alcoholic drinks rather than nutritious food).
Sometimes, merit and demerit goods (goods which are considered to affect the consumer negatively, but not society in general) are simply considered as an extension of the idea of externalities. A merit good may be described as a good that has positive externalities associated with it. Thus, an inoculation against a contagious disease may be considered as a merit good, because others who may not catch the disease from the inoculated person also benefit.
However, merit and demerit goods can be defined in a different manner without reference to externalities. Consumers can be considered to under-consume merit goods (and over-consume demerit goods) due to an information failure. This happens because most consumers do not perceive quite how good or bad the good is for them: either they do not have the right information or lack relevant information. With this definition, a merit good is defined as a good that is better for a person than the person who may consume the good realises.
Other possible rationales for treating some commodities as merit (or demerit) goods include public-goods aspects of a commodity, imposing community standards (prostitution, drugs, etc.), immaturity or incapacity, and addiction. A common element of all of these is recommending for or against some goods on a basis other than consumer choice. For the case of education, it can be argued that those lacking education are incapable of making an informed choice about the benefits of education, which would warrant compulsion (Musgrave, 1959, 14). In this case, the implementation of consumer sovereignty is the motivation, rather than rejection of consumer sovereignty.
Public Choice Theory suggests that good government policies are an under-supplied merit good in a democracy.
Criticism
Arguments about the irrational behavior of welfare recipients are often criticised as paternalistic, often by those who would like to reduce government economic activity to a minimum.
The principle of consumer sovereignty in welfare also suggests that monetary transfers are preferable to in-kind transfers of the same cost.
References
Richard A. Musgrave (1957). "A Multiple Theory of Budget Determination," FinanzArchiv, New Series 25(1), pp. 33–43.
_ (1959). The Theory of Public Finance, pp. 13–15.
_ (1987). "Merit goods," The New Palgrave: A Dictionary of Economics, v. 3, pp. 452–53.
Richard A. Musgrave and Peggy B. Musgrave (1973). Public Finance in Theory and Practice, pp. 80-81.
Roger Lee Mendoza ([2007] 2011). "Merit Goods at Fifty: Reexamining Musgrave's Theory in the Context of Health Policy." Review of Economic and Business Studies, v. 4 (2), pp. 275–284.
Amartya K. Sen ([1977] 1982). "Rational Fools: A Critique of the Behavioral Foundations of Economic Theory," in Choice, Welfare and Measurement, pp. 84–106. (1977 JSTOR version)
Goods (economics) | Merit good | Physics | 1,131 |
20,841,370 | https://en.wikipedia.org/wiki/Nonradiation%20condition | Classical nonradiation conditions define the conditions according to classical electromagnetism under which a distribution of accelerating charges will not emit electromagnetic radiation. According to the Larmor formula in classical electromagnetism, a single point charge under acceleration will emit electromagnetic radiation. In some classical electron models a distribution of charges can however be accelerated so that no radiation is emitted. The modern derivation of these nonradiation conditions by Hermann A. Haus is based on the Fourier components of the current produced by a moving point charge. It states that a distribution of accelerated charges will radiate if and only if it has Fourier components synchronous with waves traveling at the speed of light.
History
Finding a nonradiating model for the electron in an atom dominated the early work on atomic models. In a planetary model of the atom, the orbiting point electron would constantly accelerate towards the nucleus and thus, according to the Larmor formula, emit electromagnetic waves. In 1910 Paul Ehrenfest published a short paper on "Irregular electrical movements without magnetic and radiation fields", demonstrating that Maxwell's equations allow for the existence of accelerating charge distributions which emit no radiation. In 1913, the Bohr model of the atom abandoned the effort to explain why its bound electrons do not radiate by simply postulating that they did not. This postulate was later subsumed into quantum theory in the form of Schrödinger's equation.
In the meantime, understanding of classical nonradiation has advanced considerably since 1925. Beginning as early as 1933, George Adolphus Schott published a surprising discovery that a charged sphere in accelerated motion (such as the electron orbiting the nucleus) may have radiationless orbits. Admitting that such speculation was out of fashion, he suggested that his solution may apply to the structure of the neutron. In 1948, Bohm and Weinstein also found that charge distributions may oscillate without radiation; they suggested that such a solution may apply to mesons. Then in 1964, Goedecke derived, for the first time, the general condition of nonradiation for an extended charge-current distribution, and produced many examples, some of which contained spin and could conceivably be used to describe fundamental particles. Goedecke was led by his discovery to further speculation.
The nonradiation condition went largely ignored for many years. Philip Pearle reviews the subject in his 1982 article Classical Electron Models. A Reed College undergraduate thesis on nonradiation in infinite planes and solenoids appears in 1984. An important advance occurred in 1986, when Hermann Haus derived Goedecke's condition in a new way. Haus finds that all radiation is caused by Fourier components of the charge/current distribution that are lightlike (i.e. components that are synchronous with light speed). When a distribution has no lightlike Fourier components, such as a point charge in uniform motion, then there is no radiation. Haus uses his formulation to explain Cherenkov radiation in which the speed of light of the surrounding medium is less than c.
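In modern notation, Haus's criterion can be paraphrased as follows; the symbols are chosen here for illustration and are not Haus's own. Writing \(\mathbf{J}(\mathbf{x},t)\) for the current density and \(\tilde{\mathbf{J}}\) for its space-time Fourier transform,

\[
\tilde{\mathbf{J}}(\mathbf{k},\omega) = \int \mathrm{d}^{3}x \int \mathrm{d}t \; \mathbf{J}(\mathbf{x},t)\, e^{-i(\mathbf{k}\cdot\mathbf{x}-\omega t)},
\]

the distribution emits no radiation if and only if

\[
\tilde{\mathbf{J}}(\mathbf{k},\omega) = \mathbf{0} \quad \text{whenever } \omega = c\,|\mathbf{k}|,
\]

that is, when the transform vanishes on the light cone, where the Fourier components would be synchronous with waves travelling at the speed of light.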
Applications
The nonradiation condition is important to the study of invisibility physics.
See also
Sommerfeld radiation condition
Frank–Tamm formula
Notes
External links
Invisibility Physics: Acceleration without radiation, part I
Electromagnetism
Boundary conditions | Nonradiation condition | Physics | 676 |
29,203,856 | https://en.wikipedia.org/wiki/Phyllospondyli | The Phyllospondyli is a now abandoned term for a series of small, poorly ossified fossils of labyrinthodont amphibians from the Paleozoic. The groups was proposed as an order on the basis if their vertebrae, which was either consisting of neural arches over an otherwise unossified notocord or consisting of thin-walled, ring-shaped intercentra topped by the neural arch. The name pyllospondily is from Greek, "leaf vertebrae".
While the group was originally based on the shape of the vertebrae, as was common in older classifications of labyrinthodonts, several families were at times assigned to it based on skull characters. All members were more or less salamander-like in body outline, with weak, poorly ossified limbs, four fingers on the hand and a more or less round skull when seen from above. Remains of larval gills were frequently found. Which animals were actually assigned to the group varied; Case (1946) gave four families that he confidently assigned to the order: Branchiosauridae (now known to be larval Temnospondyli), Eugyrinidae (various temnospondyl and anthracosaur groups), Melanerpetontidae and Microbatrachidae (now abandoned). The group as a whole seems to have been a wastebasket taxon for various small, poorly ossified and/or larval fossils, the families once ascribed to it largely being constructed from similar animals found in different parts of the world.
References
History of paleontology
Stegocephalians
Polyphyletic groups | Phyllospondyli | Biology | 335 |
16,365,986 | https://en.wikipedia.org/wiki/Backhouse%27s%20constant | Backhouse's constant is a mathematical constant named after Nigel Backhouse. Its value is approximately 1.456 074 948.
It is defined by using the power series whose successive coefficients are the prime numbers,

\[
P(x) = 1 + \sum_{k=1}^{\infty} p_k x^k = 1 + 2x + 3x^2 + 5x^3 + 7x^4 + \cdots,
\]

and its multiplicative inverse as a formal power series,

\[
Q(x) = \frac{1}{P(x)} = \sum_{k=0}^{\infty} q_k x^k .
\]

Then:

\[
B = \lim_{n \to \infty} \left| \frac{q_{n+1}}{q_n} \right| \approx 1.456\,074\,948 .
\]
This limit was conjectured to exist by Backhouse, and later proven by Philippe Flajolet.
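A minimal numerical sketch of the definition above, assuming SymPy is available for generating primes; it inverts the power series term by term and reports the ratio of successive coefficients of the inverse.

```python
from sympy import prime

def backhouse(n_terms=400):
    """Approximate Backhouse's constant from the first n_terms
    coefficients of Q(x) = 1/P(x)."""
    # P(x): coefficient 1, then the primes 2, 3, 5, 7, ...
    p = [1] + [prime(k) for k in range(1, n_terms)]
    # Standard power-series inversion recurrence:
    # q_0 = 1, q_n = -sum_{k=1}^{n} p_k * q_{n-k}
    q = [1.0]
    for n in range(1, n_terms):
        q.append(-sum(p[k] * q[n - k] for k in range(1, n + 1)))
    return abs(q[-1] / q[-2])  # ratio of successive coefficients

print(backhouse())  # ~1.456074948...
```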
References
Further reading
Mathematical constants
Prime numbers | Backhouse's constant | Mathematics | 94 |