60,597,923
https://en.wikipedia.org/wiki/Close-space%20sublimation
Close-space sublimation is a method of producing thin films, especially cadmium telluride photovoltaics, though it is also used for other materials such as antimony triselenide. It is a type of physical vapor deposition in which the substrate to be coated and the source material are held close to one another. Both are placed in a vacuum chamber, which is pumped down, and the source and substrate are then heated. The source is heated to some fraction of its melting temperature, and the substrate to some lower temperature, e.g. 640 °C and 600 °C, respectively. This causes sublimation of the source, allowing vapors to travel a short distance to the substrate, where they condense, producing a thin film. This short-path diffusion is similar in principle to short-path distillation. Compared to other techniques, it is a relatively insensitive process, and takes as little as 15 minutes for an entire cycle. This makes it a very viable technique for large-scale manufacturing.

References

Thin film deposition
Semiconductor device fabrication
Close-space sublimation
Chemistry,Materials_science,Mathematics
216
15,075,030
https://en.wikipedia.org/wiki/KLF9
Krueppel-like factor 9 is a protein that in humans is encoded by the KLF9 gene. Previously known as Basic Transcription Element Binding Protein 1 (BTEB1), Klf9 is part of the Sp1 C2H2-type zinc finger family of transcription factors. Several previous studies showed Klf9-related regulation of animal development, including cell differentiation of B cells, keratinocytes, and neurons. Klf9 is also a key transcriptional regulator of uterine endometrial cell proliferation, adhesion, and differentiation, all of which are essential during pregnancy and are turned off during tumorigenesis.

Function

The protein encoded by this gene is a transcription factor that binds to GC box elements located in the promoter. Binding of the encoded protein to a single GC box inhibits mRNA expression, while binding to tandemly repeated GC box elements activates transcription. Oxidative stress increases expression of Klf9, and overexpression of the Klf9 gene sensitizes the cell to oxidative stress and reactive oxygen species (ROS). Silencing Klf9 expression with a short hairpin RNA (shRNA) confers resistance to oxidative stress and ROS-related cell death. Klf9 is thus upregulated by ROS and promotes ROS-related cell death.

Klf9 exhibits similarities to other known oxidative stress genes such as NQO1 and HMOX1. When exposed to the same amount of hydrogen peroxide, both mouse embryo cells and human cells produced similar amounts of Klf9 and NQO1/HMOX1. The effect also runs in the opposite direction: Klf9 overexpression within the cell leads to an increase in intracellular ROS. The result of increased intracellular ROS and Klf9 is increased cell death; with the overexpressed Klf9 gene, more cells die. Similar cell death was found in vivo when wild-type mice were exposed intranasally to the oxidative stress agent paraquat, validating the oxidative stress-dependent Klf9 expression previously found only in cell lines.

Regions around 10 kb upstream and 1 kb downstream of the Klf9 transcription start site contain conserved antioxidant response elements (AREs), which are binding sites for Nrf2. Nrf2 is a major regulator of the antioxidant response to ROS within the cell. Klf9 is upregulated by Nrf2; when oxidative stress is high and the concentration of intracellular ROS is high, Nrf2 binds to the Klf9 promoter, which further increases the amount of intracellular ROS, leading to cell death. When oxidative stress is low, Nrf2 follows its normal pathway, increasing the amount of antioxidant species within the cell and decreasing the amount of intracellular ROS.

Animal studies

A Klf9 deficiency suppresses bleomycin-induced fibrosis in the lungs of mice. When bleomycin is introduced to lung tissue, the tissue produces ROS and develops fibrotic tissue to combat the damage done by the bleomycin. When Klf9 was knocked out in these mice, less fibrotic lung tissue was formed. Because of this finding, the researchers proposed that manipulating Klf9 levels within the body may be a valid treatment for other diseases as well, including certain types of cancer.

Interactions

KLF9 has been shown to interact with the progesterone receptor.

References

Further reading

External links

Transcription factors
KLF9
Chemistry,Biology
741
99,482
https://en.wikipedia.org/wiki/Deathmatch%20%28video%20games%29
Deathmatch, also known as free-for-all, is a gameplay mode integrated into many shooter games, including first-person shooter (FPS) and real-time strategy (RTS) video games, in which the goal is to kill (or "frag") the other players' characters as many times as possible. The deathmatch may end on a frag limit or a time limit, and the winner is the player who accumulated the greatest number of frags. The deathmatch is an evolution of the competitive multiplayer modes found in genres such as fighting games and racing games, moved into other genres.

Gameplay

In a typical first-person shooter (FPS) deathmatch session, players connect individual computers together via a computer network in a peer-to-peer model or a client–server model, either locally or over the Internet. Players often have the option to communicate with each other during the game using microphones and speakers. Deathmatches have different rules and goals depending on the game, but a typical FPS deathmatch session is one in which every player is pitted against every other player. The game begins with each player being "spawned" (starting) at a random location picked from a fixed, predefined set. Being spawned entails having the score, health, armor and equipment reset to default values, usually a score of 0, full (100%) health, no armour, a basic firearm and a melee weapon. After a session has commenced, arbitrary players may join and leave the game on an ad hoc basis.

Players

In this context a player is either a human-operated character in the game or a character operated by computer software AI—a bot. Human- and computer-operated characters have the same basic visual appearance, but in most modern games can select a skin: an arbitrary graphics model that operates on the same set of movements as the base model. A human player's character and a bot's character have the same set of physical properties, initial health, initial armour, weapon capabilities, available maneuvers and speed—i.e. they are equally matched except for the actual controlling part. For a novice player, the experienced difference between a human opponent and a computer-controlled opponent may be near nil, but for a skilled player the lack of human intelligence is usually easy to notice in most bot implementations, regardless of the bot's actual skill; this lack of intelligence can be at least somewhat compensated for by, e.g., extreme (superhuman) accuracy and aim. Some systems deliberately inform the player, when inspecting the score list, which players are bots and which are human (e.g. OpenArena). If the player is aware of the nature of the opponent, it will affect the player's cognitive process regardless of the player's skill.

Modern implementations allow new players to join after the game has started. The maximum number of players is arbitrary for each game, map and rules and can be selected by the server; some maps are suitable for small numbers of players, some for larger numbers.

Deaths

The goal for each player is to kill the other players by any means possible, each kill counting as a frag, either by direct assault or by manipulating the map; the latter counts as a frag in some games and not in others. In either case, to attain the highest score, this process should be repeated as many times as possible, with each iteration performed as quickly as possible.
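A minimal sketch of the per-player bookkeeping described under Gameplay above, assuming the illustrative defaults mentioned there (zero score on joining, 100% health, no armour, a basic firearm and a melee weapon); the class, names and spawn points are invented for illustration and do not come from any particular engine:

```python
import random

# Hypothetical spawn bookkeeping for an FPS deathmatch session.
SPAWN_POINTS = [(0, 0), (10, 5), (-3, 12)]   # fixed, predefined locations

class Player:
    def __init__(self, name):
        self.name = name
        self.frags = 0                        # score starts at 0 on joining
        self.respawn()

    def respawn(self):
        # Per-life defaults: full health, no armour, basic weapons.
        self.health = 100
        self.armour = 0
        self.weapons = ["melee", "basic_firearm"]
        self.position = random.choice(SPAWN_POINTS)

# Players may join or leave on an ad hoc basis after the session starts:
session = [Player("alice"), Player("bob")]
session.append(Player("carol"))               # late join
```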
The session may have a time limit, a frag limit, or no limit at all. If there is a limit, then the player with the most frags when the session ends wins. The health variable determines whether a player is wounded; however, in most games being wounded does not entail reduced mobility or functionality, and in most games a player will not bleed to death. A player dies when the health value reaches zero or below; if the value is reduced to a very low negative value, the result may be gibbing, depending upon the game. In most games, when a player dies (i.e. is fragged), the player loses all equipment gained, and the screen continues to display the visible (still animated) scene that the player normally sees; the score list is usually displayed—the frags. The display does not go black when the player dies. Usually the player can choose to instantly respawn or remain dead.

The armor variable affects the health variable by reducing the damage taken: conceptually, the higher the armor value, the smaller the share of the incoming damage that is deducted from health, with the obvious differences between various implementations. Some games account for the location of the body hit when the damage is deducted, while many—especially older implementations—do not. In most games, no amount of armor causes reduced mobility, i.e. armor is never experienced as a weight issue by the player. The lost equipment (usually not including the armor) of a dead player can usually be picked up by any player (even the fragged player, once respawned) who gets to it first.

Simulation

Newtonian physics is often only somewhat accurately simulated. Common in many games is the ability of the player to modify the player's own vector to some degree while airborne, e.g. by retarding a forward airborne flight by moving backwards, or even jumping around a corner. Other notable concepts derived from the physics of FPS game engines include bunny-hopping, strafe-jumping and rocket-jumping, in all of which the player exploits the particular characteristics of the physics engine in question to obtain a high speed, height or other attribute. With rocket-jumping, the player jumps and fires a rocket at the floor area immediately under their own feet, which causes the player to jump higher than in a regular jump as a result of the rocket blast (at the obvious expense of the health variable being somewhat reduced by the self-inflicted injury). The types of techniques available, and how they may be performed, differ with the physics implementation and are as such game dependent.

Most modern deathmatch games feature a high level of graphic violence: a normal modern implementation will contain high-quality human characters being killed, with moderate amounts of blood, screams of pain and death, and exploding bodies with associated gibs being common. Some games feature a way to disable or reduce the level of gore. However, the setting of the game is usually that of a fictional world: the player may resurrect via the mentioned respawning, and the characters usually have superhuman abilities, e.g. tolerating numerous point-blank hits from a machine gun directly to the head without any armour, jumping extreme, inhuman distances, and falling extreme distances, to mention a few. Together, these factors may make the player experience the game as less real, as it contains highly unrealistic elements.
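A minimal sketch of the health/armour bookkeeping described above, assuming illustrative constants (a flat one-third armour absorption and a gib threshold of −50); real games vary in both, and many also weight damage by hit location:

```python
GIB_THRESHOLD = -50      # a deeply negative health value may cause gibbing
ARMOUR_ABSORB = 1 / 3    # fraction of damage soaked by armour (illustrative)

def apply_damage(health, armour, damage):
    """Deduct damage, letting armour absorb part of it first.
    Higher armour values mean less health lost, as described above."""
    absorbed = min(armour, damage * ARMOUR_ABSORB)
    armour -= absorbed
    health -= damage - absorbed
    if health <= 0:
        print("fragged!" + (" (gibbed)" if health <= GIB_THRESHOLD else ""))
    return health, armour

health, armour = apply_damage(100, 50, 30)           # survives with 80 health
health, armour = apply_damage(health, armour, 200)   # fragged (gibbed)
```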
Powerups

All normal maps contain various power-ups, i.e. extra health, armor, ammunition and other (more powerful than default) weapons. Once collected by a player, a power-up will respawn after a defined time at the same location; the time for an item to respawn depends upon the game mode and the type of the item (see the sketch at the end of this section). In some deathmatch modes power-ups do not respawn at all. Certain power-ups are especially powerful, which can often lead to the game revolving around controlling power-ups: all other things being equal, the player who controls the strongest power-ups (collecting the items most often) has the best potential for achieving the top score.

Sessions

If the session has a frag or time limit, a new session starts briefly after the current session has concluded. During the respite the players are allowed to observe the score list and chat, and will usually see an animated pseudo-overview of the map as a background to the score list. Some games have a system allowing each player to announce that they are ready to begin the new session; some do not. The new session might be on a different map, based on a map list kept on the server, or it might always be on the same map if there is no such rotating map list.

Common in many games is some form of broadcast and private message system. The broadcast message system announces public events: if a player died, it will often state who died and how and, if fragged, often by what weapon; the same system will also often announce when a player joins or leaves the game, and may announce how many frags are left in total and other important messages, including errors or warnings from the game. Instant text messages from other players are also displayed with this system. The private message system, in contrast, only prints messages for individual players, e.g. if player A picks up a weapon, player A will get a message confirming that the weapon was picked up.
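Returning to the power-up behaviour described above: a minimal sketch of respawn scheduling, with item names and delays invented for illustration (real values depend on the game mode and item type, and some items never respawn):

```python
import heapq

# Illustrative respawn delays in seconds; None would mean "never respawns".
RESPAWN_DELAY = {"health": 20, "armour": 25, "quad_damage": 120}

class ItemScheduler:
    def __init__(self):
        self.pending = []                    # heap of (respawn_time, item, location)

    def on_pickup(self, now, item, location):
        delay = RESPAWN_DELAY.get(item)
        if delay is not None:                # items absent from the table never return
            heapq.heappush(self.pending, (now + delay, item, location))

    def tick(self, now):
        # Reinstate every item whose respawn time has passed, at the same location.
        while self.pending and self.pending[0][0] <= now:
            _, item, location = heapq.heappop(self.pending)
            print(f"{item} reappears at {location}")

scheduler = ItemScheduler()
scheduler.on_pickup(now=0, item="health", location=(10, 5))
scheduler.tick(now=25)                       # -> health reappears at (10, 5)
```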
History

Even before the term deathmatch was first used, there existed games with a similar gameplay mode. MIDI Maze, a multiplayer first-person shooter for the Atari ST released in 1987, has been suggested as the first example of deathmatch gameplay. Sega's 1988 third-person shooter arcade game Last Survivor featured eight-player deathmatch. Another early example of a deathmatch mode in a first-person shooter was Taito's 1992 video game Gun Buster. It allowed two-player cooperative gameplay for the mission mode, and featured an early deathmatch mode where either two players could compete against each other or up to four players could compete in a team deathmatch, consisting of two teams of two players competing against each other.

The phrase death match was originally used in wrestling, starting in the 1950s, to denote certain brutal hardcore wrestling fights. The term "death match" in this sense appeared in the 1992 fighting arcade game World Heroes, where it denotes a game mode taking place in an arena with environmental hazards.

The term deathmatch in the context of multiplayer video games may have been coined by game designer John Romero while he and lead programmer John Carmack were developing the LAN multiplayer mode for the video game Doom. Romero commented on the birth of the FPS deathmatch: "Sure, it was fun to shoot monsters, but ultimately these were soulless creatures controlled by a computer. Now gamers could play against spontaneous human beings—opponents who could think and strategize and scream. 'We can kill each other!' If we can get this done, this is going to be the fucking coolest game that the planet Earth has ever fucking seen in its entire history!" According to Romero, the deathmatch concept was inspired by fighting games. At id Software, the team frequently played Street Fighter II, Fatal Fury and Art of Fighting during breaks, while developing elaborate rules involving trash-talk and smashing furniture or tech. Romero stated that "what we were doing was something that invented deathmatch" and that "Japanese fighting games fueled the creative impulse to create deathmatch in our shooters."

Some games give a different name to these types of matches while still using the same underlying concept. For example, in Perfect Dark the name "Combat" is used, and in Halo, deathmatch is known as "Slayer".

Precursors

It has been suggested that in 1983, Drew Major and Kyle Powell played the world's first deathmatch with Snipes, a text-mode game that was later credited as the inspiration behind Novell NetWare, although multiplayer games spread across multiple screens predate that title by at least nine years in the form of Spasim and Maze War. Early evidence of the term's application to graphical video games exists: on August 6, 1982, Intellivision game developers Russ Haft and Steve Montero challenged each other to a game of Bi-Planes, a 1981 Intellivision release in which multiple players control fighter planes with the primary purpose of repeatedly killing each other until a limit is reached. Once killed, a player would respawn in a fixed location, enjoying a short period of protection from attacks. The contest was referred to, at that time, as a deathmatch.

Variations

In a team deathmatch, the players are organized into two or more teams, with each team having its own frag count. Friendly fire may or may not cause damage, depending on the game and the rules used; if it does, players who kill a teammate (called a team kill) usually decrease their own score and the team's score by one point. In certain games, they may also themselves be killed as punishment, and/or may be removed from the game for repeat offenses. The team with the highest frag count at the end wins.

In a last man standing deathmatch (or a battle royale game), players start with a certain number of lives (or just one, in the case of battle royale games), and lose these as they die. Players who run out of lives are eliminated for the rest of the match, and the winner is the last and only player with at least one life. See the "Fundamental changes" section in the "Last Man Standing" article for more insight.

Any multiplayer game in which the goal for each player is to kill every other player as many times as possible can be considered a form of deathmatch. In real-time strategy games, deathmatch can refer to a game mode where all players begin their empires with large amounts of resources. This saves them the time of accumulation and lets hostilities commence much faster and with greater force. Destroying all the enemies is the only way to win, while in other modes some other victory conditions may be used (king of the hill, building a wonder...).

History, fundamental changes
Doom

The first-person shooter version of deathmatch, originating in Doom by id Software, had a set of unmodifiable rules concerning weapons, equipment and scoring, known as "Deathmatch 1.0":

Items (e.g. health, armour, ammunition) do not respawn. Weapons, however, had a fixed status as available to any arbitrary player except the player who acquired the weapon; i.e. the weapon did not in fact disappear as items do when picked up. The player who acquired the weapon could only collect it anew after respawning (which sometimes leads to a lack of ammunition if a player survives long enough, eventually leading to death from being unable to fight back).

Suicide (such as falling into lava, causing an explosion too close to the player, or getting crushed by a crushing ceiling) did not entail negative score points.

Within months, these rules were modified into the "Deathmatch 2.0" rules (included in the Doom v1.2 patch). These rules were optional; the administrator of the game could decide whether to use DM 1.0 or DM 2.0 rules. The changes were:

Picking up an object removes it from the map.

Objects re-appear 30 seconds after being picked up and can be picked up by anyone; bonus objects which provide significant advantages (the invisibility power-up etc.) re-appear after a much longer delay, and some of them may not reappear at all.

Suicide counts as −1 frag.

Notable power-ups that are featured in most subsequent games include the soul spheres. Although the name and/or graphics may differ in other games, the concept and feature of the power-up remain the same.
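The two rule sets lend themselves to a data-driven representation. A sketch of the differences listed above, with field names invented for illustration (the longer delay for bonus objects is omitted for brevity):

```python
from dataclasses import dataclass

@dataclass
class DeathmatchRules:
    items_respawn: bool      # do picked-up items reappear?
    item_respawn_delay: int  # seconds until a picked-up item reappears
    weapons_stay: bool       # weapons remain on the map after pickup
    suicide_penalty: int     # frags deducted for a suicide

DM_1_0 = DeathmatchRules(items_respawn=False, item_respawn_delay=0,
                         weapons_stay=True, suicide_penalty=0)
DM_2_0 = DeathmatchRules(items_respawn=True, item_respawn_delay=30,
                         weapons_stay=False, suicide_penalty=1)
```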
Corridor 7: Alien Invasion

The CD version of Corridor 7: Alien Invasion, released by Capstone Software in 1994, was the first FPS to include multiple character classes and the first FPS to include deathmatch-specific maps.

Rise of the Triad

Rise of the Triad, first released as shareware in 1994 by Apogee Software, Ltd., offered an expansive multiplayer mode that pioneered a variety of deathmatch features. It introduced the Capture the Flag mode to the first-person shooter genre as Capture the Triad. It was the first FPS to have an in-game scoreboard, the first to deliver its level of multiplayer customization through a plethora of options affecting aspects of the level played, such as gravity or weapon persistence, and the first to have voice macros and the ability to talk to players via microphone. It introduced a unique point system that awards different numbers of points for different kills (for instance, a missile kill is worth a point more than a bullet kill).

Hexen: Beyond Heretic

Hexen: Beyond Heretic, released by Raven Software in 1995, was the first to feature multiple character classes with their own weapons; some items also functioned differently depending on the class using them.

Quake

Quake, released in 1996 by id Software, was the first FPS deathmatch game to feature in-game joining. Quake was also the first FPS deathmatch game to feature AI-operated deathmatch players (bots), although not as a feature of the released product, but rather in the form of community-created content. Quake popularized rocket-jumping. Notable power-ups that are featured in most subsequent games include the quad damage; although the name and/or graphics may differ in other games, the concept and feature of the power-up remain the same.

Unreal

With the game Unreal (1998, by Epic), the rules were enhanced with some widely accepted improvements:

Spawn protection (usually 2–4 seconds), a period of invulnerability after a player (re)enters combat (such as after being killed and respawning); spawn protection is automatically terminated when the player uses a weapon (including non-attack usage, such as zooming the sniper rifle). Spawn protection prevents "easy frags": killing a player who has just spawned and is slightly disoriented and almost unarmed.

"Suicide-cause tracking": if a player dies by "suicide" that was caused by some other player's action, such as knocking him off a cliff or triggering a crusher or gas chamber, the player that caused such a death is credited with the kill and the killed player does not lose a frag (it is not counted as a suicide). This concept increases the entertainment potential of the game (as it gives players options to be "cunning"), but at the same time adds complexity, which may be the reason why Epic's main competitor, id Software, did not implement this concept in Quake III Arena (just as they did not implement spawn protection).

Unreal Tournament

"Combat achievements tracking": Unreal Tournament (1999, by Epic) added statistics tracking. The range of statistics being tracked is very wide, such as:

precision of fire with each weapon (percentage of hits to fired ammunition)

kills with each weapon, being killed by a particular weapon, and being killed when holding a particular weapon

headshots (lethal hits to combatant heads with sniper rifles and some other powerful weapons)

killing sprees: killing 5, 10, 15, 20 or 25 combatants without dying is called a killing spree, each greater kill count being considered more valuable and having a unique title (respectively: Killing Spree, Rampage, Dominating, Unstoppable, Godlike). The game tracked how many times the player achieved each of these titles.

consecutive kills: when a player kills a combatant within 5 seconds of a previous kill, a consecutive kill occurs. The timer then starts anew, allowing a third kill, a fourth kill, etc. Alternatively, killing several enemies with a mega weapon (such as the Redeemer, which resembles a nuclear rocket) also counts as a consecutive kill. The titles of these kills are: Double Kill (2), Multi kill (3), Ultra kill (4), Megakill (5), MONSTERKILL (6; 5 in the original Unreal Tournament). For comparison, id Software's Quake III Arena tracks double kills, but a third kill soon after results in another double kill award. (A sketch of this timing logic follows the Quake III Arena list below.)

Quake III Arena

This game's approach to combat achievements tracking is different from Unreal Tournament's. In deathmatch, the player might be rewarded with awards for the following tricks:

"perfect!" – winning a round of deathmatch without getting killed

"impressive!" – hitting with two consecutive shots, or hitting two enemies with one shot, from the railgun (a powerful, long-range hitscan weapon with a slow rate of fire)

"humiliation!" – killing an opponent with the melee razor-like gauntlet (the killed player hears the announcement too, but the fact of being humiliated is not tracked for them)

"accuracy" – having over a 50% hits-to-shots ratio
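A minimal sketch of the consecutive-kill timing described in the Unreal Tournament list above, assuming the 5-second window; the class, names and timestamps are invented for illustration:

```python
# Titles per chain length, as listed above for Unreal Tournament.
MULTI_KILL_TITLES = {2: "Double Kill", 3: "Multi kill", 4: "Ultra kill",
                     5: "Megakill", 6: "MONSTERKILL"}

class KillStreak:
    """Tracks consecutive kills: each kill within WINDOW seconds of the
    previous one extends the chain and restarts the timer."""
    WINDOW = 5.0

    def __init__(self):
        self.chain = 0
        self.last_kill_time = None

    def on_kill(self, now):
        if self.last_kill_time is not None and now - self.last_kill_time <= self.WINDOW:
            self.chain += 1              # chain extended, timer restarts below
        else:
            self.chain = 1               # chain broken; start over
        self.last_kill_time = now
        return MULTI_KILL_TITLES.get(self.chain)

streak = KillStreak()
for t in (0.0, 3.0, 6.5, 20.0):          # kill timestamps in seconds
    print(t, streak.on_kill(t))          # None, Double Kill, Multi kill, None
```

Quake III Arena's behaviour, by contrast, would amount to reporting "Double Kill" whenever the chain reaches any length of two or more.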
Last Man Standing

The Last Man Standing (LMS) version of deathmatch is fundamentally different from deathmatch. In deathmatch, it does not matter how many times the player dies, only how many times the player kills. In LMS, it is the exact opposite: the important task is not to die. Because of this, two activities that are not specifically addressed in deathmatch have to be controlled in LMS.

"Camping" is a recognized expression for staying in one location (usually somewhat protected or with only one access route) and eventually using long-range weapons, such as a sniper rifle, from that location. In standard deathmatch, campers usually accumulate fewer frags than players who actively search for enemies, because close-range combat usually generates frags faster than sniping from afar. In LMS, however, camping increases the average lifespan. Unreal Tournament 2003 addresses this unfairness by indicating players who are camping and providing other players with navigation to campers.

"Staying dead": after dying, player representations lie on the ground (where applicable) and are shown the results of the game in progress. They have to perform some action, usually clicking the "Fire" key or button, to respawn and reenter combat. This principle prevents players who have been forced to leave the computer by a real-world situation (be it a sudden cough or a doorbell) from dying over and over. In standard deathmatch, a player who stays dead is not a problem, as the goal is to score the most frags, not to die the fewest times. In LMS, however, a player who was allowed to stay dead after being killed for the first time might wait through most of the fight and respawn when there is only one opponent remaining. Because of this, Unreal Tournament 2003 automatically respawns a player immediately after being killed.

See also

Player versus environment
Player versus player
Battle royale game

References

Video game terminology
Esports terminology
Articles containing video clips
Fiction about death games
Deathmatch (video games)
Technology
4,745
11,726,660
https://en.wikipedia.org/wiki/Canadian%20Society%20for%20Biomechanics
Canadian Society for Biomechanics / Société canadienne de biomécanique (CSB/SCB) was formed in 1973. The CSB is an Affiliated Society with the International Society of Biomechanics (ISB). The purpose of the Society is to foster research and the interchange of information on the biomechanics of human physical activity. Biomechanics research is increasingly performed by people from diverse disciplinary and professional backgrounds. CSB/SCB attempts to enhance interdisciplinary communication, and thereby improve the quality of biomechanics research and facilitate the application of its findings, by bringing together therapists, physicians, engineers, sport researchers, ergonomists, and others who use the same pool of basic biomechanics techniques but study different human movement problems.

External links

Canadian Society for Biomechanics Official Site
International Society of Biomechanics Official Site
Canadian Society for Biomechanics podcast

Biomechanics
Professional associations based in Canada
Canadian Society for Biomechanics
Physics
201
30,783,246
https://en.wikipedia.org/wiki/PAD%20emotional%20state%20model
The PAD emotional state model is a psychological model developed by Albert Mehrabian and James A. Russell (1974 and after) to describe and measure emotional states. PAD uses three numerical dimensions, Pleasure, Arousal and Dominance, to represent all emotions. Its initial use was in a theory of environmental psychology, the core idea being that physical environments influence people through their emotional impact. It was subsequently used by Peter Lang and colleagues to propose a physiological theory of emotion, and by James A. Russell to develop a theory of emotional episodes (relatively brief emotionally charged events). The PA part of PAD was developed into a circumplex model of emotion experience, and those two dimensions were termed "core affect". The D part of PAD was re-conceptualized as part of the appraisal process in an emotional episode (a cold cognitive assessment of the situation eliciting the emotion). A more fully developed version of this approach is termed the psychological construction theory of emotion.

The PAD (Pleasure, Arousal, Dominance) model has been used to study nonverbal communication such as body language in psychology. It has also been applied to consumer marketing and to the construction of animated characters that express emotions in virtual worlds.

The dimensional structure

PAD uses three-dimensional scales which in theory could take any numerical values. The dimensional structure is reminiscent of the 19th-century work of Wilhelm Wundt, who also used a three-dimensional system, and of the 20th-century work of Charles E. Osgood.

The Pleasure-Displeasure Scale measures how pleasant or unpleasant one feels about something. For instance, both anger and fear are unpleasant emotions and score on the displeasure side, while joy is a pleasant emotion.

The Arousal-Nonarousal Scale measures how energized or soporific one feels. It is not the intensity of the emotion, for grief and depression can be low-arousal intense feelings. While both anger and rage are unpleasant emotions, rage has a higher intensity, or a higher arousal state; however, boredom, which is also an unpleasant state, has a low arousal value.

The Dominance-Submissiveness Scale represents how controlling and dominant, versus controlled and submissive, one feels. For instance, while both fear and anger are unpleasant emotions, anger is a dominant emotion while fear is a submissive emotion.

A more abbreviated version of the model uses just 4 values for each dimension, providing only 64 values for possible emotions. For instance, anger is a quite unpleasant, quite aroused, and moderately dominant emotion, while boredom is slightly unpleasant, quite unaroused, and mostly non-dominant.
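The abbreviated model maps naturally onto a small data structure. A minimal sketch of the 4-level version (4 × 4 × 4 = 64 combinations); the level labels and function names are invented for illustration and are not Mehrabian's published scale anchors:

```python
from itertools import product

# Four illustrative levels per dimension, from the negative to the positive pole.
LEVELS = ["strongly low", "moderately low", "moderately high", "strongly high"]

# All 4 * 4 * 4 = 64 possible (pleasure, arousal, dominance) states:
PAD_STATES = list(product(range(4), repeat=3))
assert len(PAD_STATES) == 64

def describe(pleasure, arousal, dominance):
    return (f"pleasure: {LEVELS[pleasure]}, arousal: {LEVELS[arousal]}, "
            f"dominance: {LEVELS[dominance]}")

# Anger: quite unpleasant, quite aroused, moderately dominant.
print(describe(0, 3, 2))
# Boredom: slightly unpleasant, quite unaroused, mostly non-dominant.
print(describe(1, 0, 0))
```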
Applications

Marketing

The abbreviated model has been used in organizational studies in which the emotions towards specific entities, or products marketed by the respective organisations, are measured. The PAD model has been used in studying consumer behavior in stores, to determine the effects of pleasure and arousal on issues such as extra time spent in the store and unplanned spending.

Virtual emotional characters

The PAD model, and the corresponding PAD space, have been used in the construction of animated agents that exhibit emotions. For instance, Becker et al. describe how primary and secondary emotions can be mapped via the PAD space to features in the faces of animated characters to reflect happiness, boredom, frustration or annoyance. Lance et al. discuss how the PAD model can be used to study gaze behavior in animated agents. Zhang et al. describe how the PAD model can be used to assign specific emotions to the faces of avatars. In this approach the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.

See also

Affect measures

References

Further reading

Emotion
Psychological models
Psychological theories
PAD emotional state model
Biology
778
13,605,305
https://en.wikipedia.org/wiki/Androdioecy
Androdioecy is a reproductive system characterized by the coexistence of males and hermaphrodites. Androdioecy is rare in comparison with the other major reproductive systems: dioecy, gynodioecy and hermaphroditism. In animals, androdioecy has been considered a stepping stone in the transition from dioecy to hermaphroditism, and vice versa. Androdioecy, trioecy and gynodioecy are sometimes referred to as mixed mating systems. Androdioecy is a dimorphic sexual system in plants, comparable with gynodioecy and dioecy.

Evolution of androdioecy

The fitness requirements for androdioecy to arise and sustain itself are theoretically so improbable that it was long considered that such systems do not exist. In particular, males and hermaphrodites have to have the same fitness, in other words produce the same number of offspring, in order for both to be maintained. However, males only have offspring by fertilizing the eggs or ovules of hermaphrodites, while hermaphrodites have offspring both through fertilizing the eggs or ovules of other hermaphrodites and through their own ovules. This means that, all else being equal, males have to fertilize twice as many eggs or ovules as hermaphrodites to make up for their lack of female reproduction (see the sketch below).
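In symbols (notation invented here for illustration, not from a specific source): if each hermaphrodite leaves offspring through its own $O$ ovules plus the $P$ ovules it sires on others, while a male leaves offspring only through the $M$ ovules it sires, equal fitness requires

$$ w_m = w_h \quad\Longrightarrow\quad M = O + P. $$

Averaged over a mostly hermaphroditic population, every ovule has exactly one sire, so $P \approx O$ and hence $M \approx 2O$: a male must fertilize roughly twice as many ovules as a hermaphrodite produces.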
Androdioecy can evolve either from hermaphroditic ancestors through the invasion of males, or from dioecious ancestors through the invasion of hermaphrodites. The ancestral state is important because the conditions under which androdioecy can evolve differ significantly between the two.

Androdioecy with dioecious ancestry

In roundworms, clam shrimp, tadpole shrimp and cancrid shrimps, androdioecy has evolved from dioecy. In these systems, hermaphrodites can only fertilize their own eggs (self-fertilize) and do not mate with other hermaphrodites; males are the only means of outcrossing. Hermaphrodites may be beneficial in colonizing new habitats, because a single hermaphrodite can generate many other individuals. In the well-studied roundworm Caenorhabditis elegans, males are very rare and only occur in populations that are in bad condition or stressed. In Caenorhabditis elegans, androdioecy is thought to have evolved from dioecy through a trioecious intermediate.

Androdioecy with hermaphroditic ancestry

In barnacles, androdioecy evolved from hermaphroditism. Many plants self-fertilize, and males may be sustained in a population when inbreeding depression is severe, because males guarantee outcrossing.

Types of androdioecy

The most common form of androdioecy in animals involves hermaphrodites that can reproduce by autogamy, or by allogamy through their ova with males, but that do not outcross via their own sperm. This type of androdioecy generally occurs in predominantly gonochoric taxonomic groups. Another type, present in some angiosperms, involves outcrossing hermaphrodites. Yet another type has males and simultaneous hermaphrodites in a population due to developmental or conditional sex allocation; in some fish species, for example, small individuals are hermaphrodites and, under circumstances of high density, large individuals become male.

Androdioecious species

Despite their unlikely evolution, 115 androdioecious animal and about 50 androdioecious plant species are known. These species include:

Anthozoa (corals): Goniastra australensis, Stylophora pistillata

Nematoda (roundworms): Rhabditidae (order Rhabditida): Caenorhabditis briggsae, Caenorhabditis elegans, Caenorhabditis sp. 11, Oscheius myriophila, Oscheius dolchura, Oscheius tipulae, Oscheius guentheri, Rhabditis rainai, Rhabditis sp. (AF5), Rhabdias nigrovenosum, Rhabdias rubrovenosa, Rhabdias ranae, Entomelas entomelas; Diplogastridae (order Rhabditida): Allodiplogaster sudhausi, Diplogasteroides magnus, Levipalatum texanum, Pristionchus boliviae, Pristionchus fissidentatus, Pristionchus maupasi, Pristionchus mayeri, Pristionchus pacificus, Pristionchus triformis, Sudhausia aristotokia, Sudhausia crassa; Steinernematidae (order Rhabditida): Steinernema hermaphroditum; Allantonematidae (order Rhabditida): Allantonema mirabile, Bradynema rigidum; Dorylaimida: Dorylaimus liratus

Nemertea (ribbon worms): Prostoma eilhardi

Arthropoda: Clam shrimp: Eulimnadia texana, Eulimnadia africana, Eulimnadia agassizii, Eulimnadia antlei, Eulimnadia braueriana, Eulimnadia brasiliensis, Eulimnadia colombiensis, Eulimnadia cylondrova, Eulimnadia dahli, Eulimnadia diversa, Eulimnadia feriensis, Eulimnadia follisimilis, Eulimnadia thompsoni, Eulimnadia sp. A, Eulimnadia sp. B, Eulimnadia sp. C; Tadpole shrimp: Triops cancriformis, Triops newberryi, Triops longicaudatus; Barnacles: Paralepas klepalae, Paralepas xenophorae, Koleolepas avis, Koleolepas tinkeri, Ibla quadrivalvis, Ibla cumingii, Ibla idiotica, Ibla segmentata, Calantica studeri, Calantica siemensi, Calantica spinosa, Calantica villosa, Arcoscalpellum sp., Euscalpellum squamuliferum, Scalpellum peronii, Scalpellum scalpellum, Scalpellum vulgare, Scillaelepas arnaudi, Scillaelepas bocquetae, Scillaelepas calyculacilla, Scillaelepas falcate, Scillaelepas fosteri, Smilium hastatum, Smilium peronii, Chelonibia patula, Chelonibia testudinaria, Bathylasma alearum, Bathylasma corolliforme, Conopea galeata, Conopea calceola, Conopea merrilli, Solidobalanus masignotus, Tetrapachylasma trigonum, Megalasma striatum, Octolasmis warwickii; Lysmata: Lysmata wurdemanni, Lysmata amboinensis, Lysmata californica, Lysmata bahia, Lysmata intermedia, Lysmata grabhami, Lysmata seticaudata, Lysmata nilita, Lysmata hochi, Lysmata nayaritensis, Lysmata rafa, Lysmata boggessi, Lysmata ankeri, Lysmata pederseni, Lysmata debelius, Lysmata galapaguensis, Lysmata cf. trisetacea; Insects: Icerya bimaculata, Icerya purchasi, Crypticerya zeteki

Annelida (ringed worms): Salvatoria clavata, Ophryotrocha gracilis, Ophryotrocha hartmanni, Ophryotrocha diadema, Ophryotrocha bacci, Ophryotrocha maculata, Ophryotrocha socialis

Chordata: Kryptolebias marmoratus, Serranus fasciatus, Serranus baldwini

Angiosperms (flowering plants): some Acer (maple) species, Castilla elastica, Culcita macrocarpa, Datisca cannabina (false hemp), Datisca glomerata (Durango root), Fraxinus lanuginosa (Japanese ash), Fraxinus ornus, Fuchsia microphylla, Gagea serotina, Mercurialis annua (annual mercury), Neobuxbaumia mezcalaensis, Nephelium lappaceum (rambutan), Panax trifolius (ginseng), Oxalis suksdorfii, Phillyrea angustifolia, Phillyrea latifolia, Ricinocarpos pinifolius, Sagittaria lancifolia (sub-androdioecy), Saxifraga cernua, Schizopepon bryoniaefolius, Spinifex littoreus, Ulmus minor

See also

Gynodioecy
Plant sexuality
Dioecy
Trioecy
Hermaphrodite
Monoicy

References

External links

Diana Wolf. 'Breeding systems: Evolution of androdioecy'

Sex
Mating systems
Sexual system
Androdioecy
Biology
1,890
23,520,048
https://en.wikipedia.org/wiki/Sharpe%27s%20lobe-billed%20parotia
Sharpe's lobe-billed parotia, also known as Sharpe's lobe-billed riflebird, is a bird in the family Paradisaeidae that Erwin Stresemann proposed is an intergeneric hybrid between a long-tailed paradigalla and a western parotia, an identity confirmed by DNA analysis.

History

Only one subadult male specimen of this hybrid is known, held in the British Natural History Museum and presumably deriving from the Vogelkop Peninsula of north-western New Guinea. It is named after its describer, British ornithologist Richard Bowdler Sharpe.

Notes

References

Hybrid birds of paradise
Birds of the Doberai Peninsula
Intergeneric hybrids
Sharpe's lobe-billed parotia
Biology
141
13,056,029
https://en.wikipedia.org/wiki/Indie%20game
An indie video game or indie game, short for independent video game, is a video game created by individuals or smaller development teams without the financial and technical support of a large game publisher, in contrast to most "AAA" (triple-A) games. Because of their independence and freedom to develop, indie games often focus on innovation, experimental gameplay, and taking risks not usually afforded in AAA games. Indie games tend to be sold through digital distribution channels rather than at retail due to a lack of publisher support. The term is analogous to independent music or independent film in those respective mediums.

Indie game development grew out of the same concepts of amateur and hobbyist programming that arose with the introduction of the personal computer and the simple BASIC computer language in the 1970s and 1980s. So-called bedroom coders, particularly in the United Kingdom and other parts of Europe, made their own games and used mail order to distribute their products, later shifting to other software distribution methods with the onset of the Internet in the 1990s, such as shareware and other file-sharing methods. By that time, however, interest in hobbyist programming had waned due to the rising costs of development and competition from video game publishers and home consoles.

The modern indie game scene resulted from a combination of factors in the early 2000s, including technical, economic, and social developments that made indie games less expensive to make and distribute, more visible to larger audiences, and able to offer gameplay outside the mainstream. A number of indie games at that time became success stories that drove more interest in the area. New industry opportunities have arisen since then, including new digital storefronts, crowdfunding and other indie funding mechanisms to help new teams get their games off the ground, low-cost and open-source development tools for smaller teams across all gaming platforms, boutique indie game publishers that leave creative freedom to the developers, and industry recognition of indie games alongside mainstream ones at major game award events. Around 2015, the increasing number of indie games being published led to fears of an "indiepocalypse", an oversupply of games that would make the entire market unprofitable. Although the market did not collapse, discoverability remains an issue for most indie developers, and many games are not financially profitable. Examples of successful indie games include Cave Story, Braid, Super Meat Boy, Terraria, Minecraft, Fez, Hotline Miami, Shovel Knight, the Five Nights at Freddy's series, Undertale, Cuphead, and Among Us.

Definition

The term "indie game" is based on similar terms like independent film and independent music, where the concept is often related to self-publishing and independence from major studios or distributors. However, as with indie films and music, there is no exact, widely accepted definition of what constitutes an "indie game" beyond falling well outside the bounds of triple-A video game development by large publishers and development studios. One simple definition, described by Laura Parker for GameSpot, says "independent video game development is the business of making games without the support of publishers", but this does not cover all situations.
Dan Pearce of IGN stated that the only consensus for what constitutes an indie game is an "I know it when I see it"-type assessment, since no single definition can capture all the games broadly considered indie. Indie games generally share certain common characteristics. One method of defining an indie game is by the nature of its independence, which can be either:

Financial independence: the developers have paid for the development and/or publication of the game themselves or from other funding sources such as crowdfunding, specifically without the financial support of a large publisher.

Independence of thought: the developers crafted their game without any oversight or directional influence by a third party such as a publisher.

Another means to evaluate a game as indie is to examine its development team, with indie games being developed by individuals, small teams, or small independent companies that are often formed specifically for the development of one game. Typically, indie games are smaller than mainstream titles. Indie game developers are generally not financially backed by video game publishers, who are risk-averse and prefer "big-budget games". Instead, indie game developers usually have smaller budgets, usually sourced from personal funds or via crowdfunding. Being independent, developers do not have controlling interests or creative limitations, and do not require the approval of a publisher, as mainstream game developers usually do. Design decisions are thus also not limited by an allocated budget. Furthermore, smaller team sizes increase individual involvement.

However, this view is not all-encompassing, as there are numerous cases of games whose development is not independent of a major publisher but which are still considered indie. Some notable instances include:

Journey was created by thatgamecompany, but had financial backing from Sony as well as publishing support. Kellee Santiago of thatgamecompany believes that they are an independent studio because they were able to innovate on their game without Sony's involvement.

Bastion, similarly, was developed by Supergiant Games, but with publishing by Warner Bros. Entertainment, primarily to avoid difficulties with the certification process on Xbox Live. Greg Kasavin of Supergiant notes they consider their studio indie as they lack any parent company.

The Witness was developed by Jonathan Blow and his studio Thekla, Inc. Though self-funded and published, the game's development cost around $6 million and it was priced at $40, in contrast to most indie games, which are typically priced up to $20. Blow believed this type of game represented something between indie and AAA publishing.

No Man's Sky was developed by Hello Games, though with publishing but non-financial support from Sony; the game on release had a price equal to a typical AAA title. Sean Murray of Hello Games believes that because they are still a small team and the game is highly experimental, they consider themselves indie.

Dave the Diver was developed by Mintrocket, a thirty-person studio owned by Nexon. Despite this corporate ownership, and the studio itself stating that they do not consider themselves an indie studio, the game's approach was unconventional enough for the industry to consider it an indie game, including a nomination for Best Indie Game at The Game Awards 2023.
Yet another angle for evaluating a game as indie is its innovation, creativity, and artistic experimentation, factors enabled by small teams free of financial and creative oversight. This definition reflects an "indie spirit" that is diametrically opposite to the corporate culture of AAA development; it makes a game "indie", whereas the factors of financial and creative independence make a game "independent". Developers with limited ability to create graphics can rely on gameplay innovation instead. This often leads to indie games having a retro style reminiscent of the 8-bit and 16-bit generations, with simpler graphics atop more complex mechanics. Indie games may fall into classic game genres, but new gameplay innovations have been seen. However, being "indie" does not imply that the game focuses on innovation; in fact, many games with the "indie" label can be of poor quality and may not be made for profit.

Jesper Juul, an associate professor at The Royal Danish Academy of Fine Arts who has studied the video game market, wrote in his book Handmade Pixels that the definition of an indie game is vague and depends on different subjective considerations. Juul classified three ways games can be considered indie: those that are financially independent of large publishers, those that are aesthetically independent of and significantly different from the mainstream art and visual styles used in AAA games, and those that present cultural ideas independent of mainstream games. However, Juul wrote that ultimately the labeling of a game as "indie" can still be highly subjective, and that no single rule delineates indie games from non-indie ones.

Games that are not as large as most triple-A games, but are developed by larger independent studios, with or without publisher backing, and that can apply triple-A design principles and polish due to the experience of the team, have sometimes been called "triple-I" games, reflecting the middle ground between these extremes. Ninja Theory's Hellblade: Senua's Sacrifice is considered a prime example of a triple-I game. A further distinction from indie games are those considered double-A ("AA") games, which tend to come from mid- to large-size studios of roughly 50 to 100 team members (larger than typically associated with indie games) that often work under similar practices as triple-A studios but still retain creative control of their titles from a publisher.

Indie games are distinct from open-source games. The latter are games developed with the intent to release the source code and other assets under an open-source license. While many of the principles used to develop open-source games are the same as for indie games, open-source games are developed not for commercial gain but as a hobbyist pursuit. Commercial sale, however, is not a requirement for an indie game, and such games can be offered as freeware, most notably Spelunky on its original release and Dwarf Fortress, whose base version remains free, with the exception of an enhanced version with a visual front-end.

History

The onset of indie game development is difficult to track due to the broadness of what defines an indie game, and the term was not really in use until the early 2000s. Before then, terms like amateur, enthusiast, and hobbyist software or games were used to describe such software.
Today, terms like amateur and hobbyist development are more reflective of those who create mods for existing games, or who work with specific technologies or game parts rather than developing full games. Such hobbyists usually produce non-commercial products and may range from novices to industry veterans.

Before home computers

There is some debate as to whether independent game development started prior to the 1977 home computer revolution, with games developed for mainframe computers at universities and other large institutions. 1962's Spacewar! was not commercially financed and was made by a small team, but there was no commercial sector of the video game industry at that time to distinguish it from independent works. Joyce Weisbecker, who considers herself the first indie designer, created several games for the RCA Studio II home console in 1976 as an independent contractor for RCA.

Home computers (late 1970s-1980s)

When the first personal computers were released in 1977, they each included a pre-installed version of the BASIC computer language along with example programs, including games, to show what users could do with these systems. The availability of BASIC inspired people to write their own programs and games; sales of the 1978 rerelease of the book BASIC Computer Games by David H. Ahl, which included the source code for over one hundred games, eventually surpassed one million copies. Many personal computer games written by individuals or two-person teams were self-distributed in stores or sold through mail order. Atari, Inc. launched the Atari Program Exchange in 1981 to publish user-written software, including games, for Atari 8-bit computers. Print magazines such as SoftSide, Compute!, and Antic solicited games from hobbyists, written in BASIC or assembly language, to publish as type-in listings. In the United Kingdom, early microcomputers such as the ZX Spectrum were popular, launching a wave of "bedroom coders" which initiated the UK's video game industry.

During this period, the idea that indie games could provide experimental gameplay concepts or demonstrate niche arthouse appeal was established. Many games from the bedroom coders of the United Kingdom, such as Manic Miner (1983), incorporated the quirkiness of British humour and were highly experimental. Other games, like Alien Garden (1982), showed highly experimental gameplay. Infocom advertised its text-based interactive fiction games by emphasizing their lack of graphics in favor of the players' imagination, at a time when graphics-heavy action games were commonplace.

Shareware and chasing the console (1990s)

By the mid-1990s, the recognition of the personal computer as a viable gaming option, and advances in technology that led to 3D gaming, created many commercial opportunities for video games. During the last part of the 1990s, the visibility of games from single-person or small-team studios waned, since a small team could not readily compete with a commercial entity in costs, speed and distribution. The industry had started to coalesce around video game publishers that could pay larger developers to make games and handle all the marketing and publication costs, as well as opportunities to franchise game series. Publishers tended to be risk-averse due to the high costs of production, and they would reject the small-scale or overly innovative concepts of small game developers.
The market also became fractured due to the prevalence of video game consoles, which required expensive or difficult-to-acquire development kits typically reserved for larger developers and publishers. There were still significant developments from smaller teams that laid the basis of indie games going forward. Shareware games became a popular means to distribute demos or partially complete games in the 1980s and into the 1990s, where players could purchase the full game from the vendor after trying it. As such demos were generally free to distribute, shareware demo compilations were frequently included with gaming magazines at the time, providing an easy means for amateur and hobbyist developers to be recognized. The ability to produce numerous copies of games, even if just shareware/demo versions, at low cost helped to propel the idea of the PC as a gaming platform. At the time, shareware was generally associated with hobbyist programmers, but the releases of Wolfenstein 3D in 1992 and Doom in 1993 showed the shareware route to be viable for titles from mainstream developers.

Rise of indie games from digital distribution (2000−2005)

The current, common understanding of indie games on the personal computer took shape in the early 2000s from several factors. Key was the availability of online distribution over the Internet, allowing game developers to sell directly to players, bypassing the limitations of retail distribution and the need for a publisher. Software technologies used to drive the growth of the World Wide Web, like Adobe Flash, were available at low cost to developers and provided another means for indie games to grow. The new interest in indie games led middleware and game engine developers to offer their products at low or no cost for indie development, in addition to open-source libraries and engines. Dedicated software like GameMaker Studio and tools for unified game engines like Unity and Unreal Engine removed much of the programming barrier a prospective indie developer would otherwise face. The commercial possibilities for indie games at this point helped to distinguish them from any prior amateur games.

There were other shifts in the commercial environment that were seen as drivers for the rise of indie games in the 2000s. Many indie games of this period were considered the antithesis of mainstream games, highlighting the independence of how these games were made compared to the collective of mainstream titles. Many of them took a retro-style approach to their design, art, or other factors in development, such as Cave Story in 2004, which proved popular with players. Social and political changes also led to the use of indie games not only for entertainment but also to tell a message related to these factors, something that could not be done in mainstream titles. In comparing indie games to independent film and the state of their respective industries, the indie game's rise occurred at approximately the same relative point as its market started to grow exponentially and to be seen as a supporting offshoot of the mainstream works.

Shifting industry and increased visibility (2005−2014)

Indie games saw a large boost in visibility within the video game industry and the rest of the world starting around 2005.
A key driver was the transition into new digital distribution methods with storefronts like Steam that offered indie games alongside traditional AAA titles, as well as specialized storefronts for indie games. While direct online distribution helped indie games to reach players, these storefronts allowed developers to publish, update, and advertise their games directly, and players to download the games anywhere, with the storefront otherwise handling the distribution and sales factors. While Steam initially applied heavy curation, it eventually allowed for indie publishing with its Steam Greenlight and Steam Direct programs, vastly increasing the number of games available. Further indie game growth in this period came from the departure of large publishers like Electronic Arts and Activision from their smaller, one-off titles to focus on their larger, more successful properties, leaving the indie game space to provide shorter and more experimental titles as alternatives. Costs of developing AAA games had risen greatly, to an average of tens of millions of dollars per title in 2007–2008, and there was little room for risks in gameplay experimentation. Another driver came from discussions related to whether video games could be seen as an art form; movie critic Roger Ebert postulated in open debates in 2005 and 2006 that video games could not be art, leading developers to create indie games specifically to challenge that notion. Indie video game development saw a further boost from the use of crowdfunding as a means for indie developers to raise funds to produce a game and to gauge the desire for a game, rather than risk time and investment into a game that does not sell well. While video games had used crowdfunding prior to 2012, several large indie game-related projects successfully raised millions of dollars through Kickstarter, and since then, several other similar crowdfunding options for game developers have become available. Crowdfunding eliminated some of the cost risk associated with indie game development, and created more opportunities for indie developers to take chances on new titles. With more indie titles emerging during this period, larger publishers and the industry as a whole started taking notice of indie games as a significant movement within the field. One of the first examples of this was World of Goo (2008), whose developer 2D Boy had tried but failed to gain any publisher support prior to release. On release, the game was recognized at various award events including the Independent Games Festival, leading publishers that had previously rejected World of Goo to offer to publish it. The success of indie video games on crowdfunding platforms also inspired a wave of indie tabletop role-playing game developers to follow the same business model. Console manufacturers also helped increase recognition of indie games in this period. By the seventh generation of consoles in 2005, each platform provided online services for players, namely Xbox Live, PlayStation Network, and Nintendo Wi-Fi Connection, which included digital game distribution. Following the increased popularity of indie games on computers, these services started publishing them alongside larger releases. The Xbox 360 had launched in 2005 with Xbox Live Arcade (XBLA), a service that included some indie games, though these drew little attention in the first few years. 
In 2008, Microsoft ran its "XBLA Summer of Arcade" promotion, which included the releases of indie games Braid, Castle Crashers, and Geometry Wars: Retro Evolved 2 alongside two AAA games. While all three indie games had a high number of downloads, Braid received critical acclaim and drew mainstream media recognition for being a game developed by two people. Microsoft continued to follow up on this promotion in the following years, bringing more games onto XBLA such as Super Meat Boy, Limbo, and Fez. Sony and Nintendo followed suit, encouraging indie developers to bring games onto their platforms. By 2013, all three console manufacturers had established programs that allowed indie developers to apply for low-cost development toolkits and licenses to publish directly onto the consoles' respective storefronts following approval processes. A number of "boutique" indie game publishers were founded in this period to support funding, technical support, and publishing of indie games across various digital and retail platforms. In 2012, Journey became the first indie game to win the Game Developers Choice Award for Game of the Year and the D.I.C.E. Award for Game of the Year. Several other indie games were released during this period to critical and/or commercial success. Minecraft (2011), the best-selling video game of all time as of 2024, was originally released as an indie game before its developer Mojang Studios was acquired by Microsoft in 2014 and brought into Xbox Game Studios. Another indie game, Terraria, was released that same year and has become the eighth best-selling video game of all time, as well as the highest-rated game on Steam as of 2022. Other successful indie games released during this time include Hotline Miami (2012), Shovel Knight (2014), and Five Nights at Freddy's (2014). Hotline Miami inspired many to begin developing games and contributed to the rise in indie games released during this time period, while Shovel Knight and Five Nights at Freddy's spawned successful media franchises, with the latter becoming a cultural phenomenon. Mobile games also became popular with indie developers, with inexpensive development tools and low-barrier storefronts such as the App Store and Google Play opening in the late 2000s. In 2012, the documentary Indie Game: The Movie was released, covering several successful games from this period. Fears regarding saturation and discoverability (2015−present) Leading into 2015, there was concern that the rise of easy-to-use tools to create and distribute video games could lead to an oversupply of video games, which was termed the "indiepocalypse". This perception of an indiepocalypse is not unanimous; Jeff Vogel stated in a talk at GDC 2016 that any downturn was just part of the standard business cycle. The size of the indie game market was estimated in March 2016 to be at least $1 billion per year for just those games offered through Steam. Mike Wilson, Graeme Struthers and Harry Miller, the co-founders of indie publisher Devolver Digital, stated in April 2016 that the market in indie games was more competitive than ever but continued to appear healthy with no signs of faltering. Gamasutra said that by the end of 2016, while there had not been any catastrophic collapse of the indie game market, there were signs that the growth of the market had significantly slowed and that it had entered a "post-indiepocalypse" phase as business models related to indie games adjusted to the new market conditions. 
While there has not been any type of collapse of the indie game field since 2015, there are concerns that the market is far too large for many developers to get noticed. Only a select few indie titles get wide coverage in the media; these are typically referred to as "indie darlings". In some cases, indie darlings are identified through consumer reactions that praise the game rather than direct industry influence, leading to further coverage; examples of such games include Celeste and Untitled Goose Game. However, there are also times when the video game media may see a future title as a success and position it as an indie darling before its release, only to have the game fail to make a strong impression on players, such as in the case of No Man's Sky and Where the Water Tastes Like Wine. Discoverability has become an issue for indie developers as well. With the Steam distribution service allowing any developer to offer their game with minimal cost to them, there are thousands of games being added each year, and developers have come to rely heavily on Steam's discovery tools – methods to tailor catalog pages to customers based on past purchases – to help sell their titles. Mobile app stores had similar problems in the late 2010s, with large volumes of offerings but poor means for discovery by consumers. Several indie developers have found it critical to have a good public relations campaign across social media and to interact with the press to make sure a game is noticed early on in its development cycle to gain interest and maintain that interest through release, which adds to the costs of development. Several games during this time have still seen success, including games that were referred to as "indie darlings". Some of the most popular indie games from this time were primarily popularized over social media and spawned cultural phenomena, such as Undertale (2015) and Among Us (2018), with the latter being one of the most popular games during the COVID-19 pandemic in 2020 and 2021 with half a billion players. A similar example is Lethal Company, which was released in 2023 and popularized through internet culture, becoming one of the most played games of 2023. More commercially successful games from this time include Stardew Valley, Hollow Knight, and Cuphead. Other regions Indie games are generally associated with Western regions, specifically with North American, European, and Oceanic areas. However, other countries have had similar expansions of indie games that have intersected with the global industry. Japanese doujin soft In Japan, the doujin soft community was generally treated as a hobbyist activity up through the 2010s. Computers and bedroom coding had taken off similarly in the late 1970s and early 1980s, but the computer market was quickly overwhelmed by consoles. Still, hobbyist programmers continued to develop games. One area that Japan had focused on was game development kits, specialized software that would allow users to create their own games. A key line of these was produced by ASCII Corporation, which published ASCII, a hobbyist programming magazine through which users could share their programs. Over time, ASCII saw the opportunity to publish game development kits, and by 1992 released the first commercial version of the RPG Maker software. 
While the software cost money to obtain, users could release completed games made with it as freeware or commercial products, which established the potential for a commercial independent games market by the early 2000s, aligning with the popularity of indie games in the West. Like other Japanese fan-created works in other media, doujin games were often built from existing assets and did not receive much respect or interest from consumers, and instead were generally made to be played and shared with other interested players and at conventions. Around 2013, market forces began to shift with the popularity of indie games in the Western regions, bringing more interest to doujin games as legitimate titles. The Tokyo Game Show first offered a special area for doujin games in 2013, with support from Sony Interactive Entertainment, which had been a promoter of Western indie games in prior years, and has expanded that area since. The distinction between Japanese-developed doujin games and indie games is ambiguous; the choice of term usually reflects whether a game's popularity formed in Western or Eastern markets before the mid-2010s, and whether it was made with the aim of selling many copies or simply as a passion project. The long-running bullet hell Touhou Project series, developed entirely by the solo independent developer ZUN since 1995, has been called both indie and doujinshi. Meanwhile, despite being Japanese-developed, Cave Story is primarily referred to as an "indie game" because of its success in the Western market. It is one of the most influential indie games, also contributing to the resurgence of the Metroidvania genre. Doujin games also drew strong interest in Western markets after some English-speaking groups translated various titles with permission for English release, most notably Recettear: An Item Shop's Tale, the first such doujin game to be published on Steam, in 2010. Mikhail Fiadotau, a lecturer in video game studies at Tallinn University, identified three primary distinctions between the established doujin culture and the Western idea of indie games. From a conceptual view, indie games generally promote independence and novelty in thought, while doujin games tend to be ideas shared by a common group of people that do not veer from established concepts (such as strong favoritism towards the well-established RPG genre). From a genealogical standpoint, the nature of doujin dates back as far as the 19th century, while the indie phenomenon is relatively new. Finally, until recently, doujin games tended to be discussed only in the same circles as other doujin culture (fan artwork and writing) and rarely mixed with commercial productions, whereas indie games have shared the same stage with AAA games. Development Many of the same basic concepts behind video game development for mainstream titles also apply to indie game development, particularly around the software development aspects. Key differences lie in how the development of the game ties in with the publisher or lack thereof. Development teams There is no definitive size for how big an independent game development studio might be. Several successful indie games, such as the Touhou Project series, Axiom Verge, Cave Story, Papers, Please, and Spelunky, were developed by a single person, though often with the support of artists and musicians for those assets. More common are small teams of developers, from two to a few dozen, with additional support from external artists. 
While it is possible for development teams to be larger, with this comes a higher cost overhead of running the studio, which may be risky if the game does not perform well. Indie teams can arise from many different directions. One common recent path is student projects, developed as prototypes as part of coursework, which the students then turn into a commercial opportunity after graduating from school. Examples of such games are And Yet It Moves, Octodad: Dadliest Catch, Risk of Rain, and Outer Wilds. In some cases, students may drop out of school to pursue the commercial opportunity or for other reasons; Vlambeer's founders, for example, had started to develop a commercial game while still in school and dropped out when the school demanded rights to the game. Another route for indie development teams comes from experienced developers in the industry who either voluntarily leave to pursue indie projects, typically due to creative burnout from the corporate process, or leave as a result of termination from the company. Examples of games from such groups include FTL: Faster Than Light, Papers, Please, Darkest Dungeon, and Gone Home. Yet another route is simply those with little to no experience in the games industry, although they may have computer-programming skills and experience; they may come in with fresh perspectives, with ideas for games that are generally more personal and close to their hearts. These developers are usually self-taught and thus may lack certain disciplines of typical programmers, thereby allowing for more creative freedom and new ideas. However, some may view such work less favorably than that of developers with experience from school or from the industry, particularly when it relies on game development toolkits rather than programming languages, and may dismiss such titles as amateur or hobbyist work. Some such amateur-developed games have found great success. Examples of these include Braid, Super Meat Boy, Dwarf Fortress, and Undertale. Typically, a starting indie-game studio will consist primarily of programmers and developers. Art assets including artwork and music may be outsourced to work-for-hire artists and composers. Development tools For development of personal computer games, indie games typically rely on existing game engines, middleware and game development kits to build their titles, lacking the resources to build custom engines. Common game engines include Unreal Engine and Unity, but there are numerous others as well. Small studios that do not anticipate large sales are generally afforded reduced prices for mainstream game engines and middleware. These products may be offered free, or at a substantial discount with royalties that apply only if sales exceed certain thresholds. Indie developers may also use open source software (such as Godot) or take advantage of homebrew libraries, which are freely available but may lack technically advanced features compared to equivalent commercial engines. Prior to 2010, development of indie games on consoles was highly restrictive due to costly access to software development kits (SDKs), typically a version of the console with added debugging features that would cost several thousand dollars and come with numerous restrictions on its use to prevent trade secrets related to the console from being leaked. Console manufacturers may have also restricted sales of SDKs to only certain developers that met specific criteria, leaving potential indie developers unable to acquire them. 
When indie games became more popular by 2010, the console manufacturers as well as mobile device operating system providers released special software-based SDKs to build and test games first on personal computers and then on these consoles or mobile devices. These SDKs were still offered at commercial rates to larger developers, but reduced pricing was provided to those who would generally self-publish via digital distribution on the console or mobile device's storefront, such as with the ID@Xbox program or the iOS SDK. Publishers While most indie games lack a publisher, with the developer serving in that role, a number of publishers geared towards indie games, also known as boutique game publishers, have been established since 2010; these include Raw Fury, Devolver Digital, Annapurna Interactive, Finji, and Adult Swim Games. There also have been a number of indie developers that have grown large enough on their own to also support publishing for smaller developers, such as Chucklefish, Coffee Stain Studios, and Team17. These boutique publishers, having experience in making indie games themselves, typically will provide necessary financial support and marketing but exercise little to no creative control over the developers' product, so as to maintain the "indie" nature of the game. In some cases, the publisher may be more selective of the type of games it supports; Annapurna Interactive sought games that were "personal, emotional and original". Funding The lack of a publisher requires an indie developer to find means to fund the game themselves. Existing studios may be able to rely on past funds and incoming revenue, but new studios may need to use their own personal funds ("bootstrapping"), personal or bank loans, or investments to cover development costs, or build community support while in development. More recently, crowd-funding campaigns, both reward-based and equity-based, have been used to obtain the funds from interested consumers before development begins in earnest. While the use of crowd-funding for video games took off in 2012, the practice has significantly waned as consumers became wary of campaigns that failed to deliver on promised goods. A successful crowd-funded campaign now typically requires significant development work, and the associated costs, before the campaign is launched, in order to demonstrate that the game will likely be completed in a timely manner and so draw in funds. Another mechanism offered through digital distribution is the early access model, in which interested players can buy playable beta versions of the game to provide software testing and gameplay feedback. Those consumers become entitled to the full game for free on release, while others may have to pay a higher price for the final game. This can provide funding midway through development, but as with crowd-funding, consumers expect a game that is near completion, so significant development work and costs will likely need to have been invested already. Minecraft was considered an indie game during its original development, and was one of the first titles to successfully demonstrate this approach to funding. More recently, a number of dedicated investor-based indie game funds have been established, such as the Indie Fund. Indie developers can submit applications requesting grants from these funds. The money is typically provided as a seed investment to be repaid through game royalties. Several national governments, through their public arts agencies, also have made similar grants available to indie developers. 
Distribution Prior to digital distribution, hobbyist programmers typically relied on mail order to distribute their product. They would place ads in local papers or hobbyist computer magazines such as Creative Computing and Byte and, once payment was received, fulfill orders by hand, making copies of their game on cassette tape, floppy disc, or CD-ROM along with documentation. Others would provide copies to their local computer store to sell. In the United Kingdom, where personal computer game development took off in the early 1980s, a market developed for game distributors that handled the copying and distribution of games for these hobbyist programmers. In Japan, doujinshi conventions like Comiket, the largest fan convention in the world, have allowed independent developers to sell and promote their physical products since Comiket's inauguration in 1975, allowing game series like Touhou Project and Fate to spread in popularity and dominate the convention for years. As media shifted to higher-capacity formats and users gained the ability to make their own copies of programs, the simple mail order method was threatened, since one person could buy the game and then make copies for their friends. The shareware model of distribution emerged in the 1980s, accepting that users would likely make copies freely and share these around. The shareware version of the software would be limited, requiring payment to the developer to unlock the remaining features. This approach became popular with hobbyist games in the early 1990s, notably with the releases of Wolfenstein 3D and ZZT, "indie" games from fledgling developers id Software and Tim Sweeney (later founder of Epic Games), respectively. Game magazines started to include shareware games on pack-in demo discs with each issue, and as with mail order, companies arose that provided shareware sampler discs and helped with shareware payment and redemption processing. Shareware remained a popular form of distribution even with the availability of bulletin board systems and the Internet. By the 2000s, indie developers relied on the Internet as their primary means of distribution, since without a publisher it was nearly impossible to stock an indie game at retail, and the mail order concept had long since died out. Continued Internet growth led to dedicated video game sites that served as repositories for shareware and other games, indie and mainstream alike, such as GameSpy's FilePlanet. A new issue had arisen for larger mainstream games that featured multiplayer elements: updates and patches could easily be distributed through these sites, but making sure all users were equally informed of the updates was difficult, and without the updates, some players would be unable to participate in multiplayer modes. Valve originally built the Steam software client to serve these updates automatically for its games, but over time it became a digital storefront through which users could also purchase games. For indie games, Steam started curating third-party titles (including some indies) onto the service by 2005, later adding Steam Greenlight in 2012, which allowed any developer to propose their game to the userbase for addition to the service, and ultimately replacing Greenlight with Steam Direct in 2017, whereby any developer can add their game to the service for a small fee. While Steam remains the largest digital storefront for personal computer distribution, a number of other storefronts have since opened. 
For example, Itch.io, established in 2013, has been more focused on serving indie games over mainstream ones, providing developers with store pages and other tools to help with marketing. Other services act more as digital retailers, giving indie developers tools to accept and redeem online purchases and distribute the game, such as Humble Bundle, but otherwise leaving the marketing to the developer. On consoles, the distribution of an indie game is handled by the console's game store once the developer has been approved by the console manufacturer. Similarly, for mobile games, the distribution of the game is handled by the app store provider once the developer has been approved to release apps on that type of device. In either case, all aspects of payment, redemption and distribution are handled at the manufacturer/app store provider level. A recent trend for some of the more popular indies is a limited physical release, typically for console-based versions. The distributor Limited Run Games was formed to produce limited runs of games, most commonly successful indie titles with a proven following that would provide a market for a physical edition. These versions are typically produced as special editions with additional physical products like art books, stickers, and other small items in the game's case. Other such distributors include Super Rare Games, Special Reserve Games, and Strictly Limited Games. In nearly all cases with digital distribution, the distribution platform takes a revenue cut of each sale, with the rest of the sale going to the developer, as a means to pay for the costs of maintaining the digital storefront. Industry Most indie games do not make a significant profit, and only a handful have made large profits. Instead, indie games are generally seen as a career stepping stone rather than a commercial opportunity. The Dunning–Kruger effect has been shown to apply to indie games: some people with little experience have been able to develop successful games from the start, but for most, it takes upwards of ten years of experience within the industry before one regularly starts making games with financial success. Most in the industry caution that indie games should not be seen as a financially rewarding career for this reason. The industry perception of indie games has also shifted, making the tactics of developing and marketing indie games difficult in contrast to AAA games. In 2008, a developer could earn around 17% of a game's retail price, and around 85% if sold digitally. This can lead to the appearance of more "risky" creative projects. Furthermore, the expansion of social websites has introduced gaming to casual gamers. Recent years have also brought out the importance of drawing social media influencers to help promote indie games. There is contention as to how prominent indie video game development is in the video game industry. Most games are not widely known or successful, and mainstream media attention remains with mainstream titles. This can be attributed to a lack of marketing for indie games, but indie games can be targeted at niche markets. Industry recognition of indie games through awards has grown significantly over time. The Independent Games Festival was established in 1998 to recognize the best of indie games, and since its first event in 1999 has been held in conjunction with the Game Developers Conference in the first part of each year alongside the Game Developers Choice Awards (GDCA). 
However, it was not until 2010 that indie games were seen as serious competition at major gaming awards, with the 2010 GDCA recognizing games like Limbo, Minecraft, and Super Meat Boy among AAA titles. Since then, indie games have frequently been included in award nominations alongside AAA games in major awards events like the GDCA, the D.I.C.E. Awards, The Game Awards, and the BAFTA Video Games Awards. Indie games like What Remains of Edith Finch, Outer Wilds, Untitled Goose Game, Hades, Inscryption, and Vampire Survivors have been awarded various Game of the Year awards. Community Indie developers are generally considered a highly collaborative community, with development teams sharing knowledge between each other and providing testing, technical support, and feedback, as indie developers are generally not in any direct competition with each other once they have achieved funding for their project. Indie developers also tend to be open with their target player community, using beta testing and early access to get feedback, and engaging users regularly through storefront pages and communication channels such as Discord. Indie game developers can be involved with various indie game trade shows, such as the Independent Games Festival, held alongside the Game Developers Conference, and IndieCade, held prior to the annual E3 convention. The Indie Megabooth was established in 2012 as a large showcase at various trade shows to allow indie developers to show off their titles. These events act as intermediaries between indie developers and the larger industry, as they allow indie developers to connect with larger developers and publishers for business opportunities, as well as to get word of their games out to the press prior to release. Game jams, including Ludum Dare, the Indie Game Jam, the Nordic Game Jam, and the Global Game Jam, are typically annual competitions in which game developers are given a theme, concept, and/or specific requirements and a limited amount of time, on the order of a few days, to come up with a game prototype to submit for review and voting by judges, with the potential to win small cash prizes. Companies can also hold internal game jams as a means to relieve stress, which may also generate ideas for future games, as has notably been the case for developer Double Fine and its Amnesia Fortnight game jams. The structure of such jams can influence whether the end games are more experimental or serious, and whether they are to be more playful or more expressive. While many game jam prototypes go no further, some developers have subsequently expanded their prototypes into full releases and successful indie games, such as Superhot, Super Time Force, Gods Will Be Watching, Hollow Knight, Surgeon Simulator, and Goat Simulator. Impact and popularity Indie games are recognized for helping to generate or revitalize video game genres, either bringing new ideas to stagnant gameplay concepts or creating whole new experiences. The expansion of roguelikes from ASCII tile-based hack-and-slash games to a wide variety of so-called "rogue-lites" that maintain the roguelike procedural generation and permadeath features grew directly out of indie games Strange Adventures in Infinite Space (2002) and its sequel Weird Worlds: Return to Infinite Space (2005), Spelunky (2008), The Binding of Isaac (2011), FTL: Faster Than Light (2012) and Rogue Legacy (2012). 
In turn, new takes on the roguelike genre were inspired by Slay the Spire (2019), which popularized the roguelike deck-building game, and Vampire Survivors (2022), which led to numerous "bullet heaven" or reverse bullet hell games using roguelike mechanics. Metroidvanias resurged following the releases of Cave Story (2004) and Shadow Complex (2009). Stardew Valley (2016) created a resurgence in life simulation games. Art games have gained attention through indie developers, with early indie titles such as Samorost (2003) and The Endless Forest (2005). The following table lists indie games that have reported total sales of over one million copies, based on the last reported sales figures. These results exclude downloaded copies for games that transitioned to a free-to-play model, such as Rocket League, and copies sold after acquisition by a larger publisher, when the game was no longer considered an indie game, such as Minecraft. See also References Literature External links The Creativity of Indie Video Games Documentary produced by Off Book Video game terminology Self-publishing
Indie game
Technology
9,390
8,486,333
https://en.wikipedia.org/wiki/Mannan-binding%20lectin
Mannose-binding lectin (MBL), also called mannan-binding lectin or mannan-binding protein (MBP), is a lectin that is instrumental in innate immunity as an opsonin and via the lectin pathway. Structure MBL has an oligomeric structure (400-700 kDa), built of subunits that contain three presumably identical peptide chains of about 30 kDa each. Although MBL can form several oligomeric forms, there are indications that dimers and trimers are biologically inactive as opsonins and that at least a tetrameric form is needed for activation of complement. Genes and polymorphisms The human MBL2 gene is located on chromosome 10q11.2-q21. Mice have two homologous genes, but in humans the first of them has been lost. Low-level expression of the MBL1 pseudogene 1 (MBL1P1) has been detected in liver. The pseudogene encodes a truncated 51-amino acid protein that is homologous to the MBLA isoform in rodents and some primates. Structural mutations in exon 1 of the human MBL2 gene, at codon 52 (Arg to Cys, allele D), codon 54 (Gly to Asp, allele B) and codon 57 (Gly to Glu, allele C), also independently reduce the level of functional serum MBL by disrupting the collagenous structure of the protein. Furthermore, several nucleotide substitutions in the promoter region of the MBL2 gene at position −550 (H/L polymorphism), −221 (X/Y polymorphism) and −427, −349, −336, del (−324 to −329), −70 and +4 (P/Q polymorphisms) affect the MBL serum concentration. Both the frequency of structural mutations and the promoter polymorphisms that are in strong linkage disequilibrium vary among ethnic groups, resulting in seven major haplotypes: HYPA, LYQA, LYPA, LXPA, LYPB, LYQC and HYPD. Differences in the distribution of these haplotypes are the major cause of interracial variations in MBL serum levels. Both HYPA and LYQA are high-producing haplotypes, LYPA is an intermediate-producing haplotype and LXPA a low-producing haplotype, whereas LYPB, LYQC and HYPD are defective haplotypes that cause severe MBL deficiency. Such polymorphisms are also present in exon 4. Both the MBL2 and MBL1P1 genes have been repeatedly hit by mutations throughout the evolution of primates. The latter was eventually silenced by mutations in the glycine residues of the collagen-like region. It has been selectively turned off during evolution through the same molecular mechanisms that cause the MBL2 variant alleles in man, suggesting an evolutionary selection for low-producing MBL genes. Posttranslational modifications In rat hepatocytes, MBL is synthesized in the rough endoplasmic reticulum. While in the Golgi, it undergoes two distinct posttranslational modifications and is assembled into high molecular weight multimeric complexes. The modifications produce MBL in multiple forms of slightly different molecular masses, with pI values from 5.7 to 6.2. Proteolytic cleavage removes the 20-amino-acid N-terminal signal peptide, and hydroxylation and glycosylation have also been detected. Some cysteine residues can be converted to dehydroalanine. Function MBL belongs to the class of collectins in the C-type lectin superfamily, whose function appears to be pattern recognition in the first line of defense in the pre-immune host. MBL recognizes carbohydrate patterns found on the surface of a large number of pathogenic micro-organisms, including bacteria, viruses, protozoa and fungi. Binding of MBL to a micro-organism results in activation of the lectin pathway of the complement system. 
Another important function of MBL is that this molecule binds senescent and apoptotic cells and enhances engulfment of whole, intact apoptotic cells, as well as cell debris, by phagocytes. Activation The complement system can be activated through three pathways: the classical pathway, the alternative pathway, and the lectin pathway. One way the most recently discovered pathway, the lectin pathway, is activated is through the mannose-binding lectin protein. MBL binds to carbohydrates (to be specific, D-mannose and L-fucose residues) found on the surfaces of many pathogens. For example, MBL has been shown to bind to yeasts such as Candida albicans; viruses such as HIV and influenza A; many bacteria, including Salmonella and Streptococci; parasites like Leishmania; and SARS-CoV-2. Complexes MBL in the blood is complexed with (bound to) a serine protease called MASP (MBL-associated serine protease). There are three MASPs: MASP-1, MASP-2 and MASP-3, which have protease domains. There are also sMAP (also called MAp19) and MAp44, which do not have protease domains and are thought to be regulatory molecules of the MASPs. MASPs also form complexes with ficolins, which are similar to MBL functionally and structurally, with the exception that ficolins recognize their targets through fibrinogen-like domains. In order to activate the complement system when MBL binds to its target (for example, mannose on the surface of a bacterium), the MASP protein functions to cleave the blood protein C4 into C4a and C4b. The C4b fragments can then bind to the surface of the bacterium and initiate the formation of a C3-convertase. The subsequent complement cascade catalyzed by C3-convertase results in creating a membrane attack complex, which causes lysis of the pathogen, as well as of altered self in the context of apoptotic and necrotic cells. The MBL/MASP-1 complex also has thrombin-like activity (thrombin clots fibrin to initiate blood clots). Mice that genetically lack MBL or MASP-1/3 (but not MASP-2/sMAP) have prolonged bleeding times in experimental injury models, although the mice appear normal if there is no insult to the body. Clinical significance MBL is produced in the liver as a response to infection, and is one of many factors termed acute phase proteins. Expression and function in other organs have also been suggested. The three structural polymorphisms of exon 1 have been reported to cause susceptibility to various common infections, including meningococcal disease. However, evidence has been presented that suggests no harmful effect of these variants with regard to meningococcal disease. MBL deficiency is very common in humans, with approximately 10% of individuals having this deficiency. External links References Immune system Collectins Blood proteins Human proteins Lectins
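As a rough worked illustration using only the figures quoted above (an estimate from the stated masses, not a measured stoichiometry): each structural subunit of three roughly 30 kDa chains has a mass of about
\[ 3 \times 30\,\text{kDa} \approx 90\,\text{kDa}, \]
so the reported 400-700 kDa oligomers correspond to roughly
\[ \frac{400}{90} \approx 4 \quad\text{to}\quad \frac{700}{90} \approx 8 \]
subunits, consistent with the statement that dimers and trimers are inactive and that at least a tetramer of subunits is needed for complement activation.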
Mannan-binding lectin
Biology
1,526
4,962,398
https://en.wikipedia.org/wiki/Chemistry%20Letters
Chemistry Letters is a peer-reviewed scientific journal published by the Chemical Society of Japan. It specializes in the rapid publication of reviews and letters on all areas of chemistry. The editor-in-chief is Mitsuhiko Shionoya (University of Tokyo). According to the Journal Citation Reports, the journal has a 2014 impact factor of 1.23. References External links Chemistry journals Academic journals established in 1972 English-language journals Academic journals published by learned and professional societies Monthly journals Chemical Society of Japan
Chemistry Letters
Chemistry
101
2,737,680
https://en.wikipedia.org/wiki/Devege%C3%A7idi%20Dam
Devegeçidi Dam is one of the 22 dams of the Southeastern Anatolia Project of Turkey. It is near Diyarbakır, on a branch of the Tigris River. It was constructed for irrigation purposes between 1965 and 1972. Notes References External links www.gap.gov.tr - Official GAP web site Dams in Diyarbakır Province Southeastern Anatolia Project Buildings and structures in Diyarbakır Dams completed in 1972 Rock-filled dams Dams in the Tigris River basin Important Bird Areas of Turkey
Devegeçidi Dam
Engineering
117
52,625,188
https://en.wikipedia.org/wiki/Donna%20R.%20Maglott
Donna R. Maglott is a staff scientist at the National Center for Biotechnology Information known for her research on large-scale genomics projects, including the mouse genome, and for the development of databases required for genomics research. Education and career Maglott earned her Ph.D. in 1970 from the University of Michigan, where she worked on the 50S ribosomal subunit of the bacterium Escherichia coli. She held an academic position at Howard University, and then moved to the American Type Culture Collection (ATCC) in 1986, where she began establishing databases needed for genomic research. She started at the National Center for Biotechnology Information (NCBI) in 1998. Research While at Howard University, Maglott worked on protein synthesis during early development of sea urchins. At ATCC, she worked on repositories holding clone and genomic information and began research using genomic tools to investigate information on human chromosomes. In 2000, Maglott worked with Kim D. Pruitt to introduce RefSeq, a web-based resource for gene-based information that is hosted by NCBI and has been updated over the years. She has also been involved in the development of other databases at NCBI, including Entrez Gene, ClinVar, STS markers, Conserved CoDing Sequences (CCDS), Map Viewer, RefSeqGene, the NIH Genetic Testing Registry (GTR), and MedGen. Large-scale genomics projects that Maglott has worked on include the rat genome database, and the mouse genome and transcriptome. In 2006, Maglott was part of the team analyzing the genome of the sea urchin, Strongylocentrotus purpuratus, which was the first genome obtained for a motile marine invertebrate. Selected publications References Year of birth missing (living people) Living people American medical researchers University of Michigan alumni American women geneticists Bioinformaticians National Institutes of Health people American geneticists
Donna R. Maglott
Biology
401
1,688,648
https://en.wikipedia.org/wiki/Exhaust%20system
An exhaust system is used to guide reaction exhaust gases away from a controlled combustion inside an engine or stove. The entire system conveys burnt gases from the engine and includes one or more exhaust pipes. Depending on the overall system design, the exhaust gas may flow through one or more of the following: the cylinder head and exhaust manifold; a turbocharger to increase engine power; a catalytic converter to reduce air pollution; and a muffler (North America) / silencer (UK/India) to reduce noise. Design criteria An exhaust pipe must be carefully designed to carry toxic and noxious gases away from the users of the machine. Indoor generators and furnaces can quickly fill an enclosed space with poisonous exhaust gases such as hydrocarbons, carbon monoxide and nitrogen oxides if they are not properly vented to the outdoors. Also, the gases from most machines are scorching hot; the pipe must be heat-resistant and must not pass through or near anything that can burn or be damaged by heat. A chimney is an exhaust pipe in a stationary structure. For the internal combustion engine, it is important to have the exhaust system "tuned" (refer to tuned exhaust) for optimal efficiency. The system should also meet the emissions regulations of each country, such as China 5 in China, Euro 5 in European countries, and BS-4 in India. Motorcycles In most motorcycles, all or most of the exhaust system is visible and may be chrome plated as a display feature. Aftermarket exhausts may be made from steel, aluminium, titanium, or carbon fiber. Motorcycle exhausts come in many varieties depending on the type of engine and its intended use. A twin-cylinder bike may have independent exhaust sections, as seen in the Kawasaki EX250 (also known as the Ninja 250 in the US, or the GPX 250), or a single exhaust section known as a two-into-one (2-1). Four-cylinder machines, super-sport bikes like Kawasaki's ZX series, Honda's CBR series, Yamaha's YZF series, latterly titled R6 and R1, and Suzuki's GSX-R, often have a twin exhaust system. A "full system" may be bought as an aftermarket accessory, also called a 4-2-1 or 4-1, depending on its layout. In the past, these bikes would come as standard with a single exhaust muffler. This practice lasted until the early 2000s, when EU noise and pollution regulations effectively forced companies to use other methods to increase the motorcycle's performance. Trucks In many trucks/lorries, all or most of the exhaust system is visible, often with a vertical exhaust pipe. Usually, in such trucks, the silencer is surrounded by a perforated metal sheath to prevent people from getting burnt by touching the hot silencer. This sheath may be chrome plated as a display feature. Part of the pipe between the engine and the silencer is often flexible metal industrial ducting, which helps to avoid vibration from the engine being transferred into the exhaust system. Sometimes, a large diesel exhaust pipe is vertical, to blow the hot, toxic gas well away from people; in such cases, the end of the exhaust pipe often has a hinged metal flap to stop debris, birds, and rainwater from falling inside. In former times, exhaust systems of trucks/lorries in Britain were usually out of sight underneath the chassis. Two-stroke engines In a two-stroke engine, such as those used on dirt bikes, a bulge in the exhaust pipe known as an expansion chamber uses the pressure of the exhaust to create a pump that squeezes more air and fuel into the cylinder during the intake stroke. This provides greater power and fuel efficiency. 
See Kadenacy effect. Marine engines With an onboard diesel or petrol (gasoline) engine below decks on marine vessels, lagging the exhaust pipe stops it from overheating the engine room, where people must work to service the engine, and feeding water into the exhaust pipe cools the exhaust gas and thus lessens the back-pressure at the engine's cylinders. In marine service, the exhaust manifold is often integral to a heat exchanger that allows seawater to cool a closed system of freshwater circulating within the engine. Outboard motors In outboard motors, the exhaust system is usually a vertical passage through the engine structure, and to reduce out-of-water noise, it blows out underwater, sometimes through the middle of the propeller. Terminology Manifold or header In most production engines, the manifold is an assembly designed to collect exhaust gas from two or more cylinders into one pipe. In stock production cars, manifolds are often made of cast iron. They may have material-saving design features such as using the least metal, occupying the least space necessary, or having the lowest production cost. These design restrictions often result in a cost-effective design that does not do the most efficient job of venting the gases from the engine. Inefficiencies generally occur due to the nature of the combustion engine and its cylinders. Since cylinders fire at different times, exhaust leaves them at different times, and pressure waves from gas emerging from one cylinder might not have completely vacated the exhaust system when exhaust from another cylinder arrives. This creates back pressure and restriction in the engine's exhaust system, limiting the engine's actual performance. Regardless of the negative attributes of steel tube exhaust outlet configurations, engineers who design engine components choose conventional cast iron exhaust manifolds for their positive attributes, such as an array of heat management properties and longevity superior to any other type of exhaust outlet design. A header is a manifold specifically designed for performance. During design, engineers create a manifold without regard to weight or cost but instead for optimal flow of the exhaust gases. This design results in a header that is more efficient at scavenging the exhaust from the cylinders. Headers are generally circular steel tubing with bends and folds calculated to make the paths from each cylinder's exhaust port to the common outlet all of equal length, joined at narrow angles to encourage pressure waves to flow through the outlet, not back towards other cylinders. In a set of tuned headers, the pipe lengths are carefully calculated to enhance exhaust flow in a particular range of engine speeds (revolutions per minute). A common method of increasing an engine's power output is using upgraded headers. The increased power output is often a result of the larger cross-sectional area of the pipes (reducing the resistance on the exhaust gases) and of designing the pipe lengths so that the pressure wave assists in exhaust scavenging. For inline-four engines and V8 engines, exhaust manifolds are usually either a 4-2-1 design (where the four pipes merge into two, followed by a separate merge of these two pipes into one) or a 4-1 design (where the four pipes directly merge into one). Headers are generally made by aftermarket automotive companies, but sometimes can be bought from the high-performance parts department at car dealerships. 
Most car performance enthusiasts buy aftermarket headers made by companies solely focused on producing reliable, cost-effective, well-designed headers specifically for their cars. Headers can also be custom-designed by a specialty shop. Due to the advanced materials that some aftermarket headers are made of, this can be expensive. An exhaust system can be custom-built for many vehicles and generally is not specific to the car's engine or design, except for needing to connect solidly and properly to the engine. This is usually accomplished by correct sizing in the design stage and selecting a proper gasket type and size for the engine. Catalytic converter Some systems (called catless or de-cat systems) eliminate the catalytic converter. In the United States, it is a legal requirement to have a catalytic converter, and converters may not be removed even from a vehicle that is used only for "off-road" driving. The main purpose of a catalytic converter on an automobile is to reduce harmful emissions of hydrocarbons, carbon monoxide, and nitrogen oxides into the atmosphere. Converters work by transforming the polluting exhaust components into water and carbon dioxide. There is a light-off temperature above which catalytic converters start to be efficient and work properly. Catalytic converters can cause back pressure if clogged or not designed for the required flow rate. In these situations, upgrading or removal of the catalytic converter can increase power at high revs. However, the catalytic converter is vital to the vehicle's emission control systems; therefore, a non-standard product can cause a vehicle to be unroadworthy. Piping The piping that connects all of the individual components of the exhaust system is called the exhaust pipe. If the diameter is too small, power at high RPM will be reduced. Piping diameter that is too large can reduce torque at low RPM and can cause the exhaust system to sit lower to the ground, increasing the risk of it being hit and damaged while the car is moving. On cars with two sets of exhaust pipes, a crossover pipe is often used to connect the two pipes. Typical designs of crossover pipes are a perpendicular pipe ('H-pipe', due to its shape) or angled pipes that slowly merge and separate ('X-pipe'). Muffler Original equipment mufflers typically reduce the noise level from the tailpipe by bouncing sound waves off of the back, front, and sides of the muffler. They are designed to meet the maximum allowable noise level required by government regulations. However, some original equipment mufflers are a significant source of backpressure. Glasspack mufflers (also called 'cannons' or 'hotdogs') are straight-through mufflers that consist of an inner perforated tube, an outer solid tube, and fiberglass sound insulation between the two tubes. They often have less back pressure than original equipment mufflers, but are relatively ineffective at reducing sound levels. Another common type of muffler is the chambered muffler, which consists of a series of concentric or eccentric pipes inside the expansion chamber cavity. These pipes allow sound to travel into them and cause the sound waves to bounce off the closed, flat ends of the pipe. These reflections partially cancel each other out, reducing the sound level. Resonators are sections of pipe that expand to a larger diameter and allow the sound waves to reflect off the walls and cancel out, reducing the noise level. Resonators can be used inside mufflers or as separate components in an exhaust system. 
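As an illustrative sketch of the cancellation principle just described, using one common configuration (a closed side-branch quarter-wave resonator; the numbers below are hypothetical, not taken from any specific production muffler): a branch of length L attenuates sound most strongly near the frequency
\[ f = \frac{c}{4L}, \]
where c is the speed of sound in the gas. For example, with c ≈ 343 m/s (room-temperature air; hot exhaust gas gives a higher value), a 0.3 m branch targets f ≈ 343 / (4 × 0.3) ≈ 286 Hz: the wave travels down the branch and back for a total path of 2L, half a wavelength, and so returns out of phase with the incoming sound and partially cancels it.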
Tailpipe and exhaust With trucks, sometimes the silencer is mounted crossways under the front of the cab, and its tailpipe blows sideways to the offside (right side if driving on the left, left side if driving on the right). The side of a passenger car on which the exhaust exits beneath the rear bumper usually indicates the market for which the vehicle was designed: Japanese (and some older British) vehicles have exhausts on the right so they are furthest from the curb in countries which drive on the left, while European vehicles have exhausts on the left. The end of the final length of the exhaust pipe where it vents to open air, generally the only visible part of the exhaust system on a vehicle, often ends with a straight or angled cut but may include a decorative tip. The tip is sometimes chromed. It frequently has a larger diameter than the rest of the exhaust system. This produces a final reduction in pressure and is sometimes used to enhance the car's appearance. In the late 1950s, in the United States, manufacturers had a fashion in car styling of forming the rear bumper with a hole at each end through which the exhaust would pass. Two outlets symbolized V8 engines. Many expensive cars (Cadillac, Lincoln, Imperial, Packard) were fitted with this design. One justification for this was that luxury cars in those days had such an extended rear overhang that the exhaust pipe scraped the ground when the car traversed ramps. The fashion disappeared after customers noted that the rear end of the vehicle is a low-pressure area that collected soot from the exhaust, whose acidic content ate into the chrome-plated rear bumper. When a bus, truck, tractor, or excavator has a vertical exhaust pipe (called stacks or pipes behind the cab), sometimes the end is curved, or has a hinged cover flap which the gas flow blows out of the way, to try to prevent foreign objects (including feces from a bird perching on the exhaust pipe when the vehicle is not being used) from getting inside the exhaust pipe. In some trucks, when the silencer (muffler) runs front-to-back under the chassis, the end of the tailpipe turns and blows downwards. That protects anyone near a stationary truck from getting a direct blast of the exhaust gas, but often raises dust when driving on a dry, dusty surface such as a building site. Lake pipes Because of the difficulty of adapting large-diameter exhaust tubing to the undercarriage of ladder-frame or body-on-frame vehicles with altered-geometry suspensions, lake pipes evolved as a front-engined vehicle exhaust archetype, crafted by specialty motorsport engine builders of the 1930s, 1940s, and 1950s whose focus was optimizing the acoustic effect associated with high-output internal combustion engines. The name is derived from their use on the vast, empty, dry lake beds northeast of Los Angeles County, where engine specialists custom-crafted, interchanged, and evaluated one-piece header manifolds of various mil thicknesses, chosen for the temperature, humidity, elevation, and climate they anticipated. With no intrinsic performance gain to be derived per se, lake pipes evolved as a function of practicality. 
Because manifolds that routed straight out of the front wheel wells posed an asphyxiation risk to the race driver, "lake pipes" were fashioned to extend from the header flange along the rocker panels, on the bottom side of the vehicle beneath the doors, thus allowing (1) suspension tuners a ride height low enough for land speed record attempts, and (2) engine tuners the ease and flexibility of interchanging different exhaust manifolds without hoisting the vehicle, precluding having to wrench at the undercarriage. As body-on-frame chassis architecture ceded to superleggera, unibody, and monocoque archetypes, this, in tandem with smog-abatement legislation, rendered lake pipes obsolete as a performance option. There is no meaningful performance gain for contemporary vehicles; lake pipes are aesthetic accessories, usually chrome-plated. Some allow the driver to control whether the exhaust gas is routed to the standard exhaust system or through the lake pipes. Some are equipped with laker caps which, affixed by fasteners at the terminal end of the exhaust tips, serve to (1) "cap" the exhaust system when not in use and/or (2) indicate that the presence of lake pipes is merely cosmetic. Header-back The header-back (or header back) is the part of the exhaust system from the header outlet to the final vent to open air: everything from the header back. Header-back systems are generally produced as aftermarket performance systems for cars without turbochargers. Turbo-back The turbo-back (or turbo back) is the part of the exhaust system from the outlet of a turbocharger to the final vent to open air. Turbo-back systems are generally produced as aftermarket performance systems for cars with turbochargers. Some turbo-back (and header-back) systems replace stock catalytic converters with others that have less flow restriction. Cat-back Cat-back (also cat back and catback) refers to the portion of the exhaust system from the outlet of the catalytic converter to the final vent to open air. This generally includes the pipe from the converter to the muffler, the muffler, and the final length of pipe to open air. Cat-back exhaust systems generally use pipes of larger diameter than the stock system. To reduce backpressure, the mufflers included in these kits are often glasspacks. If the system is engineered more for show than functionality, it may be tuned to enhance the lower sounds from high-RPM, low-displacement engines. Exhaust aftertreatment Exhaust aftertreatments are devices or methods used to meet emissions regulations: Catalytic converter Exhaust gas recirculation (EGR) Diesel particulate filter (DPF) Diesel exhaust fluid (DEF or AdBlue) Carbon capture and storage Selective non-catalytic reduction (SNCR) Scrubber Exhaust system tuning Aftermarket exhaust parts can increase peak power by reducing the back pressure of the exhaust system. These parts sometimes can void factory warranties; however, the European Union Block Exemption Regulation 1400/2002 prevents manufacturers from rejecting warranty claims if the aftermarket parts are of matching quality and specifications to the original parts. Many automotive companies offer aftermarket exhaust system upgrades as a subcategory of engine tuning. This is often relatively expensive, as it usually includes replacing the entire exhaust manifold or other significant components. These upgrades, however, can improve engine performance by reducing the exhaust back pressure and reducing the amount of heat from the exhaust being lost into the underbonnet area. 
This reduces the underbonnet temperature and consequently lowers the intake manifold temperature, increasing power. It also has the positive side effect of preventing damage to heat-sensitive components. Backpressure is most commonly reduced by replacing exhaust manifolds with headers, which have smoother bends and normally wider pipe diameters. Exhaust heat management helps reduce exhaust heat radiating from the exhaust pipe and components. One common solution for aftermarket upgrades is a ceramic coating applied via thermal spraying as a heat shield. This not only reduces heat loss and lessens back pressure, but also provides an effective way to protect the exhaust system from wear and tear, thermal degradation, and corrosion. Tuning can change the sound of the exhaust system, known as the exhaust note. See also Vehicle emissions control Expansion chamber Motor vehicle emissions Nitrogen oxide sensor British Leyland Motor Corp v Armstrong Patents Co - litigation involving the right to supply aftermarket exhaust systems Exhaust Heat Management Zircotec References External links Engine components
Exhaust system
Technology
3,739
53,931,502
https://en.wikipedia.org/wiki/Global%20Offset%20Table
The Global Offset Table, or GOT, is a section of a computer program's memory (in executables and shared libraries) used to enable code compiled as an ELF file to run correctly, independent of the memory address where the program's code or data is loaded at runtime. It maps symbols in program code to their corresponding absolute memory addresses to facilitate Position Independent Code (PIC) and Position Independent Executables (PIE), which are loaded at a different memory address each time the program is started. When PIC or PIE code is run, the runtime memory addresses, also known as absolute memory addresses, of variables and functions are unknown before the program is started, so they cannot be hardcoded during compilation by a compiler. The Global Offset Table is represented as the .got and .got.plt sections in an ELF file, which are loaded into the program's memory at startup. The operating system's dynamic linker updates the global offset table relocations (mapping symbols to absolute memory addresses) at program startup or as symbols are accessed. This is the mechanism that allows shared libraries (.so) to be relocated to a different memory address at startup, avoiding memory address conflicts with the main program or other shared libraries, and that helps harden computer program code against exploitation. References Computer programming
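To make the mechanism concrete, here is a minimal sketch of a C source file meant to be compiled as position-independent code. The build and inspection commands in the comments are typical Linux invocations given as illustrative assumptions, not part of the ELF specification.

```c
/* got_demo.c -- a minimal sketch of code whose external references are
 * resolved through the Global Offset Table when compiled as PIC.
 *
 * Build as a shared library (typical Linux commands, shown as an example):
 *   gcc -fPIC -shared got_demo.c -o libgotdemo.so
 * Inspect the GOT sections and their relocations:
 *   readelf -S libgotdemo.so     # lists sections, including .got / .got.plt
 *   readelf -r libgotdemo.so     # shows GLOB_DAT / JUMP_SLOT relocations
 */
#include <stdio.h>

int shared_counter = 0;   /* global data: PIC reads its address from a
                             .got entry rather than a hardcoded address */

void bump_and_report(void)
{
    shared_counter++;                           /* load via the GOT      */
    printf("counter = %d\n", shared_counter);   /* call routed through
                                                   the PLT; its resolved
                                                   target lives in
                                                   .got.plt              */
}
```

Compiled with -fPIC, the access to the global variable is emitted as an indirect load through a .got entry, while the call to printf goes through the procedure linkage table (PLT), whose resolved target the dynamic linker stores in .got.plt.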
Global Offset Table
Technology,Engineering
262
48,914,877
https://en.wikipedia.org/wiki/PET%20radiotracer
PET radiotracer is a type of radioligand that is used for diagnostic purposes via the positron emission tomography (PET) imaging technique. Mechanism PET is a functional imaging technique that produces a three-dimensional image of functional processes in the body. The system detects pairs of gamma rays emitted indirectly by a positron-emitting radionuclide (tracer), which is introduced into the body on a biologically active molecule. Pharmacology In in vivo systems, radioligands are often used to quantify the binding of a test molecule to a binding site. The higher the affinity of the molecule, the more radioligand is displaced from the binding site, and the resulting change in radioactivity can be measured by scintigraphy. This assay is commonly used to calculate the binding constant of molecules to receptors. Because PET radiotracers can be harmful at pharmacological doses, they cannot be administered at the normal doses used for medications; therefore, the binding affinity (pKd) of PET tracers must be high. In addition, since PET imaging is intended to investigate a specific function accurately, selective binding to the specific target is very important. See also Medicinal radiocompounds List of PET radiotracers Positron emission tomography Medicinal radiochemistry Radioligand References Positron emission tomography Neuroimaging Nuclear medicine Radiopharmaceuticals Medicinal radiochemistry Chemicals in medicine
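As an illustration of the competition-binding arithmetic sketched above, the following minimal example applies the standard Cheng-Prusoff relation, Ki = IC50 / (1 + [L]/Kd), to convert a measured IC50 into an inhibition constant. All numerical values are invented placeholders, not data from any particular assay.

```c
/* Sketch: estimating a test molecule's inhibition constant (Ki) from a
 * radioligand competition assay via the Cheng-Prusoff relation
 *     Ki = IC50 / (1 + [L] / Kd)
 * where [L] is the radioligand concentration used in the assay and Kd is
 * the radioligand's own dissociation constant. */
#include <stdio.h>

static double cheng_prusoff_ki(double ic50, double ligand_conc, double kd)
{
    return ic50 / (1.0 + ligand_conc / kd);
}

int main(void)
{
    double ic50 = 25e-9; /* IC50 read off the displacement curve (mol/L) */
    double L    = 2e-9;  /* radioligand concentration in the assay       */
    double kd   = 1e-9;  /* radioligand dissociation constant            */

    printf("Ki ~ %.3g mol/L\n", cheng_prusoff_ki(ic50, L, kd));
    return 0;
}
```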
PET radiotracer
Physics,Chemistry
299
8,213,823
https://en.wikipedia.org/wiki/Trabectedin
Trabectedin, sold under the brand name Yondelis, is an antitumor chemotherapy medication for the treatment of advanced soft-tissue sarcoma and ovarian cancer. The most common adverse reactions include nausea, fatigue, vomiting, constipation, decreased appetite, diarrhea, peripheral edema, dyspnea, and headache. It is sold by Pharma Mar S.A. and Johnson and Johnson. It is approved for use in the European Union, Russia, South Korea and the United States. The European Commission and the U.S. Food and Drug Administration (FDA) granted orphan drug status to trabectedin for soft-tissue sarcomas and ovarian cancer. Discovery and production During the 1950s and 1960s, the National Cancer Institute carried out a wide-ranging program of screening plant and marine organism material. As part of that program, extract from the sea squirt Ecteinascidia turbinata was found to have anticancer activity in 1969. Separation and characterization of the active molecules had to wait many years for the development of sufficiently sensitive techniques, and the structure of one of them, ecteinascidin 743, was determined by KL Rinehart at the University of Illinois in 1984. Rinehart had collected his sea squirts by scuba diving in the reefs of the West Indies. The biosynthetic pathway responsible for producing the drug has been determined to come from Candidatus Endoecteinascidia frumentensis, a microbial symbiont of the tunicate. The Spanish company PharmaMar licensed the compound from the University of Illinois before 1994 and attempted to farm the sea squirt with limited success. Yields from the sea squirt are extremely low: around 1,000 kilograms of animals are needed to isolate 1 gram of trabectedin, and about 5 grams were believed to be needed for a clinical trial, so Rinehart asked the Harvard chemist E. J. Corey to search for a synthetic method of preparation. His group developed such a method and published it in 1996. This was later followed by a simpler and more tractable method, which was patented by Harvard and subsequently licensed to PharmaMar. The current supply is based on a semisynthetic process developed by PharmaMar starting from safracin B, a chemical obtained by fermentation of the bacterium Pseudomonas fluorescens. PharmaMar entered into an agreement with Johnson & Johnson to market the compound outside Europe. Approvals and indications Trabectedin was first trialed in humans in 1996. Soft tissue sarcoma In 2007, the European Commission gave authorization for the marketing of trabectedin, under the trade name Yondelis, "for the treatment of patients with advanced soft tissue sarcoma, after failure of anthracyclines and ifosfamide, or who are unsuited to receive these agents". The European Medicines Agency's evaluating committee, the Committee for Medicinal Products for Human Use (CHMP), observed that trabectedin had not been evaluated in an adequately designed and analyzed randomized controlled trial against current best care, and that the clinical efficacy data were mainly based on patients with liposarcoma and leiomyosarcoma. However, the pivotal study did show a significant difference between two different trabectedin treatment regimens, and due to the rarity of the disease, the CHMP considered that marketing authorization could be granted under exceptional circumstances. As part of the approval, PharmaMar agreed to conduct a further trial to identify whether any specific chromosomal translocations could be used to predict responsiveness to trabectedin.
Trabectedin is also approved in South Korea and Russia. In 2015 (after a phase III study comparing trabectedin with dacarbazine), the US FDA approved trabectedin (Yondelis) for the treatment of liposarcoma and leiomyosarcoma that is either unresectable or has metastasized. Patients must have received prior chemotherapy with an anthracycline. Ovarian cancer and other In 2008, the submission was announced of a registration dossier to the European Medicines Agency and the FDA for Yondelis when administered in combination with pegylated liposomal doxorubicin (Doxil, Caelyx) for the treatment of women with relapsed ovarian cancer. In 2011, Johnson & Johnson voluntarily withdrew the submission in the United States following a request by the FDA for an additional phase III study to be done in support of the submission. Trabectedin is also in phase II trials for prostate, breast, and paediatric cancers. Structure Trabectedin is composed of three tetrahydroisoquinoline moieties, eight rings including one 10-membered heterocyclic ring containing a cysteine residue, and seven chiral centers. Biosynthesis The biosynthesis of trabectedin in the tunicate symbiotic bacterium Candidatus Endoecteinascidia frumentensis starts with a fatty acid loading onto the acyl-ligase domain of the EtuA3 module. A cysteine and a glycine are then loaded as canonical NRPS amino acids. A tyrosine residue is modified by the enzymes EtuH, EtuM1, and EtuM2, which add a hydroxyl at the meta position of the phenol and two methyl groups at the para-hydroxyl and the meta carbon position. This modified tyrosine reacts with the original substrate via a Pictet-Spengler reaction, where the amine group is converted to an imine by deprotonation, then attacks the free aldehyde to form a carbocation that is quenched by electrons from the methyl-phenol ring. This is done in the EtuA2 T-domain. This reaction is done a second time to yield a dimer of modified tyrosine residues that have been further cyclized via the Pictet-Spengler reaction, yielding a bicyclic ring moiety. The EtuO and EtuF3 enzymes continue to post-translationally modify the molecule, adding several functional groups and making a sulfide bridge between the original cysteine residue and the beta-carbon of the first tyrosine to form ET-583, ET-597, ET-596, and ET-594, which have been previously isolated. A third O-methylated tyrosine is added and cyclized via the Pictet-Spengler reaction to yield the final product. Total synthesis The total synthesis by E. J. Corey used this proposed biosynthesis to guide its synthetic strategy. The synthesis uses such reactions as the Mannich reaction, the Pictet-Spengler reaction, the Curtius rearrangement, and chiral rhodium-based diphosphine-catalyzed enantioselective hydrogenation. A separate synthetic process involved the Ugi reaction to assist in the formation of the pentacyclic core. This synthesis was unprecedented in using such a one-pot multicomponent reaction for so complex a molecule. Mechanism of action It has been shown that trabectedin blocks DNA binding of the oncogenic transcription factor FUS-CHOP and reverses the transcriptional program in myxoid liposarcoma. By reversing the genetic program created by this transcription factor, trabectedin promotes differentiation and reverses the oncogenic phenotype in these cells. Other than transcriptional interference, the mechanism of action of trabectedin is complex and not completely understood.
The compound is known to bind and alkylate DNA at the N2 position of guanine. It is known from in vitro work that this binding occurs in the minor groove, spans approximately three to five base pairs and is most efficient with CGG sequences. Additional favorable binding sequences are TGG, AGC, or GGC. Once bound, this reversible covalent adduct bends DNA toward the major groove, interferes directly with activated transcription, poisons the transcription-coupled nucleotide excision repair complex, promotes degradation of RNA polymerase II, and generates DNA double-strand breaks. In 2024, researchers from ETH Zürich and UNIST determined that abortive transcription-coupled nucleotide excision repair of trabectedin-DNA adducts forms persistent single-strand breaks (SSBs) as the adducts block the second of the two sequential NER incisions. The researchers mapped the 3’-hydroxyl groups of SSBs originating from the first NER incision at trabectedin lesions, recording TC-NER on a genome-wide scale, which resulted in a TC-NER-profiling assay TRABI-Seq. Society and culture Legal status In September 2020, the European Medicines Agency recommended that the use of trabectedin in treating ovarian cancer remain unchanged. References 2,5-Dimethoxyphenethylamines Acetate esters Antineoplastic drugs Benzodioxoles Drugs developed by Johnson & Johnson Orphan drugs Hydroxyarenes Total synthesis Phenethylamine alkaloids Bacterial alkaloids
Trabectedin
Chemistry
1,934
15,658,702
https://en.wikipedia.org/wiki/Journal%20of%20the%20IEST
The Journal of the IEST is a peer-reviewed scientific journal and the official publication of the Institute of Environmental Sciences and Technology (IEST). It covers research on simulation, testing, modeling, control, and the teaching of the environmental sciences and technologies. The journal was established in 1958 as the Journal of Environmental Engineering. In October 1959, it was renamed Journal of Environmental Sciences and obtained its current title in 1998. External links Journal page on society's website Environmental science journals English-language journals Academic journals established in 1958 Hijacked journals
Journal of the IEST
Environmental_science
111
22,615,874
https://en.wikipedia.org/wiki/Limber%20hole
A limber hole is a drain hole through a frame or other structural member of a boat designed to prevent water from accumulating against one side of the frame, and allowing it to drain toward the bilge. Limber holes are common in the bilges of wooden boats. The term may be extended to cover drain holes in floors. Limber holes are created in between bulkheads so that one compartment does not fill with water. The limber holes allow water to drain into the lowest part of the bilge so that it can be pumped out by a single bilge pump (or more usually, one electric and one manual pump). The term is also commonly applied to the holes in mid-20th century submarine upperworks, which allow drainage from the superstructure. References Chapelle, Howard I. (1994, p252). Yacht Designing and Planning. W.W. Norton. . Brewer, Ted (1994, p139). Understanding Boat Design (4th ed.). International Marine, a division of McGraw Hill. . Shipbuilding Nautical terminology
Limber hole
Engineering
217
63,982,819
https://en.wikipedia.org/wiki/Diphenylcarbazide
1,5-Diphenylcarbazide (or simply diphenylcarbazide, often abbreviated DPC) is a chemical compound from the group of the carbazides. It has a structural formula similar to that of diphenylcarbazone and can be easily converted into it by oxidation. Properties Diphenylcarbazide is a white solid that is scarcely soluble in water, but readily soluble in organic solvents like acetone, hot ethanol and acetic acid. It forms colored complex compounds with certain metal ions. Diphenylcarbazide oxidizes to diphenylcarbazone when exposed to light and air, turning pink in the process. Uses Diphenylcarbazide is used as a redox indicator and for the photometric determination of certain heavy metal ions, such as those of chromium, mercury, cadmium, osmium, rubidium, technetium and others. Reacting diphenylcarbazide with chromium(VI) compounds, such as chromates or dichromates, produces diphenylcarbazone, which forms a red-violet complex with the chromium. Chromium(III) compounds can also be determined using this method, by first oxidizing them to chromium(VI) using an oxidizing agent (e.g. ammonium persulfate solution). Diphenylcarbazide has also been widely used in the chemical laboratory to detect mercury(II) compounds in a similar fashion. The reagent is typically used either as a 1% to 0.25% solution in some organic solvent, or in the form of test-strip paper for detection of heavy metals in drinking water at home. The reagent is very sensitive, with a detection threshold of 0.00000005 g/ml (5×10−8 g/ml) for chromium(VI) ions and 0.000002 g/ml (2×10−6 g/ml) for mercury(II) ions, that is, 50 ppb and 2000 ppb, respectively. In the beginning of the 20th century, the following procedure using the diphenylcarbazide indicator was developed to prove the presence of mercury in solution: one drop of the solution to be tested is deposited on a filter paper which has been dipped into a freshly prepared 1% alcoholic solution of diphenylcarbazide. Mercury salts produce a purple spot, even in very dilute solution. Chromates and molybdates produce the same reaction. The major drawback of the test is that stock solutions of diphenylcarbazide deteriorate in many solvents, so they must be freshly prepared. A solution to this problem was found in a 1955 publication by Urone: the non-aqueous solvents ethyl acetate and acetone are the better choices, in which diphenylcarbazide solutions are stable for months. Diphenylcarbazide solutions in methyl ethyl ketone, methyl cellosolve (2-methoxyethanol), and isopropyl alcohol are usable for 1–2 weeks. Aqueous solutions, solvents tending to be basic such as methanol and ethanol, and those containing traces of water and basic impurities do not make good solvents for stock solutions of the colorimetric reagent. Synthesis At least 16 different routes for synthesizing the compound are known, most of which use phenylhydrazine. An example is the reaction between phenylhydrazine and urea, which produces 1,5-diphenylcarbazide in about 96% yield. References Hydrazides Ureas Anilines
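Since the determination is photometric, a minimal sketch of the underlying arithmetic may help: the Beer-Lambert law converts the measured absorbance of the red-violet complex into a concentration. The wavelength (around 540 nm) is the one conventionally used for the chromium(VI)-diphenylcarbazide method; the molar absorptivity and absorbance below are placeholder values standing in for a real calibration curve.

```c
/* Sketch: converting the absorbance of the red-violet Cr(VI)-
 * diphenylcarbazone complex (read near 540 nm) into a concentration
 * via the Beer-Lambert law, A = epsilon * l * c. */
#include <stdio.h>

int main(void)
{
    /* Assumed calibration values: placeholders, not measured constants. */
    double epsilon = 4.0e4;   /* molar absorptivity, L mol^-1 cm^-1 */
    double path_cm = 1.0;     /* cuvette path length in cm */
    double absorbance = 0.25; /* absorbance near 540 nm against a blank */

    /* Beer-Lambert: c = A / (epsilon * l) */
    double conc_mol_per_L = absorbance / (epsilon * path_cm);

    /* mol/L -> g/L via Cr molar mass (52.0 g/mol), then -> ug/L (~ppb) */
    double conc_ppb = conc_mol_per_L * 52.0 * 1e6;

    printf("Cr(VI): %.3g mol/L (~%.0f ppb)\n", conc_mol_per_L, conc_ppb);
    return 0;
}
```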
Diphenylcarbazide
Chemistry
776
31,243,975
https://en.wikipedia.org/wiki/Morchella%20spongiola
Morchella spongiola is a species of fungus in the family Morchellaceae. It was first described scientifically by Jean Louis Émile Boudier in 1897. References External links Morchellaceae Edible fungi Fungi of Europe Fungi described in 1897 Fungus species
Morchella spongiola
Biology
56
77,060,756
https://en.wikipedia.org/wiki/Selman%27s%20theorem
In computability theory, Selman's theorem is a theorem relating enumeration reducibility with enumerability relative to oracles. It is named after Alan Selman, who proved it as part of his PhD thesis in 1971. Statement Informally, a set A is enumeration-reducible to a set B if there is a Turing machine which receives an enumeration of B (it has a special instruction to get the next element, or none if it has not yet been provided), and produces an enumeration of A. See enumeration reducibility for a precise account. A set A is computably enumerable with oracle B (or simply "in B") when there is a Turing machine with oracle B which enumerates the members of A; this is the relativized version of computable enumerability. Selman's theorem: A set A is enumeration-reducible to a set B if and only if A is computably enumerable with an oracle X whenever B is computably enumerable with the same oracle X. Discussion Informally, the hypothesis means that whenever there is a program enumerating B using some source of information (the oracle), there is also a program enumerating A using the same source of information. A priori, the program enumerating A could be running the program enumerating B as a subprogram in order to produce the elements of A from those of B, but it could also be using the source of information directly, perhaps in a different way than the program enumerating B, and it could be difficult to deduce it from the program enumerating B. However, the theorem asserts that, in fact, there exists a single program which produces an enumeration of A solely from an enumeration of B, without direct access to the source of information used to enumerate B. From a slightly different point of view, the theorem is an automatic uniformity result. Let P be the set of total computable functions f such that the range of f with ⊥ removed equals A, and let Q be similarly defined for B. A possible reformulation of the theorem is that if P is Mučnik-reducible to Q, then it is also Medvedev-reducible to Q. Informally: if every enumeration of B can be used to compute an enumeration of A, then there is a single (uniform) oracle Turing machine which computes some enumeration of A whenever it is given an enumeration of B as the oracle. Proof If A is enumeration-reducible to B and B is computably enumerable with oracle X, then A is computably enumerable with oracle X (it suffices to compose a machine that enumerates A given an enumeration of B with a machine that enumerates B with oracle X). Conversely, assume that A is not enumeration-reducible to B. We shall build X such that B is computably enumerable with oracle X, but A is not. Let ⟨·, ·⟩ denote some computable pairing function. We build X as a set of elements of the form ⟨x, y⟩ with x ∈ B, such that for each x ∈ B, there is at least one pair ⟨x, y⟩ in X. This ensures that B is computably enumerable with oracle X (through a semi-algorithm that takes an input x and searches for a y such that ⟨x, y⟩ ∈ X). The construction of X is done by stages, following the priority method. It is convenient to view the eventual value of X as an infinite bit string (the i-th bit is the boolean "i ∈ X") which is constructed by incrementally appending to a finite bit string. Initially, X is the empty string. We describe the n-th step of the construction. It extends X in two ways. First, we ensure that X has a 1 bit at some index ⟨x, y⟩, where x is the n-th element of B. If there is none yet, we choose y large enough that the index ⟨x, y⟩ is outside the current string X, and we add a 1 bit at this index (padding with 0 bits before it).
Doing this ensures that in the eventual value of X, there is some pair ⟨x, y⟩ in X for each x ∈ B. Second, let us call an "admissible extension" an extension of the current X which respects the property that 1 bits occur only at indices of the form ⟨x, y⟩ with x ∈ B. Denote by M the n-th oracle Turing machine. We use M(Z) to mean M associated to a specific oracle Z (if Z is a finite bit string, out-of-bounds requests return 0). We distinguish three cases. 1. There is an admissible extension Y such that M(Y) enumerates some x that is not in A. Fix such an x. We further extend Y by padding it with 0s until all oracle queries that were used by M(Y) before enumerating x become in bounds, and we set X to this extended Y. This ensures that, however X is later extended, M(X) does not enumerate A, as it enumerates x, which is not in A. 2. There is some value x in A which is not enumerated by any M(Y), for any admissible extension Y. In this case, we do not change X; it is already ensured that eventually M(X) will not enumerate A, because it cannot enumerate x: indeed, if it did, this would be done after a finite number of oracle invocations, which would lie in some admissible extension Y. 3. We show that the remaining case is absurd. Here, we know that all values enumerated by M(Y), for Y an admissible extension, are in A, and conversely, every element of A is enumerated by M(Y) for at least one admissible extension Y. In other words, A is exactly the set of all values enumerated by M(Y) for an admissible extension Y. We can build a machine which receives an enumeration of B, uses it to enumerate admissible extensions Y, runs the machines M(Y) in parallel, and enumerates the values they yield. This machine is an enumeration reduction from A to B, which is absurd since we assumed no such reduction exists. See also Enumeration reducibility Oracle machine Reduction (computability) References Theoretical computer science Theorems in theory of computation
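For reference, the two notions and the theorem can be written compactly in standard notation; the enumeration-operator formulation below is one common textbook rendering, not a quotation from Selman's original paper.

```latex
% Enumeration reducibility via an enumeration operator: W is a computably
% enumerable set of codes of pairs (x, D), with D ranging over finite sets.
\[
  A \le_e B \iff \exists W \text{ c.e. } \forall x \,
  \bigl( x \in A \iff \exists D \text{ finite: } \langle x, D \rangle \in W
  \text{ and } D \subseteq B \bigr)
\]
% Selman's theorem characterizes this reducibility purely in terms of
% relativized computable enumerability:
\[
  A \le_e B \iff \forall X \subseteq \mathbb{N} \,
  \bigl( B \text{ c.e. in } X \implies A \text{ c.e. in } X \bigr)
\]
```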
Selman's theorem
Mathematics
1,340
23,406,517
https://en.wikipedia.org/wiki/Flying%20Star%20Feng%20Shui
Xuan Kong Flying Star feng shui or Xuan Kong Fei Xing is a discipline in Feng Shui, integrating the principles of Yin Yang, the interactions between the five elements, the eight trigrams, the Lo Shu numbers, and the 24 Mountains, using time, space and objects to create an astrological chart that analyzes the positive and negative auras of a building. The analysis covers the wealth, mental and physiological states, success, relationships with external parties, and health of the inhabitants. During the Qing Dynasty, it was popularized by grandmaster Shen Zhu Ren with his book Mr. Shen's Study of Xuan Kong, or Shen Shi Xuan Kong Xue. Flying Star Feng Shui does not limit itself to buildings for the living, or Yang Zhai, where rules pertaining to directions apply equally to all built structures; it also applies to grave sites and buildings for spirits, or Yin Zhai. Fundamentals Numbers The flying stars are the nine numbers of the Lo Shu Square. Each number in the Lo Shu represents one of the Chinese trigrams and is related to an element, family member, cardinal direction, colour, hour, season, organ, ailment and many others. The numbers always move to the lower right (northwest), middle right (west), lower left (northeast), upper center (south), lower center (north), upper right (southwest), middle left (east), upper left (southeast) and back to the center. Time Time is divided into 20-year cycles. Each cycle of 20 years is a Period or "Yun". A grand cycle comprises 9 Periods in total, which covers a span of 180 years. Periods are used to describe the cyclical pattern of Qi. Different types of Qi have different strengths and weaknesses with reference to a particular Period. Periodic Table on Flying Stars Timely and Untimely Flying Stars A timely star is positive for a building, whereas an untimely star is negative. For the current period, Period 8 (years 2004–2023), stars Eight, Nine and One are timely (for a building, they are timely if and only if the object placed in that palace is timely). Star Eight is the most timely and is often treated as the Prosperous and Noble Star. Star Nine and Star One belong to Sheng Qi, a growing energy. The other six stars are regarded as having retreating, killing or dead Qi. Space An accurate measurement of direction must be obtained before any system of Feng Shui can be undertaken. A Luopan is a magnetic compass used to determine the precise direction of a structure or an item. 24 Mountains The most important ring on the Luopan is the 24 Mountain ring. On the 24 Mountain ring, each direction is subdivided into three sectors. Taking Directions Using the principles of Yin and Yang, the facing of a building is determined by the side of the built structure that receives the most Yang Qi. A house is constructed with an architectural frontage, the side that faces the surrounding landscape. The facing of that house is taken to be the direction of its frontage, which is most Yang in nature. In apartments or condominiums, the facing of a unit is determined by the facing of the entire building. If the structure has no obvious facade, the facing of the unit is determined by the side of the building having the most Yang energy (facing the busiest crowd flow). Taking locations Energy in a building can be tapped into by locating a person within a sector that houses the energy. Ideally, living objects should be located in a sector with positive Qi as determined by the Flying Star chart.
The layout of a building is demarcated with a Nine Palace grid, which looks like a tic-tac-toe grid. A door, room or other object's location refers to the square within this grid where the object is found. This may or may not correspond to the direction that the object faces. A door could be located in the southwest sector but face south; in that case its location is southwest, while its facing direction is south. Objects Objects are essential to evaluating the Feng Shui of a building. Mountain Mountain generates Qi. A lush and green mountain or hill generates auspicious Qi, while a barren, rocky rising area will, in general, generate inauspicious energy. In urban areas, skyscrapers, apartments or any structure that rises from the ground have a similar role to a mountain: generating energy outside. From inside, cupboards, wardrobes, or any furniture that is taller or larger than any others nearby are also considered mountains. Water Water conducts Qi. It is essential to identify the cleanliness of the water, the location and the flow of the water formation. These include ponds, lakes, rivers, drains and fountains. In urban areas, highways and lowlands play a similar role to waterways, conducting Qi. Inside a building or a room, a spinning fan or anything lower than ground level is considered water. Nine-Palace Flying Stars Nine Palace Flying Stars or Jiu Gong Fei Xing is another name for the Flying Stars method, whereby the palaces are the nine sectors overlaid onto a layout of the house. Flying Star Chart A Flying Star chart consists of three numbers in each Palace of the Luo Shu. These numbers are called the Base Star, the Facing Star and the Sitting Star. Constructing a Flying Star chart requires the date on which the building was occupied by the owners and the facing of the building. For example, if a building is constructed in the year 2003, but the residents do not move in until February 4 of 2004, the Period of the building is 8, not 7. The period does not change again unless major renovation is undertaken to the structure. Rules and Procedures Creating a Flying Star chart always begins with the Base Star. The period of the building determines the number that occupies the Base Star position of the Central Palace. Base Stars always fly in the Luo Shu path (a short programmatic sketch of this flight appears at the end of this article). Once all the base stars are distributed amongst the nine palaces, the number in the Facing Palace on the Luo Shu grid is determined by the facing direction of the building. This number is the Facing Star. The Sitting Palace is always opposite the Facing Palace. The Sitting Star is the number in the Sitting Palace. For instance, in a Period-8 building that faces southwest, the number located in the Facing Palace is 5, whereas the number in the Sitting Palace is 2; thus, 5 is the Facing Star and 2 is the Sitting Star. Unlike the Base Star, the Facing Star and Sitting Star can fly in either ascending (Yang) order or descending (Yin) order. The order depends upon two factors: whether the star is an even number or an odd number, and which mountain the unit faces. Even-numbered stars follow a Yin-Yang-Yang form: for a given star number, which comprises three mountains, if the mountain that the property faces is Yang, then the numbers fly in ascending order along the Lo Shu path, and vice versa. Odd-numbered stars follow a Yang-Yin-Yin form: for a given star number, which comprises three mountains, if the mountain that the property faces is Yang, then the numbers fly in ascending order along the Lo Shu path, and vice versa.
To determine the polarity of the number 5 star, use the polarity of the Period number. Properties of Nine Stars Timely and Untimely Flying stars can be timely or untimely. The nature of a flying star depends on which period is being referred to and which star is being activated. Portents and Natures Famous Combinations of Stars Bull fight Result of untimely Flying Star 3 (Wood) overcoming Star 2 (Earth) Relationship: Son harassing mother-in-law, a male violating a woman Activities: Problems (conflict, arguments, combat, lawsuits, disharmony) for the mother Health: a woman is hurt at the belly (while pregnant) or has stomachache Cure: introduce a red carpet or a painting that is red. Red represents fire and will be able to change the effect that wood has on earth (control cycle) into a wood-fire-earth supporting cycle. Death and Disastrous Result of the combination of untimely Flying Stars 2 (Earth) and 5 (Earth) Activities: Accidents, bankruptcy, haunted house, death Health: Serious sickness, cancer of the digestive system Fire hazard Result of the fire combination of untimely Flying Stars 2 (Earth) and 7 (Metal), or of untimely Flying Stars 7 (Metal) and 9 (Fire) Relationship: Lesbian, male with strong female personalities Activities: Fire, explosion Penetrating the heart Result of the combination of untimely Flying Stars 3 (Wood) and 7 (Metal) Relationship: Male and female fight Activities: Cripple, armed robbery, burglary, lawsuit, scams Health: foot disease, liver cancer, arm injury by metal Wisdom Result of the combination of timely Flying Stars 1 (Water) and 4 (Wood), or of timely Flying Stars 3 (Wood) and 9 (Fire), or of timely Flying Stars 1 (Water) and 6 (Metal) Activities: Intelligence, splendid for studies and research Metal in battle Result of the metal combination of untimely Flying Stars 6 and 7 Relationship: combat and competition between brothers Activities: Conflict, armed robbery, death by metal Rich and Authority Result of the combination of timely Flying Stars 6 and 8, or timely Flying Stars 2 and 6 Activities: Success in business, especially real estate or owning land, inheritance, great authority Fame and Celebration Result of the combination of timely Flying Stars 8 (Earth) and 9 (Fire) Activities: Promotion, marriage, birth, fame, championship Chemistry of Flying Stars According to the I Ching, the south direction belongs to fire. However, in a building, the south sector may not be fire. The nature of a palace depends on the combination among the elements of the Base Star, the Sitting Star, the Facing Star and the Heaven Trigram. For example, a house that faces bearing 337.6–352.5 was built in 2001 and was occupied by residents in 2006. Period of the house: Period 8, since 2006 is the year of occupation. Facing: Ren mountain in the North direction, bearing 337.6–352.5. Sitting: Bing mountain in the South direction, opposite the facing. Timely Flying Star A timely flying star is a catalyst to the phase combination of the Sitting Star, Facing Star, Base Star and Heavenly Trigram, indicating whether the current aura is boosted or suppressed. For example, annual Star 1 (Water) has the ability to combat the competition of the metallic stars 6 and 7. Annual and Monthly Flying Stars In the sexagenary cycle, a new Chinese year begins at the start of spring, which usually falls on February 4. The annual flying star that visits the center palace decreases by one each Chinese year; once Star 1 is reached, the annual star loops back to 9 the following year.
Star 1 occupies the center palace in 1999, 2008, 2017 and so on. Annual and monthly stars always follow the Lo Shu path. Daily Flying Star Daily Flying Stars are governed by the following rules. RULE 1: From the onset of the Winter Solstice until the Summer Solstice in the following year, the daily stars progress in ascending order (... 7, 8, 9, 1, 2, 3, ...). The stars are distributed around the nine palaces following the Lo Shu path. On the very first Yang Wood Rat day or Jia-zi day after the Winter Solstice, daily Star 1 presides over the center palace. RULE 2: From the onset of the Summer Solstice until the next Winter Solstice, the daily stars progress in descending order (... 3, 2, 1, 9, 8, 7, ...). The stars are distributed around the nine palaces fleeing the Lo Shu path. On the very first Yang Wood Rat day or Jia-zi day after the Summer Solstice, daily Star 9 presides over the center palace. Bihourly Flying Star Bi-hourly Flying Stars are ruled as follows. RULE 1: From the onset of the Winter Solstice until the Summer Solstice in the following year, the bi-hourly stars are distributed around the nine palaces following the Lo Shu path. The stars progress in ascending order every two hours. On Rat, Rabbit, Horse, and Rooster days, star 1 occupies the center sector at the Rat hour (11 pm of the previous day – 1 am). On Ox, Dragon, Goat, and Dog days, star 4 occupies the center sector at the Rat hour. On Tiger, Snake, Monkey, and Pig days, star 7 occupies the center sector at the Rat hour. RULE 2: From the onset of the Summer Solstice until the next Winter Solstice, the bi-hourly stars are distributed around the nine palaces fleeing the Lo Shu path. The stars progress in descending order every two hours. On Rat, Rabbit, Horse, and Rooster days, star 9 occupies the center sector at the Rat hour. On Ox, Dragon, Goat, and Dog days, star 6 occupies the center sector at the Rat hour. On Tiger, Snake, Monkey, and Pig days, star 3 occupies the center sector at the Rat hour. See also 5 Elements (Wu Xing) Bagua Chinese calendar Feng shui Luopan 9 Star Ki Yin and yang Notes I Ching Environmental design Chinese culture Feng Shui
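To make the "Rules and Procedures" above concrete, here is a minimal sketch of the Base Star flight. The mapping of compass palaces to the classic Lo Shu numbers (4 9 2 / 3 5 7 / 8 1 6, with south at the top, as is conventional) is an assumption of this sketch, and only the ascending (Yang) flight used by Base Stars is implemented; Facing and Sitting Stars, which may fly in descending (Yin) order, are left out.

```c
#include <stdio.h>

/* Palaces in Lo Shu flight order: center -> NW -> W -> NE -> S -> N ->
 * SW -> E -> SE (the movement pattern described under "Numbers"). */
static const char *palace_name[9] = {
    "Center", "NW", "W", "NE", "S", "N", "SW", "E", "SE"
};

/* Classic Lo Shu number of each palace (assumed layout, south at top);
 * note the flight order visits Lo Shu numbers 5, 6, 7, 8, 9, 1, 2, 3, 4. */
static const int loshu_of_palace[9] = { 5, 6, 7, 8, 9, 1, 2, 3, 4 };

int main(void)
{
    int period = 8; /* Period 8 runs 2004-2023, per the article */

    /* Fly the Base Star in ascending (Yang) order: the period number sits
     * in the center, and each later palace on the path gets the next
     * star, wrapping 9 -> 1. */
    for (int step = 0; step < 9; step++) {
        int star = (period - 1 + step) % 9 + 1;
        printf("%-6s palace (Lo Shu %d): base star %d\n",
               palace_name[step], loshu_of_palace[step], star);
    }
    return 0;
}
```

For Period 8 this prints star 8 in the center, 9 in the northwest, 1 in the west, and so on around the path.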
Flying Star Feng Shui
Engineering
2,740
69,563,062
https://en.wikipedia.org/wiki/Korean%20bug
Korean bug is a popular aphrodisiac in China, Korea, and Southeast Asia, eaten either alive or in gelatin form. The aphrodisiac effect has not been clinically tested; it is attributed to cantharidin's inhibition of phosphodiesterase and protein phosphatase activity and its stimulation of adrenergic receptors, which lead to vascular congestion and inflammation. Cantharidin is an unreliable and dangerous aphrodisiac. Its effect is based primarily on stimulation of the urogenital tract and strong pelvic hyperaemia, with consequent erection or possible priapism. The bug is a beetle of the species Palembus dermestoides. Medical studies have shown that it is a vector of the causative agent of hymenolepiasis. References Aphrodisiac foods Chinese cuisine Korean cuisine Insects as food Southeast Asian cuisine
Korean bug
Biology
188
74,831,484
https://en.wikipedia.org/wiki/Praseodymium%20bromate
Praseodymium bromate is an inorganic compound with the chemical formula Pr(BrO3)3. It is soluble in water and can form the dihydrate, tetrahydrate and nonahydrate. The nonahydrate melts in its own crystal water at 56.5 °C and completely loses its crystal water at 130 °C. It can be produced by the reaction of barium bromate and praseodymium sulfate. References Praseodymium(III) compounds Bromates
Praseodymium bromate
Chemistry
108
303,923
https://en.wikipedia.org/wiki/James%20Mercer%20%28mathematician%29
James Mercer FRS (15 January 1883 – 21 February 1932) was a mathematician, born in Bootle, close to Liverpool, England. He was educated at University of Manchester, and then University of Cambridge. He became a Fellow, saw active service at the Battle of Jutland in World War I and, after decades of ill health, died in London. He proved Mercer's theorem, which states that positive-definite kernels can be expressed as a dot product in a high-dimensional space. This theorem is the basis of the kernel trick (applied by Aizerman), which allows linear algorithms to be easily converted into non-linear algorithms. References 1883 births 1932 deaths 19th-century British mathematicians 20th-century British mathematicians Mathematical analysts People from Bootle Alumni of the University of Manchester Senior Wranglers Scientists from Liverpool Fellows of the Royal Society Alumni of the University of Cambridge
James Mercer (mathematician)
Mathematics
178
6,392,115
https://en.wikipedia.org/wiki/Wireless%20intrusion%20prevention%20system
In computing, a wireless intrusion prevention system (WIPS) is a network device that monitors the radio spectrum for the presence of unauthorized access points (intrusion detection) and can automatically take countermeasures (intrusion prevention). Purpose The primary purpose of a WIPS is to prevent unauthorized network access to local area networks and other information assets by wireless devices. These systems are typically implemented as an overlay to an existing wireless LAN infrastructure, although they may be deployed standalone to enforce no-wireless policies within an organization. Some advanced wireless infrastructure has integrated WIPS capabilities. Large organizations with many employees are particularly vulnerable to security breaches caused by rogue access points. If an employee (trusted entity) in a location brings in an easily available wireless router, the entire network can be exposed to anyone within range of the signals. In July 2009, the PCI Security Standards Council published wireless guidelines for PCI DSS recommending the use of WIPS to automate wireless scanning for large organizations. Intrusion detection A wireless intrusion detection system (WIDS) monitors the radio spectrum for the presence of unauthorized, rogue access points and the use of wireless attack tools. The system monitors the radio spectrum used by wireless LANs and immediately alerts a systems administrator whenever a rogue access point is detected. Conventionally this is achieved by comparing the MAC addresses of the participating wireless devices. Rogue devices can spoof the MAC address of an authorized network device as their own. Newer research uses a fingerprinting approach to weed out devices with spoofed MAC addresses. The idea is to compare the unique signatures exhibited by the signals emitted by each wireless device against the known signatures of pre-authorized, known wireless devices. Intrusion prevention In addition to intrusion detection, a WIPS also includes features that prevent threats automatically. For automatic prevention, the WIPS must be able to accurately detect and automatically classify a threat. The following types of threats can be prevented by a good WIPS: Rogue access points – a WIPS should understand the difference between rogue APs and external (neighbor's) APs Mis-configured AP Client mis-association Unauthorized association Man-in-the-middle attack Ad hoc networks MAC spoofing Honeypot / evil twin attack Denial-of-service attack Implementation WIPS configurations consist of three components: Sensors — These devices contain antennas and radios that scan the wireless spectrum for packets and are installed throughout areas to be protected Server — The WIPS server centrally analyzes packets captured by sensors Console — The console provides the primary user interface into the system for administration and reporting A simple intrusion detection system can be a single computer, connected to a wireless signal processing device, and antennas placed throughout the facility. For large organizations, a multi-network controller provides central control of multiple WIPS servers, while for SOHO or SMB customers, all the functionality of WIPS is available in a single box. In a WIPS implementation, users first define the operating wireless policies in the WIPS. The WIPS sensors then analyze the traffic in the air and send this information to the WIPS server.
The WIPS server correlates the information, validates it against the defined policies, and classifies whether it is a threat. The administrator of the WIPS is then notified of the threat, or, if a policy has been set accordingly, the WIPS takes automatic protection measures. WIPS is configured as either a network implementation or a hosted implementation. Network implementation In a network WIPS implementation, the server, sensors and console are all placed inside a private network and are not accessible from the Internet. Sensors communicate with the server over a private network using a private port. Since the server resides on the private network, users can access the console only from within the private network. A network implementation is suitable for organizations where all locations are within the private network. Hosted implementation In a hosted WIPS implementation, sensors are installed inside a private network. However, the server is hosted in a secure data center and is accessible on the Internet. Users can access the WIPS console from anywhere on the Internet. A hosted WIPS implementation is as secure as a network implementation because the data flow is encrypted between sensors and server, as well as between server and console. A hosted WIPS implementation requires very little configuration because the sensors are programmed to automatically look for the server on the Internet over a secure TLS connection. For a large organization with locations that are not part of a private network, a hosted WIPS implementation simplifies deployment significantly because sensors connect to the server over the Internet without requiring any special configuration. Additionally, the console can be accessed securely from anywhere on the Internet. Hosted WIPS implementations are available in an on-demand, subscription-based software as a service model. Hosted implementations may be appropriate for organizations looking to fulfill the minimum scanning requirements of PCI DSS. See also Wardriving Wireless LAN security Typhoid adware References Wireless networking Data security Secure communication
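As a sketch of the conventional MAC-comparison check described under "Intrusion detection", the following fragment flags an observed BSSID that is not on an authorized list. The whitelist and the observed address are invented for illustration, and, as the article notes, a production sensor could not rely on this alone, since MAC addresses can be spoofed.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical whitelist of authorized BSSIDs (AP MAC addresses). */
static const char *authorized_bssids[] = {
    "00:11:22:33:44:55",
    "00:11:22:33:44:56",
};
static const int n_authorized =
    sizeof authorized_bssids / sizeof authorized_bssids[0];

/* Return 1 when the observed BSSID matches an authorized AP. */
static int is_authorized(const char *bssid)
{
    for (int i = 0; i < n_authorized; i++)
        if (strcmp(bssid, authorized_bssids[i]) == 0)
            return 1;
    return 0;
}

int main(void)
{
    /* An address reported by a sensor scan; invented for illustration. */
    const char *observed = "de:ad:be:ef:00:01";

    if (!is_authorized(observed))
        printf("ALERT: possible rogue AP %s\n", observed);
    return 0;
}
```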
Wireless intrusion prevention system
Technology,Engineering
999
11,512,258
https://en.wikipedia.org/wiki/Erysiphe%20heraclei
Erysiphe heraclei is a plant pathogen that causes powdery mildew on several species including dill, carrot and parsley. History It was originally found in 1815, on the leaves of a species of Heracleum in France. It was found in Australia in New South Wales in 2007, then spread to Tasmania and South Australia in 2008. Importance Erysiphe heraclei shares many of the important traits that make powdery mildews plant diseases worth paying attention to. In the case of powdery mildew of carrots, yield loss is a typical result of infection, as is a reduced ability to mechanically pull carrots from the ground during harvest due to leaf damage. The effects of yield loss are felt most with early infections; for carrots, differences in disease expression and severity have been noted across growing operations. In some experimental trials, carrots with no control measures against Erysiphe heraclei experienced yield losses of 20%. Powdery mildew of carrots can also infect other plants, including certain celery, parsley, dill, chervil and parsnip strains. Disease cycle Erysiphe heraclei causes powdery mildew of carrots and closely follows the standard life cycle of powdery mildews. Erysiphe heraclei is considered an obligate biotroph, which means it needs a living host to survive and feeds on living plant tissue. This characteristic is an important part of why the powdery mildew life cycle is what it is. The first stage in the disease cycle starts in the spring, when the overwintering inoculum becomes exposed to ideal conditions. The inoculum overwinters in fungal fruiting bodies called cleistothecia (OSU, 2008). The cleistothecia then release airborne spores called ascospores into the environment, which serve as the primary inoculum during the growing season. The ascospores are then dispersed by wind or water, and germinate on any leaf tissue they can find. The fungus enters the plant by use of a germ tube, giving the spore access to the inside of the plant. Once on the host plant, another type of spore, called conidia, are produced (McGrath, Cornell). The conidia then serve as the "secondary inoculum" for the disease and infect the plant further, or other nearby plants, for the rest of the growing season. This secondary inoculum makes powdery mildew of carrots a polycyclic disease, since it can cause new infections later in the growing season beyond the primary inoculum. The surviving conidia then overwinter and serve as primary inoculum in the spring, starting the cycle all over again. Management Multiple management strategies are used for the control of Erysiphe heraclei. Chemical controls are the most popular method of control and include a variety of fungicides. Common fungicides used by growers include Bravo, which provides contact control of the disease, while other fungicides provide mobile control, such as Quilt, Endura, Tilt, and others (McGrath, 2013). The most important aspect when it comes to applying fungicides is timing. In order for the fungicides to be as effective as possible, they should be applied very early in the season and when conditions for Erysiphe heraclei are ideal (high temperature, high moisture). Another key point when using fungicides is proper rotation of fungicides in order to prevent disease resistance.
Aside from chemical control, mulching can also be used to minimize the drought stress the plant may experience during the growing season; by reducing stress on the plant, it makes the plant less susceptible to disease overall. References Other sources "Erysiphe Heraclei -- Discover Life". Discoverlife.Org, 2018, https://www.discoverlife.org/20/q?search=Erysiphe+heraclei. Accessed 10 Dec 2018. Fungal plant pathogens and diseases Vegetable diseases heraclei Fungi described in 1815 Fungus species
Erysiphe heraclei
Biology
881
1,197,531
https://en.wikipedia.org/wiki/Hamiltonian%20system
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field. These systems can be studied in both Hamiltonian mechanics and dynamical systems theory. Overview Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution equations of a physical system. The advantage of this description is that it gives important insights into the dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of three bodies: while there is no closed-form solution to the general problem, Poincaré showed for the first time that it exhibits deterministic chaos. Formally, a Hamiltonian system is a dynamical system characterised by the scalar function $H(\boldsymbol{q}, \boldsymbol{p}, t)$, also known as the Hamiltonian. The state of the system, $\boldsymbol{r}$, is described by the generalized coordinates $\boldsymbol{p}$ and $\boldsymbol{q}$, corresponding to generalized momentum and position respectively. Both $\boldsymbol{p}$ and $\boldsymbol{q}$ are real-valued vectors with the same dimension N. Thus, the state is completely described by the 2N-dimensional vector $\boldsymbol{r} = (\boldsymbol{q}, \boldsymbol{p})$ and the evolution equations are given by Hamilton's equations: $\frac{d\boldsymbol{p}}{dt} = -\frac{\partial H}{\partial \boldsymbol{q}}, \quad \frac{d\boldsymbol{q}}{dt} = +\frac{\partial H}{\partial \boldsymbol{p}}.$ The trajectory $\boldsymbol{r}(t)$ is the solution of the initial value problem defined by Hamilton's equations and the initial condition $\boldsymbol{r}(0) = \boldsymbol{r}_0 \in \mathbb{R}^{2N}$. Time-independent Hamiltonian systems If the Hamiltonian is not explicitly time-dependent, i.e. if $H(\boldsymbol{q}, \boldsymbol{p}, t) = H(\boldsymbol{q}, \boldsymbol{p})$, then the Hamiltonian does not vary with time at all: $\frac{dH}{dt} = 0$, and thus the Hamiltonian is a constant of motion, whose value equals the total energy of the system: $H = E$. Examples of such systems are the undamped pendulum, the harmonic oscillator, and dynamical billiards. Example An example of a time-independent Hamiltonian system is the harmonic oscillator. Consider the system defined by the coordinates $p = m\dot{q}$ and $q$. Then the Hamiltonian is given by $H = \frac{p^2}{2m} + \frac{m\omega^2 q^2}{2}.$ The Hamiltonian of this system does not depend on time and thus the energy of the system is conserved. Symplectic structure One important property of a Hamiltonian dynamical system is that it has a symplectic structure. Writing $\nabla_{\boldsymbol{r}} H$ for the gradient of the Hamiltonian with respect to the state vector, the evolution equation of the dynamical system can be written as $\frac{d\boldsymbol{r}}{dt} = M_N \nabla_{\boldsymbol{r}} H(\boldsymbol{r}),$ where $M_N = \begin{pmatrix} 0 & I_N \\ -I_N & 0 \end{pmatrix}$ and $I_N$ is the N×N identity matrix. One important consequence of this property is that an infinitesimal phase-space volume is preserved. A corollary of this is Liouville's theorem, which states that on a Hamiltonian system, the phase-space volume of a closed surface is preserved under time evolution: $\frac{d}{dt}\int_V d\boldsymbol{r} = \oint_S \dot{\boldsymbol{r}} \cdot d\boldsymbol{S} = \oint_S \left(M_N \nabla_{\boldsymbol{r}} H\right) \cdot d\boldsymbol{S} = \int_V \nabla \cdot \left(M_N \nabla_{\boldsymbol{r}} H\right) d\boldsymbol{r} = 0,$ where the third equality comes from the divergence theorem, and the final one from the antisymmetry of $M_N$ together with the symmetry of the Hessian of $H$. Hamiltonian chaos Certain Hamiltonian systems exhibit chaotic behavior. When the evolution of a Hamiltonian system is highly sensitive to initial conditions, and the motion appears random and erratic, the system is said to exhibit Hamiltonian chaos. Origins The concept of chaos in Hamiltonian systems has its roots in the works of Henri Poincaré, who in the late 19th century made pioneering contributions to the understanding of the three-body problem in celestial mechanics. Poincaré showed that even a simple gravitational system of three bodies could exhibit complex behavior that could not be predicted over the long term. His work is considered to be one of the earliest explorations of chaotic behavior in physical systems. Characteristics Hamiltonian chaos is characterized by the following features: Sensitivity to Initial Conditions: A hallmark of chaotic systems, small differences in initial conditions can lead to vastly different trajectories.
This is known as the butterfly effect. Mixing: Over time, the phases of the system become uniformly distributed in phase space. Recurrence: Though unpredictable, the system eventually revisits states that are arbitrarily close to its initial state, known as Poincaré recurrence. Hamiltonian chaos is also associated with the presence of chaotic invariants such as the Lyapunov exponent and Kolmogorov-Sinai entropy, which quantify the rate at which nearby trajectories diverge and the complexity of the system, respectively. Applications Hamiltonian chaos is prevalent in many areas of physics, particularly in classical mechanics and statistical mechanics. For instance, in plasma physics, the behavior of charged particles in a magnetic field can exhibit Hamiltonian chaos, which has implications for nuclear fusion and astrophysical plasmas. Moreover, in quantum mechanics, Hamiltonian chaos is studied through quantum chaos, which seeks to understand the quantum analogs of classical chaotic behavior. Hamiltonian chaos also plays a role in astrophysics, where it is used to study the dynamics of star clusters and the stability of galactic structures. Examples Dynamical billiards Planetary systems, more specifically, the n-body problem. Canonical general relativity See also Action-angle coordinates Liouville's theorem Integrable system Symplectic manifold Kolmogorov–Arnold–Moser theorem Poincaré recurrence theorem Lyapunov exponent Three-body problem Ergodic theory References Further reading Almeida, A. M. (1992). Hamiltonian systems: Chaos and quantization. Cambridge monographs on mathematical physics. Cambridge (u.a.: Cambridge Univ. Press) Audin, M., (2008). Hamiltonian systems and their integrability. Providence, R.I: American Mathematical Society, Dickey, L. A. (2003). Soliton equations and Hamiltonian systems. Advanced series in mathematical physics, v. 26. River Edge, NJ: World Scientific. Treschev, D., & Zubelevich, O. (2010). Introduction to the perturbation theory of Hamiltonian systems. Heidelberg: Springer Zaslavsky, G. M. (2007). The physics of chaos in Hamiltonian systems. London: Imperial College Press. External links Hamiltonian mechanics
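Returning to the harmonic-oscillator example above, the following minimal sketch integrates Hamilton's equations numerically with the symplectic (semi-implicit) Euler method, a scheme chosen here because it respects the symplectic structure discussed earlier and therefore keeps the computed energy bounded over long runs; the parameters are illustrative.

```c
/* Sketch: symplectic Euler integration of the harmonic oscillator
 * H = p^2/(2m) + m w^2 q^2 / 2, printing the (nearly conserved) energy. */
#include <stdio.h>

int main(void)
{
    double m = 1.0, w = 1.0; /* mass and angular frequency (illustrative) */
    double q = 1.0, p = 0.0; /* initial condition r0 = (q0, p0) */
    double dt = 0.01;        /* time step */

    for (int i = 0; i <= 5000; i++) {
        if (i % 1000 == 0) {
            double H = p * p / (2.0 * m) + 0.5 * m * w * w * q * q;
            printf("t=%6.2f  q=% .4f  p=% .4f  H=%.6f\n", i * dt, q, p, H);
        }
        p -= dt * m * w * w * q; /* dp/dt = -dH/dq, evaluated at current q */
        q += dt * p / m;         /* dq/dt = +dH/dp, using the updated p    */
    }
    return 0;
}
```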
Hamiltonian system
Physics,Mathematics
1,157
43,025,534
https://en.wikipedia.org/wiki/SILVA%20ribosomal%20RNA%20database
SILVA is a ribosomal RNA database established in collaboration between the Microbial Genomics Group at the Max Planck Institute for Marine Microbiology in Bremen, Germany, the Department of Microbiology at the Technical University Munich, and Ribocon. Release 117 of the database (January 2014) held more than 4,000,000 small subunit (SSU - 16S/18S) and 400,000 large subunit (LSU - 23S/28S) sequences. Sequences are provided as files for the ARB software environment. References External links SILVA rRNA database project Biodiversity databases
SILVA ribosomal RNA database
Biology,Environmental_science
119
58,535,509
https://en.wikipedia.org/wiki/Boron%20porphyrins
Boron porphyrins are a variety of porphyrin, a common macrocycle used for photosensitization and metal trapping applications, that incorporate boron. The central four nitrogen atoms in a porphyrin macrocycle form a unique molecular pocket which is known to accommodate transition metals of various sizes and oxidation states. Due to the diversity of binding modes available to porphyrin, there is a growing interest in introducing other elements (i.e. main group elements) into this pocket. Boron in particular has been shown to prefer binding to porphyrin in a 2:1 stoichiometry, primarily due to its small atomic radius, but the Group XIII element will bind in a 1:1 ratio with corrole, a macrocycle with a structure similar to porphyrin but with a smaller N4 pocket. Boron porphyrins are of interest because of the unique geometric environment to which both boron and porphyrin are subjected upon B-N(pyrrole) bond formation. These new geometric motifs lead to novel reactivity, one of the most surprising examples being sterically-induced reductive coupling. Possible applications for boron porphyrins include BNCT delivery agents and OLED devices. Also of interest are molecules containing both boron and porphyrin moieties, but without B-N(pyrrole) bonds. Examples include diketonate-porphyrin compounds and dyads (two-component molecules) containing the classic BODIPY dye. Synthesis Boron porphyrins first appeared in the literature during the 1960s and 1970s, although in the initially available literature the complexes were never well characterized. Boron porphyrin compounds can be synthesized either from the free base porphyrin or from a lithium porphyrin complex as starting material. Two representative examples are shown here. The first is the porphyrin free base reacted with BX3 in the presence of water. The second is Li2(ttp) reacted with BX3. The (BX2)2(por) can undergo reduction to form a B-B bond and eliminate X2, giving (BX)2(por). From here, the halides can be replaced with BuLi to give (B-Bu)2(por), reacted with alcohols to give (B-OR)2(por), or even undergo halogen abstraction via weakly-coordinating anions to give [(B-B)(por)]2+. Geometry One of the major differences between p-block-element-centered porphyrins and transition-metal-centered porphyrins is the far smaller size of the interstitial atom, especially in the case of the first-row p-block. Other than protons, the next smallest atom known to bind to the central N4 pocket is lithium. The first two isolated lithium porphyrin complexes were each reported with a 2:1 metal-to-base ratio, and XRD suggested both lithium atoms reside out of the porphyrin plane. Boron has a covalent radius of 85 pm, significantly smaller than lithium's 133 pm. This suggests the porphyrin pocket is more likely to accommodate two boron atoms rather than one. Indeed, each boron porphyrin synthesized thus far has adopted a ratio of 2:1, with a range of orientations relative to the N4 plane. The boron atoms can exist in the same plane as the porphyrin (both with and without additional out-of-plane B-X bonds), or out of the N4 plane in either a cisoid or transoid geometry. This coordination motif is interesting because it introduces both boron and porphyrin to geometries they do not regularly adopt. Porphyrin readily binds to transition metals, which are capable of octahedral or square planar geometries. Boron, without available d-orbitals, typically adopts a trigonal planar or tetrahedral local bonding environment.
Diboryl porphyrins, on the other hand, find boron in a pseudo-tetrahedral local environment and introduce a tetragonal distortion to the porphyrin, as DFT calculations show. Corroles are distinct from porphyrins in that they contain one fewer methine bridge between pyrrole units, creating a lower-symmetry compound and a smaller N4 pocket. For boron chemistry, this slightly smaller core allows for the possibility of binding to a single boron, whereas the porphyrin pocket has thus far always bound two. For such monoboryl corroles, DFT studies have suggested the boron preferentially binds at the dipyrromethene (A) site, in which stability is attained by maximizing both BX—HN hydrogen bonding and BH—HN dihydrogen bonding, in addition to minimizing steric crowding. The Brothers group has shown the stereochemical implications of comparing diboryl porphyrin with diboryl corrole: porphyrin prefers the transoid orientation of the diboryl unit, whereas corrole prefers the cisoid orientation. Non-central boron-porphyrin interactions Two examples of boron-containing compounds that have been linked to porphyrin are BODIPY and diketonate. The BODIPY chromophore acts as an antenna: it absorbs a broad range of UV-visible light, then emits at a wavelength compatible with porphyrin absorption, allowing for efficient energy transfer. This work has been extended to triads and to porphyrins with various core transition metals, some displaying multiphoton excitation. On the other hand, when boron difluoride β-diketonate is used as an antenna, the emission-absorption overlap is small and little change in the porphyrin's optical properties is observed. Though this chromophore is preferable to BODIPY in certain applications, it is not an effective antenna for porphyrin. Reactivity Reduction One consequence of geometric strain on both the boron and the porphyrin moieties is unique reactivity. The Brothers group was able to demonstrate that reductive coupling, wherein two BX2 units inside the porphyrin pocket become X-B-B-X, only occurs with X=Br and when the substrates are within the porphyrin pocket. DFT calculations show that for X=Cl or F, the reaction is endothermic and non-spontaneous. However, for X=Br, the reduction is spontaneous, which was consistent with experimental findings. Further, when the same reaction is simulated with two porphyrin halves ((dipyrromethene)BX2), it is non-spontaneous even for X=Br, suggesting the steric strain of the porphyrin ring to be the driving force behind the reduction reaction. Hydrolysis Hydrolysis is one of the primary reactions to occur in diboryl porphyrin complexes. In this reaction, RBOBR(por) reacts with water to replace a B-R bond with a B-OH bond, liberating the R group. Hydrolysis products are important intermediates in the synthesis of the B-O-B(por) compounds from BX2(por) compounds. In fact, simply performing column chromatography on (BF2)2(por) on silica gives the partial hydrolysis product B2OF2(por). DFT computations show that this hydrolysis is energetically favorable (breaking of a relatively weak B-C bond, formation of a strong B-O bond, formation of benzene). However, only one of the two phenyl groups is observed to undergo hydrolysis. This suggests thermodynamic favorability is not the only factor at play. Rather, as Belcher et al. suggest, there is a significant steric component to this reaction. The boron in the porphyrin ring plane undergoes substitution, while the out-of-plane boron retains its phenyl bond.
Halogen abstraction (Also see Geometry section above for a discussion of the B-B bonding environment.) Abstraction of halogens with two equivalents of sodium tetrakis[3,5-bis(trifluoromethyl)phenyl]borate gives the dication with both boron atoms within the porphyrin plane. Two reversible reduction waves occur at reduction potentials lower than that of the free base. References Boron compounds Photochemistry Porphyrins Boron–nitrogen compounds
Boron porphyrins
Chemistry
1,797
35,595,969
https://en.wikipedia.org/wiki/Nucleophilic%20abstraction
Nucleophilic abstraction is a type of organometallic reaction which can be defined as a nucleophilic attack on a ligand which causes part or all of the original ligand to be removed from the metal along with the nucleophile. Alkyl abstraction While nucleophilic abstraction of an alkyl group is relatively uncommon, there are examples of this type of reaction. In order for this reaction to be favorable, the metal must first be oxidized, because reduced metals are often poor leaving groups. The oxidation of the metal causes the M-C bond to weaken, which allows the nucleophilic abstraction to occur. G.M. Whitesides and D.J. Boschetto used the halogens Br2 and I2 as M-C cleaving agents in one example of nucleophilic abstraction. It is important to note that the product of this reaction is inverted at the stereocenter attached to the metal. There are several possible mechanisms for this reaction. In path a, the first step proceeds with the oxidative addition of the halogen to the metal complex. This step results in the oxidized metal center that is needed to weaken the M-C bond. The second step can proceed with either the nucleophilic attack of the halide ion on the α-carbon of the alkyl group or reductive elimination, both of which result in the inversion of stereochemistry. In path b, the metal is first oxidized without the addition of the halide. The second step occurs with a nucleophilic attack on the α-carbon, which again results in the inversion of stereochemistry. Carbonyl abstraction Trimethylamine N-oxide (Me3NO) can be used in the nucleophilic abstraction of carbonyl ligands. Me3NO attacks the carbon of the carbonyl group nucleophilically, pushing electron density onto the metal; the reaction then proceeds with loss of CO2 and NMe3. An article in the Bulletin of the Korean Chemical Society reported that one iridium complex undergoes carbonyl abstraction while a very similar iridium complex undergoes hydride abstraction. Hydrogen abstraction Nucleophilic abstraction can occur on a ligand of a metal if the conditions are right. For instance, H+ can be abstracted nucleophilically from an arene ligand attached to chromium. The electron-withdrawing nature of the chromium makes this a facile reaction. Methyl abstraction A Fischer carbene can undergo nucleophilic abstraction in which a methyl group is removed. A small abstracting agent would normally add to the carbene carbon; in this case, however, the steric bulk of the added abstracting agent causes abstraction of the methyl group instead. If the methyl group is replaced with ethyl, the reaction proceeds 70 times slower, which is to be expected for an SN2 displacement mechanism. Silylium abstraction A silylium ion is a silicon cation with only three bonds and a positive charge. Abstraction of a silylium ion has been observed from a ruthenium complex. In the first step of this mechanism, one of the acetonitrile ligands is replaced by a silane, whose Si-H bond coordinates to the ruthenium. In the second step, a ketone is added and nucleophilically abstracts the silylium ion, leaving the hydride on the metal. α-Acyl abstraction One example of nucleophilic abstraction of an α-acyl group is seen when MeOH is added to a palladium complex.
The mechanism proceeds through a tetrahedral intermediate, which results in the methyl ester and the reduced palladium complex. The following year a similar mechanism was proposed in which oxidative addition of an aryl halide is followed by migratory CO insertion and then by nucleophilic abstraction of the α-acyl group by MeOH. One of the advantages of this intermolecular nucleophilic abstraction is the production of linear acyl derivatives. The intramolecular attack of these linear acyl derivatives gives rise to cyclic compounds such as lactones or lactams. See also Addition to pi ligands References Inorganic chemistry Organometallic chemistry
Nucleophilic abstraction
Chemistry
926
41,465,880
https://en.wikipedia.org/wiki/Hartley%20%28unit%29
The hartley (symbol Hart), also called a ban, or a dit (short for "decimal digit"), is a logarithmic unit that measures information or entropy, based on base 10 logarithms and powers of 10. One hartley is the information content of an event if the probability of that event occurring is 1/10. It is therefore equal to the information contained in one decimal digit (or dit), assuming a priori equiprobability of each possible value. It is named after Ralph Hartley. If base 2 logarithms and powers of 2 are used instead, then the unit of information is the shannon or bit, which is the information content of an event if the probability of that event occurring is 1/2. Natural logarithms and powers of e define the nat. One ban corresponds to ln(10) nat = log2(10) Sh, or approximately 2.303 nat, or 3.322 bit (3.322 Sh). A deciban is one tenth of a ban (or about 0.332 Sh); the name is formed from ban by the SI prefix deci-. Though there is no associated SI unit, information entropy is part of the International System of Quantities, defined by International Standard IEC 80000-13 of the International Electrotechnical Commission. History The term hartley is named after Ralph Hartley, who in 1928 suggested measuring information using a logarithmic base equal to the number of distinguishable states in its representation, which would be base 10 for a decimal digit. The ban and the deciban were invented by Alan Turing with Irving John "Jack" Good in 1940, to measure the amount of information that could be deduced by the codebreakers at Bletchley Park using the Banburismus procedure, towards determining each day's unknown setting of the German naval Enigma cipher machine. The name was inspired by the enormous sheets of card, printed in the town of Banbury about 30 miles away, that were used in the process. Good argued that the sequential summation of decibans to build up a measure of the weight of evidence in favour of a hypothesis is essentially Bayesian inference. Donald A. Gillies, however, argued the ban is, in effect, the same as Karl Popper's measure of the severity of a test. Usage as a unit of odds The deciban is a particularly useful unit for log-odds, notably as a measure of information in Bayes factors, odds ratios (ratio of odds, so log is difference of log-odds), or weights of evidence. 10 decibans corresponds to odds of 10:1; 20 decibans to 100:1 odds, etc. According to Good, a change in a weight of evidence of 1 deciban (i.e., a change in the odds from evens to about 5:4) is about as finely as humans can reasonably be expected to quantify their degree of belief in a hypothesis. Odds corresponding to integer decibans can often be well-approximated by simple integer ratios: for example, 1 deciban corresponds to odds of about 5:4 (10^0.1 ≈ 1.26), 3 decibans to about 2:1 (10^0.3 ≈ 2.00), 5 decibans to about 3:1 (10^0.5 ≈ 3.16), and 7 decibans to about 5:1 (10^0.7 ≈ 5.01). See also bit decibel Notes References Units of information Units of level
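To make the unit relationships concrete, here is a minimal Python sketch (not part of the article; the function names are illustrative):

```python
import math

def hartleys(p: float) -> float:
    """Information content, in hartleys (bans), of an event with probability p."""
    return -math.log10(p)

# Unit conversions: 1 ban = ln(10) nat = log2(10) shannons (bits).
BAN_IN_NAT = math.log(10)   # ~2.303
BAN_IN_BIT = math.log2(10)  # ~3.322

def decibans_to_odds(db: float) -> float:
    """Odds ratio corresponding to a weight of evidence of db decibans."""
    return 10 ** (db / 10)

print(hartleys(0.1))          # 1.0: one decimal digit carries one ban
print(hartleys(0.5) * 10)     # ~3.01: one bit is about 3 decibans
print(decibans_to_odds(10))   # 10.0: 10 decibans = 10:1 odds
print(decibans_to_odds(1))    # ~1.26: about 5:4, Good's limit of discernibility
```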
Hartley (unit)
Physics,Mathematics
686
39,568,490
https://en.wikipedia.org/wiki/Aharoni%20%28typeface%29
Aharoni is a Hebrew language typeface created by Tuvia Aharoni for Ludwig & Mayer as a Hebrew version of Erbar-Grotesk, and later used by the Monotype Corporation and Kivun Computers Ltd, known best for its use in Microsoft Windows. Versions of it have been included in Windows 2000, XP, XP SP2, Server 2003, Server 2008, Server 2012, 7, 8, 10 and 11. References External links Microsoft Typography - Aharoni Hebrew typefaces Typefaces and fonts introduced in 1935
Aharoni (typeface)
Technology
115
46,955,253
https://en.wikipedia.org/wiki/Free-turbine%20turboshaft
A free-turbine turboshaft is a form of turboshaft or turboprop gas turbine engine where the power is extracted from the exhaust stream of a gas turbine by an independent turbine, downstream of the gas turbine. The power turbine is not mechanically connected to the turbines that drive the compressors, hence the term "free", referring to the independence of the power output shaft (or spool). This is in contrast to the power being extracted from the turbine/compressor shaft via a gearbox. The advantage of the free turbine is that the two turbines can operate at different speeds, and that these speeds can vary relative to each other. This is particularly advantageous for varying loads, such as those placed on turboprop engines. Design A free-turbine turboshaft ingests air through an intake. The air passes through a compressor and into a combustor, where fuel is mixed with the compressed air and ignited. The combustion gases are expanded through a compressor-driving turbine, and then through a "free" power turbine before being exhausted to the atmosphere. The compressor and its turbine are connected by a common shaft which, together with the combustor, is known as a gas generator, which is modelled using the Brayton cycle. The (free) power turbine is on a separate shaft. Turboshaft engines are sometimes characterized by the number of spools. This refers to the number of compressor-and-turbine assemblies in the gas generator stage and does not include the free power turbine assembly. As an example, the General Electric T64 is a single-spool design that uses a 14-stage axial compressor; the independent power shaft is coaxial with the gas generator shaft. Risk of overspeed One particular failure scenario, a gearbox failure, showed a free-turbine arrangement to be more at risk than a single-shaft turboprop. It could suffer a turbine overspeed to destruction after losing its connection to the propeller load. (In a single-shaft arrangement with a similar gearbox failure, the turbine would still have most of its load from the compressor.) Such a failure resulted in the 1954 accident of the second prototype Bristol Britannia, G-ALRX, which was forced to land in the Severn Estuary. A failure in the Bristol Proteus propeller reduction gearbox led to an overspeed and release of the power turbine of Nº3 engine. It cut through the oil tank and started a fire that threatened the integrity of the wing spar. The pilot, Bill Pegg, made a forced landing on the estuary mud. The Proteus gears were redesigned and an emergency fuel shut-off device was fitted to prevent a recurrence. Writing in 1994, Gunston found it remarkable that such protection was not common on free-turbine engines. However, certification regulations allow other methods for preventing excessive overspeed, such as disc rubbing and blade interference. Applications Most turboshaft and turboprop engines now use free turbines. This includes those for static power generation, as marine propulsion and particularly for helicopters. Helicopters Helicopters are a major market for turboshaft engines. When turboshaft engines became available in the 1950s, they were rapidly adopted for both new designs and as replacements for piston engines. They offered more power and far better power-to-weight ratios. Piston helicopters of this period had barely adequate performance; the switch to a turbine engine could both reduce several hundred pounds of engine weight, as with the Napier Gazelle of the Westland Wessex, and also allow considerably more payload weight.
For the Westland Whirlwind, this converted the inadequate piston-engined HAS.7 to the de Havilland Gnome turbine-powered HAR.9. As one of the first anti-submarine helicopters, the HAS.7 had been so weight restricted that it could carry either a search sonar or a torpedo, but not both. The free-turbine engine was found to be particularly suitable. It does not need a clutch, as the gas generator may be started while the output shaft remains stationary. For the Wessex, this was used to give a particularly fast take-off from a cold start. By locking the main rotor (and the power turbine) with the rotor brake, the engine could be started and then, with the gas generator at a speed of 10,500 rpm, the brake released, allowing the power turbine to accelerate and bring the rotor from stationary to its operating speed in just 15 seconds, for a time from engine start to take-off of only 30 seconds. A further advantage of the free-turbine design was the ease with which a counter-rotating engine could be designed and manufactured, simply by reversing the power turbine alone. This allowed handed engines to be made in pairs when needed. It also allowed contra-rotating engines, where the gas generator core and power turbine revolved in opposite directions, reducing the overall moment of inertia. For the helicopter engine replacement market, this ability allowed previous engines of either direction to be replaced simply. Some turboshaft engines could also be installed at almost any angle, which allowed them to replace engines in existing helicopter designs no matter how the previous engines had been arranged. In time though, the move towards axial LP compressors, and so smaller-diameter engines, encouraged a move to the now-standard layout of one or two engines set side-by-side, horizontally above the cabin. Aircraft Turboprop aircraft are still powered by a range of free- and non-free turbine engines. Larger engines have mostly retained the non-free design, although many are two-shaft designs where the 'power' turbine drives the propeller and the low-pressure compressor while the high-pressure compressor has its own turbine. The first free-turbine gas turbine engine was the Bristol Theseus turboprop. This was the first Bristol gas turbine and its broad design had been produced by Frank Owner at Tockington Manor. It first ran in July 1945 and in December 1946 was the first turboprop to pass a 100-hour type test. Some large turboprop engines, such as the original Bristol Proteus and the modern TP400, have free turbines. The TP400 is a three-shaft design, with two compressor turbines and a separate power turbine. Where the turbine is at the rear of the engine, a turboprop engine requires a long drive shaft forwards to the propeller reduction gearbox. Such long shafts can be a difficult design problem and must carefully control any shaft vibration. For small turboprop engines, the free-turbine design has come to dominate, and these designs are also mostly reversed overall, with their air inlet and compressor to the rear, feeding forwards to the hot section and power turbine at the front. This places the turbine output close to the propeller gearbox, avoiding the need for a long driveshaft. Such engines are often recognisable externally, as they use external 'elbow' exhausts ahead of the main engine. A particularly common example of this is the PT6 engine, of which over 50,000 have been produced.
Pusher propfans An attractively simple configuration making use of the free turbine is the propfan engine, with a rear-mounted unducted fan in pusher configuration, rather than the more familiar tractor layout. The first such engine was the very early and promising Metropolitan-Vickers F.3 of 1942 with a ducted fan, followed by the unducted and much lighter F.5. Development of these engines stopped abruptly owing to corporate takeovers, rather than technical reasons. Rolls-Royce continued with design studies for such engines into the 1980s, as did GE, but they have yet to appear as commercial engines. The advantage of the pusher propfan with a free power turbine is its simplicity. The prop blades are attached directly to the outside of the rotating turbine disc. No gearboxes or drive shafts are required. The short length of the rotating components also reduces vibration. The static structure of the engine over this length is a large diameter tube within the turbine. In most designs, two contra-rotating rings of turbine and propeller are used. Intermeshed contra-rotating turbines can act as the guide vanes for each other, removing the need for static vanes. Land and sea The M1 Abrams main battle tank is powered by a Honeywell AGT1500 (formerly Textron Lycoming) two-spool gas turbine engine. A commercial derivative has been designed as the TF15 for marine and railroad applications, and a flight-rated version, the PLT27, was also developed but lost a major contract to the GE T700 turboshaft. Turboshaft engines were used to power several gas turbine locomotives, most notably using the Turbomeca Turmo in Turbotrain (France) and Turboliner (United States) service. See also Air turborocket Free-piston engine Motorjet Rocket turbine engine Turbo-compound engine References Gas turbines Aircraft engines
Free-turbine turboshaft
Technology
1,781
27,739,767
https://en.wikipedia.org/wiki/Forces%20on%20sails
Forces on sails result from movement of air that interacts with sails and gives them motive power for sailing craft, including sailing ships, sailboats, windsurfers, ice boats, and sail-powered land vehicles. Similar principles in a rotating frame of reference apply to windmill sails and wind turbine blades, which are also wind-driven. They are differentiated from forces on wings and propeller blades, whose actions are not adjusted to the wind. Kites also power certain sailing craft, but do not employ a mast to support the airfoil and are beyond the scope of this article. Forces on sails depend on wind speed and direction and the speed and direction of the craft. The direction that the craft is traveling with respect to the "true wind" (the wind direction and speed over the surface) is called the point of sail. The speed of the craft at a given point of sail contributes to the "apparent wind"—the wind speed and direction as measured on the moving craft. The apparent wind on the sail creates a total aerodynamic force, which may be resolved into drag—the force component in the direction of the apparent wind—and lift—the force component normal (90°) to the apparent wind. Depending on the alignment of the sail with the apparent wind, lift or drag may be the predominant propulsive component. Total aerodynamic force also resolves into a forward, propulsive, driving force—resisted by the medium through or over which the craft is passing (e.g. through water, air, or over ice, sand)—and a lateral force, resisted by the underwater foils, ice runners, or wheels of the sailing craft. For apparent wind angles aligned with the entry point of the sail, the sail acts as an airfoil and lift is the predominant component of propulsion. For apparent wind angles behind the sail, lift diminishes and drag increases as the predominant component of propulsion. For a given true wind velocity over the surface, a sail can propel a craft to a higher speed on points of sail where the entry point of the sail is aligned with the apparent wind than on points of sail where it is not, owing to a combination of the diminished force from airflow around the sail and the diminished apparent wind resulting from the velocity of the craft. Because of limitations on speed through the water, displacement sailboats generally derive power from sails generating lift on points of sail that include close-hauled through broad reach (approximately 40° to 135° off the wind). Because of low friction over the surface and high speeds over the ice that create high apparent wind speeds for most points of sail, iceboats can derive power from lift further off the wind than displacement boats. Various mathematical models address lift and drag by taking into account the density of air, coefficients of lift and drag that result from the shape and area of the sail, and the speed and direction of the apparent wind, among other factors. This knowledge is applied to the design of sails in such a manner that sailors can adjust sails to the strength and direction of the apparent wind in order to provide motive power to sailing craft. Overview The combination of a sailing craft's speed and direction with respect to the wind, together with wind strength, generates an apparent wind velocity. When the craft is aligned in a direction where the sail can be adjusted to align with its leading edge parallel to the apparent wind, the sail acts as an airfoil to generate lift in a direction perpendicular to the apparent wind.
A component of this lift pushes the craft crosswise to its course, which is resisted by a sailboat's keel, an ice boat's blades or a land-sailing craft's wheels. An important component of lift is directed forward in the direction of travel and propels the craft. Language of velocity and force To understand the forces and velocities discussed here, one must understand what is meant by a "vector" and a "scalar." Velocity (V), denoted as boldface in this article, is an example of a vector, because it implies both direction and speed. The corresponding speed (V ), denoted as italics in this article, is a scalar value. Likewise, a force vector, F, denotes direction and strength, whereas its corresponding scalar (F ) denotes strength alone. Graphically, each vector is represented with an arrow that shows direction and a length that shows speed or strength. Vectors of consistent units (e.g. V in m/s or F in N) may be added and subtracted, graphically, by positioning tips and tails of the arrows representing the input variables and drawing the resulting derived vector. Components of force: lift vs. drag and driving vs. lateral force Lift on a sail (L), acting as an airfoil, occurs in a direction perpendicular to the incident airstream (the apparent wind velocity, VA, for the head sail) and is a result of pressure differences between the windward and leeward surfaces and depends on angle of attack, sail shape, air density, and speed of the apparent wind. Pressure differences result from the normal force per unit area on the sail from the air passing around it. The lift force results from the average pressure on the windward surface of the sail being higher than the average pressure on the leeward side. These pressure differences arise in conjunction with the curved air flow. As air follows a curved path along the windward side of a sail, there is a pressure gradient perpendicular to the flow direction with lower pressure on the outside of the curve and higher pressure on the inside. To generate lift, a sail must present an "angle of attack" (α) between the chord line of the sail and the apparent wind velocity (VA). Angle of attack is a function of both the craft's point of sail and how the sail is adjusted with respect to the apparent wind. As the lift generated by a sail increases, so does lift-induced drag, which together with parasitic drag constitutes total drag (D): as the angle of attack increases with sail trim or change of course, the lift coefficient increases up to the point of aerodynamic stall, and so does the lift-induced drag coefficient. At the onset of stall, lift is abruptly decreased, as is lift-induced drag, but viscous pressure drag, a component of parasitic drag, increases due to the formation of separated flow on the surface of the sail. Sails with the apparent wind behind them (especially going downwind) operate in a stalled condition. Lift and drag are components of the total aerodynamic force on sail (FT). Since the forces on the sail are resisted by forces in the water (for a boat) or on the traveled surface (for an ice boat or land sailing craft), their corresponding forces can also be decomposed from total aerodynamic force into driving force (FR) and lateral force (FLAT). Driving force overcomes resistance to forward motion. Lateral force is met by lateral resistance from a keel, blade or wheel, but also creates a heeling force.
Effect of points of sail on forces Apparent wind (VA) is the air velocity acting upon the leading edge of the most forward sail or as experienced by instrumentation or crew on a moving sailing craft. It is the vector sum of true wind velocity and the apparent wind component resulting from boat velocity (VA = −VB + VT). In nautical terminology, wind speeds are normally expressed in knots and wind angles in degrees. The craft's point of sail affects its velocity (VB) for a given true wind velocity (VT). Conventional sailing craft cannot derive power from the wind in a "no-go" zone that is approximately 40° to 50° away from the true wind, depending on the craft. Likewise, the directly downwind speed of all conventional sailing craft is limited to the true wind speed. Effect of apparent wind on sailing craft at three points of sail Boat velocity (in black) generates an equal and opposite apparent wind component (not shown), which adds to the true wind to become apparent wind. Sailing craft A is close-hauled. Sailing craft B is on a beam reach. Sailing craft C is on a broad reach. A sailboat's speed through the water is limited by the resistance that results from hull drag in the water. Sail boats on foils are much less limited. Ice boats typically have the least resistance to forward motion of any sailing craft. Craft with the higher forward resistance achieve lower forward velocities for a given wind velocity than ice boats, which can travel at speeds several multiples of the true wind speed. Consequently, a sailboat experiences a wider range of apparent wind angles than does an ice boat, whose speed is typically great enough to have the apparent wind coming from a few degrees to one side of its course, necessitating sailing with the sail sheeted in for most points of sail. On conventional sail boats, the sails are set to create lift for those points of sail where it's possible to align the leading edge of the sail with the apparent wind. For a sailboat, point of sail affects lateral force significantly. The higher the boat points to the wind under sail, the stronger the lateral force, which requires resistance from a keel or other underwater foils, including daggerboard, centerboard, skeg and rudder. Lateral force also induces heeling in a sailboat, which requires resistance by weight of ballast from the crew or the boat itself and by the shape of the boat, especially with a catamaran. As the boat points off the wind, lateral force and the forces required to resist it become less important. On ice boats, lateral forces are countered by the lateral resistance of the blades on ice and their distance apart, which generally prevents heeling. Forces on sailing craft Each sailing craft is a system that mobilizes wind force through its sails—supported by spars and rigging—which provide motive power and reactive force from the underbody of a sailboat—including the keel, centerboard, rudder or other underwater foils—or the running gear of an ice boat or land craft, which allows it to be kept on a course. Without the ability to mobilize reactive forces in directions different from the wind direction, a craft would simply be adrift before the wind. Accordingly, motive and heeling forces on sailing craft are either components of or reactions to the total aerodynamic force (FT) on sails, which is a function of apparent wind velocity (VA) and varies with point of sail. The forward driving force (FR) component contributes to boat velocity (VB), which is, itself, a determinant of apparent wind velocity. 
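Returning to the vector sum VA = −VB + VT above, here is a minimal Python sketch (not part of the article; it assumes a flat surface and steady wind) of the apparent wind computed in the boat's frame:

```python
import math

def apparent_wind(true_speed, true_angle_deg, boat_speed):
    """Apparent wind speed and angle from true wind and boat speed (VA = -VB + VT).

    true_angle_deg is the true wind angle off the bow (0 = dead ahead, 90 = abeam).
    Components are taken in the boat's frame: the boat's own motion adds a
    headwind of boat_speed along the course.
    """
    beta = math.radians(true_angle_deg)
    along = true_speed * math.cos(beta) + boat_speed  # airflow along the course
    across = true_speed * math.sin(beta)              # airflow from abeam
    speed = math.hypot(along, across)
    angle = math.degrees(math.atan2(across, along))   # apparent wind angle off the bow
    return speed, angle

# A beam reach: 10 knots of true wind at 90 degrees, boat making 5 knots.
print(apparent_wind(10.0, 90.0, 5.0))  # ~(11.2 kn, 63 deg): apparent wind shifts forward
```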
Absent lateral reactive forces to FT from a keel (in water), a skate runner (on ice) or a wheel (on land), a craft would only be able to move downwind and the sail would not be able to develop lift. At a stable angle of heel (for a sailboat) and a steady speed, aerodynamic and hydrodynamic forces are in balance. Integrated over the sailing craft, the total aerodynamic force (FT) is located at the centre of effort (CE), which is a function of the design and adjustment of the sails on a sailing craft. Similarly, the total hydrodynamic force (Fl) is located at the centre of lateral resistance (CLR), which is a function of the design of the hull and its underwater appendages (keel, rudder, foils, etc.). These two forces act in opposition to one another with Fl a reaction to FT. Whereas ice boats and land-sailing craft resist lateral forces with their wide stance and high-friction contact with the surface, sailboats travel through water, which provides limited resistance to side forces. In a sailboat, side forces are resisted in two ways: Leeway: Leeway is the rate of travel perpendicular to the course. It is constant when the lateral force on the sail (FLAT) equals the lateral force on the boat's keel and other underwater appendages (PLAT). This causes the boat to travel through the water on a course that is different from the direction in which the boat is pointed by the angle (λ ), which is called the "leeway angle." Heeling: The heeling angle (θ) is constant when the torque between the centre of effort (CE) on the sail and the centre of resistance on the hull (CR) over moment arm (h) equals the torque between the boat's centre of buoyancy (CB) and its centre of gravity (CG) over moment arm (b), described as heeling moment. All sailing craft reach a constant forward speed (VB) for a given wind speed (VT) and point of sail, when the forward driving force (FR) equals the forward resisting force (Rl). For an ice boat, the dominant forward resisting force is aerodynamic, since the coefficient of friction on smooth ice is as low as 0.02. Accordingly, high-performance ice boats are streamlined to minimize aerodynamic drag. Aerodynamic forces in balance with hydrodynamic forces on a close-hauled sailboat Force components on sails The approximate locus of net aerodynamic force on a craft with a single sail is the centre of effort (CE ) at the geometric centre of the sail. Filled with wind, the sail has a roughly spherical polygon shape and if the shape is stable, then the location of centre of effort is stable. On sailing craft with multiple sails, the position of centre of effort varies with the sail plan. Sail trim or airfoil profile, boat trim and point of sail also affect CE. On a given sail, the net aerodynamic force on the sail is located approximately at the maximum draught intersecting the camber of the sail and passing through a plane intersecting the centre of effort, normal to the leading edge (luff), roughly perpendicular to the chord of the sail (a straight line between the leading edge (luff) and the trailing edge (leech)). Net aerodynamic force with respect to the air stream is usually considered in reference to the direction of the apparent wind (VA) over the surface plane (ocean, land or ice) and is decomposed into lift (L), perpendicular with VA, and drag (D), in line with VA. 
For windsurfers, the lift component vertical to the surface plane is important, because in strong winds windsurfer sails are leaned into the wind to create a vertical lifting component (FVERT) that reduces drag on the board (hull) through the water. Note that FVERT acts downwards for boats heeling away from the wind, but is negligible under normal conditions. The three-dimensional vector relationship for net aerodynamic force with respect to apparent wind (VA) is: FT = L + D + FVERT. Likewise, net aerodynamic force may be decomposed into the three translational directions with respect to a boat's course over the surface: surge (forward/astern), sway (starboard/port—relevant to leeway), and heave (up/down). The scalar values and direction of these components can be dynamic, depending on wind and waves (for a boat). In this case, FT is considered in reference to the direction of the boat's course and is decomposed into driving force (FR), in line with the boat's course, and lateral force (FLAT), perpendicular to the boat's course. Again for windsurfers, the lift component vertical to the surface plane (FVERT) is important. The three-dimensional vector relationship for net aerodynamic force with respect to the course over the surface is: FT = FR + FLAT + FVERT. The values of driving force (FR) and lateral force (FLAT) at apparent wind angle (α), assuming no heeling, relate to the values of lift (L) and drag (D) as follows: FR = L sin(α) − D cos(α) and FLAT = L cos(α) + D sin(α). Reactive forces on sailing craft Reactive forces on sailing craft include forward resistance—a sailboat's hydrodynamic resistance (Rl), an ice boat's sliding resistance or a land sailing craft's rolling resistance in the direction of travel—which are to be minimized in order to increase speed, and lateral force, perpendicular to the direction of travel, which is to be made sufficiently strong to minimize sideways motion and to guide the craft on course. Forward resistance comprises the types of drag that impede a sailboat's speed through water (or an ice boat's speed over the surface), including components of parasitic drag, consisting primarily of form drag, which arises because of the shape of the hull, and skin friction, which arises from the friction of the water (for boats) or air (for ice boats and land sailing craft) against the "skin" of the hull that is moving through it. Displacement vessels are also subject to wave resistance from the energy that goes into displacing water into waves, which is limited by hull speed, a function of waterline length. Wheeled vehicles' forward speed is subject to rolling friction and ice boats are subject to kinetic or sliding friction. Parasitic drag in water or air increases with the square of speed (VB² or VA², respectively); rolling friction increases linearly with velocity; whereas kinetic friction is normally a constant, but on ice may become reduced with speed as it transitions to lubricated friction with melting. Ways to reduce wave-making resistance used on sailing vessels include reduced displacement—through planing or (as with a windsurfer) offsetting vessel weight with a lifting sail—and fine entry, as with catamarans, where a narrow hull minimizes the water displaced into a bow wave. Sailing hydrofoils also substantially reduce forward friction with an underwater foil that lifts the vessel free of the water. Sailing craft with low forward resistance and high lateral resistance.
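A minimal Python sketch of the driving/lateral decomposition given above (not part of the article; the force values are illustrative):

```python
import math

def driving_and_lateral(lift, drag, alpha_deg):
    """Decompose sail lift and drag into driving and lateral force components.

    Implements FR = L*sin(alpha) - D*cos(alpha) and
    FLAT = L*cos(alpha) + D*sin(alpha), with alpha the apparent wind angle.
    """
    a = math.radians(alpha_deg)
    f_r = lift * math.sin(a) - drag * math.cos(a)
    f_lat = lift * math.cos(a) + drag * math.sin(a)
    return f_r, f_lat

# Close-hauled at alpha = 30 degrees with L = 1000 N and D = 200 N (illustrative):
print(driving_and_lateral(1000.0, 200.0, 30.0))  # ~(327 N, 966 N): lateral force dominates upwind
```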
Sailing craft with low forward resistance can achieve high velocities with respect to the wind velocity: High-performance catamarans, including the Extreme 40 catamaran and International C-class catamaran, can sail at speeds up to twice the speed of the wind. Sailing hydrofoils achieve boat speeds up to twice the speed of the wind, as did the AC72 catamarans used for the 2013 America's Cup. Ice boats can sail up to five times the speed of the wind. Lateral force is a reaction supplied by the underwater shape of a sailboat, the blades of an ice boat and the wheels of a land sailing craft. Sailboats rely on keels, centerboards, and other underwater foils, including rudders, that provide lift in the lateral direction, to provide hydrodynamic lateral force (PLAT) to offset the lateral force component acting on the sail (FLAT) and minimize leeway. Such foils provide hydrodynamic lift and, for keels, ballast to offset heeling. They incorporate a wide variety of design considerations. Rotational forces on sailing craft The forces on sails that contribute to torque and cause rotation with respect to the boat's longitudinal (fore and aft), horizontal (abeam) and vertical (aloft) rotational axes result in: roll (e.g. heeling), pitch (e.g. pitch-poling), and yaw (e.g. broaching). Heeling, which results from the lateral force component (FLAT), is the most significant rotational effect of total aerodynamic force (FT). In stasis, the heeling moment from the wind and the righting moment from the boat's heel force (FH) and its opposing hydrodynamic lift force on the hull (Fl), separated by a distance (h = "heeling arm"), versus its hydrostatic displacement weight (W) and its opposing buoyancy force (Δ), separated by a distance (b = "righting arm"), are in balance: FH × h = Δ × b (heeling arm × heeling force = righting arm × buoyancy force), where the heeling force equals the hydrodynamic lift force on the hull and the buoyancy force equals the displacement weight. Sails come in a wide variety of configurations that are designed to match the capabilities of the sailing craft to be powered by them. They are designed to stay within the limitations of a craft's stability and power requirements, which are functions of hull (for boats) or chassis (for land craft) design. Sails derive power from wind that varies in time and with height above the surface. In order to do so, they are designed to adjust to the wind force for various points of sail. Both their design and method for control include means to match their lift and drag capabilities to the available apparent wind, by changing surface area, angle of attack, and curvature. Wind variation with elevation Wind speed increases with height above the surface; at the same time, wind speed may vary over short periods of time as gusts. These considerations may be described empirically. Measurements show that wind speed (V(h)) varies according to a power law with height (h) above a non-zero measurement height datum (h0—e.g. at the height of the foot of a sail), using a reference wind speed measured at the datum height (V(h0)), as follows: V(h) = V(h0) × (h/h0)^p, where the power law exponent (p) has values that have been empirically determined to range from 0.11 over the ocean to 0.31 over the land. This means that a V(3 m) = 5 m/s (≈10-knot) wind at 3 m above the water would be approximately V(15 m) = 6 m/s (≈12 knots) at 15 m above the water. In hurricane-force winds with V(3 m) = 40 m/s (≈78 knots), the speed at 15 m would be V(15 m) = 49 m/s (≈95 knots) with p = 0.128.
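A minimal Python sketch of this power-law profile (not part of the article), reproducing the two worked examples:

```python
def wind_at_height(v_ref, h_ref, h, p=0.11):
    """Power-law wind profile: V(h) = V(h_ref) * (h / h_ref)**p.

    p ~ 0.11 over the ocean, up to ~0.31 over land (empirical range).
    """
    return v_ref * (h / h_ref) ** p

# The article's examples: 5 m/s at 3 m -> ~6 m/s at 15 m above the water,
print(wind_at_height(5.0, 3.0, 15.0, p=0.11))    # ~5.97 m/s
# and, with p = 0.128, hurricane-force 40 m/s at 3 m -> ~49 m/s at 15 m.
print(wind_at_height(40.0, 3.0, 15.0, p=0.128))  # ~49.2 m/s
```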
This suggests that sails that reach higher above the surface can be subject to stronger wind forces that move the centre of effort (CE) higher above the surface and increase the heeling moment. Additionally, apparent wind direction moves aft with height above water, which may necessitate a corresponding twist in the shape of the sail to achieve attached flow with height. Wind variation with time Hsu gives a simple formula for a gust factor (G) for winds as a function of the exponent (p), above, where G is the ratio of the wind gust speed to the baseline wind speed at a given height; for Hsu's recommended value of p = 0.126, the formula yields G = 1.5 (a 10-knot wind might gust up to 15 knots). This, combined with changes in wind direction, suggests the degree to which a sailing craft must adjust to wind gusts on a given course. Forces on sails A sailing craft's motive system comprises one or more sails, supported by spars and rigging, that derive power from the wind and induce reactive force from the underbody of a sailboat or the running gear of an ice boat or land craft. Depending on the angle of attack of a set of sails with respect to the apparent wind, each sail provides motive force to the sailing craft either from lift-dominant attached flow or from drag-dominant separated flow. Additionally, sails may interact with one another to create forces that are different from the sum of the individual contributions of each sail used alone. Lift predominant (attached flow) Sails allow progress of a sailing craft to windward, thanks to their ability to generate lift (and the craft's ability to resist the lateral forces that result). Each sail configuration has a characteristic coefficient of lift and attendant coefficient of drag, which can be determined experimentally and calculated theoretically. Sailing craft orient their sails with a favorable angle of attack between the entry point of the sail and the apparent wind as their course changes. The ability to generate lift is limited by sailing too close to the wind, when no effective angle of attack is available to generate lift (luffing), and sailing sufficiently off the wind that the sail cannot be oriented at a favorable angle of attack (running downwind); instead, past a critical angle of attack, the sail stalls and promotes flow separation. Effect of angle of attack on coefficients of lift and drag Each type of sail, acting as an airfoil, has characteristic coefficients of lift (CL) and lift-induced drag (CD) at a given angle of attack, which follow the same basic form of F = ½ ρ VA² A C, where force (F) equals lift (L) for forces measured perpendicular to the airstream to determine C = CL, or force (F) equals drag (D) for forces measured in line with the airstream to determine C = CD, on a sail of area (A) and a given aspect ratio (length to average chord width), with ρ the density of air. These coefficients vary with angle of attack (αj for a headsail) with respect to the incident wind (VA for a headsail). This formulation allows determination of CL and CD experimentally for a given sail shape by varying angle of attack at an experimental wind velocity and measuring force on the sail in the direction of the incident wind (D—drag) and perpendicular to it (L—lift).
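As a sketch of that experimental procedure (not part of the article; the measurement values are illustrative), a coefficient can be recovered from a measured force as follows:

```python
RHO_AIR = 1.225  # kg/m^3, air density at sea level

def force_coefficient(force_n, wind_speed, area):
    """Recover a lift or drag coefficient from a measured force.

    Inverts F = 0.5 * rho * V**2 * A * C: measure the force perpendicular to
    the airstream for CL, or in line with the airstream for CD.
    """
    return force_n / (0.5 * RHO_AIR * wind_speed**2 * area)

# Illustrative wind-tunnel numbers: 150 N of lift on a 2 m^2 model sail at 10 m/s.
print(force_coefficient(150.0, 10.0, 2.0))  # CL ~ 1.22
```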
As the angle of attack grows larger, the lift reaches a maximum at some angle; increasing the angle of attack beyond this critical angle of attack causes the upper-surface flow to separate from the convex surface of the sail; there is less deflection of air to windward, so the sail as an airfoil generates less lift. The sail is said to be stalled. At the same time, induced drag increases with angle of attack (for the headsail: αj). Determination of coefficients of lift (CL) and drag (CD) for angle of attack and aspect ratio Fossati presents polar diagrams that relate coefficients of lift and drag for different angles of attack based on the work of Gustave Eiffel, who pioneered wind tunnel experiments on airfoils, which he published in 1910. Among them were studies of cambered plates. The results are for plates of varying camber and aspect ratio. They show that, as aspect ratio decreases, maximum lift shifts further towards increased drag (rightwards in the diagram). They also show that, for lower angles of attack, a higher aspect ratio generates more lift and less drag than for lower aspect ratios. Effect of coefficients of lift and drag on forces If the lift and drag coefficients (CL and CD) for a sail at a specified angle of attack are known, then the lift (L) and drag (D) forces produced can be determined using the following equations, which vary as the square of apparent wind speed (VA): L = ½ ρ VA² A CL and D = ½ ρ VA² A CD. Garrett demonstrates how those diagrams translate into lift and drag, for a given sail, on different points of sail, in diagrams similar to these: Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for upwind points of sail In these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the purpose of illustration. In reality, for a constant true wind, apparent wind would vary with point of sail. Constant VA in these examples means that either VT or VB varies with point of sail; this allows the same polar diagram to be used for comparison with the same conversion of coefficients into units of force (in this case newtons). In the examples for close-hauled and reach (left and right), the sail's angle of attack (α) is essentially constant, although the boom angle over the boat changes with point of sail to trim the sail close to the highest lift force on the polar curve. In these cases, lift and drag are the same, but the decomposition of total aerodynamic force (FT) into forward driving force (FR) and lateral force (FLAT) varies with point of sail. Forward driving force (FR) increases as the direction of travel is more aligned with the wind, and lateral force (FLAT) decreases. In reference to the above diagrams relating lift and drag, Garrett explains that for a maximum speed made good to windward, the sail must be trimmed to an angle of attack that is greater than that of the maximum lift/drag ratio (more lift), while the hull is operated at a point lower than its maximum lift/drag ratio (more drag). Drag predominant (separated flow) When sailing craft are on a course where the angle of attack between the sail and the apparent wind (α) exceeds the point of maximum lift on the CL–CD polar diagram, separation of flow occurs. The separation becomes more pronounced until at α = 90° lift becomes small and drag predominates.
In addition to the sails used upwind, spinnakers provide area and curvature appropriate for sailing with separated flow on downwind points of sail. Polar diagrams, showing lift (L), drag (D), total aerodynamic force (FT), forward driving force (FR), and lateral force (FLAT) for downwind points of sail Again, in these diagrams the direction of travel changes with respect to the apparent wind (VA), which is constant for the sake of illustration, but would in reality vary with point of sail for a constant true wind. In the left-hand diagram (broad reach), the boat is on a point of sail where the sail can no longer be aligned into the apparent wind to create an optimum angle of attack. Instead, the sail is in a stalled condition, creating about 80% of the lift of the upwind examples, while drag has doubled. Total aerodynamic force (FT) has moved away from the maximum lift value. In the right-hand diagram (running before the wind), lift is one-fifth of the upwind cases (for the same strength apparent wind) and drag has almost quadrupled. Downwind sailing with a spinnaker A velocity prediction program can translate sail performance and hull characteristics into a polar diagram, depicting boat speed for various wind speeds at each point of sail. Displacement sailboats exhibit a change in which course gives the best velocity made good (VMG), depending on wind speed. For the example given, the sailboat achieves best downwind VMG for wind speeds of 10 knots and less at a course about 150° off the wind. For higher wind speeds the optimum downwind VMG occurs at more than 170° off the wind. This "downwind cliff" (abrupt change in optimum downwind course) results from the change of balance in drag forces on the hull with speed. Sail interactions Sailboats often have a jib that overlaps the mainsail—called a genoa. Arvel Gentry demonstrated in his series of articles published in "Best of Sail Trim" in 1977 (and later republished in summary in 1981) that the genoa and the mainsail interact in a symbiotic manner, owing to the circulation of air between them slowing down in the gap between the two sails (contrary to traditional explanations), which prevents separation of flow along the mainsail. The presence of a jib causes the stagnation line on the mainsail to move forward, which reduces the suction velocities on the main and reduces the potential for boundary layer separation and stalling. This allows higher angles of attack. Likewise, the presence of the mainsail causes the stagnation line on the jib to be shifted aft and allows the boat to point closer to the wind, owing to higher leeward velocities of the air over both sails. The two sails cause an overall larger displacement of air perpendicular to the direction of flow when compared to one sail. They act to form a larger wing, or airfoil, around which the wind must pass. The total length around the outside has also increased and the difference in air speed between windward and leeward sides of the two sails is greater, resulting in more lift. The jib experiences a greater increase in lift with the two-sail combination. Sail performance design variables Sails characteristically have a coefficient of lift (CL) and coefficient of drag (CD) for each apparent wind angle. The planform, curvature and area of a given sail are dominant determinants of each coefficient. Sail terminology Sails are classified as "triangular sails", "quadrilateral fore-and-aft sails" (gaff-rigged, etc.), and "square sails".
The top of a triangular sail, the head, is raised by a halyard. The forward lower corner of the sail, the tack, is shackled to a fixed point on the boat in a manner to allow pivoting about that point—either on a mast, e.g. for a mainsail, or on the deck, e.g. for a jib or staysail. The trailing lower corner, the clew, is positioned with an outhaul on a boom or directly with a sheet, absent a boom. Symmetrical sails have two clews, which may be adjusted forward or back. The windward edge of a sail is called the luff, the trailing edge the leech, and the bottom edge the foot. On symmetrical sails, either vertical edge may be presented to windward and, therefore, there are two leeches. On sails attached to a mast and boom, these edges may be curved, when laid on a flat surface, to promote both horizontal and vertical curvature in the cross-section of the sail, once attached. The use of battens allows a sail to have an arc of material on the leech, beyond a line drawn from the head to the clew, called the roach. Lift variables As with aircraft wings, the two dominant factors affecting sail efficiency are its planform—primarily sail width versus sail height, expressed as an aspect ratio—and cross-sectional curvature or draft. Aspect ratio In aerodynamics, the aspect ratio of a sail is the ratio of its length to its breadth (chord). A high aspect ratio indicates a long, narrow sail, whereas a low aspect ratio indicates a short, wide sail. For most sails, the length of the chord is not a constant but varies along the wing, so the aspect ratio AR is defined as the square of the sail height b divided by the area A of the sail planform: AR = b²/A. Aspect ratio and planform can be used to predict the aerodynamic performance of a sail. For a given sail area, the aspect ratio, which is proportional to the square of the sail height, is of particular significance in determining lift-induced drag, and is used to calculate the induced drag coefficient of a sail: CDi = CL² / (π e AR), where e is the Oswald efficiency number that accounts for the variable sail shapes (see the sketch below). This formula demonstrates that a sail's induced drag coefficient decreases with increased aspect ratio. Sail curvature The horizontal curvature of a sail is termed "draft" and corresponds to the camber of an airfoil. Increasing the draft generally increases the sail's lift force. The Royal Yachting Association categorizes draft by depth and by the placement of the maximum depth as a percentage of the distance from the luff to the leech. Sail draft is adjusted for wind speed to achieve a flatter sail (less draft) in stronger winds and a fuller sail (more draft) in lighter winds. Staysails and sails attached to a mast (e.g. a mainsail) have different but similar controls to achieve draft depth and position. On a staysail, tightening the luff with the halyard helps flatten the sail and adjusts the position of maximum draft. On a mainsail, curving the mast to fit the curvature of the luff helps flatten the sail. Depending on wind strength, Dellenbaugh offers the following advice on setting the draft of a sailboat mainsail: For light air (less than 8 knots), the sail is at its fullest with the depth of draft between 13-16% of the chord and maximum fullness 50% aft from the luff. For medium air (8-15 knots), the mainsail has minimal twist with a depth of draft set between 11-13% of the chord and maximum fullness 45% aft from the luff.
For heavy air (greater than 15 knots), the sail is flattened and allowed to twist in a manner that dumps lift, with a depth of draft set between 9-12% of the chord and maximum fullness 45% aft of the luff. Plots by Larsson et al. show that draft is a much more significant factor affecting sail propulsive force than the position of maximum draft. Coefficients of propulsive forces and heeling forces as a function of draft (camber) depth or position. The primary tool for adjusting mainsail shape is mast bend: a straight mast increases draft and lift; a curved mast decreases draft and lift. The backstay tensioner is a primary tool for bending the mast. Secondary tools for sail shape adjustment are the mainsheet, traveler, outhaul, and Cunningham. Drag variables Spinnakers have traditionally been optimized to mobilize drag as a more important propulsive component than lift. As sailing craft are able to achieve higher speeds, whether on water, ice or land, the velocity made good (VMG) at a given course off the wind occurs at apparent wind angles that are increasingly further forward with speed. This suggests that the optimum VMG for a given course may be in a regime where a spinnaker may be providing significant lift. Traditional displacement sailboats may at times have optimum VMG courses close to downwind; for these the dominant force on sails is from drag. According to Kimball, CD ≈ 4/3 for most sails with the apparent wind angle astern, so the drag force on a downwind sail becomes substantially a function of area and wind speed, approximated as D = ½ ρ VA² A CD ≈ ⅔ ρ VA² A (see the sketch below). Measurement and computation tools Sail design relies on empirical measurements of pressures and their resulting forces on sails, which validate modern analysis tools, including computational fluid dynamics. Measurement of pressure on the sail Modern sail design and manufacture employs wind tunnel studies, full-scale experiments, and computer models as a basis for efficiently harnessing forces on sails. Instruments for measuring air pressure effects in wind tunnel studies of sails include pitot tubes, which measure air speed, and manometers, which measure static pressures and atmospheric pressure (static pressure in undisturbed flow). Researchers plot pressure across the windward and leeward sides of test sails along the chord and calculate pressure coefficients (static pressure difference over wind-induced dynamic pressure). Research results describe airflow around the sail and in the boundary layer. Wilkinson, modelling the boundary layer in two dimensions, described nine regions around the sail: upper mast attached airflow, upper separation bubble, upper reattachment region, upper aerofoil attached flow region, trailing edge separation region, lower mast attached flow region, lower separation bubble, lower reattachment region, and lower aerofoil attached flow region. Analysis Sail design differs from wing design in several respects, especially since on a sail air flow varies with wind and boat motion, and sails are usually deformable airfoils, sometimes with a mast for a leading edge. Simplifying assumptions are often employed when making design calculations, including a flat travel surface (water, ice or land), constant wind velocity and unchanging sail adjustment. The analysis of the forces on sails takes into account the aerodynamic surface force, its centre of effort on a sail, its direction, and its variable distribution over the sail.
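As promised above, a minimal Python sketch collecting the aspect-ratio, induced-drag, and downwind-drag formulas (not part of the article; the sail dimensions and Oswald efficiency number are assumptions):

```python
import math

RHO_AIR = 1.225  # kg/m^3, air density at sea level

def aspect_ratio(height_m, area_m2):
    """Planform aspect ratio: AR = b**2 / A."""
    return height_m**2 / area_m2

def induced_drag_coefficient(cl, ar, oswald_e=0.9):
    """Lift-induced drag coefficient: CDi = CL**2 / (pi * e * AR).

    oswald_e is the Oswald efficiency number; 0.9 is an assumed placeholder.
    """
    return cl**2 / (math.pi * oswald_e * ar)

def downwind_drag(area_m2, va_ms, cd=4.0 / 3.0):
    """Kimball's downwind approximation: D = 0.5 * rho * VA**2 * A * CD, CD ~ 4/3."""
    return 0.5 * RHO_AIR * va_ms**2 * area_m2 * cd

ar = aspect_ratio(12.0, 30.0)             # a hypothetical 12 m luff, 30 m^2 sail: AR = 4.8
print(induced_drag_coefficient(1.0, ar))  # ~0.074 at CL = 1.0; doubling AR halves it
print(downwind_drag(80.0, 6.0))           # ~2350 N on an 80 m^2 spinnaker in 6 m/s apparent wind
```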
Analysis Sail design differs from wing design in several respects, especially since on a sail air flow varies with wind and boat motion and sails are usually deformable airfoils, sometimes with a mast for a leading edge. Often simplifying assumptions are employed when making design calculations, including: a flat travel surface—water, ice or land; constant wind velocity; and unchanging sail adjustment. The analysis of the forces on sails takes into account the aerodynamic surface force, its centre of effort on a sail, its direction, and its variable distribution over the sail. Modern analysis employs fluid mechanics and aerodynamics airflow calculations for sail design and manufacture, using aeroelasticity models, which combine computational fluid dynamics and structural analysis. Effects pertaining to turbulence and separation of the boundary layer are secondary factors. Computational limitations persist, so theoretical results require empirical confirmation with wind tunnel tests on scale models and full-scale testing of sails. Velocity prediction programs combine elements of hydrodynamic forces (mainly drag) and aerodynamic forces (lift and drag) to predict sailboat performance at various wind speeds for all points of sail. See also Sail Sailing Sailcloth Point of sail Polar diagram (sailing) Sail-plan Rigging Wing Sail twist High-performance sailing Stays (nautical) Sheet (sailing) References Aerodynamics Naval architecture Sailing Marine propulsion
Forces on sails
Chemistry,Engineering
8,247
12,347,828
https://en.wikipedia.org/wiki/FlyLady
FlyLady is a support and self-help group that offers advice to help people with housekeeping, founded by "The FlyLady", Marla Cilley. The group is based upon the website FlyLady.net, as well as a Constant Contact group for its email mailing list. Members of FlyLady have stated that the group has helped them and has changed their lives. FlyLady's messages cover topics including clutter, the value of routines, weekly and monthly cleaning, increased self-esteem, and letting go of perfectionism. As of 2016, she had over 300,000 subscribers on her email list, and 550,000 followers on Facebook. In 2020, FlyLady announced an additional presence on Parler. In 2022 FlyLady also began to diversify her platform by publishing on Truth Social and Bitchute. A store on her website sells organizational tools and housewares, sent from the FlyLady Distribution Center in Brevard, NC. In 2007, sales from the store reached US$4 million. In November 2015 Alex Elsea, Marla's nephew, launched FlyLady Premium, a paid virtual mentoring service which adds extra support for followers of the FlyLady methodology in small private online groups. FlyLady Premium released an app, FlyLadyPlus, in July 2016. Later in 2016, FlyLady herself released a subscription-based iOS reminder app, FlyLady Messenger. History Marla Cilley, founder of FlyLady, is from North Carolina. In 1999, Cilley joined a web forum called SHE's Online, based on the housekeeping system created by Pam Young and Peggy Jones ("The Slob Sisters"), detailed in their book Sidetracked Home Executives: From Pigpen to Paradise (1977). The book covers many of the key topics that were adapted to become the FlyLady system: daily task lists, routines, "slipshod cleaning", and a systematic view of housekeeping. Marla Cilley refers to Pam and Peggy as her mentors and inspiration. She licensed the "Sidetracked Home Executives" system, and the FlyLady system is based upon it. Marla first created an email group (first on e-Groups, then Yahoo! Groups), then published her website, FlyLady.net, in February 2001. The name FlyLady was Marla Cilley's screen name, as she was a fly-fishing fan and instructor. One of the members of the FlyLady e-mail list later created a "backronym" for FLY: Finally Loving Yourself. Methodology FlyLady's methodology is outlined in Cilley's book Sink Reflections (Bantam Books) and on the web site. The system encourages "baby steps" to develop routines and habits to organize and maintain your home. The primary focus is on "Finally Loving Yourself" by making your life easier through decluttering, menu planning, an "anti-procrastination" day, and establishing routines, as well as financial and health-related self-care. In 2007, Marla Cilley and co-author Leanne Ely also released a New York Times bestselling book called Body Clutter: Love Your Body, Love Yourself, which aims to apply the FlyLady's housekeeping methodology to caring for the reader's body, through self-examination and increased self-respect. Key points in the FlyLady system include: Babysteps and Routines New recruits to the FlyLady system are called "Flybabies" and are introduced to "babysteps" - a series of 31 small daily tasks which introduce and then reinforce aspects of cleaning and decluttering, building up to creating personalized routines for morning, afternoon and evening. Once these routines are established, the "Flybaby" has "graduated" and will no longer need the scaffolding of the emails. Shine Your Sink Cilley's first instruction to new members is "Go shine your sink!"
She asserts that even in a messy kitchen, the cleaned-out and polished sink provides positive reinforcement to the person who cleaned it, encouraging further cleaning in the rest of the room and home. 15 Minutes at a Time Cilley recommends using a timer to work for only 15 minutes at a time. The short time commitment helps stop procrastination, and reduces opportunities to get sidetracked or bored. Clutter Cannot Be Organized Cilley recommends that her followers get rid of excess items in their homes, and bring in fewer items, rather than attempting to organize them. This reduction, "decluttering", is done 15 minutes at a time. One such exercise is FlyLady's "27-fling Boogie," in which the follower quickly selects 27 items in their home to discard and 27 items to give away. Weekly Routines Cilley advises the use of weekly routines, whereby each weekday is assigned an additional task or focus; Monday is daily cleaning, Wednesday is errand day, Thursday is grocery day and Friday is "desk day", focusing on paperwork and finances, as well as the day to declutter the car. Weekly Home Blessing Cilley's adaptation of Pam Young and Peggy Jones' "slipshod cleaning" is the one-hour housecleaning mission called the "weekly home blessing." Using their timers, followers are instructed to vacuum, dust, mop, empty trash, change bedsheets and clean up old magazines. Each task is allocated ten minutes only. Get Dressed to Shoes Cilley insists that her followers "get dressed to lace-up shoes" before beginning their housekeeping tasks - or contacting her for an interview. Zones FlyLady divides a house into five sections or zones, which are allocated to the five weeks or partial weeks of the month. Each day the email list will provide a "mission" with a detailed cleaning task in the current zone. Control Journal FlyLady advises the use of a "Control Journal," a household management notebook or binder, to store the owner's routines, lists and other important household information. FlyLady Reminders Daily reminders of the routines, zones and missions, as well as "testimonials" of the system and products, are sent to subscribers of the FlyLady list. Perfectionism Leads to Procrastination FlyLady asserts that the most frequent reason for procrastination and inefficiency is perfectionism, as people won't start a task if they think they don't have the time or the ability to do it perfectly. Some frequently repeated sayings in this respect are "good enough is good enough" or "housework done incorrectly still blesses your family". No Whining FlyLady often repeats that her Facebook page is a "No Whining Zone", and that "If you can't say anything nice, say nothing at all". References External links FlyLady.net FlyLady.tv Karen Kohlhaas. Why flylady is great for actors. August, 2006 Orit Kuritsky. Transformational Tales: Media, Makeovers, and Material Culture. February, 2009 Related websites: Saving Dinner by Leanne Ely (adviser to Flybabies) American social networking websites Electronic mailing lists Cleaning Home improvement Domestic life
FlyLady
Chemistry
1,506
21,675,886
https://en.wikipedia.org/wiki/Germs%3A%20Biological%20Weapons%20and%20America%27s%20Secret%20War
Germs: Biological Weapons and America's Secret War is a 2001 book written by New York Times journalists Judith Miller, Stephen Engelberg, and William Broad. It describes how humanity has dealt with biological weapons, and the dangers of bioterrorism. It was the 2001 New York Times #1 Non-Fiction Bestseller the weeks of October 28 and November 4. Overview Germs is a work of investigative journalism employing biographical and historical narrative to provide context. The three authors interviewed hundreds of scientists and senior U.S. officials, and reviewed recently declassified documents and reports from the former Soviet Union's bioweapons laboratories. Summary The book opens with an account of the 1984 salmonella poisonings in The Dalles, Oregon, caused by followers of Bhagwan Shree Rajneesh who sprayed salmonella onto salad bars. Other research shows how Moscow scientists created an untraceable germ that would induce the body to self-destruct, and reveals that the U.S. military planned for germ warfare on Cuba during the 1960s. Three classified U.S. biodefense projects are detailed: Project Bacchus, Project Clear Vision, and Project Jefferson. Germs concludes with an assessment of the United States' ability to deter future bio-attacks. Reviews The New York Times Book Review was favorable, though it criticized the book's tone as "somewhat alarmist". BusinessWeek was also generally favorable, except for pointing out some conflicting views on bioterrorism. The Guardian's book review, by British psychiatrist Simon Wessely, cautioned against panic, stating that biological weapons can cause destruction through fear, effectively giving the biodefense industry "the equivalent of a blank cheque". Adaptations On November 13, 2001, the science TV series Nova aired an episode entitled Bioterror. Two years in the making, it chronicled Miller, Engelberg, and Broad's research and investigation into biological weapons. References External links Panel discussion at the Council on Foreign Relations with Miller, Engelberg, and Broad, October 29, 2001, C-SPAN Panel discussion at the 92nd Street Y with Miller, Engelberg, and Broad, November 11, 2001, C-SPAN 2001 non-fiction books Science books American non-fiction books Non-fiction books about war Biological warfare Simon & Schuster books
Germs: Biological Weapons and America's Secret War
Biology
489
62,175,208
https://en.wikipedia.org/wiki/Leucine-sensitive%20hypoglycemia%20of%20infancy
Leucine-sensitive hypoglycemia of infancy is a type of metabolic disorder. It is inherited in an autosomal dominant fashion. It is rare. Names Other names include hypoglycemia leucine-induced; hypoglycemia leucine induced; and familial infantile hypoglycemia precipitated by leucine. References Metabolic disorders Autosomal dominant disorders
Leucine-sensitive hypoglycemia of infancy
Chemistry
94
1,583,940
https://en.wikipedia.org/wiki/PNY%20Technologies
PNY Technologies, Inc., doing business as PNY, is an American manufacturer of flash memory cards, USB flash drives, solid state drives, memory upgrade modules, portable battery chargers, computer locks, cables, chargers, adapters, and consumer and professional graphics cards. The company is headquartered in Parsippany-Troy Hills, New Jersey. PNY stands for "Paris, New York", as they used to trade memory modules between Paris and New York. History PNY Electronics, Inc. originated out of Brooklyn, New York in 1985 as a company that bought and sold memory chips. In 1996, the company was headquartered in Moonachie, New Jersey, and had a manufacturing production plant there, an additional plant in Santa Clara, California, and served Europe from a third facility in Bordeaux, France. To emphasize its expansion into manufacturing new forms of memory and complementary products, the company changed its name in 1997 to PNY Technologies, Inc. The company now has main offices in Parsippany, New Jersey; Santa Clara, California; Miami, Florida; Bordeaux, France, and Taiwan. In 2009, the New Jersey Nets sold the naming rights of their practice jerseys to PNY. In 2010, New Jersey Governor Chris Christie spoke to PNY CEO Gadi Cohen about staying in New Jersey after Cohen was reportedly considering a move to Pennsylvania. In 2011, PNY moved their global headquarters and main manufacturing facility to a 40+ acre location on Jefferson Road in Parsippany, NJ. Lt. Governor Kim Guadagno toured the company and called it "a real good business news story for New Jersey." Products PNY is a memory and graphics technology company and manufacturer of computer peripherals, including the following products: Flash memory cards USB flash drives Solid state drives Memory upgrades NVIDIA graphics cards HDMI cables DRAM modules Portable battery chargers HP Pendrive & MicroSD Cards Legacy products: CD-R discs PNY has introduced water-cooled video cards and themed USB flash drives that include full films. References External links Companies based in Morris County, New Jersey American companies established in 1985 Computer companies established in 1985 1985 establishments in New York City Computer companies of the United States Computer memory companies Computer hardware companies Graphics hardware companies Manufacturing companies based in New Jersey Parsippany-Troy Hills, New Jersey Privately held companies based in New Jersey
PNY Technologies
Technology
476
6,688,339
https://en.wikipedia.org/wiki/Bacterivore
A bacterivore is an organism which obtains energy and nutrients primarily or entirely from the consumption of bacteria. The term is most commonly used to describe free-living, heterotrophic, microscopic organisms such as nematodes as well as many species of amoeba and numerous other types of protozoans, but some macroscopic invertebrates are also bacterivores, including sponges, polychaetes, and certain molluscs and arthropods. Many bacterivorous organisms are adapted for generalist predation on any species of bacteria, but not all bacteria are easily digested; the spores of some species, such as Clostridium perfringens, will never be prey because of their cellular attributes. In microbiology Bacterivores can sometimes be a problem in microbiology studies. For instance, when scientists seek to assess microorganisms in samples from the environment (such as freshwater), the samples are often contaminated with microscopic bacterivores, which interfere with the growing of bacteria for study. Adding cycloheximide can inhibit the growth of bacterivores without affecting some bacterial species, but it has also been shown to inhibit the growth of some anaerobic prokaryotes. Examples of bacterivores Caenorhabditis elegans Ceriodaphnia quadrangula Diaphanosoma brachyura Vorticella Paramecium Paratrimastix pyriformis Many species of protozoa Many benthic meiofauna, e.g. gastrotrichs Springtails Many sponges, e.g. Aplysina aerophoba Many crustaceans Many polychaetes, e.g. feather duster worms Some marine molluscs See also Microbivory References Davies, Cheryl M. et al.: Survival of Fecal Microorganisms in Marine and Freshwater Sediments, 1995, PDF Ecology terminology Trophic ecology
Bacterivore
Biology
407
11,466,019
https://en.wikipedia.org/wiki/Puccinia%20kuehnii
Puccinia kuehnii is a plant pathogen that causes orange rust disease of sugarcane. Orange rust was first discovered in India in 1914, but the first case of major economic damage to sugarcane was recorded in Australia in 2001. The first case in the United States was found in Florida in 2007; Florida has so far been the only U.S. state where sugarcane has been affected by this kind of rust. Treating infected sugarcane requires at least three rounds of fungicide, costing growers $40 million a year. Currently, scientists at the Agricultural Research Service are genetically analyzing the fungus that causes orange rust in order to help combat the problem. See also List of Puccinia species References External links Index Fungorum USDA ARS Fungal Database British Society for Plant Pathology (BSPP) USDA Agricultural Research Service Fungal plant pathogens and diseases Sugarcane diseases kuehnii Fungi described in 1890 Fungus species
Puccinia kuehnii
Biology
188
897,658
https://en.wikipedia.org/wiki/Derivation%20%28differential%20algebra%29
In mathematics, a derivation is a function on an algebra that generalizes certain features of the derivative operator. Specifically, given an algebra A over a ring or a field K, a K-derivation is a K-linear map D : A → A that satisfies Leibniz's law: D(ab) = aD(b) + D(a)b. More generally, if M is an A-bimodule, a K-linear map D : A → M that satisfies the Leibniz law is also called a derivation. The collection of all K-derivations of A to itself is denoted by DerK(A). The collection of K-derivations of A into an A-module M is denoted by DerK(A, M). Derivations occur in many different contexts in diverse areas of mathematics. The partial derivative with respect to a variable is an R-derivation on the algebra of real-valued differentiable functions on R^n. The Lie derivative with respect to a vector field is an R-derivation on the algebra of differentiable functions on a differentiable manifold; more generally it is a derivation on the tensor algebra of a manifold. It follows that the adjoint representation of a Lie algebra is a derivation on that algebra. The Pincherle derivative is an example of a derivation in abstract algebra. If the algebra A is noncommutative, then the commutator with respect to an element x of the algebra A defines a linear endomorphism of A to itself, which is a derivation over K. That is, D(a) = [x, a] = xa − ax, where [x, ·] is the commutator with respect to x. An algebra A equipped with a distinguished derivation d forms a differential algebra, and is itself a significant object of study in areas such as differential Galois theory. Properties If A is a K-algebra, for K a ring, and D : A → A is a K-derivation, then: If A has a unit 1, then D(1) = D(1^2) = 2D(1), so that D(1) = 0. Thus by K-linearity, D(k) = 0 for all k ∈ K. If A is commutative, D(x^2) = xD(x) + D(x)x = 2xD(x), and D(x^n) = nx^(n−1)D(x), by the Leibniz rule. More generally, for any x1, x2, …, xn ∈ A, it follows by induction that D(x1 x2 ⋯ xn) = Σi x1 ⋯ x(i−1) D(xi) x(i+1) ⋯ xn, which is Σi D(xi) Π(j≠i) xj if, for all i, D(xi) commutes with x1, x2, …, x(i−1). For n > 1, D^n is not a derivation, instead satisfying a higher-order Leibniz rule: D^n(uv) = Σ(k=0..n) C(n,k) D^(n−k)(u) D^k(v). Moreover, if M is an A-bimodule, write DerK(A, M) for the set of K-derivations from A to M. DerK(A, M) is a module over K. DerK(A) is a Lie algebra with Lie bracket defined by the commutator: [D1, D2] = D1 ∘ D2 − D2 ∘ D1, since it is readily verified that the commutator of two derivations is again a derivation. There is an A-module Ω(A/K) (called the Kähler differentials) with a K-derivation d : A → Ω(A/K) through which any derivation factors. That is, for any derivation D there is an A-module map φ with D = φ ∘ d. The correspondence D ↔ φ is an isomorphism of A-modules: DerK(A, M) ≅ HomA(Ω(A/K), M). If k ⊆ K is a subring, then A inherits a k-algebra structure, so there is an inclusion DerK(A) ⊆ Derk(A), since any K-derivation is a fortiori a k-derivation. Graded derivations Given a graded algebra A and a homogeneous linear map D of grade |D| on A, D is a homogeneous derivation if D(ab) = D(a)b + ε^(|a||D|) aD(b) for every homogeneous element a and every element b of A, for a commutator factor ε = ±1. A graded derivation is a sum of homogeneous derivations with the same ε. If ε = 1, this definition reduces to the usual case. If ε = −1, however, then D(ab) = D(a)b + (−1)^|a| aD(b) for odd |D|, and D is called an anti-derivation. Examples of anti-derivations include the exterior derivative and the interior product acting on differential forms. Graded derivations of superalgebras (i.e. Z2-graded algebras) are often called superderivations. Related notions Hasse–Schmidt derivations are K-algebra homomorphisms A → A⟦t⟧ into the ring of formal power series. Composing further with the map that sends a formal power series Σ a_n t^n to the coefficient a_1 gives a derivation. See also In differential geometry derivations are tangent vectors Kähler differential Hasse derivative p-derivation Wirtinger derivatives Derivative of the exponential map References Differential algebra
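As a worked check of the commutator example above (standard algebra, not specific to any cited source), the Leibniz law for ad_x(a) = [x, a] can be verified directly:

% ad_x(a) = [x, a] = xa - ax is a K-derivation:
\begin{aligned}
\operatorname{ad}_x(ab) &= x(ab) - (ab)x \\
  &= (xa)b - (ax)b + a(xb) - a(bx) \\
  &= (xa - ax)b + a(xb - bx) \\
  &= \operatorname{ad}_x(a)\,b + a\,\operatorname{ad}_x(b).
\end{aligned}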
Derivation (differential algebra)
Mathematics
861
13,629,331
https://en.wikipedia.org/wiki/Small%20nucleolar%20RNA%20SNORD100
In molecular biology, small nucleolar RNA SNORD100 (also known as HBII-429) is a non-coding RNA (ncRNA) molecule which functions in the biogenesis (modification) of other small nuclear RNAs (snRNAs). This type of modifying RNA is located in the nucleolus of the eukaryotic cell, which is a major site of snRNA biogenesis. It is known as a small nucleolar RNA (snoRNA) and is also often referred to as a guide RNA. SNORD100 belongs to the C/D box class of snoRNAs, which contain the C (UGAUGA) and D (CUGA) box motifs. Most of the members of the C/D box family function in directing site-specific 2'-O-methylation of substrate RNAs. SNORD100 is predicted to guide the 2'-O-ribose methylation of 18S ribosomal RNA (rRNA) at residue G436. References External links Non-coding RNA
Small nucleolar RNA SNORD100
Chemistry
218
19,933,553
https://en.wikipedia.org/wiki/MACS%20J0025.4-1222
MACS J0025.4-1222 is a galaxy cluster created by the collision of two galaxy clusters, and is part of the MAssive Cluster Survey (MACS). Like the earlier-discovered Bullet Cluster, this cluster shows a clear separation between the centroid of the intergalactic gas (which contains the majority of the normal, or baryonic, mass) and the mass centroids of the colliding clusters. In the image, intergalactic gas is shown in pink and the mass centroids of the colliding clusters in blue, showing the separation of the two, similar to the Bullet Cluster. It provides independent, direct evidence for dark matter and supports the view that dark matter particles interact with each other only very weakly. Details The image shown is a composite of separate exposures made by the Hubble Space Telescope ACS and WFPC2 detectors and the Chandra ACIS detector. The Hubble images were taken on November 5, 2006, and June 6, 2007. The visible-light images from Hubble showed gravitational lensing, which allowed astronomers to infer the distribution of total mass, both dark matter and normal matter (colored in blue). The distribution of normal matter is mostly in the form of hot gas glowing brightly in X-rays (shown in pink). Its distribution was accurately mapped from Chandra data. From these it was possible to tell that most of the mass in the two blue regions was dark matter. The international team of astronomers in this study was led by Marusa Bradac of the University of California, Santa Barbara, and Steve Allen of the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and the Stanford Linear Accelerator Center (SLAC). The two clusters that formed MACS J0025 are each almost a quadrillion times the mass of the Sun. They merged at speeds of millions of miles per hour, and as they did so the hot gas in each cluster collided with the hot gas in the other and slowed down. The dark matter (which interacts weakly) did not. The separation between the normal matter (pink) and dark matter (blue) therefore provides direct evidence for dark matter and supports the view that dark matter particles interact with each other almost entirely through gravity. References External links Galaxy clusters
MACS J0025.4-1222
Astronomy
455
7,959,499
https://en.wikipedia.org/wiki/Generalized%20quantifier
In formal semantics, a generalized quantifier (GQ) is an expression that denotes a set of sets. This is the standard semantics assigned to quantified noun phrases. For example, the generalized quantifier every boy denotes the set of sets of which every boy is a member: {X | for every boy b, b ∈ X}. This treatment of quantifiers has been essential in achieving a compositional semantics for sentences containing quantifiers. Type theory A version of type theory is often used to make the semantics of different kinds of expressions explicit. The standard construction defines the set of types recursively as follows: e and t are types. If a and b are both types, then so is ⟨a,b⟩. Nothing is a type, except what can be constructed on the basis of lines 1 and 2 above. Given this definition, we have the simple types e and t, but also a countable infinity of complex types, some of which include: ⟨e,t⟩, ⟨t,t⟩, ⟨⟨e,t⟩,t⟩, ⟨e,⟨e,t⟩⟩, ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩, … Expressions of type e denote elements of the universe of discourse, the set of entities the discourse is about. This set is usually written as De. Examples of type e expressions include John and he. Expressions of type t denote a truth value, usually rendered as the set {0,1}, where 0 stands for "false" and 1 stands for "true". Examples of expressions that are sometimes said to be of type t are sentences or propositions. Expressions of type ⟨e,t⟩ denote functions from the set of entities to the set of truth values. This set of functions is rendered as {0,1}^De, the set of functions from De to {0,1}. Such functions are characteristic functions of sets. They map every individual that is an element of the set to "true", and everything else to "false." It is common to say that they denote sets rather than characteristic functions, although, strictly speaking, the latter is more accurate. Examples of expressions of this type are predicates, nouns and some kinds of adjectives. In general, expressions of complex type ⟨a,b⟩ denote functions from the set of entities of type a to the set of entities of type b, a construct we can write as follows: D⟨a,b⟩ = Db^Da. We can now assign types to the words in our sentence above (Every boy sleeps) as follows. Type(boy) = ⟨e,t⟩ Type(sleeps) = ⟨e,t⟩ Type(every) = ⟨⟨e,t⟩,⟨⟨e,t⟩,t⟩⟩ Type(every boy) = ⟨⟨e,t⟩,t⟩ and so we can see that the generalized quantifier in our example is of type ⟨⟨e,t⟩,t⟩. Thus, every denotes a function from a set to a function from a set to a truth value. Put differently, it denotes a function from a set to a set of sets. It is that function which for any two sets A, B, every(A)(B) = 1 if and only if A ⊆ B. Typed lambda calculus A useful way to write complex functions is the lambda calculus. For example, one can write the meaning of sleeps as the following lambda expression, which is a function from an individual x to the proposition that x sleeps: λx.sleep(x). Such lambda terms are functions whose domain is what precedes the period, and whose range is the type of thing that follows the period. If x is a variable that ranges over elements of De, then the following lambda term denotes the identity function on individuals: λx.x. We can now write the meaning of every with the following lambda term, where X, Y are variables of type ⟨e,t⟩: λX.λY. X ⊆ Y. If we abbreviate the meaning of boy and sleeps as "B" and "S", respectively, we have that the sentence every boy sleeps now means the following: (λX.λY. X ⊆ Y)(B)(S). By β-reduction, (λY. B ⊆ Y)(S), and B ⊆ S. The expression every is a determiner. Combined with a noun, it yields a generalized quantifier of type ⟨⟨e,t⟩,t⟩.
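A minimal sketch of this set-theoretic treatment in executable form (the extensions of boy and sleeps below are invented for illustration, not drawn from the cited literature):

# Determiners as curried functions on sets, mirroring type <<e,t>,<<e,t>,t>>.
BOY = frozenset({"john", "bill"})            # illustrative extension of "boy"
SLEEP = frozenset({"john", "bill", "rex"})   # illustrative extension of "sleeps"

every = lambda A: (lambda B: A <= B)         # every(A)(B) = 1 iff A ⊆ B
no = lambda A: (lambda B: not (A & B))       # no(A)(B) = 1 iff A ∩ B = ∅
exactly = lambda n: (lambda A: (lambda B: len(A & B) == n))

print(every(BOY)(SLEEP))       # True:  "Every boy sleeps"
print(no(BOY)(SLEEP))          # False: "No boy sleeps"
print(exactly(2)(BOY)(SLEEP))  # True:  "Exactly two boys sleep"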
Properties Monotonicity Monotone increasing GQs A generalized quantifier GQ is said to be monotone increasing (also called upward entailing) if, for every pair of sets X and Y, the following holds: if X ⊆ Y, then GQ(X) entails GQ(Y). The GQ every boy is monotone increasing. For example, the set of things that run fast is a subset of the set of things that run. Therefore, the first sentence below entails the second: Every boy runs fast. Every boy runs. Monotone decreasing GQs A GQ is said to be monotone decreasing (also called downward entailing) if, for every pair of sets X and Y, the following holds: if X ⊆ Y, then GQ(Y) entails GQ(X). An example of a monotone decreasing GQ is no boy. For this GQ we have that the first sentence below entails the second. No boy runs. No boy runs fast. The lambda term for the determiner no is λX.λY. X ∩ Y = ∅: it says that the two sets have an empty intersection. Monotone decreasing GQs are among the expressions that can license a negative polarity item, such as any. Monotone increasing GQs do not license negative polarity items. Good: No boy has any money. Bad: *Every boy has any money. Non-monotone GQs A GQ is said to be non-monotone if it is neither monotone increasing nor monotone decreasing. An example of such a GQ is exactly three boys. Neither of the following sentences entails the other. Exactly three students ran. Exactly three students ran fast. The first sentence does not entail the second: the fact that the number of students who ran is exactly three does not entail that each of these students ran fast, so the number of students who did so may be smaller than 3. Conversely, the second sentence does not entail the first: the sentence exactly three students ran fast can be true even though the number of students who merely ran (i.e. not so fast) is greater than 3. The lambda term for the (complex) determiner exactly three is λX.λY. |X ∩ Y| = 3: it says that the cardinality of the intersection between the two sets equals 3. Conservativity A determiner D is said to be conservative if the following equivalence holds: D(A)(B) ↔ D(A)(A ∩ B). For example, the following two sentences are equivalent. Every boy sleeps. Every boy is a boy who sleeps. It has been proposed that all determiners, in every natural language, are conservative. The expression only is not conservative: the following two sentences are not equivalent. Only boys sleep. Only boys are boys who sleep. But it is, in fact, not common to analyze only as a determiner; rather, it is standardly treated as a focus-sensitive adverb. See also Scope (formal semantics) Lindström quantifier Branching quantifier References Further reading External links Dag Westerståhl, 2011. 'Generalized Quantifiers'. Stanford Encyclopedia of Philosophy. Semantics Formal semantics (natural language) Quantifier (logic)
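Continuing the sketch above, the monotonicity and conservativity properties defined in this article can be checked by brute force over a small universe (again a toy, with an invented domain):

from itertools import combinations

UNIVERSE = frozenset({"a", "b", "c"})

def subsets(s):
    """All subsets of s, as frozensets."""
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(sorted(s), r)]

every = lambda A: (lambda B: A <= B)
no = lambda A: (lambda B: not (A & B))

def monotone_increasing(gq):
    """If X ⊆ Y, then gq(X) entails gq(Y)."""
    return all(not gq(X) or gq(Y)
               for X in subsets(UNIVERSE) for Y in subsets(UNIVERSE) if X <= Y)

def conservative(det):
    """D(A)(B) iff D(A)(A ∩ B), for all A and B."""
    return all(bool(det(A)(B)) == bool(det(A)(A & B))
               for A in subsets(UNIVERSE) for B in subsets(UNIVERSE))

A = frozenset({"a", "b"})
print(monotone_increasing(every(A)))           # True
print(monotone_increasing(no(A)))              # False (it is monotone decreasing)
print(conservative(every), conservative(no))   # True True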
Generalized quantifier
Mathematics
1,364
46,801,915
https://en.wikipedia.org/wiki/%CE%93-Melanocyte-stimulating%20hormone
γ-Melanocyte-stimulating hormone (γ-MSH) is an endogenous peptide hormone and neuropeptide. It is a melanocortin, specifically, one of the three types of melanocyte-stimulating hormone (MSH), and is produced from proopiomelanocortin (POMC). It is an agonist of the MC1, MC3, MC4, and MC5 receptors. It exists in three forms: γ1-MSH, γ2-MSH, and γ3-MSH. γ-MSH regulates cardiovascular function. Its effects are exerted through a central neural pathway distributed to the kidney, rather than through direct modulation of tubular sodium transport. γ-MSH activates MC3R in renal tubular cells, limiting sodium absorption via this neural pathway; this regulates sodium balance and blood pressure. If MC3R is absent, there is resistance to γ-MSH, which results in hypertension on a high-sodium diet (HSD). See also α-Melanocyte-stimulating hormone β-Melanocyte-stimulating hormone Adrenocorticotropic hormone References Human hormones Melanocortin receptor agonists Peptide hormones
Γ-Melanocyte-stimulating hormone
Chemistry,Biology
260
56,871,287
https://en.wikipedia.org/wiki/SAE%20J300
SAE J300 is a standard that defines the viscometric properties of mono- and multigrade engine oils, maintained by SAE International. Key parameters for engine oil viscometrics are the oil's kinematic viscosity, its high temperature-high shear viscosity measured by the tapered bearing simulator, and low temperature properties measured by the cold-cranking simulator and mini-rotary viscometer. This standard is commonly used throughout the world; standards organizations that use it include API, ILSAC, and ACEA. The SAE has a separate viscosity rating system for gear, axle, and manual transmission oils, SAE J306, which should not be confused with engine oil viscosity. A higher number on a gear oil (e.g., 75W-140) does not mean that it has a higher viscosity than a 20W-50 engine oil. Grades In the SAE J300 standard (2021), the viscosity grades are 0W, 5W, 10W, 15W, 20W, 25W, 8, 12, 16, 20, 30, 40, 50, and 60. In the United States, these numbers are often referred to as the "weight" of a motor oil, and single-grade motor oils are often called "straight-weight" oils. The grades with a W designation are considered winter grades, and denote an engine oil's low-temperature properties, while non-winter grades denote an engine oil's properties at the operating temperature of an engine. The SAE 8 through SAE 16 viscosity grades describe oils that can improve fuel economy through reduced hydrodynamic friction. To assign winter grades, the dynamic viscosity is measured at various cold temperatures, specified in J300, in units of mPa·s, or the equivalent older non-SI units, centipoise (abbreviated cP), using two test methods. They are the cold-cranking simulator (CCS, ASTM D5293) and the mini-rotary viscometer (pumping, ASTM D4684). Each temperature is associated with a grade, SAE 0W, 5W, 10W, 15W, 20W, or 25W, with higher grade numbers corresponding to higher temperatures. The oil fails the test at a particular temperature if the oil is too viscous. The grade of the oil is that associated with the coldest temperature at which the oil passes the test. For example, if an oil passes at the specified temperatures for 10W and 5W, but fails at the 0W temperature, the oil is grade 5W. It cannot be labeled 0W or 10W. To assign non-winter grades, kinematic viscosity is graded by ASTM D445 or ASTM D7042, measuring the time it takes for a standard amount of oil at a temperature of 100 °C to flow through a standard orifice, in units of mm²/s (millimetre squared per second) or the equivalent older non-SI units, centistokes (abbreviated cSt). The longer it takes, the higher the viscosity and thus the higher the SAE code. Larger numbers are thicker. J300 specifies a viscosity range for each non-winter grade, with higher grade numbers corresponding to higher viscosities. In addition, a minimum viscosity measured at a high temperature and high shear rate (HTHS, ASTM D4683) is also required. Multi-grade designations Grades may appear alone - for example, a lawnmower may require SAE 30. This single grade specification means that the oil must meet the SAE 30 requirements. But SAE also allows designating an oil with two viscosity grades, referred to as a multi-grade oil. For example, 10W-30 designates a common multi-grade oil. A 10W-30 oil must pass the SAE J300 viscosity grade requirements for both 10W and 30, and all limitations placed on the viscosity grades, such as the requirement that a 10W oil must fail the 5W requirements.
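A minimal sketch of the labeling rule just described (the pass/fail results are supplied by the caller; no actual J300 limit values are encoded, and the helper name is invented):

def valid_label(label: str, passed_grades: set) -> bool:
    """Check an SAE label such as '10W-30' or '30' against the set of
    J300 grades the oil passes: the winter grade on a label must be the
    coldest W grade passed, and the non-winter grade must be passed too."""
    winter_order = ["0W", "5W", "10W", "15W", "20W", "25W"]
    parts = label.split("-")
    if len(parts) == 2:
        winter, hot = parts
        coldest_passed = next((g for g in winter_order if g in passed_grades), None)
        return winter == coldest_passed and hot in passed_grades
    return parts[0] in passed_grades

# An oil passing 10W (but not 5W) and 30 may be labeled 10W-30, not 5W-30:
print(valid_label("10W-30", {"10W", "15W", "20W", "25W", "30"}))  # True
print(valid_label("5W-30", {"10W", "15W", "20W", "25W", "30"}))   # False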
Viscosity index improvers (VIIs) are special polymer additives added to oil, usually to improve cold weather performance in passenger vehicles. If any VIIs are used, the oil must be labeled with a multi-grade designation. Otherwise, an oil not containing VIIs can be labeled as multi-grade or single grade. For example, a 20W-20 oil can be easily made with modern base oils without any VIIs. This oil can be labeled as 20W-20, 20W, or 20. History Before the discovery of oil fields in Pennsylvania, lubricating oils primarily consisted of animal and vegetable oils like lard and castor oil. However, with the opening of these fields, petroleum-based lubricants quickly entered the market. Initially, there was skepticism surrounding these petroleum oils, seen as inferior to their animal and vegetable counterparts. To cut costs, some started blending petroleum oils with animal or vegetable oils, often selling these mixtures at regular prices without disclosing the presence of petroleum oils. This practice of adulteration was frowned upon, prompting chemists and oil experts to develop tests to detect such fraud. Tests such as viscosity, specific gravity, flash point, fire point, pour point, acid number, and saponification number were devised to distinguish between petroleum and animal/vegetable oils. Of these, while some such as viscosity were relevant to selecting the right oil for an application, most were useful only for detecting adulteration. Often, specific test values were specified as requirements despite being irrelevant, unfairly giving the perception that Pennsylvania oil was inferior. Despite these difficulties, Pennsylvania oils gradually replaced animal and vegetable oils in many applications because they were cheap and gave good lubrication. As oil fields in the Central, Western, and Southwestern regions of the United States began production, newer oils entered the market, competing with Pennsylvania oils. Chemists discovered variations in the properties of these oils compared to Pennsylvania oils. Again, these oils were often considered inferior solely on the basis of particular tests, despite these tests being unrelated to the specific application, but eventually the new oils developed a place for themselves based on their merits. Although the oils differed in their characteristics, most automobiles could be used with a large variety of oils. In June 1911, SAE published Specification No. 26 for "automobile engine light lubricating oil", the first such formal specification. H. C. Dickinson, of the Bureau of Standards, similarly tried very hard to have the larger oil companies agree on the viscosities represented by the terms "light", "medium", "heavy", "extra heavy", and so on. These efforts were unsuccessful, since these names were tied up with the trademarks and advertising of the oil companies. Companies would market the same oil as a "heavy" automobile oil and a "light" tractor oil, and different companies might call this "heavy" automobile oil a "light" or "medium" automobile oil. Beginning in 1920, the SAE began efforts to draw up a more extensive set of specifications. In 1921, US government lubrication specifications were drawn up to facilitate the purchase of oil by the government. Representatives from the government, SAE, and the American Petroleum Institute met in 1922, resulting in SAE adopting a specification for 10 grades of oil in March 1923. This specification was not adopted for general use. 
Refiners felt that marking their oils with the SAE specifications would associate them with inferior oils. Also, the specifications included many tests that were irrelevant to automobile performance. By 1926, it had become clear that the light/medium/heavy distinction was not practical for automobile users. In the fall of 1925, a joint meeting of SAE and ASTM committee members (automotive and oil engineers) worked out a new standard. The SAE adopted this standard in July 1926. This standard was similar to the modern single-grade standard in having grade numbers with no direct relationship to any measured property, but being ordered by ascending viscosity, and contained six grades 10 through 60. By 1928, the standard was being widely adopted by oil and automotive companies. Grade 70 was added in 1928. In 1933, SAE proposed 10W and 20W grades, which saw popular use despite never being formally adopted until 1950. In 1950, the 10, 60, and 70 grades were dropped, new 5W, 10W, and 20W grades were added, and the testing criteria were simplified. The multi-grade labeling scheme was approved in 1955. The J300 identifier was attached around 1962. The criteria were reformulated in 1967 to use kinematic viscosity in centiStokes and the cold-cranking simulator. The 15W grade was added December 1975. In 1980, 0W and 25W grades were added, and a low-temperature pumpability test. Grade 60 was re-added in 1987. HTHS viscosity was added in 1992. Grade 16 was added in 2013. Michael Covitch of Lubrizol, Chair of the SAE International Engine Oil Viscosity Classification (EOVC) task force was quoted stating "If we continued to count down from SAE 20 to 15 to 10, etc., we would be facing continuing customer confusion problems with popular low-temperature viscosity grades such as SAE 10W, SAE 5W, and SAE 0W," he noted. "By choosing to call the new viscosity grade SAE 16, we established a precedent for future grades, counting down by fours instead of fives: SAE 12, SAE 8, SAE 4." Grades 8 and 12 were added in 2015. The use of ASTM D7042 for determining low shear rate kinematic viscosity was added in 2021. References Further reading Lubrication Viscosity Motor oils Automotive standards
SAE J300
Physics
2,058
13,993,279
https://en.wikipedia.org/wiki/Anneliese%20Maier
Anneliese Maier (November 17, 1905 in Tübingen, Germany – December 1971 in Rome, Italy) was a German historian of science particularly known for her work researching natural philosophy in the middle ages. Biography Anneliese Maier was the daughter of the philosopher Heinrich Maier (1876–1933). She studied natural sciences and philosophy from 1923 to 1926 at the universities in Berlin and Zurich. In 1930 she finished her dissertation on Immanuel Kant (Kants Qualitätskategorien). She then worked for the Prussian Academy of Sciences. In 1936 she moved to Rome. There she worked until 1945 at the Biblioteca Apostolica Vaticana on the philosophy of nature. According to E. J. Dijksterhuis, the path of the influence of Oresme through James of St. Martinus was found by Maier: "The fourteenth-century treatise De Latitudinibus formarum which, omitting all the speculative elements, gives a summary of the purely mathematical part of Oresme's own work, was very widely diffused, first in manuscript and later in print, and as Auctor de latitudinibus the anonymous author became better known than Oresme himself. Through later researches by Miss A. Maier, the identity of this Auctor has meanwhile been established: the man who ensured the survival of Oresme's methods was an Italian Augustinian hermit, James of St. Martinus, also called James of Naples." In 1951 Maier became a professor at the University of Cologne. She became a member of the Academies of Sciences in Mainz (1949), Göttingen (1962) and Munich (1966). In 1966 she received the George Sarton Medal for her profound studies on the history of natural philosophy in the Middle Ages. The Alexander von Humboldt Foundation has named a research grant after her, the Anneliese Maier Research Award, which is a "collaboration award to promote the internationalisation of the humanities and social sciences in Germany." Selected works 1982: On the Threshold of Exact Science: Selected Writings of Anneliese Maier on Late Medieval Natural Philosophy, Steven D. Sargent, editor and translator, University of Pennsylvania Press. 1930: Kants Qualitätskategorien 1938: Die Mechanisierung des Weltbildes im 17. Jahrhundert Studien zur Naturphilosophie der Spätscholastik, 5 parts, 1949–1958. 1949: Die Vorläufer Galileis im 14. Jahrhundert 1951: Zwei Grundprobleme der scholastischen Naturphilosophie 1952: An der Grenze von Scholastik und Naturwissenschaft 1955: Metaphysische Hintergründe der spätscholastischen Naturphilosophie 1958: Zwischen Philosophie und Mechanik. Studien zur Naturphilosophie der Spätscholastik 1964–1977: Ausgehendes Mittelalter: Gesammelte Aufsätze zur Geistesgeschichte des 14. Jahrhunderts, 3 volumes. References Further reading Annette Vogt, "Von Berlin nach Rom - Anneliese Maier (1905–1971)", in MPI für Wissenschaftsgeschichte (ed.), Steiner Vlg., Stuttgart 2004, pp. 391–414. External links International Dictionary of Intellectual Historians "Anneliese Maier Research Award" 1905 births 1971 deaths People from Tübingen German historians of science 20th-century German writers 20th-century German historians 20th-century German women writers Women science writers German women historians Humboldt University of Berlin alumni University of Zurich alumni Corresponding Fellows of the Medieval Academy of America
Anneliese Maier
Technology
791
36,443,205
https://en.wikipedia.org/wiki/Clavulina%20arcuatus
Clavulina arcuatus is a species of coral fungus in the family Clavulinaceae. Found in Cameroon, it was described in 2007. References External links Fungi described in 2007 Fungi of Africa arcuatus Fungus species
Clavulina arcuatus
Biology
48
28,419,201
https://en.wikipedia.org/wiki/Shaft%20%28civil%20engineering%29
In civil engineering a shaft is an underground vertical or inclined passageway. Shafts are often entered through a manhole and closed by a manhole cover. They are constructed for a number of reasons, including: For the construction of a tunnel For ventilation of a tunnel or underground structure, also known as a ventilation shaft As a drop shaft for a sewerage or water tunnel For access to a tunnel or underground structure, also as an escape route Construction There are a number of methods for the construction of shafts, the most significant being: The use of sheet piles, diaphragm walls or bored piles to construct a square or rectangular braced shaft The use of segmental lining installed by underpinning, or sunk as a caisson, to form a circular shaft Incremental excavation with a shotcrete circular or elliptical lining Incremental excavation supported by shotcrete, rock bolts, cable anchors and steel sets or ribs Shafts can be sunk either dry or, for methods such as the caisson method, wet. Sinking a dry shaft means that any water that flows into the excavation is pumped out, leaving no significant standing or flowing water in the base of the shaft. When wet-sinking a shaft, the shaft is allowed to flood, and the muck is excavated from the base of the shaft underwater using a grab on the end of a crane or a similar excavation method. Because the shaft is flooded, the lining cannot be constructed at the excavation level of the shaft, so this method only suits approaches where the lining is installed before shaft sinking (such as the use of sheet piles) or where the lining is sunk down with the shaft, as in the caisson method. civil engineering tunnel construction
Shaft (civil engineering)
Engineering
331
9,667,107
https://en.wikipedia.org/wiki/Minimal%20polynomial%20%28linear%20algebra%29
In linear algebra, the minimal polynomial μA of an n × n matrix A over a field F is the monic polynomial P over F of least degree such that P(A) = 0. Any other polynomial Q with Q(A) = 0 is a (polynomial) multiple of μA. The following three statements are equivalent: λ is a root of μA, λ is a root of the characteristic polynomial χA of A, λ is an eigenvalue of matrix A. The multiplicity of a root λ of μA is the largest power m such that ker((A − λIn)^m) strictly contains ker((A − λIn)^(m−1)). In other words, increasing the exponent up to m will give ever larger kernels, but further increasing the exponent beyond m will just give the same kernel. If the field F is not algebraically closed, then the minimal and characteristic polynomials need not factor according to their roots (in F) alone, in other words they may have irreducible polynomial factors of degree greater than 1. For irreducible polynomials P one has similar equivalences: P divides μA, P divides χA, the kernel of P(A) has dimension at least 1, the kernel of P(A) has dimension at least deg(P). Like the characteristic polynomial, the minimal polynomial does not depend on the base field. In other words, considering the matrix as one with coefficients in a larger field does not change the minimal polynomial. The reason for this differs from the case with the characteristic polynomial (where it is immediate from the definition of determinants), namely by the fact that the minimal polynomial is determined by the relations of linear dependence between the powers of A: extending the base field will not introduce any new such relations (nor of course will it remove existing ones). The minimal polynomial is often the same as the characteristic polynomial, but not always. For example, if A is a multiple aIn of the identity matrix, then its minimal polynomial is X − a, since the kernel of A − aIn = 0 is already the entire space; on the other hand its characteristic polynomial is (X − a)^n (the only eigenvalue is a, and the degree of the characteristic polynomial is always equal to the dimension of the space). The minimal polynomial always divides the characteristic polynomial, which is one way of formulating the Cayley–Hamilton theorem (for the case of matrices over a field). Formal definition Given an endomorphism T on a finite-dimensional vector space V over a field F, let IT be the set defined as IT = {p ∈ F[t] | p(T) = 0}, where F[t] is the space of all polynomials over the field F. IT is a proper ideal of F[t]. Since F is a field, F[t] is a principal ideal domain, thus any ideal is generated by a single polynomial, which is unique up to a unit in F. A particular choice among the generators can be made, since precisely one of the generators is monic. The minimal polynomial is thus defined to be the monic polynomial that generates IT. It is the monic polynomial of least degree in IT.
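Because the minimal polynomial is pinned down by the first linear-dependence relation among the powers I, A, A^2, …, it can be computed numerically; a minimal sketch (the example matrix is invented, and a floating-point residual test stands in for exact arithmetic):

import numpy as np

def minimal_polynomial(A: np.ndarray, tol: float = 1e-9) -> np.ndarray:
    """Return monic coefficients [c0, ..., c(k-1), 1] of the least-degree
    monic p with p(A) = 0, found as the first power A^k that is a linear
    combination of I, A, ..., A^(k-1)."""
    n = A.shape[0]
    powers = [np.eye(n).flatten()]
    for k in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, k).flatten())
        M = np.column_stack(powers[:-1])                  # I, A, ..., A^(k-1)
        coef, *_ = np.linalg.lstsq(M, powers[-1], rcond=None)
        if np.linalg.norm(M @ coef - powers[-1]) < tol:   # dependence found
            return np.append(-coef, 1.0)                  # A^k - sum c_i A^i = 0
    raise RuntimeError("unreachable: Cayley-Hamilton bounds the degree by n")

A = np.diag([2.0, 2.0, 3.0])                  # eigenvalues 2, 2, 3
print(np.round(minimal_polynomial(A), 6))     # [ 6. -5.  1.]  i.e. x^2 - 5x + 6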
Applications An endomorphism T of a finite-dimensional vector space over a field F is diagonalizable if and only if its minimal polynomial factors completely over F into distinct linear factors. The fact that there is only one factor X − λ for every eigenvalue λ means that the generalized eigenspace for λ is the same as the eigenspace for λ: every Jordan block has size 1. More generally, if T satisfies a polynomial equation P(T) = 0 where P factors into distinct linear factors over F, then it will be diagonalizable: its minimal polynomial is a divisor of P and therefore also factors into distinct linear factors. In particular one has: P = X^k − 1: finite order endomorphisms of complex vector spaces are diagonalizable. For the special case k = 2 of involutions, this is even true for endomorphisms of vector spaces over any field of characteristic other than 2, since X^2 − 1 = (X − 1)(X + 1) is a factorization into distinct factors over such a field. This is a part of representation theory of cyclic groups. P = X^2 − X = X(X − 1): endomorphisms satisfying T^2 = T are called projections, and are always diagonalizable (moreover their only eigenvalues are 0 and 1). By contrast, if P = X^k with k ≥ 2, then T (a nilpotent endomorphism) is not necessarily diagonalizable, since X^k has a repeated root 0. These cases can also be proved directly, but the minimal polynomial gives a unified perspective and proof. Computation For a nonzero vector v in V define: IT,v = {p ∈ F[t] | p(T)(v) = 0}. This definition satisfies the properties of a proper ideal. Let μT,v be the monic polynomial which generates it. Properties Example Define T to be an endomorphism of R^3, given by its matrix on the canonical basis. Taking the first canonical basis vector e1 and its repeated images by T one obtains e1, T·e1, T^2·e1 and T^3·e1, of which the first three are easily seen to be linearly independent, and therefore span all of R^3. The last one then necessarily is a linear combination of the first three, and the coefficients of that combination give a monic cubic polynomial μT,e1 with μT,e1(T)(e1) = 0. This is in fact also the minimal polynomial μT and the characteristic polynomial χT: indeed μT,e1 divides μT which divides χT, and since the first and last are of degree 3 and all are monic, they must all be the same. Another reason is that in general if any polynomial in T annihilates a vector v, then it also annihilates T·v (just apply T to the equation that says that it annihilates v), and therefore by iteration it annihilates the entire space generated by the iterated images by T of v; in the current case we have seen that for v = e1 that space is all of R^3, so μT,e1(T) = 0. Indeed one verifies for the full matrix that μT,e1(T) is the zero matrix. See also Annihilating polynomial References Matrix theory Polynomials
Minimal polynomial (linear algebra)
Mathematics
1,052
24,776,783
https://en.wikipedia.org/wiki/Gliese%20667%20Cb
Gliese 667 Cb is an exoplanet orbiting the star Gliese 667 C, a member of the Gliese 667 triple-star system. It is the most massive planet discovered in the system and is likely a super-Earth or a mini-Neptune. Orbital-stability analysis indicates that it cannot be more than twice its minimum mass. It orbits too close to the star to be in the habitable zone and is thus not suitable for life as we know it. Eccentricity analysis indicates that Gliese 667 Cb is not a rocky planet. The planet is likely to be tidally locked, so that one side of the planet is in permanent daylight and the other side in permanent darkness. References Gliese 667 Exoplanets discovered in 2009 Exoplanets detected by radial velocity Scorpius 6
Gliese 667 Cb
Astronomy
174
43,837,903
https://en.wikipedia.org/wiki/China%20Railways%20Test%20and%20Certification%20Center
China Railways Test and Certification Center (CRCC, 中铁检验认证中心) is responsible for the certification of railway products for the Chinese market. Until 2003 the state-owned enterprise was named Railways Product Certification Center. In 2012, the CRCC enlarged its business scope and included railway equipment in its business portfolio. The CRCC employs more than 300 qualified employees. Mandatory CRCC products In 2014, the official CRCC product catalogue included about 378 products, all of which can be checked on the CRCC website. Among them are switches, single wagons, locomotives and complete trains, as well as components like signaling equipment, isolators, brakes and brake blocks. Certification process To apply for a CRCC certification, the Chinese regulations (Implementation Rules and GB standards) must first be bought from the CRCC. Afterwards, the application documents for the respective products have to be handed in at the CRCC. In the third step, companies send product samples to the CRCC and receive test reports with the test results, which are issued only in Chinese. If all product tests are completed successfully, Chinese inspectors will visit the manufacturing site for a two-day audit. After the factory audit, the CRCC will issue a Certificate of Approval. References External links 中铁检验认证中心 (Official website) Certification marks Export and import control Economy of China Safety codes Foreign trade of China Rail transport in China
China Railways Test and Certification Center
Mathematics
279
1,845,155
https://en.wikipedia.org/wiki/Disk%20encryption%20software
Disk encryption software is computer security software that protects the confidentiality of data stored on computer media (e.g., a hard disk, floppy disk, or USB device) by using disk encryption. Compared to access controls commonly enforced by an operating system (OS), encryption passively protects data confidentiality even when the OS is not active, for example, if data is read directly from the hardware or by a different OS. In addition, crypto-shredding eliminates the need to erase the data at the end of the disk's lifecycle. Disk encryption generally refers to wholesale encryption that operates on an entire volume mostly transparently to the user, the system, and applications. This is generally distinguished from file-level encryption, which operates by user invocation on a single file or group of files, and which requires the user to decide which specific files should be encrypted. Disk encryption usually includes all aspects of the disk, including directories, so that an adversary cannot determine the content, name or size of any file. It is well suited to portable devices such as laptop computers and thumb drives, which are particularly susceptible to being lost or stolen. If used properly, someone finding a lost device cannot penetrate actual data, or even know what files might be present. Methods The disk's data is protected using symmetric cryptography with the key randomly generated when a disk's encryption is first established. This key is itself encrypted in some way using a password or pass-phrase known (ideally) only to the user. Thereafter, in order to access the disk's data, the user must supply the password to make the key available to the software. This must be done sometime after each operating system start-up before the encrypted data can be used. Done in software, encryption typically operates at a level between all applications and most system programs and the low-level device drivers by "transparently" (from a user's point of view) encrypting data after it is produced by a program but before it is physically written to the disk. Conversely, it decrypts data immediately after it is read but before it is presented to a program. Properly done, programs are unaware of these cryptographic operations. Some disk encryption software (e.g., TrueCrypt or BestCrypt) provides features that generally cannot be accomplished with disk hardware encryption: the ability to mount "container" files as encrypted logical disks with their own file system, and encrypted logical "inner" volumes which are secretly hidden within the free space of the more obvious "outer" volumes. Such strategies provide plausible deniability. Well-known examples of disk encryption software include BitLocker for Windows; FileVault for Apple OS X; LUKS, a standard free-software scheme mainly for Linux; and TrueCrypt, a non-commercial freeware application for Windows, OS X and Linux. A 2008 study found data remanence in dynamic random access memory (DRAM), with data retention of seconds to minutes at room temperature and much longer times when memory chips were cooled to low temperature. The study authors were able to demonstrate a cold boot attack to recover cryptographic keys for several popular disk encryption systems despite some memory degradation, by taking advantage of redundancy in the way keys are stored after they have been expanded for efficient use. The authors recommend that computers be powered down, rather than be left in a "sleep" state, when not under physical control by the computer's legitimate owner.
This method of key recovery, however, is suited for controlled laboratory settings and is extremely impractical for "field" use due to the equipment and cooling systems required. Other features Plausible deniability Some disk encryption systems, such as VeraCrypt, CipherShed (active open source forks of the discontinued TrueCrypt project), BestCrypt (proprietary trialware), offer levels of plausible deniability, which might be useful if a user is compelled to reveal the password of an encrypted volume. Hidden volumes Hidden volumes are a steganographic feature that allows a second, "hidden", volume to reside within the apparent free space of a visible "container" volume (sometimes known as "outer" volume). The hidden volume has its own separate file system, password, and encryption key distinct from the container volume. The content of the hidden volume is encrypted and resides in the free space of the file system of the outer volume—space which would otherwise be filled with random values if the hidden volume did not exist. When the outer container is brought online through the disk encryption software, whether the inner or outer volume is mounted depends on the password provided. If the "normal" password/key of the outer volume proves valid, the outer volume is mounted; if the password/key of the hidden volume proves valid, then (and only then) can the existence of the hidden volume even be detected, and it is mounted; otherwise if the password/key does not successfully decrypt either the inner or outer volume descriptors, then neither is mounted. Once a hidden volume has been created inside the visible container volume, the user will store important-looking information (but which the user does not actually mind revealing) on the outer volume, whereas more sensitive information is stored within the hidden volume. If the user is forced to reveal a password, the user can reveal the password to the outer volume, without disclosing the existence of the hidden volume. The hidden volume will not be compromised, if the user takes certain precautions in overwriting the free areas of the "host" disk. No identifying features Volumes, be they stored in a file or a device/partition, may intentionally not contain any discernible "signatures" or unencrypted headers. As cipher algorithms are designed to be indistinguishable from a pseudorandom permutation without knowing the key, the presence of data on the encrypted volume is also undetectable unless there are known weaknesses in the cipher. This means that it is impossible to prove that any file or partition is an encrypted volume (rather than random data) without having the password to mount it. This characteristic also makes it impossible to determine if a volume contains another hidden volume. A file hosted volume (as opposed to partitions) may look out of place in some cases since it will be entirely random data placed in a file intentionally. However, a partition or device hosted volume will look no different from a partition or device that has been wiped with a common disk wiping tool such as Darik's Boot and Nuke. One can plausibly claim that such a device or partition has been wiped to clear personal data. Portable or "traveller mode" means the encryption software can be run without installation to the system hard drive. In this mode, the software typically installs a temporary driver from the portable media. Since it is installing a driver (albeit temporarily), administrative privileges are still required. 
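The password-wrapped volume key arrangement described under Methods above can be sketched with the pyca/cryptography package (an illustrative toy, not the on-disk format of any real product; all names here are invented):

import os, base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def _password_key(password: bytes, salt: bytes) -> bytes:
    """Derive a Fernet-compatible key from the user's password."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=480_000)
    return base64.urlsafe_b64encode(kdf.derive(password))

def wrap_volume_key(password: bytes):
    """Generate a random volume key, then encrypt ("wrap") it under a
    key derived from the password, as disk encryptors typically do."""
    volume_key = Fernet.generate_key()        # random data-encryption key
    salt = os.urandom(16)
    wrapped = Fernet(_password_key(password, salt)).encrypt(volume_key)
    return salt, wrapped

def unwrap_volume_key(password: bytes, salt: bytes, wrapped: bytes) -> bytes:
    return Fernet(_password_key(password, salt)).decrypt(wrapped)  # raises on a wrong password

salt, wrapped = wrap_volume_key(b"correct horse battery staple")
volume_key = unwrap_volume_key(b"correct horse battery staple", salt, wrapped)
print(Fernet(volume_key).encrypt(b"sector data")[:20])  # sector data goes under the volume key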
Resizable volumes Some disk encryption software allows encrypted volumes to be resized. Few systems implement this fully; most resort to using "sparse files" to achieve it. Backups Encrypted volumes contain "header" (or "CDB") data, which may be backed up. Overwriting these data will destroy the volume, so the ability to back them up is useful. Restoring the backup copy of these data may reset the volume's password to what it was when the backup was taken. See also Disk encryption theory Disk encryption hardware Comparison of disk encryption software Data remanence Disk encryption On-the-fly encryption Cold boot attack Single sign-on United States v. Boucher References Cryptographic software Disk encryption
Disk encryption software
Mathematics
1,590
2,122,657
https://en.wikipedia.org/wiki/Bhabha%20Atomic%20Research%20Centre
The Bhabha Atomic Research Centre (BARC) is India's premier nuclear research facility, headquartered in Trombay, Mumbai, Maharashtra, India. It was founded by Homi Jehangir Bhabha as the Atomic Energy Establishment, Trombay (AEET) in January 1954 as a multidisciplinary research programme essential for India's nuclear programme. It operates under the Department of Atomic Energy (DAE), which is directly overseen by the Prime Minister of India. BARC is a multi-disciplinary research centre with extensive infrastructure for advanced research and development covering the entire spectrum of nuclear science, chemical engineering, material sciences and metallurgy, electronic instrumentation, biology and medicine, supercomputing, high-energy physics and plasma physics, and associated research for the Indian nuclear programme and related areas. BARC's core mandate is to sustain peaceful applications of nuclear energy. It manages all facets of nuclear power generation, from the theoretical design of reactors to computer modeling and simulation, risk analysis, and the development and testing of new reactor fuels and materials. It also researches spent fuel processing and safe disposal of nuclear waste. Its other research focus areas are applications of isotopes in industry, radiation technologies and their application to health, food and medicine, agriculture and environment, accelerator and laser technology, electronics, instrumentation and reactor control, material science, and environment and radiation monitoring. BARC operates a number of research reactors across the country. Its primary facilities are located in Trombay, with new facilities also located in Challakere in Chitradurga district of Karnataka. A new Special Mineral Enrichment Facility which focuses on enrichment of uranium fuel is under construction in Atchutapuram near Visakhapatnam in Andhra Pradesh, to support India's nuclear submarine programme and to produce high-specific-activity radioisotopes for extensive research. History When Homi Jehangir Bhabha was working at the Indian Institute of Science, there was no institute in India which had the necessary facilities for original work in nuclear physics, cosmic rays, high-energy physics, and other frontiers of knowledge in physics. This prompted him to send a proposal in March 1944 to the Sir Dorabji Tata Trust for establishing "a vigorous school of research in fundamental physics". When Bhabha realised that technology development for the atomic energy programme could no longer be carried out within TIFR, he proposed to the government to build a new laboratory entirely devoted to this purpose. For this purpose, 1,200 acres of land was acquired at Trombay from the Bombay Government. Thus the Atomic Energy Establishment Trombay (AEET) started functioning in 1954. The same year, the Department of Atomic Energy (DAE) was also established. Bhabha established the BARC Training School to cater to the manpower needs of the expanding atomic energy research and development programme. Bhabha emphasized self-reliance in all fields of nuclear science and engineering. The Government of India created the Atomic Energy Establishment, Trombay (AEET) with Bhabha as the founding director on 3 January 1954. It was established to consolidate all the research and development activities for nuclear reactors and technology under the Atomic Energy Commission. 
All scientists and engineers engaged in the fields of reactor design and development, instrumentation, metallurgy, material science, etc., were transferred with their respective programmes from the Tata Institute of Fundamental Research (TIFR) to AEET, with TIFR retaining its original focus on fundamental research in the sciences. After Bhabha's death in 1966, the centre was renamed the Bhabha Atomic Research Centre on 22 January 1967. The first reactors at BARC and its affiliated power generation centres were imported from the West. India's first power reactors, installed at the Tarapur Atomic Power Station, were from the United States. The primary importance of BARC is as a research centre. BARC and the Indian government have consistently maintained that the reactors are used for this purpose only: Apsara (1956; named by the then Prime Minister of India, Jawaharlal Nehru, when he likened the blue Cerenkov radiation to the beauty of the Apsaras), CIRUS (1960; the "Canada-India Reactor", with assistance from the US), the now-defunct ZERLINA (1961; Zero Energy Reactor for Lattice Investigations and Neutron Assay), Purnima I (1972), Purnima II (1984), Dhruva (1985), Purnima III (1990), and KAMINI. Apsara was India's first nuclear reactor, built at BARC in 1956 to conduct basic research in nuclear physics. It is a 1 MWth light-water-cooled and -moderated swimming-pool-type thermal reactor that went critical on August 4, 1956, and is suitable for the production of isotopes, basic nuclear research, shielding experiments, neutron activation analysis, neutron radiography and the testing of neutron detectors. It was shut down permanently in 2010 and replaced with Apsara-U. Purnima-I is a plutonium-oxide-fuelled 1 MWth pulsed fast reactor that was built starting in 1970 and went critical on 18 May 1972, primarily to support the validation of design parameters for the development of plutonium-239 powered nuclear weapons. On the twentieth anniversary of the 1974 Pokhran nuclear test, Purnima's designer, P. K. Iyengar, reflected on the reactor's critical role: "Purnima was a novel device, built with about 20 kg of plutonium, a variable geometry of reflectors, and a unique control system. This gave considerable experience and helped to benchmark calculations regarding the behaviour of a chain-reacting system made out of plutonium. The kinetic behaviour of the system just above critical could be well studied. Very clever physicists could then calculate the time behaviour of the core of a bomb on isotropic compression. What the critical parameters would be, how to achieve optimum explosive power, and its dependence on the first self sustaining neutron trigger, were all investigated". It was decommissioned in 1973. Along with DRDO and other agencies and laboratories, BARC also played an essential role in nuclear weapons technology and research. The plutonium used in India's 1974 Smiling Buddha nuclear test came from CIRUS. In 1974 the head of this entire nuclear bomb project was the director of BARC, Raja Ramanna. The neutron initiator, of the polonium–beryllium type and code-named Flower, was developed by BARC. The entire nuclear bomb was engineered and finally assembled by Indian engineers at Trombay before transportation to the test site. 
The 1974 test (and the 1998 tests that followed) gave Indian scientists the technological know-how and confidence not only to develop nuclear fuel for future reactors to be used in power generation and research, but also the capacity to refine the same fuel into weapons-grade fuel for the development of nuclear weapons. BARC was also involved in the Pokhran-II series of five nuclear tests conducted at the Pokhran Test Range in May 1998. It was the second instance of nuclear testing conducted by India, after Smiling Buddha. The tests achieved their main objective of giving India the capability to build fission and thermonuclear weapons (hydrogen/fusion bombs) with yields up to 200 kilotons. The then Chairman of the Indian Atomic Energy Commission described each one of the explosions of Pokhran-II as "equivalent to several tests carried out by other nuclear weapon states over decades". Subsequently, India established computer simulation capability to predict the yields of nuclear explosives whose designs are related to the designs of the explosives used in this test. The scientists and engineers of BARC, the Atomic Minerals Directorate for Exploration and Research (AMDER), and the Defence Research and Development Organisation (DRDO) were involved in the nuclear weapon assembly, layout, detonation and data collection. On 3 June 1998, BARC was hacked by the hacktivist group milw0rm, consisting of hackers from the United States, United Kingdom and New Zealand. They downloaded classified information, defaced the website and deleted data from servers. BARC also designed the IPHWR (Indian Pressurized Heavy Water Reactor) class of reactors; the baseline 220 MWe design was developed from the Canadian CANDU reactor, and the design was later expanded into 540 MWe and 700 MWe designs. The IPHWR-220 was the first in the series of Indian pressurized heavy-water reactors designed by the Bhabha Atomic Research Centre. It is a Generation II reactor developed from the earlier CANDU-based RAPS-1 and RAPS-2 reactors built at Rawatbhata, Rajasthan. Currently there are 14 units operational at various locations in India. Upon completion of the IPHWR-220 design, a larger 540 MWe design was started around 1984 under the aegis of BARC in partnership with NPCIL. Two reactors of this design were built in Tarapur, Maharashtra, starting in the year 2000, and the first was commissioned on 12 September 2005. The IPHWR-540 design was later upgraded to 700 MWe with the main objective of improving fuel efficiency and developing a standardized design to be installed at many locations across India as a fleet-mode effort. The design was also upgraded to incorporate Generation III+ features. Almost 100% of the parts of these indigenously designed reactors are manufactured by Indian industry. BARC designed and built India's first pressurised water reactor at Kalpakkam, an 80 MW land-based prototype of INS Arihant's nuclear power unit, as well as the Arihant's main propulsion reactor. Three other submarines of the Arihant class, including the upcoming INS Arighat, S4 and S4*, will also use the same class of reactor as their primary propulsion. BARC also developed stabilization systems for seekers and antenna units for India's multirole fighter HAL Tejas, and contributed to the Chandrayaan-I and Mangalyaan missions. BARC has also contributed to and collaborated with various mega-science projects of national and international repute, viz. 
CERN (LHC), the India-based Neutrino Observatory (INO), ITER, the Low Energy High Intensity Proton Accelerator (LEHIPA), the Facility for Antiproton and Ion Research (FAIR), the Major Atmospheric Cerenkov Experiment Telescope (MACE), etc. In 2012 it was reported that new facilities and campuses of BARC were planned in Atchutapuram, near Visakhapatnam in Andhra Pradesh, and in Challakere in Chitradurga district in Karnataka. BARC would be setting up a 30 MW special research reactor using enriched uranium fuel at Visakhapatnam to meet the demand for high-specific-activity radioisotopes and to carry out extensive research and development in the nuclear sector. The site would also support the nuclear submarine programme. Description Though founded primarily to serve India's nuclear programme and the peaceful applications of nuclear energy, BARC is a premier multi-disciplinary research organisation that carries out extensive and advanced research and development covering the entire spectrum of nuclear science, chemical engineering, radiology and its application to health, food, medicine, agriculture and environment, accelerator and laser technology, electronics, high-performance computing, instrumentation and reactor control, materials science, environment and radiation monitoring, and high-energy physics and plasma physics, among others. Organisation and governance BARC is an agency of the Department of Atomic Energy. It is divided into a number of Groups, each under a director, and many more Divisions. Nuclear Recycle Board BARC's Nuclear Recycle Board (NRB) was formed in 2009. It is located in three cities – Mumbai, Tarapur, and Kalpakkam. Areas of research BARC conducts extensive and advanced research and development covering the entire spectrum of nuclear science, chemical engineering, material sciences and metallurgy, electronics instrumentation, biology and medicine, advanced computing, high-energy plasma physics and associated research for the Indian nuclear programme and related areas. A few of these are: Thorium fuel cycle India has a unique position in the world in terms of availability of nuclear fuel resources. It has a limited resource of uranium but a large resource of thorium. The beach sands of Kerala and Orissa have rich reserves of monazite, which contains about 8–10% thorium. Studies have been carried out on all aspects of the thorium fuel cycle – mining and extraction, fuel fabrication, utilisation in different reactor systems, evaluation of its various properties and irradiation behaviour, and reprocessing and recycling. Some of the important milestones achieved and technological progress made are as follows: The process of producing thoria from monazite is well established. IREL has produced several tonnes of nuclear-grade thoria powder. The fabrication of thoria-based fuel by the powder-pellet method is well established. A few tonnes of thoria fuel have been fabricated at BARC and NFC for various irradiations in research and power reactors. Studies have been carried out regarding the use of thorium in different types of reactors with respect to fuel management, reactor control and fuel utilisation. 
A Critical Facility has been constructed and is being used for carrying out experiments with thoria-based fuels. Thoria-based fuel irradiations have been carried out in Indian research and power reactors: thoria fuel rods in the reflector region of the research reactor CIRUS; thoria fuel assemblies as reactivity load in the research reactor Dhruva; thoria fuel bundles for flux flattening in the initial core of PHWRs; thoria blanket assemblies in FBTR; and (Th-Pu) MOX fuel pins of BWR, PHWR and AHWR design in the research reactors CIRUS and Dhruva. Post-irradiation examinations have been carried out on the irradiated PHWR thoria fuel bundles and (Th-Pu) MOX fuel pins. Thermo-physical and thermodynamic properties have been evaluated for the thoria-based fuels. Thoria fuel rods irradiated in CIRUS have been reprocessed at the Uranium Thorium Separation Facility (UTSF), BARC. The recovered 233U has been fabricated as fuel for the KAMINI reactor. Thoria blanket assemblies irradiated in FBTR have been reprocessed at IGCAR. The recovered 233U has been used for experimental irradiation of a PFBR-type fuel assembly in FBTR. Thoria fuel bundles irradiated in PHWRs will be reprocessed in the Power Reactor Thorium Reprocessing Facility (PRTRF). The recovered 233U will be used for reactor physics experiments in the AHWR Critical Facility. Advanced reactors AHWR and AHWR300-LEU have been designed at BARC to provide impetus to the large-scale utilisation of thorium. Reprocessing and nuclear waste management After a certain amount of energy has been extracted from it (a quantity known as burn-up), the nuclear fuel in a reactor is replaced by fresh fuel so that the fission chain reaction can be sustained and the desired power output maintained. The fuel discharged from the reactor is known as spent nuclear fuel (SNF). BARC has come a long way since it first began reprocessing of spent fuel in the year 1964 at Trombay. India has more than five decades of experience in reprocessing spent fuel from the uranium-based first-stage reactors, resulting in the development of a well-matured and highly evolved PUREX-based reprocessing flow sheet involving recovery of special nuclear material (SNM). Implementation of the thorium fuel cycle requires extraction of 233U from irradiated thorium fuel and its re-insertion into the fuel cycle. Based on indigenous efforts, a flow sheet for reprocessing of spent thoria rods was developed and demonstrated at the Uranium Thorium Separation Facility (UTSF), Trombay. After gaining successful experience at UTSF, the Power Reactor Thoria Reprocessing Facility (PRTRF) has been set up, employing advanced laser-based technology for dismantling thoria bundles and a single-pin mechanical chopper for cutting fuel pins. Irradiated thoria fuel bundles from PHWRs were reprocessed using TBP as the extractant to recover 233U. High Level Liquid Waste (HLLW) generated during reprocessing of spent fuel contains most of the radioactivity generated in the entire nuclear fuel cycle. The HLLW is immobilised into an inert sodium borosilicate glass matrix through a process called vitrification. The vitrified waste is stored for an interim period in an air-cooled vault to facilitate the dissipation of heat generated during radioactive decay, prior to its eventual disposal in a geological disposal facility. Vitrification of HLLW is a complex process and poses challenges in view of the high-temperature operations in the presence of large amounts of radioactivity. As a result, very few countries in the world have mastered the technology of vitrification of HLLW, and India is among them. 
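To give a feel for why freshly vitrified waste needs an interim period in a cooled vault, the fission-product decay heat can be estimated with the classical Way-Wigner approximation. The sketch below is illustrative only: the 0.066 coefficient is the common textbook value (the approximation is good only to within a factor of about two), and the reactor power and irradiation time are assumed numbers, not BARC data.

# Way-Wigner estimate of fission-product decay heat as a fraction of
# the power at which the fuel previously operated. Illustrative only.
def decay_heat_fraction(t_after_s: float, t_operating_s: float) -> float:
    return 0.066 * (t_after_s ** -0.2 - (t_after_s + t_operating_s) ** -0.2)

YEAR = 3.156e7          # seconds per year
P_OPERATING = 3000e6    # assumed thermal power during operation, watts
for years in (1, 10, 50):
    frac = decay_heat_fraction(years * YEAR, 3 * YEAR)  # 3 y irradiation assumed
    print(f"{years:2d} y after discharge: ~{frac * P_OPERATING / 1e3:,.0f} kW")

Even decades after discharge, the residual power of the waste from a single large core is of the order of tens of kilowatts, which is why vitrified canisters spend years in cooled storage before geological disposal.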
Three melter technologies, the Induction Heated Metallic Melter (IHMM), the Joule Heated Ceramic Melter (JHCM) and the Cold Crucible Induction Melter (CCIM), have been indigenously developed for vitrification of HLLW. HLLW vitrification plants, based on the IHMM or JHCM technologies, have been constructed and successfully operated at the Trombay, Tarapur and Kalpakkam sites. R&D in the field of partitioning of minor actinides from HLLW is aimed at separating out the long-lived radioactive waste constituents prior to immobilizing them in the glass matrix. The long-lived radio-contaminants are planned to be burnt in fast reactors or accelerator-driven sub-critical systems, converting them into short-lived species. This will reduce manyfold the need for long-term isolation of radionuclides from the environment. R&D is also directed towards the management of hulls (contaminated leftover pieces of zirconium cladding tube remaining after dissolution of the fuel) and towards a geological disposal facility for the safe disposal of vitrified HLLW and long-lived waste, with the objective of long-term isolation of radionuclides from the human environment. Advanced Fuel Fabrication Facility The Advanced Fuel Fabrication Facility (AFFF), a MOX fuel fabrication facility, is part of the Nuclear Recycle Board (NRB) and is located at Tarapur, Maharashtra. The Advanced Fuel Fabrication Facility has fabricated MOX fuels on an experimental basis for BWR, PHWR, FBTR and research reactors. It makes plutonium-based MOX fuel for stage 2 of the Indian nuclear programme. The unit has successfully fabricated more than 1 lakh (100,000) PFBR fuel elements for Bhavini's PFBR at Kalpakkam. AFFF is presently engaged in the fabrication of PFBR fuel elements for reloads of the PFBR. AFFF is also involved in AHWR (thorium MOX) fuel fabrication for the third stage of the Indian nuclear programme and is experimenting with different fabrication techniques. MOX fuel fabrication at AFFF follows the powder oxide pelletisation (POP) method. Major operations are mixing and milling, pre-compaction, granulation, final compaction, sintering, centreless grinding, degassing, end-plug welding, decontamination of fuel elements and wire wrapping. AFFF also recycles rejects, based on either thermal pulverisation or microwave-based oxidation and reduction. AFFF uses laser welding for encapsulation of fuel elements, along with GTAW. Basic and applied physics The interdisciplinary research includes investigation of matter under different physicochemical environments, including temperature, magnetic field and pressure. Reactors, ion and electron accelerators, and lasers are being employed as tools to investigate crucial phenomena in materials over wide length and time scales. Major facilities operated by BARC for research in the physical sciences include the Pelletron-Superconducting linear accelerator at TIFR, the National Facility for Neutron Beam Research (NFNBR) at Dhruva, a number of state-of-the-art beam lines at the INDUS synchrotron, RRCAT-Indore, the TeV Atmospheric Cherenkov Telescope with Imaging Camera (TACTIC) at Mt. Abu, the Folded Tandem Ion Accelerator (FOTIA) and PURNIMA fast neutron facilities at BARC, the 3 MV Tandetron accelerator at the National Centre for Compositional Characterization of Materials (NCCCM) at Hyderabad, and the 10 MeV electron accelerator at the Electron Beam Centre at Navi Mumbai. 
BARC also has sustained programmes of indigenous development of detectors, sensors, mass spectrometers, imaging techniques and multilayer mirrors. Recent achievements include: commissioning of the Major Atmospheric Cerenkov Experiment Telescope (MACE) at Ladakh; a time-of-flight neutron spectrometer at Dhruva; the beam-lines at INDUS (small- and wide-angle X-ray scattering (SWAXS), protein crystallography, infrared spectroscopy, extended X-ray absorption fine structure (EXAFS), photoelectron spectroscopy (PES/PEEM), energy- and angle-dispersive XRD, and imaging); commissioning of beam-lines and associated detector facilities at the BARC-TIFR Pelletron facility; the Low Energy High Intensity Proton Accelerator (LEHIPA) at BARC; and digital holographic microscopy for biological cell imaging at Vizag. The Low Energy High Intensity Proton Accelerator (LEHIPA) project is under installation at the common facility building in the BARC premises. The 20 MeV, 30 mA CW proton linac will consist of a 50 keV ion source, a 3 MeV, 4 m long radio-frequency quadrupole (RFQ), a 3–20 MeV, 12 m long drift-tube linac (DTL), and a beam dump. The Major Atmospheric Cerenkov Experiment Telescope (MACE) is an Imaging Atmospheric Cherenkov Telescope (IACT) located near Hanle, Ladakh, India. It is the highest (in altitude) and second largest Cherenkov telescope in the world. It was built by the Electronics Corporation of India, Hyderabad, for the Bhabha Atomic Research Centre and was assembled at the campus of the Indian Astronomical Observatory at Hanle. The telescope is the second-largest gamma-ray telescope in the world and will help the scientific community enhance its understanding in the fields of astrophysics, fundamental physics, and particle acceleration mechanisms. The largest telescope of the same class is the 28-metre-diameter High Energy Stereoscopic System (HESS) telescope operated in Namibia. Ongoing basic and applied research encompasses a broad spectrum covering condensed matter physics, nuclear physics, astrophysical sciences, and atomic and molecular spectroscopy. Important research areas include advanced magnetism, soft and nano-structured materials, energy materials, thin films and multi-layers, accelerator/reactor-based fusion-fission studies, nuclear astrophysics, nuclear data management, reactor-based neutrino physics, very-high-energy astrophysics and astro-particle physics. Some of the important ongoing developmental activities are: the Indian Scintillator Matrix for Reactor Anti-Neutrinos (ISMRAN), neutron guides, polarizers and neutron supermirrors, Nb-based superconducting RF cavities, high-purity germanium detectors, 2-D neutron detectors, cryogen-free superconducting magnets, an electromagnetic separator for radio-isotopes, nuclear batteries and radioisotope thermoelectric generator (RTG) power sources, and a liquid-hydrogen cold neutron source. Other activities include research and development towards the India-based Neutrino Observatory (INO) and quantum computing. High-performance computing BARC designed and developed a series of supercomputers for its internal use. They are mainly used for molecular dynamics simulations, reactor physics, theoretical physics, computational chemistry, computational fluid dynamics, and finite element analysis. The latest in the series is Anupam-Aganya. BARC started the development of supercomputers under the ANUPAM project in 1991 and, to date, has developed more than 20 different computer systems. 
All ANUPAM systems have employed parallel processing as the underlying philosophy and MIMD (Multiple Instruction Multiple Data) as the core architecture. BARC, being a multidisciplinary research organization, has a large pool of scientists and engineers working on various aspects of nuclear science and technology, and thus involved in computation of a diverse nature. To keep the gestation period short, the parallel computers were built with commercially available off-the-shelf components, with BARC's major contribution being in the areas of system integration, system engineering, system software development, application software development, fine-tuning of the system and support to a diverse set of users. The series started with a small four-processor system in 1991 with a sustained performance of 34 MFlops. Keeping in mind the ever-increasing demands of the users, new systems have been built regularly with increasing computational power. The latest systems in the series are Anupam-Aganya, with a processing power of 270 TFlops, and the parallel processing supercomputer Anupam-Atulya, which provides a sustained LINPACK performance of 1.35 PetaFlops for solving complex scientific problems. Electronics instrumentation and computers BARC's research and development programme in electrical, electronics, instrumentation and computers serves the field of nuclear science and technology, and this has resulted in the development of various indigenous technologies. In the field of nuclear energy, many control and instrumentation (C&I) systems, including in-service inspection systems, were designed, developed and deployed for nuclear reactors ranging from PHWR, AHWR, LWR and PFBR to new-generation research reactors, as well as C&I for reprocessing facilities. Simulators for nuclear power plants are of immense value, as they provide the best training facilities for reactor personnel and also support the licensing of reactor operators. Core competencies cover a wide spectrum and include process sensors, radiation detectors, nuclear instruments, microelectronics, MEMS, embedded real-time systems, modelling and simulation, computer networks, high-integrity software engineering, high-performance DAQ systems, high-voltage supplies, digital signal processing, image processing, deep learning, motion control, security electronics, and medical electronics. Developments include stabilization systems for seekers; the antenna platform unit for the LCA HAL Tejas multi-mode radar; the servo system for the Indian Deep Space Network's IDSN32, a 32-metre antenna which tracked Chandrayaan-I and Mangalyaan; an instrumented PIG for oil pipeline inspection; servo control and camera electronics for the MACE telescope; and radiometry and radiation monitoring systems. Various technology spin-offs include products developed for industrial, medical, transportation, security, aerospace and defence applications. Generic electronic products such as a qualified programmable logic controller platform (TPLC-32) suitable for deployment in safety-critical applications, reactivity meters, machinery protection systems, security gadgets for physical protection, access control systems, perimeter intrusion detection systems, CCTV and video surveillance systems, scanning electron microscopes, and VHF communication systems have been developed as part of the indigenization process. Material Sciences and Engineering Materials science and engineering plays an important role in all aspects of BARC's work, including sustaining and supporting the Indian nuclear programme and developing advanced technologies. 
Minerals containing elements of interest to the DAE, e.g. uranium and rare-earth elements, are taken up for developing beneficiation techniques and flow sheets to improve the metal value for extraction. The metallic uranium required for research reactors is produced. Process efficiency improvements for operating uranium mills are developed, and the inputs are implemented at plants by the Uranium Corporation of India. Process flow sheets to separate individual rare-earth oxides from different resources (including secondary sources, e.g. scrap and used products) are developed and demonstrated, and the technology is transferred to Indian Rare Earths Limited (IREL) for production at its plants. All the requirements for refractory materials for DAE applications, including neutron absorber applications, are being met by research, development and production in the Materials Group. The Materials Group works on the development of flow sheets and processes for the materials required for DAE plants and applications, e.g. titanium sponge, advanced alloys and coatings, using various processes including pack cementation, chemical vapour deposition, physical vapour deposition, and electroplating/electroless plating. Recovery of high-purity cobalt from various wastes and scrap material has also been demonstrated, and the technologies transferred for production. Research aimed at advanced materials technologies is carried out using thermodynamics, mechanics, simulation and modelling, characterisation and performance evaluation. Studies aimed at understanding radiation damage in materials are undertaken using advanced characterization techniques to help in alloy development and material degradation assessment activities. Generation of thermo-physical and defect property databases of nuclear materials (e.g. thoria-based mixed oxide and metallic fuels), and studies on Fe-Zr alloys and natural and synthetic minerals as hosts for metallic waste immobilization through modelling and simulations, are being pursued. Development of novel solvents to extract selected elements from nuclear waste for medical applications, and specific metallic values from e-waste, is also being done. Technologies such as large-scale synthesis of carbon nanotubes (CNT); low-carbon ferro-alloys (FeV, FeMo, FeNb, FeW, FeTi and FeC); production of tungsten metal powder and fabrication of tungsten (W) and tungsten heavy alloy (WHA); and production of zirconium diboride (ZrB2) powder and fabrication of high-density ZrB2 shapes have been realised. Chemical Engineering and Sciences The key features underlying the development effort are self-reliance, achieving products with very high purity specifications, working with separation processes characterized by low separation factors, aiming at high recoveries, optimal utilization of scarce resources, environmental benignity, high energy efficiency and stable continuous operation. Non-power application of nuclear energy has been demonstrated in the area of water desalination using technologies such as multi-stage flash distillation and multi-effect distillation with thermo-vapour compression (MED-TVC). Membrane technologies have been deployed not only for nuclear waste treatment but for society at large, in line with the Jal Jeevan Mission of the Government of India to provide safe drinking water at the household level. 
Ongoing activities include development and demonstration of fluidized-bed technology for applications in the nuclear fuel cycle; synthesis and evaluation of novel extractants; synthesis of TBM materials (synthesis of lithium titanate pebbles); molecular modeling of various phenomena (such as permeation of hydrogen and its isotopes through different metals, desalination using carbon nanotubes, the effect of glass composition on properties relevant for vitrification, and the design of solvents and metal organic frameworks); applications of microreactors for intensification of specific processes; development of a low-temperature freeze desalination process; environment-friendly integrated zero-liquid-discharge desalination systems; treatment of industrial effluents; new-generation membranes (such as high-performance graphene-based nanocomposite membranes, membranes for haemodialysis, forward osmosis, and metallic membranes); hydrogen generation and storage by various processes (electrochemical water splitting, and the iodine-sulphur and copper-chlorine hybrid thermochemical cycles); development of adsorptive gel materials for specific separations; heavy water upgradation; metal coatings for various applications (such as membrane permeators, neutron generators and special applications); fluidized-bed chemical vapour deposition; and chemical process applications of Ultrasound Technology (UT). A pre-cooled, modified Claude cycle based helium liquefier of 50 L/hr capacity (LHP50) has been developed and commissioned by BARC at Trombay. Major component technologies involved in LHP50 include ultra-high-speed, gas-bearing-supported miniature turboexpanders and compact plate-fin heat exchangers, along with cryogenic piping and long-stem valves, all housed inside the LHP50 cold box. Other major equipment includes a coaxial helium transfer line and a liquid helium receiver vessel. Environment, Radiology and Radiochemical Science BARC also carries out environmental impact and dose/risk assessment for radiological and chemical contaminants, environmental surveillance and radiation protection for the entire range of nuclear fuel cycle facilities, and meteorological and hydro-geological investigations for DAE sites. Further activities include modelling of contaminant transport and dispersion in the atmosphere and hydrosphere, radiological impact assessment of waste management and disposal practices, development of environmental radiation monitoring systems, establishment of a country-wide radiation monitoring network, and establishment of benchmarks for assessing the radiological impact of nuclear power activities on the marine environment. The highlights of these programmes are positron and positronium chemistry, actinide chemistry and spectroscopy, isotope hydrology for water resource management, radiotracers for industrial applications, separation and purification of new radionuclides for medical applications, advanced fuel development by the sol-gel method, chemical quality control of nuclear fuels, complexation and speciation of actinides, and separation method development for back-end fuel cycle processes. 
The other major research projects are thermo-physical property evaluation of molten salt breeder reactor (MSBR) systems, development of core-catcher materials, hydrogen mitigation, catalysts for hydrogen production, hydrogen storage materials, nanotherapeutics and bio-sensors, decontamination of reactor components, biofouling control and thermal ecology studies, supramolecular chemistry, environmental and interfacial chemistry, ultrafast reaction dynamics, single-molecule spectroscopy, synthesis and applications of nanomaterials, cold plasma applications, luminescent materials for bio-imaging, and materials for light-emitting devices and security applications. Health, Food and agriculture Development of new elite crop varieties, including oil seeds and pulses: using radiation-induced mutagenesis, hybridization, and tissue culture techniques, 49 crop varieties have been developed, released and Gazette-notified for commercial cultivation. Development of molecular markers, transgenics, biosensors, and fertilizer formulations with improved nutrient-use efficiency. Understanding DNA damage repair, replication, redox biology and autophagy, and development of radio-sensitizers and chemo-sensitizers for cancer therapy. Design and synthesis of organo-fluorophores and organic electronic molecules, relevant to nuclear sciences and societal benefits (advanced technology and health). Synthesis and development of nuclear medicine ligands for diagnosis and therapy of cancer and other diseases. Asymmetric total synthesis and organocatalytic methods (a green chemistry approach) for the synthesis of biologically active compounds. R&D activities in the frontier areas of radiation biology for understanding the effects of low- and high-LET radiation, chronic and acute radiation exposure, high background radiation, and radionuclide exposure on mammalian cells, cancer cells, experimental rodents and human health. Preclinical and translational research aimed at the development of new drugs and therapeutics for prevention and mitigation of radiation injury, decorporation of heavy metals, and treatment of inflammatory disorders and cancers. Studying macromolecular structures and protein-ligand interactions using biophysical techniques like X-ray crystallography, neutron scattering, circular dichroism and synchrotron radiation, with an aim of ab-initio design of therapeutic molecules. Understanding the cellular and molecular basis of stress response in bacteria, plants and animals. Understanding the extraordinary resistance to DNA damage and oxidative stress tolerance in bacteria, and the epigenetic regulation of alternative splicing in plants and mammalian cells. Development of CRISPR-Cas-mediated genome editing technologies for both basic and applied research, and development of gene technologies and products for bio-medical applications. Studies on uranium sequestration by Nostoc and bacteria isolated from uranium mines. Research and development of novel radiopharmaceuticals for diagnostic and therapeutic purposes. Synthesis of substrates from suitable precursors for use in radio-labeling with diagnostic (99mTc) and therapeutic (177Lu, 153Sm, 166Ho, 186/188Re, 109Pd, 90Y, 175Yb, 170Tm) radioisotopes in the preparation of agents intended for use as radiopharmaceuticals. 
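The phrase "high specific activity", used here and earlier for the isotopes BARC produces, has a simple quantitative meaning: the activity per gram of a pure (carrier-free) isotope is fixed entirely by its half-life and molar mass. The sketch below computes it for 177Lu; the half-life (about 6.65 days) and molar mass (about 177 g/mol) are standard reference values used as assumptions here, not BARC figures.

import math

AVOGADRO = 6.022e23  # atoms per mole

def specific_activity_bq_per_g(half_life_s: float, molar_mass_g_mol: float) -> float:
    # A = lambda * N, with lambda = ln(2)/t_half and N = N_A/M atoms per gram.
    decay_constant = math.log(2) / half_life_s
    return decay_constant * AVOGADRO / molar_mass_g_mol

sa = specific_activity_bq_per_g(6.65 * 86400, 177.0)  # lutetium-177
print(f"carrier-free 177Lu: {sa:.2e} Bq/g (~{sa / 3.7e10:,.0f} Ci/g)")

This works out to roughly 4 x 10^15 Bq/g (of the order of 10^5 Ci/g), which is why short-lived therapeutic isotopes can deliver large doses from tiny masses.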
Custom preparation of special sources to suit the requirements of the Defence Research and Development Organisation (DRDO) and national research laboratories such as the National Physics Research Laboratory, ISRO, etc. India's three-stage nuclear power programme India's three-stage nuclear power programme was formulated by Homi Bhabha in the 1950s to secure the country's long-term energy independence through the use of the uranium and thorium reserves found in the monazite sands of the coastal regions of South India. The ultimate focus of the programme is on enabling the thorium reserves of India to be utilised in meeting the country's energy requirements. Thorium is particularly attractive for India, as it has only around 1–2% of the global uranium reserves, but one of the largest shares of global thorium reserves, at about 25% of the world's known thorium reserves. Stage I – Pressurised Heavy Water Reactor In the first stage of the programme, natural-uranium-fuelled pressurised heavy water reactors (PHWRs) produce electricity while generating plutonium-239 as a by-product. The PHWR was a natural choice for implementing the first stage because it was the most efficient reactor design in terms of uranium utilisation, and the existing Indian infrastructure in the 1960s allowed for quick adoption of the PHWR technology. Natural uranium contains only 0.7% of the fissile isotope uranium-235. Most of the remaining 99.3% is uranium-238, which is not fissile but can be converted in a reactor to the fissile isotope plutonium-239. Heavy water (deuterium oxide, D2O) is used as moderator and coolant. Stage II – Fast Breeder Reactor In the second stage, fast breeder reactors (FBRs) would use a mixed oxide (MOX) fuel made from plutonium-239, recovered by reprocessing spent fuel from the first stage, and natural uranium. In FBRs, plutonium-239 undergoes fission to produce energy, while the uranium-238 present in the mixed oxide fuel transmutes to additional plutonium-239. Thus, the Stage II FBRs are designed to "breed" more fuel than they consume. Once the inventory of plutonium-239 is built up, thorium can be introduced as a blanket material in the reactor and transmuted to uranium-233 for use in the third stage. The surplus plutonium bred in each fast reactor can be used to set up more such reactors, and might thus grow the Indian civil nuclear power capacity to the point where the third-stage reactors using thorium as fuel can be brought online. The design of the country's first fast breeder, called the Prototype Fast Breeder Reactor (PFBR), was done by the Indira Gandhi Centre for Atomic Research (IGCAR). Doubling time Doubling time refers to the time required to extract as output double the amount of fissile fuel that was fed as input into the breeder reactors. This metric is critical for understanding the time durations that are unavoidable while transitioning from the second stage to the third stage of Bhabha's plan, because building up a sufficiently large fissile stock is essential to the large deployment of the third stage; a rough illustration is sketched below. Stage III – Thorium Based Reactors A Stage III reactor, or an advanced nuclear power system, involves a self-sustaining series of thorium-232–uranium-233 fuelled reactors. This would be a thermal breeder reactor, which in principle can be refueled – after its initial fuel charge – using only naturally occurring thorium. 
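As a hedged illustration of why the doubling time dominates the schedule: under ideal breeding the fissile inventory grows geometrically, stock(t) = stock_0 * 2^(t / T_d). The numbers below (initial stock, time horizon, candidate doubling times) are hypothetical, chosen only to show the sensitivity; they are not DAE projections.

# Toy model: growth of the fissile stock (and hence of deployable FBR
# capacity) for an assumed breeder doubling time T_d. Hypothetical numbers.
def fissile_stock(initial: float, years: float, doubling_time_years: float) -> float:
    return initial * 2 ** (years / doubling_time_years)

INITIAL_TONNES = 10.0     # assumed starting plutonium inventory
for t_d in (10, 15, 20):  # assumed doubling times, in years
    grown = fissile_stock(INITIAL_TONNES, 40, t_d)
    print(f"T_d = {t_d:2d} y -> stock after 40 y: {grown:5.1f} t ({grown / INITIAL_TONNES:.1f}x)")

Halving the doubling time squares the growth factor over a fixed horizon, which is why reactor designs and fuel cycles that shorten T_d matter so much for when Stage III can begin.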
According to the three-stage programme, Indian nuclear energy could grow to about 10 GW through PHWRs fueled by domestic uranium, and the growth above that would have to come from FBRs, till about 50 GW. The third stage is to be deployed only after this capacity has been achieved. Parallel approaches As there is a long delay before direct thorium utilisation in the three-stage programme, the country is looking at reactor designs that allow more direct use of thorium in parallel with the sequential three-stage programme. Three options under consideration are the Indian Accelerator Driven Systems (IADS), the Advanced Heavy Water Reactor (AHWR) and the Compact High Temperature Reactor. A Molten Salt Reactor is also under development. India's Department of Atomic Energy and the US's Fermilab are designing unique, first-of-their-kind accelerator-driven systems. No country has yet built an accelerator-driven system for power generation. Anil Kakodkar, former chairman of the Atomic Energy Commission, called this a mega-science project and a "necessity" for humankind. Reactor design BARC has developed a wide array of nuclear reactor designs for nuclear research, production of radioisotopes, naval propulsion and electricity generation. Research reactors and production of radioisotopes Commercial reactors and power generation Pressurized heavy-water reactors BARC has developed various sizes of the IPHWR class of pressurized heavy-water reactors, powered by natural uranium, for the first stage of the three-stage nuclear power programme; these produce electricity and plutonium-239 to fuel the fast-breeder reactors being developed by IGCAR for the second stage of the programme. The IPHWR class was developed from the CANDU reactors built at RAPS in Rawatbhata, Rajasthan. As of 2020, three successively larger designs, the IPHWR-220, IPHWR-540 and IPHWR-700, with electricity generation capacities of 220 MWe, 540 MWe and 700 MWe respectively, have been developed. Advanced heavy-water reactor BARC is developing a 300 MWe advanced heavy-water reactor design, powered by thorium-232 and uranium-233, to power the third stage of India's three-stage nuclear power programme. The AHWR is designed to operate on a closed nuclear fuel cycle. AHWR-300 is expected to have a design life close to 100 years and will utilise uranium-233 produced in the fast-breeder reactors being developed by IGCAR. Indian molten salt breeder reactor The Indian molten salt breeder reactor (IMSBR) is the platform for burning thorium as part of the third stage of the Indian nuclear power programme. The fuel in the IMSBR is in the form of a continuously circulating molten fluoride salt which flows through heat exchangers, ultimately transferring heat for power production to a supercritical Brayton cycle (SCBC), so as to obtain a larger energy conversion ratio compared to existing power conversion cycles. Because of the fluid fuel, online reprocessing is possible, extracting the 233Pa (formed in the conversion chain of 232Th to 233U) and allowing it to decay to 233U outside the core, thus making it possible to breed even in the thermal neutron spectrum. Hence the IMSBR can operate in a self-sustaining 233U-Th fuel cycle. Additionally, being a thermal reactor, its 233U requirement is lower (compared to the fast spectrum), allowing higher deployment potential. Light-water reactors BARC, with experience gained from the development of the light-water reactor for the Arihant-class submarine, is developing a large 900 MWe pressurized water reactor design known as the IPWR-900. 
The design will include Generation III+ safety features such as a passive decay heat removal system, an emergency core cooling system (ECCS), and a corium retention and core catcher system. Marine propulsion for naval application BARC has developed multiple light-water reactor designs suitable for nuclear marine propulsion for Indian Navy submarines, beginning with the CLWR-B1 reactor design for the Arihant-class submarine. A total of four submarines will be built in this class. India and the NPT India is not a party to the Nuclear Non-Proliferation Treaty (NPT), citing concerns that it unfairly favours the established nuclear powers and provides no provision for complete nuclear disarmament. Indian officials argued that India's refusal to sign the treaty stemmed from its fundamentally discriminatory character; the treaty places restrictions on the non-nuclear weapons states but does little to curb the modernisation and expansion of the nuclear arsenals of the nuclear weapons states. More recently, India and the United States signed an agreement to enhance nuclear cooperation between the two countries, and for India to participate in an international consortium on fusion research, ITER (International Thermonuclear Experimental Reactor). Civilian research BARC also researches biotechnology at the Gamma Gardens and has developed numerous disease-resistant and high-yielding crop varieties, particularly groundnuts. It also conducts research in liquid metal magnetohydrodynamics for power generation. On 4 June 2005, intending to encourage research in basic sciences, BARC started the Homi Bhabha National Institute. Research institutions affiliated to BARC include IGCAR (Indira Gandhi Centre for Atomic Research), RRCAT (Raja Ramanna Centre for Advanced Technology), and VECC (Variable Energy Cyclotron Centre). Power projects that have benefited from BARC expertise but which fall under the NPCIL (Nuclear Power Corporation of India Limited) are KAPP (Kakrapar Atomic Power Project), RAPP (Rajasthan Atomic Power Project), and TAPP (Tarapur Atomic Power Project). In addition to its nuclear research mandate, BARC also conducts research in other high-technology areas such as accelerators, micro electron beams, materials design, supercomputers, and computer vision, among others. BARC has dedicated departments for these specialized fields. BARC has designed and developed, for its own use, an infrastructure of ANUPAM supercomputers using state-of-the-art technology. See also IPHWR, class of PHWR electricity generation reactors designed by BARC AHWR, thorium fuelled reactor being designed by BARC Milw0rm#BARC attack Department of Atomic Energy, Government of India Indira Gandhi Centre for Atomic Research Raja Ramanna Centre for Advanced Technology Variable Energy Cyclotron Centre Homi Bhabha Cancer Hospital and Research Centre (disambiguation) References 1954 establishments in Bombay State Atomic Energy Commission of India Companies based in Mumbai Executive branch of the government of India Homi Bhabha National Institute Nuclear technology in India Research institutes in Mumbai Technology companies established in 1954 Research institutes established in 1954 Energy research Nuclear research institutes
Bhabha Atomic Research Centre
Engineering
9,684
32,435,807
https://en.wikipedia.org/wiki/SN%20393
SN 393 is the modern designation for a probable supernova that was reported by the Chinese in the year 393 CE. An extracted record of this astronomical event has been translated into English. The second lunar month mentioned in the record corresponds to the period 27 February to 28 March 393 CE, while the ninth lunar month ran from 22 October to 19 November 393 CE. The bowl-shaped asterism named Wěi is formed by the tail of the modern constellation Scorpius. This asterism consists of the stars in Scorpius designated ε, μ, ζ, η, θ, ι, κ, λ and ν. The guest star reached an estimated apparent magnitude of −1 and was visible for about eight months before fading from sight; this lengthy duration suggests the source was a supernova. However, a classical nova is not excluded as a possibility. Suggested as supernova Before 1975, the observation made by the Chinese between February and March 393 CE was considered likely to be a bright nova with a secondary maximum. At the time, there were only seven possible candidate supernova remnants near where SN 393 was observed. Assuming the maximum of −1 magnitude occurred at a distance of close to 10,000 parsecs, this immediately ruled out four possible candidates. Another discounted remnant was G350.0-1.8, as the expected expansion rate indicated the supernova occurred around 8,000 years ago. The two remaining sources, G348.5+0.1 and G348.7+0.3, were both at the required 10,000 pc distance and each had an estimated age of 1,500 years. If true, it seems unlikely such supernovae would have been visible to the naked eye over eight months, especially because they occurred close to a particularly dusty part of the galactic plane. Stephenson and his colleagues preferred the supernova suggestion. In their most recent book and subsequent articles, Stephenson and Green refer to the suggestion by Wang et al. (1997), who proposed G347.3–00.5. Suggested as classical nova The decline time of classical novae is typically measured as the duration of a decline by 3 mag from peak. This so-called t3 time ranges from a typical 25–30 days (a month or two) for fast novae up to ten months for the slowest known classical novae (and even longer for diffusion-induced novae). Thus, this historical transient could easily have been caused by a (slow) classical nova: postulating a peak brightness of (at least) 2 mag for the historical sighting and vanishing to invisibility (>5 mag) within 8 months, it could be a slow nova. The brighter the peak, the faster the nova: if the peak was −1 mag (like Sirius) or −4 (like Venus) and declined to >5 mag within eight months (6 mag or more in eight months), it could also refer to a moderately fast nova. Possible (and certainly not the only) candidates in the Chinese constellation of Wei have been proposed. Possible confirmation of SN 393 During 1996, the ROSAT All Sky Survey discovered another nearby supernova remnant, RX J1713.7-3946, which two years later was suggested as a better match for SN 393. Observations in 1999 suggested that this remnant was associated with an H II region, G347.611+0.204, implying a relatively large distance, but in 2003 a study of the interactions between a nearby molecular cloud and the expanding remnant found a closer distance. In 2004, measurements of the degree of X-ray and neutral hydrogen absorption by intervening matter between the remnant and Earth confirmed this closer distance, from which the true physical diameter of the remnant follows from its apparent angular diameter of about 1.2° or 70 arcminutes. 
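As a hedged worked example of the reasoning used to screen candidates (the distance is the 10,000 pc figure quoted above; interstellar extinction is ignored, which would only make the intrinsic event brighter), the distance modulus gives the absolute magnitude:

$$ M = m - 5\log_{10}\!\left(\frac{d}{10\,\mathrm{pc}}\right) = -1 - 5\log_{10}(1000) = -16 , $$

which sits in the range of supernova peak magnitudes (roughly −16 to −19) and is far brighter than any classical nova (roughly −6 to −9). Likewise, the physical diameter of a remnant follows from the small-angle relation $D = d\,\theta$; for an assumed, purely illustrative distance of $d = 1\,\mathrm{kpc}$, an angular diameter of $\theta = 1.2^{\circ} \approx 0.021\ \mathrm{rad}$ corresponds to $D \approx 21\ \mathrm{pc}$.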
The supernova remnant RX J1713.7-3946 is consistent with a type II or type Ib supernova. SN 393's progenitor had a mass of at least 15 solar masses; its destruction released an enormous amount of energy, with three solar masses of material ejected into the surrounding interstellar medium. See also Chinese astronomy Chinese constellations References Historical supernovae Supernova remnants Scorpius 393
SN 393
Astronomy
865
77,373,411
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Watch%207
The Samsung Galaxy Watch 7 (stylized as Samsung Galaxy Watch7) is a series of Wear OS-based smartwatches developed by Samsung Electronics. It was announced on July 10, 2024, at Samsung's biannual Galaxy Unpacked event. The watches were launched on July 24, 2024. Specifications References External links Consumer electronics brands Products introduced in 2024 Smartwatches Samsung wearable devices Watch 7 Wear OS devices
Samsung Galaxy Watch 7
Technology
89
65,617,852
https://en.wikipedia.org/wiki/Zlatko%20Tesanovic
Zlatko Boško Tešanović (August 1, 1956 – July 26, 2012) was a Yugoslav-American theoretical condensed-matter physicist whose work focused mainly on high-temperature superconductors (HTS) and related materials. His particular research interests were in the areas of theoretical condensed matter physics, revolving primarily around iron- and copper-based high-temperature superconductors, quantum Hall effects (QHE), superconductivity and strongly correlated electron materials. His broad knowledge of condensed matter physics, his deep understanding of the effects of strong magnetic fields, and his talent for exposition were influential. Biography He was born in Sarajevo, in the former Yugoslavia (present-day Bosnia and Herzegovina). In 1979, he received a B.Sc. in physics from the University of Sarajevo. He then received a Fulbright Fellowship and attended the University of Minnesota, where he earned a Ph.D. in physics in 1985. He became a naturalized American citizen. He worked as a professor of physics at Johns Hopkins University (JHU) in the Henry A. Rowland Department of Physics and Astronomy in Baltimore from July 1987 until his death on July 26, 2012. Previously, he served as director of the TIPAC Theory Center at JHU. He was a foreign member of the Royal Norwegian Society of Sciences and Letters and a fellow of the APS Division of Condensed Matter Physics (DCMP). He served, until his death, as a member of the committee to Assess the Current Status and Future Direction of High Magnetic Field Science in the United States, and contributed strongly to it. Students Among his graduate students are: Lei Xing (Jacob Haimson Professor, Stanford University) Igor F. Herbut (Professor, Simon Fraser University) Anton Andreev (Associate Professor, University of Washington) Sasha Dukan (Professor and Chair of Physics, Goucher College) Oskar Vafek (Associate Professor, Florida State University and NHMFL) Ashot Melikyan (Editor, Physical Review B) Andrés Concha (Postdoctoral Fellow, Harvard SEAS) Valentin Stanev (Postdoctoral Fellow, Argonne National Laboratory) Jian Kang (Grad student, Johns Hopkins University) Works He gave more than 100 invited talks at scientific meetings, including major international conferences, and authored more than 125 published scientific papers as well as a book. Honors and awards Fulbright Fellowship, U.S. Institute of International Education (1980) Shevlin Fellowship, University of Minnesota (1983) Stanwood Johnston Memorial Fellowship, University of Minnesota (1984) J. R. Oppenheimer Fellowship, Los Alamos National Laboratory, 1985 (declined) David and Lucile Packard Foundation Fellowship (1988-1994) Inaugural Speaker, J. R. Schrieffer Tutorial Lecture Series, National High Magnetic Field Laboratory (1997) Foreign Member, The Royal Norwegian Society of Sciences and Letters Fellow, The American Physical Society, Division of Condensed Matter Physics He received grants from the Department of Energy, and the National Science Foundation awarded him a post-doctoral fellowship that enabled him to spend two years studying at Harvard University. Death He died on July 26, 2012, at the age of 55 of an "apparent" heart attack at the George Washington University Hospital in Washington, D.C., after collapsing at Reagan National Airport. On March 23, 2013, the Johns Hopkins University Department of Physics and Astronomy organised a memorial symposium as a tribute to him, at which a number of distinguished speakers were invited to highlight his scientific accomplishments. 
See also List of American Physical Society Fellows (2011–) List of theoretical physicists Piers Coleman Alexei Alexeyevich Abrikosov Edward Witten Joseph Polchinski Notes References External links Are iron pnictides new cuprates? by Zlatko Tesanovic — American Physical Society Profile on Blogger — Blogger.com Zlatko Tesanovic: What is the theory of the Fe-pnictides? Curriculum vitae of Dr. Zlatko B. Tešanović 1956 births 2012 deaths Scientists from Sarajevo American string theorists American condensed matter physicists Yugoslav emigrants to the United States Bosniaks of Bosnia and Herzegovina Serbs of Bosnia and Herzegovina Johns Hopkins University faculty Fellows of the American Physical Society Superconductivity Death in Washington, D.C.
Zlatko Tesanovic
Physics,Materials_science,Engineering
864
44,921,122
https://en.wikipedia.org/wiki/Brazilian%20units%20of%20measurement
A number of different units of measurement were used in Brazil to measure quantities including length, area, volume, and mass; those units were derived from Portugal and had significant local variations. In 1814, as part of the Portuguese Empire, Brazil adopted the new Portuguese metric system, which was based on the original metric system but with its units bearing traditional Portuguese names. This system was not, however, widely adopted and was soon abandoned, with the Portuguese customary units continuing to be used. In 1862 the metric system finally became compulsory in Brazil, and it was consolidated in 1972. Pre-metric units A number of units were used with local variations. Length A number of different units were used in Brazil to measure length. One pé (foot) was equal to 0.33 m (with local variations). Some of the other units are given below: 1 polegada (inch) = pé 1 palmo (palm) = pé 1 vara (yard) = pés 1 passo geométrico (pace) = 5 pés 1 braça (fathom) = pés 1 légua (league) = 20,000 pés. Mass A number of different units were used in Brazil to measure mass. One libra (pound) was equal to 459.05 g (with local variations). Some of the other units are given below: 1 onça (ounce) = libra 1 marco (mark) = 1/2 libra 1 arroba = 32 libras (one arroba métrica is equal to 15 kg; on the Santos market exchange, one arroba was 10 kg) 1 quintal (hundredweight) = 128 libras 1 tonelada (ton) = 1,728 libras. The quilate (karat), used to measure the mass of gems, was equal to 3.075 grains, and the outava, used to measure the mass of topazes, was equal to 57.17 grains. Area Different units were used to measure area in Brazil, often with significant local variations. One tarefa was equal to 3,000–4,000 m2. One alqueire was equal to 24,200 or 48,400 m2 (it was equal to 8 salamis, and in Minas Gerais it was 33 L). Volume A number of different units with notable local variations were used in Brazil. One almude was equal to 31.944 L. One alqueire was equal to 40 to 320 L (generally 33 L for grain). (According to some sources, 1 alqueire = 5.324 L, 1 alqueire (salt) = 4.076 L, 1 alqueire (common) = 3.626 L, 1 alqueire (Bahia) = 3.524 L, and 1 alqueire = 1/6 almude.) Some of the other units are provided below: 1 canada = almude 1 moio = 10 almudes 1 pipa = 12 almudes 1 tonel = 30 almudes. The cargueiro (mule load) consisted of two small barrels of 40 L each. See also Portuguese customary units Quebra–Quilos revolt References External links Units of measurement by country Standards of Brazil Historical geography of Brazil Empire of Brazil
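As a rough illustration of the conversions above, here is a minimal Python sketch (an editorial illustration, not part of the source entry) using only the nominal values quoted in the text: pé = 0.33 m, libra = 459.05 g, and the pé/libra multiples listed. Real historical conversions varied locally.

```python
# Converting pre-metric Brazilian units using the nominal values quoted
# above (pe = 0.33 m, libra = 459.05 g).  Actual values varied locally.

PE_M = 0.33          # 1 pé (foot) in metres, nominal
LIBRA_G = 459.05     # 1 libra (pound) in grams, nominal

LENGTH_IN_PES = {"passo geométrico": 5, "légua": 20_000}
MASS_IN_LIBRAS = {"arroba": 32, "quintal": 128, "tonelada": 1_728}

def length_to_metres(unit: str, amount: float = 1.0) -> float:
    """Convert a length in a pre-metric unit to metres."""
    return amount * LENGTH_IN_PES[unit] * PE_M

def mass_to_kg(unit: str, amount: float = 1.0) -> float:
    """Convert a mass in a pre-metric unit to kilograms."""
    return amount * MASS_IN_LIBRAS[unit] * LIBRA_G / 1000.0

print(length_to_metres("légua"))   # 6600.0 m, i.e. one league = 6.6 km
print(mass_to_kg("arroba"))        # ≈ 14.69 kg (vs. the 15 kg arroba métrica)
```

Note how the computed arroba (≈14.69 kg) sits close to, but below, the rounded 15 kg arroba métrica mentioned above, which is consistent with the article's point about local variance.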
Brazilian units of measurement
Mathematics
666
58,848,157
https://en.wikipedia.org/wiki/Ro%203-0422
Ro 3-0422 is an extremely potent organophosphate acetylcholinesterase inhibitor. It is extremely toxic, with an intravenous LD50 of 20 μg/kg in mice. It is over 300 times more potent than neostigmine. See also Ro 3-0419 References Acetylcholinesterase inhibitors Organophosphates Quinolines Ethyl esters Quaternary ammonium compounds Methylsulfates
Ro 3-0422
Chemistry
91
20,902,402
https://en.wikipedia.org/wiki/Reichstein%20process
The Reichstein process in chemistry is a combined chemical and microbial method for the production of ascorbic acid from D-glucose that takes place in several steps. This process was devised by Nobel Prize winner Tadeusz Reichstein and his colleagues in 1933 while working in the laboratory of the ETH in Zürich. Reaction steps The reaction steps are: Hydrogenation of D-glucose to D-sorbitol, an organic reaction with nickel as a catalyst under high temperature and high pressure. Microbial oxidation or fermentation of sorbitol to L-sorbose with Acetobacter at pH 4–6 and 30 °C. Protection of the four hydroxyl groups in sorbose by formation of the acetal with acetone and an acid, giving diacetone-L-sorbose (2,3:4,6-diisopropylidene-α-L-sorbose). Organic oxidation with potassium permanganate (to diprogulic acid), followed by heating with water, gives 2-keto-L-gulonic acid. The final step is a ring-closing step, or gamma lactonization, with removal of water. The 2-keto-L-gulonic acid (intermediate 5) can also be prepared directly from L-sorbose (intermediate 3) with oxygen and platinum. The microbial oxidation of sorbitol to sorbose is important because it provides the correct stereochemistry. Importance This process was patented and sold to Hoffmann-La Roche in 1934. The first commercially sold vitamin C product was either Cebion from Merck or Redoxon from Hoffmann-La Roche. Even today, industrial methods for the production of ascorbic acid can be based on the Reichstein process. In modern methods, however, sorbose is directly oxidized with a platinum catalyst (developed by Kurt Heyns (1908–2005) in 1942). This method avoids the use of protective groups. A side product of one particular modification is 5-keto-D-gluconic acid. A shorter biotechnological synthesis of ascorbic acid was announced in 1988 by Genencor International and Eastman Chemical. Glucose is converted to 2-keto-L-gulonic acid in two steps (via a 2,4-diketo-L-gulonic acid intermediate), as compared to five steps in the traditional process. Though many organisms synthesize their own vitamin C, the steps can be different in plants and mammals. Smirnoff concluded that "..little is known about many of the enzymes involved in ascorbate biosynthesis or about the factors controlling flux through the pathways". There is interest in finding alternatives to the Reichstein process. Experiments suggest that genetically modified bacteria might be commercially usable. References Literature Boudrant, J. (1990): Microbial processes for ascorbic acid biosynthesis: a review. In: Enzyme Microb Technol. 12(5); 322–9. Bremus, C. et al. (2006): The use of microorganisms in L-ascorbic acid production. In: J Biotechnol. 124(1); 196–205. External links http://www.chemieunterricht.de/dc2/asch2/a-synthe.htm Der Schweizerische Weg zur Vitamin-C-Synthese Organic reactions
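To make the glucose-to-ascorbic-acid route easy to trace, the step descriptions above can be encoded as simple (substrate, product, method) records. The sketch below is an editorial illustration only (the tuple encoding is my own assumption; the substances and conditions are taken from the text above).

```python
# The Reichstein process steps described above, encoded as
# (substrate, product, method) tuples so the route can be traced.

REICHSTEIN_STEPS = [
    ("D-glucose", "D-sorbitol", "catalytic hydrogenation (Ni, high T and p)"),
    ("D-sorbitol", "L-sorbose", "microbial oxidation (Acetobacter, pH 4-6, 30 °C)"),
    ("L-sorbose", "diacetone-L-sorbose", "acetal protection with acetone/acid"),
    ("diacetone-L-sorbose", "2-keto-L-gulonic acid",
     "KMnO4 oxidation, then heating with water"),
    ("2-keto-L-gulonic acid", "L-ascorbic acid", "lactonization, loss of water"),
]

for i, (substrate, product, method) in enumerate(REICHSTEIN_STEPS, start=1):
    print(f"step {i}: {substrate} -> {product}  [{method}]")
```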
Reichstein process
Chemistry
699
61,163,125
https://en.wikipedia.org/wiki/C11H11Cl2N
{{DISPLAYTITLE:C11H11Cl2N}} The molecular formula C11H11Cl2N (molar mass: 228.118 g/mol, exact mass: 227.0269 u) may refer to: Amitifadine DOV-102,677 DOV-216,303 Molecular formulas
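A quick way to sanity-check the quoted molar mass is to sum standard atomic weights over the formula; the short sketch below is an editorial illustration, not part of the source entry.

```python
# Check that the stated molar mass of C11H11Cl2N follows from
# standard atomic weights (values in g/mol).

ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "Cl": 35.45, "N": 14.007}
FORMULA = {"C": 11, "H": 11, "Cl": 2, "N": 1}

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in FORMULA.items())
print(f"{molar_mass:.3f} g/mol")  # ≈ 228.116, matching the quoted 228.118
```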
C11H11Cl2N
Physics,Chemistry
73
14,541,675
https://en.wikipedia.org/wiki/ATP%20citrate%20synthase
ATP citrate synthase (also ATP citrate lyase (ACLY)) is an enzyme that in animals catalyzes an important step in fatty acid biosynthesis. By converting citrate to acetyl-CoA, the enzyme links carbohydrate metabolism, which yields citrate as an intermediate, with fatty acid biosynthesis, which consumes acetyl-CoA. In plants, ATP citrate lyase generates cytosolic acetyl-CoA precursors of thousands of specialized metabolites, including waxes, sterols, and polyketides. Function ATP citrate lyase is the primary enzyme responsible for the synthesis of cytosolic acetyl-CoA in many tissues. The enzyme is a tetramer of apparently identical subunits. In animals, the product, acetyl-CoA, is used in several important biosynthetic pathways, including lipogenesis and cholesterogenesis. It is activated by insulin. In plants, ATP citrate lyase generates acetyl-CoA for cytosolically synthesized metabolites; acetyl-CoA is not transported across subcellular membranes of plants. Such metabolites include: elongated fatty acids (used in seed oils, membrane phospholipids, the ceramide moieties of sphingolipids, cuticle, cutin, and suberin); flavonoids; malonic acid; acetylated phenolics, alkaloids, isoprenoids, anthocyanins, and sugars; mevalonate-derived isoprenoids (e.g., sesquiterpenes, sterols, brassinosteroids); and malonyl and acyl derivatives (d-amino acids, malonylated flavonoids, and acylated, prenylated and malonated proteins). De novo fatty acid biosynthesis in plants occurs in plastids; thus, ATP citrate lyase is not relevant to this pathway. Reaction ATP citrate lyase is responsible for catalyzing the conversion of citrate and coenzyme A (CoA) to acetyl-CoA and oxaloacetate, driven by hydrolysis of ATP. In the presence of ATP and CoA, citrate lyase catalyzes the cleavage of citrate to yield acetyl-CoA, oxaloacetate, adenosine diphosphate (ADP), and orthophosphate (Pi): citrate + ATP + CoA → oxaloacetate + acetyl-CoA + ADP + Pi This enzyme was formerly given the EC number 4.1.3.8. Location The enzyme is cytosolic in plants and animals. Structure The enzyme is composed of two subunits in green plants (including Chlorophyceae, Marchantimorpha, Bryopsida, Pinaceae, monocotyledons, and eudicots), species of fungi, glaucophytes, Chlamydomonas, and prokaryotes. Animal ACL enzymes are homomeric; a fusion of the ACLA and ACLB genes probably occurred early in the evolutionary history of this kingdom. The mammalian ATP citrate lyase has an N-terminal citrate-binding domain that adopts a Rossmann fold, followed by a CoA-binding domain and CoA-ligase domain, and finally a C-terminal citrate synthase domain. The cleft between the CoA-binding and citrate synthase domains forms the active site of the enzyme, where both citrate and acetyl-coenzyme A bind. In 2010, a structure of truncated human ATP citrate lyase was determined using X-ray diffraction to a resolution of 2.10 Å. In 2019, a full-length structure of human ACLY in complex with the substrates coenzyme A, citrate and Mg.ADP was determined by X-ray crystallography to a resolution of 3.2 Å. Moreover, in 2019 a full-length structure of ACLY in complex with an inhibitor was determined by cryo-EM methods to a resolution of 3.7 Å. Additional structures of heteromeric ACLY-A/B from the green sulfur bacterium Chlorobium limicola and the archaeon Methanosaeta concilii show that the architecture of ACLY is evolutionarily conserved. Full-length ACLY structures showed that the tetrameric protein oligomerizes via its C-terminal domain.
The C-terminal domain had not been observed in the previously determined truncated crystal structures. The C-terminal region of ACLY assembles into a tetrameric module that is structurally similar to citryl-CoA lyase (CCL) found in deep-branching bacteria. This CCL module catalyses the cleavage of the citryl-CoA intermediate into the products acetyl-CoA and oxaloacetate. In 2019, cryo-EM structures of human ACLY, alone or bound to substrates or products, were reported as well. ACLY forms a homotetramer with a rigid citrate synthase homology (CSH) module, flanked by four flexible acetyl-CoA synthetase homology (ASH) domains; CoA is bound at the CSH–ASH interface in mutually exclusive productive or unproductive conformations. The structure of a catalytic mutant of ACLY in the presence of ATP, citrate and CoA substrates reveals a CoA and phospho-citrate intermediate in the N-terminal domain. Cryo-EM structures of product-bound ACLY and substrate-bound ACLY were also determined, at 3.0 Å and 3.1 Å respectively. An EM structure of the mutant E599Q in complex with CoA and the phospho-citrate intermediate was determined at a resolution of 2.9 Å. Comparison between these structures of apo-ACLY and ligand-bound ACLY demonstrated conformational changes in the ASH (N-terminal) domain when different ligands bind. Pharmacology The enzyme's action can be inhibited by the coenzyme A conjugate of bempedoic acid, a compound which lowers LDL cholesterol in humans. The drug was approved by the Food and Drug Administration in February 2020 for use in the United States. References Further reading External links EC 2.3.3 Citric acid cycle
ATP citrate synthase
Chemistry
1,363
1,227,788
https://en.wikipedia.org/wiki/Harold%20Pender%20Award
The Harold Pender Award, initiated in 1972 and named after founding Dean Harold Pender, is given by the Faculty of the School of Engineering and Applied Science of the University of Pennsylvania to an outstanding member of the engineering profession who has achieved distinction by significant contributions to society. The Pender Award is the School of Engineering's highest honor. Past recipients 2018: Yann LeCun, for his work in convolutional neural networks. 2013: Barbara Liskov, for her work in programming languages, programming methodology and distributed systems. 2010: Robert E. Kahn and Vinton G. Cerf, for their pioneering and seminal contributions to network-based information technology, and especially for the design and implementation of the TCP/IP protocol suite, which continues to provide the foundation for the growing Internet 2006: Mildred Dresselhaus, for pioneering contributions and leadership in the field of carbon-based nanostructures and nanotechnology, and for promoting opportunities for women in science and engineering 2003: Dennis Ritchie and Ken Thompson, for development of the UNIX operating system and C programming language 2002: John J. Hopfield, for his pioneering accomplishments in the field of computational neuroscience and neuroengineering 2000: Jack St. Clair Kilby, for his contribution to the invention of the integrated circuit, or microchip 1999: John H. Holland, founder of genetic algorithms and innovative research in the science of complexity and adaptation 1995: George Dantzig, developer of the simplex algorithm spawning the field of linear programming 1993: Hiroshi Inose, leader in advances in digital communication and in increasing our understanding of the effects of information flow on society 1991: Arno Penzias, discoverer of the background microwave blackbody radiation of the universe 1990: Dana S. Scott, pioneer in application of concepts from logic and algebra to the development of mathematical semantics of programming languages 1989: Leo Esaki, pioneer in tunneling phenomena in semiconductors and development of quantum well structures 1988: John Bardeen, co-inventor of the transistor and contributor to the theory of superconductivity 1987: Herbert A. Simon, contributor to cross-disciplinary work between computer science, psychology, economics, and management, including the development of artificial intelligence and cognitive science 1986: Ronold W. P. King, leader in the development of electromagnetic antenna theory 1985: Amnon Yariv, innovator in quantum electronics and integrated optics 1984: Carver Mead and Lynn Conway, developers of CAD techniques for VLSI technology and authors of first VLSI textbook 1983: John Backus, developer of speed-coding and FORTRAN 1982: Maurice V. Wilkes, developer of world's second large-scale general-purpose electronic digital computer and author of first digital computer programmers textbook 1981: Richard W. Hamming, father of algebraic coding theory 1980: Robert N. Noyce, developer of the integrated circuit 1979: Edwin H. Land, Inventor of instant photography 1978: Claude E. Shannon, creator of quantitative Information theory 1977: Jan A. Rajchman, electronic and computer research 1976: Hyman G. Rickover, USN, father of the nuclear navy 1975: Chauncey Starr, founder of the Electric Power Research Institute (EPRI) 1974: Peter C. Goldmark, inventor of the 33-1/3 rpm long-playing record (among other things) 1973: John Mauchly and J. Presper Eckert, inventors of ENIAC 1972: Edward E. 
David Jr., science advisor to the President of the United States See also List of engineering awards References Awards established in 1972 American science and technology awards Engineering awards
Harold Pender Award
Technology
741
11,040,991
https://en.wikipedia.org/wiki/HATNet%20Project
The Hungarian Automated Telescope Network (HATNet) project is a network of six small, fully automated "HAT" telescopes. The scientific goal of the project is to detect and characterize extrasolar planets using the transit method. The network is also used to find and follow bright variable stars. The network is maintained by the Center for Astrophysics Harvard & Smithsonian. The HAT acronym stands for Hungarian-made Automated Telescope, because it was developed by a small group of Hungarians who met through the Hungarian Astronomical Association. The project started in 1999 and has been fully operational since May 2001. Equipment The prototype instrument, HAT-1, was built from a 180 mm focal length, 65 mm aperture Nikon telephoto lens and a Kodak KAF-0401E chip of 512 × 768, 9 μm pixels. The test period ran from 2000 to 2001 at the Konkoly Observatory in Budapest. HAT-1 was transported from Budapest to the Steward Observatory, Kitt Peak, Arizona, USA, in January 2001. The transportation caused serious damage to the equipment. Later-built telescopes use Canon 11 cm diameter f/1.8L lenses with a wide 8°×8° field of view. These are fully automated instruments with 2K × 2K charge-coupled device (CCD) sensors. One HAT instrument operates at the Wise Observatory. HAT is controlled by a single Linux PC without human supervision. Data are stored in a MySQL database. HAT-South From 2009, three other locations joined the HATNet with telescopes of a completely new design. The telescopes are deployed to Australia, Namibia and Chile. Each system has eight (2 × 4) joint-mounted, quasi-parallel Takahashi Epsilon (180 mm diameter, f/2.8) astrographs with Apogee 4K × 4K CCDs with overlapping fields of view. The processing computers are Xenomai-based industrial PCs with 10 TB of storage. Participants in the project HAT-1 was developed during the undergraduate (and first-year graduate) studies of Gáspár Bakos (Eötvös Loránd University, now at Princeton University) and at Konkoly Observatory (Budapest), under the supervision of Dr. Géza Kovács. József Lázár, István Papp and Pál Sári also played an important role in the development. More than 100 people have contributed altogether to the seventy planet discovery papers published or submitted by the project as of February 2020. Gáspár Bakos, István Papp, József Lázár and Pál Sári have contributed to all of the planet discoveries by HAT. Other participants who have contributed to at least 10 discovery papers include: Joel Hartman (62 papers, Princeton), Robert Noyes (55, CfA), David Latham (44, CfA), Zoltán Csubry (43, Princeton), Kaloyan Penev (43, UT Dallas), Géza Kovács (42, Konkoly Observatory), Guillermo Torres (40, CfA), Geoffrey Marcy (38, UC Berkeley), Gilbert Esquerdo (37, CfA), Waqas Bhatti (34, Princeton), Miguel de Val-Borro (34, Goddard Space Flight Center), Lars Buchhave (33, Niels Bohr Institute), Daniel Bayliss (32, University of Warwick), Dimitar Sasselov (32, CfA), Bence Béky (31, CfA), Andrew Howard (31, Caltech), Debra Fischer (30, Yale University), George Zhou (30, CfA), Néstor Espinoza (29, STSCI), Andrés Jordán (29, Adolfo Ibáñez University), Robert Stefanik (29, CfA), Rafael Brahm (28, Pontifical Catholic University of Chile), Thomas Henning (28, MPIA), Luigi Mancini (28, University of Rome Tor Vergata), Markus Rabus (28, Las Cumbres Observatory), Vincent Suc (28, Pontifical Catholic University of Chile), John Johnson (27, CfA), R.
Paul Butler (20, Carnegie Institution for Science), Simona Ciceri (19, MPIA), Brian Schmidt (19, ANU), Joao Bento (17, ANU), Thiam-Guan Tan (17, Perth Exoplanet Survey Telescope), Mark Everett (16, NOAO), Sam Quinn (16, CfA), Avi Shporer (16, MIT), Allyson Bieryla (14, CfA), Bun'ei Sato (14, Tokyo Institute of Technology), B.J. Fulton (12, Caltech), Howard Isaacson (12, UC Berkeley), András Pál (12, CfA), Brigitta Sipőcz (12, University of Hertfordshire), Támás Szkelenár (12), Chris Tinney (12, University of New South Wales), Duncan Wright (11, Australian Astronomical Observatory), Jeffrey Crane (10, Carnegie Institution for Science), Emilio Falco (10, CfA), Paula Sarkis (10, MPIA), and Stephen Shectman (10, Carnegie Institution for Science). Planets discovered One hundred thirty-four extrasolar planets have been discovered so far by the HAT surveys, including a handful of planets that were independently discovered by other groups as well (particularly the WASP survey). Sixty-three of these were found by the northern HATNet project, and seventy-one by the southern HATSouth project. All have been discovered using the transit method. In addition, a few planetary companions to the transiting planets were discovered through radial velocity follow-up observations, including HAT-P-13c, which was the first outer planetary or brown-dwarf companion confirmed with a well-characterised orbit for a system with a transiting planet. See also List of extrasolar planets A subset of HATNet light curves is available at the NASA Exoplanet Archive. Other extrasolar planet search projects Trans-Atlantic Exoplanet Survey or TrES SuperWASP or WASP XO Telescope or XO Kilodegree Extremely Little Telescope or KELT Next-Generation Transit Survey or NGTS Extrasolar planet searching spacecraft COROT is a CNES/ESA spacecraft launched in December 2006 The Kepler Mission is a NASA spacecraft launched in March 2009 The Transiting Exoplanet Survey Satellite (TESS) is a NASA spacecraft launched in March 2018 References External links The HAT Exoplanet Surveys The HATNet Exoplanet Survey The HATSouth Exoplanet Survey Hungarian Astronomical Association Wise observatory Hungarian-made Automated Telescope The Extrasolar Planets Encyclopaedia Telescopes Astrometry Exoplanet search projects by small telescope
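The core relation behind the transit method used by HATNet is that the fractional dimming of a star scales roughly as the square of the planet-to-star radius ratio. The sketch below is an editorial illustration with generic Sun/Jupiter/Earth values, not HATNet survey data.

```python
# Transit-method depth: fractional flux drop ≈ (R_planet / R_star)^2.
# Radii below are generic solar-system values, not HATNet measurements.

R_SUN_KM = 695_700.0
R_JUPITER_KM = 69_911.0
R_EARTH_KM = 6_371.0

def transit_depth(r_planet_km: float, r_star_km: float = R_SUN_KM) -> float:
    """Fractional flux drop for a planet crossing its star's disc."""
    return (r_planet_km / r_star_km) ** 2

print(f"{transit_depth(R_JUPITER_KM):.4%}")  # ≈ 1.01% for a Jupiter analogue
print(f"{transit_depth(R_EARTH_KM):.4%}")    # ≈ 0.0084% for an Earth analogue
```

The roughly 1% dip from a hot Jupiter is what small wide-field instruments like the HAT telescopes can detect photometrically; Earth-sized dips are about a hundred times shallower.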
HATNet Project
Astronomy
1,412
58,514,123
https://en.wikipedia.org/wiki/Claudine%20Stirling
Claudine Helen Stirling is a New Zealand isotope geochemistry academic. As of 2018, she is a full professor at the University of Otago. In 2024 she was elected as a Fellow of the Royal Society Te Apārangi. Academic career After a 1996 PhD titled 'High-precision U-series dating of corals from Western Australia : implication for last interglacial sea-levels' at the Australian National University, Stirling worked at University of Michigan and ETH Zürich before moving to the University of Otago in 2006, rising to full professor in 2018. Prof Stirling is a member of the Department of Geology with current research interests including: isotope geochemistry, biogeochemical cycles of trace metals, paleoceanography & paleoclimatology, and environmental geochemistry. In 2024 Stirling was elected as a Fellow of the Royal Society Te Apārangi. Selected works Halliday, Alex N., Der-Chuen Lee, John N. Christensen, Mark Rehkämper, Wen Yi, Xiaozhong Luo, Chris M. Hall, Chris J. Ballentine, Thomas Pettke, and Claudine Stirling. "Applications of multiple collector-ICPMS to cosmochemistry, geochemistry, and paleoceanography." Geochimica et Cosmochimica Acta 62, no. 6 (1998): 919–940. Amelin, Yuri, Angela Kaltenbach, Tsuyoshi Iizuka, Claudine H. Stirling, Trevor R. Ireland, Michail Petaev, and Stein B. Jacobsen. "U–Pb chronology of the Solar System's oldest solids with variable 238U/235U." Earth and Planetary Science Letters 300, no. 3-4 (2010): 343–350. Stirling, Claudine H., Morten B. Andersen, Emma-Kate Potter, and Alex N. Halliday. "Low-temperature isotopic fractionation of uranium." Earth and Planetary Science Letters 264, no. 1-2 (2007): 208–225. Gutjahr, Marcus, Martin Frank, Claudine H. Stirling, Veronika Klemm, Tina Van de Flierdt, and Alex N. Halliday. "Reliable extraction of a deepwater trace metal isotope signal from Fe–Mn oxyhydroxide coatings of marine sediments." Chemical Geology 242, no. 3-4 (2007): 351–370. Rehkämper, Mark, Maria Schönbächler, and Claudine H. Stirling. "Multiple collector ICP‐MS: Introduction to instrumentation, measurement techniques and analytical capabilities." Geostandards Newsletter 25, no. 1 (2001): 23–40. References Living people New Zealand women academics Australian National University alumni Academic staff of the University of Otago University of Michigan faculty Academic staff of ETH Zurich Geochemists New Zealand chemists New Zealand women chemists Year of birth missing (living people) Fellows of the Royal Society of New Zealand
Claudine Stirling
Chemistry
634
1,216,793
https://en.wikipedia.org/wiki/HD%2076700
HD 76700 is a star in the southern constellation of Volans. It is yellow in hue and is too faint to be visible to the naked eye, having an apparent visual magnitude of 8.16. This object is located at a distance of 197 light years from the Sun based on stellar parallax. It is drifting further away with a radial velocity of +39 km/s. Properties This object is a G-type main-sequence star with a stellar classification of G6V, which indicates it is generating energy through core hydrogen fusion. It is a metal-enriched star, showing a much higher metallicity than the Sun. This may be explained by prior accretion of refractory-rich planetary bodies into the stellar atmosphere. The mass of HD 76700 is very similar to (1.1 times) that of the Sun, but it is cooler and brighter (with an effective temperature of 5,694 K and a luminosity of 1.69 Suns) and thus much older, at around 6.9 billion years. Planetary HD 76700 is orbited by a giant planet that was discovered in 2003 via the radial velocity method. Designated HD 76700 b, this planet orbits very close to the star, with a period of just four days. References G-type main-sequence stars Planetary systems with one confirmed planet Volans Durchmusterung objects 076700 043686
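From the values quoted above (stellar mass 1.1 solar masses, orbital period four days), Kepler's third law gives a rough orbital distance for the planet. The sketch below is an editorial illustration, not part of the source entry.

```python
# Kepler's third law in solar units: a^3 = M * P^2, with a in AU,
# M in solar masses, and P in years.  Inputs are the article's values.

M_STAR = 1.1          # stellar mass, solar masses
P_DAYS = 4.0          # orbital period, days ("just four days")

p_years = P_DAYS / 365.25
a_au = (M_STAR * p_years ** 2) ** (1.0 / 3.0)
print(f"a ≈ {a_au:.3f} AU")  # ≈ 0.05 AU, well inside Mercury's orbit
```

The result, roughly 0.05 AU, illustrates why the planet is described as orbiting very close to its star.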
HD 76700
Astronomy
286
423,056
https://en.wikipedia.org/wiki/Delta%20%28rocket%20family%29
The Delta rocket family was a versatile range of American rocket-powered expendable launch systems that provided space launch capability in the United States from 1960 to 2024. Japan also launched license-built derivatives (N-I, N-II, and H-I) from 1975 to 1992. More than 300 Delta rockets were launched, with a 95% success rate. The series was phased out in favor of the Vulcan Centaur, with the Delta IV Heavy rocket's last launch occurring on April 9, 2024. Origins The original Delta rockets used a modified version of the PGM-17 Thor, the first ballistic missile deployed by the United States Air Force (USAF), as their first stage. The Thor had been designed in the mid-1950s to reach Moscow from bases in Britain or similar allied nations, and the first wholly successful Thor launch had occurred in September 1957. Subsequent satellite and space probe flights soon followed, using a Thor first stage with several different upper stages. The fourth upper-stage combination of the Thor was named the Thor "Delta", reflecting the fourth letter of the Greek alphabet. Eventually the entire Thor–Delta launch vehicle came to be called simply "Delta". NASA intended Delta as "an interim general-purpose vehicle" to be "used for communication, meteorological, and scientific satellites and lunar probes during 1960 and 1961". The plan was to replace Delta with other rocket designs when they came on-line. From this point onward, the launch vehicle family was split into civilian variants flown from Cape Canaveral, which bore the Delta name, and military variants flown from Vandenberg Air Force Base (VAFB), which used the more warlike Thor name. The Delta design emphasized reliability rather than performance by replacing components that had caused problems on earlier Thor flights; in particular, the trouble-prone inertial guidance package made by AC Spark Plug was replaced by a radio ground-guidance system, which was mounted to the second stage instead of the first. NASA awarded the original Delta contract to the Douglas Aircraft Company in April 1959 for 12 vehicles of this design: Stage 1: Modified Thor IRBM with a Block I MB-3 engine group consisting of one Rocketdyne LR-79 main engine and two Rocketdyne LR-101 vernier thrusters for roll control, with the total thrust including the LOX/RP-1 turbopump exhaust. Stage 2: Modified Able, with a pressure-fed UDMH/nitric acid-powered Aerojet AJ-10-118 engine. This reliable engine cost US$4 million to build and is still flying in modified form today. It carried a gas-jet attitude control system. Stage 3: Altair. The stage was spin-stabilized at 100 rpm (via a turntable on top of the Able) by two small solid rocket motors before separation. One ABL X-248 solid rocket motor burned for 28 seconds. The stage was largely constructed of wound fiberglass. These vehicles would be able to place payloads into LEO or GTO. Eleven of the twelve initial Delta flights were successful, and until 1968, no failures occurred in the first two minutes of launch. The high degree of success achieved by Delta stood in contrast to the endless parade of failures that dogged West Coast Thor launches. The total project development and launch cost came to US$43 million, US$3 million over budget. An order for 14 more vehicles was made before 1962.
Evolution Delta A Delta B Delta C Delta D Delta E Delta F Delta G Delta J Delta K Delta L Delta M Delta N "Super Six" Delta 0100-series Delta 1000-series Delta 2000-series Delta 3000-series Delta 4000-series Delta 5000-series Delta II (6000-series and 7000-series) The Delta II series was developed after the 1986 Challenger accident and consisted of the Delta 6000-series and 7000-series, with two variants (Light and Heavy) of the latter. The Delta 6000-series introduced the Extra Extended Long Tank first stage, which was longer, and the Castor 4A boosters. Six SRBs ignited at takeoff, and three ignited in the air. The Delta 7000-series introduced the RS-27A main engine, which was modified for efficiency at high altitude at some cost to low-altitude performance, and the lighter and more powerful GEM-40 solid boosters from Hercules. The Delta II Med-Lite was a 7000-series with no third stage and fewer strap-ons (often three, sometimes four) that was usually used for small NASA missions. The Delta II Heavy was a Delta II 792X with the enlarged GEM-46 boosters from the Delta III. Delta III (8000-series) The Delta III 8000-series was a McDonnell Douglas / Boeing-developed program to keep pace with growing satellite masses: The two upper stages, with low-performance fuels, were replaced with a single cryogenic stage, improving performance and reducing recurring costs and pad labor. The engine was a single Pratt & Whitney RL10, from the Centaur upper stage. The hydrogen fuel tank, 4 metres in diameter in orange insulation, is exposed; the narrower oxygen tank and engine are covered until stage ignition. The fuel tank was contracted to Mitsubishi and produced using technologies from the Japanese H-II launcher. To keep the stack short and resistant to crosswinds, the first-stage kerosene tank was widened and shortened, matching the upper-stage and fairing diameters. Nine enlarged GEM-46 solid boosters were attached, three of them with thrust-vectoring nozzles. Of the three Delta III flights, the first two were failures, and the third carried only a dummy (inert) payload. Delta IV (9000-series) As part of the Air Force's Evolved Expendable Launch Vehicle (EELV) program, McDonnell Douglas / Boeing proposed the Delta IV. As the program name implied, many components and technologies were borrowed from existing launchers. Both Boeing and Lockheed Martin were contracted to produce their EELV designs. Delta IVs were produced in a new facility in Decatur, Alabama. The first stage changed to liquid hydrogen fuel; its tank technology derived from the Delta III upper stage, but widened to 5 metres. The kerosene engine was replaced with the Rocketdyne RS-68, the first new large liquid-fueled rocket engine designed in the United States since the Space Shuttle Main Engine (SSME) in the 1970s. Designed for low cost, it had lower chamber pressure and efficiency than the SSME, and a much simpler nozzle. The thrust chamber and upper nozzle were of a channel-wall design pioneered by Soviet engines; the lower nozzle was ablatively cooled. The second stage and fairing were taken from the Delta III in the smaller (Delta IV Medium) models and widened to 5 metres in the Medium+ and Heavy models. Medium+ models had two or four GEM 60 solid boosters. The plumbing and electrical circuits were revised, eliminating the need for a launch tower. The first stage was referred to as a Common Booster Core (CBC); a Delta IV Heavy attached two extra CBCs as boosters. Delta IV Heavy Launch reliability From 1969 through 1978 (inclusive), Thor-Delta was NASA's most used launcher, with 84 launch attempts.
(Scout was the second-most used vehicle, with 32 launches.) Satellites for other government agencies and foreign governments were also launched on a cost-reimbursable basis, totaling 63 satellites. Out of the 84 launch attempts there were 7 failures or partial failures, a 91.6% success rate. The Delta was a successful launcher, but it has also been a significant contributor to orbital debris, as a variant used in the 1970s was prone to in-orbit explosions. Eight Delta second stages launched between 1973 and 1981 were involved in fragmentation events between 1973 and 1991, usually within the first 3 years after launch, but others have broken apart 10 or more years later. Studies determined that the explosions were caused by propellant left after shutdown. The nature of the propellant and the thermal environment occupied by the derelict rockets made explosions inevitable. Depletion burns were started in 1981, and no fragmentation events for rockets launched after that have been identified. Deltas launched before the 1970s variant have had fragmentation events as long as 50 years after launch. Numbering system In 1972, McDonnell Douglas introduced a four-digit numbering system to replace the letter-naming system. The new system could better accommodate the various changes and improvements to Delta rockets and avoided the problem of a rapidly depleting alphabet. The digits specified (1) the tank and main engine type, (2) the number of solid rocket boosters, (3) the second stage, and (4) the third stage. This numbering system was to have been phased out in favor of a new system that was introduced in 2005. In practice, the new system was never used, as all but the Delta II had been retired. See also Comparison of orbital launcher families Comparison of orbital launch systems List of Thor and Delta launches HoloVID visualization tool Space debris Project Echo References Forsyth, Kevin S. (2002) Delta: The Ultimate Thor. In Roger Launius and Dennis Jenkins (Eds.), To Reach The High Frontier: A History of U.S. Launch Vehicles, Lexington: University Press of Kentucky. External links History of the Delta Launch Vehicle The Satellite Encyclopedia - Thor Delta Military space program of the United States Rocket families United Launch Alliance space launch vehicles Spacecraft that broke apart in space
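The launch-record figures quoted above are easy to verify by direct arithmetic; the sketch below is an editorial illustration, not part of the source article.

```python
# Check the reliability figures quoted in the article.

attempts, failures = 84, 7          # NASA Thor-Delta launches, 1969-1978
success_rate = (attempts - failures) / attempts
print(f"{success_rate:.1%}")        # 91.7%, consistent with the quoted 91.6%

total, overall_rate = 300, 0.95     # family-wide figures from the lead
print(f"≈{total * (1 - overall_rate):.0f} failures over the family's life")
```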
Delta (rocket family)
Technology
1,930
14,633,642
https://en.wikipedia.org/wiki/POLG
DNA polymerase subunit gamma (POLG or POLG1) is an enzyme that in humans is encoded by the POLG gene. Mitochondrial DNA polymerase is heterotrimeric, consisting of a homodimer of accessory subunits plus a catalytic subunit. The protein encoded by this gene is the catalytic subunit of mitochondrial DNA polymerase. Defects in this gene are a cause of progressive external ophthalmoplegia with mitochondrial DNA deletions 1 (PEOA1), sensory ataxic neuropathy dysarthria and ophthalmoparesis (SANDO), Alpers-Huttenlocher syndrome (AHS), and mitochondrial neurogastrointestinal encephalopathy syndrome (MNGIE). Structure POLG is located on the q arm of chromosome 15 in position 26.1 and has 23 exons. The POLG gene produces a 140 kDa protein composed of 1239 amino acids. POLG, the protein encoded by this gene, is a member of the type-A DNA polymerase family. It localizes to the mitochondrial nucleoid, uses Mg2+ as a cofactor, and its structure contains 15 turns, 52 beta strands, and 39 alpha helices. POLG contains a polyglutamine tract near its N-terminus that may be polymorphic. Two transcript variants encoding the same protein have been found for this gene. Function POLG is a gene that codes for the catalytic subunit of the mitochondrial DNA polymerase, called DNA polymerase gamma. The human POLG cDNA and gene were cloned and mapped to chromosome band 15q25. In eukaryotic cells, the mitochondrial DNA is replicated by DNA polymerase gamma, a trimeric protein complex composed of a catalytic subunit, POLG, and a dimeric accessory subunit of 55 kDa encoded by the POLG2 gene. The catalytic subunit contains three enzymatic activities: a DNA polymerase activity, a 3'-5' exonuclease activity that proofreads misincorporated nucleotides, and a 5'-dRP lyase activity required for base excision repair. Catalytic activity Deoxynucleoside triphosphate + DNA(n) = diphosphate + DNA(n+1). Clinical significance Mutations in the POLG gene are associated with several mitochondrial diseases: progressive external ophthalmoplegia with mitochondrial DNA deletions 1 (PEOA1), sensory ataxic neuropathy dysarthria and ophthalmoparesis (SANDO), Alpers-Huttenlocher syndrome (AHS), and mitochondrial neurogastrointestinal encephalopathy syndrome (MNGIE). Pathogenic variants have also been linked with fatal congenital myopathy and gastrointestinal pseudo-obstruction, and with fatal infantile hepatic failure. A list of all published mutations in the POLG coding region and their associated diseases can be found at the Human DNA Polymerase Gamma Mutation Database. Mice heterozygous for a Polg mutation are only able to replicate their mitochondrial DNA inaccurately, so that they sustain a 500-fold higher mutation burden than normal mice. These mice show no clear features of rapidly accelerated aging, indicating that mitochondrial mutations do not have a causal role in natural aging. Interactions POLG has been shown to have 50 binary protein-protein interactions, including 32 co-complex interactions. POLG appears to interact with POLG2, Dlg4, Tp53, and Sod2. References Further reading External links GeneReviews/NCBI/NIH/UW entry on POLG-Related Disorders DNA replication
POLG
Biology
750
25,672,988
https://en.wikipedia.org/wiki/PLATO%20%28spacecraft%29
PLAnetary Transits and Oscillations of stars (PLATO) is a space telescope under development by the European Space Agency for launch in 2026. The mission goals are to search for planetary transits across up to one million stars, and to discover and characterize rocky extrasolar planets around yellow dwarf stars (like the Sun), subgiant stars, and red dwarf stars. The emphasis of the mission is on Earth-like planets in the habitable zone around Sun-like stars where water can exist in a liquid state. It is the third medium-class mission in ESA's Cosmic Vision programme and is named after the influential Greek philosopher Plato. A secondary objective of the mission is to study stellar oscillations or seismic activity in stars to measure stellar masses and evolution and enable the precise characterization of the planet host star, including its age. History PLATO was first proposed in 2007 to the European Space Agency (ESA) by a team of scientists in response to the call for ESA's Cosmic Vision 2015–2025 programme. The assessment phase was completed during 2009, and in May 2010 it entered the Definition Phase. Following a call for missions in July 2010, ESA selected in February 2011 four candidates for a medium-class mission (M3 mission) for a launch opportunity in 2024. PLATO was announced on 19 February 2014 as the selected M3 class science mission for implementation as part of its Cosmic Vision Programme. Other competing concepts that were studied included the four candidate missions EChO, LOFT, MarcoPolo-R and STE-QUEST. In January 2015, ESA selected Thales Alenia Space, Airbus DS, and OHB System AG to conduct three parallel phase B1 studies to define the system and subsystem aspects of PLATO, which were completed in 2016. On 20 June 2017, ESA adopted PLATO in the Science Programme, which means that the mission can move from a blueprint into construction. Over the coming months, industry was asked to make bids to supply the spacecraft platform. PLATO is an acronym, but also the name of a philosopher in Classical Greece; Plato (428–348 BC) was looking for a physical law accounting for the orbit of planets (errant stars) and able to satisfy the philosopher's needs for "uniformity" and "regularity". Management The PLATO Mission Consortium (PMC) that is responsible for the payload and major contributions to the science operations is led by Prof. Heike Rauer at the German Aerospace Center (DLR) Institute of Planetary Research. The design of the Telescope Optical Units is made by an international team from Italy, Switzerland and Sweden and coordinated by Roberto Ragazzoni at INAF (Istituto Nazionale di Astrofisica) Osservatorio Astronomico di Padova. The Telescope Optical Unit development is funded by the Italian Space Agency, the Swiss Space Office and the Swedish National Space Board. The PMC Science Management (PSM), composed of more than 500 experts, is coordinated by Prof. Don Pollacco of the University of Warwick and provides expertise for: The preparation of the PLATO Input Catalogue (PIC) Identifying the optimal fields for PLATO to observe Coordinating follow-up observations Scientifically validating PLATO's data products Objective The objective is the detection of terrestrial exoplanets up to the habitable zone of solar-type stars and the characterization of their bulk properties needed to determine their habitability. 
To achieve this objective, the mission has these goals: Discover and characterize many nearby exoplanetary systems, with precision in the determination of the planets' radii of up to 3%, stellar age of up to 10%, and planet mass of up to 10% (the latter in combination with ground-based radial velocity measurements) Detect and characterize Earth-sized planets and super-Earths in the habitable zone around solar-type stars Discover and characterize many exoplanetary systems to study their typical architectures, and their dependence on the properties of their host stars and the environment Measure stellar oscillations to study the internal structure of stars and how it evolves with age Identify good targets for spectroscopic measurements to investigate exoplanet atmospheres PLATO will differ from the CoRoT, TESS, CHEOPS, and Kepler space telescopes in that it will study relatively bright stars (between magnitudes 4 and 11), enabling a more accurate determination of planetary parameters, and making it easier to confirm planets and measure their masses using follow-up radial velocity measurements on ground-based telescopes. Its dwell time will be longer than that of NASA's TESS mission, making it sensitive to longer-period planets. Design Optics The PLATO payload is based on a multi-telescope approach, involving 26 cameras in total: 24 "normal" cameras organized in 4 groups, and 2 "fast" cameras for bright stars. The 24 "normal" cameras work at a readout cadence of 25 seconds and monitor stars fainter than apparent magnitude 8. The two "fast" cameras work at a cadence of 2.5 seconds to observe stars between magnitudes 4 and 8. The cameras are refracting telescopes using six lenses; each camera has a 1,100 deg2 field and a 120 mm lens diameter. Each camera is equipped with its own CCD staring array, consisting of four CCDs of 4510 × 4510 pixels. The 24 "normal" cameras will be arranged in four groups of six cameras with their lines of sight offset by a 9.2° angle from the +ZPLM axis. This particular configuration allows surveying an instantaneous field of view of about 2,250 deg2 per pointing. The space observatory will rotate around the mean line of sight once per year, delivering a continuous survey of the same region of the sky. Launch The space observatory is planned to launch at the end of 2026 to the Sun–Earth L2 Lagrange point. Data release schedule The public release of photometric data (including light curves) and high-level science products for each quarter will be made after six months, and by one year after the end of their validation period. The data are processed by quarters because this is the duration between each 90-degree rotation of the spacecraft. For the first quarter of observations, six months are required for data validation and pipeline updates. For subsequent quarters, three months will be needed. A small number of stars (no more than 2,000 stars out of 250,000) will have proprietary status, meaning the data will only be accessible to PLATO Mission Consortium members for a given time period. They will be selected using the first three months of PLATO observations for each field. The proprietary period is limited to 6 months after the completion of the ground-based observations or the end of the mission archival phase (launch date + 7.5 years), whichever comes first.
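A short back-of-envelope sketch (an editorial illustration, not part of the source article) shows how the focal-plane numbers quoted above combine:

```python
# PLATO focal-plane arithmetic using the figures quoted in the article.

ccds_per_camera = 4
ccd_side_px = 4510
normal_cameras = 24
fast_cameras = 2

px_per_camera = ccds_per_camera * ccd_side_px ** 2
total_px = (normal_cameras + fast_cameras) * px_per_camera
print(f"{px_per_camera / 1e6:.1f} Mpix per camera")    # ≈ 81.4 Mpix
print(f"{total_px / 1e9:.2f} Gpix across 26 cameras")  # ≈ 2.12 Gpix

# Each normal camera covers 1,100 deg2, but the four camera groups overlap,
# so the instantaneous field per pointing is ~2,250 deg2, not 24 * 1,100.
```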
See also Cosmic Vision, ESA program (2015-2025) List of projects of the European Space Agency List of space telescopes CHEOPS, a European space telescope to determine the size of known extrasolar planets, launched in 2019 Transiting Exoplanet Survey Satellite (TESS), NASA, launched in 2018, with a similar multi-camera design References External links Official gallery The PLATO 2.0 Mission scientific paper What can PLATO do for exoplanet astronomy? PLATO article on eoPortal by ESA Space telescopes Exoplanet search projects European Space Agency space probes 2026 in spaceflight 2026 in Europe Cosmic Vision
PLATO (spacecraft)
Astronomy
1,511
1,109,552
https://en.wikipedia.org/wiki/Agent%20Extensibility%20Protocol
The Agent Extensibility Protocol, or AgentX, is a computer networking protocol that allows management of Simple Network Management Protocol (SNMP) objects defined by different processes via a single master agent. Agents that export objects to a master agent via AgentX are called subagents. The AgentX standard defines not only the AgentX protocol but also the procedure by which subagents process SNMP protocol messages. For more information, see RFC 2741, the original definition of the protocol, and the IETF AgentX Working Group. References Network management Agent communications languages
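Every AgentX PDU begins with a fixed 20-octet header defined in RFC 2741 (version, type, flags, a reserved byte, then session, transaction and packet IDs and the payload length). The sketch below is an editorial illustration of packing that header in Python; it is not an implementation of a full subagent, and the big-endian packing shown assumes the header's NETWORK_BYTE_ORDER flag is set.

```python
# Packing the fixed 20-byte AgentX PDU header from RFC 2741.

import struct

AGENTX_VERSION = 1
PDU_OPEN = 1                     # agentx-Open-PDU type code (RFC 2741)
FLAG_NETWORK_BYTE_ORDER = 0x10   # bit 4 of the flags octet

def pack_header(pdu_type: int, session_id: int, transaction_id: int,
                packet_id: int, payload_length: int) -> bytes:
    """Pack version/type/flags/reserved plus the three IDs and length."""
    return struct.pack("!BBBBIIII", AGENTX_VERSION, pdu_type,
                       FLAG_NETWORK_BYTE_ORDER, 0,
                       session_id, transaction_id, packet_id, payload_length)

hdr = pack_header(PDU_OPEN, session_id=0, transaction_id=0,
                  packet_id=1, payload_length=0)
assert len(hdr) == 20            # the RFC 2741 header is always 20 octets
print(hdr.hex())
```

In a real deployment a subagent would send such PDUs over a Unix-domain or TCP connection to the master agent, which handles all communication with SNMP managers.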
Agent Extensibility Protocol
Technology,Engineering
111
44,342,842
https://en.wikipedia.org/wiki/Harborth%27s%20conjecture
In mathematics, Harborth's conjecture states that every planar graph has a planar drawing in which every edge is a straight segment of integer length. This conjecture is named after Heiko Harborth, and (if true) would strengthen Fáry's theorem on the existence of straight-line drawings for every planar graph. For this reason, a drawing with integer edge lengths is also known as an integral Fáry embedding. Despite much subsequent research, Harborth's conjecture remains unsolved. Special classes of graphs Although Harborth's conjecture is not known to be true for all planar graphs, it has been proven for several special kinds of planar graph. One class of graphs that have integral Fáry embeddings are the graphs that can be reduced to the empty graph by a sequence of operations of two types: Removing a vertex of degree at most two. Replacing a vertex of degree three by an edge between two of its neighbors. (If such an edge already exists, the degree-three vertex can be removed without adding another edge between its neighbors.) For such a graph, a rational Fáry embedding can be constructed incrementally by reversing this removal process, re-inserting the vertices that were removed. Re-inserting a degree-two vertex uses the fact that the set of points at rational distance from two given points is dense in the plane. Re-inserting a degree-three vertex uses the fact that, if three points have rational distance between one pair and square-root-of-rational distance between the other two pairs, then the points at rational distances from all three are again dense in the plane. The distances in such an embedding can be made into integers by scaling the embedding by an appropriate factor (for instance, the least common multiple of the denominators of the finitely many rational edge lengths). Based on this construction, the graphs known to have integral Fáry embeddings include the bipartite planar graphs, (2,1)-sparse planar graphs, planar graphs of treewidth at most 3, and graphs of degree at most four that either contain a diamond subgraph or are not 4-edge-connected. In particular, the graphs that can be reduced to the empty graph by the removal only of vertices of degree at most two (the 2-degenerate planar graphs) include both the outerplanar graphs and the series–parallel graphs. However, for the outerplanar graphs a more direct construction of integral Fáry embeddings is possible, based on the existence of infinite subsets of the unit circle in which all distances are rational. Additionally, integral Fáry embeddings are known for each of the five Platonic solids. Related conjectures A stronger version of Harborth's conjecture asks whether every planar graph has a planar drawing in which the vertex coordinates as well as the edge lengths are all integers. It is known to be true for 3-regular graphs, for graphs that have maximum degree 4 but are not 4-regular, and for planar 3-trees. Another unsolved problem in geometry, the Erdős–Ulam problem, concerns the existence of dense subsets of the plane in which all distances are rational numbers. If such a subset existed, it would form a universal point set that could be used to draw all planar graphs with rational edge lengths (and therefore, after scaling them appropriately, with integer edge lengths). However, Ulam conjectured that dense rational-distance sets do not exist. According to the Erdős–Anning theorem, infinite non-collinear point sets with all distances being integers cannot exist.
This does not rule out the existence of sets with all distances rational, but it does imply that in any such set the denominators of the rational distances must grow arbitrarily large. See also Integer triangle, an integral Fáry embedding of the triangle graph Matchstick graph, a graph that can be drawn planarly with all edge lengths equal to 1 Erdős–Diophantine graph, a complete graph with integer distances that cannot be extended to a larger complete graph with the same property Euler brick, an integer-distance realization problem in three dimensions References Conjectures Unsolved problems in graph theory Planar graphs Arithmetic problems of plane geometry
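Checking whether given integer coordinates form an integral Fáry embedding reduces to testing that every edge's squared length is a perfect square. The sketch below is an editorial illustration: it verifies edge lengths only (it does not test planarity of the drawing), and uses the 3-4-5 right triangle, the integer-triangle example mentioned in the See also list, as input.

```python
# Verify that given integer coordinates give every edge an integer length.

import math

def is_integral_embedding(coords, edges) -> bool:
    """True if every edge's Euclidean length is an exact integer."""
    for u, v in edges:
        (x1, y1), (x2, y2) = coords[u], coords[v]
        sq = (x1 - x2) ** 2 + (y1 - y2) ** 2  # squared length, an integer
        if math.isqrt(sq) ** 2 != sq:         # integer length iff sq is square
            return False
    return True

triangle_coords = {0: (0, 0), 1: (4, 0), 2: (0, 3)}
triangle_edges = [(0, 1), (1, 2), (2, 0)]     # lengths 4, 5, 3
print(is_integral_embedding(triangle_coords, triangle_edges))  # True
```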
Harborth's conjecture
Mathematics
883
51,921,029
https://en.wikipedia.org/wiki/HgeTx1
HgeTx1 (systematic name: α-KTx 6.14) is a toxin produced by the Mexican scorpion Hoffmanihadrurus gertschi that is a reversible blocker of the Shaker B K+-channel, a type of voltage-gated potassium channel. Etymology and Source The toxin HgeTx1 is produced by the Mexican scorpion Hoffmanihadrurus gertschi, which belongs to the family Caraboctonidae. HgeTx1 is the first toxin (Tx1) described from this scorpion. HgeTx1 belongs to the α-KTx potassium channel toxin category, and is placed in the sixth subfamily of all α-KTx toxins, where HgeTx1 is the fourteenth member; this gives HgeTx1 its systematic name, α-KTx 6.14. Chemical Structure All toxins in the α-KTx category are peptides that contain between 20 and 40 amino acids and three or four disulfide bridges. HgeTx1 consists of 36 amino acids and has four disulfide bridges. These disulfide bridges exist between Cys1–Cys5, Cys2–Cys6, Cys3–Cys7 and Cys4–Cys8. It has a molecular mass of 3950 atomic mass units. Target Electrophysiological experiments (whole-cell patch clamping) have been performed to investigate the physiological effect of HgeTx1 on Shaker B K+-channels in insect cell cultures. These recordings show that HgeTx1 reversibly blocks the Shaker B K+-channel. This blockage follows a Michaelis-Menten saturation relationship with a Kd of 52 nM. However, there is no report of selectivity for or blockage of other subtypes of K+-channels. Mode of action HgeTx1 has only been investigated for its effectiveness on the Shaker B K+-channel, where the toxin seems to work as a plug that blocks the pore's ion conductance. This blockage follows the functional dyad model that underlies most α-KTx toxins. In the functional dyad model, a lysine residue interacts with a hydrophobic Leu, Tyr, Met or Phe residue in order to recognize the K+-channel. On the extracellular side of the channel, the side-chain of the lysine residue will enter the pore and subsequently block the channel. In HgeTx1, it seems likely that the Lys24 residue interacts with the hydrophobic Met33 or Leu34 residue according to the functional dyad model, which allows it to block the Shaker B K+-channel. Toxicity Scorpions of the family Caraboctonidae, each of which produces a cocktail of different toxins, are not considered dangerous to humans. References Ion channel toxins Neurotoxins Scorpion toxins
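The saturating dose-response described above implies, for a simple 1:1 pore blocker, that the blocked fraction is [toxin] / ([toxin] + Kd). The sketch below is an editorial illustration using the Kd of 52 nM quoted in the text; the test concentrations are hypothetical.

```python
# Equilibrium fraction of Shaker B K+ channels blocked by a 1:1 blocker,
# using the Kd of 52 nM reported for HgeTx1 above.

KD_NM = 52.0

def fraction_blocked(toxin_nm: float, kd_nm: float = KD_NM) -> float:
    """Saturating (Michaelis-Menten-like) occupancy of the channel pore."""
    return toxin_nm / (toxin_nm + kd_nm)

for conc in (13.0, 52.0, 208.0):   # hypothetical concentrations, nM
    print(f"{conc:6.1f} nM -> {fraction_blocked(conc):.0%} blocked")
# At the Kd (52 nM), half the channels are blocked; 4x the Kd gives 80%.
```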
HgeTx1
Chemistry
611
45,228,357
https://en.wikipedia.org/wiki/Issues%20in%20Environmental%20Science%20and%20Technology
Issues in Environmental Science and Technology is a book series by the Royal Society of Chemistry, published twice a year. Each issue focuses on a specific theme. The series is written by worldwide experts in various specialist fields, and covers broader aspects of the science (such as economics and politics) as well as the narrower chemistry of environmental science. It aims to assess possible practical solutions to perceived environmental problems. The editors commission and review articles from authors in industry, public service and academia. The current editors are RE Hester, University of York, UK, and RM Harrison, University of Birmingham, UK. References Royal Society of Chemistry Book series
Issues in Environmental Science and Technology
Chemistry
133
24,913,186
https://en.wikipedia.org/wiki/Palmitoylethanolamide
Palmitoylethanolamide (PEA) is an endogenous fatty acid amide, and lipid modulator. A main target of PEA is proposed to be the peroxisome proliferator-activated receptor alpha (PPAR-α). PEA also has affinity to cannabinoid-like G-coupled receptors GPR55 and GPR119. PEA cannot strictly be considered a classic endocannabinoid because it lacks affinity for the cannabinoid receptors CB1 and CB2. Early and recent studies In 1975, Czech physicians described the results of a clinical trial looking at joint pain, where the analgesic action of aspirin versus PEA was tested; both drugs were reported to enhance joint movements and decrease pain. In 1970 the drug manufacturer Spofa in Czechoslovakia introduced Impulsin, a tablet dose of PEA, for the treatment and prophylaxis of influenza and other respiratory infections. In Spain, the company Almirall introduced Palmidrol in tablet and suspension forms in 1976, for the same indications. In the mid-1990s, the relationship between anandamide and PEA was described; the expression of mast cell receptors sensitive to the two molecules was demonstrated by Levi-Montalcini and coworkers. During this period, more insight into the functions of endogenous fatty acid derivatives emerged, and compounds such as oleamide, palmitoylethanolamide, 2-lineoylglycerol and 2-palmitoylglycerol were explored for their capacity to modulate pain sensitivity and inflammation via what at that time was thought to be the endocannabinoid signalling pathway. Primary reports also have provided evidence that PEA downregulates hyperactive mast cells in a dose-dependent manner, and that it alleviates pain elicited in mouse models. PEA and related compounds such as anandamide also seem to have synergistic effects in models of pain and analgesia. Animal models In a variety of animal models, PEA seems to have some promise; researchers have been able to demonstrate relevant clinical efficacy in a variety of disorders, from multiple sclerosis to neuropathic pain. In the mouse forced swimming test, palmitoylethanolamide was comparable to fluoxetine for depression. An Italian study published in 2011 found that PEA reduced the raised intraocular pressure of glaucoma. In a spinal trauma model, PEA reduced the resulting neurological deficit via the reduction of mast cell infiltration and activation. PEA in this model also reduced the activation of microglia and astrocytes. Its activity as an inhibitor of inflammation counteracts reactive astrogliosis induced by beta-amyloid peptide, in a model relevant for neurodegeneration, probably via the PPAR-α mechanism of action. In models of stroke and other CNS trauma, PEA exerted neuroprotective properties. Animal models of chronic pain and inflammation Chronic pain and neuropathic pain are indications for which there is high unmet need in the clinic. PEA has been tested in a variety of animal models for chronic and neuropathic pain, because cannabinoids, such as THC, have been proven to be effective in neuropathic pain states. The analgesic and antihyperalgesic effects of PEA in two models of acute and persistent pain seemed to be explained at least partly via the de novo neurosteroid synthesis. In chronic granulomatous pain and inflammation model, PEA could prevent nerve formation and sprouting, mechanical allodynia, and PEA inhibited dorsal root ganglia activation, which is a hallmark for winding up in neuropathic pain. The mechanism of action of PEA as an analgesic and anti-inflammatory molecule is probably based on different aspects. 
PEA inhibits the release of both preformed and newly synthesised mast cell mediators, such as histamine and TNF-alpha. PEA and its analogue adelmidrol (a di-amide derivative of azelaic acid) can both down-regulate mast cells. PEA reduces the expression of cyclooxygenase-2 (COX-2) and inducible nitric oxide synthase (iNOS) and prevents IκB-alpha degradation and p65 NF-κB nuclear translocation, the latter related to PEA's role as an endogenous PPAR-alpha agonist. In 2012 it became clear that PEA can also reduce reperfusion injury and the negative impact of shock on various outcome parameters, such as renal dysfunction, ischemic injury and inflammation, most probably via the PPAR-alpha pathway. Studies have shown that PEA activates the PPAR-alpha and TRPV1 receptors that control inflammation and the sensation of pain. Among the reperfusion and inflammation markers measured, PEA reduced the increases in creatinine, γGT and AST; the nuclear translocation of NF-κB p65; kidney MPO activity and MDA levels; nitrotyrosine, PAR and adhesion-molecule expression; the infiltration and activation of mast cells; and apoptosis. The biological responses to PEA dosing in animal models and in humans are being investigated vis-à-vis its involvement in a repair mechanism relevant to patient conditions of chronic inflammation and chronic pain. In a model of visceral pain (inflammation of the urinary bladder), PEA was able to attenuate the viscero-visceral hyper-reflexia induced by the inflammation, one of the reasons why PEA is currently being explored in painful bladder syndrome. In a different model of bladder pain, turpentine-induced urinary bladder inflammation in the rat, PEA also attenuated referred hyperalgesia in a dose-dependent way. Chronic pelvic pain in patients seems to respond favourably to treatment with PEA. 
These cells are often found in proximity to sensory nerve endings, and their degranulation can enhance the nociceptive signal, the reason why peripheral mast cells are considered to be pro-inflammatory and pro-nociceptive. PEA's activity is currently seen as a new inroad in the treatment of neuropathic pain and related disorders based on overactivation of glia and glia-related cells, such as in diabetes and glaucoma. Microglia play a key role in the wind-up phenomenon and central sensitization. Clinical relevance The effects of oral dosing of PEA have been explored in humans, including clinical trials for a variety of pain states and inflammatory syndromes. Doses range from 300 to 1200 mg per day. In a 2017 systematic meta-analysis involving 10 studies, including data from 786 patients receiving PEA for pain-related indications and 512 controls, PEA was found to be associated with pain reduction significantly greater than that observed in controls (P < 0.001). Positive influences have also been observed in dermal applications, specifically atopic eczema, which may be linked to PPAR-alpha activation. In a 2015 analysis of a double-blind, placebo-controlled study of PEA in sciatic pain, the number needed to treat was 1.5. Its positive influence in chronic pain, and in inflammatory states such as atopic eczema, seems to originate mainly from PPAR-alpha activation. Since 2012 a number of new trials have been published, among them studies in glaucoma. PEA also seems to be one of the factors responsible for the decrease in pain sensitivity during and after sport, comparable to the endogenous opiates (endorphins). From a clinical perspective, the most important and promising indications for PEA are linked to neuropathic and chronic pain states, such as diabetic neuropathic pain, sciatic pain, CRPS, pelvic pain and entrapment neuropathic pain states. In a blind trial reported in a conference proceeding, patients affected by pain from synovitis or TMJ osteoarthritis (N=25 in total) were randomly assigned to PEA or ibuprofen groups for two weeks; the decrease in pain reported after two weeks was significantly greater for the PEA-treated group, as was the improvement in masticatory function. In 2012, 20 patients with thalidomide- and bortezomib-induced neuropathy were reported to have improved nerve function and less pain after a two-month treatment with PEA. The authors pointed out that although a placebo effect might play a role in the reported pain relief, the changes in neurophysiological measures clearly indicated that PEA exerted a positive action on the myelinated fibre groups. Sixteen men and fourteen women with two major types of neuropathic pain refractory to analgesic treatment—peripheral diabetic neuropathy (4 men, 7 women) or post-herpetic neuralgia (12 men, 7 women)—with symptoms spanning eight pain categories ("burning", "osteoarticular", "piercing", etc.), who were under prior treatment with pregabalin, were transferred to PEA, after which pregabalin treatment was gradually reintroduced; all were responding well after 45 days and presented significant decreases in pain scores, without drug-drug interactions. In 2013, a meta-review was published on the clinical efficacy and safety of PEA in the treatment of the common cold and influenza, based on reports from six double-blind, randomized, placebo-controlled trials, addressing PEA's proposed anti-inflammatory and retinoprotectant effects. 
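The number-needed-to-treat figure quoted above follows from a simple relation: NNT is the reciprocal of the absolute risk reduction, i.e. the difference in response rates between the control and treatment arms. A minimal sketch in Python, using invented illustrative rates rather than the actual trial data:

def number_needed_to_treat(event_rate_treated, event_rate_control):
    """NNT = 1 / absolute risk reduction (ARR)."""
    arr = event_rate_control - event_rate_treated  # e.g. rate of persistent pain
    if arr <= 0:
        raise ValueError("treatment shows no benefit over control")
    return 1.0 / arr

# Illustrative numbers only (not the 2015 sciatica trial data):
# an ARR of about 0.67 reproduces the reported NNT of 1.5.
print(round(number_needed_to_treat(0.13, 0.80), 1))  # -> 1.5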
In 2019, significant increases in fatty acid amides, including PEA, arachidonoylethanolamide, and oleoylethanolamide, were noted in a Scottish woman with a previously undocumented variant of congenital insensitivity to pain. This was found to result from the combination of a hypomorphic single-nucleotide polymorphism of fatty acid amide hydrolase (FAAH) and a mutation of the pseudogene FAAH-OUT. Although the pseudogene was previously considered to be non-coding DNA, FAAH-OUT was found to be capable of modulating the expression of FAAH, making it a possible future target for novel analgesic/anxiolytic drug development. In 2020, PEA was suggested as a drug that may prove beneficial for the treatment of lung inflammation caused by SARS-CoV-2 infection. A pharmaceutical company called FSD Pharma has entered PEA into a Phase 1 clinical trial under the name FSD-201 and has FDA approval to progress to Phase 2a for this indication. Metabolism PEA is metabolized by the cellular enzymes fatty acid amide hydrolase (FAAH) and N-acylethanolamine acid amide hydrolase (NAAA), the latter of which shows greater specificity for PEA than for other fatty acid amides. Safety PEA is generally considered safe, without known adverse drug reactions (ADRs) or drug interactions. A 2016 study assessing safety claims in sixteen clinical trials, six case reports/pilot studies and a meta-analysis of PEA as an analgesic concluded that, for treatment periods up to 49 days, clinical data argued against serious ADRs at an incidence of 1/200 or greater. A 2016 pooled meta-analysis involving twelve studies found that no serious ADRs were registered and/or reported. No data on interactions with PEA have been reported. Based on its mechanism, PEA may be considered likely to interact with other PPAR-α agonists used to treat high triglycerides; this remains unconfirmed. See also N-Acylethanolamine N-Acylphosphatidylethanolamine References Further reading Biomolecules Lipids Fatty acid amides Endocannabinoids
Palmitoylethanolamide
Chemistry,Biology
2,763
10,418,611
https://en.wikipedia.org/wiki/Photodarkening
Photodarkening is an optical effect observed in the interaction of laser radiation with amorphous media (glasses) in optical fibers. For a long time, such creation of color centers was reported only in glass fibers. Photodarkening limits the density of excitations in fiber lasers and amplifiers. Experimental results suggest that operating in a saturated regime helps to reduce photodarkening. Definition One could expect the term photodarkening to refer to any process by which an object becomes non-transparent (dark) due to illumination with light. Formally, the darkening of a photographic emulsion could also be considered photodarkening. However, recent papers use the term to mean the reversible creation of absorbing color centers in optical fibers. One may expect that the effect is not specific to fibers; therefore, the definition should cover a wide class of phenomena, excluding, perhaps, the non-reversible darkening of photographic emulsions. According to the Encyclopedia of Laser Physics and Technology, photodarkening is the effect whereby the optical losses in a medium grow when the medium is irradiated with light at certain wavelengths. We may also define photodarkening as the reversible creation of absorption centers in optical media upon illumination with light. Photodarkening rate The inverse of the timescale at which photodarkening occurs can be interpreted as the photodarkening rate. Color centers Usually, photodarkening is attributed to the creation of color centers through the resonant interaction of the electromagnetic field with the active medium. Possible mechanisms of photodarkening A phenomenon similar to photodarkening in fibers has also been observed in bulk samples of Yb-doped ceramics and crystals. At high concentrations of excitations, the absorption jumps up, causing an avalanche of broadband luminescence. The increase in absorption can be caused by the formation of color centers by electrons in the conduction band, created by several neighboring excited ions (the energy of one or two excitations is not sufficient to promote an electron into the conduction band). This explains why the rate of darkening is a strong function of the intensity of the exciting beam (as in the case of optical fibers discussed above). In these experiments thermal effects are important; therefore, only the initial stage of the avalanche can be interpreted as photodarkening, and this interpretation has not yet been confirmed. Recent work has pointed out the role of thulium contamination. Through absorption of pump and signal light, and energy transfer from ytterbium, thulium is able to emit UV light, which is known to create color centers in silica glass. Although the actual mechanism of photodarkening is still unknown, a reliable setup for testing the photodarkening properties of different types of fibers has recently been reported. References Optical materials Laser gain media Laser science
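The photodarkening rate defined above can be made concrete with a simple empirical model. Stretched-exponential growth of the excess loss is one form commonly fitted to photodarkening measurements in Yb-doped fibers; the sketch below is purely illustrative, and the parameter values (equilibrium loss, timescale, stretch exponent) are assumptions rather than data for any particular fiber:

import math

def excess_loss_dB_per_m(t_s, alpha_eq=10.0, tau_s=3600.0, beta=0.5):
    """Stretched-exponential model of photodarkening-induced excess loss.

    alpha_eq : equilibrium excess loss (dB/m), assumed value
    tau_s    : characteristic darkening timescale (s); its inverse is
               the photodarkening rate discussed in the text
    beta     : stretch exponent, 0 < beta <= 1 (beta = 1 gives a plain
               exponential)
    """
    return alpha_eq * (1.0 - math.exp(-((t_s / tau_s) ** beta)))

# Excess loss after 1 minute, 1 hour and 10 hours of irradiation:
for t in (60.0, 3600.0, 36000.0):
    print(f"t = {t:7.0f} s  ->  {excess_loss_dB_per_m(t):5.2f} dB/m")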
Photodarkening
Physics
576
2,371,888
https://en.wikipedia.org/wiki/Stereo%20imaging
Stereo imaging refers to the aspect of sound recording and reproduction of stereophonic sound concerning the perceived spatial locations of the sound source(s), both laterally and in depth. An image is considered good if the locations of the performers can be clearly identified, and poor if those locations are difficult to discern. A well-made stereo recording, properly reproduced, can provide good imaging within the front quadrant. More complex recording and reproduction systems such as surround sound and Ambisonics can offer good imaging all around the listener, even including height information. Imaging is usually thought of in the context of recording with two or more channels, though single-channel recording may convey depth information convincingly. See also Panning (audio) Pan law Phantom center External links Online Stereo Imaging Test (LEDR) Stereophonic sound
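Lateral image placement in a two-channel mix is commonly achieved with amplitude panning, and a constant-power pan law keeps the perceived loudness roughly steady as a source moves across the image. The sketch below is a generic illustration of that technique, not a description of any particular recording system:

import math

def constant_power_pan(position):
    """Constant-power pan law.

    position: -1.0 (hard left) through 0.0 (center) to +1.0 (hard right).
    Returns (left_gain, right_gain); L**2 + R**2 == 1 at every position,
    so the total acoustic power stays constant while the image moves.
    """
    theta = (position + 1.0) * math.pi / 4.0  # map position to 0..pi/2
    return math.cos(theta), math.sin(theta)

for pos in (-1.0, -0.5, 0.0, 0.5, 1.0):
    left, right = constant_power_pan(pos)
    print(f"pos {pos:+.1f}: L = {left:.3f}  R = {right:.3f}")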
Stereo imaging
Engineering
173
41,048,986
https://en.wikipedia.org/wiki/Syntrophus%20aciditrophicus
Syntrophus aciditrophicus is a gram-negative and rod-shaped bacterium. It is non-motile, non-spore-forming and grows under strictly anaerobic conditions, thus an obligate anaerobe. It degrades fatty acids and benzoate in syntrophic association with hydrogen-using microorganisms. Its genome was published in 2007. References External links LPSN Type strain of Syntrophus aciditrophicus at BacDive - the Bacterial Diversity Metadatabase Thermodesulfobacteriota Bacteria described in 2001
Syntrophus aciditrophicus
Biology
122
3,232,061
https://en.wikipedia.org/wiki/Isotope%20dilution
Isotope dilution analysis is a method of determining the quantity of chemical substances. In its simplest conception, the method of isotope dilution comprises the addition of known amounts of isotopically enriched substance to the analyzed sample. Mixing of the isotopic standard with the sample effectively "dilutes" the isotopic enrichment of the standard, and this forms the basis for the isotope dilution method. Isotope dilution is classified as a method of internal standardisation, because the standard (an isotopically enriched form of the analyte) is added directly to the sample. In addition, unlike traditional analytical methods which rely on signal intensity, isotope dilution employs signal ratios. Owing to both of these advantages, the method of isotope dilution is regarded as being among the chemistry measurement methods of the highest metrological standing. Isotopes are variants of a particular chemical element which differ in neutron number. All isotopes of a given element have the same number of protons in each atom. The term isotope is formed from the Greek roots isos (ἴσος "equal") and topos (τόπος "place"), meaning "the same place"; thus, the meaning behind the name is that different isotopes of a single element occupy the same position on the periodic table. Early history Analytical application of the radiotracer method is a forerunner of isotope dilution. This method was developed in the early 20th century by George de Hevesy, for which he was awarded the Nobel Prize in Chemistry for 1943. An early application of isotope dilution in the form of the radiotracer method was the determination of the solubility of lead sulphide and lead chromate in 1913 by George de Hevesy and Friedrich Adolf Paneth. In the 1930s, US biochemist David Rittenberg pioneered the use of isotope dilution in biochemistry, enabling detailed studies of cell metabolism. Tutorial example Isotope dilution is analogous to the mark and recapture method, commonly used in ecology to estimate population size. For instance, consider the determination of the number of fish (nA) in a lake. For the purpose of this example, assume all fish native to the lake are blue. On their first visit to the lake, an ecologist adds five yellow fish (nB = 5). On their second visit, the ecologist captures a number of fish according to a sampling plan and observes that the ratio of blue-to-yellow (i.e. native-to-marked) fish is 10:1. The number of fish native to the lake can be calculated using the following equation: nA = nB × (ratio of native to marked fish) = 5 × (10/1) = 50. This is a simplified view of isotope dilution but it illustrates the method's salient features. A more complex situation arises when the distinction between marked and unmarked fish becomes fuzzy. This can occur, for example, when the lake already contains a small number of marked fish from previous field experiments; and vice versa, when the amount of marked fish added contains a small number of unmarked fish. In a laboratory setting, an unknown (the "lake") may contain a quantity of a compound that is naturally present in major ("blue") and minor ("yellow") isotopic forms. A standard that is enriched in the minor isotopic form may then be added to the unknown, which can be subsequently analyzed. 
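The tutorial calculation above is easy to mechanize; the following sketch simply restates the fish example in code (the names and numbers come from the example itself):

def population_from_dilution(n_marked_added, native_to_marked_ratio):
    """Mark-and-recapture estimate: native population equals the number
    of marked fish added times the observed native-to-marked ratio."""
    return n_marked_added * native_to_marked_ratio

# Five yellow fish added; blue-to-yellow ratio observed as 10:1.
print(population_from_dilution(5, 10))  # -> 50 fish native to the lake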
Keeping to the fish analogy, the following expression can be employed: nA = nB × ((RB − RAB)/(RAB − RA)) × ((1 + RA)/(1 + RB)), where, as indicated above, nA and nB represent the number of fish in the lake and the number of fish added to the lake, respectively; RA is the ratio of the native-to-marked fish in the lake prior to the addition of marked fish; RB is the ratio of the native-to-marked fish in the amount of marked fish added to the lake; finally, RAB is the ratio of the native-to-marked fish captured during the second visit. Applications Isotope dilution is almost exclusively employed with mass spectrometry in applications where high accuracy is demanded. For example, all National Metrology Institutes rely significantly on isotope dilution when producing certified reference materials. In addition to high-precision analysis, isotope dilution is applied when low recovery of the analyte is encountered. In addition to the use of stable isotopes, radioactive isotopes can be employed in isotope dilution, which is often encountered in biomedical applications, for example, in estimating the volume of blood. Single dilution method Consider a natural analyte rich in isotope iA (denoted as A) and the same analyte enriched in isotope jA (denoted as B). A known amount of the enriched analyte B is added to the sample. Then, the obtained mixture is analyzed for the isotopic composition of the analyte, RAB = n(iA)AB/n(jA)AB. If the amount of the isotopically enriched substance (nB) is known, the amount of substance in the sample (nA) can be obtained: nA = nB × (x(jA)B/x(jA)A) × ((RB − RAB)/(RAB − RA)). Here, RA is the isotope amount ratio of the natural analyte, RA = n(iA)A/n(jA)A, RB is the isotope amount ratio of the isotopically enriched analyte, RB = n(iA)B/n(jA)B, RAB is the isotope amount ratio of the resulting mixture, x(jA)A is the isotopic abundance of the minor isotope in the natural analyte, and x(jA)B is the isotopic abundance of the major isotope in the isotopically enriched analyte. For elements with only two stable isotopes, such as boron, chlorine, or silver, the above single dilution equation simplifies to the following: nA = nB × ((RB − RAB)/(RAB − RA)) × ((1 + RA)/(1 + RB)). In a typical gas chromatography analysis, isotopic dilution can decrease the uncertainty of the measurement results from 5% to 1%. It can also be used in mass spectrometry (commonly referred to as isotope dilution mass spectrometry or IDMS), in which the isotopic ratio can be determined with a precision typically better than 0.25%. Optimum composition of the blend To a first approximation, the uncertainty of the measurement result is largely determined by the measurement of RAB: u(nA) = |∂nA/∂RAB| × u(RAB). From here, we obtain the relative uncertainty of nA, ur(nA) = u(nA)/nA: ur(nA) = ((RB − RA)/((RAB − RA) × (RB − RAB))) × u(RAB). The lowest relative uncertainty of nA corresponds to the condition when the first derivative with respect to RAB equals zero. In addition, it is common in mass spectrometry that u(RAB)/RAB is constant, and therefore we can replace u(RAB) with RAB. These ideas combine to give d/dRAB [RAB × (RB − RA)/((RAB − RA) × (RB − RAB))] = 0. Solving this equation leads to the optimum composition of the blend AB, i.e., the geometric mean between the isotopic compositions of standard (A) and spike (B): RAB,opt = √(RA × RB). This simplified equation was first proposed by De Bievre and Debus numerically and later by Komori et al. and by Riepe and Kaiser analytically. It has been noted that this simple expression is only a general approximation and it does not hold, for example, in the presence of Poisson statistics or in the presence of strong isotope signal ratio correlation. Double dilution method The single dilution method requires the knowledge of the isotopic composition of the isotopically enriched analyte (RB) and the amount of the enriched analyte added (nB). 
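The single-dilution relation and the optimum-blend rule translate directly into code. A minimal sketch follows, with variable names matching the text; the numeric values are invented for illustration:

import math

def single_dilution_amount(n_B, R_A, R_B, R_AB, x_jA_A, x_jA_B):
    """Single isotope dilution, in the notation of the text:
    nA = nB * (x(jA)B / x(jA)A) * (RB - RAB) / (RAB - RA)."""
    return n_B * (x_jA_B / x_jA_A) * (R_B - R_AB) / (R_AB - R_A)

def optimum_blend_ratio(R_A, R_B):
    """Geometric-mean rule: the blend ratio RAB giving the lowest
    relative uncertainty in nA."""
    return math.sqrt(R_A * R_B)

# Invented two-isotope element; for two isotopes x(jA) = 1/(1 + R).
R_A, R_B = 20.0, 0.05
x_A, x_B = 1.0 / (1.0 + R_A), 1.0 / (1.0 + R_B)
n_B = 0.010   # mol of spike added (invented)
R_AB = 1.2    # measured blend ratio (invented)

print(optimum_blend_ratio(R_A, R_B))                          # -> 1.0
print(single_dilution_amount(n_B, R_A, R_B, R_AB, x_A, x_B))  # -> ~0.0122 mol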
Both of these variables are hard to establish since isotopically enriched substances are generally available in small quantities of questionable purity. As a result, before isotope dilution is performed on the sample, the amount of the enriched analyte is ascertained beforehand using isotope dilution. This preparatory step is called reverse isotope dilution, and it involves a standard of natural isotopic-composition analyte (denoted as A*). First proposed in the 1940s and further developed in the 1950s, reverse isotope dilution remains an effective means of characterizing a labeled material. Reverse isotope dilution analysis of the enriched analyte: nB = nA* × (x(jA)A*/x(jA)B) × ((RA* − RA*B)/(RA*B − RB)). Isotope dilution analysis of the analyte: nA = nB × (x(jA)B/x(jA)A) × ((RB − RAB)/(RAB − RA)). Here RA* (= RA) is the isotope amount ratio of the natural standard and RA*B is the measured ratio of the standard-spike blend. Since the isotopic compositions of A and A* are identical, combining these two expressions eliminates the need to measure the amount of the added enriched standard (nB): nA = nA* × (mB/mB*) × ((RA* − RA*B)/(RA*B − RB)) × ((RB − RAB)/(RAB − RA)), where mB and mB* are the masses of the spike solution blended with the sample and with the natural standard, respectively. The double dilution method can be designed such that the isotopic composition of the two blends, A+B and A*+B, is identical, i.e., RAB = RA*B. This condition of exact-matching double isotope dilution simplifies the above equation significantly: nA = nA* × (mB/mB*). Triple dilution method To avoid contamination of the mass spectrometer with the isotopically enriched spike, an additional blend of the primary standard (A*) and the spike (B) can be measured instead of measuring the enriched spike (B) directly. This approach was first put forward in the 1970s and developed in 2002. Calculations using a calibration curve Many analysts do not employ analytical equations for isotope dilution analysis. Instead, they rely on building a calibration curve from mixtures of the natural primary standard (A*) and the isotopically enriched standard (the spike, B). Calibration curves are obtained by plotting measured isotope ratios in the prepared blends against the known ratio of the sample mass to the mass of the spike solution in each blend. Isotope dilution calibration plots sometimes show nonlinear relationships, and in practice polynomial fitting is often performed to empirically describe such curves. When calibration plots are markedly nonlinear, one can bypass the empirical polynomial fitting and employ the ratio of two linear functions (known as a Padé approximant), which has been shown to describe the curvature of isotope dilution curves exactly. See also Standard addition Internal standard Mass spectrometry Mark and recapture Lincoln index References Further reading Scientific method Scientific techniques Laboratory techniques
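The ratio-of-linear-functions calibration mentioned above is straightforward to fit with ordinary linear least squares after rearrangement. The sketch below uses synthetic data, and the functional form y = (a0 + a1·x)/(1 + b1·x) is one common Padé parameterization assumed here for illustration:

import numpy as np

def fit_pade_1_1(x, y):
    """Fit y = (a0 + a1*x) / (1 + b1*x) by rearranging to the linear
    problem y = a0 + a1*x - b1*(x*y) and solving least squares."""
    design = np.column_stack([np.ones_like(x), x, -x * y])
    (a0, a1, b1), *_ = np.linalg.lstsq(design, y, rcond=None)
    return a0, a1, b1

# Synthetic calibration data: mass ratio of the blends versus measured
# isotope ratio, generated from known coefficients plus a little noise.
rng = np.random.default_rng(1)
x = np.linspace(0.1, 5.0, 12)
y = (0.05 + 2.0 * x) / (1.0 + 0.3 * x) + rng.normal(0.0, 0.002, x.size)

print(fit_pade_1_1(x, y))  # recovers approximately (0.05, 2.0, 0.3)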
Isotope dilution
Chemistry
1,966
21,293,885
https://en.wikipedia.org/wiki/Hoppus
The hoppus cubic foot (or ‘hoppus cube’ or ‘h cu ft’) was the standard volume measurement used for timber in the British Empire and countries in the British sphere of influence before the introduction of metric units. It is still used in the hardwood trade in some countries. This volume measurement was developed to estimate what volume of a round log would be usable timber after processing, in effect attempting to ‘square’ the log and allow for waste. The hoppus ton (HT) was also a traditionally used unit of volume in British forestry. One hoppus ton is equal to 50 hoppus feet or 1.8027 cubic meters. Some shipments of tropical hardwoods, especially shipments of teak from Myanmar (Burma), are still stated in hoppus tons. History The English surveyor Edward Hoppus introduced the unit in his 1736 manual of practical calculations. The tables include reference to stone as well as timber, as stone can similarly suffer wastage during processing into regular pieces. Calculation of timber volume in round logs The following calculation can be used to estimate the usable timber in round logs using a "girth tape" that is calibrated in "quarter-girth inches" (e.g. that shows "12" when measuring a 48-inch-circumference log): Hoppus volume (h ft) = (quarter girth (in))² × length (ft) / 144 = (circumference (ft) / 4)² × length (ft) Equivalents 1 h ft = 1.273 ft³ 27.74 h ft = 1 m³ 1 h ft = 0.03605 m³ See also Board foot Cord (unit) Cubic ton List of unusual units of measurement Units of measurement References Imperial units Units of volume Units of measurement in surveying
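The quarter-girth rule above maps directly onto a small function; the check at the end uses the 1 h ft = 1.273 ft³ equivalence from the table, and the example log dimensions are invented:

def hoppus_feet(girth_in, length_ft):
    """Hoppus volume of a round log.

    girth_in  : circumference of the log, in inches
    length_ft : log length, in feet
    """
    quarter_girth_in = girth_in / 4.0
    return quarter_girth_in ** 2 * length_ft / 144.0

# A log of 48-inch girth and 20-foot length (invented example):
h_ft = hoppus_feet(48.0, 20.0)   # (12**2 * 20) / 144 = 20 hoppus feet
print(h_ft, h_ft * 1.273)        # 20 h ft is about 25.46 true cubic feet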
Hoppus
Mathematics
369
42,936,953
https://en.wikipedia.org/wiki/HR%205401
HR 5401 is a possible astrometric binary star system in the southern constellation of Lupus. With an apparent visual magnitude of 5.83, it is just visible to the naked eye under good seeing conditions. The distance to HR 5401 can be estimated from its annual parallax shift of , yielding a range of 205 light years. It is moving closer to Earth with a heliocentric radial velocity of −30 km/s, and is expected to come within in ~524,000 years. This is an Am star with a stellar classification of A1m A5/7-F2. Lu (1991) lists it as a likely dwarf barium star. It is radiating 13 times the Sun's luminosity from its photosphere at an effective temperature of 7,300 K. This system is a source of X-ray emission which may be coming from the companion. HR 5401 has two visual companions. Component B is a magnitude 11.50 star at an angular separation of along a position angle (PA) of 114°, as of 1999. The second companion, designated component C, is magnitude 11.16 with a separation of at a PA of 164°, as of 2000. References Am stars Astrometric binaries Lupus (constellation) Durchmusterung objects 126504 070663 5401
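The distance quoted above follows from the standard parallax relation d(pc) = 1/p(arcsec). The parallax value itself is elided in the text, but the relation is easy to illustrate: 205 light-years corresponds to a parallax of roughly 15.9 mas, a value back-computed here for illustration rather than taken from a catalogue:

LY_PER_PARSEC = 3.26156

def distance_ly_from_parallax(parallax_mas):
    """Distance in light-years from an annual parallax in milliarcseconds."""
    parsecs = 1000.0 / parallax_mas
    return parsecs * LY_PER_PARSEC

print(round(distance_ly_from_parallax(15.9)))  # -> ~205 light-years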
HR 5401
Astronomy
275
21,510,668
https://en.wikipedia.org/wiki/Pascal%27s%20law
Pascal's law (also Pascal's principle or the principle of transmission of fluid-pressure) is a principle in fluid mechanics given by Blaise Pascal that states that a pressure change at any point in a confined incompressible fluid is transmitted throughout the fluid such that the same change occurs everywhere. The law was established by French mathematician Blaise Pascal in 1653 and published in 1663. Definition Pascal's principle is defined as: A change in pressure at any point in an enclosed incompressible fluid at rest is transmitted undiminished to all points throughout the fluid. Fluid column with gravity For a fluid column in a uniform gravity (e.g. in a hydraulic press), this principle can be stated mathematically as Δp = ρ g Δh, where Δp is the hydrostatic pressure difference between two elevations, ρ is the fluid density, g is the acceleration due to gravity, and Δh is the difference in elevation between the two points. The intuitive explanation of this formula is that the change in pressure between two elevations is due to the weight of the fluid between the elevations. Alternatively, the result can be interpreted as a pressure change caused by the change of potential energy per unit volume of the liquid due to the existence of the gravitational field. Note that the variation with height does not depend on any additional pressures. Therefore, Pascal's law can be interpreted as saying that any change in pressure applied at any given point of the fluid is transmitted undiminished throughout the fluid. The formula is a specific case of the Navier–Stokes equations without inertia and viscosity terms. Applications If a U-tube is filled with water and pistons are placed at each end, pressure exerted by the left piston will be transmitted throughout the liquid and against the bottom of the right piston. (The pistons are simply "plugs" that can slide freely but snugly inside the tube.) The pressure that the left piston exerts against the water will be exactly equal to the pressure the water exerts against the right piston. By using p = F/A we get F1/A1 = F2/A2. Suppose the tube on the right side is made 50 times wider (A2 = 50 × A1). If a 1 N load is placed on the left piston, an additional pressure due to the weight of the load is transmitted throughout the liquid and up against the right piston. This additional pressure on the right piston will cause an upward force which is 50 times bigger than the force on the left piston. The difference between force and pressure is important: the additional pressure is exerted against the entire area of the larger piston. Since there is 50 times the area, 50 times as much force is exerted on the larger piston. Thus, the larger piston will support a 50 N load - fifty times the load on the smaller piston. Forces can be multiplied using such a device. One newton input produces 50 newtons output. By further increasing the area of the larger piston (or reducing the area of the smaller piston), forces can be multiplied, in principle, by any amount. Pascal's principle underlies the operation of the hydraulic press. The hydraulic press does not violate energy conservation, because a decrease in distance moved compensates for the increase in force. When the small piston is moved downward 100 centimeters, the large piston will be raised only one-fiftieth of this, or 2 centimeters. The input force multiplied by the distance moved by the smaller piston is equal to the output force multiplied by the distance moved by the larger piston; this is one more example of a simple machine operating on the same principle as a mechanical lever. A typical application of Pascal's principle for gases and liquids is the automobile lift seen in many service stations (the hydraulic jack). Increased air pressure produced by an air compressor is transmitted through the air to the surface of oil in an underground reservoir. 
The oil, in turn, transmits the pressure to a piston, which lifts the automobile. The relatively low pressure that exerts the lifting force against the piston is about the same as the air pressure in automobile tires. Hydraulics is employed by modern devices ranging from very small to enormous. For example, there are hydraulic pistons in almost all construction machines where heavy loads are involved. Other applications: Force amplification in the braking system of most motor vehicles. Used in artesian wells, water towers, and dams. Scuba divers must understand this principle. Starting from normal atmospheric pressure, about 100 kilopascals, the pressure increases by about 100 kPa for each 10 m increase in depth. Usually Pascal's rule is applied to confined spaces (static flow), but because of the continuous flow process, Pascal's principle can be applied to the oil-lift mechanism (which can be represented as a U-tube with pistons on either end). Pascal's barrel Pascal's barrel is the name of a hydrostatics experiment allegedly performed by Blaise Pascal in 1646. In the experiment, Pascal supposedly inserted a long vertical tube into an (otherwise sealed) barrel filled with water. When water was poured into the vertical tube, the increase in hydrostatic pressure caused the barrel to burst. The experiment is mentioned nowhere in Pascal's preserved works and it may be apocryphal, attributed to him by 19th-century French authors, among whom the experiment is known as crève-tonneau (approx.: "barrel-buster"); nevertheless the experiment remains associated with Pascal in many elementary physics textbooks. See also Pascal's contributions to the physical sciences References Hydrostatics Fluid mechanics Blaise Pascal
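The 50:1 U-tube example above reduces to two lines of arithmetic: equal pressure on both pistons multiplies force by the area ratio, and conservation of energy divides the travel distance by the same factor. A sketch, with the 50x area ratio and 100 cm stroke taken from the example in the text:

def hydraulic_press(input_force_N, area_ratio, input_stroke_cm):
    """Ideal hydraulic press based on Pascal's law.

    Equal pressure on both pistons: F2/A2 = F1/A1 -> F2 = F1 * (A2/A1).
    Energy conservation: F1*d1 = F2*d2 -> d2 = d1 / (A2/A1).
    """
    output_force_N = input_force_N * area_ratio
    output_stroke_cm = input_stroke_cm / area_ratio
    return output_force_N, output_stroke_cm

# 1 N on the small piston, right tube 50 times wider, 100 cm input stroke:
print(hydraulic_press(1.0, 50.0, 100.0))  # -> (50.0, 2.0)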
Pascal's law
Engineering
1,048
2,179,719
https://en.wikipedia.org/wiki/Fortress%20Europe
Fortress Europe () was a military propaganda term used by both sides of World War II which referred to the areas of Continental Europe occupied by Nazi Germany, as opposed to the United Kingdom across the Channel. World War II defences In British phraseology, Fortress Europe meant the battle honour accorded to Royal Air Force and Allied squadrons during the war, but to qualify, operations had to be made by aircraft based in Britain against targets in Germany, Italy and other parts of German-occupied Europe, in the period from the fall of France to the Normandy invasion. Simultaneously, the term Festung Europa was being used by Nazi propaganda, namely to refer to Hitler's and the Wehrmacht's plans to fortify the whole of occupied Europe, in order to prevent an invasion by Allied forces. These measures included the construction of the Atlantic wall, along with the reorganization of the Luftwaffe for air defence. This use of the term Fortress Europe was subsequently adopted by correspondents and historians in the English language to describe the military efforts of the Axis powers at defending the continent from the Allies. Postwar usage Currently, within Europe, the term is used either to describe dumping effect of external borders in commercial matters, or as a pejorative description of the state of immigration into the European Union. This can be in reference either to attitudes towards immigration, to border fortification policies pursued for instance in the Spanish North African enclaves of Ceuta and Melilla or to increasing level of externalization of borders that is used to help prevent asylum seekers and other migrants from entering the European Union. For right-wing and nationalist parties such as the Freedom Party of Austria, 'Fortress Europe' is a positive term. They mostly claim that such a fortress does not really exist yet, and that immigrants can enter Europe far too easily. They often charge the southern states with insufficient border control, claiming that the latter are acting on the knowledge that immigrants tend to be more attracted to western/northern states with more generous welfare systems such as Switzerland, Germany, Austria, and Sweden. Controlled external borders Ceuta and Melilla (Spain) (from Morocco) Italian and Maltese coast (from Libya and Tunisia) Canary Islands (Spain) (from Morocco, Western Sahara and Mauritania) Maritsa (Turkey) (from Near East) Eastern border of the European Union (from Ukraine, Belarus, Moldova, Russia) South-Eastern border of the European Union (from Bosnia and Herzegovina, Serbia, Montenegro, Albania, North Macedonia) Strait of Gibraltar (from Morocco) South Aegean and North Aegean (from Near East) See also Hindenburg Line, German defences on the Western Front of World War I Siegfried Line, German defences against France in World War II Maginot Line, French defenses against Germany constructed for World War II Salpa Line, The last fortified defence line of Finland against the Soviet Union in World War II Iron Curtain, dividing line through Europe during the Cold War References Operation Overlord World War II defensive lines Historic defensive lines World War II propaganda Illegal immigration to Europe Axis powers
Fortress Europe
Engineering
607
73,814,302
https://en.wikipedia.org/wiki/HOPO%2014-1
HOPO 14-1 is an investigational drug product for removing radioactive contaminants from the body. It is an oral capsule designed to act as a defence against radioactive threats such as nuclear power plant accidents or dirty bomb attacks. The active ingredient is the hydroxypyridinone ligand 3,4,3-LI(1,2-HOPO), which is a powerful chelating agent. HOPO 14-1 works by selectively binding to heavy metals in the body and forming a complex that the body can naturally excrete. The drug is also being studied as a treatment for other forms of heavy metal toxicity, including lead poisoning and exposure to gadolinium from MRI contrast agents. HOPO 14-1 was developed at Lawrence Berkeley National Laboratory by actinide chemist Rebecca Abergel. Abergel and former postdoc Julian Avery Rees co-founded HOPO Therapeutics, a company commercializing HOPO 14-1 and other treatments for heavy metal poisoning. References See also Diethylenetriamine pentaacetate Experimental drugs Chelating agents used as drugs Pyridines Carboxamides
HOPO 14-1
Chemistry
232
37,081,647
https://en.wikipedia.org/wiki/Saddleback%20roof
A saddleback roof is usually on a tower, with a ridge and two sloping sides, producing a gable at each end. See also List of roof shapes Saddle roof References Architectural elements Roofs
Saddleback roof
Technology,Engineering
39
2,902,535
https://en.wikipedia.org/wiki/62%20Andromedae
62 Andromedae, abbreviated 62 And, is a single star in the northern constellation Andromeda. 62 Andromedae is the Flamsteed designation; it also bears the Bayer designation of c Andromedae. It is bright enough to be seen by the naked eye, with an apparent magnitude of 5.31. Based upon parallax measurements made during the Gaia mission, it is at a distance of roughly 273 light-years (84 parsecs) from Earth. The star is moving closer to the Earth with a heliocentric radial velocity of −30 km/s, and is predicted to come to within in 1.6 million years. This is an A-type main-sequence star with a stellar classification of A0 V. Abt and Morrell (1995) gave it a class of A1 III, matching a more evolved giant star. The star has 2.42 times the mass of the Sun, about 1.8 times the Sun's radius, and is spinning with a projected rotational velocity of 86 km/s. It is radiating 45 times the Sun's luminosity from its photosphere at an effective temperature of 9,572 K. References A-type main-sequence stars Andromeda (constellation) Andromedae, c BD+46 0552 Andromedae, 62 014212 010819 0670
62 Andromedae
Astronomy
289
45,085,176
https://en.wikipedia.org/wiki/Sutorius%20australiensis
Sutorius australiensis is a species of bolete mushroom found in Australia. It was first described in 1991 as a species of Leccinum, but transferred to the newly created genus Sutorius in 2012. References External links Boletaceae Fungi described in 1991 Fungi of Australia Fungus species
Sutorius australiensis
Biology
61
9,136,981
https://en.wikipedia.org/wiki/Radiogram%20%28message%29
A radiogram is a formal written message transmitted by radio. Also known as a radio telegram or radio telegraphic message, radiograms use a standardized message format, form and radiotelephone and/or radiotelegraph transmission procedures. These procedures typically provide a means of transmitting the content of the messages without including the names of the various headers and message sections, so as to minimize the time needed to transmit messages over limited and/or congested radio channels. Various formats have been used historically by maritime radio services, military organizations, and Amateur Radio organizations. Radiograms are typically employed for conducting Record communications, which provides a message transmission and delivery audit trail. Sometimes these records are kept for proprietary purposes internal to the organization sending them, but are also sometimes legally defined as public records. For example, maritime Mayday/SOS messages transmitted by radio are defined by international agreements as public records. Historical development From 1850 to the mid 20th century industrial countries used the electric telegraph as a long distance person-to-person text message service. A telegraph system consisted of two or more geographically separated stations linked by wire supported on telegraph poles. A message was sent by an operator in one station tapping on a telegraph key, which sent pulses of current from a battery or generator down the wire to the receiving station, spelling out the text message in Morse code. At the receiving station the current would activate a telegraph sounder which would produce a series of audible clicks, and a receiving operator who knew Morse code would translate the clicks to text and write down the message. By the 1870s, most industrial nations had nationwide telegraph networks with telegraph offices in most towns, allowing citizens to send a message called a telegram for a fee to any person in the country. Submarine telegraph cables allowed intercontinental messages called cablegrams. The invention of radiotelegraphy (wireless telegraphy) communication around 1900 allowed telegraph signals to be sent by radio. An operator at a radio transmitter would tap on a telegraph key, turning the transmitter on and off, sending pulses of radio waves through the air, and at the receiving station a radio receiver would receive the pulses and make them audible as a sequence of beeps in the earphone, and the receiving operator would translate the Morse code to text and write it down. High speed systems used paper tape to send and record the message. Guglielmo Marconi's demonstration of transatlantic radiotelegraphy transmission in 1901 showed that the wireless telegraph could be a useful long-distance communication technology which didn't require the costly installation of a telegraph wire. Around 1906 industrial nations began building powerful transoceanic radiotelegraphy stations to communicate with other countries and their overseas colonies. By World War I these were integrated with landline telegraph networks, so citizens could go to a telegraph office and send a person-to-person telegraph message by radio to another country. This was written down on a standardized form called a radiogram. International radiotelegraphy was expensive so radiograms were mostly used for business and commercial communication. The concept of the standard message format originated in the wired telegraph services. 
Each telegraph company likely had its own format, but soon after radio telegraph services began, some elements of the message exchange format were codified in international conventions (such as the International Radiotelegraph Convention, Washington, 1927), and these were then often duplicated in domestic radio communications regulations (such as the FCC in the U.S.) and in military procedure documentation. Military organizations independently developed their own procedures, and in addition to differing from the international procedures, they sometimes differed between different branches of the military within the same country. For example, the publication "Communication Instructions, 1929", from the U.S. Navy Department, includes: One procedure for messages transmitted "in naval form over nonnaval systems" (Part II: Radio, Chapter 15) One procedure for exchanging messages with commercial radio stations (Part II: Radio, Chapter 16, pages 36–37 for examples; see also Part I: Chapter 7) One procedure for messages transmitted within the Navy (Part IV: Procedure and Examples, Chapter 32, especially pages 21 & 22 for the format) One format for exchanging messages between the Army and Navy (Part IV: Appendix A), called the "Joint Army and Navy Radiotelegraph Procedure", with the format shown on page 70. Notable characteristics of radiograms include headers that include information such as the from and to addresses, date and time filed, and precedence (e.g. emergency, priority, or routine), so that the radio operators can determine which messages need to be delivered first during times of congestion. Chronology of the commercial radiogram format International Telegraph Conference (London, 1903; including Order of transmission beginning on page 40) International Telegraph Conference (Paris, 1925) International Radiotelegraph Convention (Washington, 1927) International Radiotelegraph Conference (Madrid, 1932) was redrafted to include general principles common to telegraph, telephone and radio services. Maritime radio service radiotelegrams The message format for communications transmitted to sea-going vessels is defined in Rec. ITU-R M.1171, § 28: radiotelegram begins: from . . . (name of ship or aircraft); number . . . (serial number of radiotelegram); number of words . . . ; date . . . ; time . . . (time radiotelegram was handed in aboard ship or aircraft); service indicators (if any); address . . . ; text . . . ; signature . . . (if any); radiotelegram ends, over Airline Teletype Message The international airline industry continues to use a radioteletype message format originally designed for transmission to Teleprinters, Airline Teletype System, which is now disseminated via e-mail and other modern electronic formats. However, the relationship of the IATA Type B message to other radio telegram message formats is clearly visible in a typical message: QD AAABBCC .XXXYYZZ 111301 ASM UTC 27SEP03899E001/TSTF DL Y NEW BA667/13APR J 319 C1M25VVA4C26 LHR1340 BCN1610 LHRQQQ 99/1 QQQBCN 98/A QQQQQQ 906/PAYDIV B LHRQQQ 999/1 QQQBCN 998/A SI Military radiograms Military organizations have historically used radiograms for transmitting messages. One notable example is the notification of the air raid on Pearl Harbor that brought the United States into World War II. The standard military radiogram format (in NATO allied nations) is known as the 16-line message format, for the manner in which a paper message form is transcribed through voice, Morse code, or TTY transmission formats. 
Each format line contains pre-defined content. When sent as an ACP-126 message over teletype, a 16-line format radiogram would appear similar to this:
RFHT DE RFG NR 114
R 151412Z MAR
FM CG FIFTH CORPS
TO CG THIRD INFDIV
WD GRNC
BT
UNCLAS PLAINDRESS SINGLE ADDRESS MESSAGES WILL BE TRANSMITTED OVER TELETIPWRITER [sic] CIRCUITS AS INDICATED IN THIS EXAMPLE
BT
C WA OVER TELETYPEWRITER
NNNN
Some of the format lines in the above example have been omitted for efficiency. The translation of this abbreviated format follows: This radiotelegraph message format (also "radio teletype message format", "teletypewriter message format", and "radiotelephone message format") and transmission procedures have been documented in numerous military standards, including the World War II-era U.S. Army Manuals TM 11-454 (The Radio Operator), FM 24-5 (Basic Field Manual, Signal Communication), FM 24-6 (Radio Operator's Manual), TM 1-460 (Radiotelephone Procedure), FM 24-18 (Radio Communication), FM-24-19 (Radio Operator's Handbook), FM 101-5-2 (U.S. Army Report and Message Formats), TM 11-380, FM 11-490-7 (Military Affiliate Radio System), AR 105–75, Navy Department Communication Instructions 1929, and their modern descendants in the Allied Communications Procedures, including ACP 124 (messages relayed by telegraphy), ACP 125 (messages relayed by voice), ACP 126 (messages relayed by radio teletype), ACP 127 (messages relayed by automated tape), AR 25–6, U.S. Navy Signalman training courses and others. At one point before World War II, the U.S. FCC defined (at least for domestic police radio traffic) a station serial number as a sequential message number that was reset at the beginning of each calendar month. The Communications Standard Dictionary defines radiotelegraph message format as "The prescribed arrangement of the parts of a message that has been prepared for radiotelegraph transmission." MARS radiograms The Military Affiliate Radio System uses radiograms, or MARSgrams, to transmit health-and-welfare messages between military members and their families, and also for emergency communications. Some MARS radio procedure documents include instructions on how to exchange ARRL NTS radiograms over a MARS radio net. Both formats include a procedure for counting the number of word groups (words in NTS, groups in the ACP/MARS format), but they differ in how word groups are counted, so the counting method must be resolved when converting messages between formats. U.S. Department of State ACP-127 radiograms The U.S. Department of State uses the military's automated message delivery version of the 16-line format, known as ACP-127, with its own structured definitions of the format lines. Police Radiogram Police radiograms had their own format, likely derived from the commercial radiogram format. Example radiogram from A National Training Manual and Procedural Guide for Police and Public Safety Radio Communications Personnel, 1968:
15 SHRF LEE COUNTY ILL 12-20-66 (A. Preamble)
PD CARBONDALE ILL (B. Address)
DATA AND DISPOSITION RED 62 CHEVROLET (C. Text) 4 DOOR ILL LL1948 VIN 21723T58723 ABANDONED DIXON ILLINOIS THREE DAYS HELD ANDREWS GARAGE FRONT END DAMAGED NOT DRIVEABLE NO APPREHENSIONS WILL BE RELEASED TO OWNER ON PROOF OF OWNERSHIP
SHERIFF LEE COUNTY ILLINOIS JRM 1530 CST (D. Signature)
Section A6.6 Message Form From the above training manual: A formal message is one constructed, transmitted and recorded according to a standard prescribed form (see Sec. 4). 
A formal message should contain the following essential P A R T S:
Preamble - message number, point of origin or agency identifier, date.
Address - to whom the message is directed.
Reference - to previous message, if any.
Text - the message.
Signature or Authority - department requesting the message.
ARRL radiogram An ARRL radiogram is an instance of formal written message traffic routed by a network of amateur radio operators through traffic nets, called the National Traffic System (NTS). It is a plaintext message, along with relevant metadata (headers), that is placed into a traffic net by an amateur radio operator. Each radiogram is relayed, possibly through one or more other amateur radio operators, to a radio operator who volunteers to deliver the radiogram content to its destination. VOA Radiogram VOA Radiogram was an experimental Voice of America program, aired from 2012 to 2017, which broadcast digital text and images via shortwave radiograms. This digital stream can be decoded using a basic AM shortwave receiver and freely downloadable software of the Fldigi family. This software is available for Windows, Apple (macOS), Linux, and FreeBSD systems. The mode used most often on VOA Radiogram, for both text and images, was MFSK32, but other modes were occasionally transmitted. Broadcasts were made via the Edward R. Murrow transmitting station in North Carolina. Due to the retirement of Dr. Kim Andrew Elliott from VOA and VOA's decision not to replace his role with the program, VOA Radiogram's final airing was on June 17–18, 2017; however, Elliott has continued to air radiograms via commercial shortwave stations under the name "Shortwave Radiogram." References Radio communications ITU-R recommendations
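The rigid line ordering of the 16-line military format shown earlier lends itself to simple tooling. Below is a minimal sketch that assembles a message in the same shape as the ACP-126 example; it is a toy formatter for illustration only, not an implementation of the official procedure, and it reproduces only a subset of the format lines:

def format_radiogram(to_call, de_call, serial, precedence, dtg,
                     from_addr, to_addr, text, group_count="GRNC"):
    """Assemble a radiogram in the general shape of the 16-line format.

    Toy sketch only: real ACP-124/126 procedure defines many more
    format lines and rules than are reproduced here.
    """
    lines = [
        f"{to_call} DE {de_call} NR {serial}",   # call-up and station serial
        f"{precedence} {dtg}",                   # precedence and date-time group
        f"FM {from_addr}",                       # originator
        f"TO {to_addr}",                         # addressee
        f"WD {group_count}",                     # word/group count
        "BT",                                    # break before the text
        text,
        "BT",                                    # break after the text
        "NNNN",                                  # end of message
    ]
    return "\n".join(lines)

print(format_radiogram("RFHT", "RFG", 114, "R", "151412Z MAR",
                       "CG FIFTH CORPS", "CG THIRD INFDIV",
                       "UNCLAS EXERCISE MESSAGE"))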
Radiogram (message)
Engineering
2,599
36,788,725
https://en.wikipedia.org/wiki/Todt%20Battery
The Todt Battery, also known as Batterie Todt, was a battery of coastal artillery built by Nazi Germany during World War II, located in the hamlet of Haringzelles, Audinghen, near Cape Gris-Nez, Pas de Calais, France. The battery consisted of four Krupp guns with a range up to , capable of reaching the British coast, each protected by a bunker of reinforced concrete. Originally to be called Siegfried Battery, it was renamed in honor of the German engineer Fritz Todt, creator of the Todt Organisation. It was later integrated into the Atlantic Wall. The 3rd Canadian Infantry Division attacked the Cape Gris-Nez batteries on 29 September 1944, and the positions were secured by the afternoon of the same day. The Todt battery fired for the last time on 29 September 1944 and was taken hours later by the North Nova Scotia Highlanders that landed in Normandy, as part of the 9th Infantry Brigade, 3rd Canadian Infantry Division, after an intense aerial bombardment, as part of Operation Undergo. History Germany's swift and successful occupation of France and the Low Countries gained control of the Channel coast. Grand Admiral Erich Raeder met Hitler on 21 May 1940 and raised the topic of invasion, but warned of the risks and expressed a preference for blockade by air, submarines and raiders. By the end of May, the Kriegsmarine had become even more opposed to invading Britain following its costly victory in Norway. Over half of the Kriegsmarine surface fleet had been either sunk or badly damaged in Operation Weserübung, and his service was hopelessly outnumbered by the ships of the Royal Navy. In an OKW directive on 10 July, General Wilhelm Keitel requested artillery protection during the planned invasion: OKW Chief of Staff Alfred Jodl set out the OKW proposals for the proposed invasion of Britain in a memorandum issued on 12 July, which described it as "a river crossing on a broad front", irritating the Kriegsmarine. On 16 July 1940 Hitler issued Führer Directive No. 16, setting in motion preparations for a landing in Britain, codenamed Operation Sea Lion. One of the four conditions for the invasion to occur set out in Hitler's directive was the coastal zone between occupied France and England must be dominated by heavy coastal artillery to close the Strait of Dover to Royal Navy warships and merchant convoys. The Kriegsmarines Naval Operations Office deemed this a plausible and desirable goal, especially given the relatively short distance, , between the French and English coasts. Orders were therefore issued to assemble and begin emplacing every Army and Navy heavy artillery piece available along the French coast, primarily at Pas-de-Calais. This work was assigned to the Organisation Todt and commenced on 22 July 1940. By early August 1940, all of the Army's large-caliber railway guns were operational taking advantage of the narrow width of the English Channel in the Pas-de-Calais. Firing sites for these railway guns were quickly set up between Wimereux, in the south, and Calais in the north, along the axis Calais-Boulogne-sur-Mer making the most of the railway tracks entering the dunes and skirting the hills of Boulonnais, before fanning out behind Cape gris-Nez. Other firing locations were set up behind Wissant and near Calais, at the level of the Digue Royale (royal dyke). 
Copied from swing bridges and railway turntables, Vögele rotating tables were assembled, on stabilized or lightly reinforced ground, at the end of these various deviations enabling rapid adjustments and all-round firing of these railway guns. Outside of firing periods, the guns and their accompanying carriages would find refuge in quarries, under the railway tunnels or under one of the three (cathedral-bunkers), reinforced concrete shelters of an ogival shape whose construction began in September 1940. Six 28 cm K5 pieces and a single K12 gun, with a range of , could only be used effectively against land targets. Thirteen and five pieces, plus additional motorized batteries comprising twelve 24 cm guns and ten 21 cm weapons. The railway guns could be fired at shipping but were of limited effectiveness due to their slow traverse speed, long loading time and ammunition types. Better suited for use against naval targets were the heavy naval batteries that began to be installed around the end of July 1940. First came the Siegfried Battery at Audinghen, south of Cape gris-Nez, (later increased to 4 and renamed Todt Battery). Four naval batteries were operational by mid-September 1940: Friedrich August with three barrels; Prinz Heinrich with two 28 cm guns; Oldenburg with two 24 cm weapons and, largest of all, Siegfried (later renamed Batterie Todt) with a pair of guns. While the bombing of Britain intensified during the Blitz, Hitler issued his Directive No. 21 on 18 December 1940 instructing the Wehrmacht to be ready for a quick attack to commence his long-planned invasion of the Soviet Union. Operation Sea Lion lapsed, never to be resumed. On 23 September 1941, Hitler ordered all Sea Lion preparations to cease. Most historians agree Sea Lion would have failed regardless, because of the weaknesses of German sea power, compared to the Royal Navy . On 23 March 1942, days after the British raid on the German coastal radar installation at Bruneval, Hitler issued Führer Directive No. 40, which called for the creation of an "Atlantic Wall", an extensive system of coastal defenses and fortifications, along the coast of continental Europe and Scandinavia as a defense against an anticipated Allied invasion of Nazi-occupied Europe from the United Kingdom. The manning and operation of the Atlantic Wall was administratively overseen by the German Army, with some support from Luftwaffe ground forces. The fortification of the Atlantic coast, with a special attention to ports, was accelerated in the aftermath the British amphibious attack on the heavily defended Normandie dry dock at St Nazaire during Operation Chariot on 28 March 1942. The Führer Directive No. 51 definitely confirmed the defensive role of the batteries of the Cape Gris-Nez on 3 November 1943. Description Built on the small plateau of Haringzelles, located 3 km southeast of Cape gris-Nez, the Todt battery consisted of four casemates. Each casemate consisted of two parts: the firing chamber which housed the 38 cm SK C/34 naval guns under an armored turret, designated as Bettungsschiessgerüst C/39, and, on two floors, one of which was underground, the ammunition bunkers and all the facilities needed for the ammunition, the machinery and the crew. The casemates are 47 meters long, 29 wide and 20 high, 8 of which are underground. The reinforced concrete walls and roof are 3.5 m thick to be able to resist 380 mm shells, ordinary 4000 pound bombs or 2000 pound armor-piercing bombs. 
The casemates were distributed along an arc of a circle with a radius of about 400 meters. In addition to the large-caliber guns, this battery also commanded the following weapon systems and buildings: 14 passive bunkers, four barracks, a belt of 15 "Tobruks" (small stand-alone bunkers, with a hole at the top, usually manned by two people that served as an observation post or machine gun nest), three bunkers with anti-tank guns facing south and directed towards the interior of the coast, nine pieces of anti-aircraft guns of French origin, installed at the center of the battery, a drinking water pumping station, a hospital bunker and a pre-existing farm, between casemate 2 and casemate 3, integrated into the defensive system to serve as barracks and an observation post. Each casemate had a buffer stock of propelling charges and shells but relied on two separated ammunition bunkers located near the hamlet of Onglevert, located east of the battery Todt. Each casemate was connected to these ammunition bunkers (30 x 20 x 5 m) by a truck road and by a network of Decauville-type narrow-gauge tracks. These two large constructions were made up of 6 cells arranged on either side of a corridor closed at each end by a heavy double-winged armored door. They were integrated into the strongpoint Wn Onglevert, renamed Wn 183 Eber from 1944. The battery fired its first shell on 20 January 1942, although it was only officially opened in February 1942 in the presence of Admirals Karl Dönitz and Erich Raeder. Originally to be called Siegfried Battery, it was renamed in honor of the German engineer Fritz Todt, creator of the Todt Organisation and responsible for the construction of the Atlantic Wall, who died on 8 February 1942 in a plane crash days before the battery's inauguration after meeting with Hitler at his Eastern Front military headquarters ("Wolf's Lair") near Rastenburg in East Prussia. This decision was materialized by embossed 1.50-m high letters, displayed on Casemate 3. Hitler visited the Todt battery on 23 December 1940. In 1941, the battery was initially codenamed 18. When integrated into the Atlantic wall, the Todt Battery, its close-combat defensive positions and its anti-aircraft guns formed the strongpoint Stützpunkt (StP) 213 Saitenspiel in 1943, renamed StP 166 Saitenspiel in 1944. Construction Before 1940, Haringzelles consisted of three farmsteads bordered by low walls and bushes. The occupants left shortly after the German engineers chose the site to build the Todt Battery. German troops transplanted mature trees from the forests of Boulogne-sur-Mer and Desvres to camouflage the construction operations. According to the post-war accounts of Franz Xavier Dorsch who supervised the construction of the Todt battery, the construction was divided into 2 phases. First, the guns were to be ready to fire within 8 weeks, with half of its auxiliary facilities ready but without any protective cover in reinforced concrete. The battery was then to be completed in its entirety as soon as possible, without specifying an exact date, while maintaining, at all time, the gun capability to fire from their 60 mm-thick armored turrets. The Organization Todt began the groundwork at the battery in July 1940 and began to build in August 1940 the firing platforms with circular parapets for the rotation of the armored C/39 firing platform with its 38 cm SK C/34 naval guns. 
Dorsch estimated the number of workers employed by the Organisation Todt for the construction of the heavy coastal batteries between Boulogne-sur-Mer and Calais at 12,000–15,000. About 9,000 of them were Germans. According to Dorsch, the firing platforms and all the facilities needed for the ammunition, the machinery and the crew were finished in eight weeks and three days. Winston Churchill, in his book "The Second World War", recorded that the British had already identified that the Todt, Friedrich August, Grosser Kurfürst, Prinz Heinrich and Oldenburg batteries, together with fourteen other 17-cm guns, were "by the middle of September [1940] mounted and ready for use in this region alone", around Calais and Cape Gris-Nez. Dorsch considered that three factors contributed to making the battery combat-ready in about two months. Firstly, most of the workers could be immediately accommodated in the Nissen huts of the former British camp at Étaples, about 15 km southwest of Boulogne. Secondly, the camouflage of the construction site was kept minimal given the size of the future casemates, which allowed the swift progress of the construction. Thirdly, suitable construction aggregates were found in large quantities within a radius of about 15 km from the site. The Organisation Todt had to improve the road network in the surrounding area to transport the building materials with up to 1,200 heavy trucks. A dedicated road was built between the construction site and the largest source of gravel, in the quarries of Hidrequent-Rinxent near Marquise, avoiding towns where possible and including a new bridge over the Boulogne–Calais road so as not to disrupt traffic on this strategic route. The road from the train station of Wimereux to Audinghen had to be upgraded to allow the transport of the guns. Two Sd.Kfz. 9 half-tracks towed the guns, weighing more than 70 tons, loaded on Culemeyer-type heavy trailers developed by the Gothaer Waggonfabrik, with 48 wheels on 12 axles and a capacity of up to 100 tons. The Organisation Todt could also use a fully equipped sawmill in Outreau, south of Boulogne-sur-Mer, to produce the large quantities of formwork needed for the reinforced concrete structures and to transport it to the construction site. The formwork for the ceiling of each casemate was supported by temporary falsework above the firing platform, which had to remain combat-ready during construction. This falsework was removed once the reinforced concrete had hardened sufficiently to support itself, and was reused to build the next casemate of the battery. The casemates were completed in November 1941; each SK (Sonderkonstruktion) casemate required 12,000 cubic meters of concrete and 800 tonnes of reinforcing bars. No shots were fired by the battery between September 1940 and January 1942. Firing chamber The pivot of the armored turreted 38 cm SK C/34 naval gun was at the center of a vast open circular room with an internal diameter of 29 m, under an 11-meter-high ceiling. Two continuous concrete benches run along the rear wall of the casemate. The lower one supports the rotating turret. The railroad track connecting the casemate to the main ammunition bunkers at Onglevert arrived at the level of the higher bench through two 2-meter-wide openings. Between the two benches runs a circular corridor equipped with two concentric Decauville-type rails. 
The inner track supported the rollers of the turret loading crane, while the second track was used to move trolleys with shells and propelling charges. Two passages gave access for servicing the shaft. The embrasure of the casemate allowed a 120° rotation of the turret and a −4° to 60° elevation for the gun. This large embrasure was protected, on its sides, by 4 cm thick armored plates following as closely as possible the shape of the rotating turret and, on its upper part, by a "Todt front" reinforced with thick steel plates, removed by scrap metal dealers after the war. Garrison The Kriegsmarine maintained a separate coastal defense network during World War II. In early 1940 it began to reorganize coastal defense around several sea defense zones, established to protect the large amount of coastline which Germany had acquired after invading the Low Countries, Denmark, Norway, and France. Logistically, the sea defense zones and the Navy's separate coastal defense network were strictly a Navy command, but they were eventually integrated into the Atlantic Wall, which was generally overseen by the German Army. The Todt battery was under the orders of the Seekommandant Pas-de-Calais, Vice Admiral Friedrich Frisius, who also commanded the other coastal batteries. The 242nd Coastal Artillery Battalion of the Kriegsmarine (Marine-Artillerie-Abteilung 242 – MAA 242) manned the battery with a garrison of some 390 men (4 officers, 49 NCOs and 337 sailors). The battery was commanded from 1940 to 1942 by Kapitänleutnant MA Wilhelm Günther and from 1942 until its capture on 29 September 1944 by Oberleutnant MA Klaus Momber. Fire control The casemates were not equipped with sighting equipment. The firing coordinates were given to the casemates by the fire control post located in a Regelbau S100 bunker along the shoreline at Cran-aux-Oeufs, north of the battery. The command center, two personnel bunkers and a water reservoir with its close-combat defensive positions at Cran-aux-Oeufs formed the strongpoint Widerstandsnest (Wn) 166a Seydlitz. This command center was equipped with a 10.5-meter optical coincidence rangefinder under a steel cupola. A direction finder and an active ranging radar, a FuMO 214 Würzburg Riese, were installed on top of one of the personnel bunkers. Target information was also provided both by spotter aircraft and by naval radar sets installed at Cap Blanc-Nez and Cap d'Alprech, south of Outreau, known as DeTe-Gerät (decimetric telegraphy devices). These units were capable of detecting targets out to a range of , including small British patrol craft inshore of the English coast. Two additional radar sites were added by mid-September 1940: a DeTe-Gerät at Cap de la Hague and a FernDeTe-Gerät long-range radar at Cap d'Antifer near Le Havre. 380-mm cannons The 38 cm SK C/34 naval gun was developed by Germany in the mid-to-late 1930s to arm the Bismarck-class battleships. Bismarck's and Tirpitz's main battery consisted of eight 38 cm SK C/34 guns in four twin turrets. As with other German large-caliber naval rifles, these guns were designed by Krupp and featured sliding-wedge breechblocks, which required brass cartridge cases for the propellant charges. Under optimal conditions, the rate of fire was one shot every 18 seconds, or just over three per minute. Under battle conditions, Bismarck averaged roughly one round per minute in her battle with HMS Prince of Wales and HMS Hood. 
The Kriegsmarine also planned to use these naval guns as the armament of the three planned battlecruisers, with a displacement of 35,400 tons, which were tentatively named "O", "P" and "Q". The ships' main armament batteries were to have consisted of six 38 cm SK C/34 guns mounted in three twin turrets. By 1940, project drawings for the three battlecruisers were complete. They were reviewed by both Hitler and Admiral Raeder, both of whom approved. However, beyond "initial procurement of materials and the issuance of some procurement orders", the ships' keels were never laid. In large part, this was due to severe material shortages, especially of high-grade steel, since there were more pressing needs for these materials in the war effort. Besides, the dockyard personnel necessary for the ships' construction were by then occupied with more pressing work, primarily on new U-boats. Spare guns were used as coastal artillery in Denmark, Norway and France. The coastal defense version of the SK C/34 was modified with a larger chamber to handle the increased amount of propellant used for the special long-range Siegfried shells. Gander and Chamberlain quote a weight of only for these guns, presumably accounting for the extra volume of the enlarged chamber. These guns used an armored single mount, the Bettungsschiessgerüst (firing platform) C/39. It had a maximum elevation of 60° and could traverse up to 360°, depending on the emplacement. The C/39 mount had two compartments; the upper housed the gun and its loading equipment, while the lower contained the ammunition hoists, their motors, and the elevation and traverse motors. The mount was fully powered and had an underground magazine. C/39 mounts were also installed at the Hanstholm fortress in Denmark and the Vara fortress in Kristiansand, Norway. Plans were made to install two of these mounts at Cap de la Hague and two at Paimpol in France, modifying guns originally intended for an abortive refit of Gneisenau, but they were not executed for unknown reasons. Work on putting two more mounts at Oxsby in Denmark was well advanced but incomplete by the end of the war. Some modified SK C/34 guns also saw service as 38 cm Siegfried K (E) railway guns, one of which was captured by American forces during the Rhône Valley campaign in 1944. Like the 38 cm SK C/34 naval guns deployed for coastal defense, the 38 cm Siegfried K guns were modified with a larger chamber to handle the increased amount of propellant used for the special long-range Siegfried shells. The gun could not traverse on its mount, relying instead on moving along a curving section of track or on a Vögele turntable to aim. The Todt battery was equipped with four 38 cm SK C/34 naval guns and their corresponding C/39 firing platforms. With a range up to , the guns were capable of reaching Dover and the British coast, just over 30 km from Cape Gris-Nez. Normally these mounts were placed in open concrete barbettes, relying on their armor for protection, but Hitler thought that this was not enough protection for the Todt Battery and ordered a 3.5-meter-thick concrete casemate built over and around the mounts. This had the unfortunate effect of limiting their traverse to 120°. The guns of the Todt Battery weighed 105.3 tons and had a total length of . The barrel was progressively rifled with 90 right-handed twisted grooves. Although the range of gun elevation was −4° to 60°, loading had to be performed horizontally, i.e. at an elevation of 0°. 
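To put the quoted ranges in rough perspective, the following minimal vacuum-trajectory sketch may help; it is illustrative only, and the muzzle velocity used (about 1,050 m/s, a typical published figure for the light long-range shell) is an assumption, not a value taken from this article. Comparing the drag-free result with the "over 34 miles" (roughly 55 km) quoted in the Ammunition section below shows how much of the theoretical range is lost to aerodynamic drag at these velocities.

```python
import math

# Illustrative vacuum-trajectory sketch. V0 is an assumed muzzle velocity
# (roughly that of the light long-range Siegfried shell), not a figure taken
# from this article. Without drag, such a gun would carry roughly twice the
# ~55 km actually achieved.
G = 9.81      # gravitational acceleration, m/s^2
V0 = 1050.0   # assumed muzzle velocity, m/s

def vacuum_range(v0: float, elevation_deg: float) -> float:
    """Range of a drag-free projectile over flat ground, in meters."""
    theta = math.radians(elevation_deg)
    return v0 ** 2 * math.sin(2.0 * theta) / G

print(f"drag-free range at 45 deg: {vacuum_range(V0, 45.0) / 1000:.0f} km")  # ~112 km
print(f"drag-free range at 60 deg: {vacuum_range(V0, 60.0) / 1000:.0f} km")  # ~97 km, the mount's maximum elevation
```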
In 1949, France exchanged three German 38 cm SK C/34 naval guns from the Todt Battery for three French 380 mm/45 Modèle 1935 naval guns intended for the battleship Jean Bart. These French guns had originally been transported to Norway following the decision in March 1944 to install them, using the C/39 armored single mounts, in the Vardåsen coastal battery at Nøtterøy (M.K.B. 6/501 "Nötteröy"). Ammunition The 38 cm SK C/34 guns of the Todt battery could fire five types of shells, four of which were developed by the Kriegsmarine and one by the Heer. The Kriegsmarine shells weighed and had a range of with an initial speed of . A lighter version was developed for the coastal batteries to increase the operational life of the barrel from about 200 rounds to 350 rounds. Developed by the Wehrmacht, the Siegfried shell (German: Siegfried Granate) was almost 40 percent lighter and could be fired with a reduced charge out to . With a full charge it reached and could travel – over 34 miles. The Kriegsmarine shells were fired with a single standard charge, divided into two parts for easier handling: a main charge (Hauptkartusche) and a forecharge (Vorkartusche). Fitted with a C/12 nASt percussion primer, the main charge, referenced as 38 cm Hülsenkartusche 34, weighed . It was high and was, at its base, in diameter. Weighing , the forecharge was high and had a diameter of . The propelling charge for the Siegfried shell (Siegfried Ladung) also came in two parts, allowing firing with a light load (the Siegfried Hauptkartusche alone) or with a full load (the Siegfried Hauptkartusche with its forecharge, the Siegfried Vorkartusche). The Siegfried Hauptkartusche weighed and its forecharge . In both cases, the main charge was in the form of a yellow brass casing while the additional load was contained in a fiber-reinforced cellulose bag. The loading was carried out in the following order: shell, Vorkartusche, then Hauptkartusche. Service history Although the guns were already operational in September 1940, the battery went into action for the first time two days after its inauguration ceremony, on 12 February 1942, providing counter-battery fire to support the return of the battleships Gneisenau and Scharnhorst (the two Scharnhorst-class ships), the heavy cruiser Prinz Eugen and their escorts to German bases through the English Channel. They were not silenced until 1944, when the batteries were overrun by Allied ground forces. They caused 3,059 alerts, 216 civilian deaths, and damage to 10,056 premises in the Dover area. However, despite firing on frequent slow-moving coastal convoys, often in broad daylight, for almost the whole of that period (there was an interlude in 1943), there is no record of any vessel being hit by them, although one seaman was killed and others were injured by shell splinters from near misses. Capture Following the success of Operation Overlord and the break-out from Normandy, the Allies judged it essential to silence the German heavy coastal batteries around Calais, which could threaten Boulogne-bound shipping and bombard Dover and inland targets. In 1944 the Germans had 42 heavy guns in the vicinity of Calais, including five batteries of cross-channel guns: the Todt Battery (four guns), (four guns at Sangatte), (150 mm guns near Wissant), (four guns) and (three guns). The Germans had broken the drainage systems, flooding the hinterland, and added large barbed wire entanglements, minefields and blockhouses. The first attempt by elements of the 7th Canadian Infantry Brigade to take Cape Gris-Nez, from 16 to 17 September, failed. 
As part of Operation Undergo, the 3rd Canadian Infantry Division led the attack on the two heavy batteries at Cape Gris-Nez which threatened the sea approaches to Boulogne. The plan devised by General Daniel Spry was to bombard them from land, sea and air to "soften up" the defenders, even if it failed to destroy the defenses. Preceded by local bombardments to keep the defenders under cover until too late to be effective, infantry assaults would follow, accompanied by flame-throwing Churchill Crocodiles to act as final "persuaders". Kangaroo armored personnel carriers would deliver infantry as close to their objectives as possible. The 9th Canadian Infantry Brigade, with armoured support from the 1st Hussars (6th Armoured Regiment), was deployed to Cape Gris-Nez to take the three remaining heavy batteries. They were also supported by the British 79th Armoured Division and its mine flail tanks, Churchill Crocodiles and Churchill AVREs (Armoured Vehicle Royal Engineers), equipped with a 290 mm spigot mortar designed for the quick leveling of fortifications. While the Highland Light Infantry of Canada attacked the batteries at Floringzelle and about north, the North Nova Scotia Highlanders faced the Todt battery, protected by minefields, barbed wire, blockhouses and anti-tank positions. The infantry assault was preceded by two intense aerial bombardments by RAF Bomber Command, by 532 aircraft on 26 September and by 302 bombers on 28 September, which dropped 855 tons of bombs on the Gris-Nez positions. Although these probably weakened the defenses as well as the defenders' will to fight, cratering of the ground impeded the use of armor, causing tanks to bog down. Accurate shooting by the British cross-Channel guns Winnie and Pooh, two BL 14-inch Mk VII naval guns positioned behind St Margaret's, disabled the Grosser Kurfürst battery that could fire inland. On 29 September, the artillery opened fire at and the infantry attack began after ten minutes, behind a creeping barrage that kept the defenders under cover. The Todt battery fired for the last time. The North Nova Scotia Highlanders encountered little resistance, reaching the gun houses without opposition. The concrete walls were impervious even to AVRE petard mortars, but their noise and concussion, along with hand grenades thrown into embrasures, induced the German gunners to surrender by mid-morning. The North Nova Scotia Highlanders continued on to capture the fire control post at Cran-aux-Oeufs. Despite the impressive German fortifications, the defenders refused to fight on and the operation was concluded at relatively low cost in casualties. Post-war and museum In August 1945, two French visitors accidentally triggered a massive explosion in Casemate 3, which pushed out part of the sidewall and caused the ceiling to collapse. Soon after the end of the war, the battery was disarmed. The guns it housed were cut up by scrap merchants. The French Ministry of Armed Forces became the owner of the battery but a few years later sold the land to farmers, who left the bunkers abandoned. The casemates were gradually invaded by wild vegetation and flooded with water. Today, the four casemates are located on private land. They are still visible and accessible. Only Casemate 3, partially destroyed by the 1945 explosion, is not easily accessible. Nature protection area Before World War II, Cape Gris-Nez had a typical agricultural landscape of bocage on the Channel coast. 
The agricultural parcels were delimited by dry stone walls, and hedgerows separated the cultivated areas from the grassland used for grazing sheep and cows. There were no woodlands, and the small farms were all built in depressions, sheltered from the winds. The landscape changed considerably during the Second World War. In August 1940, the German army had Cape Gris-Nez and its surroundings completely evacuated. The local population had to leave and almost all the old buildings were demolished to make way for the construction of offensive military structures in support of Operation Sea Lion and, later, for the construction of the Atlantic Wall. To build these military works, all the dry stone walls and farm buildings were dismantled or demolished. Allied bombing raids took out the remaining buildings. Man-made woods were planted to camouflage these structures, such as the Haringzelles Woods around the Todt Battery. At the end of the war, Cape Gris-Nez looked like a moonscape, dotted with deep holes left by Allied bombing. These bomb craters nowadays shelter ponds suitable for protected amphibians. Several bombed areas were classified as dangerous zones by the French authorities. Plowing the land was forbidden. Large areas were left to pasture. The woods planted by the Germans, also bombed, are still unexploitable today and have been left in the same state since the war. They have since become unique biotopes. Dozens of bunkers of varying sizes were quickly abandoned, demolished or otherwise left to become derelict, allowing nature to slowly take over. Most of the large German military structures were not demolished after the war and became ideal locations for bats to shelter, breed and hibernate during wintertime. In 1963, the site known as "Anse du Cap Gris-Nez" was included in the French inventory of protected sites. As a consequence of the 1973 oil crisis, Prime Minister Pierre Messmer initiated in June 1974 the construction of 13 nuclear power plants aimed at generating all of France's electricity from nuclear power (interview of Pierre Messmer on 3 June 1974, film, on the French government's website). The French electric utility company Electricité de France (EDF) started to look for possible sites in France. In the Pas-de-Calais, the sites of Gravelines, Cape Gris-Nez and Dannes were initially considered, but only the projects at Gravelines and Cape Gris-Nez were further pursued by EDF. At Cape Gris-Nez, the project called for the power plant to be dug into the cliff at Cran-aux-Oeufs. The cooling water was to be pumped from the English Channel and the hot water was to be discharged back into the sea through a canal that would open up at the Tardinghen marshes to the north. In 1976, the project to build the nuclear power station at Cran-aux-Oeufs was finally abandoned, while the Gravelines Nuclear Power Station entered service in 1980. The entire Cape Gris-Nez was finally protected in 1980. The cliffs of Cran-aux-Oeufs and the Haringzelles wood, in which the casemates of the Todt Battery are now scattered, were designated as Natura 2000 sites. They are today part of the protected natural site "Grand Site des Deux Caps", labelled a Grand Site de France since 29 March 2011, and integrated into the larger Parc naturel régional des caps et marais d'Opale created in 2000. Musée du Mur de l'Atlantique Claude-David Davies, the owner of a hotel-restaurant in Wissant, bought the land on which Casemate 1 was located to open it to the public and turn it into a museum. 
The work required to open the site to the public was considerable. Buckets and shovels had to be used to remove years of accumulated mud. The ground was drained and the water pumped out after stopping most of the water infiltration. With the help of several people and after three years of work, the private museum about World War II, the Musée du Mur de l'Atlantique, opened its doors in 1972. An exterior metal staircase, since dismantled, replaced the old concrete one destroyed in 1944; it gave access to the roof, which was surrounded by a guardrail and open to the public. The interior of the casemate has been progressively transformed into showrooms for weapons, various pieces of equipment and even some vehicles such as motorcycles and small trucks. The exhibits today include military hardware, posters and uniforms remembering the Atlantic Wall. Outside the museum, one of the two surviving German Krupp 28 cm K5 railway guns is displayed on an iron track, alongside military vehicles and tanks. At the beginning of the 1980s, the existence of this 28 cm K5(E) Ausführung D (model D) cannon, originally stationed at Fort Nieulay (Stp 89 Fulda) in Calais, became known to the founder of the museum. After years of negotiations with the French army, the K5 cannon was transported in 1992 from the Atelier de Construction de Tarbes (A.T.S) in Tarbes to the north of France. The origin of the cannon is not entirely clear, but it is believed to have been captured in the Montélimar pocket in southern France along with the other guns of Eisenbahn-Batterie 749. Numerous objects from the Second World War are also displayed outside Casemate 1, among them an 8.8 cm Flak 18/36/37/41 anti-aircraft gun, an OT-810 half-track armored personnel carrier (a Czechoslovak post-war version of the SdKfz 251), a 7.5 cm Pak 40 anti-tank gun, a Belgian gate (anti-tank steel fence) and several Czech hedgehogs and anti-tank tetrahedra. Gallery See also Channel Dash Kristiansand Cannon Museum Hanstholm fortress References Bibliography This article was created from the translation of the article Batterie Todt of the French Wikipedia, licensed under the Creative Commons Attribution Share Alike 3.0 Unported and the GNU Free Documentation License. Further reading External links Todt Battery Museum website (in French) Battery Todt on Bunkersite.com Germany 38 cm (14.96") SK C/34 (NavWeps page) Atlantic Wall World War II sites of Nazi Germany World War II sites in France Coastal fortifications Nazi architecture World War II defensive lines Kriegsmarine Military and war museums in France World War II museums Artillery battery fortifications in France
Todt Battery
Engineering
7,195
1,445,976
https://en.wikipedia.org/wiki/Volodymyr%20Savchenko%20%28writer%29
Vladimir Ivanovich Savchenko was a Soviet Ukrainian science fiction writer and engineer. Born on February 15, 1933, in Poltava, he studied at the Moscow Power Engineering Institute and was an electronics engineer. Savchenko, who wrote in Russian, published his first short stories in the late 1950s, and his first novel (Black Stars) in 1960. His works were often self-published. Savchenko also authored several texts about physics and engineering, including the article "Sixteen New Formulas of Physics and Cosmology," which he considered to be his most important scientific text. Savchenko's works have been published in 29 countries and translated into many of the world's languages. He was found dead on January 19, 2005, in Kyiv. He was 71 years old. Biography Savchenko was a graduate of the Moscow Power Engineering Institute. He worked at the V.M. Glushkov Institute of Cybernetics in Kyiv. His first publication, "Toward the Stars" (1955), identified the author as an advocate of science fiction interested in exploring the heuristic potential of the personality. In 1956, Savchenko's story "The Awakening of Professor Bern" was published. Of his publications in his native Ukrainian language, the best known is the story "The Ghost of Time" (1964). In the novel Black Stars, Savchenko investigated the boundaries of traditional science, putting forward original hypotheses. In particular, Savchenko's 1959 novel The Second Expedition to the Strange Planet (known in English as The Second Oddball Expedition) explored the political nuances revealed by contact with crystalline forms of life. Savchenko positioned himself as an adherent of the cybernetic view of society and the living organism, consistently developing different aspects of the process of self-discovery. After the 1967 publication of the novel Self-Discovery, in which Savchenko warned about the ethical problems involved in the creation of clones, Savchenko occupied a leading position in Soviet science fiction. In 1973, the twenty-five-volume collection Library of Contemporary Fantastic Literature was published under the name Anthology, in which an abbreviated version of Savchenko's programmatic story "The Trial of Truth" appeared. The collection included the best publications of the most prominent and popular science fiction authors of the United States, Great Britain, Japan, and France. In Savchenko's story, the protagonist, Dmitri Kaluzhnikov, makes a fundamental discovery that leads to the merging of Dmitri's individuality with an intelligent substance, with the ensuing catastrophic effect of the creation of a new Tunguska meteorite. The story became a cult text for an entire generation of young Soviet engineers. In the late period of his work, Savchenko focused on the biological side of the phenomenon of the Übermensch, as can be seen in the story "Confused" (1983). Also widely known is the novel Over the Pass (1984), which explored the Communist future of Earth. On Wednesday, January 19, 2005, he was found dead in his apartment in Kyiv. Since 2005, the International Assembly of Fantastic Literature Authors, which convenes every year in Kyiv under the name "Portal," has awarded a prize in Savchenko's name, called "Self-Discovery," to works which show a writer's qualitative growth. 
Literary Awards Prize at the Detgiz RSFSR contest for the novel Black Stars (1960, Moscow) "Chumatskii Way" prize for the story "The Kidnappers' Essences" (1989, Kyiv) "Great Circle" award for the novel Position in the Universe (1994) "Philosopher's Stone Award" (2002, "Golden Bridge" Fourth Festival of the Fantastic in Kharkiv, Ukraine) "Aelita" award for contribution to Russian-language fiction (2003) List of Works Novels Black Stars (1956) Second Expedition to the Strange Planet (1959) Self-Discovery (1967) Dead End (1972) Meeters (1980) The Success Algorithm (1983) Over the Pass (1984) The Kidnappers' Essence (1988) Position in the Universe (1992) A Time of Great Negations (2002) Expository Technical Writing Semiconductors at Launch (1958) Technology and Properties of Microelectronic Diode Rays (1965) "Sixteen new formulas of physics and cosmology. Universal correlation Field Activity (U-field), manifesting itself as universal communication variables and phenomena" (1992) Available Publications in English Self-Discovery. New York, Macmillan, 1979. Success Algorithm in New Soviet Science Fiction. New York, Collier Books, 1980. (hardcover) and (paperback) Mixed Up in Red Star Tales: A Century of Russian and Soviet Science Fiction. Montpelier, VT, RIS Publications, 2015. References External links 1933 births 2005 deaths Electronics engineers Writers from Kyiv Ukrainian science fiction writers Soviet science fiction writers Soviet male writers Engineers from Kyiv
Volodymyr Savchenko (writer)
Engineering
1,029
46,589,402
https://en.wikipedia.org/wiki/WR%20102
WR 102 is a Wolf–Rayet star in the constellation Sagittarius, an extremely rare star on the WO oxygen sequence. It is a luminous and very hot star, highly evolved and close to exploding as a supernova. Discovery WR 102 was first mentioned as the possible optical counterpart of the peculiar X-ray source GX 3+1. However, it became clear that it was a separate object, and in 1971 it was highlighted as a luminous star with unusual O VI emission lines in its spectrum. It was classified as a WC star, an unusual one because of the highly ionised emission lines, and not the central star of a planetary nebula. It was seen to vary in brightness and was given the variable star designation V3893 Sagittarii in the 62nd name-list of variable stars. Faint nebulosity was discovered around WR 102 in 1981 and was identified as a wind-blown bubble. In 1982, a set of five luminous stars with highly ionised oxygen emission lines, including WR 102, was used to define the WO class of Wolf–Rayet stars. They were identified as highly evolved massive stars. Features WR 102, of spectral classification WO2, is one of the very few known oxygen-sequence Wolf–Rayet stars, with just four in the Milky Way galaxy and nine in external galaxies. It is also one of the hottest known, with a surface temperature estimated at 210,000 K. Modelling the atmosphere gives a luminosity around , while calculations from brightness and distance give a luminosity of (assuming a temperature of about 200,000 K) with a distance of . WR 102 was likely born from the OB association Sagittarius OB5. It is a very small, dense star, with a radius around and a mass of . Very strong stellar winds with a terminal velocity of 5,000 kilometers per second cause WR 102 to lose /year. For comparison, the Sun loses (2–3) × 10^−14 solar masses per year due to its solar wind, several hundred million times less than WR 102. These winds and the strong ultraviolet radiation from the hot star have compressed and ionised the surrounding interstellar material into a complex series of arcs described as the bubble type of Wolf–Rayet nebula. Evolutionary status WO stars are the last evolutionary stage of the most massive stars before they explode as supernovae. It is very likely that WR 102 is in the last stages of nuclear fusion, near or beyond the end of helium burning. It has been calculated that WR 102 will explode as a supernova within 1,500 years. High mass and rapid rotation would make a gamma-ray burst (GRB) possible, but it is unclear if WR 102 is rotating rapidly. It was previously thought that the projected rotation velocity within the stellar wind could be as fast as 1,000 km/s, but spectropolarimetric observations seem to indicate that if WR 102 is rotating, it is rotating at a much lower speed. See also WR 142 WR 30a WR 93b List of supernova candidates References Sagittarius (constellation) Wolf–Rayet stars Sagittarii, V3893
WR 102
Astronomy
643
20,611
https://en.wikipedia.org/wiki/Monophyly
In biological cladistics for the classification of organisms, monophyly is the condition of a taxonomic grouping being a clade – that is, a grouping of organisms which meets two criteria: (1) the grouping contains its own most recent common ancestor (or more precisely an ancestral population), i.e. it excludes non-descendants of that common ancestor; and (2) the grouping contains all the descendants of that common ancestor, without exception. Monophyly is contrasted with paraphyly and polyphyly as shown in the second diagram. A paraphyletic grouping meets criterion (1) but not (2), thus consisting of the descendants of a common ancestor, excepting one or more monophyletic subgroups. A polyphyletic grouping meets neither criterion, and instead serves to characterize convergent relationships of biological features rather than genetic relationships – for example, night-active primates, fruit trees, or aquatic insects. As such, the characteristic features of a polyphyletic grouping are not inherited from a common ancestor, but evolved independently. Monophyletic groups are typically characterised by shared derived characteristics (synapomorphies), which distinguish organisms in the clade from other organisms. An equivalent term is holophyly. The word "mono-phyly" means "one-tribe" in Greek. These definitions have taken some time to be accepted. When the cladistics school of thought became mainstream in the 1960s, several alternative definitions were in use. Indeed, taxonomists sometimes used terms without defining them, leading to confusion in the early literature, a confusion which persists. The first diagram shows a phylogenetic tree with two monophyletic groups. The several groups and subgroups are particularly situated as branches of the tree to indicate ordered lineal relationships between all the organisms shown. Further, any group may (or may not) be considered a taxon by modern systematics, depending upon the selection of its members in relation to their common ancestor(s); see second and third diagrams. Etymology The term monophyly, or monophyletic, derives from the two Ancient Greek words (), meaning "alone, only, unique", and (), meaning "genus, species", and refers to the fact that a monophyletic group includes organisms (e.g., genera, species) consisting of all the descendants of a unique common ancestor. Conversely, the term polyphyly, or polyphyletic, builds on the Ancient Greek prefix (), meaning "many, a lot of", and refers to the fact that a polyphyletic group includes organisms arising from multiple ancestral sources. By comparison, the term paraphyly, or paraphyletic, uses the Ancient Greek prefix (), meaning "beside, near", and refers to the situation in which one or several monophyletic subgroups are left apart from all other descendants of a unique common ancestor. That is, a paraphyletic group is nearly monophyletic, hence the prefix. Definitions On the broadest scale, definitions fall into two groups. Willi Hennig (1966:148) defined monophyly as groups based on synapomorphy (in contrast to paraphyletic groups, based on symplesiomorphy, and polyphyletic groups, based on convergence). Some authors have sought to define monophyly more broadly, to include paraphyly, as any two or more groups sharing a common ancestor. However, this broader definition encompasses both monophyletic and paraphyletic groups as defined above. Therefore, most scientists today restrict the term "monophyletic" to refer to groups consisting of all the descendants of one (hypothetical) common ancestor. 
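To make this restricted sense concrete, the following minimal sketch (purely illustrative; the tree and taxon names are hypothetical) tests whether a chosen set of leaf taxa is monophyletic in a rooted tree, i.e. whether it equals the full set of descendants of its most recent common ancestor.

```python
# Hypothetical rooted tree, given as parent -> children.
TREE = {
    "root": ["A", "X"],
    "X": ["B", "Y"],
    "Y": ["C", "D"],
}

def leaves(node):
    """All leaf taxa descending from (and including) a node."""
    children = TREE.get(node, [])
    if not children:
        return {node}
    return set().union(*(leaves(c) for c in children))

def parent_of(node):
    return next((p for p, cs in TREE.items() if node in cs), None)

def mrca(taxa):
    """Most recent common ancestor of a set of leaf taxa."""
    taxa = list(taxa)
    # Path from the first leaf up to the root; depth grows toward the root.
    path, n = [], taxa[0]
    while n is not None:
        path.append(n)
        n = parent_of(n)
    depth = {node: i for i, node in enumerate(path)}
    ancestor = taxa[0]
    for t in taxa[1:]:
        n = t
        while n not in depth:          # climb until the paths meet
            n = parent_of(n)
        if depth[n] > depth[ancestor]:  # keep the meeting point nearest the root
            ancestor = n
    return ancestor

def is_monophyletic(taxa):
    # Criterion (2) above: the group equals *all* descendants of its MRCA.
    return leaves(mrca(taxa)) == set(taxa)

print(is_monophyletic({"C", "D"}))       # True: a clade
print(is_monophyletic({"B", "C", "D"}))  # True: all leaves under node X
print(is_monophyletic({"A", "B"}))       # False: leaves some descendants out
```

On this toy tree, {C, D} and {B, C, D} are clades, while {A, B} omits descendants of its common ancestor and so fails the test, mirroring the paraphyly contrast described above.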
However, when considering taxonomic groups such as genera and species, the common ancestor is more appropriately thought of as a population than as an individual. Assuming that it would be one individual or mating pair is unrealistic for sexually reproducing species, which are by definition interbreeding populations. Monophyly (or holophyly) and associated terms are restricted to discussions of taxa, and are not necessarily accurate when used to describe what Hennig called tokogenetic relationships – now referred to as genealogies. Some argue that using a broader definition, such as a species and all its descendants, does not really work to define a genus. The loose definition also fails to recognize the relations of all organisms. According to D. M. Stamos, a satisfactory cladistic definition of a species or genus is impossible because many species (and even genera) may form by "budding" from an existing species, leaving the parent species paraphyletic; or the species or genera may be the result of hybrid speciation. The concepts of monophyly, paraphyly, and polyphyly have been used in deducing key genes for the barcoding of diverse groups of species. See also Clade Crown group Glossary of scientific naming Monotypic taxon Paraphyly Polyphyly References External links Phylogenetics
Monophyly
Biology
1,022
27,775,027
https://en.wikipedia.org/wiki/Lars%20Bergstr%C3%B6m%20%28physicist%29
Lars Bergström (born 1952) is a Swedish professor of theoretical physics specializing in astroparticle physics at Stockholm University, AlbaNova campus. He is a member of the Royal Swedish Academy of Sciences and since 2004 has served as the secretary of the Nobel Committee for Physics. Education and Academic Career Bergström received his PhD in 1981 from the Royal Institute of Technology, with a thesis titled "Aspects of bound states in hadron physics". After a postdoctoral fellowship at CERN, he was appointed docent in theoretical physics at the Royal Institute of Technology. Afterwards, he was appointed professor of theoretical physics at Uppsala University, before becoming associate professor at Stockholm University in 1995. From 2008 to 2014 he served as director of the Oskar Klein Centre for Cosmoparticle Physics. Contributions Bergström has worked at the interface of particle physics, astrophysics and cosmology. He has collaborated in numerous international experiments, including AMANDA, IceCube and Fermi. His contributions have been especially important in the field of indirect dark matter detection, through the search for annihilation products of dark matter in the Universe. Together with Paolo Gondolo, Joakim Edsjö, Piero Ullio, Mia Schelke and Edward Baltz, he developed DarkSUSY, a widely used numerical package for neutralino dark matter calculations. Bergström has also made important contributions to the field of supersymmetry, particularly the study of supersymmetric dark matter candidates. He has published over 100 papers in peer-reviewed journals. Papers Papers listed on the Smithsonian/NASA Astrophysics Data System (ADS) Books "Cosmology and Particle Astrophysics"; Bergström with Ariel Goobar, 2nd ed. Springer (2004). References External links Oskar Klein Centre's web site Physics Department of Stockholm University 1952 births Living people Swedish physicists Stockholm University alumni Members of the Royal Swedish Academy of Sciences Theoretical physicists People associated with CERN
Lars Bergström (physicist)
Physics
388
53,855,642
https://en.wikipedia.org/wiki/Maurice%20Zeeman
Maurice G. Zeeman is an American toxicologist who worked for the Environmental Protection Agency as chief of the Environmental Effects Branch. He also represented the US at the OECD and chaired its Working Group of the National Co-ordinators of the Test Guidelines Programme (WNT), stepping down in 2004. He is an elected Fellow of the American Association for the Advancement of Science. References Year of birth missing (living people) Living people Fellows of the American Association for the Advancement of Science American toxicologists
Maurice Zeeman
Environmental_science
105
48,738,733
https://en.wikipedia.org/wiki/Crouton%20%28computing%29
Crouton (ChromiumOS Universal Chroot Environment) is a set of scripts which allows Ubuntu, Debian, and Kali Linux systems to run in parallel with a ChromeOS system. Crouton uses a chroot instead of dual-booting, allowing a user to run two environments at the same time: ChromeOS and a desktop environment of the user's choice. At Google I/O 2019, Google announced that all Chromebooks shipped from that year onward would be Linux-compatible out of the box. Usage Crouton requires the user to switch their ChromeOS device to Developer Mode. This requires a full "Powerwash" of the device and enables the use of special commands in the crosh terminal. Although many Linux distributions can be installed through Crouton, none of them are officially supported by their developers when run this way. While Crostini has become an officially supported way to run Linux applications, many people still prefer Crouton because it allows the user to run a full desktop environment. References External links Crouton on GitHub Crouton on reddit Crouton Central on Google forums Crouton Users on Google+ Communities Linux software
Crouton (computing)
Technology
237
317,695
https://en.wikipedia.org/wiki/Moulting
In biology, moulting (British English), or molting (American English), also known as sloughing, shedding, or in many invertebrates, ecdysis, is a process by which an animal casts off parts of its body to serve some beneficial purpose, either at specific times of the year, or at specific points in its life cycle. In medieval times, it was also known as "mewing" (from the French verb "muer", to moult), a term that lives on in the name of Britain's Royal Mews, where the King's hawks used to be kept during moulting time before the buildings became horse stables after Tudor times. Moulting can involve shedding the epidermis (skin), pelage (hair, feathers, fur, wool), or other external layer. In some groups, other body parts may be shed, for example, the entire exoskeleton in arthropods, including the wings in some insects. Examples In birds In birds, moulting is the periodic replacement of feathers by shedding old feathers while producing new ones. Feathers are dead structures at maturity which are gradually abraded and need to be replaced. Adult birds moult at least once a year, although many moult twice and a few three times each year. It is generally a slow process: birds rarely shed all their feathers at any one time. The bird must retain sufficient feathers to regulate its body temperature and repel moisture. The number and area of feathers that are shed varies. In some moulting periods, a bird may renew only the feathers on the head and body, shedding the wing and tail feathers during a later moulting period. Some species of bird become flightless during an annual "wing moult" and must seek a protected habitat with a reliable food supply during that time. While the plumage may appear thin or uneven during the moult, the bird's general shape is maintained despite the loss of apparently many feathers; bald spots are typically signs of unrelated illnesses, such as gross injuries, parasites, feather pecking (especially in commercial poultry), or (in pet birds) feather plucking. Some birds will drop feathers, especially tail feathers, in what is called a "fright moult". The process of moulting in birds is as follows: first, the bird begins to shed some old feathers, then pin feathers grow in to replace the old ones. As the pin feathers become full feathers, other feathers are shed. This is a cyclical process that occurs in many phases. It is usually symmetrical, with feather loss equal on each side of the body. Because feathers make up 4–12% of a bird's body weight, it takes a large amount of energy to replace them. For this reason, moults often occur immediately after the breeding season, but while food is still abundant. The plumage produced during this time is called postnuptial plumage. Prenuptial moulting occurs in red-collared widowbirds, where the males replace their nonbreeding plumage with breeding plumage. It is thought that large birds can advance the moult of severely damaged feathers. Determining the process birds go through during moult can be useful in understanding breeding, migration and foraging strategies. One non-invasive method of studying moult in birds is through field photography. The evolutionary and ecological forces driving moult can also be investigated using intrinsic markers such as stable hydrogen isotope (δ2H) analysis. In some tropical birds, such as the common bulbul, breeding seasonality is weak at the population level; instead, moult can show high seasonality, with individuals probably under strong selection to match moult with peak environmental conditions. 
A 2023 paleontological analysis concluded that moulting probably evolved late in the evolutionary lineage of birds. Forced moulting In some countries, flocks of commercial layer hens are force-moulted to reinvigorate egg-laying. This usually involves complete withdrawal of their food and sometimes water for 7–14 days, or up to 28 days under experimental conditions, which presumably reflect standard farming practice in some countries. This causes a body weight loss of 25 to 35%, which stimulates the hen to lose her feathers but also reinvigorates egg production. Some flocks may be force-moulted several times. In 2003, more than 75% of all flocks in the US were force-moulted. Other methods of inducing a moult include low-density diets (e.g. grape pomace, cotton seed meal, alfalfa meal) or dietary manipulation to create an imbalance of particular nutrients. The most important among these is the manipulation of minerals, including sodium (Na), calcium (Ca), iodine (I) and zinc (Zn), with full or partially reduced dietary intakes. In reptiles and amphibians Squamates periodically engage in moulting, as their skin is scaly. The most familiar example of moulting in such reptiles is when snakes "shed their skin". This is usually achieved by the snake rubbing its head against a hard object, such as a rock (or between two rocks) or piece of wood, causing the already stretched skin to split. At this point, the snake continues to rub its skin on objects, causing the end nearest the head to peel back on itself, until the snake is able to crawl out of its skin, effectively turning the moulted skin inside-out. This is similar to how one might remove a sock from one's foot by grabbing the open end and pulling it over itself. The snake's skin is often left in one piece after the moulting process, including the discarded brille (ocular scale), so the moult is vital for maintaining the animal's quality of vision. The skins of lizards, in contrast, generally fall off in pieces. Both frogs and salamanders moult regularly and consume the skin, with some species moulting in pieces and others in one piece. In arthropods In arthropods, such as insects, arachnids and crustaceans, moulting is the shedding of the exoskeleton, which is often called the shell, typically to let the organism grow. This process is called ecdysis. Most Arthropoda with soft, flexible skins also undergo ecdysis. Ecdysis permits metamorphosis, the sometimes radical difference between the morphology of successive instars. A new skin can replace structures, for example by providing new external lenses for the eyes. The new exoskeleton is initially soft but hardens after the moulting of the old exoskeleton. The old exoskeleton is called the exuviae. While moulting, insects cannot breathe. In the crustacean Ovalipes catharus, moulting must occur before mating. In dogs Most dogs moult twice each year, in the spring and autumn, depending on the breed, environment and temperature. Dogs that shed much more than usual are said to be "blowing coat" or "blowing their coats". Gallery See also Abscission (Shedding, more general) References External links Moulting in Pigeons Moulting in Chicken and other fowl Animal developmental biology Skin Ethology
Moulting
Biology
1,524
25,958,257
https://en.wikipedia.org/wiki/Library%20Review%20%28journal%29
Library Review is an academic journal which was established in 1927. The journal focuses on the social sciences, specifically library and information science. It is published nine times a year by Emerald Group Publishing. The editor-in-chief is Judith Broady-Preston (Aberystwyth University). In January 2018, Library Review was renamed Global Knowledge, Memory and Communication (GKMC). References External links Library science journals Information technology management Academic journals established in 1927 English-language journals Emerald Group Publishing academic journals 9 times per year journals
Library Review (journal)
Technology
110
68,237,700
https://en.wikipedia.org/wiki/Hans-Wilhelm%20Knobloch
Hans-Wilhelm Knobloch (18 March 1927, in Schmalkalden – 10 July 2019) was a German mathematician, specializing in dynamical systems and control theory. Although the field of mathematical systems and control theory was already well established in several other countries, Hans-Wilhelm Knobloch and Diederich Hinrichsen were the two mathematicians of most importance in establishing this field in Germany. Education and career After completing undergraduate study in mathematics from 1946 to 1950 at the University of Greifswald, he matriculated at the Humboldt University of Berlin, where he received his PhD in 1950. His thesis Über galoissche Algebren (On Galois algebras) was supervised by Helmut Hasse. After completing his doctorate, Knobloch, with the aid of a scholarship, followed Hasse to the University of Hamburg. In 1952 and 1953 Knobloch held a teaching appointment at the University of Würzburg, after which he was offered a scholarship to complete his habilitation. After completing his habilitation at the University of Würzburg in 1957, he was appointed to a substitute professorship in Münster. He held temporary academic posts at the Technical University of Munich, the University of Michigan from 1962 to 1963, and Denmark's Aarhus University from 1963 to 1965. From 1965 to 1970 he held a full professorship at Technische Universität Berlin. In 1970 he accepted the professorial chair for control theory and dynamical systems at the University of Würzburg, which he held until his retirement as professor emeritus in 1995. In the 1950s Knobloch published several papers in algebra and number theory. In 1958 he published two papers on integral transforms and differential equations. By the 1960s he focused on differential equations and control theory. He made important contributions to the theory of the existence of periodic solutions of non-linear differential equations, the construction of integral manifolds for ordinary differential equations, and necessary higher-order conditions for optimal control problems. In 1983 he was an invited speaker at the International Congress of Mathematicians in Warsaw. Knobloch was the author or co-author of several books and book chapters. His book on ordinary differential equations, co-authored with Franz Kappel, and his book on linear control theory, co-authored with Huibert Kwakernaak, became standard textbooks in Germany. Knobloch promoted interdisciplinary cooperation with engineers and international cooperation among mathematicians. For many years he was one of the organizers of the Oberwolfach workshops, with Peter Sagirow, Manfred Thoma, and Huibert Kwakernaak, on the topic of control theory and, with Rolf Reissig, Jean Mawhin, and Klaus Schmitt, on the topic of ordinary differential equations. Knobloch played a key role in organizing the Equadiff conference held in Würzburg from 23 to 28 August 1982. Selected publications (over 100 citations) article in 2012 reprint Books (1st edition 1974) (1st edition 1983) (pbk reprint of 1985 hbk 1st edition) References External links 1927 births 2019 deaths 20th-century German mathematicians 21st-century German mathematicians Control theorists Dynamical systems theorists University of Greifswald alumni Humboldt University of Berlin alumni Academic staff of Technische Universität Berlin Academic staff of the University of Würzburg People from Schmalkalden
Hans-Wilhelm Knobloch
Mathematics,Engineering
674
36,522,126
https://en.wikipedia.org/wiki/NGC%204700
NGC 4700 is a spiral galaxy located about 50 million light years away in the constellation of Virgo. NGC 4700 was discovered in March 1786 by the British astronomer William Herschel, who noted it as a "very faint nebula". It is a member of the NGC 4699 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. NGC 4700 was imaged by the Hubble Space Telescope in 2012, showing an abundance of star-forming regions similar to the Orion Nebula. Gallery References External links 4700 Barred spiral galaxies Virgo (constellation) 043330
NGC 4700
Astronomy
142
15,193,490
https://en.wikipedia.org/wiki/RING%20finger%20domain
In molecular biology, a RING (short for Really Interesting New Gene) finger domain is a protein structural domain of the zinc finger type which contains a C3HC4 amino acid motif that binds two zinc cations (seven cysteines and one histidine arranged non-consecutively). This protein domain contains 40 to 60 amino acids. Many proteins containing a RING finger play a key role in the ubiquitination pathway. Indeed, proteins with RING finger domains constitute the largest class of ubiquitin ligases in the human genome. Zinc fingers Zinc finger (Znf) domains are relatively small protein motifs that bind one or more zinc atoms, and which usually contain multiple finger-like protrusions that make tandem contacts with their target molecule. They bind DNA, RNA, protein and/or lipid substrates. Their binding properties depend on the amino acid sequence of the finger domains and of the linker between fingers, as well as on the higher-order structures and the number of fingers. Znf domains are often found in clusters, where fingers can have different binding specificities. There are many superfamilies of Znf motifs, varying in both sequence and structure. They display considerable versatility in binding modes, even between members of the same class (e.g. some bind DNA, others protein), suggesting that Znf motifs are stable scaffolds that have evolved specialised functions. For example, Znf-containing proteins function in gene transcription, translation, mRNA trafficking, cytoskeleton organisation, epithelial development, cell adhesion, protein folding, chromatin remodelling and zinc sensing. Zinc-binding motifs are stable structures, and they rarely undergo conformational changes upon binding their target. Some Zn finger domains have diverged such that they still maintain their core structure, but have lost their ability to bind zinc, using other means such as salt bridges or binding to other metals to stabilise the finger-like folds. Function Many RING finger domains simultaneously bind ubiquitination enzymes and their substrates and hence function as ligases. Ubiquitination in turn targets the substrate protein for degradation. Structure The RING finger domain has the consensus sequence C-X2-C-X[9-39]-C-X[1-3]-H-X[2-3]-C-X2-C-X[4-48]-C-X2-C, where: C is a conserved cysteine residue involved in zinc coordination, H is a conserved histidine involved in zinc coordination, Zn is a zinc atom, and X is any amino acid residue. 
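As a rough illustration, the consensus above can be rendered as a sequence pattern. The sketch below is an informal regular-expression reading of the C3HC4 consensus (it is not a validated PROSITE or Pfam pattern, and "." here matches any character rather than strictly the 20 amino acids); the toy sequence is hypothetical, with spacer lengths chosen to satisfy the pattern.

```python
import re

# Informal regex rendering of the C3HC4 consensus given above (a sketch,
# not a curated motif database pattern).
RING_C3HC4 = re.compile(
    r"C.{2}C"      # C1 - X2      - C2
    r".{9,39}C"    #      X(9-39) - C3
    r".{1,3}H"     #      X(1-3)  - H
    r".{2,3}C"     #      X(2-3)  - C4
    r".{2}C"       #      X2      - C5
    r".{4,48}C"    #      X(4-48) - C6
    r".{2}C"       #      X2      - C7
)

def find_ring_motifs(protein_sequence: str):
    """Return (start, end) spans of candidate C3HC4 motifs in a sequence."""
    return [m.span() for m in RING_C3HC4.finditer(protein_sequence)]

# Hypothetical toy sequence built so that one motif is present.
toy = ("MA" + "CAAC" + "A" * 12 + "C" + "AA" + "H" + "AA" + "C"
       + "AA" + "C" + "A" * 6 + "C" + "AA" + "C" + "GG")
print(find_ring_motifs(toy))  # one candidate span
```

Such a pattern search only flags candidates; real annotation additionally depends on the spacing constraints being consistent with a fold that can coordinate the two zinc ions, as shown in the schematic that follows.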
The following is a schematic representation of the structure of the RING finger domain (x denotes a residue of the polypeptide chain; the two zinc ions are each coordinated by four of the conserved ligands in a cross-brace arrangement):

 x x x x x x x x x x x x x x x x x x
   C     C           C     C
 x  \   /  x       x  \   /  x
 x    Zn   x       x    Zn   x
   C /   \ H         C /   \ C
 x x x x x x x x x x x x x x x x x

Examples
RING finger domain
Biology
1,655