Dataset fields per record: id (int64), url (string), text (string), source (string), categories (list), token_count (int64), subcategories (list).
5,444,216
https://en.wikipedia.org/wiki/Atmospheric%20model
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize the equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth (or other planetary body), or regional (limited-area), covering only part of the Earth. Atmospheric models also differ in how they compute vertical fluid motions; some types of models are thermotropic, barotropic, hydrostatic, and non-hydrostatic. These model types are differentiated by their assumptions about the atmosphere, which must balance computational speed against the model's fidelity to the atmosphere it is simulating.

Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly, so numerical methods are used to obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.

Types

Thermotropic

The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height, and thus the baroclinicity in the atmosphere can be simulated using two geopotential height surfaces and the average thermal wind between them.

Barotropic

Barotropic models assume the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height; in other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper-level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and the Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow arctic highs) and warm-core lows (such as tropical cyclones). A barotropic model tries to solve a simplified form of atmospheric dynamics based on the assumption that the atmosphere is in geostrophic balance; that is, that the Rossby number of the air in the atmosphere is small. If the assumption is made that the atmosphere is divergence-free, the curl of the Euler equations reduces to the barotropic vorticity equation. This latter equation can be solved over a single layer of the atmosphere.
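To make the last point concrete, here is a minimal sketch of one explicit time step of the barotropic vorticity equation on a doubly periodic f-plane, where the planetary-vorticity term drops out and the equation reduces to pure advection of relative vorticity, dζ/dt = −J(ψ, ζ) with ζ = ∇²ψ. The grid size, domain length, time step, and random initial field are illustrative assumptions, not values from any operational model.

```python
# One forward-Euler step of the barotropic vorticity equation using FFTs.
# Illustrative sketch only; doubly periodic domain, arbitrary parameters.
import numpy as np

n, L = 64, 1.0e6                        # grid points per side, domain size (m)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
kx, ky = np.meshgrid(k, k)
k2 = kx**2 + ky**2
k2[0, 0] = 1.0                          # avoid divide-by-zero for the mean mode

rng = np.random.default_rng(1)
zeta = 1e-5 * rng.standard_normal((n, n))   # initial relative vorticity (1/s)

def step(zeta, dt):
    zh = np.fft.fft2(zeta)
    psih = -zh / k2                               # invert zeta = Laplacian(psi)
    u = np.real(np.fft.ifft2(-1j * ky * psih))    # u = -dpsi/dy
    v = np.real(np.fft.ifft2(1j * kx * psih))     # v =  dpsi/dx
    dzdx = np.real(np.fft.ifft2(1j * kx * zh))
    dzdy = np.real(np.fft.ifft2(1j * ky * zh))
    return zeta - dt * (u * dzdx + v * dzdy)      # advect the vorticity

zeta = step(zeta, dt=600.0)                       # one 10-minute step
print("max |zeta|:", np.abs(zeta).max())
```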
Since the atmosphere at a height of approximately 5.5 km is mostly divergence-free, the barotropic model best approximates the state of the atmosphere at a geopotential height corresponding to that altitude, which corresponds to the atmosphere's 500 hPa pressure surface.

Hydrostatic

Hydrostatic models filter out vertically moving acoustic waves from the vertical momentum equation, which significantly increases the time step that can be used within the model's run. This is known as the hydrostatic approximation. Hydrostatic models use either pressure or sigma-pressure vertical coordinates. Pressure coordinates intersect topography while sigma coordinates follow the contour of the land. The hydrostatic assumption is reasonable as long as the horizontal grid resolution is not small; at fine horizontal scales (on the order of 10 km or less) the assumption fails. Models which use the entire vertical momentum equation are known as nonhydrostatic. A nonhydrostatic model can be solved anelastically, meaning it solves the complete continuity equation for air while assuming it is incompressible, or elastically, meaning it solves the complete continuity equation for air and is fully compressible. Nonhydrostatic models use altitude or sigma-altitude vertical coordinates. Altitude coordinates can intersect land while sigma-altitude coordinates follow the contours of the land.

History

The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who utilized procedures developed by Vilhelm Bjerknes. It was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. The first computer forecasts were produced on ENIAC in 1950, and more powerful computers later increased the size of initial datasets and included more complicated versions of the equations of motion. In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977. The development of global forecasting models led to the first climate models. The development of limited-area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s. Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points (locations). Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends only to about two weeks into the future, since the density and quality of observations, together with the chaotic nature of the partial differential equations used to calculate the forecast, introduce errors which double every five days. The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and extend weather forecasting farther into the future than otherwise possible.

Initialization

Computation

A model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere. These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future, with each time increment known as a time step. The equations are then applied to this new atmospheric state to find new rates of change, and these new rates of change predict the atmosphere at a yet further time into the future. Time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability. Time steps for global models are on the order of tens of minutes, while time steps for regional models are between one and four minutes.
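The link between grid spacing and time step is the Courant–Friedrichs–Lewy (CFL) stability condition: information must not travel more than about one grid cell per time step. A minimal sketch follows, with assumed (not sourced) grid spacings and wind speed:

```python
# CFL-limited time step: dt <= C * dx / u_max for an explicit scheme.
# The grid spacings and the 100 m/s jet-level wind are illustrative guesses.

def max_stable_timestep(dx_m: float, max_wind_ms: float, courant: float = 0.9) -> float:
    """Largest stable explicit time step under the CFL condition."""
    return courant * dx_m / max_wind_ms

# Hypothetical spacings: ~50 km for a global model, ~5 km for a regional one.
for name, dx in [("global", 50_000.0), ("regional", 5_000.0)]:
    dt = max_stable_timestep(dx, max_wind_ms=100.0)
    print(f"{name}: dx = {dx / 1000:.0f} km -> dt <= {dt:.0f} s")
```

Halving the grid spacing halves the allowable time step, which is why finer regional grids force much shorter steps than global grids.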
The global models are run at varying ranges into the future. The UKMET Unified Model is run six days into the future, the European Centre for Medium-Range Weather Forecasts model is run out to 10 days into the future, while the Global Forecast System model run by the Environmental Modeling Center is run 16 days into the future. The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods, with the exception of a few idealized cases. Therefore, numerical methods are used to obtain approximate solutions. Different models use different solution methods: some global models use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models and other global models usually use finite-difference methods in all three dimensions. The visual output produced by a model solution is known as a prognostic chart, or prog.

Parameterization

Weather and climate model gridboxes have sides ranging from a few kilometres to a few hundred kilometres. A typical cumulus cloud has a scale of less than 1 km, and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned, and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models whose gridboxes have sides of only a few kilometres can explicitly represent convective clouds, although they still need to parameterize cloud microphysics. The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity of 70% for stratus-type clouds, and at or above 80% for cumuliform clouds, reflecting the sub-grid-scale variation that would occur in the real world; a minimal sketch of such a scheme follows at the end of this section. The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parameterized, as this process occurs on the molecular scale. Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account. Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, so they are important to parameterize.
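Here is the promised sketch of a critical-relative-humidity cloud-fraction diagnostic. The linear ramp between the critical value and saturation is an illustrative assumption (real schemes use various shapes); the critical values come from the text above.

```python
# Diagnose gridbox cloud fraction from relative humidity (both 0..1).
# Illustrative linear-ramp scheme; the ramp shape is an assumption.

def cloud_fraction(rh: float, rh_crit: float) -> float:
    """Cloud fraction: 0 at the critical RH, rising linearly to 1 at saturation."""
    if rh <= rh_crit:
        return 0.0
    return min(1.0, (rh - rh_crit) / (1.0 - rh_crit))

# Critical RH values from the text: 70% for stratus, 80% for cumuliform.
print(cloud_fraction(0.85, rh_crit=0.70))  # stratus-type: 0.5
print(cloud_fraction(0.85, rh_crit=0.80))  # cumuliform: 0.25
```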
Domains

The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models are also known as limited-area models, or LAMs. Regional models use finer grid spacing to resolve explicitly smaller-scale meteorological phenomena, since their smaller domain decreases computational demands. Regional models use a compatible global model to supply conditions at the edges of their domain. Uncertainty and errors within LAMs are introduced by the global model used for the boundary conditions at the edge of the regional model, as well as within the creation of the boundary conditions for the LAM itself.

The vertical coordinate is handled in various ways. Some models, such as Richardson's 1922 model, use geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations. This works because pressure decreases monotonically with height through the Earth's atmosphere. The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500 hPa level, and thus was essentially two-dimensional. High-resolution models, also called mesoscale models, such as the Weather Research and Forecasting model, tend to use normalized pressure coordinates referred to as sigma coordinates.

Global versions

Some of the better-known global numerical models are:
GFS – Global Forecast System (previously the AVN), developed by NOAA
NOGAPS – developed by the US Navy to compare with the GFS
GEM – Global Environmental Multiscale Model, developed by the Meteorological Service of Canada (MSC)
IFS – Integrated Forecast System, developed by the European Centre for Medium-Range Weather Forecasts
UM – Unified Model, developed by the UK Met Office
ICON – developed by the German Weather Service (DWD) jointly with the Max Planck Institute for Meteorology, Hamburg; the global NWP model of DWD
ARPEGE – developed by the French weather service, Météo-France
IGCM – Intermediate General Circulation Model
PLAV – vorticity-divergence semi-Lagrangian global atmospheric model, developed by the Hydrometeorological Centre of Russia

Regional versions

Some of the better-known regional numerical models are:
WRF – The Weather Research and Forecasting model was developed cooperatively by NCEP, NCAR, and the meteorological research community. WRF has several configurations, including:
WRF-NMM – The WRF Nonhydrostatic Mesoscale Model is the primary short-term weather forecast model for the U.S., replacing the Eta model.
WRF-ARW – Advanced Research WRF, developed primarily at the U.S. National Center for Atmospheric Research (NCAR)
HARMONIE-Climate (HCLIM) – a limited-area climate model based on the HARMONIE model developed by a large consortium of European weather forecasting and research institutes. It is a model system that, like WRF, can be run in many configurations, including at high resolution with the non-hydrostatic AROME physics or at lower resolutions with hydrostatic physics based on the ALADIN physical schemes. It has mostly been used in Europe and the Arctic for climate studies, including 3 km downscaling over Scandinavia and studies of extreme weather events.
RACMO – developed at the Royal Netherlands Meteorological Institute (KNMI), based on the dynamics of the HIRLAM model with physical schemes from the IFS. RACMO2.3p2 is a polar version of the model, developed at Utrecht University, used in many studies to provide the surface mass balance of the polar ice sheets.
MAR (Modèle Atmosphérique Régional) – a regional climate model developed at the University of Grenoble in France and the University of Liège in Belgium.
HIRHAM5 – a regional climate model developed at the Danish Meteorological Institute and the Alfred Wegener Institute in Potsdam. It is also based on the HIRLAM dynamics, with physical schemes based on those in the ECHAM model. Like the RACMO model, HIRHAM has been used widely in many different parts of the world under the CORDEX scheme to provide regional climate projections. It also has a polar mode that has been used for polar ice sheet studies in Greenland and Antarctica.
NAM – The term North American Mesoscale model refers to whatever regional model NCEP operates over the North American domain. NCEP began using this designation system in January 2005. Between January 2005 and May 2006 the Eta model used this designation. Beginning in May 2006, NCEP began to use the WRF-NMM as the operational NAM.
RAMS – the Regional Atmospheric Modeling System, developed at Colorado State University for numerical simulations of atmospheric meteorology and other environmental phenomena on scales from meters to hundreds of kilometers; now supported in the public domain.
MM5 – The Fifth-Generation Penn State/NCAR Mesoscale Model.
ARPS – the Advanced Regional Prediction System, developed at the University of Oklahoma, is a comprehensive multi-scale nonhydrostatic simulation and prediction system that can be used for regional-scale weather prediction down to tornado-scale simulation and prediction. Advanced radar data assimilation for thunderstorm prediction is a key part of the system.
HIRLAM – High Resolution Limited Area Model, developed by the European NWP research consortium co-funded by 10 European weather services. The mesoscale HIRLAM model is known as HARMONIE and is developed in collaboration with Météo-France and the ALADIN consortium.
GEM-LAM – Global Environmental Multiscale Limited Area Model, the high-resolution version of GEM run by the Meteorological Service of Canada (MSC).
ALADIN – a high-resolution limited-area hydrostatic and non-hydrostatic model developed and operated by several European and North African countries under the leadership of Météo-France.
COSMO – The COSMO Model, formerly known as LM, aLMo or LAMI, is a limited-area non-hydrostatic model developed within the framework of the Consortium for Small-Scale Modelling (Germany, Switzerland, Italy, Greece, Poland, Romania, and Russia).
Meso-NH – The Meso-NH Model is a limited-area non-hydrostatic model developed jointly by the Centre National de Recherches Météorologiques and the Laboratoire d'Aérologie (Toulouse, France) since 1998. Its applications range from mesoscale down to centimetre-scale weather simulations.

Model output statistics

Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS), and were developed by the National Weather Service for their suite of weather forecasting models. The United States Air Force developed its own set of MOS based upon its dynamical weather model by 1983. Model output statistics differ from the perfect-prog technique, which assumes that the output of numerical weather prediction guidance is perfect. MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as for model biases. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several-hour period, expected precipitation amount, chance that the precipitation will be frozen, chance of thunderstorms, cloudiness, and surface winds.
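The MOS idea can be sketched as a regression from raw model output to observations. The following toy example uses synthetic data and made-up coefficients (it is not the NWS system) to fit a linear correction for a station's 2 m temperature:

```python
# Toy MOS regression: fit observed station temperature as a linear function
# of raw model output, so systematic model bias is corrected statistically.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: raw model 2 m temperature and relative humidity
# as predictors, observed station temperature as the predictand.
model_t2m = rng.uniform(260.0, 300.0, size=500)            # kelvin
model_rh = rng.uniform(0.2, 1.0, size=500)                 # fraction
obs_t2m = 0.9 * model_t2m + 5.0 * model_rh + 25.0 + rng.normal(0.0, 1.0, 500)

# Fit the MOS equation obs ~ a*T + b*RH + c by least squares.
X = np.column_stack([model_t2m, model_rh, np.ones_like(model_t2m)])
coeffs, *_ = np.linalg.lstsq(X, obs_t2m, rcond=None)

# Apply the fitted correction to a new raw model forecast.
new_forecast = np.array([285.0, 0.6, 1.0])
print("MOS-corrected temperature (K):", new_forecast @ coeffs)
```

Operational MOS uses many more predictors and a long archive of past forecasts and observations, but the structure is the same: regress observations on model output, then apply the fitted equation to new forecasts.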
Applications

Climate modeling

In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model. Several groups then began working to create general circulation models. The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration. By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature. Several other kinds of computer models gave similar results: it proved impossible to build a model that resembled the actual climate yet did not warm when the CO2 concentration was increased. By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006. In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts. Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.

Limited-area modeling

Air pollution forecasts depend on atmospheric models to provide the fluid-flow information needed to track the movement of pollutants. In 1970, a private company in the U.S. developed the regional Urban Airshed Model (UAM), which was used to forecast the effects of air pollution and acid rain. In the mid- to late 1970s, the United States Environmental Protection Agency took over the development of the UAM and then used the results from a regional air pollution study to improve it. Although the UAM was developed for California, during the 1980s it was used elsewhere in North America, Europe, and Asia. The Movable Fine-Mesh model, which began operating in 1978, was the first tropical cyclone forecast model to be based on atmospheric dynamics. Despite the constantly improving dynamical model guidance made possible by increasing computational power, it was not until the 1980s that numerical weather prediction (NWP) showed skill in forecasting the track of tropical cyclones, and not until the 1990s that NWP consistently outperformed statistical or simple dynamical models.
Predicting the intensity of tropical cyclones using NWP has also been challenging; as of 2009, dynamical guidance remained less skillful than statistical methods.

See also
Atmospheric reanalysis
Climate model
Numerical weather prediction
Upper-atmospheric models
Static atmospheric model
Chemistry transport model

External links
WRF Source Codes and Graphics Software Download Page
RAMS source code available under the GNU General Public License
MM5 Source Code download
The source code of ARPS
Model Visualisation
Atmospheric model
[ "Environmental_science" ]
4,062
[ "Atmospheric models", "Environmental modelling" ]
6,936,288
https://en.wikipedia.org/wiki/Baseball%20telecasts%20technology
The following is a chronological list of the technological advancements of Major League Baseball television broadcasts:

1930s and 1940s

1939
On August 26, the first ever Major League Baseball telecast (the Brooklyn Dodgers vs. Cincinnati Reds from Ebbets Field) was aired by W2XBS, an experimental station in New York City which would ultimately become WNBC-TV. Red Barber called the game without the benefit of a monitor and with only two cameras capturing the game. One camera was on Barber and the other was behind the plate; Barber had to guess which camera was live from its light and where it was pointed. In 1939, baseball games were usually covered by one camera providing a point of view along the third-base line.

1949
Equipment: Three black-and-white cameras, all located on the mezzanine level.
Camera lenses: Fixed, no zoom capabilities.
Replays: None.
Graphics: None.
Audio: One microphone on the play-by-play announcer.

1950s

1951
On August 11, 1951, WCBS-TV in New York City televised the first baseball game in color (the Boston Braves beat the Brooklyn Dodgers 8–1). On October 3 of that year, NBC aired the first coast-to-coast baseball telecast as the Brooklyn Dodgers were beaten by the New York Giants in the final game of a playoff series by the score of 5–4 (on Bobby Thomson's now-legendary home run).

1953
Equipment: Four black-and-white cameras, all located on the mezzanine level.
Camera lenses: Fixed, no zoom capabilities.
Video: Quality of picture improved since the 1940s.
Replays: None.
Graphics: White text containing one line of information.
Audio: One microphone on the play-by-play announcer and one mic suspended from the press box for crowd noise.

1955
1955 marked the first time that the World Series was televised in color.

1957
Equipment: Four cameras on the mezzanine level, with a fifth camera added in center field.
Camera lenses: Three fixed lenses on each camera, rotated manually by the camera operator.
Video: Quality of picture is a very sharp black-and-white.
Replays: None.
Graphics: White text only; information about balls and strikes is added.
Broadcasters: Analysts added to the broadcast alongside the play-by-play announcer.
Audio: One mic suspended from the press box for crowd noise.

1960s

1961
Equipment: Five cameras: four on the mezzanine level and one in center field.
Camera lenses: Zoom capability existed, albeit limited.
Video: Black-and-white picture quality improved.
Replays: Yes; regular speed; no longer than thirty seconds; at a line angle only.
Graphics: White text only, including two lines of text.
Audio: Improved; the audience can now hear the crack of the bat.

1962
On July 23, 1962, Major League Baseball had its first satellite telecast (via the Telstar communications satellite). The telecast included a portion of a contest between the Chicago Cubs and the Philadelphia Phillies from Wrigley Field, with Jack Brickhouse commentating on WGN-TV.

1969
By 1969, the usage of chroma key (in which the commentators would open a telecast by standing in front of a greenscreen composite of the stadium's crowd) had become a common practice for baseball telecasts.
Equipment: Five cameras: four on the mezzanine level and one in center field.
Camera lenses: Zoom capability existed, albeit limited.
Video: Color became an industry standard.
Replays: Yes; regular speed; no longer than thirty seconds; at a line angle.
Graphics: Electronic graphics introduced.
Audio: Improved; the audience can now hear the crack of the bat.
1970s

1974
Equipment: Seven cameras: one each at first base and third base, one at home plate, one in center field, one in left field, and one in each dugout.
Camera lenses: 18×1; the batter can now be seen from head to toe.
Video: Color quality improved since the 1960s.
Replays: Slow motion from all camera angles.
Graphics: Video font with two-color capability.
Audio: Mono; much improved quality; an effects microphone is placed near the field.

1975
In the bottom of the 12th inning of Game 6 of the 1975 World Series at Boston's Fenway Park, Red Sox catcher Carlton Fisk was facing Cincinnati Reds pitcher Pat Darcy. Fisk hit a pitch down the left-field line that appeared to be heading into foul territory. The enduring image of Fisk jumping and waving the ball fair as he made his way to first base is arguably one of baseball's greatest moments. The ball struck the foul pole, giving the Red Sox a 7–6 win and forcing a seventh and deciding game of the Fall Classic. At the time, cameramen covering baseball games were instructed to follow the flight of the ball; reportedly, Fisk's reaction was only recorded because NBC cameraman Lou Gerard, positioned inside Fenway's scoreboard at the base of the left-field Green Monster wall, had become distracted by a large rat. This play was perhaps the most important catalyst in getting camera operators to focus most of their attention on the players themselves.

1980s

1983
On July 6, 1983, NBC televised the All-Star Game from Chicago's Comiskey Park. During the telecast, special guest analyst Don Sutton helped introduce NBC's new pitch-tracking device, dubbed the NBC Tracer: a stroboscopic comet tail showing the path of a pitch to the catcher's glove. For instance, the NBC Tracer helped track a Dave Stieb curveball, among others.

1985
In 1985, NBC's telecast of the All-Star Game from the Metrodome in Minnesota was the first program to be broadcast in stereo by a television network.
Equipment: Eight cameras: one each at first base and third base, one at home plate (a low home angle is added), one each in right field, center field and left field, and one in each dugout.
Camera lenses: 40×1; tight shots of players are routine.
Replays: Super slow-motion replays became a new technology.
Graphics: Computer-generated in multiple colors.
Audio: Mono; much improved quality.

1987
For the 1987 World Series between the Minnesota Twins and St. Louis Cardinals, ABC utilized 12 cameras and nine tape machines, including cameras positioned down the left-field line, on the roof of the Metrodome, and high above third base.

1990s

1990
In 1990, CBS took over from both ABC and NBC as Major League Baseball's national over-the-air television provider. In the process, they brought along their telestration technology, dubbed CBS Chalkboard, which had made its debut eight years earlier during CBS' coverage of Super Bowl XVI.

1992
For CBS' coverage of the 1992 All-Star Game, they introduced Basecam, a lipstick-size camera inside first base.

1993
During CBS' coverage of the 1993 World Series, umpires were upset with the overhead replays being televised by CBS. Dave Phillips, the crew chief, said just prior to Game 2 that the umpires wanted "CBS to be fair with their approach." Rick Gentile, the senior vice president for production of CBS Sports, said that Richie Phillips, the lawyer for the Major League Umpires Association, tried to call the broadcast booth during Saturday's game, but the call was not put through. National League President Bill White, while using a CBS headset in the broadcast booth during Game 1, was overheard talking to Gentile and the producer Bob Dekas.
Richie Phillips apparently was upset when Dave Phillips called the Philadelphia Phillies' Ricky Jordan out on strikes in the fourth inning and a replay showed the pitch to be about 6 inches outside.

1995
April 1995: ESPN debuted in-game box scores during Major League Baseball telecasts. Hitting, pitching and fielding stats from the game are shown along the bottom of the screen three times per game.

1996
Equipment: Ten cameras: eight manned cameras plus two robotic cameras. Six tape machines plus one digital disk recorder.
Camera lenses: 55×1.
Graphics: Computer-generated and in high resolution; the FoxBox is introduced.
Audio: In stereo and surround sound; wireless mics are placed in the bases.

1997
May/June 1997: ESPN debuted MaskCam on an umpire at the College World Series.
On July 8, 1997, Fox televised its first ever All-Star Game (from Jacobs Field in Cleveland). For this particular game, Fox introduced "Catcher-Cam," in which a camera was affixed to the catchers' masks in order to provide unique perspectives of the action around home plate. Catcher-Cam would soon become a regular fixture in Fox's baseball broadcasts. In addition to Catcher-Cam, other innovations (some of which have received more acclaim than others) that Fox has provided for baseball telecasts include:
Sennheiser MKE-2 microphones and SK-250 transmitters in the bases.
Between 12 and 16 microphones throughout the outfield, ranging from Sennheiser MKH-416 shotgun microphones to DPA 4061s with Crystal Partners Big Ear parabolic microphones to Crown Audio PCC160 plate microphones.
The continuous "FoxBox" graphic, which contained the score, inning and other information in an upper corner of the TV screen. Since 2001, the FoxBox has morphed into a strip across the top of the screen, a format later adopted by other sports networks.
Audio cues accompanying graphics, and replays sandwiched between "whooshes."
"Mega Slo-Mo" technology.
Scooter, a cartoony 3-D animated talking baseball (voiced by Tom Kenny) that occasionally appears to explain pitch types and mechanics, purportedly for younger viewers, approximately the 10-to-12-year-olds.
Ball Tracer, a stroboscopic comet tail showing the path of a pitch to the catcher's glove.
Strike Zone, which shows pitch sequences with strikes in yellow and balls in white. It can superimpose a simulated pane of glass that shatters when a ball goes through the zone (à la the computerized scoring graphics used for bowling).
The "high home" camera, positioned high behind home plate. It can trace the arc of a home run and measure the distance the ball traveled. The "high home" camera can also measure a runner's lead off first base, showing in different colors (green, yellow, red) how far off the base, and how far into pickoff danger, the runner is venturing.

2000s

2000
For a Saturday afternoon telecast of a Los Angeles Dodgers/Chicago Cubs game at Wrigley Field on August 26, 2000, Fox aired a special "Turn Back the Clock" broadcast to commemorate the 61st anniversary of the first televised baseball game.
The broadcast started with a re-creation of the television technology of 1939, with play-by-play announcer Joe Buck working alone with a single microphone, a single black-and-white camera, and no graphics; then, each subsequent half-inning would see the broadcast "jump ahead in time" to a later era, showing the evolving technologies and presentation of network baseball coverage through the years.

2001
April 15, 2001: ESPN Dead Center debuted on Sunday Night Baseball with the Texas Rangers versus the Oakland Athletics. This new camera angle, directly behind the pitcher, is used to provide a true depiction of inside/outside pitch location and is used in certain parks in conjunction with K Zone.
July 1, 2001: ESPN's K Zone officially debuted on Sunday Night Baseball.
The FoxBox becomes a banner at the top of the screen.

2002
April 7, 2002: ESPN became the first network to place a microphone on a player during a regular-season baseball game. "Player Mic" was worn by Oakland catcher Ramón Hernández (who also wore "MaskCam"), and taped segments were heard.
May 26, 2002: "UmpireCam" debuted on ESPN, worn by Matt Hollowell behind the plate in the New York Yankees at Boston Red Sox telecast.
In October 2002, Fox televised the first ever World Series to be shown in high definition.

2003
March 30, 2003: ESPNHD, a high-definition simulcast service of ESPN, debuted with the first regular-season Major League Baseball game of the season, Texas at Anaheim.

2004
April 2004: ESPN's Sunday Night Baseball telecasts added a fantasy baseball bottom line, updating viewers on the stats of their rotisserie-league players at 15 and 45 minutes after the hour.
Starting in 2004, some TBS telecasts (mostly Fridays or Saturdays) became more enhanced; the network called the package Braves TBS Xtra. Enhancements included catcher cam; Xtra Motion, which featured the type of pitch and its movement; and a LeadOff Line. It would also show features with inside access to players.
In October 2004, Fox started airing all Major League Baseball postseason broadcasts (including the League Championship Series and World Series) in high definition. Fox also started airing the Major League Baseball All-Star Game in HD the following year. At the same time, the FoxBox and graphics were upgraded.

2005
April 13, 2005: "SkyCam" premiered during Sunday Night Baseball on ESPN. "SkyCam" is mounted more than 20 feet above the stands in foul territory and travels down a designated base path (first or third base line, from behind home plate to the foul pole), capturing overhead views of the action. The remote-controlled camera can zoom, pan and tilt.

2006
April 2, 2006: A handheld camera brings viewers closer to the action for in-game live shots of home run celebrations, managers approaching the mound and more.
May 1, 2006: "K Zone 2.0" debuted on ESPN's Monday Night Baseball.

2007
For their 2007 Division Series coverage, TBS debuted various new looks, such as the first live online views from cameras in dugouts and ones focused on pitchers. TBS also introduced a graphic that creates a rainbow-like trail to trace the arc of pitches on game replays. The graphic was superimposed in the studio so analysts, such as Cal Ripken Jr., could take virtual cuts at pitches thrown in games.

2009
During their 2009 playoff coverage, TBS displayed their PitchTrax graphic full-time during at-bats (with the center-field camera only) during the high-definition version of the broadcast, in the extreme right-hand corner of the screen.
Meanwhile, for their own 2009 playoff coverage, Fox announced that they would occasionally include this stat on replays: the speed of pitches as they leave the pitcher's hand as well as their speed when they cross home plate.

2010s

2010
YES Network and NESN integrated the pitch count into their on-screen graphics. ESPN followed suit, while also reworking their score bug along the lines of Monday Night Football's, now featuring dots instead of numbers to represent the balls, strikes and outs.
The 2010 All-Star Game marked the first time the annual game was shown in 3D. Kenny Albert and Mark Grace had the call.
On September 29, Fox announced plans to use cable-cams for their coverage of the National League Championship Series and World Series. The cable-cams, according to Fox, can roam over the field at altitudes ranging from about 12 to 80 feet above ground. They would be able to provide overhead shots of, among other things, "close plays" at bases and "managers talking to their pitchers on the mound."

2011
With the start of the 2011 postseason, TBS planned to introduce the following:
Bloomberg Stats: TBS would use Bloomberg Stats as a means to integrate comprehensive statistical information into each telecast.
Liberovision: an innovative 3D interactive telestrator meant to give fans a new perspective on instant replays.
New graphics intended to feature improved functionality with a nostalgic feel.
Pitch Trax: an in-game technology that illustrates pitch location throughout the games.
The screen on TBS's standard-definition 4:3 feed now airs a letterboxed version of the native HD feed to match Fox's default widescreen SD presentation, allowing the right-side pitch-tracking graphic to be seen by SD viewers.
For the 2011 World Series, Fox debuted infrared technology designed to pinpoint the heat generated by a ball making contact (with, say, bats, face masks, or players' bodies) and mark the spot for viewers by making it glow. During Game 1, Fox used "Hot Spot" to show that a batted ball was fouled off Texas Rangers batter Adrián Beltré's foot.

2012
Fox's 2012 World Series coverage included a camera whose replays could generate as many as 20,000 frames per second, the most ever seen on Fox, and up from about 60 frames per second on regular replays. The camera would allow viewers "to see the ball compress" when batted, similar to how cameras now show golf balls being compressed when struck. The technology for the camera originated with the U.S. military looking at replays of missile impacts.

2016
At the beginning of the 2016 season, TBS introduced new graphics that were used all season, including the postseason. The score box, which was originally docked to the top and left edges of the screen, was completely redesigned for 2017 after much criticism during the 2016 postseason for its large size. Like the 2016 score bug, the current one still stands in the top left corner, only smaller.

2020s

2020
The 2020 season was delayed until July due to the COVID-19 pandemic. Fox soon announced that it would virtually fill the seats of Chicago's Wrigley Field, Los Angeles' Dodger Stadium, Washington's Nationals Park, San Diego's Petco Park and other ballparks from which it broadcast games over the following several weeks. Announcers later spent time explaining and demonstrating the use of virtual fans during the July 25 game between the Chicago Cubs and Milwaukee Brewers at Wrigley Field.
See also
Digital on-screen graphic
FoxBox (sports)
Instant replay
Major League Baseball television contracts
Score bug

External links
Baseball on TV by Deborah Tudor
Technological Innovations in Sports Broadcasting
Baseball needs to clutch technology
ESPN Retools for Baseball
MLB ALL-STAR GAME ON FOX CHANGES WITH TECHNOLOGY, MORE CABLE
Baseball telecasts technology
[ "Technology" ]
3,734
[ "Science and technology studies", "History of science and technology", "History of technology" ]
6,936,536
https://en.wikipedia.org/wiki/Prandtl%E2%80%93Meyer%20expansion%20fan
A supersonic expansion fan, technically known as a Prandtl–Meyer expansion fan (a two-dimensional simple wave), is a centered expansion process that occurs when a supersonic flow turns around a convex corner. The fan consists of an infinite number of Mach waves, diverging from a sharp corner. When a flow turns around a smooth and circular corner, these waves can be extended backwards to meet at a point. Each wave in the expansion fan turns the flow gradually (in small steps). It is physically impossible for the flow to turn through a single "shock" wave, because this would violate the second law of thermodynamics. Across the expansion fan, the flow accelerates (velocity increases) and the Mach number increases, while the static pressure, temperature and density decrease. Since the process is isentropic, the stagnation properties (e.g. the total pressure and total temperature) remain constant across the fan. The theory was described by Theodor Meyer in his 1908 doctoral thesis, along with his advisor Ludwig Prandtl, who had already discussed the problem a year before.

Flow properties

The expansion fan consists of an infinite number of expansion waves or Mach lines. The first Mach line is at an angle $\mu_1 = \arcsin(1/M_1)$ with respect to the flow direction, and the last Mach line is at an angle $\mu_2 = \arcsin(1/M_2)$ with respect to the final flow direction. Since the flow turns in small angles and the changes across each expansion wave are small, the whole process is isentropic. This simplifies the calculations of the flow properties significantly. Since the flow is isentropic, the stagnation properties such as stagnation pressure ($p_0$), stagnation temperature ($T_0$) and stagnation density ($\rho_0$) remain constant. The final static properties are a function of the final flow Mach number ($M_2$) and can be related to the initial flow conditions as follows, where $\gamma$ is the heat capacity ratio of the gas (1.4 for air):

$$\frac{T_2}{T_1} = \frac{1 + \frac{\gamma-1}{2} M_1^2}{1 + \frac{\gamma-1}{2} M_2^2}, \qquad \frac{p_2}{p_1} = \left(\frac{T_2}{T_1}\right)^{\gamma/(\gamma-1)}, \qquad \frac{\rho_2}{\rho_1} = \left(\frac{T_2}{T_1}\right)^{1/(\gamma-1)}.$$

The Mach number after the turn ($M_2$) is related to the initial Mach number ($M_1$) and the turn angle ($\theta$) by

$$\theta = \nu(M_2) - \nu(M_1),$$

where $\nu(M)$ is the Prandtl–Meyer function. This function determines the angle through which a sonic flow ($M = 1$) must turn to reach a particular Mach number ($M$). Mathematically,

$$\nu(M) = \sqrt{\frac{\gamma+1}{\gamma-1}} \arctan\sqrt{\frac{\gamma-1}{\gamma+1}\left(M^2-1\right)} - \arctan\sqrt{M^2-1}.$$

By convention, $\nu(1) = 0$. Thus, given the initial Mach number ($M_1$), one can calculate $\nu(M_1)$ and, using the turn angle, find $\nu(M_2)$. From the value of $\nu(M_2)$ one can obtain the final Mach number ($M_2$) and the other flow properties. The velocity field in the expansion fan, expressed in polar coordinates $(r, \phi)$ centered on the corner, is given by

$$v_\phi = c, \qquad v_r = \sqrt{2(h_0 - h) - c^2},$$

where $h$ is the specific enthalpy, $h_0$ is the stagnation specific enthalpy, and $c$ is the local speed of sound.
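These relations are straightforward to evaluate numerically. Below is a minimal sketch (using the standard relations above with γ = 1.4; the bisection bounds and the example numbers are illustrative choices) that solves ν(M2) = ν(M1) + θ for M2 and evaluates the static-property ratios:

```python
# Prandtl-Meyer function and the inverse problem: find M2 from M1 and theta.
import math

GAMMA = 1.4  # heat capacity ratio for air

def prandtl_meyer(mach: float, gamma: float = GAMMA) -> float:
    """Prandtl-Meyer function nu(M) in radians, with nu(1) = 0 by convention."""
    g = (gamma + 1.0) / (gamma - 1.0)
    m2m1 = mach * mach - 1.0
    return math.sqrt(g) * math.atan(math.sqrt(m2m1 / g)) - math.atan(math.sqrt(m2m1))

def mach_after_turn(m1: float, theta_rad: float) -> float:
    """Solve nu(M2) = nu(M1) + theta for M2 by bisection (theta < theta_max)."""
    target = prandtl_meyer(m1) + theta_rad
    lo, hi = m1, 1.0e3
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if prandtl_meyer(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def static_ratios(m1: float, m2: float, gamma: float = GAMMA):
    """Return (p2/p1, T2/T1, rho2/rho1) from the isentropic relations above."""
    t = (1.0 + 0.5 * (gamma - 1.0) * m1 * m1) / (1.0 + 0.5 * (gamma - 1.0) * m2 * m2)
    return t ** (gamma / (gamma - 1.0)), t, t ** (1.0 / (gamma - 1.0))

# Example: a Mach 1.5 flow turning through a 20 degree convex corner.
m2 = mach_after_turn(1.5, math.radians(20.0))
p_ratio, t_ratio, rho_ratio = static_ratios(1.5, m2)
print(f"M2 = {m2:.3f}, p2/p1 = {p_ratio:.3f}, T2/T1 = {t_ratio:.3f}")
# M2 comes out near 2.2; pressure and temperature both drop across the fan.
```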
Maximum turn angle

As the Mach number varies from 1 to $\infty$, $\nu$ takes values from 0 to $\nu_\text{max}$, where

$$\nu_\text{max} = \frac{\pi}{2}\left(\sqrt{\frac{\gamma+1}{\gamma-1}} - 1\right).$$

This places a limit on how much a supersonic flow can turn, with the maximum turn angle given by

$$\theta_\text{max} = \nu_\text{max} - \nu(M_1).$$

One can also look at it as follows. A flow has to turn so that it can satisfy the boundary conditions. In an ideal flow, there are two kinds of boundary condition that the flow has to satisfy:

Velocity boundary condition, which dictates that the component of the flow velocity normal to the wall be zero. It is also known as the no-penetration boundary condition.
Pressure boundary condition, which states that there cannot be a discontinuity in the static pressure inside the flow (since there are no shocks in the flow).

If the flow turns enough so that it becomes parallel to the wall, we do not need to worry about the pressure boundary condition. However, as the flow turns, its static pressure decreases (as described earlier). If there is not enough pressure to start with, the flow won't be able to complete the turn and will not be parallel to the wall. This shows up as the maximum angle through which the flow can turn. The lower the initial Mach number (i.e., the smaller $M_1$), the greater the maximum angle through which the flow can turn. The streamline which separates the final flow direction and the wall is known as a slipstream. Across this line there is a jump in the temperature, density and tangential component of the velocity (the normal component being zero). Beyond the slipstream the flow is stagnant (which automatically satisfies the velocity boundary condition at the wall). In the case of a real flow, a shear layer is observed instead of a slipstream, because of the additional no-slip boundary condition.

See also
Gas dynamics
Mach wave
Oblique shock
Shock wave
Shadowgraph technique
Schlieren photography
Sonic boom

External links
Expansion fan (NASA)
Prandtl–Meyer expansion fan calculator (Java applet)
Prandtl–Meyer expansion fan
[ "Physics", "Chemistry", "Mathematics", "Engineering" ]
955
[ "Chemical engineering", "Conservation laws", "Mathematical objects", "Equations", "Aerodynamics", "Aerospace engineering", "Piping", "Fluid dynamics", "Conservation equations", "Symmetry", "Physics theorems" ]
6,937,333
https://en.wikipedia.org/wiki/Franz%20Halberg
Franz Halberg (July 5, 1919 – June 9, 2013) was a scientist and one of the founders of modern chronobiology. He began his experiments in the 1940s and later founded the Chronobiology Laboratories at the University of Minnesota. Halberg also published many papers in the serials of the History Commission of the International Association of Geomagnetism and Aeronomy, as well as in "Wege zur Wissenschaft, Pathways to Science". He was a member of many international bodies, was awarded five honorary doctorates, and was a member of the Leibniz-Sozietät der Wissenschaften zu Berlin. In the 1950s, he introduced the word circadian, which derives from the Latin circa ("about") and diem ("day").

Nominations for the Nobel Prize

Halberg was nominated several times for the Nobel Prize in Physiology or Medicine. In 1988, and again in 1989, upon invitation by Professor Björn Nordenström of the Karolinska Institute in Sweden, then a member of the Nobel committee, Germaine Cornelissen, a close associate of Halberg, nominated Halberg for the prize, highlighting the different ingredients Halberg contributed in developing the discipline of chronobiology. Nordenström had come to the University of Minnesota to give a major lecture and accepted Halberg's invitation to visit his laboratory. The invitation to nominate was extended at the Minneapolis airport, where Halberg and Cornelissen had accompanied Nordenström to continue discussions of work of mutual interest before his return to Sweden. After Nordenström left the committee, Halberg's dossier, assembled by Cornelissen, was handed over to Dr. Dora K. Hayes of the U.S. Department of Agriculture, who had a colleague eligible to make nominations.

External links
Halberg Chronobiology Center at the University of Minnesota
Franz Halberg
[ "Biology" ]
405
[ "Sleep researchers", "Behavior", "Sleep" ]
6,938,470
https://en.wikipedia.org/wiki/Rotary%20vacuum-drum%20filter
A rotary vacuum drum filter consists of a cylindrical filter membrane that is partly submerged in a slurry to be filtered. The inside of the drum is held at a pressure lower than ambient. As the drum rotates through the slurry, the liquid is sucked through the membrane, leaving solids to cake on the membrane surface while the drum is submerged. A knife or blade is positioned to scrape the product from the surface. The technique is well suited to slurries, flocculated suspensions, and liquids with a high solids content, which could clog other forms of filter. It is common to pre-coat the filter with a filter aid, typically diatomaceous earth (DE) or perlite. In some implementations, the knife also cuts off a small portion of the filter media to reveal a fresh media surface that will enter the liquid as the drum rotates. Such systems advance the knife automatically as the surface is removed.

Basic fundamentals

The rotary vacuum drum filter (RVDF), patented in 1872, is one of the oldest filters used in industrial liquid-solids separation. It suits a wide range of industrial processing flow sheets and provides flexible application of dewatering, washing and/or clarification. A rotary vacuum filter consists of a large rotating drum covered by a cloth. The drum is suspended on an axle over a trough containing the slurry, with approximately 50-80% of the screen area immersed in the slurry. As the drum rotates into and out of the trough, the slurry is sucked onto the surface of the cloth and rotated out of the liquid-solids suspension as a cake. As the cake rotates out, it is dewatered in the drying zone: the vacuum continuously sucks on the cake and draws the water out of it. At the final step of the separation, the cake is discharged as solid product, and the drum rotates on into another separation cycle.

Range of application

The rotary filter is most suitable for continuous operation on large quantities of slurry, particularly if the slurry contains a considerable amount of solids, in the range of 15-30%. Examples of pharmaceutical applications include the collection of calcium carbonate, magnesium carbonate and starch, and the separation of mycelia from the fermentation liquor in the manufacture of antibiotics; it is also used in block and instant yeast production.

Advantages and limitations

The advantages and limitations of the rotary vacuum drum filter compared to other separation methods are:

Advantages
The rotary vacuum drum filter is a continuous and automatic operation, so the operating cost is low.
The variation of drum rotation speed can be used to control cake thickness.
The process can be easily modified (e.g. with a pre-coating filter step).
It can produce a relatively clean product when a showering device is added.

Disadvantages
Due to the structure, the pressure difference is theoretically limited to atmospheric pressure (1 bar), and in practice is somewhat lower.
Besides the drum, other accessories are required, for example agitators, a vacuum pump, vacuum receivers and slurry pumps.
The discharged cake contains residual moisture.
The cake tends to crack due to the air drawn through it by the vacuum system, so washing and drying are not efficient.
The vacuum pump consumes considerable energy.

Designs available

There are five types of discharge used for the rotary vacuum drum filter: belt, scraper, roll, string and pre-coat discharge.
Belt discharge
The filter cloth is washed on both sides with each drum rotation while filter cake is discharged. The products of this mechanism are usually sticky, wet and thin, thus requiring the aid of a discharge roll. Belt discharge is used if the slurry has a moderate solids concentration, if it forms a cake easily, or if longer wear resistance of the cloth is desired.

Scraper discharge
This is the standard drum filter discharge. A scraper blade, which serves to redirect the filter cake into the discharge chute, removes the cake from the filter cloth just before it re-enters the vat. Scraper discharge is used if the desired separation requires a high filtration rate, if a heavy-solids slurry is used, if the slurry forms a cake easily, or if longer wear resistance is desired.

Roll discharge
This is a suitable discharge option for cakes that are thin and tend to stick to one another. The filter cake on the drum and the discharge roll are pressed against one another so that the thin filter cake is peeled or pulled from the drum. Removal of solids from the discharge roll is done via a knife blade. Roll discharge is used if the desired separation requires a high filtration rate, if a high-solids slurry is used, if the slurry forms a cake easily, or if the discharged solid is a sticky or mud-like cake.

String discharge
Thin and fragile filter cakes are usually the end products of this discharge type; the materials can change phase from solid to liquid under instability and disturbance. Two rollers guide the strings back to the drum surface, and the filter cake separates from the strings as they pass over the rollers. Applications of string discharge can be seen in the pharmaceutical and starch industries. String discharge is used if the slurry has a high solids concentration, if it forms a cake easily, if the discharged solid is fibrous, stringy or pulpy, or if longer wear resistance is desired.

Pre-coat discharge
This discharge is usually seen where the filter cake would otherwise blind the filter media thoroughly, and in processes with low-solids-concentration slurries. Pre-coat discharge is used for slurries with very low solids concentration that make cake formation difficult, or for slurries that are difficult to filter into a cake.

Main process characteristics and assessment

Generally, the main process in a rotary vacuum drum filter is continuous filtration, whereby solids are separated from liquids through a filter medium by a vacuum. The filter cloth is one of the most important components of a filter and is typically made of woven polymer yarns. Good cloth selection can increase filtration performance. Initially, slurry is pumped into the trough, and as the drum rotates it is partially submerged in the slurry. The vacuum draws liquid and air through the filter media and out through the shaft, forming a layer of cake. An agitator is used to keep the slurry suspended if the texture is coarse and it settles rapidly. Solids trapped on the surface of the drum are washed and dried over about two-thirds of a revolution, removing all the free moisture. During the washing stage, the wash liquid can either be poured onto the drum or sprayed on the cake.
Cake pressing is optional, but its advantages are preventing cake cracking and removing more moisture. Cake discharge is when all the solids are removed from the surface of the cake by a scraper blade, leaving a clean surface as the drum re-enters the slurry. There are a few types of discharge: scraper, roller, string, endless belt and pre-coat. The filtrate and air flow through internal pipes and a valve into the vacuum receiver, where liquid and gas separate, producing a clear filtrate.

Pre-coat filtration is an ideal method to produce high-clarity filtrate. The drum surface is pre-coated with a filter aid such as diatomaceous earth (DE) or perlite to improve filtration and increase cake permeability. It then undergoes the same process cycle as the conventional rotary vacuum drum filter; however, pre-coat filtration uses a higher-precision blade to scrape off the cake.

The filter is assessed by the size of the drum or filter area and its possible output. Typically, the output is given in pounds per hour of dry solids per square foot of filter area. The size of the auxiliary parts depends on the area of the filter and the type of usage. Rotary vacuum filters are flexible in handling a variety of materials, with estimated solids yields from 5 to 200 pounds per hour per square foot. For pre-coat discharge, the output is approximately 2 to 40 gallons per hour per square foot. Filtration efficiency can also be improved, in terms of the dryness of the filter cake, by preventing filtrate liquid from being retained in the filter drum during the filtration phase. Using more filters (for example, running 3 filter units instead of 2) yields a thicker cake and hence a clearer filtrate. This is beneficial in terms of production cost and quality.

Heuristics design process

Basic operation parameters heuristics

Vat level and drum speed are the two basic operating parameters for any rotary vacuum drum filter. These parameters are adjusted together to optimize filtration performance. Vat level determines the proportions of the filter cycle, which consists of the drum's rotation through the slurry (cake formation), the release of the formed cake, and the drying period for the cake. By default, operate the vat at its maximum level to maximise the rate of filtration. Reduce the vat level if the discharged solid is a thin, slimy cake or if the discharged solid is very thick. A decrease in vat level decreases the portion of the drum submerged under the slurry, exposing more surface for cake drying and hence giving a larger drying-to-formation ratio. This results in lower moisture content and a thinner formed cake. In addition, at a lower vat level the flow rate per drum revolution decreases, ultimately giving thinner cake formation; in the case of pre-coat discharge, the filter aid efficiency increases.

Drum speed is the driving factor for filter output, and is expressed in minutes per drum revolution. At steady operating conditions, adjusting the drum speed changes the filter throughput roughly in proportion, as sketched below.
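The following minimal sketch shows how vat level and drum speed together set the per-revolution cake-formation and drying times described above. It is an illustrative heuristic, not a vendor design rule; the submergence fractions and revolution time are assumed values.

```python
# Split one drum revolution into cake-formation and dewatering/drying time.
# Illustrative only: submergence fraction stands in for vat level.

def cycle_times(submergence: float, rev_time_s: float) -> tuple[float, float]:
    """Return (formation_time_s, drying_time_s) for one revolution.

    submergence: fraction of the drum surface under the slurry (vat-level proxy)
    rev_time_s:  seconds per drum revolution (inverse of drum speed)
    """
    form_time = submergence * rev_time_s          # sector rotating through slurry
    dry_time = (1.0 - submergence) * rev_time_s   # sector above the slurry
    return form_time, dry_time

# Lowering the vat level (0.35 vs 0.50 submergence) raises the drying-to-
# formation ratio, giving a thinner, drier cake, as the text describes.
for sub in (0.50, 0.35):
    form, dry = cycle_times(sub, rev_time_s=60.0)
    print(f"submergence {sub:.2f}: form {form:.0f} s, dry {dry:.0f} s, "
          f"dry/form ratio {dry / form:.2f}")
```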
The belt tension, de-mooning bar height, wash water quantity and discharge roll speed are carefully tuned to maintain a good path for the cake and to prevent excessive wear of the filter cloth.

Scraper
Select the filter cloth for good wear resistance and resistance to solids binding. Use moderate blowback pressure to avoid high wear, and keep the blowback duration just long enough to remove the cake from the filter cloth. Tuning of the valve body is important so that the blowback does not force excess filtrate back out of the pipe along with the released cake solids; this minimises wear and filter media maintenance.

Roll
Select the filter cloth for resistance to solids binding and good cake release. A coated fabric gives more effective cake release and a longer-lasting cloth medium. The discharge roll speed and the drum speed must be the same. Adjust the scraper knife to leave a significant heel of cake on the discharge roll, which produces a continuous cake transfer.

String
Minimise the lateral pressure on the strings by adjusting the alignment tine bar, to avoid the strings being cut. Place a ceramic tube over each aligning tine bar to act as a bearing surface for the strings.

Pre coat
Select the filter cloth based on the type of filter aid used (see Filter aid selection), and adjust the advancing knife to optimize the knife advance rate per drum revolution (detailed in the Advance blade section).

Pre coat filter operation heuristics
Filter aid selection: the filter aid is a pre coat cake that acts as the actual filter medium; the two common types are diatomaceous earth and perlite. An important parameter to consider is the penetration of solids into the pre coat cake, which should be limited to about 0.002 to 0.005 inch. If a coarse ("open") grade of filter aid is used, more filter aid is removed with the solids, which leads to a higher disposal cost; if too fine ("tight") a grade is used, there will be no flow into the drum.

Advance blade
The approximate knife advance rate can be determined for a set of operating conditions from published tables, which indicate the number of hours the filter can operate on a one-inch pre coat cake at a constant knife advance rate. This method can be used to check for the optimum operating range. If the operating parameter is higher than the optimum range, the user can reduce the knife advance rate and use a tighter grade of filter aid; this results in less filter aid used (lower capital cost) and less filter aid removed (lower disposal cost). If the operating parameter is lower than the optimum range, the user can increase the knife advance rate (more production) and decrease the drum speed for less filter aid usage (reduced operating cost).

Necessary post treatment for thickener waste streams

Chlorination
The most commonly used post treatment, in which chlorine is dissolved in water to form hydrochloric acid and hypochlorous acid. The latter acts as a disinfectant that eliminates pathogens such as bacteria, viruses and protozoa by penetrating their cell walls.

UV radiation
The waste stream is irradiated with ultraviolet radiation, which disinfects by damaging the pathogens' cells so that they mutate and cannot replicate. Eventually the mutated cells die out, and the process also eliminates odour.
Ozonation
The stream is exposed to ozone, which is unstable at atmospheric conditions: the ozone (O3) decomposes into oxygen (O2), and more oxygen is dissolved into the stream. Organic matter and pathogens are oxidised, forming carbon dioxide. This process eliminates the odour of the stream but results in a slightly acidic product due to the carbon dioxide present.

Necessary post treatment for clarifier waste streams

Land reclamation
The waste discharge can be used as a land stabilizer in the form of dry bio-solids that can be distributed to the market. The land stabilizer is used in reclaiming marginal land such as mining waste land, helping to restore the land to its original appearance.

Incineration
The waste discharge can be sent to an incineration plant, where the organic solids undergo combustion. The combustion process produces heat that can be used to generate electricity.

New Development
The rotary vacuum drum filter designs available vary in their physical aspects and characteristics; filtration areas range from 0.5 m2 to 125 m2. Regardless of the size of the design, filter cloth washing is a priority, as it ensures the efficiency of cake washing and of the applied vacuum. A smaller design is more economical, since its maintenance, energy usage and investment cost are lower than those of a bigger rotary vacuum drum filter. Over the years, technology has pushed development of the rotary vacuum drum filter to further heights in terms of design, performance, maintenance and cost. This has also led to the development of smaller rotary vacuum drum filters, ranging from laboratory scale to pilot scale, both of which can be used for smaller applications (such as in a university laboratory). High performance capacity and optimised filtrate drainage, with low flow resistance and minimal pressure loss, are just a few of the benefits. Advanced control systems have brought automation, reducing the operator attention needed and hence the operational cost. Advances in precision also mean that the pre coat can be cut in layers as thin as 1/20th the thickness of a human hair, making the use of pre coat more efficient. Lower operational and capital costs can also be achieved today thanks to easier maintenance and cleaning; complete cell emptying can be done quickly with the installation of leading and trailing pipes. Given that the filter cloth is usually one of the more expensive components of a rotary vacuum drum filter, its maintenance must be given high priority: a long lifetime, protection from damage and consistent performance are criteria that must not be overlooked. Besides production cost and quality, cake washing and cake thickness are essential issues in the process. Methods have been developed to ensure a minimal amount of cake moisture while achieving good cake washing with a large cake dewatering angle; an even filter cake thickness, as well as complete cake discharge, is also achievable.

See also
Vacuum ceramic filter
Dewatering

References

Further reading
John J. McKetta, John J. McKetta Jr, "Unit Operations Handbook: Mechanical Separations and Materials Handling", CRC Press, 1992, pp. 274–288.
Hiroaki Masuda, Kō Higashitani, Hideto Yoshida, "Powder Technology: Handling and Operations, Process Instrumentation, and Working Hazards", CRC Press, 2006, pp. 194–195.
External links
Rotary drum filter, United States Patent 308143
Rotary drum filter, United States Patent 5006136
Luthi rotary drum filter
Filter, patent number 2362300
Drum Filter Made in Viet Nam

Filters Separation processes
https://en.wikipedia.org/wiki/Zmanim
Zmanim (Hebrew, literally "times"; singular zman) are specific times of the day mentioned in Jewish law. These times appear in various contexts: Shabbat and Jewish holidays begin and end at specific times in the evening, while some rituals must be performed during the day or the night, or during specific hours of the day or night.

Calculations

Relative hours
The daytime period is divided into twelve equal "relative hours" (or "seasonal" or "variable" hours), which can be longer or shorter than 60 minutes, as the period of daylight is generally not exactly twelve hours long. Hours of the day are counted according to these relative hours for commandments: thus, the Shema prayer must be recited in "the first three hours" of the day, i.e. the first 1/4 of the daytime period. There are two major opinions regarding the definition of the daytime period: According to Magen Avraham the period between daybreak and nightfall is divided into 12 hours. Usually this time is computed using daybreak as 72 minutes before sunrise (or more accurately, when the sun is 16.1 degrees below the horizon, as it is in Jerusalem 72 minutes before sunrise on the equinox), and nightfall as 72 minutes after sunset. However, the common practice in Jerusalem (following the Tucazinsky luach) is to compute it using 20 degrees (90 minutes at the equinox). According to the Vilna Gaon the period between sunrise and sunset is divided into 12 hours. The result is that "Magen Avraham times" are earlier in the morning than "Vilna Gaon times"; in practice, there are communities that follow each of those standards. For times in the afternoon, the Vilna Gaon's times are earlier, and are almost universally followed. Near New York, for example, a "seasonal hour" based on the Vilna Gaon's calculations lasts ~45 minutes near the winter solstice, ~60 minutes near the equinoxes, and ~75 minutes near the summer solstice. (A short calculation sketch of these relative hours appears below, after the list of times.)

Minutes and degrees
The Talmud often states calculations of zmanim in terms of the time it takes to walk some distance, stated in mil (Biblical miles). Most authorities reckon the time it takes to walk one mil as being 18 minutes, though there are opinions of up to 24 minutes. Many authorities hold such calculations to be absolute: the phrase "four mils after sundown," for example, means exactly "72 minutes after sundown" in all places on all dates. Other authorities, especially those living in higher latitudes, noted that the darkness of the sky 72 minutes after sundown (for example) varies substantially from place to place, and from date to date. Therefore, they hold that "72 minutes after sundown" actually refers to the degree of darkness of the sky 72 minutes after sundown in Jerusalem on an equinox. That degree of darkness is reckoned as being reached when the sun has fallen a certain number of degrees below the horizon (for example, 7°5′ below the horizon), and that number of degrees becomes the actual standard used for all places and all dates.

Evening
One calendar day ends, and the next day begins, in the evening. The Talmud states there is an uncertainty as to whether the day ends exactly at sundown or at nightfall, so the period in between - known as bein hashemashot (בין השמשות) - has a status of doubt, as it could belong to either the previous or the next day. The length and timing of bein hashemashot are subject to dispute. Two Talmudic passages provide contradictory statements regarding its length: Tractate Pesachim states that the length is four mil, while Shabbat 34b states that the length is just 3/4 mil.
Later authorities differ in their interpretations of these passages: The Geonim (and the Vilna Gaon) say that Shabbat 34b describes the time of halakhic nightfall, while Pesachim describes when all the stars are visible (an occasion which has little halakhic significance). Rabbeinu Tam (and many other Rishonim) say that there are two times called "sundown": Pesachim describes the actual sundown (four mil before nightfall), while Shabbat 34b describes a time 3/4 mil before nightfall. These lead to different opinions on the length of bein hashemashot. According to the Geonim, nightfall is 13½-18 minutes after sundown (or, equivalently, when the sun falls 3–4.65° below the horizon). According to Rabbeinu Tam, nightfall occurs exactly 72 (or 90) minutes after sundown (or, equivalently, when the sun falls 16.1° or 20° below the horizon). A third Talmudic passage (Shabbat 35b) states that nightfall occurs when three medium-sized stars become visible. Until recently, all Jewish communities followed this passage, waiting for the observation of three stars to end Shabbat. This passage seems to contradict the other two, as in most of the world stars become visible more than 18 and less than 72 minutes after astronomical sunset. To reconcile the passages, various writers have proposed that halachic "sundown" (the beginning of bein hashemashot) is not when the sun crosses the horizon, but rather earlier (according to Rabbeinu Tam) or later (according to the Geonim). While Shabbat 35b refers to medium-sized stars, the Shulchan Aruch rules that since we are unsure which stars are medium or big, we must be stringent and wait for the appearance of small stars. Since this time is not clearly defined, most communities (at least for the end of the Sabbath) wait until around 8.5° of solar depression. Some, following the interpretation of Rabbeinu Tam, wait until 72 (or 90) minutes after astronomical sunset; this is common practice in Chasidic and other Charedi communities.

Morning
There are two times for the beginning of mitzvot during the day: Daybreak (alot hashachar), when some light is visible, or Sunrise (hanetz hachama), when the sun crosses the horizon. The Mishnah lists a number of daytime mitzvot that should be performed after sunrise, but if they are performed after daybreak, one has fulfilled one's obligation ex post facto. The Talmud in Pesachim (see above) holds symmetrically that the time between daybreak and sunrise is also the time in which one can walk four mil. For morning calculations, daybreak is normally held to be when the sun is 16.1° below the horizon, or else a fixed 72 (or 90) minutes before sunrise.

Times

Daybreak
Daybreak (עֲלוֹת הַשַּׁחַר, Alot Hashachar) refers to when the first rays of light are visible in the morning. If one has not recited the evening Shema by this time, and the omission was not due to negligence, one can still recite it now, up to sunrise, though one may not say Hashkiveinu or Baruch Hashem L'Olam. If one has prayed Shacharit after this time, one has fulfilled one's obligation ex post facto. Furthermore, most mitzvot that must be performed during the day (such as the Four Species or Hallel) may be done after this time, at least ex post facto.

Misheyakir
After daybreak, there is a time known as misheyakir, "when one can recognize [another person four cubits away]." This is the earliest time to wear tzitzit and tefillin (though ex post facto, if one did so after Alot Hashachar, one has fulfilled one's obligation).
Misheyakir is generally calculated relative to season and place, and because there are no Talmudic or early sources as to when this time occurs, there is a wide range of opinions. Most calculate it based on when the sun is 10.2-11.5 degrees below the horizon, but there are opinions that make it as late as 6 degrees.

Sunrise
Sunrise (הָנֵץ הַחַמָּה, Hanetz Hachamah) refers to when the ball of the sun rises above the horizon. It is preferable to pray the morning Shema just before this time and begin the Amidah just afterwards; praying this way is known as vatikin. Most mitzvot that must be performed during the day (such as the Four Species or Hallel) should be done after this time ab initio.

Sof Zman Kriyat Shema
Sof Zman Kriyat Shema (סוֹף זְמַן קְרִיאַת שְׁמַע) means "end of the time to say the [morning] Shema." This is three halachic hours into the day. These hours are variable/seasonal hours and refer to one twelfth of the time between daybreak and nightfall (according to the Magen Avraham) or one twelfth of the time between sunrise and sunset (according to the Vilna Gaon).

Sof Zman Tefilah
Sof Zman Tefilah (סוֹף זְמַן תְּפִלָּה) means "end of the time to say the Shacharit Amidah." This is four halachic hours into the day. Since the Amidah is only rabbinically required (unlike the Shema, which is Scripturally mandated), it is common to rely on the later time (Vilna Gaon), and thus only a few calendars publish the earlier time (Magen Avraham).

Midday
Midday (חֲצוֹת הַיּוֹם, Chatzot Hayom or just Chatzot) means the midpoint between sunrise and sunset, or equivalently between daybreak and sundown. The absolute latest time for the Shacharit Amidah, ex post facto, is this time. On Shabbat and on holidays, one is supposed to eat before this time. On Tish'a Ba'av one may sit on a chair from this time, and those who fast on Erev Rosh Hashanah usually eat at this time.

Mincha Gedolah
Minchah Gedolah (מִנְחָה גְּדוֹלָה, literally the greater Minchah), one-half variable hour after midday (6.5 variable hours into the day), is the earliest time to recite Minchah, although one should try, if possible, to wait until Minchah Ketanah. On Shabbat and Jewish holidays, it is preferable to begin Mussaf by this time, because otherwise it is questionable whether one would be required to pray the more frequent prayer (Minchah) first.

Mincha Ketanah
Minchah Ketanah (מִנְחָה קְטַנָּה, literally the smaller Minchah), two and one-half variable hours before sunset, is the preferred earliest time to recite Minchah.

Plag Hamincha
Plag Hamincha (פְּלַג הַמִּנְחָה, literally half of the Minchah) is the midpoint between Minchah Ketanah and sunset, i.e. one and one-quarter variable hours before sunset. If one prayed Minchah before this time, one may recite Maariv afterwards (at the conclusion of the Sabbath, this may only be done under extenuating circumstances). Otherwise, one must wait until sunset, unless one is praying with a congregation. Furthermore, it is questionable whether an individual may pray Maariv after Plag Hamincha if he does not always recite Minchah before Plag Hamincha; nevertheless, the halachic authorities allow one to do so on Friday night.

Sunset
Sunset (Shkiyat Hachamah, often referred to simply as shkiya) is the time at which the ball of the sun falls below the horizon. The next day of the Hebrew calendar begins at this point for almost all purposes. Some sources indicate that if one ate a specified additional quantity of bread after this time, the new day's additions are included in the grace after meals; on a Shabbat that is Erev Rosh Chodesh, for example, this would mean saying both ReTzei and YaaLeh V'Yavo. Mitzvot that must be performed during the day may no longer be performed ab initio after this point. Minchah should not be delayed past this time. Maariv may be recited from this point on, although many wait until after nightfall.
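The proportional times listed above reduce to simple arithmetic once the endpoints of the day are fixed. The sketch below is illustrative only: it takes sunrise and sunset (for Vilna Gaon hours) or daybreak and nightfall (for Magen Avraham hours) as given clock times; real zmanim calendars derive those endpoints astronomically for a specific place and date, which this sketch does not attempt.

```python
from datetime import datetime, timedelta

def seasonal_hour(day_start: datetime, day_end: datetime) -> timedelta:
    """One 'relative hour': a twelfth of the daytime period."""
    return (day_end - day_start) / 12

def zmanim(day_start: datetime, day_end: datetime) -> dict[str, datetime]:
    """Proportional times of day. Feed in sunrise/sunset for Vilna Gaon
    hours, or daybreak/nightfall for Magen Avraham hours."""
    hour = seasonal_hour(day_start, day_end)
    return {
        "sof zman kriyat shema": day_start + 3 * hour,      # end of 3rd hour
        "sof zman tefilah":      day_start + 4 * hour,      # end of 4th hour
        "chatzot (midday)":      day_start + 6 * hour,      # midpoint of the day
        "mincha gedolah":        day_start + 6.5 * hour,    # half an hour after midday
        "mincha ketanah":        day_start + 9.5 * hour,    # 2.5 hours before day end
        "plag hamincha":         day_start + 10.75 * hour,  # 1.25 hours before day end
    }

if __name__ == "__main__":
    # Assumed equinox-like day with 6:00 sunrise and 18:00 sunset (hypothetical).
    sunrise = datetime(2024, 3, 20, 6, 0)
    sunset = datetime(2024, 3, 20, 18, 0)
    for name, t in zmanim(sunrise, sunset).items():
        print(f"{name:>22}: {t:%H:%M}")
```

On the assumed 6:00 to 18:00 day each relative hour is exactly 60 minutes, so the Shema deadline prints as 09:00; on a shorter winter day the same code gives a proportionally earlier clock time, matching the New York figures cited above.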
Bein Hashemashot
Bein Hashemashot (בֵּין הַשְּׁמָשׁוֹת, literally "between the suns") is the period between sunset and nightfall, and is considered a time of questionable status. On the Sabbath, festivals, and fast days the stringencies of both the previous and following days apply; for example, if the fast of Tish'a Ba'av immediately follows the Sabbath, eating, drinking, and working are all forbidden during the intervening Bein Hashemashot. However, there are occasional leniencies.

Nightfall
Nightfall (צֵאת הַכּוֹכָבִים, Tzet Hakochavim) is described in detail above. After nightfall, it is considered definitely the following day. All restrictions of the previous day lapse, and any mitzvot that must be performed at night (such as the evening Shema, the Seder, or Bedikat Chametz) may be performed. There is a mitzvah to add some additional time to one's Shabbat observance after nightfall (tosefet shabbat), and thus published times for the end of Shabbat may be a few minutes later than the time calculated (according to whatever opinion) for nightfall.

Midnight
Midnight (חֲצוֹת הַלַּילָה, Chatzot Halailah or just Chatzot) is the midpoint between nightfall and daybreak, or equivalently between sunset and sunrise. The evening Shema should be recited by now, and the Afikoman on Passover should be eaten by this time. The Talmud in Berachot rules that all "night" mitzvot should be performed by Chatzot, at least ab initio, in case the person would otherwise fall asleep and then fail to perform them. Some rise at this time and recite Tikkun Chatzot, a series of supplications for the rebuilding of the Temple.

Other zmanim
On the Eve of Passover, chametz may not be eaten after four variable hours, and must be burned before five variable hours. The Mussaf prayer should preferably be recited before seven variable hours, on days it is recited.

See also
Jewish law in the polar regions
Canonical hours
Salat times
Relative hour (Jewish law)

References

Jewish law and rituals Orthodox Judaism Time in religion Hebrew words and phrases in Jewish law
https://en.wikipedia.org/wiki/List%20of%20sequenced%20eukaryotic%20genomes
This list of "sequenced" eukaryotic genomes contains all the eukaryotes known to have publicly available complete nuclear and organelle genome sequences that have been sequenced, assembled, annotated and published; draft genomes are not included, nor are organelle-only sequences. DNA was first sequenced in 1977. The first free-living organism to have its genome completely sequenced was the bacterium Haemophilus influenzae, in 1995. In 1996 Saccharomyces cerevisiae (baker's yeast) was the first eukaryote genome sequence to be released and in 1998 the first genome sequence for a multicellular eukaryote, Caenorhabditis elegans, was released. Protists Following are the nine earliest sequenced genomes of protists. For a more complete list, see the List of sequenced protist genomes. Plants Following are the five earliest sequenced genomes of plants. For a more complete list, see the List of sequenced plant genomes. Fungi Following are the five earliest sequenced genomes of fungi. For a more complete list, see the List of sequenced fungi genomes. Animals Following are the five earliest sequenced genomes of animals. For a more complete list, see the List of sequenced animal genomes. See also Genome project, Human genome Genomic organization History of genetics List of sequenced animal genomes List of sequenced archaeal genomes List of sequenced bacterial genomes List of sequenced fungi genomes List of sequenced plant genomes List of sequenced plastomes List of sequenced protist genomes References External links Diark - a resource for eukaryotic genome research EMBL-EBL Eukaryotic Genomes UCSC Genome Browser International Sequencing Consortium - Large-scale Sequencing Project Database Ensembl The Ensembl Genome Browser (includes draft and low coverage genomes) GOLD:Genomes OnLine Database v 3.0 SUPERFAMILY comparative genomics database Includes genomes of all completely sequenced eukaryotes, and sophisticated datamining plus visualisation tools for analysis Rat Genome Database Eukaryotic genomes Eukaryotic genomes
https://en.wikipedia.org/wiki/Delta-ring
In mathematics, a non-empty collection of sets $\mathcal{K}$ is called a δ-ring (pronounced "delta-ring") if it is closed under union, relative complementation, and countable intersection. The name "delta-ring" originates from the German word for intersection, "Durchschnitt", which is meant to highlight the ring's closure under countable intersection, in contrast to a σ-ring, which is closed under countable unions.

Definition
A family of sets $\mathcal{K}$ is called a δ-ring if it has all of the following properties:
Closed under finite unions: $A \cup B \in \mathcal{K}$ for all $A, B \in \mathcal{K}$
Closed under relative complementation: $A \setminus B \in \mathcal{K}$ for all $A, B \in \mathcal{K}$
Closed under countable intersections: $\bigcap_{n=1}^{\infty} A_n \in \mathcal{K}$ if $A_n \in \mathcal{K}$ for all $n \in \mathbb{N}$
If only the first two properties are satisfied, then $\mathcal{K}$ is a ring of sets but not a δ-ring. Every σ-ring is a δ-ring, but not every δ-ring is a σ-ring (a short verification of the first inclusion is sketched below). δ-rings can be used instead of σ-algebras in the development of measure theory if one does not wish to allow sets of infinite measure.

Examples
The family $\mathcal{K} = \{ S \subseteq \mathbb{R} : S \text{ is bounded} \}$ is a δ-ring but not a σ-ring, because the countable union of the bounded sets $[-n, n]$ is all of $\mathbb{R}$, which is not bounded.

References
Cortzen, Allan. "Delta-Ring." From MathWorld—A Wolfram Web Resource, created by Eric W. Weisstein. http://mathworld.wolfram.com/Delta-Ring.html

Measure theory Families of sets
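The inclusion stated above (every σ-ring is a δ-ring) can be checked with one standard set identity; the following short derivation is a routine verification added for illustration, not material from the cited source.

```latex
% Every sigma-ring is a delta-ring: rewrite a countable intersection
% using relative complements and a single countable union.
\[
  \bigcap_{n=1}^{\infty} A_n
    \;=\; A_1 \setminus \Bigl( \bigcup_{n=1}^{\infty} (A_1 \setminus A_n) \Bigr).
\]
% If each A_n lies in a sigma-ring K, then every A_1 \setminus A_n is in K
% (closure under relative complements), their countable union is in K
% (the sigma-ring property), and the final relative complement is again
% in K. Hence K is closed under countable intersections, i.e. K is a
% delta-ring. The bounded-sets example above shows the converse fails.
```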
https://en.wikipedia.org/wiki/Coring
Coring happens when a heated alloy, such as a Cu-Ni system, cools under non-equilibrium conditions. The center of each grain, which is the first part to freeze, is rich in the high-melting element (e.g., nickel in the Cu–Ni system), whereas the concentration of the low-melting element increases from this region toward the grain boundary. This is termed a 'cored structure', which gives rise to less than optimal properties. The distribution of the two elements within the grains is nonuniform, a phenomenon termed 'segregation'; that is, concentration gradients are established across the grains. When a casting having a cored structure is reheated, the grain boundary regions melt first, inasmuch as they are richer in the low-melting component. This produces a sudden loss of mechanical integrity due to the thin liquid film that separates the grains. Furthermore, this melting may begin at a temperature below the equilibrium solidus temperature of the alloy. Coring may be eliminated by a homogenization heat treatment carried out at a temperature below the solidus point for the particular alloy composition; during this process, atomic diffusion produces compositionally homogeneous grains. Coring is predominantly observed in alloys having a marked difference between their liquidus and solidus temperatures. It is often removed by subsequent annealing and/or hot working, and it is exploited in zone refining techniques to produce high purity metals.

References
Beddoes, J. and Bibby, M.J., "Principles of Metal Manufacturing Processes".
William D. Callister, Jr., "Materials Science and Engineering: An Introduction".

Metallurgy
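The segregation profile described above is commonly approximated by the Scheil-Gulliver model of non-equilibrium solidification. The sketch below is illustrative only; the partition coefficient and alloy composition are assumed values, not data from this article.

```python
# Scheil-Gulliver estimate of solute concentration in the solid as a
# function of the fraction solidified -- a simple model of coring.

def scheil_solid_concentration(c0: float, k: float, fs: float) -> float:
    """Concentration of solute in the solid forming at solid fraction fs.

    c0 -- overall alloy composition (e.g. wt% of the low-melting element)
    k  -- partition coefficient Cs/Cl, assumed constant (k < 1 here)
    fs -- fraction already solidified, 0 <= fs < 1
    """
    return k * c0 * (1.0 - fs) ** (k - 1.0)

if __name__ == "__main__":
    c0, k = 30.0, 0.7  # hypothetical composition (wt%) and partition coefficient
    for fs in (0.0, 0.25, 0.5, 0.75, 0.9):
        cs = scheil_solid_concentration(c0, k, fs)
        print(f"fraction solid {fs:4.2f}: solid forming at {cs:5.1f} wt% solute")
```

With a partition coefficient below one, the first solid to freeze (the grain centre) is solute-lean and the last liquid at the grain boundary is strongly enriched, which is exactly the cored profile that a homogenization treatment later smooths out by diffusion.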
https://en.wikipedia.org/wiki/Tiger%20trout
The tiger trout (Salmo trutta × Salvelinus fontinalis) is a sterile, intergeneric hybrid of the brown trout (Salmo trutta) and the brook trout (Salvelinus fontinalis). Pronounced vermiculations in the fish's patterning gave rise to its name, evoking the stripes of a tiger. Tiger trout are a rare anomaly in the wild, as the parent species are relatively unrelated, being members of different genera and possessing mismatched numbers of chromosomes. However, specialized hatchery rearing techniques are able to produce tiger trout reliably enough to meet the demands of stocking programs. Natural occurrence Prior to the 19th century, naturally occurring tiger trout were an impossibility, as the native range of brown trout in Eurasia and brook trout in North America do not overlap and the species could therefore never have encountered one another in the wild. When the widespread stocking of non-native gamefish began in the 1800s, brown trout and brook trout began establishing wild populations alongside each other in some places and the opportunity for hybridization in the wild arose. Instances of stream-born tiger trout were recorded in the United States at least as early as 1944 and, despite being exceptionally rare, they've been documented numerous times during the 20th and 21st centuries. Tiger trout result exclusively from the fertilization of brown trout eggs with brook trout milt, as brook trout eggs are generally too small to be successfully fertilized by brown trout milt. Tigers are known as intergeneric hybrids as the two parent species share only a relatively distant relationship, belonging to different genera within the Salmon family. In fact, brook trout and brown trout have non-matching numbers of chromosomes, with the former possessing 84 and the latter 80. Consequently, even in cases in which brown trout eggs are fertilized by brook trout in the wild, most of these eggs develop improperly and fail to yield any young. Hatchery rearing Tiger trout can be produced reliably in hatcheries and they have been incorporated into stocking programs in the United States at least as early as the 1960s. Hatchery productivity is enhanced by heat shocking the fertilized hybrid eggs, causing the creation of an extra set of chromosomes which increases survival rates from 5% to 85%. Tiger trout have been reported to grow faster than natural species, though this assessment is not universal. They are also known to be highly piscivorous and are consequently a useful control against rough fish populations. This, along with their desirability as novel gamefish, means tigers have continued to be popular with many fish stocking programs. US states with tiger trout stocking programs include Arizona, Arkansas, Colorado, Connecticut, Idaho, Washington, West Virginia, Wyoming, Utah, Virginia, Oregon, Massachusetts and Pennsylvania. See also Splake References Salmonidae Fish hybrids Salmo Salvelinus Intergeneric hybrids
https://en.wikipedia.org/wiki/XHTML%2BSMIL
XHTML+SMIL is a W3C Note that describes an integration of SMIL semantics with XHTML and CSS. It is based generally upon the HTML+TIME submission. The language is also known as HTML+SMIL. The XHTML+SMIL language profile shares many modules with the standard SMIL language profiles, including the core modules of timing, media objects, linking, animation, transitions and content control. Where the other SMIL profiles use a language-specific layout model, XHTML+SMIL leverages the HTML flow layout and CSS positioning model familiar to many web authors. The semantics of integrating SMIL animation with the CSS model were also adopted in SVG. XHTML+SMIL was issued as a W3C Note rather than a recommendation as there was only one implementation of the language profile (in MSIE). See also SMIL HTML+TIME (Microsoft's implementation) DAISY Digital Talking Book standard References External links XHTML+SMIL specification World Wide Web Consortium standards Markup languages HTML
https://en.wikipedia.org/wiki/Circular%20linhay
A circular linhay is an ancient type of structure found in England, particularly associated with Devon. Linhay (rhymes with finny), also spelt Linny, is a type of farm building with an open front and usually a lean-to roof. In Newfoundland English a linney is similar to a storage space, kitchen, or porch but as an addition to the rear of a house, and in American English it is an open, lean-to shed attached to a farmyard. Linhays were used to store hay above and shelter cattle (cattle linhay) or farm machinery (cart linhay). See also Linhay in References Agricultural buildings in England Round buildings
https://en.wikipedia.org/wiki/Hammock%20camping
Hammock camping is a form of camping in which a camper sleeps in a suspended hammock rather than a conventional tent on the ground. Due to the absence of poles and the reduced amount of material used, hammocks can be lighter than a tent, though this is not always the case. Most hammocks will also require less space in a pack than a similar occupancy tent. In foul weather, a tarp is suspended above the hammock to keep the rain off of the camper. Mosquito netting, sometimes integrated into the camping hammock itself, is also used as climatic conditions warrant. Camping hammocks are used by campers who are looking for lighter weight, protection from ground-dwelling insects, or other ground complications such as sloped ground, rocky terrain and flooded terrain. History The hammock was developed in Pre-Columbian Latin America and continues to be produced widely throughout the region, among the Urarina of the Peruvian Amazon, for several years in Ghana, and presently throughout North America, Europe, and Australia. The origin of the hammock remains unknown, though many maintain that it was created out of tradition and need. The word hammock comes from hamaca, a Taino Indian word which means "thrown fishing net". On long fishing trips, the Taíno would sleep in their nets, safe from snakes and other dangerous creatures. Appeal of hammock camping The primary appeal of hammock camping for most users is comfort and better sleep, as compared to sleeping on a pad on the ground. Enthusiasts argue that hammocks don't harm the environment in the way that conventional tents do. Most hammocks attach to trees via removable webbing straps, or "tree-huggers," which do not damage the bark and leave little or no marks afterward. Whereas it's easy to see a frequently used campground because of the effect on the grass, scrub and topsoil, the presence of a hammock camping site is much harder to detect. This has found favour with hikers and campers who follow the principles of Leave No Trace camping. Hammock camping also opens up many more sites for campers - stony ground, slopes, and so on - as well as keeping them off the ground and away from small mammals, reptiles and insects. Sleeping off the ground also keeps the camper out of any rainwater runoff that might seep in under a tent during a downpour. The relatively light weight of hammocks makes them ideal for reducing backpack weight, making it a good option for ultralight backpacking enthusiasts. One of the benefits of hammock camping, however, can also be a significant drawback. A suspended hammock allows for a cooling air flow to surround the camper in hot weather. However, this also makes it harder to stay warm when temperatures plummet, either during the evening or seasonally, as a sleeping bag will be compressed under a camper's weight, reducing its ability to trap air and provide insulation. When deciding to commit to hammock camping most "hangers" trade their sleeping bags for down-filled or synthetic quilts. The quilts are divided into two different types, top quilts (TQ) and under quilts (UQ). The UQ is suspended underneath the hammock so the weight of the hanger doesn't compress the baffles, thus providing air pockets for one's body to heat and keep one warm. Concurrently the TQ is just a down blanket, with some having the ability to make a small box for the feet. Essentially, it is just the top half of a sleeping bag. Because a sleeping bag's underside is compressed, it loses its insulating properties. 
A TQ cuts the unnecessary material to save weight and fabric. The TQ/UQ sleep system is not only warm, but each quilt packs into the size of a grapefruit, or smaller, depending on temperature rating. Some hammocks are designed with an extra layer of fabric, or a series of large pockets, on the bottom. Insulating material, such as foam, quilting, aluminum windscreen reflectors, clothes, or even dead leaves and brush from the campsite is stuffed between the bottom layers or inside the bottom pockets to create a buffer between the camper and the cold outside air. While the above solutions, except for the found materials, add weight and bulk to the hammock, some approaches use an ultralight open cell foam with a mylar space blanket to mitigate this increase in weight. Another drawback is that a camping hammock requires two trees close enough to hang it from, and strong enough to support the sleeper's weight. This can be a limitation depending on the environment; at higher elevations, trees are more sparse. In these situations hammock campers may bring along a light groundsheet and "go to ground" using their hammock as a ground tent. Suspension systems, tarpaulins, and amenities One of the unique concepts of hammock camping is the new diversity of suspension systems and add-ons which campers use in making their hammock set-up unique and functional. The line on which the hammock's weight is held is often swapped for a variety of lighter weight suspension made of Dyneema or other UHMWPE material. These reduce both weight and bulk. Many use similar lines formed into a constriction knot (colloquially referred to by the brand name "Whoopie Slings") for quick adjustment and setup. These may be connected to the webbing straps ("tree huggers") using a lightweight toggle or a carabiner, or more uniquely designed connectors such as Evo loops or specialized metal hardware. Some hammocks are designed with a dedicated tarpaulin. Others come without a tarpaulin, with the understanding the user will select the size and style of tarpaulin which best fits their needs. There are many ways in which hammock campers hang their tarpaulin. In some, the tarpaulin is connected to the hammock's suspension line using a system of mitten hooks and plastic connectors. In others the tarpaulin is hung separately using either the hammock's integrated ridge line, or a separate ridge line placed under or over the tarpaulin. Some tarps have an asymmetrical pattern which matches the shape of the hammock, but the majority of hammock campers use a hex-shaped tarpaulin, many of which have a catenary shape for strength against wind and reduction in size and weight. The diamond-shape tarpaulin is also used by some. Additional amenities for tarpaulins include removable tarpaulin doors (nylon pieces added to the main openings in cold or windy weather). Different designs of tarpaulin line tensioners are sometimes used to keep tarpaulin lines tight. References See also Tree tent Camping Camping equipment Beds Watercraft components
https://en.wikipedia.org/wiki/Georges%20Friedel
Georges Friedel (19 July 1865 – 11 December 1933) was a French mineralogist and crystallographer.

Life
Georges was the son of the chemist Charles Friedel. Georges' grandfather was Louis Georges Duvernoy, who held the chair in comparative anatomy from 1850 to 1855 at the Muséum national d'histoire naturelle. Georges studied at the École Polytechnique in Paris and the École Nationale des Mines in St. Etienne, and was a student of François Ernest Mallard. In 1893 he obtained a professorship at the École Nationale des Mines, of which he would later become the director. After the First World War, he returned as a professor to the University of Strasbourg in Alsace. Due to ill health, he took early retirement in 1930, and died in 1933. He was married with five children.

Scientific works
Like his teacher Mallard, Friedel concerned himself with the theories of Auguste Bravais, the founder of crystallography. Friedel was able to demonstrate the theoretical ideas of Bravais (the Bravais lattice) with the help of X-ray diffraction experiments on crystals, and so provide the physical basis for these ideas. One of his most important discoveries was the law that now bears his name.

Friedel's salt
In 1897, Georges Friedel synthesised and identified calcium chloroaluminate, which now bears his name. Georges Friedel also synthesised calcium aluminate in 1903, in the framework of his work on the theory of crystal twinning (macles).

Mesomorphic states of matter
The presumption that solid and liquid are adjacent states of matter was challenged by Friedrich Reinitzer in 1888 when he noted a cloudy mesophase of cholesteryl benzoate between 145.5 °C and 178.5 °C. The subject was taken up in Germany, and in 1907 also in France by Georges Friedel and François Grandjean, as they described the "focal conic liquid". Friedel contributed his Mesomorphic States of Matter to the Annales de Physique in 1922. This two-hundred-page work established much of the current terminology in mesophase physics. First, he characterized the nematic phase as having microscopic threads (these threads are today interpreted as disclinations in the director field of the mesophase). Second, Friedel coined the term smectic phase for a layered mesophase having the structure of neat soap. Third, Friedel used the term cholesteric phase for materials like cholesteryl benzoate, and noted that such mesophases "involve strong twists around a direction normal to the positive optical axis". Scientists have followed Friedel's classification, and the term mesophase for the intermediate states has also been adopted from him. He was of the conviction that the term liquid crystal did not bear scrutiny: the liquid crystals were not crystals at all, but peculiar liquids with some hint of solid properties. In 1931, Georges published, with his son Edmond Friedel, the results of their X-ray crystallography studies: "The physical properties of the mesophases in general, and their importance in a scheme of classification."
Important publications
1904: Groupements cristallins
1907: Etudes sur les lois de Bravais
1922: Les états mésomorphes de la matière

See also
The Friedel family is a rich lineage of French scientists:
Charles Friedel (1832–1899), French chemist known for the Friedel–Crafts reaction
Georges Friedel (1865–1933), described above, French crystallographer and mineralogist; son of Charles
Edmond Friedel (1895–1972), French applied scientist and mining engineer, founder of BRGM, the French geological survey; son of Georges
Jacques Friedel (1921–2014), French physicist; son of Edmond, see the French site for Jacques Friedel

References

External links
François Grandjean (1935) George Friedel, Bulletin de la Société Française de Minéralogie, weblink from Annales des Mines (French).
Maurice Kleman (2005) Georges Friedel et "Les phases mésomorphes de la matière", weblink to Institut de Minéralogie et de Physique des Milieux Condensés (French).

1865 births 1933 deaths Scientists from Mulhouse École Polytechnique alumni People from Alsace-Lorraine Crystallographers Liquid crystals French mineralogists
https://en.wikipedia.org/wiki/Hot%20tube%20engine
The hot tube engine is a primitive and long-obsolete type of combustion engine. The timing of a hot tube engine is controlled by varying the length of the hot-tube ignitor, which does the job that a spark plug does in a spark-ignited engine. The length of the tube controls when the charge ignites, and the ignition timing can be optimized so as to allow different operating speeds to be selected, much like a spark advance control. It was mostly used as a stationary engine on farms but was also found in very early automobiles and motorcycles. In contrast to the hot-bulb engine, which only requires a heater to begin combustion but then self-sustains, the flame must be kept on the ignitor tube for a hot tube engine to keep working, because the by-product heat from internal combustion is insufficient to maintain the required ignition temperature. Modern recreations and restored engines are therefore built to run on propane, as it can easily be used both for the engine itself and for the heater flame.

See also
Hot-tube ignitor

References

Engine technology
https://en.wikipedia.org/wiki/Magnussoft
magnussoft Deutschland GmbH is a German computer game developer and publisher. The company is based in Kesselsdorf, close to the Saxon capital of Dresden. magnussoft has released collections of software originally written for popular home computer systems of the 1980s era: the Commodore 64, Commodore Amiga, Atari XL/XE, and Atari ST. The games can also run on newer computer systems, such as Intel-based IBM PCs, using emulators. Among the collections of magnussoft games, one called Retro-Classix covers games available on multiple platforms, while other collections specialize in one particular system, such as Amiga Classix or C64 Classix. The company has released more than 160 products. Its assortment includes adventure games, board games, strategy games, shoot 'em ups and jump 'n' run games. magnussoft has also released computer applications and educational software. The software was brought to market under various labels in Germany, Austria, Switzerland, the Benelux countries, France, Great Britain, and the United States. By 2008 magnussoft had gained access to the software market, especially in the lower budget and middle price range. The company cooperates with established German partners such as "ak tronik Software & Services", "KOCH Media", and the "Verlagsgruppe Weltbild". In addition, magnussoft has founded subsidiaries in other parts of Europe; however, magnussoft does not publish outside of Europe, leaving that work to local companies. magnussoft has built its profile through the release of ZETA, a broad range of retro games, and classic computer games like Aquanoid, Barkanoid or Plot's.

Games (selection)
Amiga Classix
Aquanoid
Barkanoid
Boulder Match
Break It
C64 Classix
Colossus Chess
Dr. Tool series
Fix & Foxi series
Jacks Crazy Cong
Jump Jack
KLIX
METRIS
MiniGolf Packs series
PLOTS!
Pool Island
Retro-Classix
Sokoman
BURN

Applications (selection)
Dr Brain series
Dr. Tool series
Driver Cataloger
Easy Bootmanager
Typing Tutor

Educational software (selection)
Deutsch, Englisch und Mathe für Zwerge
Deutsch- und Mathe Compilation
Fahrschule

Criticism
In 2007 magnussoft incurred public criticism for ceasing the distribution and the funding of the BeOS replacement Magnussoft Zeta OS because of its uncertain legal status.

Trademarks
Amiga Classix
Aquanoid
Barkanoid
C64 Classix
Dr. Brain
Dr. Tool
Retro Classix

References

External links
magnussoft - official website

Video game publishers Video game companies of Germany Companies based in Saxony BeOS Wilsdruff
https://en.wikipedia.org/wiki/Muhammad%20Raziuddin%20Siddiqui
Muhammad Raziuddin Siddiqui, FPAS, NI, HI, SI (8 January 1908 – 8 January 1998), also known as Dr. Razi, was a Pakistani theoretical physicist and mathematician who played a role in Pakistan's education system and in the country's indigenous development of nuclear weapons. An educationist and a scientist, Siddiqui established educational research institutes and universities in his country. During the 1940s in Europe, he contributed to mathematical physics and worked on general relativity and the theory of relativity, nuclear energy, and quantum gravity. He was one of the notable students of Albert Einstein. He was the vice-chancellor of four Pakistani universities, including serving as the first vice-chancellor of Quaid-e-Azam University, where he was Emeritus Professor of Physics until his death in 1998.

Biography

Life and education
Raziuddin Siddiqui was born on 8 January 1908 in Hyderabad, Deccan, India, to Mohammed Muzaffer uddin Siddiqui and Baratunnisa Begum. His family consisted of one elder brother, Mohammed Zakiuddin Siddiqui, and two sisters, Abida Begum and Sajida Begum; he was the youngest in the family. He attended the newly established Osmania University. After passing the Rashidia Exams in 1918, Siddiqui completed his matriculation from Osmania University in 1921, and earned a Bachelor of Arts (BA) in mathematics, with distinction, in 1925.

Siddiqui in Europe
Siddiqui was then awarded a scholarship from the Government of the State of Hyderabad to pursue higher studies in the United Kingdom, where he completed his MA in mathematics under Paul Dirac at the University of Cambridge in 1928. He then proceeded to work on his PhD at the University of Leipzig in Germany (then the Weimar Republic). He studied mathematics and quantum mechanics under Albert Einstein, and completed his PhD in theoretical physics, writing a research thesis on the theory of relativity and nuclear binding energy. He did his postdoctoral work at the University of Paris, France.

Research in theoretical physics
In Europe, while Siddiqui was working on his postdoctoral research at the University of Paris, he had the opportunity to meet the members of "The Paris Group", where he led discussions on unsolved problems in physics and in mathematics. During his stay in Great Britain, he studied quantum mechanics and published scientific papers at the Cavendish Laboratory.

Return to India
In 1931, Siddiqui returned to Hyderabad, British India, and joined Osmania University there as an associate professor of mathematics. During 1948–49, he served as vice-chancellor of Osmania, appointed by the governor of Andhra Pradesh.

Move to Pakistan
After the Partition of India led to the independence of Pakistan in 1947, at the request of the Government of Pakistan, Siddiqui migrated to Karachi, Pakistan, in 1950, along with some of his family. His brother Zakiuddin and one of his sisters, Sajida Begum, remained in Hyderabad, India, with their families and parents. His father Muzaffer uddin Siddiqui died during a visit to Raziuddin Siddiqui in Pakistan in his later years. In Karachi, Siddiqui joined the teaching faculty of Karachi University and taught there as professor of applied mathematics. In 1953, he was simultaneously appointed to the posts of vice-chancellor of the University of Sindh and the University of Peshawar. Siddiqui founded the first mathematical society in Pakistan in 1952, under the name "All Pakistan Mathematics Association", and remained its president until 1972.
In 1956, Siddiqui helped establish nuclear power in Pakistan and its expansion in the country, first by joining the newly established Pakistan Atomic Energy Commission (PAEC) and then by establishing its first science directorate on mathematical physics. In 1964, he moved to Islamabad, where he joined PAEC and began his academic research in theoretical physics. In 1965, with the establishment of Quaid-e-Azam University (QAU), Siddiqui was appointed as its first vice-chancellor by the then foreign minister Zulfikar Ali Bhutto. He was one of the first professors of physics at Quaid-e-Azam University, where he also served as chairman of the Physics Department. He continued his tenure until 1972, when he rejoined PAEC at the request of Prime Minister Bhutto. During the 1960s, he helped convince President of Pakistan Ayub Khan to make a proposed university a research institution. He first established the "Institute of Physics" at the QAU, and invited Professor Riazuddin to be its first director and the dean of the faculty. Riazuddin, with the help of his mentor, Dr. Abdus Salam, then convinced the then PAEC chairman Dr. Ishrat Hussain Usmani to send all the theoreticians to the Institute of Physics to form a physics group. This established the "Theoretical Physics Group" (TPG), which later designed nuclear weapons for Pakistan. With the establishment of the TPG, Siddiqui began to work with Abdus Salam and, on Salam's advice, began research in theoretical physics at PAEC. In 1970, he established the Mathematical Physics Group (MPG) at PAEC, where he led academic research in advanced mathematics. He also delegated mathematicians to PAEC to specialise in their fields in the MPG Division of PAEC.

Pakistan and its nuclear deterrent program
After the Indo-Pakistani War of 1971, Siddiqui joined the Pakistan Atomic Energy Commission (PAEC) at the request of Prime Minister Zulfikar Ali Bhutto. Siddiqui was the first full-time technical member of PAEC and was responsible for the preparation of its charter. During the 1970s, Siddiqui worked on problems in theoretical physics with Pakistani theoretical physicists in the nuclear weapons programme. Previously, he had worked in Europe, including carrying out nuclear research in the British nuclear weapon program and the French atomic program. At PAEC, he became a mentor to some of the country's academic scientists. He was the director of the Mathematical Physics Group (MPG) and was tasked with the mathematical calculations involved in nuclear fission and supercomputing. While both the MPG and the Theoretical Physics Group (TPG) reported directly to Abdus Salam, Siddiqui co-ordinated each meeting between the scientists of the TPG and the mathematicians of the MPG. At PAEC, he directed the mathematical research directly involving the theory of general relativity, and helped establish the computing laboratories at PAEC. Since theoretical physics plays a major role in identifying the parameters of nuclear physics, Siddiqui started work on one of special relativity's more subtle applications, the relativity of simultaneity. His Mathematical Physics Group undertook the research and performed calculations on the relativity of simultaneity during weapon detonation, in which multiple explosive energy fronts must be released within the same closed medium in the same time interval.

Post-war
After his work at PAEC, Siddiqui rejoined Quaid-e-Azam University's physics faculty.
As professor of physics, he continued his research at the Institute of Physics, QAU. He helped develop the higher education sector and put framework policies in place at the institution.

Death and legacy
Siddiqui remained in Islamabad and stayed associated with Quaid-e-Azam University. In 1990, he was made Professor Emeritus of Physics and Mathematics there. He died on 8 January 1998, at the age of 90. Siddiqui's biography was written by scientists who had worked with him. In 1960, for his efforts to expand education, he was awarded the third-highest civilian award of Pakistan, the Sitara-i-Imtiaz, by the then-President of Pakistan, Field Marshal Ayub Khan. In 1981, he was awarded the second-highest civilian award, the Hilal-i-Imtiaz, by President General Muhammad Zia-ul-Haq for his efforts in Pakistan's atomic program and for popularising science in Pakistan. In May 1998, the Government of Pakistan awarded him the highest civilian award, the Nishan-i-Imtiaz, posthumously, conferred by Prime Minister Nawaz Sharif when Pakistan conducted its first successful nuclear tests, 'Chagai-I'.

Family
His eldest daughter, Dr. Shirin Tahir-Kheli, is a former special assistant to the President of the United States and Senior Adviser for women's empowerment.

Civil awards
Sitara-i-Imtiaz (1960)
Hilal-i-Imtiaz (1981)
Nishan-e-Imtiaz (1998)
Gold Medal, Pakistan Academy of Sciences (1950)
Gold Medal, Pakistan Mathematical Society (1980)
Gold Medallion, Pakistan Physical Society (1953)
Doctorate of Science Honoris Causa, Osmania University (1938)

Books
Quantum Mechanics and its Physics
Dastan-e-Riazi (The Tale of Mathematics)
Izafiat (Relativity)
Tasawur-e-Zaman-o-Makaan (The Concept of Time and Space)
Experiences in Science and Education (1977)
Establishing a New University in a Developing Country: Policies and Procedures (1990)

See also
Abdus Salam
Salimuzzaman Siddiqui
Quaid-i-Azam University
Nuclear weapon

References

Bibliography

External links
Muhammad Raziuddin Siddiqui
Dr. Raziuddin Siddiqui Memorial Library
Ias.ac.in
Iiit.ac.in, Iqbal Ka Tasawwuf-e-Zaman-o-MakaN at Digital Library of India

1908 births 1998 deaths Recipients of Sitara-i-Imtiaz Recipients of Nishan-e-Imtiaz Pakistani scientists Pakistani scholars Pakistani academics Pakistani educational theorists Project-706 Recipients of Hilal-i-Imtiaz Pakistani physicists Pakistani mathematicians Pakistani nuclear physicists Osmania University alumni University of Paris alumni Academic staff of the University of Peshawar Academic staff of the University of Sindh Alumni of the University of Cambridge Leipzig University alumni Scientists from Hyderabad, India People from Karachi People from Islamabad Nuclear weapons programme of Pakistan Fellows of Pakistan Academy of Sciences Academic staff of the University of Karachi Academic staff of Quaid-i-Azam University Vice-chancellors of the University of Sindh Theoretical physicists Pakistani people of Hyderabadi descent
https://en.wikipedia.org/wiki/DX%20Studio
DX Studio is a complete integrated development tool for the creation of 3D video games, simulations or real-time interactive applications for either standalone, web based, Microsoft Office or Visual Studio use.

Development
DX Studio is produced by Worldweaver Ltd, a company that was established in 1996 by Chris Sterling to develop PC games and high-end business GIS applications. Development of DX Studio began in 2002 and the first version was released to market in 2005. Since then the user base of DX Studio has grown to around 30,000 worldwide.

Release history
November 2005 - DX Studio 1.0 Released
May 2007 - DX Studio 2.0 Released
September 2008 - DX Studio 3.0 Released
June 2009 - DX Studio 3.1 Released
December 2010 - DX Studio 3.2 Released

Features
The system includes both 2D and 3D layout editors, and allows JavaScript control of scenes, objects and media in real-time. Documents can also be controlled from outside of the player using the ActiveX/COM interface or a TCP/IP port. The engine behind DX Studio uses DirectX 9.0c, and includes support for the latest pixel and vertex shader effects found on the more powerful 3D graphics cards. The DX Studio 2D and 3D editors can be used to build interactive layers and sequences, which are combined to produce a complete interactive document. The top level editor can be used just to drag and drop scenes together at a high level, or can 'drill down' to edit 3D and 2D scenes. Inside each scene, users can drill down further to edit the individual textures, backgrounds and sounds. Using ActiveX technology users can build their own C++, C# or VB.Net applications and drop the DX Studio Player in as a component. A complete interactive document can be compiled into a single redistributable EXE. This can then be pressed to CD, emailed, placed on a website or in another archive. The EXE also performs system checks and will download and install any DirectX upgrades that may be necessary. The files produced by DX Studio use standard XML to describe the entire scenes. The files also contain all the resources needed to display the 3D world, compressed into the same file using standard ZIP compatible algorithms. A security option allows this data to be encrypted if necessary. Built-in special effects include full lens flares, water ripples, particle systems, real-time shadows, 3D video projection (in MPEG or AVI format), 3D positioned sound, and post-production effects (such as 'sepia', 'bloom' and 'corona'). For advanced users a plug-in SDK is available which, with some DirectX/C++ or HLSL knowledge, users can code their effects.

Licensing
DX Studio is available in both standard and professional editions. A distinction in licensing between commercial and non-commercial users is also made. Full source code to the player is also available under a Corporate License.

Critical reception
The DX Studio series overall has been critically well received, particularly for its development environment. Version 2.1 was recommended by PC Advisor for users wanting good results on a budget, highlighting the workspace and available technical support. Version 3.0 was rated 4 out of 5 by Digital Arts, who praised the high-quality effects, interactive real-time 3D previews, video streaming, online library, and the budget price suitable for students. Criticisms included web content deployment, which requires a plug-in; lack of Macintosh support; reliance on imported 3D models; some interface issues and operational quirks. Computer Active!
also awarded 4 out of 5, pointing out the inexpensive price, ease of use and good support, but criticising some performance problems with legacy files. References External links Official DX Studio Website Windows games Video game development software 3D scenegraph APIs 3D graphics software Graphics libraries Computer libraries 2005 software Freeware game engines
DX Studio
[ "Technology" ]
810
[ "IT infrastructure", "Computer libraries" ]
6,945,419
https://en.wikipedia.org/wiki/Blondel%27s%20theorem
Blondel's theorem, named after its discoverer, French electrical engineer André Blondel, is the result of his attempt to simplify both the measurement of electrical energy and the validation of such measurements. The result is a simple rule that specifies the minimum number of watt-hour meters required to measure the consumption of energy in any system of electrical conductors. The theorem states that the power provided to a system of N conductors is equal to the algebraic sum of the power measured by N watt-meters. The N watt-meters are separately connected such that each one measures the current level in one of the N conductors and the potential level between that conductor and a common point. In a further simplification, if that common point is located on one of the conductors, that conductor's meter can be removed and only N-1 meters are required. An electrical energy meter is a watt-meter whose measurements are integrated over time, thus the theorem applies to watt-hour meters as well. Blondel wrote a paper on his results that was delivered to the International Electric Congress held in Chicago in 1893. Although he was not present at the Congress, his paper is included in the published Proceedings. Instead of using N-1 separate meters, the meters are combined into a single housing for commercial purposes such as measuring energy delivered to homes and businesses. Each pairing of a current measuring unit plus a potential measuring unit is then termed a stator or element. Thus, for example, a meter for a four wire service will include three elements. Blondel's Theorem simplifies the work of an electrical utility worker by specifying that an N wire service will be correctly measured by an N-1 element meter. Unfortunately, confusion arises for such workers due to the existence of meters that do not contain tidy pairings of single potential measuring units with single current measuring units. For example, a meter previously used for four wire services contained two potential coils and three current coils and was called a 2.5 element meter. Blondel Noncompliance Electric energy meters that meet the requirement of N-1 elements for an N wire service are often said to be Blondel Compliant. This label identifies the meter as one that will measure correctly under all conditions when correctly installed. However, a meter does not have to be Blondel compliant in order to provide suitably accurate measurements, and industry practice often includes the use of such non compliant meters. The form 2S meter is extensively used in the metering of residential three wire services, despite being non compliant in such services. This common residential service consists of two 120 volt wires and one neutral wire. A Blondel compliant meter for such a service would need two elements (and a five jaw socket to accept such a meter), but the 2S meter is a single element meter. The 2S meter includes one potential measuring device (a coil or a voltmeter) and two current measuring devices. The current measuring devices provide a measurement equal to one half of the actual current value. The combination of a single potential coil and two so-called half coils provides highly accurate metering under most conditions. The meter has been used since the early days of the electrical industry. The advantages were the lower cost of a single potential coil and the avoidance of interference between two elements driving a single disc in an induction meter. For line to line loads, the meter is Blondel compliant.
Such loads are two wire loads and a single element meter suffices. The non compliance of the meter occurs in measuring line to neutral loads. The meter design approximates a two element measurement by combining a half current value with the potential value of the line to line connection. The line to line potential is exactly twice the line to neutral potential if the two line to neutral connections are exactly balanced. Twice the potential times half the current then approximates the actual power value, with equality under balanced potential. In the case of line to line loads, two times the half current value times the potential value equals the actual power. Error is introduced if the two line to neutral potentials are not balanced and if the line to neutral loads are not equally distributed. That error is given by 0.5(V1-V2)(I1-I2) where V1 and I1 are the potential and current connected between one line and neutral and V2 and I2 are those connected between the other line and neutral. Since the industry typically maintains five percent accuracy in potential, the error will be acceptably low if the loads are not heavily unbalanced. This same meter has been modified or installed in modified sockets and used for two wire, 120 volt services (relabeled as 2W on the meter face). The modification places the two half coils in series such that a full coil is created. In such installations, the single element meter is Blondel compliant. There is also a three wire 240/480 volt version that is not Blondel compliant. Also in use are three phase meters that are not Blondel compliant, such as forms 14S and 15S, but they can be easily replaced by modern meters and can be considered obsolete. References Electric power Eponymous theorems of physics
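To make the error expression above concrete, here is a small Python sketch; the function names and the sample numbers are illustrative assumptions, not part of the theorem or of metering practice.

# Sketch of the 2S meter approximation and its error, 0.5(V1-V2)(I1-I2).
# V1, I1: potential and current between one line and neutral;
# V2, I2: the same for the other line. Names and values are illustrative.

def actual_power(v1, i1, v2, i2):
    """True power of the two line-to-neutral loads."""
    return v1 * i1 + v2 * i2

def metered_power_2s(v1, i1, v2, i2):
    """2S approximation: line-to-line potential (v1 + v2) times the
    sum of the two half-value current measurements ((i1 + i2) / 2)."""
    return (v1 + v2) * (i1 + i2) / 2

def registration_error(v1, i1, v2, i2):
    """actual - metered, which algebraically equals 0.5*(v1-v2)*(i1-i2)."""
    return actual_power(v1, i1, v2, i2) - metered_power_2s(v1, i1, v2, i2)

# Balanced potentials: no error, even with unbalanced loads.
print(registration_error(120.0, 30.0, 120.0, 10.0))   # 0.0
# Unbalanced potentials and unbalanced loads: 0.5 * 6 * 20 = 60 watts of error.
print(registration_error(123.0, 30.0, 117.0, 10.0))   # 60.0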
Blondel's theorem
[ "Physics", "Engineering" ]
1,052
[ "Equations of physics", "Physical quantities", "Eponymous theorems of physics", "Power (physics)", "Electric power", "Electrical engineering", "Physics theorems" ]
6,945,596
https://en.wikipedia.org/wiki/Tegart%27s%20Wall
Tegart's Wall was a barbed wire fence erected in May–June 1938 by British Mandatory authorities in the Upper Galilee near the northern border of the territory in order to keep militants from infiltrating from French-controlled Mandatory Lebanon and Syria to join the 1936–1939 Arab revolt in Palestine. Over time the security system was expanded to include police forts, smaller pillbox-type fortified positions, and mounted police squads patrolling along it. It was described as an "ingenious solution for handling terrorism in Mandatory Palestine." History The wall was built on the advice of Charles Tegart, adviser to the Palestine Government on the suppression of terrorism. In his first report, Tegart wrote that the border could not be defended along most of its length under the prevailing topographical conditions. The barrier was strung from Ras en Naqura on the Mediterranean coast to the north edge of Lake Tiberias at a cost of $450,000. It included a nine-foot barbed wire fence that roughly followed the border between Palestine and French-mandated Lebanon, but the Galilee panhandle was left outside it. Before the fence was completed, "a band of Arab terrorists swooped down on a section of the fence… ripped it up and carted it across the frontier into Lebanon." Five Tegart forts and twenty pillboxes were built along the route of the fence. Nevertheless, the infiltrators easily overcame the fence and evaded mobile patrols along the frontier road. The barrier, which impeded both legal and illegal trade, angered local inhabitants on both sides of the border because it bisected pastures and private property. After the rebellion was suppressed in 1939, the wall was dismantled. See also Separation barrier References Bibliography History of Mandatory Palestine 1936–1939 Arab revolt in Palestine Border barriers Walls 1938 establishments in Mandatory Palestine 1940s disestablishments in Mandatory Palestine Upper Galilee
Tegart's Wall
[ "Engineering" ]
379
[ "Separation barriers", "Border barriers" ]
6,945,717
https://en.wikipedia.org/wiki/Ultimate%20failure
In mechanical engineering, ultimate failure describes the breaking of a material. In general there are two types of failure: fracture and buckling. Fracture of a material occurs when an internal or external crack propagates across the width or length of the material. In ultimate failure this will result in one or more breaks in the material. Buckling occurs when compressive loads are applied to the material and, instead of cracking, the material bows. This is undesirable because most tools that are designed to be straight will be inadequate if curved. If the buckling continues, it will create tension on the outer side of the bend and compression on the inner side, potentially fracturing the material. In engineering there are multiple types of failure based upon the application of the material. In many machine applications any change in the part due to yielding will result in the machine piece needing to be replaced. Although this deformation or weakening of the material is not the technical definition of ultimate failure, the piece has failed. In most technical applications, pieces are rarely allowed to reach their ultimate failure or breakage point; instead, for safety factors, they are removed at the first signs of significant wear. There are two different types of fracture: brittle and ductile. Which type of failure occurs depends on the material's ductility. Brittle failure occurs with little to no plastic deformation before fracture. An example is stretching a clay pot or rod: when stretched, it will not neck or elongate, but merely break into two or more pieces. When tensile stress is applied to a ductile material, instead of breaking immediately the material elongates. The material will begin by elongating uniformly until it reaches the yield point, then the material will begin to neck. When necking occurs the material will begin to stretch more in the middle and the radius will decrease. Once this begins the material has entered a stage called plastic deformation. Once the material has reached its ultimate tensile strength it will elongate more easily until it reaches ultimate failure and breaks. See also Failure causes Material strength Fabrication (metal) References Manufacturing Processes for Engineering Materials Fifth Edition Reliability engineering Maintenance
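As a small illustration of the stress-strain behaviour described above, the following Python sketch locates the ultimate tensile strength on an engineering stress-strain curve; the data points are invented for illustration, not measured values.

# Locating the ultimate tensile strength (UTS) on an engineering
# stress-strain curve for a ductile material. Data points are invented.

strain = [0.00, 0.05, 0.10, 0.15, 0.20, 0.25, 0.30]   # dimensionless
stress = [0, 250, 400, 450, 470, 440, 380]            # MPa

# Engineering stress peaks at the UTS; beyond it, necking concentrates
# deformation and the measured stress falls until ultimate failure.
uts = max(stress)
print(f"UTS = {uts} MPa at strain {strain[stress.index(uts)]}")   # UTS = 470 MPa at strain 0.2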
Ultimate failure
[ "Engineering" ]
436
[ "Systems engineering", "Maintenance", "Mechanical engineering", "Reliability engineering" ]
6,945,834
https://en.wikipedia.org/wiki/List%20of%20level%20editors
This is a list of level editors for video games. Level editors allow for the customization and modification of levels within games. Official or single games Generic Gamestudio, a commercial level editor for the Gamestudio engine Grome by Quad Software GtkRadiant by id Software, Loki Software, Infinity Ward, and Treyarch Future Pinball - A pinball editor. QuArK, Quake Army Knife editor, for a variety of engines (such as Quake III Arena, Half-Life, Source engine games, Torque, etc.) Quiver (level editor), a level editor for the original Quake engine developed solely for the Classic Macintosh Operating System by Scott Kevill, who is also the developer and administrator of GameRanger Visual Pinball Stencyl includes a Scene Designer module which is used to place tiles and actors, and to assign behaviors and settings. See also noclip.website, an online map viewer for levels from various games References Level editors
List of level editors
[ "Technology" ]
193
[ "Computing-related lists", "Video game lists" ]
6,946,171
https://en.wikipedia.org/wiki/Canonical%20Huffman%20code
In computer science and information theory, a canonical Huffman code is a particular type of Huffman code with unique properties which allow it to be described in a very compact manner. Rather than storing the structure of the code tree explicitly, canonical Huffman codes are ordered in such a way that it suffices to store only the lengths of the codewords, which reduces the overhead of the codebook. Motivation Data compressors generally work in one of two ways. Either the decompressor can infer what codebook the compressor has used from previous context, or the compressor must tell the decompressor what the codebook is. Since a canonical Huffman codebook can be stored especially efficiently, most compressors start by generating a "normal" Huffman codebook, and then convert it to canonical Huffman before using it. In order for a symbol code scheme such as the Huffman code to be decompressed, the same model that the encoding algorithm used to compress the source data must be provided to the decoding algorithm so that it can use it to decompress the encoded data. In standard Huffman coding this model takes the form of a tree of variable-length codes, with the most frequent symbols located at the top of the structure and being represented by the fewest bits. However, this code tree introduces two critical inefficiencies into an implementation of the coding scheme. Firstly, each node of the tree must store either references to its child nodes or the symbol that it represents. This is expensive in memory usage, and if there is a high proportion of unique symbols in the source data then the size of the code tree can account for a significant amount of the overall encoded data. Secondly, traversing the tree is computationally costly, since it requires the algorithm to jump randomly through the structure in memory as each bit in the encoded data is read in. Canonical Huffman codes address these two issues by generating the codes in a clear standardized format; all the codes for a given length are assigned their values sequentially. This means that instead of storing the structure of the code tree for decompression, only the lengths of the codes are required, reducing the size of the encoded data. Additionally, because the codes are sequential, the decoding algorithm can be dramatically simplified so that it is computationally efficient. Algorithm The normal Huffman coding algorithm assigns a variable length code to every symbol in the alphabet. More frequently used symbols will be assigned a shorter code. For example, suppose we have the following non-canonical codebook: A = 11 B = 0 C = 101 D = 100 Here the letter A has been assigned 2 bits, B has 1 bit, and C and D both have 3 bits. To make the code a canonical Huffman code, the codes are renumbered. The bit lengths stay the same, with the code book being sorted first by codeword length and secondly by alphabetical value of the letter: B = 0 A = 11 C = 101 D = 100 Each of the existing codes is replaced with a new one of the same length, using the following algorithm: The first symbol in the list gets assigned a codeword which is the same length as the symbol's original codeword but all zeros. This will often be a single zero ('0'). Each subsequent symbol is assigned the next binary number in sequence, ensuring that following codes are always higher in value. When you reach a longer codeword, then after incrementing, append zeros until the length of the new codeword is equal to the length of the old codeword. This can be thought of as a left shift.
By following these three rules, the canonical version of the code book produced will be: B = 0 A = 10 C = 110 D = 111 As a fractional binary number Another perspective on the canonical codewords is that they are the digits past the radix point (binary point) in a binary representation of a certain series. Specifically, suppose the lengths of the codewords are l1 ... ln. Then the canonical codeword for symbol i is the first li binary digits past the radix point in the binary representation of the partial sum 2^(−l1) + 2^(−l2) + ... + 2^(−l(i−1)). This perspective is particularly useful in light of Kraft's inequality, which says that the sum above will always be less than or equal to 1 (since the lengths come from a prefix-free code). This shows that adding one in the algorithm above never overflows into a codeword that is longer than intended. Encoding the codebook The advantage of a canonical Huffman tree is that it can be encoded in fewer bits than an arbitrary tree. Let us take our original Huffman codebook: A = 11 B = 0 C = 101 D = 100 There are several ways we could encode this Huffman tree. For example, we could write each symbol followed by the number of bits and code: ('A',2,11), ('B',1,0), ('C',3,101), ('D',3,100) Since we are listing the symbols in sequential alphabetical order, we can omit the symbols themselves, listing just the number of bits and code: (2,11), (1,0), (3,101), (3,100) With our canonical version we have the knowledge that the symbols are in sequential alphabetical order and that a later code will always be higher in value than an earlier one. The only parts left to transmit are the bit-lengths (number of bits) for each symbol. Note that our canonical Huffman tree always has higher values for longer bit lengths and that any symbols of the same bit length (C and D) have higher code values for higher symbols: A = 10 (code value: 2 decimal, bits: 2) B = 0 (code value: 0 decimal, bits: 1) C = 110 (code value: 6 decimal, bits: 3) D = 111 (code value: 7 decimal, bits: 3) Since two-thirds of the constraints are known, only the number of bits for each symbol need be transmitted: 2, 1, 3, 3 With knowledge of the canonical Huffman algorithm, it is then possible to recreate the entire table (symbol and code values) from just the bit-lengths. Unused symbols are normally transmitted as having zero bit length. Another efficient way of representing the codebook is to list all symbols in increasing order by their bit-lengths, and record the number of symbols for each bit-length. For the example mentioned above, the encoding becomes: (1,1,2), ('B','A','C','D') This means that the first symbol B is of length 1, then A of length 2, and the remaining 2 symbols (C and D) of length 3. Since the symbols are sorted by bit-length, we can efficiently reconstruct the codebook. Pseudocode describing the reconstruction is given in the next section. This type of encoding is advantageous when only a few symbols in the alphabet are being compressed. For example, suppose the codebook contains only 4 letters C, O, D and E, each of length 2. To represent the letter O using the previous method, we need to either add a lot of zeros (Method1): 0, 0, 2, 2, 2, 0, ... , 2, ... or record which 4 letters we have used. Each way makes the description longer than the following (Method2): (0,4), ('C','O','D','E') The JPEG File Interchange Format uses Method2 of encoding, because at most only 162 symbols out of the 8-bit alphabet, which has size 256, will be in the codebook.
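The renumbering just described is short enough to state as executable code; the following Python sketch (the function name and dict-based interface are choices made here, not part of the article) reproduces the canonical codebook for the running example.

# Canonical Huffman code assignment from codeword bit lengths alone.
# lengths: dict mapping symbol -> bit length, e.g. from a normal Huffman pass.

def canonical_codes(lengths):
    """Return {symbol: canonical codeword as a bit string}."""
    code = 0
    prev_len = 0
    result = {}
    # Sort by length first, then alphabetically, as in the article.
    for symbol, length in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (length - prev_len)   # append zeros when the length grows
        result[symbol] = format(code, f'0{length}b')
        code += 1                      # next code is always higher in value
        prev_len = length
    return result

print(canonical_codes({'A': 2, 'B': 1, 'C': 3, 'D': 3}))
# {'B': '0', 'A': '10', 'C': '110', 'D': '111'}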
Pseudocode Given a list of symbols sorted by bit-length, the following pseudocode will print a canonical Huffman code book:

code := 0
while more symbols do
    print symbol, code
    code := (code + 1) << ((bit length of the next symbol) − (current bit length))

algorithm compute huffman code is
    input: message ensemble (set of (message, probability)) and base D.
    output: code ensemble (set of (message, code)).
    1- sort the message ensemble by decreasing probability.
    2- N is the cardinal of the message ensemble (number of different messages).
    3- compute the integer n0 such that 2 ≤ n0 ≤ D and (N − n0)/(D − 1) is an integer.
    4- select the n0 least probable messages, and assign them each a digit code.
    5- substitute the selected messages by a composite message summing their probability, and re-order it.
    6- while there remains more than one message, do steps 7 and 8.
    7- select the D least probable messages, and assign them each a digit code.
    8- substitute the selected messages by a composite message summing their probability, and re-order it.
    9- the code of each message is given by the concatenation of the code digits of the aggregate they've been put in.

References Lossless compression algorithms Coding theory Data compression
Canonical Huffman code
[ "Mathematics" ]
1,870
[ "Discrete mathematics", "Coding theory" ]
6,946,381
https://en.wikipedia.org/wiki/Exploding%20trousers
In New Zealand in the 1930s, farmers reportedly had trouble with exploding trousers as a result of attempts to control ragwort, an agricultural weed. Farmers had been spraying sodium chlorate, a government-recommended weedkiller, onto the ragwort, and some of the spray had ended up on their clothes. Sodium chlorate is a strong oxidizing agent, and reacted with the organic fibres (i.e., the wool and the cotton) of the clothes. Reports had farmers' trousers variously smoldering and bursting into flame, particularly when exposed to heat or naked flames. One report had trousers that were hanging on a washing line starting to smoke. There were also several reports of trousers exploding while farmers were wearing them, causing severe burns. The history was written up by James Watson of Massey University in a widely reported article, "The Significance of Mr. Richard Buckley's Exploding Trousers", which later won him an Ig Nobel Prize. On television In their May 2006 "Exploding Pants" episode, the popular U.S. television show MythBusters investigated the idea that trousers could explode based on the events of New Zealand in the 1930s. Experimenters tested four substances on 100% cotton overalls: a paste comprising a mixture of gunpowder and water; a "herbicide from the 1930s" which was sodium chlorate, a potentially explosive herbicide used at the time of the events; a "fertilizer from the 1930s" which was ammonium nitrate mixed with a liquid fuel (most likely diesel, as an ammonium nitrate bottle, with the label facing the camera, was in the foreground of the shot, in the presence of a red plastic fuel can on the table); gun cotton, the common name for nitrocellulose. Each of these was put to four different ignition methods: flame, radiant heat, friction, and impact. Although not naming "the herbicide" as sodium chlorate, they confirmed that trousers impregnated therewith would indeed vigorously combust upon exposure to flame, radiant heat, and impact, though their friction tests did not cause ignition. However, combustion (i.e. an exothermic chemical reaction between a fuel and an oxidant) is not the same as an explosion, which involves a rapid increase in volume accompanied by the release of energy in an extreme manner (i.e. a shock wave). Even so, a person witnessing such an event (especially if they were wearing the trousers) would likely describe such a sudden event as an explosion. The tests also revealed that none of the other three substances caused combustion of the trousers, thus indicating that sodium chlorate was probably a cause for the events that occurred. ABC's The Science Show described exploding trousers as "the scenario for a Goon Show", and, in an example of art imitating life, it actually was. The Goons wrote a script about a chemical which "when applied to the tail of a military soldier shirt, is tasteless, colourless, and odourless" but that "The moment the wearer sits down, the heat from his body causes the chemical to explode." In the final episode of Blackadder Goes Forth, Captain Edmund Blackadder says that he's "Off to Hartlepool to buy some exploding trousers" when feigning madness to avoid going over the top. See also Agriculture in New Zealand References Further reading (The author won an Ig Nobel Prize in 2005 for this paper) Trousers Agriculture in New Zealand Pesticides in New Zealand Trousers and shorts Occupational safety and health Forteana
Exploding trousers
[ "Chemistry" ]
736
[ "Explosions" ]
15,900,261
https://en.wikipedia.org/wiki/Smart%20rubber
Smart rubber is a polymeric material that is able to "heal" when torn. Near room temperature this process is reversible and can be cycled several times. Supramolecular self-healing rubber can be processed, re-used, and ultimately recycled. The edges of a tear can be held together, and they will simply re-bond into apparent solidity. This is done by utilizing a hydrogen-bonding polymer, rather than producing a material whose structure would depend on covalent bonding and ionic bonding between chains, which is typical of normal rubber. In this case hydrogen bonding can occur simply by pressing two faces of the substance together, allowing the recovery of a continuous hydrogen bonding network. Hydrogen bonding networks Smart rubber will recover its original mechanical strength within several hours of being split and then subsequently recombined. Residual hydrogen bond donors and acceptors responsible for the self-healing properties of the elastomer remain unpaired until the newly exposed surface comes in contact with another complementary surface, allowing formation of new intermolecular hydrogen bonds. Comparisons with conventional rubber When compared to rubber, which is covalently cross-linked, smart rubber cannot continually hold mechanical stress without undergoing gradual plastic deformation, and strain recovery is typically slow. See also Ludwik Leibler (inventor of smart rubber) Smart polymer References Rubber Smart materials
Smart rubber
[ "Materials_science", "Engineering" ]
274
[ "Smart materials", "Materials science" ]
15,900,338
https://en.wikipedia.org/wiki/Clinical%20quality%20management%20system
Clinical quality management systems (CQMS) are systems used in the life sciences sector (primarily in the pharmaceutical, biologics and medical device industries) designed to manage quality management best practices throughout clinical research and clinical study management. A CQMS is designed to manage all of the documents, activities, tasks, processes, quality events, relationships, audits and training that must be administered and controlled throughout the life of a clinical trial. The premise of a CQMS is to bring together the activities led by two sectors of clinical research, Clinical Quality and Clinical Operations, to facilitate cross-functional activities to improve efficiencies and transparency and to encourage the use of risk mitigation and risk management practices at the clinical study level. A CQMS is based on the principles of quality management systems (QMS), which are used in many industries to create a framework for defining and delivering quality outcomes, managing risk, and driving continual improvement. Many guidelines and governance bodies have been established to ensure a common approach within a given industry to a set of parameters used to identify the minimally acceptable standard for that industry. The pharmaceutical industry is no exception, with several trade groups (e.g. PhRMA, EFPIA, RQA, etc.) coming together to enhance collaboration. However, as noted by the Academy of Medical Sciences, there are increasingly complex and bureaucratic legal and ethical frameworks that innovators must work within to develop new medicines for patients. The historical pharmaceutical QMS applies primarily to good manufacturing practice as described in existing ISO (International Organization for Standardization) and ICH (International Council for Harmonisation) guidelines. "Good Manufacturing Practices (GMP) relate to quality control and quality assurance enabling companies in the pharmaceutical sector to minimize or eliminate instances of contamination, mix-ups, and errors. This in turn, protects the customer from purchasing a product which is ineffective or even dangerous." These standards have historically been applied to the manufacturing environment, appropriate to how they have been written. However, according to the FDA as well as other regulatory bodies, "Implementation of ICH Q10 throughout the product lifecycle should facilitate innovation and continual improvement", implying that the same standards that apply to the manufacturing environment should also be applied to the clinical research space, earlier in the lifecycle of an investigational or marketed product. Accordingly, a CQMS is any system developed to apply these principles to clinical operations within an organization. References Data modeling Rule engines Decision support systems Expert systems
Clinical quality management system
[ "Technology", "Engineering" ]
497
[ "Decision support systems", "Data modeling", "Data engineering", "Information systems", "Expert systems" ]
15,901,419
https://en.wikipedia.org/wiki/Mining%20Remediation%20Authority
The Mining Remediation Authority is a non-departmental public body of the United Kingdom government sponsored by the Department for Energy Security and Net Zero (DESNZ). It owns the vast majority of unworked coal in Great Britain, as well as former coal mines, and undertakes a range of functions including: licensing coal mining operations matters with respect to coal mining subsidence damage outside the areas of responsibility of coal mining licensees dealing with property and historical liability issues; for example environmental projects, mine water treatment schemes and surface hazards relating to past coal mining providing public access to information held by the Mining Remediation Authority on coal mining The Mining Remediation Authority changed its name from the Coal Authority in November 2024. Purpose The Mining Remediation Authority’s stated purpose is to: keep people safe and provide peace of mind protect and enhance the environment use its information and expertise to help people make informed decisions create value and minimise cost to the taxpayer The Mining Remediation Authority provides services to other government departments and agencies, local governments and commercial partners, while contributing to the delivery of the government’s Industrial Strategy and the 25-year Environment Plan. As a public body that holds significant geospatial data it is also working with the Geospatial Commission to look at how, by working together, it can unlock significant value across the economy. As part of the Mining Remediation Authority's duty to protect the public and the environment, it operates a 24-hour telephone line for reporting coal mine hazards and operates 82 mine water treatment schemes across the UK, cleaning more than 122 billion litres of mine water every year. Governance and strategy The Mining Remediation Authority has an independent board responsible for setting its strategic direction, policies and priorities, while ensuring its statutory duties are carried out effectively. Non-executive directors are recruited and appointed to the board by the Secretary of State for DESNZ. Executive directors are recruited to their posts by the board and appointed to the board by the Secretary of State for DESNZ. History It was established under the Coal Industry Act 1994 (c. 21) to manage some functions, in which the British Coal Corporation (formerly the National Coal Board) had previously undertaken, including ownership of unworked coal. In November 2024, the organisation's name was changed from the Coal Authority to the Mining Remediation Authority to "better reflect the organisation’s 24/7 role to manage the effects of historical mining in England, Scotland and Wales and its work to seek low-carbon opportunities from our mining heritage for the future." 
The Mining Remediation Authority's public task comprises all the functions, duties and responsibilities set out in the following documents: The Coal Industry Act 1994 ("the 1994 act") The Coal Mining Subsidence Act 1991, as amended by the 1994 act A Revised Coal Authority Explanatory Note, produced by the Department of Trade and Industry (DTI) for Parliament in June 1994 to explain the intended provisions of the Coal Industry Bill The Water Act 2003 The Water Environment and Water Services (Scotland) Act 2003 Statement by Lord Strathclyde in 1994 on the government's expectations of the Coal Authority relating to mine water remediation Re-statement by John Battle MP in 1998 that the Mining Remediation Authority is the designated body with responsibility for dealing with mine water discharges from former coal mines The Energy Act 2011 Its headquarters are in Mansfield, Nottinghamshire, where its Mining Heritage Centre is also based. This archive houses a large quantity of data, including historical information, relating to coal mining in Britain. The unique collection of around 120,000 coal abandonment plans, covering both opencast and deep mining operations, dates as far back as the 17th century and depicts areas of extraction and the points of entry into them. Historical mine plans can be accessed for research purposes, for desktop studies prior to development or simply by members of the public with an interest in the history of mining. The Mining Remediation Authority also has a large collection of more than 47,000 British Coal photographs, which feature a wide range of collieries and cover every aspect of coal mining. All plans and photographs have been digitally scanned and are available to any interested parties. They can be viewed at the Mining Heritage Centre in Mansfield. The Water and Abandoned Metal Mines (WAMM) programme To tackle the water pollution caused by historical metal mining in England, the Mining Remediation Authority works with the Environment Agency in the Water and Abandoned Metal Mines partnership, funded by the Department for Environment, Food and Rural Affairs (Defra). Mining reports The Mining Remediation Authority's Commercial Reports and Advisory Services provides comprehensive mining report services, including desktop reports, pre-planning advice, project management and civil, structural and environmental engineering. Reports available include: CON29M Coal mining report – an official coal mining report for the conveyancing industry Ground Stability report – a CON29M report, combined with non-coal related British Geological Survey subsidence information Enviro All-in-One report – a CON29M report, combined with Groundsure's homebuyers environmental report Consultants Coal Mining report - builds on information in a CON29M report, giving additional details such as seam names and depths. Aimed towards mining consultants No Search Certificate – a certificate to show that a property is not within a known area of past, present or proposed future mining activity Mine energy and heat Abandoned coal mines are a source of geothermal energy, and could also be used for cooling and for storing waste or renewable energy inter-seasonally. As mines become flooded they have the potential to meet all of the heating needs of the coalfield communities, which account for 25% of the UK population. In the case of a district heating network, this energy can be transferred to a pipe network using a heat exchanger, and then distributed to nearby homes.
Abandoned coal mines present an opportunity to the UK as a source of geothermal energy, and this is being explored by the Mining Remediation Authority, which is working in partnership with local authorities and other companies to fulfil this potential. Consultancy services The Mining Remediation Authority provides consultancy services on: Geothermal energy from abandoned coal mines Geothermal energy from abandoned coal mines is a low carbon, sustainable heat source which under the right conditions can compete with public supply gas prices. Coal mining risk assessments When submitting a planning application within a high risk area, it is likely a Coal Mining Risk Assessment will need to be submitted. Mine water treatment schemes It provides expert advice and assistance to help developers understand and investigate the risks of past mining. Tip inspection and management It has expertise in the long term management of the risks associated with tips and delivering inspection programmes to mitigate these risks. Water level monitoring The Mining Remediation Authority has significant experience gained in monitoring an extensive mine water treatment scheme portfolio. Engineering design It has a team experienced in engineering design, with many years of providing the critical information required to complete complex civil engineering projects. Borehole drilling advice The complex nature of mine workings can cause operational issues when drilling boreholes, which requires specialist knowledge to be carried out accurately. Mining hydrogeology It has a team of hydrogeologists with a wide range of experience in all types of hydrogeology, specialising in complex mining hydrogeology. See also Coal mining in the United Kingdom Sources References External links Coal Industry Act 1994 Coal mining in the United Kingdom Coal organizations Department of Energy and Climate Change Mansfield Non-departmental public bodies of the United Kingdom government Organisations based in Nottinghamshire Organizations established in 1994 1994 establishments in the United Kingdom
Mining Remediation Authority
[ "Engineering" ]
1,529
[ "Coal organizations", "Energy organizations" ]
15,901,488
https://en.wikipedia.org/wiki/Prescribed%20scalar%20curvature%20problem
In Riemannian geometry, a branch of mathematics, the prescribed scalar curvature problem is as follows: given a closed, smooth manifold M and a smooth, real-valued function ƒ on M, construct a Riemannian metric on M whose scalar curvature equals ƒ. Due primarily to the work of J. Kazdan and F. Warner in the 1970s, this problem is well understood. The solution in higher dimensions If the dimension of M is three or greater, then any smooth function ƒ which takes on a negative value somewhere is the scalar curvature of some Riemannian metric. The assumption that ƒ be negative somewhere is needed in general, since not all manifolds admit metrics which have strictly positive scalar curvature. (For example, the three-dimensional torus is such a manifold.) However, Kazdan and Warner proved that if M does admit some metric with strictly positive scalar curvature, then any smooth function ƒ is the scalar curvature of some Riemannian metric. See also Prescribed Ricci curvature problem Yamabe problem References Aubin, Thierry. Some nonlinear problems in Riemannian geometry. Springer Monographs in Mathematics, 1998. Kazdan, J., and Warner F. Scalar curvature and conformal deformation of Riemannian structure. Journal of Differential Geometry. 10 (1975). 113–134. Riemannian geometry Mathematical problems Scalar curvature
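Stated compactly, the two results above take the following form; the LaTeX below is a rendering of exactly those statements, with the packaging into a single theorem being a presentational choice.

\documentclass{article}
\usepackage{amsthm}
\newtheorem{theorem}{Theorem}
\begin{document}
\begin{theorem}[Kazdan--Warner]
Let $M$ be a closed smooth manifold of dimension $n \ge 3$ and let
$f \in C^{\infty}(M)$. (1) If $f$ takes a negative value somewhere on $M$,
then $f$ is the scalar curvature of some Riemannian metric on $M$.
(2) If $M$ admits some metric of strictly positive scalar curvature,
then every smooth function $f$ is the scalar curvature of some
Riemannian metric on $M$.
\end{theorem}
\end{document}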
Prescribed scalar curvature problem
[ "Physics", "Mathematics" ]
285
[ "Geometric measurement", "Mathematical problems", "Physical quantities", "Curvature (mathematics)" ]
15,902,562
https://en.wikipedia.org/wiki/IT%20risk
Information technology risk, IT risk, IT-related risk, or cyber risk is any risk relating to information technology. While information has long been appreciated as a valuable and important asset, the rise of the knowledge economy and the Digital Revolution has led to organizations becoming increasingly dependent on information, information processing and especially IT. Various events or incidents that compromise IT in some way can therefore cause adverse impacts on the organization's business processes or mission, ranging from inconsequential to catastrophic in scale. Assessing the probability or likelihood of various types of event/incident with their predicted impacts or consequences, should they occur, is a common way to assess and measure IT risks. Alternative methods of measuring IT risk typically involve assessing other contributory factors such as the threats, vulnerabilities, exposures, and asset values. Definitions ISO IT risk: the potential that a given threat will exploit vulnerabilities of an asset or group of assets and thereby cause harm to the organization. It is measured in terms of a combination of the probability of occurrence of an event and its consequence. Committee on National Security Systems The Committee on National Security Systems of the United States of America defined risk in different documents: From CNSS Instruction No. 4009 dated 26 April 2010 comes the basic, more technically focused definition: Risk – Possibility that a particular threat will adversely impact an IS by exploiting a particular vulnerability. National Security Telecommunications and Information Systems Security Instruction (NSTISSI) No. 1000 introduces a probability aspect, quite similar to the NIST SP 800-30 one: Risk – A combination of the likelihood that a threat will occur, the likelihood that a threat occurrence will result in an adverse impact, and the severity of the resulting impact National Information Assurance Training and Education Center defines risk in the IT field as: The loss potential that exists as the result of threat-vulnerability pairs. Reducing either the threat or the vulnerability reduces the risk. The uncertainty of loss expressed in terms of probability of such loss. The probability that a hostile entity will successfully exploit a particular telecommunications or COMSEC system for intelligence purposes; its factors are threat and vulnerability. A combination of the likelihood that a threat shall occur, the likelihood that a threat occurrence shall result in an adverse impact, and the severity of the resulting adverse impact. the probability that a particular threat will exploit a particular vulnerability of the system. NIST Many NIST publications define risk in an IT context; the FISMApedia term provides a list. Among them: According to NIST SP 800-30: Risk is a function of the likelihood of a given threat-source’s exercising a particular potential vulnerability, and the resulting impact of that adverse event on the organization. From NIST FIPS 200 Risk – The level of impact on organizational operations (including mission, functions, image, or reputation), organizational assets, or individuals resulting from the operation of an information system given the potential impact of a threat and the likelihood of that threat occurring.
NIST SP 800-30 defines: IT-related risk The net mission impact considering: the probability that a particular threat-source will exercise (accidentally trigger or intentionally exploit) a particular information system vulnerability and the resulting impact if this should occur. IT-related risks arise from legal liability or mission loss due to: Unauthorized (malicious or accidental) disclosure, modification, or destruction of information Unintentional errors and omissions IT disruptions due to natural or man-made disasters Failure to exercise due care and diligence in the implementation and operation of the IT system. Risk management insight IT risk is the probable frequency and probable magnitude of future loss. ISACA ISACA published the Risk IT Framework in order to provide an end-to-end, comprehensive view of all risks related to the use of IT. There, IT risk is defined as: The business risk associated with the use, ownership, operation, involvement, influence and adoption of IT within an enterprise According to Risk IT, IT risk has a broader meaning: it encompasses not only the negative impact of operations and service delivery which can bring destruction or reduction of the value of the organization, but also the benefit/value enabling risk associated with missing opportunities to use technology to enable or enhance business, or with IT project management aspects like overspending or late delivery with adverse business impact. Measuring IT risk You can't effectively and consistently manage what you can't measure, and you can't measure what you haven't defined. Measuring IT risk (or cyber risk) can occur at many levels. At a business level, the risks are managed categorically. Front line IT departments and NOCs tend to measure more discrete, individual risks. Managing the nexus between them is a key role for modern CISOs. When measuring risk of any kind, selecting the correct equation for a given threat, asset, and available data is an important step. Doing so is a subject unto itself, but there are common components of risk equations that are helpful to understand. There are four fundamental forces involved in risk management, which also apply to cybersecurity. They are assets, impact, threats, and likelihood. You have internal knowledge of and a fair amount of control over assets, which are tangible and intangible things that have value. You also have some control over impact, which refers to loss of, or damage to, an asset. However, threats that represent adversaries and their methods of attack are external to your control. Likelihood is the wild card in the bunch. Likelihoods determine if and when a threat will materialize, succeed, and do damage. While never fully under your control, likelihoods can be shaped and influenced to manage the risk. Mathematically, the forces can be represented in a formula such as: Risk = p(Asset, Threat) × d(Asset, Threat), where p() is the likelihood that a Threat will materialize/succeed against an Asset, and d() is the likelihood of various levels of damage that may occur. The field of IT risk management has spawned a number of terms and techniques which are unique to the industry. Some industry terms have yet to be reconciled. For example, the term vulnerability is often used interchangeably with likelihood of occurrence, which can be problematic.
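As a toy illustration of this likelihood-times-damage structure, the following Python sketch multiplies the probability that a threat appears, the probability that it succeeds, and the damage done on success; the multiplicative form, the function name and every number here are assumptions for illustration, not values or formulas prescribed by the standards cited above.

# Toy model of the likelihood-times-damage structure described in the text.
# All names and numbers are illustrative assumptions, not from any standard.

def expected_annual_loss(asset_value, p_threat_appears, p_threat_succeeds, damage_fraction):
    """Expected loss per year, in the same monetary unit as asset_value.

    p_threat_appears:  chance per year the threat materializes
    p_threat_succeeds: chance the threat succeeds, given that it appears
    damage_fraction:   fraction of the asset value lost on success
    """
    likelihood = p_threat_appears * p_threat_succeeds   # plays the role of p()
    damage = asset_value * damage_fraction              # plays the role of d(), scaled to money
    return likelihood * damage

# A 200,000-unit asset; threat appears ~0.3 times/year, succeeds 25% of the
# time it appears, and destroys 40% of the asset's value when it succeeds:
print(expected_annual_loss(200_000, 0.3, 0.25, 0.4))    # 6000.0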
Often encountered IT risk management terms and techniques include: Information security event An identified occurrence of a system, service or network state indicating a possible breach of information security policy or failure of safeguards, or a previously unknown situation that may be security relevant. Occurrence of a particular set of circumstances The event can be certain or uncertain. The event can be a single occurrence or a series of occurrences. (ISO/IEC Guide 73) Information security incident is indicated by a single or a series of unwanted information security events that have a significant probability of compromising business operations and threatening information security An event [G.11] that has been assessed as having an actual or potentially adverse effect on the security or performance of a system. Impact The result of an unwanted incident [G.17]. (ISO/IEC PDTR 13335-1) Consequence Outcome of an event [G.11] There can be more than one consequence from one event. Consequences can range from positive to negative. Consequences can be expressed qualitatively or quantitatively (ISO/IEC Guide 73) The risk R is the product of the likelihood L of a security incident occurring times the impact I that will be incurred by the organization due to the incident, that is: R = L × I The likelihood of a security incident occurrence is a function of the likelihood that a threat appears and the likelihood that the threat can successfully exploit the relevant system vulnerabilities. The consequence of the occurrence of a security incident is a function of the likely impact that the incident will have on the organization as a result of the harm the organization assets will sustain. Harm is related to the value of the assets to the organization; the same asset can have different values to different organizations. So R can be a function of four factors: A = Value of the assets T = the likelihood of the threat V = the nature of the vulnerability, i.e. the likelihood that it can be exploited (proportional to the potential benefit for the attacker and inversely proportional to the cost of exploitation) I = the likely impact, the extent of the harm If numerical values are assigned (money for impact and probabilities for the other factors), the risk can be expressed in monetary terms and compared to the cost of countermeasures and to the residual risk after applying the security control. It is not always practical to express these values, so in the first step of risk evaluation risks are graded on dimensionless three- or five-step scales. OWASP proposes a practical risk measurement guideline based on: Estimation of Likelihood as a mean of different factors on a 0 to 9 scale: Threat agent factors Skill level: How technically skilled is this group of threat agents? No technical skills (1), some technical skills (3), advanced computer user (4), network and programming skills (6), security penetration skills (9) Motive: How motivated is this group of threat agents to find and exploit this vulnerability? Low or no reward (1), possible reward (4), high reward (9) Opportunity: What resources and opportunity are required for this group of threat agents to find and exploit this vulnerability? Full access or expensive resources required (0), special access or resources required (4), some access or resources required (7), no access or resources required (9) Size: How large is this group of threat agents?
Developers (2), system administrators (2), intranet users (4), partners (5), authenticated users (6), anonymous Internet users (9) Vulnerability Factors: the next set of factors is related to the vulnerability involved. The goal here is to estimate the likelihood of the particular vulnerability involved being discovered and exploited. Assume the threat agent selected above. Ease of discovery: How easy is it for this group of threat agents to discover this vulnerability? Practically impossible (1), difficult (3), easy (7), automated tools available (9) Ease of exploit: How easy is it for this group of threat agents to actually exploit this vulnerability? Theoretical (1), difficult (3), easy (5), automated tools available (9) Awareness: How well known is this vulnerability to this group of threat agents? Unknown (1), hidden (4), obvious (6), public knowledge (9) Intrusion detection: How likely is an exploit to be detected? Active detection in application (1), logged and reviewed (3), logged without review (8), not logged (9) Estimation of Impact as a mean of different factors on a 0 to 9 scale Technical Impact Factors; technical impact can be broken down into factors aligned with the traditional security areas of concern: confidentiality, integrity, availability, and accountability. The goal is to estimate the magnitude of the impact on the system if the vulnerability were to be exploited. Loss of confidentiality: How much data could be disclosed and how sensitive is it? Minimal non-sensitive data disclosed (2), minimal critical data disclosed (6), extensive non-sensitive data disclosed (6), extensive critical data disclosed (7), all data disclosed (9) Loss of integrity: How much data could be corrupted and how damaged is it? Minimal slightly corrupt data (1), minimal seriously corrupt data (3), extensive slightly corrupt data (5), extensive seriously corrupt data (7), all data totally corrupt (9) Loss of availability: How much service could be lost and how vital is it? Minimal secondary services interrupted (1), minimal primary services interrupted (5), extensive secondary services interrupted (5), extensive primary services interrupted (7), all services completely lost (9) Loss of accountability: Are the threat agents' actions traceable to an individual? Fully traceable (1), possibly traceable (7), completely anonymous (9) Business Impact Factors: The business impact stems from the technical impact, but requires a deep understanding of what is important to the company running the application. In general, one should be aiming to support one's risk assessment with an evaluation of the impact on the business if the business fails to guard against risk, particularly if one's audience is at the executive level. The business risk is what justifies investment in fixing security problems. Financial damage: How much financial damage will result from an exploit? Less than the cost to fix the vulnerability (1), minor effect on annual profit (3), significant effect on annual profit (7), bankruptcy (9) Reputation damage: Would an exploit result in reputation damage that would harm the business? Minimal damage (1), loss of major accounts (4), loss of goodwill (5), brand damage (9) Non-compliance: How much exposure does non-compliance introduce? Minor violation (2), clear violation (5), high-profile violation (7) Privacy violation: How much personally identifiable information could be disclosed?
One individual (3), hundreds of people (5), thousands of people (7), millions of people (9) If the business impact can be calculated accurately, use it in the following; otherwise use the technical impact. Rate likelihood and impact on a LOW, MEDIUM, HIGH scale, assuming that less than 3 is LOW, 3 to less than 6 is MEDIUM, and 6 to 9 is HIGH. Calculate the overall risk severity by combining the two ratings: with HIGH impact, a LOW, MEDIUM or HIGH likelihood gives Medium, High or Critical severity respectively; with MEDIUM impact it gives Low, Medium or High; and with LOW impact it gives Note, Low or Medium (a short worked sketch of this rating appears later in this article). IT risk management The NIST Cybersecurity Framework encourages organizations to manage IT risk as part of the Identify (ID) function: Risk Assessment (ID.RA): The organization understands the cybersecurity risk to organizational operations (including mission, functions, image, or reputation), organizational assets, and individuals. ID.RA-1: Asset vulnerabilities are identified and documented ID.RA-2: Cyber threat intelligence and vulnerability information is received from information sharing forums and sources ID.RA-3: Threats, both internal and external, are identified and documented ID.RA-4: Potential business impacts and likelihoods are identified ID.RA-5: Threats, vulnerabilities, likelihoods, and impacts are used to determine risk ID.RA-6: Risk responses are identified and prioritized Risk Management Strategy (ID.RM): The organization’s priorities, constraints, risk tolerances, and assumptions are established and used to support operational risk decisions. ID.RM-1: Risk management processes are established, managed, and agreed to by organizational stakeholders ID.RM-2: Organizational risk tolerance is determined and clearly expressed ID.RM-3: The organization’s determination of risk tolerance is informed by its role in critical infrastructure and sector-specific risk analysis IT risk laws and regulations In the following, a brief description of applicable rules, organized by source, is given. OECD The OECD issued the following: Organisation for Economic Co-operation and Development (OECD) Recommendation of the Council concerning guidelines governing the protection of privacy and trans-border flows of personal data (23 September 1980) OECD Guidelines for the Security of Information Systems and Networks: Towards a Culture of Security (25 July 2002). Topic: General information security. Scope: Non-binding guidelines to any OECD entities (governments, businesses, other organisations and individual users who develop, own, provide, manage, service, and use information systems and networks). The OECD Guidelines state the basic principles underpinning risk management and information security practices. While no part of the text is binding as such, non-compliance with any of the principles is indicative of a serious breach of RM/RA good practices that can potentially incur liability. European Union The European Union issued the following, divided by topic: Privacy Regulation (EC) No 45/2001 on the protection of individuals with regard to the processing of personal data by the Community institutions and bodies and on the free movement of such data provides an internal regulation, which is a practical application of the principles of the Privacy Directive described below. Furthermore, article 35 of the Regulation requires the Community institutions and bodies to take similar precautions with regard to their telecommunications infrastructure, and to properly inform the users of any specific risks of security breaches.
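The worked sketch promised above: a minimal Python rendering of the OWASP-style rating, in which the factor scores are illustrative sample values and the severity matrix mirrors the commonly published OWASP table (both are assumptions, not content from the sources cited in this article).

# Minimal sketch of the OWASP-style risk rating described earlier.
# Factor scores are sample values; the severity matrix follows the
# commonly published OWASP table (an assumption, not from this article).

def rate(score):
    """Convert a 0-9 factor average to LOW / MEDIUM / HIGH."""
    if score < 3:
        return "LOW"
    if score < 6:
        return "MEDIUM"
    return "HIGH"

# Eight likelihood factors (threat agent + vulnerability), 0-9 each:
likelihood_factors = [6, 4, 7, 4, 3, 5, 6, 3]
# Four technical impact factors (confidentiality, integrity, availability, accountability):
impact_factors = [6, 5, 5, 7]

likelihood = sum(likelihood_factors) / len(likelihood_factors)   # 4.75 -> MEDIUM
impact = sum(impact_factors) / len(impact_factors)               # 5.75 -> MEDIUM

severity = {  # (likelihood rating, impact rating) -> overall risk severity
    ("LOW", "LOW"): "Note", ("MEDIUM", "LOW"): "Low", ("HIGH", "LOW"): "Medium",
    ("LOW", "MEDIUM"): "Low", ("MEDIUM", "MEDIUM"): "Medium", ("HIGH", "MEDIUM"): "High",
    ("LOW", "HIGH"): "Medium", ("MEDIUM", "HIGH"): "High", ("HIGH", "HIGH"): "Critical",
}

print(severity[(rate(likelihood), rate(impact))])   # Medium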
Directive 95/46/EC on the protection of individuals with regard to the processing of personal data and on the free movement of such data requires that any personal data processing activity: undergoes a prior risk analysis in order to determine the privacy implications of the activity, and to determine the appropriate legal, technical and organisational measures to protect such activities; is effectively protected by such measures, which must be state of the art, taking into account the sensitivity and privacy implications of the activity (including when a third party is charged with the processing task); and is notified to a national data protection authority, including the measures taken to ensure the security of the activity. Furthermore, articles 25 and following of the Directive require Member States to ban the transfer of personal data to non-Member States, unless such countries have provided adequate legal protection for such personal data, or barring certain other exceptions. Commission Decision 2001/497/EC of 15 June 2001 on standard contractual clauses for the transfer of personal data to third countries, under Directive 95/46/EC; and Commission Decision 2004/915/EC of 27 December 2004 amending Decision 2001/497/EC as regards the introduction of an alternative set of standard contractual clauses for the transfer of personal data to third countries. Topic: Export of personal data to third countries, specifically non-E.U. countries which have not been recognised as having a data protection level that is adequate (i.e. equivalent to that of the E.U.). Both Commission Decisions provide a set of voluntary model clauses which can be used to export personal data from a data controller (who is subject to E.U. data protection rules) to a data processor outside the E.U. who is not subject to these rules or to a similar set of adequate rules. International Safe Harbor Privacy Principles (see below, USA and International Safe Harbor Privacy Principles) Directive 2002/58/EC of 12 July 2002 concerning the processing of personal data and the protection of privacy in the electronic communications sector National Security Directive 2006/24/EC of 15 March 2006 on the retention of data generated or processed in connection with the provision of publicly available electronic communications services or of public communications networks and amending Directive 2002/58/EC (‘Data Retention Directive’). Topic: Requirement for providers of public electronic telecommunications services to retain certain information for the purposes of the investigation, detection and prosecution of serious crime Council Directive 2008/114/EC of 8 December 2008 on the identification and designation of European critical infrastructures and the assessment of the need to improve their protection. Topic: Identification and protection of European Critical Infrastructures. Scope: Applicable to Member States and to the operators of European Critical Infrastructure (defined by the draft directive as ‘critical infrastructures the disruption or destruction of which would significantly affect two or more Member States, or a single Member State if the critical infrastructure is located in another Member State. This includes effects resulting from cross-sector dependencies on other types of infrastructure’). Requires Member States to identify critical infrastructures on their territories, and to designate them as ECIs.
Following this designation, the owners/operators of ECIs are required to create Operator Security Plans (OSPs), which should establish relevant security solutions for their protection.
Civil and Penal law
Council Framework Decision 2005/222/JHA of 24 February 2005 on attacks against information systems. Topic: General decision aiming to harmonise national provisions in the field of cyber crime, encompassing material criminal law (i.e. definitions of specific crimes), procedural criminal law (including investigative measures and international cooperation) and liability issues. Scope: Requires Member States to implement the provisions of the Framework Decision in their national legal frameworks. The Framework Decision is relevant to RM/RA because it contains the conditions under which legal liability can be imposed on legal entities for the conduct of certain natural persons of authority within the legal entity. Thus, the Framework Decision requires that the conduct of such figures within an organisation be adequately monitored, also because the Decision states that a legal entity can be held liable for acts of omission in this regard.
Council of Europe
Council of Europe Convention on Cybercrime, Budapest, 23.XI.2001, European Treaty Series No. 185. Topic: General treaty aiming to harmonise national provisions in the field of cyber crime, encompassing material criminal law (i.e. definitions of specific crimes), procedural criminal law (including investigative measures and international cooperation), liability issues and data retention. Apart from the definitions of a series of criminal offences in articles 2 to 10, the Convention is relevant to RM/RA because it states the conditions under which legal liability can be imposed on legal entities for the conduct of certain natural persons of authority within the legal entity. Thus, the Convention requires that the conduct of such figures within an organisation be adequately monitored, also because the Convention states that a legal entity can be held liable for acts of omission in this regard.
United States
The United States issued the following, divided by topic:
Civil and Penal law
Amendments to the Federal Rules of Civil Procedure with regard to electronic discovery. Topic: U.S. Federal rules with regard to the production of electronic documents in civil proceedings. The discovery rules allow a party in civil proceedings to demand that the opposing party produce all relevant documentation (to be defined by the requesting party) in its possession, so as to allow the parties and the court to correctly assess the matter. Through the e-discovery amendment, which entered into force on 1 December 2006, such information may now include electronic information. This implies that any party being brought before a U.S. court in civil proceedings can be asked to produce such documents, which includes finalised reports, working documents, internal memos and e-mails with regard to a specific subject, which may or may not be specifically delineated. Any party whose activities imply a risk of being involved in such proceedings must therefore take adequate precautions for the management of such information, including its secure storage. Specifically: the party must be capable of initiating a 'litigation hold', a technical/organisational measure which must ensure that no relevant information can be modified any longer in any way.
Storage policies must be responsible: while the deletion of specific information of course remains allowed when this is part of general information management policies ('routine, good-faith operation of the information system', Rule 37(f)), the wilful destruction of potentially relevant information can be punished by extremely high fines (US$1.6 billion in one specific case). Thus, in practice, any business that risks civil litigation before U.S. courts must implement adequate information management policies, and must implement the necessary measures to initiate a litigation hold.
Privacy
California Consumer Privacy Act (CCPA)
California Privacy Rights Act (CPRA)
Gramm–Leach–Bliley Act (GLBA)
USA PATRIOT Act, Title III
Health Insurance Portability and Accountability Act (HIPAA). From an RM/RA perspective, the Act is particularly known for its provisions with regard to Administrative Simplification (Title II of HIPAA). This title required the U.S. Department of Health and Human Services (HHS) to draft specific rule sets, each of which would provide specific standards which would improve the efficiency of the health care system and prevent abuse. As a result, the HHS has adopted five principal rules: the Privacy Rule, the Transactions and Code Sets Rule, the Unique Identifiers Rule, the Enforcement Rule, and the Security Rule. The latter, published in the Federal Register on 20 February 2003 (see: http://www.cms.hhs.gov/SecurityStandard/Downloads/securityfinalrule.pdf ), is specifically relevant, as it specifies a series of administrative, technical, and physical security procedures to assure the confidentiality of electronic protected health information. These aspects have been further outlined in a set of Security Standards on Administrative, Physical, Organisational and Technical Safeguards, all of which have been published, along with a guidance document on the basics of HIPAA risk management and risk assessment. Health care service providers in Europe or other countries will generally not be affected by HIPAA obligations if they are not active on the U.S. market. However, since their data processing activities are subject to similar obligations under general European law (including the Privacy Directive), and since the underlying trends of modernisation and evolution towards electronic health files are the same, the HHS safeguards can be useful as an initial yardstick for measuring RM/RA strategies put in place by European health care service providers, specifically with regard to the processing of electronic health information. HIPAA security standards include the following:
Administrative safeguards:
Security Management Process
Assigned Security Responsibility
Workforce Security
Information Access Management
Security Awareness and Training
Security Incident Procedures
Contingency Plan
Evaluation
Business Associate Contracts and Other Arrangements
Physical safeguards:
Facility Access Controls
Workstation Use
Workstation Security
Device and Media Controls
Technical safeguards:
Access Control
Audit Controls
Integrity
Person or Entity Authentication
Transmission Security
Organisational requirements:
Business Associate Contracts & Other Arrangements
Requirements for Group Health Plans
International Safe Harbor Privacy Principles, issued by the US Department of Commerce on July 21, 2000. Topic: Export of personal data from a data controller who is subject to E.U. privacy regulations to a U.S.-based destination; before personal data may be exported from an entity subject to E.U.
privacy regulations to a destination subject to U.S. law, the European entity must ensure that the receiving entity provides adequate safeguards to protect such data against a number of mishaps. One way of complying with this obligation is to require the receiving entity to join the Safe Harbor, by requiring that the entity self-certify its compliance with the so-called Safe Harbor Principles. If this road is chosen, the data controller exporting the data must verify that the U.S. destination is indeed on the Safe Harbor list (see safe harbor list).
The United States Department of Homeland Security also utilizes the Privacy Impact Assessment (PIA) as a decision-making tool to identify and mitigate risks of privacy violations.
Sarbanes–Oxley Act
FISMA
SEC Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure
As legislation evolves, there has been an increased focus on requiring 'reasonable security' for information management. The CCPA requires "manufacturers of connected devices to equip the device with reasonable security." New York's SHIELD Act requires that organizations that manage the information of NY residents "develop, implement and maintain reasonable safeguards to protect the security, confidentiality and integrity of the private information including, but not limited to, disposal of data." This concept will influence how businesses manage their risk management plans as compliance requirements develop.
Standards organizations and standards
International standard bodies:
International Organization for Standardization – ISO
Payment Card Industry Security Standards Council
Information Security Forum
The Open Group
United States standard bodies:
National Institute of Standards and Technology – NIST
Federal Information Processing Standards – FIPS, issued by NIST for the Federal Government and its agencies
UK standard bodies:
British Standards Institution – BSI
Short description of standards
The list is chiefly based on:
ISO
ISO/IEC 13335-1:2004 – Information technology—Security techniques—Management of information and communications technology security—Part 1: Concepts and models for information and communications technology security management http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39066. Standard containing generally accepted descriptions of concepts and models for information and communications technology security management. The standard is a commonly used code of practice, and serves as a resource for the implementation of security management practices and as a yardstick for auditing such practices. (See also http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-faq.pdf )
ISO/IEC TR 15443-1:2005 – Information technology—Security techniques—A framework for IT security assurance reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39733 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted).
Topic: Security assurance – the Technical Report (TR) contains generally accepted guidelines which can be used to determine an appropriate assurance method for assessing a security service, product or environmental factor (a deliverable). Following this TR, it can be determined which level of security assurance a deliverable is intended to meet, and whether this threshold is actually met by the deliverable.
ISO/IEC 15816:2002 – Information technology—Security techniques—Security information objects for access control reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=29139 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Security management – Access control. The standard allows security professionals to rely on a specific set of syntactic definitions and explanations with regard to SIOs, thus avoiding duplication or divergence in other standardisation efforts.
ISO/IEC TR 15947:2002 – Information technology—Security techniques—IT intrusion detection framework reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=29580 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Security management – Intrusion detection in IT systems. The standard allows security professionals to rely on a specific set of concepts and methodologies for describing and assessing security risks with regard to potential intrusions in IT systems. It does not contain any RM/RA obligations as such, but it is rather a tool for facilitating RM/RA activities in the affected field.
ISO/IEC 15408-1/2/3:2005 – Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model (15408-1), Part 2: Security functional requirements (15408-2), Part 3: Security assurance requirements (15408-3) reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm Topic: Standard containing a common set of requirements for the security functions of IT products and systems and for assurance measures applied to them during a security evaluation. Scope: Publicly available ISO standard, which can be voluntarily implemented. The text is a resource for the evaluation of the security of IT products and systems, and can thus be used as a tool for RM/RA. The standard is commonly used as a resource for the evaluation of the security of IT products and systems, including (if not specifically) for procurement decisions with regard to such products. The standard can thus be used as an RM/RA tool to determine the security of an IT product or system during its design, manufacturing or marketing, or before procuring it.
ISO/IEC 17799:2005 – Information technology—Security techniques—Code of practice for information security management. reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=39612&ICS1=35&ICS2=40&ICS3= (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing generally accepted guidelines and general principles for initiating, implementing, maintaining, and improving information security management in an organization, including business continuity management.
The standard is a commonly used code of practice, and serves as a resource for the implementation of information security management practices and as a yardstick for auditing such practices. (See also ISO/IEC 17799)
ISO/IEC TR 15446:2004 – Information technology—Security techniques—Guide for the production of Protection Profiles and Security Targets. reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm Topic: Technical Report (TR) containing guidelines for the construction of Protection Profiles (PPs) and Security Targets (STs) that are intended to be compliant with ISO/IEC 15408 (the "Common Criteria"). The standard is predominantly used as a tool for security professionals to develop PPs and STs, but can also be used to assess the validity of the same (by using the TR as a yardstick to determine if its standards have been obeyed). Thus, it is a (non-binding) normative tool for the creation and assessment of RM/RA practices.
ISO/IEC 18028:2006 – Information technology—Security techniques—IT network security reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=40008 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Five-part standard (ISO/IEC 18028-1 to 18028-5) containing generally accepted guidelines on the security aspects of the management, operation and use of information technology networks. The standard is considered an extension of the guidelines provided in ISO/IEC 13335 and ISO/IEC 17799, focusing specifically on network security risks. The standard is a commonly used code of practice, and serves as a resource for the implementation of security management practices and as a yardstick for auditing such practices.
ISO/IEC 27001:2005 – Information technology—Security techniques—Information security management systems—Requirements reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=42103 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing generally accepted guidelines for the implementation of an Information Security Management System within any given organisation. Scope: Not publicly available ISO standard, which can be voluntarily implemented. While not legally binding, the text contains direct guidelines for the creation of sound information security practices. The standard is a very commonly used code of practice, and serves as a resource for the implementation of information security management systems and as a yardstick for auditing such systems and the surrounding practices. Its application in practice is often combined with related standards, such as BS 7799-3:2006, which provides additional guidance to support the requirements given in ISO/IEC 27001:2005.
ISO/IEC 27001:2013, the updated standard for information security management systems.
ISO/IEC TR 18044:2004 – Information technology—Security techniques—Information security incident management reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=35396 (Note: this is a reference to the ISO page where the standard can be acquired.
However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Technical Report (TR) containing generally accepted guidelines and general principles for information security incident management in an organization. Scope: Not publicly available ISO TR, which can be voluntarily used. While not legally binding, the text contains direct guidelines for incident management. The standard is a high-level resource introducing basic concepts and considerations in the field of incident response. As such, it is mostly useful as a catalyst to awareness-raising initiatives in this regard.
ISO/IEC 18045:2005 – Information technology—Security techniques—Methodology for IT security evaluation reference: http://isotc.iso.org/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm Topic: Standard containing auditing guidelines for assessment of compliance with ISO/IEC 15408 (Information technology—Security techniques—Evaluation criteria for IT security). Scope: Publicly available ISO standard, to be followed when evaluating compliance with ISO/IEC 15408 (Information technology—Security techniques—Evaluation criteria for IT security). The standard is a 'companion document', which is thus primarily of use to security professionals involved in evaluating compliance with ISO/IEC 15408 (Information technology—Security techniques—Evaluation criteria for IT security). Since it describes minimum actions to be performed by such auditors, compliance with ISO/IEC 15408 is impossible if ISO/IEC 18045 has been disregarded.
ISO/TR 13569:2005 – Financial services—Information security guidelines reference: http://www.iso.org/iso/en/CatalogueDetailPage.CatalogueDetail?CSNUMBER=37245 (Note: this is a reference to the ISO page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing guidelines for the implementation and assessment of information security policies in financial services institutions. The standard is a commonly referenced guideline, and serves as a resource for the implementation of information security management programmes in institutions of the financial sector, and as a yardstick for auditing such programmes. (See also http://csrc.nist.gov/publications/secpubs/otherpubs/reviso-faq.pdf )
ISO/IEC 21827:2008 – Information technology—Security techniques—Systems Security Engineering—Capability Maturity Model (SSE-CMM): ISO/IEC 21827:2008 specifies the Systems Security Engineering – Capability Maturity Model (SSE-CMM), which describes the essential characteristics of an organization's security engineering process that must exist to ensure good security engineering. ISO/IEC 21827:2008 does not prescribe a particular process or sequence, but captures practices generally observed in industry. The model is a standard metric for security engineering practices.
BSI
BS 25999-1:2006 – Business continuity management Part 1: Code of practice. Note: this is only part one of BS 25999, which was published in November 2006. Part two (which should contain more specific criteria with a view to possible accreditation) has yet to appear. reference: http://www.bsi-global.com/en/Shop/Publication-Detail/?pid=000000000030157563 . Topic: Standard containing a business continuity code of practice.
The standard is intended as a code of practice for business continuity management, and will be extended by a second part that should permit accreditation for adherence to the standard. Given its relative newness, the potential impact of the standard is difficult to assess, although it could be very influential for RM/RA practices, given the general lack of universally applicable standards in this regard and the increasing attention to business continuity and contingency planning in regulatory initiatives. Application of this standard can be complemented by other norms, in particular PAS 77:2006 – IT Service Continuity Management Code of Practice.
BS 7799-3:2006 – Information security management systems—Guidelines for information security risk management reference: (Note: this is a reference to the BSI page where the standard can be acquired. However, the standard is not free of charge, and its provisions are not publicly available. For this reason, specific provisions cannot be quoted). Topic: Standard containing general guidelines for information security risk management. Scope: Not publicly available BSI standard, which can be voluntarily implemented. While not legally binding, the text contains direct guidelines for the creation of sound information security practices. The standard is mostly intended as a guiding complementary document to the application of the aforementioned ISO 27001:2005, and is therefore typically applied in conjunction with this standard in risk assessment practices.
Information Security Forum
Standard of Good Practice for Information Security
See also
Asset (computer security)
Availability
BS 7799
BS 25999
Committee on National Security Systems
Common Criteria
Confidentiality
Cyber-security regulation
Data Protection Directive
Electrical disruptions caused by squirrels
Exploit (computer security)
Factor analysis of information risk
Federal Information Security Management Act of 2002
Gramm–Leach–Bliley Act
Health Insurance Portability and Accountability Act
Information security
Information Security Forum
Information technology
Integrity
International Safe Harbor Privacy Principles
ISACA
ISO
ISO/IEC 27000-series
ISO/IEC 27001:2013
ISO/IEC 27002
IT risk management
Long-term support
National Information Assurance Training and Education Center
National Institute of Standards and Technology
National security
OWASP
Patriot Act, Title III
Privacy
Risk
Risk factor (computing)
Risk IT
Sarbanes–Oxley Act
Threat (computer)
Vulnerability
Gordon–Loeb model for cyber security investments
References
External links
Internet2 Information Security Guide: Effective Practices and Solutions for Higher Education
Risk Management – Principles and Inventories for Risk Management / Risk Assessment methods and tools, publication date: Jun 01, 2006; conducted by the Technical Department of ENISA, Section Risk Management
Clusif Club de la Sécurité de l'Information Français
800-30 NIST Risk Management Guide
800-39 NIST DRAFT Managing Risk from Information Systems: An Organizational Perspective
FIPS Publication 199, Standards for Security Categorization of Federal Information and Information Systems
FIPS Publication 200, Minimum Security Requirements for Federal Information and
Information Systems
800-37 NIST Guide for Applying the Risk Management Framework to Federal Information Systems: A Security Life Cycle Approach
FISMApedia is a collection of documents and discussions focused on USA Federal IT security
Duty of Care Risk Analysis Standard (DoCRA)
Data security Risk analysis Security compliance Operational risk
IT risk
[ "Engineering" ]
8,662
[ "Cybersecurity engineering", "Data security" ]
15,903,307
https://en.wikipedia.org/wiki/OGLE-TR-123
OGLE-TR-123 is a binary stellar system containing one of the smallest main-sequence stars whose radius has been measured. It was discovered when the Optical Gravitational Lensing Experiment (OGLE) survey observed the smaller star eclipsing the larger primary. The orbital period is approximately 1.80 days.
OGLE-TR-123B
The smaller star, OGLE-TR-123B, is estimated to have a radius of around 0.13 solar radii and a mass of around 0.085 solar masses, or approximately 90 times Jupiter's. OGLE-TR-123B's mass is close to the lowest possible mass, estimated to be around 0.07 or 0.08 solar masses, for a hydrogen-fusing star.
See also
OGLE-TR-122
EBLM J0555-57
References
Carina (constellation) Eclipsing binaries V816 Carinae
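As a quick cross-check of the quoted figures, a minimal Python sketch using standard approximate conversion constants; the constants and the rounding are the only assumptions here, and nothing below is a new measurement:

```python
# Rough cross-check of the mass and radius figures quoted above.
M_SUN_PER_M_JUP = 1047.6   # one solar mass in Jupiter masses (approx.)
R_SUN_KM = 695_700         # solar radius in kilometres (approx.)

mass_msun = 0.085          # OGLE-TR-123B, in solar masses
radius_rsun = 0.13         # OGLE-TR-123B, in solar radii

print(f"Mass:   ~{mass_msun * M_SUN_PER_M_JUP:.0f} Jupiter masses")  # ~89, i.e. "around 90"
print(f"Radius: ~{radius_rsun * R_SUN_KM:,.0f} km")                  # ~90,000 km
```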
OGLE-TR-123
[ "Astronomy" ]
219
[ "Carina (constellation)", "Constellations" ]
15,903,638
https://en.wikipedia.org/wiki/Ellison%20Medical%20Foundation
The Ellison Medical Foundation, a 501(c)(3) Private Nonoperating Foundation, was founded in 1997 and is located in Bethesda, Maryland. The foundation supported research in the following discipline areas: biomedical research on aging, age-related diseases and disabilities. Its major philanthropic support came from Oracle CEO Larry Ellison. As of 2007, the foundation owned 1.3 million shares of Oracle Corporation. The foundation is classified as NTEE T99—Other Philanthropy, Voluntarism, and Grantmaking Foundations N.E.C. Since 1998 the Ellison Medical Foundation has spent hundreds of millions of dollars funding fundamental research on the biology of ageing. Forty million dollars per year was given to 25 Senior Scholars and 25 New Scholars. The Senior Scholars received $1 million each and the New Scholars received $400,000 each for four years of research. In late summer/early fall 2013 the Ellison Medical Foundation announced that it "will no longer be accepting new applications for New and Senior Scholar awards in Aging, Neuroscience, or other biomedical research topics. All currently funded awards will continue ... but no new applications or letters of intent will be accepted for these or other grant programs." References External links Guidestar Official Nonprofit Report Old Ellison Medical Foundation website 1997 establishments in Maryland Organizations established in 1997 Biomedical research foundations Medical and health foundations in the United States
Ellison Medical Foundation
[ "Engineering", "Biology" ]
275
[ "Biological engineering", "Bioengineering stubs", "Biotechnology stubs", "Medical technology stubs", "Biotechnology organizations", "Biomedical research foundations", "Medical technology" ]
15,903,779
https://en.wikipedia.org/wiki/Iron%20and%20Steel%20Corporation%20of%20Great%20Britain
The Iron and Steel Corporation of Great Britain was a nationalised industry, set up in 1949 by Clement Attlee's Labour government. The Iron & Steel Act 1949 took effect on 15 February 1951, the Corporation becoming the sole shareholder of 80 of the principal iron and steel companies (reduced from the 107 proposed in the first draft of the Bill). The model differed from previous nationalisations in that it was the share capital of the companies that was acquired, not their undertakings. The reason was that companies in the iron & steel industry had wide-ranging ancillary activities, from which the core business of iron & steel making could not easily be extracted. Firms whose chief activity consisted in the manufacture of motor vehicles were specifically excluded from the scheme. Companies not qualifying for acquisition were to require a licence if producing more than 5,000 tons of ore or other products. Some 2,000 iron & steel companies remained in business outside the nationalised sector. Nationalisation of steel production was strongly resisted by the Conservative opposition. On returning to power, they instructed the Corporation to make no change to the structure of the industry and made plans instead for its return to the private sector. The Corporation was superseded by the Iron and Steel Holding and Realisation Agency. The Agency succeeded in selling all of the nationalised companies with the exception of the largest, Richard Thomas and Baldwins. This remained in public ownership and was absorbed into the British Steel Corporation when the industry was re-nationalised by the Labour government of Harold Wilson in 1967. See also Iron and Steel Board Sources Whitaker's Almanack (various dates) External links Time article on denationalisation Defunct companies of the United Kingdom Steel companies of the United Kingdom Former nationalised industries of the United Kingdom 1967 disestablishments in the United Kingdom 1949 establishments in the United Kingdom Metallurgical industry of the United Kingdom
Iron and Steel Corporation of Great Britain
[ "Chemistry" ]
381
[ "Metallurgical industry of the United Kingdom", "Metallurgical industry by country" ]
15,906,222
https://en.wikipedia.org/wiki/Cyril%20Callister
Cyril Percy Callister (16 February 1893 – 5 October 1949) was an Australian chemist and food technologist who developed the Vegemite yeast spread. As well as for Vegemite, he is known for his contributions to processed cheese.
Early life
Callister was born on 16 February 1893, in Chute, Victoria near Ballarat, son of Rosetta Anne (née Dixon) and William Hugh Callister, a teacher and postmaster. The second son of seven children, he attended the Ballarat School of Mines and Grenville College, and later won a scholarship to the University of Melbourne. He gained a Bachelor of Science degree in 1914 and a Master of Science degree in 1917. In early 1915, Callister was employed by food manufacturer Lewis & Whitty, but later that year he enlisted in the Australian Imperial Force. After 53 days, however, he was withdrawn from active service on the order of the Minister for Defence and assigned to the Munitions Branch, making explosives in Britain, owing to his knowledge of chemistry. He worked on munitions in England, Wales, and then in Scotland, at HM Factory Gretna, where he served as a shift chemist. Whilst at Gretna he was elected an Associate of the Institute of Chemistry in 1918. Following the end of World War I, he met and married a Scottish woman, Katherine Hope Mundell, returned to Australia, and resumed employment with Lewis & Whitty in 1919.
The invention of Vegemite
In the early 1920s, Callister was employed by Fred Walker and given the task of developing a yeast extract, as imports of Marmite from the United Kingdom had been disrupted in the aftermath of World War I. He experimented on spent brewer's yeast and independently developed what came to be called Vegemite, first sold by Fred Walker & Co in 1923. Working from the details of a James L. Kraft patent, Callister was also successful in producing processed cheese. The Walker company negotiated a deal for the rights to manufacture the product, and in 1926 the Kraft Walker Cheese Co. was established. Callister was appointed chief scientist and production superintendent of the new company.
Children
Between 1919 and 1927 the Callisters had three children: Ian, Bill and Jean, who were "the original Vegemite kids". Ian died during World War II.
Later life
Callister received his doctorate from the University of Melbourne in 1931, with his submission largely based on his work in developing Vegemite. He was a prominent member of the Royal Australian Chemical Institute, helping it to obtain a Royal Charter in 1931. Callister died at his home in Wellington Street, Kew, Melbourne in 1949, following a heart attack, and is buried at Box Hill Cemetery. He had a history of heart attacks, the first occurring in late 1939. His estate was valued for probate at £45,917.
Legacy
A biography of Callister, The Man Who Invented Vegemite, written by his grandson Jamie Callister, was published in 2012. Callister is a great-uncle of Kent Callister, a professional snowboarder who has competed at the Winter Olympics for Australia. The Cyril Callister Foundation, established in 2019, commemorates his life and work. It runs a museum in Beaufort, Victoria.
References
1893 births 1949 deaths Australian chemists 20th-century Australian inventors University of Melbourne alumni Federation University Australia alumni Burials at Box Hill Cemetery Food chemists People from Victoria (state) 20th-century Australian scientists
Cyril Callister
[ "Chemistry" ]
696
[ "Food chemists", "Food chemistry" ]
15,906,926
https://en.wikipedia.org/wiki/Ratner%27s%20theorems
In mathematics, Ratner's theorems are a group of major theorems in ergodic theory concerning unipotent flows on homogeneous spaces, proved by Marina Ratner around 1990. The theorems grew out of Ratner's earlier work on horocycle flows. The study of the dynamics of unipotent flows played a decisive role in the proof of the Oppenheim conjecture by Grigory Margulis. Ratner's theorems have guided key advances in the understanding of the dynamics of unipotent flows. Their later generalizations provide ways to both sharpen the results and extend the theory to the setting of arbitrary semisimple algebraic groups over a local field.
Short description
The Ratner orbit closure theorem asserts that the closures of orbits of unipotent flows on the quotient of a Lie group by a lattice are nice, geometric subsets. The Ratner equidistribution theorem further asserts that each such orbit is equidistributed in its closure. The Ratner measure classification theorem is the weaker statement that every ergodic invariant probability measure is homogeneous, or algebraic: this turns out to be an important step towards proving the more general equidistribution property. There is no universal agreement on the names of these theorems: they are variously known as the "measure rigidity theorem", the "theorem on invariant measures" and its "topological version", and so on.
The formal statement of such a result is as follows. Let $G$ be a Lie group, $\Gamma$ a lattice in $G$, and $u_t$ a one-parameter subgroup of $G$ consisting of unipotent elements, with the associated flow $\phi_t$ on $\Gamma \backslash G$. Then the closure of every orbit of $\phi_t$ is homogeneous. This means that there exists a connected, closed subgroup $H$ of $G$ such that the image of the orbit $xH$ for the action of $H$ by right translations on $G$ under the canonical projection to $\Gamma \backslash G$ is closed, has a finite $H$-invariant measure, and contains the closure of the $\phi_t$-orbit of $x$ as a dense subset.
Example: The simplest case to which the statement above applies is $G = \mathrm{SL}_2(\mathbb{R})$. In this case it takes the following more explicit form; let $\Gamma$ be a lattice in $\mathrm{SL}_2(\mathbb{R})$ and $F \subseteq \Gamma \backslash G$ a closed subset which is invariant under all maps $x \mapsto x u_t$, where
$$u_t = \begin{pmatrix} 1 & t \\ 0 & 1 \end{pmatrix}, \qquad t \in \mathbb{R}.$$
Then either there exists an $x \in F$ whose orbit $xU$ is closed (where $U = \{u_t : t \in \mathbb{R}\}$), or $F = \Gamma \backslash G$.
In geometric terms $\Gamma$ is a cofinite Fuchsian group, so the quotient $M = \Gamma \backslash \mathbb{H}$ of the hyperbolic plane $\mathbb{H}$ by $\Gamma$ is a hyperbolic orbifold of finite volume. The theorem above implies that every horocycle of $\mathbb{H}$ has an image in $M$ which is either a closed curve (a horocycle around a cusp of $M$) or dense in $M$.
See also
Danzer set
Equidistribution theorem
References
Expositions
Selected original articles
Ergodic theory Lie groups Theorems in dynamical systems
Ratner's theorems
[ "Mathematics" ]
557
[ "Theorems in dynamical systems", "Lie groups", "Mathematical structures", "Mathematical theorems", "Ergodic theory", "Algebraic structures", "Mathematical problems", "Dynamical systems" ]
15,907,019
https://en.wikipedia.org/wiki/Ecological%20design
Ecological design or ecodesign is an approach to designing products and services that gives special consideration to the environmental impacts of a product over its entire lifecycle. Sim Van der Ryn and Stuart Cowan define it as "any form of design that minimizes environmentally destructive impacts by integrating itself with living processes." Ecological design can also be defined as the process of integrating environmental considerations into design and development with the aim of reducing the environmental impacts of products through their life cycle. The idea helps connect scattered efforts to address environmental issues in architecture, agriculture, engineering, and ecological restoration, among others. The term was first used by John Button in 1998. Ecological design was originally conceptualized as the "adding in" of environmental factors to the design process, but it later turned to the details of eco-design practice, such as the product system, the individual product, or the industry as a whole. With the inclusion of life cycle modeling techniques, ecological design became related to the new interdisciplinary subject of industrial ecology.
Overview
As the whole product's life cycle should be regarded in an integrated perspective, representatives from advanced product design, production, marketing, purchasing, and project management should work together on the Ecodesign of a further developed or new product. Together, they have the best chance to predict the holistic effects of changes to the product and their environmental impact. Consideration of ecological design during product development is a proactive approach to eliminating environmental pollution due to product waste. An eco-design product may have a cradle-to-cradle life cycle, ensuring zero waste is created in the whole process. By mimicking life cycles in nature, eco-design can serve as a concept to achieve a truly circular economy. Environmental aspects which ought to be analysed for every stage of the life cycle are:
Consumption of resources (energy, materials, water or land area)
Emissions to air, water, and the ground (our Earth) as relevant for the environment and human health, including noise emissions
Waste (hazardous waste and other waste defined in environmental legislation)
Waste is only an intermediate step, and the final emissions to the environment (e.g. methane and leaching from landfills) are inventoried. All consumables, materials and parts used in the life cycle phases are accounted for, as are all indirect environmental aspects linked to their production. The environmental aspects of the phases of the life cycle are evaluated according to their environmental impact on the basis of a number of parameters, such as extent of environmental impact, potential for improvement, or potential for change. According to this ranking the recommended changes are carried out and reviewed after a certain time. As the impact of design and the design process has evolved, designers have become more aware of their responsibilities. The design of a product unrelated to its sociological, psychological, or ecological surroundings is no longer possible or acceptable in modern society. With respect to these concepts, online platforms dealing only in Ecodesign products are emerging, with the additional sustainable purpose of eliminating all unnecessary distribution steps between the designer and the final customer.
Another area of ecological design is designing with urban ecology in mind: similar to conservation biology, designers take the natural world into account when designing landscapes, buildings, or anything that impacts interactions with wildlife. One such example in architecture is green roofs and green offices, spaces where nature can interact with the man-made environment and where humans benefit from these design technologies. Another area is landscape architecture, with the creation of natural gardens and natural landscapes that allow natural wildlife to thrive in urban centres.
Ecological design issues and the role of designers
The rise and conceptualization of ecological design
Since the Industrial Revolution, design fields have been criticized for employing unsustainable practices. The architect-designer Victor Papanek (1923–1998) suggested that industrial design has done harm by creating new species of permanent garbage and by choosing materials and processes that pollute the air. Papanek states that the designer-planner shares responsibility for nearly all of our products and tools, and hence, nearly all of our environmental mistakes. To address these issues, R. Buckminster Fuller (1895–1983) demonstrated how design could play a central role in identifying and addressing major world problems. Fuller was concerned with the Earth's finite energy and natural resources, and with how to integrate machine tools into efficient systems of industrial production. He promoted the principle of "ephemeralization", a term he coined himself, meaning to do "more with less" and increase technological efficiency. This concept is key in ecological design that works towards sustainability. In 1986, the design theorist Clive Dilnot argued that design must once again become a means of ordering the world rather than merely of shaping products. Despite rising ecological awareness in the 20th century, unsustainable design practices continued. The 1992 conference "The Agenda 21: The Earth Summit Strategy to Save Our Planet" put forward the proposition that the world is on a path of energy production and consumption that cannot be sustained. The report drew attention to individuals and groups around the world who have a set of principles to develop strategies for change among many aspects of society, including design. More broadly, the conference emphasized that designers must address human issues. These problems included six items: quality of life, efficient use of natural resources, protecting the global commons, managing human settlements, the use of chemicals and the management of human industrial waste, and fostering sustainable economic growth on a global scale. Though Western society has only recently espoused ecological design principles, indigenous peoples have long coexisted with the environment. Scholars have discussed the importance of acknowledging and learning from Indigenous peoples and cultures to move towards a more sustainable society. Indigenous knowledge is valuable in ecological design as well as in other ecological realms such as restoration ecology.
Sustainable development issues
These concepts of design tie into the concept of sustainable development. The three pillars addressed in sustainable development are: ecological integrity, social equity, and economic security.
Gould and Lewis argue in their book Green Gentrification that urban redevelopment projects have neglected the social equity pillar, resulting in development that focuses on profit and deepens social inequality. One result of this is green or environmental gentrification. This process is often the result of good intentions to clean up an area and provide green amenities, but without setting protections in place for existing residents to ensure they are not forced out by increased property values and influxes of new, wealthier residents. Unhoused persons are one particularly vulnerable population affected by environmental gentrification. Government environmental planning agendas related to green spaces may lead to the displacement and exclusion of unhoused individuals, under a guise of pro-environmental ethics. One example of this type of design is hostile architecture in urban parks. Park benches designed with metal arched bars to prevent a person from lying on the bench restrict who benefits from green space and ecological design.
Life Cycle Analysis
Life Cycle Analysis (LCA) is a tool used to understand how a product impacts the environment at each stage of its life cycle, from raw inputs to the end of the product's life. Life Cycle Cost (LCC) is an economic metric that "identifies the minimum cost for each life cycle stage which would be presented in the aspects of material, procedures, usage, end-of-life and transportation." LCA and LCC can be used to identify the aspects of a product that are particularly environmentally damaging and to reduce those impacts. For example, LCA might reveal that the fabrication stage of a product's life cycle is particularly harmful for the environment and that switching to a different material can drive emissions down. However, switching material may increase environmental effects later in a product's lifetime; LCA takes into account the whole life cycle of a product and can alert designers to the many impacts of a product, which is why it is important. Some of the factors that LCA takes into account are the costs and emissions of:
Transportation
Materials
Production
Usage
End-of-life
End-of-life, or disposal, is an important aspect of LCA, as waste management is a global issue, with trash found everywhere around the world, from the ocean to within organisms. A framework was developed to assess the sustainability of waste sites, titled EcoSWaD (Ecological Sustainability of Waste Disposal Sites). The model focuses on five major concerns: (1) location suitability, (2) operational sustainability, (3) environmental sustainability, (4) socioeconomic sustainability, and (5) site capacity sustainability. This framework was developed in 2021; as such, most established waste disposal sites do not take these factors into consideration. Waste facilities such as dumps and incinerators are disproportionately placed in areas with low education and income levels, burdening these vulnerable populations with pollution and exposure to hazardous materials. For example, policy documents in the United States, such as the Cerrell Report, have encouraged these types of classist and racist processes for siting incinerators. Internationally, there has been a global 'race to the bottom' in which polluting industries move to areas with fewer restrictions and regulations on emissions, usually in developing countries, disproportionately exposing vulnerable and impoverished populations to environmental threats.
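To make the stage-by-stage accounting concrete, here is a minimal sketch of a life-cycle inventory summation in Python. The stage names follow the factors listed above; all emission figures are invented for illustration and are not real data:

```python
# A toy life-cycle inventory: sum per-stage CO2-equivalent emissions for a
# hypothetical product, then rank the stages to find the best redesign target.
stages = {
    "materials": 12.0,       # kg CO2e, made-up figures for illustration only
    "production": 8.5,
    "transportation": 2.1,
    "usage": 30.0,
    "end_of_life": 1.4,
}

total = sum(stages.values())
print(f"Total footprint: {total:.1f} kg CO2e")

# Rank stages by contribution; the dominant stage is where a design change
# pays off most (e.g. if "usage" dominates, energy efficiency in use matters
# more than switching materials).
for stage, co2e in sorted(stages.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{stage:>14}: {co2e:5.1f} kg CO2e ({co2e / total:6.1%})")
```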
Considerations like these make LCA and sustainable waste siting important on a global scale.
Urban Ecological Design
Related to ecological urbanism, urban ecological design integrates aesthetic, social, and ecological concerns into an urban design framework that seeks to increase ecological functioning, sustainably generate and consume resources, and create resilient built environments and the infrastructure to maintain them. Urban ecological design is inherently interdisciplinary: it integrates multiple academic and professional fields including environmental studies, sociology, justice studies, urban ecology, landscape ecology, urban planning, architecture, and landscape architecture. Urban ecological design aims to solve issues related to multiple large-scale trends, including the growth of urban areas, climate change, and biodiversity loss. Urban ecological design has been described as a "process model", contrasted with a normative approach that outlines principles of design. Urban ecological design blends a multitude of frameworks and approaches to create solutions to these issues by improving urban resilience, promoting sustainable use and management of resources, and integrating ecological processes into the urban landscape.
Applications in design
EcoMaterials, such as local raw materials, are less costly and reduce the environmental costs of shipping, fuel consumption, and CO2 emissions generated from transportation. Certified green building materials, such as wood from sustainably managed forest plantations, with accreditations from organisations such as the Forest Stewardship Council (FSC) or the Pan-European Forest Certification Council (PEFCC), can be used. Several other types of components and materials can be used in sustainable objects and buildings. Recyclable and recycled materials are commonly used in construction, but it is important that they don't generate any waste during manufacture or after their life cycle ends. Reclaimed materials such as timber from a construction site or junkyard can be given a second life, by reusing them as support beams in a new building or as furniture. Stones from an excavation can be used in a retaining wall. The reuse of these items means that less energy is consumed in making new products and a new natural aesthetic quality is achieved.
Architecture
Off-grid homes use only clean electric power. They are completely separated and disconnected from the conventional electricity grid and receive their power supply by harnessing active or passive energy systems. Off-grid homes are also not served by other publicly or privately managed utilities, such as water and gas, in addition to electricity.
Art
Increased application of ecological design has gone along with the rise of environmental art. Recycling has been used in art since the early part of the 20th century, when the cubist artists Pablo Picasso (1881–1973) and Georges Braque (1882–1963) created collages from newsprint, packaging and other found materials. Contemporary artists have also embraced sustainability, both in materials and in artistic content. One modern artist who embraces the reuse of materials is Bob Johnson, creator of River Cubes. Johnson promotes "artful trash management" by creating sculptures from garbage and scraps found in rivers. Garbage is collected, then compressed into a cube that represents the place and people it came from.
Clothing
Some clothing companies are using several ecological design methods to change the future of the textile industry into a more environmentally friendly one. Some approaches include recycling used clothing to minimize the use of raw resources, using biodegradable textile materials to reduce the lasting impact on the environment, and using plant dyes instead of poisonous chemicals to improve the appearance and impact of fabric.
Decorating
The same principle can be used inside the home, where found objects are now displayed with pride and collecting certain objects and materials to furnish a home is now admired rather than looked down upon. Take for example the electric wire reel reused as a center table. There is a huge demand in Western countries to decorate homes in a "green" style. A lot of effort is placed into recycled product design and the creation of a natural look. This ideal is also a part of developing countries, although their use of recycled and natural products is often based in necessity and in wanting to get maximum use out of materials. The focus on self-regulation and personal lifestyle changes (including decorating as well as clothing and other consumer choices) has shifted questions of social responsibility away from government and corporations and onto the individual. Biophilic design is a concept used within the building industry to increase occupant connectivity to the natural environment through the use of direct nature, indirect nature, and space and place conditions.
Active systems
These systems use the principle of harnessing the power generated from renewable and inexhaustible sources of energy, for example solar, wind, thermal, biomass, geothermal, and hydropower energy.
Solar power is a widely known and used renewable energy source. An increase in technology has allowed solar power to be used in a wide variety of applications. Two types of solar panels are in common use, converting sunlight into either heat or electricity. Thermal solar panels reduce or eliminate the consumption of gas and diesel, and reduce CO2 emissions. Photovoltaic panels convert solar radiation into an electric current which can power any appliance. This is a more complex technology and is generally more expensive to manufacture than thermal panels.
Biomass is the energy source created from organic materials generated through a forced or spontaneous biological process.
Geothermal energy is obtained by harnessing heat from the ground. This type of energy can be used to heat and cool homes. It eliminates dependence on external energy and generates minimal waste. It is also hidden from view as it is placed underground, making it more aesthetically pleasing and easier to incorporate in a design.
Wind turbines are a useful application for areas without immediate conventional power sources, e.g., rural areas with schools and hospitals that need more power. Wind turbines can provide up to 30% of the energy consumed by a household, but they are subject to regulations and technical specifications, such as the maximum distance at which the facility may be located from the place of consumption and the power required and permitted for each property.
Water recycling systems such as rainwater tanks harvest water for multiple purposes. Reusing grey water generated by households is a useful way of not wasting drinking water.
Hydropower, also known as water power, is the use of falling or fast-running water to produce electricity or to power machines.
Hydropower is an attractive alternative to fossil fuels as it does not directly produce carbon dioxide or other atmospheric pollutants and it provides a relatively consistent source of power.
Passive systems
Buildings that integrate passive energy systems (bioclimatic buildings) are heated using non-mechanical methods, thereby optimizing natural resources.
Passive daylighting involves the positioning and location of a building to allow for and make use of sunlight throughout the whole year. Building materials such as concrete store the sun's rays as thermal mass, which can generate enough heat for a room.
Green roofs are roofs that are partially or completely covered with plants or other vegetation. Green roofs are passive systems in that they create insulation that helps regulate the building's temperature. They also retain water, providing a water recycling system, and can provide soundproofing.
History
1971 Ian McHarg, in his book Design with Nature, popularized a system of analyzing the layers of a site in order to compile a complete understanding of the qualitative attributes of a place. McHarg gave every qualitative aspect of the site a layer, such as the history, hydrology, topography, vegetation, etc. This system became the foundation of today's Geographic Information Systems (GIS), a ubiquitous tool used in the practice of ecological landscape design.
1978 Permaculture. Bill Mollison and David Holmgren coin the phrase for a system of designing regenerative human ecosystems (founded on the work of Fukuoka, Yeomans, Smith, and others).
1994 David Orr, in his book Earth in Mind: On Education, Environment, and the Human Prospect, compiled a series of essays on "ecological design intelligence" and its power to create healthy, durable, resilient, just, and prosperous communities.
1994 Canadian biologists John Todd and Nancy Jack Todd, in their book From Eco-Cities to Living Machines, describe the precepts of ecological design.
2000 Ecosa Institute begins offering an Ecological Design Certificate, teaching designers to design with nature.
2004 Fritjof Capra, in his book The Hidden Connections: A Science for Sustainable Living, wrote a primer on the science of living systems and considered the application of new thinking by life scientists to our understanding of social organization.
2004 K. Ausubel compiled personal stories of the world's most innovative ecological designers in Nature's Operating Instructions.
Ecodesign research
Ecodesign research focuses primarily on barriers to implementation, ecodesign tools and methods, and the intersection of ecodesign with other research disciplines. Several review articles provide an overview of the evolution and current state of ecodesign research.
See also
Biophilic design
Circles of Sustainability
Ecological restoration
Eco-innovation
Energy-efficient landscape design
Environmental design
Environmental graphic design
European Ecodesign Directive 2009/125/EC
Green building
Green roof
Permaculture
Principles of Intelligent Urbanism
Sustainability
Sustainable development
Sustainable design
Sustainable landscape architecture
Terreform ONE
Notes and references
Bibliography
Lacoste, R., Robiolle, M., Vital, X., (2011), "Ecodesign of electronic devices", DUNOD, France
McAloone, T. C. & Bey, N. (2009), Environmental improvement through product development – a guide, Danish EPA, Copenhagen, Denmark, 46 pages
Lindahl, M.: Designer's utilization of DfE methods. Proceedings of the 1st International Workshop on "Sustainable Consumption", 2003.
Tokyo, Japan, The Society of Non-Traditional Technology (SNTT) and Research Center for Life Cycle Assessment (AIST).
Wimmer W., Züst R., Lee K.-M. (2004): Ecodesign Implementation – A Systematic Guidance on Integrating Environmental Considerations into Product Development, Dordrecht, Springer
Charter, M. / Tischner, U. (2001): Sustainable Solutions. Developing Products and Services for the Future. Sheffield: Greenleaf
ISO TC 207/WG3 ISO TR 14062
The Journal of Design History: Environmental conscious design and inverse manufacturing, 2005. Eco Design 2005, 4th International Symposium
The Design Journal: Vol 13, Number 1, March 2010 – "Design is the problem: The future of design must be sustainable", N. Shedroff
"Eco Deco", S. Walton
"Small ECO Houses – Living Green in Style", C. Paredes Benitez, A. Sanchez Vidiella
Further reading
From Bauhaus to Ecohouse: A History of Ecological Design. By Peder Anker, published by Louisiana State University Press, 2010.
Ecological Design. By Sim Van der Ryn, Stuart Cowan, published by Island Press, 2007 (2nd ed.; 1st ed., 1996).
Ignorance and Surprise: Science, Society, and Ecological Design. By Matthias Gross, published by MIT Press, 2010.
External links
Sustainable Design & Development Resource Guide
The European Commission's website on Ecodesign activities and related legislation, including minimum requirements for energy-using products
The European Commission's Directory of LCA and Ecodesign services, tools and databases
The European Commission's ELCD core database with Ecoprofiles (free of charge)
Environmental Effect Analysis (EEA) – Principles and structure
EIME, the ecodesign methodology of the electrical and electronic industry
4E, IEA Implementing Agreement on Efficient Electrical End-Use Equipment
Environmental design Environmental social science Environmentalism Environmental terminology Sustainable design Landscape architecture
Ecological design
[ "Engineering", "Environmental_science" ]
4,184
[ "Environmental design", "Landscape architecture", "Design", "Environmental social science", "Architecture" ]
15,909,264
https://en.wikipedia.org/wiki/EF-G
EF-G (elongation factor G, historically known as translocase) is a prokaryotic elongation factor involved in mRNA translation. As a GTPase, EF-G catalyzes the movement (translocation) of transfer RNA (tRNA) and messenger RNA (mRNA) through the ribosome. Structure Encoded by the fusA gene on the str operon, EF-G is made up of 704 amino acids that form 5 domains, labeled Domain I through Domain V. Domain I may be referred to as the G-domain or as Domain I(G), since it binds to and hydrolyzes guanosine triphosphate (GTP). Domain I also helps EF-G bind to the ribosome, and contains the N-terminus of the polypeptide chain. Domain IV is important for translocation, as it undergoes a significant conformational change and enters the A site on the 30S ribosomal subunit, pushing the mRNA and tRNA molecules from the A site to the P site. The five domains may also be separated into two super-domains. Super-domain I consists of Domains I and II, and super-domain II consists of Domains III–V. Throughout translocation, super-domain I remains relatively unchanged, as it is responsible for binding tightly to the ribosome. However, super-domain II undergoes a large rotational motion from the pre-translocational (PRE) state to the post-translocational (POST) state. Super-domain I is similar to the corresponding sections of EF-Tu. Super-domain II in the POST state mimics the tRNA molecule of the EF-Tu • GTP • aa-tRNA ternary complex. EF-G on the ribosome Binding to L7/L12 L7/L12 is the only multicopy protein on the large ribosomal subunit of the bacterial ribosome; it binds to certain GTPases, like Initiation Factor 2, Elongation factor-Tu, Release Factor 3, and EF-G. Specifically, the C-terminal domain of L7/L12 binds to EF-G and is necessary for GTP hydrolysis. Interaction with the GTPase Associated Center The GTPase Associated Center (GAC) is a region on the large ribosomal subunit that consists of two smaller regions of 23S ribosomal RNA called the L11 stalk and the sarcin-ricin loop (SRL). As an evolutionarily highly conserved rRNA loop, the SRL is critical in helping GTPases bind to the ribosome, but is not essential for GTP hydrolysis. There is some evidence that a phosphate oxygen in the A2662 residue of the SRL may help hydrolyze GTP. Function in protein elongation EF-G catalyzes the translocation of the tRNA and mRNA down the ribosome at the end of each round of polypeptide elongation. In this process, the peptidyl transferase center (PTC) has catalyzed the formation of a peptide bond between amino acids, moving the polypeptide chain from the P site tRNA to the A site tRNA. The 50S and 30S ribosomal subunits are now allowed to rotate relative to each other by approximately 7°. The subunit rotation is coupled with the movement of the 3' ends of both tRNA molecules on the large subunit from the A and P sites to the P and E sites, respectively, while the anticodon loops remain unshifted. This rotated ribosomal intermediate, in which the first tRNA occupies a hybrid A/P position and the second tRNA occupies a hybrid P/E position, is a substrate for EF-G–GTP. As a GTPase, EF-G binds to the rotated ribosome near the A site in its GTP-bound state, and hydrolyzes GTP, releasing GDP and inorganic phosphate: GTP + H2O → GDP + Pi The hydrolysis of GTP allows for a large conformational change within EF-G, forcing the A/P tRNA to fully occupy the P site, the P/E tRNA to fully occupy the E site (and exit the ribosome complex), and the mRNA to shift three nucleotides down relative to the ribosome. 
The GDP-bound EF-G molecule then dissociates from the complex, leaving another free A site where the elongation cycle can start again. Function in protein termination Protein elongation continues until a stop codon appears on the mRNA. A Class I release factor (RF1 or RF2) binds to the stop codon, which induces hydrolysis of the tRNA-peptide bond in the P site, allowing the newly formed protein to exit the ribosome. The nascent peptide continues to fold, leaving behind the 70S ribosome bound to the mRNA, with a deacylated tRNA in the P site and the Class I release factor in the A site. In a GTP-dependent manner, the subsequent recycling is catalyzed by a Class II release factor named RF3/prfC, together with ribosome recycling factor (RRF), Initiation Factor 3 (IF3) and EF-G. The protein RF3 releases the Class I release factor so that RF3 itself may occupy the ribosomal A site. EF-G hydrolyzes GTP and undergoes a large conformational change to push RF3 down the ribosome, which occurs alongside tRNA dissociation and promotes the ribosomal subunit rotation. This motion actively splits the B2a/B2b bridge, which connects the 30S and the 50S subunits, so that the ribosome can split. IF3 then isolates the 30S subunit to prevent re-association of the large and small subunits. Clinical significance EF-G in pathogenic bacteria can be inhibited by antibiotics that prevent EF-G from binding to the ribosome, carrying out translocation, or dissociating from the ribosome. For example, the antibiotic thiostrepton prevents EF-G from binding stably to the ribosome, while the antibiotics dityromycin and GE82832 inhibit the activity of EF-G by preventing the translocation of the A site tRNA; dityromycin and GE82832 do not, however, affect the binding of EF-G to the ribosome. The antibiotic fusidic acid is known to inhibit Staphylococcus aureus and other bacteria by binding to EF-G after one translocation event on the ribosome, preventing EF-G from dissociating. However, some bacterial strains have developed resistance to fusidic acid due to point mutations in the fusA gene, which prevent fusidic acid from binding to EF-G. Evolution EF-G has a complex evolutionary history, with numerous paralogous versions of the factor present in bacteria, suggesting subfunctionalization of different EF-G variants. Elongation factors exist in all three domains of life with similar function on the ribosome. The eukaryotic and archaeal homologs of EF-G are eEF2 and aEF2, respectively. In bacteria (and some archaea), the fusA gene that encodes EF-G is found within the conserved str operon with the sequence 5′-rpsL-rpsG-fusA-tufA-3′. However, two other major forms of EF-G exist in some species of Spirochaetota, Planctomycetota, and δ-Proteobacteria (a group since split and renamed Bdellovibrionota, Myxococcota, and Thermodesulfobacteriota), which form the spd group of bacteria with the elongation factors spdEFG1 and spdEFG2. From spdEFG1 and spdEFG2 evolved the mitochondrial elongation factors mtEFG1 (GFM1) and mtEFG2 (GFM2), respectively. The two roles of EF-G in elongation and termination of protein translation are split amongst the mitochondrial elongation factors, with mtEFG1 responsible for translocation and mtEFG2 responsible for termination and ribosomal recycling with mitochondrial RRF. 
See also Prokaryotic elongation factors EF-Ts (elongation factor thermo stable) EF-Tu (elongation factor thermo unstable) EF-P (elongation factor P) eEF2 (eukaryotic elongation factor 2) Protein translation GTPase References Further reading External links Protein biosynthesis
EF-G
[ "Chemistry" ]
1,868
[ "Protein biosynthesis", "Gene expression", "Biosynthesis" ]
2,964,336
https://en.wikipedia.org/wiki/Silyl%20ether
Silyl ethers are a group of chemical compounds which contain a silicon atom covalently bonded to an alkoxy group. The general structure is R1R2R3Si−O−R4 where R4 is an alkyl group or an aryl group. Silyl ethers are usually used as protecting groups for alcohols in organic synthesis. Since R1R2R3 can be combinations of differing groups which can be varied in order to provide a number of silyl ethers, this group of chemical compounds provides a wide spectrum of selectivity for protecting group chemistry. Common silyl ethers are: trimethylsilyl (TMS), tert-butyldiphenylsilyl (TBDPS), tert-butyldimethylsilyl (TBS/TBDMS) and triisopropylsilyl (TIPS). They are particularly useful because they can be installed and removed very selectively under mild conditions. Common silyl ethers Formation Silylation of alcohols commonly requires a silyl chloride and an amine base. One reliable and rapid procedure is the Corey protocol, in which the alcohol is reacted with a silyl chloride and imidazole at high concentration in DMF. If DMF is replaced by dichloromethane, the reaction is somewhat slower, but the purification of the compound is simplified. A common hindered base for use with silyl triflates is 2,6-lutidine. Primary alcohols can be protected in less than one hour, while some hindered alcohols may require days of reaction time. When using a silyl chloride, no special precautions are usually required, except for the exclusion of large amounts of water. An excess of silyl chloride can be employed but is not necessary. If excess reagent is used, the product will require flash chromatography to remove excess silanol and siloxane. Sometimes a silyl triflate and a hindered amine base are used. Silyl triflates are more reactive than their corresponding chlorides, so they can be used to install silyl groups onto hindered positions; silyl triflates also convert ketones to silyl enol ethers. Silyl triflates are water sensitive, and their reactions must be run under an inert atmosphere. Workup involves the addition of a mildly acidic aqueous solution, such as saturated ammonium chloride, which quenches remaining silyl reagent and protonates amine bases prior to their removal from the reaction mixture. Following extraction, the product can be purified by flash chromatography. Ketones also react with hydrosilanes in the presence of metal catalysts. Removal Reaction with acids or fluorides such as tetra-n-butylammonium fluoride removes the silyl group when protection is no longer needed. Larger substituents increase resistance to hydrolysis, but also make introduction of the silyl group more difficult. In acidic media, the relative resistance is: TMS (1) < TES (64) < TBS (20,000) < TIPS (700,000) < TBDPS (5,000,000) In basic media, the relative resistance is: TMS (1) < TES (10–100) < TBS ≈ TBDPS (20,000) < TIPS (100,000) Monoprotection of symmetrical diols It is possible to monosilylate a symmetrical diol, although this can occasionally be problematic. For example, a selective monosilylation of such a diol with TBSCl in THF has been reported (scheme not reproduced here). However, this reaction has proved hard to repeat. If the reaction were controlled solely by thermodynamics, and if the dianion were of similar reactivity to the monoanion, then a corresponding statistical mixture of 1:2:1 disilylated:monosilylated:unsilylated diol would be expected. However, the reaction in THF is made selective by two factors: 1. kinetic deprotonation of the first anion, and 2. the insolubility of the monoanion. 
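As a worked version of the statistical argument just quoted (an assumption-laden back-of-the-envelope check, not part of the original article): if the two hydroxyls reacted independently and with equal probability, and one equivalent of silylating agent were consumed so that each hydroxyl is silylated with probability p = 1/2, the binomial distribution gives

$$
P(\text{di}) = p^2 = \tfrac{1}{4}, \qquad P(\text{mono}) = 2p(1-p) = \tfrac{1}{2}, \qquad P(\text{unreacted}) = (1-p)^2 = \tfrac{1}{4},
$$

i.e. the 1:2:1 disilylated:monosilylated:unsilylated mixture noted above, which would cap the yield of the mono-protected diol at 50% unless kinetic or solubility effects intervene.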
At the initial addition of TBSCl, there is only a minor amount of monoanion in solution, with the rest being in suspension. This small portion reacts and shifts the equilibrium of the monoanion to draw more into solution, thereby allowing high yields of the mono-TBS compound to be obtained. Superior results in some cases may be obtained with butyllithium as the base. A third method uses a mixture of DMF and DIPEA. Alternatively, an excess (4 eq) of the diol can be used, forcing the reaction toward monoprotection. Selective deprotection Selective deprotection of silyl groups is possible in many instances, as, for example, in the synthesis of taxol. Silyl ethers are mainly differentiated on the basis of sterics or electronics. In general, acidic deprotections remove less hindered silyl groups faster, with the steric bulk on silicon being more significant than the steric bulk on oxygen. Fluoride-based deprotections remove electron-poor silyl groups faster than electron-rich silyl groups. There is some evidence that some silyl deprotections proceed via hypervalent silicon species. The selective deprotection of silyl ethers has been extensively reviewed. Although selective deprotections have been achieved under many different conditions, some procedures, outlined below, are more reliable. A selective deprotection will likely be successful if there is a substantial difference in sterics (e.g., primary TBS vs. secondary TBS, or primary TES vs. primary TBS) or electronics (e.g., primary TBDPS vs. primary TBS). Unfortunately, some optimization is inevitably required, and it is often necessary to run deprotections partway and recycle material. Some common acidic conditions 100 mol% 10-CSA (camphorsulfonic acid) in MeOH, room temperature; a "blast" of acid that deprotects primary TBS groups within ten minutes. 10 mol% 10-CSA, 1:1 MeOH:DCM, −20 or 0 °C; deprotects a primary TBS group within two hours at 0 °C; if CSA is replaced by PPTS, the rate is approximately ten times slower; with p-TsOH, approximately ten times faster; the solvent mixture is crucial. 4:1:1 v/v/v AcOH:THF:water, room temp.; this is very slow, but can be very selective. Some common fluoride-based conditions HF-pyridine, 10:1 THF:pyridine, 0 °C; an excellent deprotection; removes primary TBS groups within eight hours; reactions using HF must be run in plastic containers. TBAF, THF or 1:1 TBAF/AcOH, THF; TBDPS and TBS groups can be deprotected in the presence of one another under different conditions. Application References External links Example deprotection TBS silyl ether Example deprotection TBDMS silyl ether Silicon-based Protection of the Hydroxyl Group silyl ether formation in carbohydrates Functional groups Protecting groups
Silyl ether
[ "Chemistry" ]
1,539
[ "Protecting groups", "Functional groups", "Reagents for organic chemistry" ]
2,964,515
https://en.wikipedia.org/wiki/Barracuda%20Networks
Barracuda Networks, Inc. provides security, networking and storage products based on network appliances and cloud services. History Barracuda Networks was founded in 2003 by CEO Dean Drako, Michael Perone, and Zach Levow; the company introduced the Barracuda Spam and Virus Firewall in the same year. In 2007, the company moved its headquarters to Campbell, California, and opened an office in Ann Arbor, Michigan. In January 2006, it closed its first outside investment of $40 million from Sequoia Capital and Francisco Partners. On January 29, 2008, Barracuda Networks was sued by Trend Micro over their use of the open source anti-virus software Clam AntiVirus, which Trend Micro claimed to be in violation of their patent on 'anti-virus detection on an SMTP or FTP gateway'. In addition to providing samples of prior art in an effort to render Trend Micro's patent invalid, in July 2008 Barracuda launched a countersuit against Trend Micro claiming Trend Micro violated several antivirus patents Barracuda Networks had acquired from IBM. In December 2008, the company launched the BRBL (Barracuda Reputation Block List), its proprietary and dynamic list of known spam servers, for free and public use in blocking spam at the gateway. Soon after the BRBL opened, many IP addresses were blacklisted without apparent reason and without any technical explanation. In 2012, the company became a co-sponsor of the Garmin-Barracuda UCI ProTour cycling team. Barracuda Networks expanded its research and development facility in Ann Arbor to a 12,500-square-foot office building on Depot Street in 2008. By 2012, the Michigan-based research division had grown to about 180 employees, again outgrowing its space. In June 2012, Barracuda signed a lease to occupy the 45,000-square-foot office complex previously used as the Borders headquarters on Maynard St in downtown Ann Arbor. In July 2012, Dean Drako, Barracuda Networks's co-founder, president and CEO since it was founded in 2003, resigned his operating position, remaining on the company's board of directors. At the time of Drako's departure, the company stated it had achieved profitability, a nearly continuous 30 percent annual growth rate since inception, 150,000 customers worldwide, nearly 1,000 employees, 10 offices, and business in 80 countries. The company created the office of the CEO as it started a CEO search. In November 2012, long-time EMC executive William "BJ" Jenkins joined the company as president and CEO. Jenkins had worked at EMC since 1998 and previously served as president of EMC's Backup and Recovery Systems (BRS) Division. In November 2013, Barracuda Networks went public on the New York Stock Exchange under the ticker symbol CUDA. In November 2015, Barracuda added a new Next Generation Firewall to its firewall family. In November 2017, private equity firm Thoma Bravo announced they were taking Barracuda Networks private in a $1.6 billion buyout. In February 2018 Thoma Bravo announced that it had completed the acquisition. In April 2022, KKR announced the signing of an agreement to purchase Barracuda Networks from Thoma Bravo for about $4 billion, which was completed in August of that year. Acquisitions In September 2007, Barracuda Networks acquired NetContinuum, a company providing application controllers to secure and manage enterprise web applications. In November 2008, Barracuda Networks expanded into cloud-based backup services by acquiring BitLeap. 
In November 2008, Barracuda Networks acquired 3SP, allowing the company to introduce Secure Sockets Layer (SSL) Virtual Private Network (VPN) products to allow secure remote access to network file shares, internal Web sites and remote control capabilities for desktops and servers. In January 2009, Barracuda Networks acquired Yosemite Technologies to add software agents for incremental backups of applications such as Microsoft Exchange Server and SQL Server, and Windows system states. In September 2009, Barracuda Networks acquired a controlling interest in phion AG, an Austria-based public company delivering enterprise-class firewalls. A month later, in October, the company acquired Purewire Inc, a software as a service (SaaS) company offering cloud-based web filtering and security. In April 2013, Barracuda Networks acquired SignNow. In 2014, Barracuda Networks purchased C2C Systems UK. In October 2015, Barracuda Networks acquired Intronis. In November 2017, Barracuda purchased Sonian. In the same month, Barracuda announced that it was being acquired by private equity investment firm Thoma Bravo, LLC. In January 2018, Barracuda acquired PhishLine. In July 2021, Barracuda Networks acquired SKOUT Cybersecurity. In April 2022, KKR purchased Barracuda Networks from Thoma Bravo for a reported $4 billion. In 2024, the company acquired Fyde. Controversies Security issue In January 2013, a backdoor was discovered: "A variety of firewall, VPN, and spam filtering gear sold by Barracuda Networks contains undocumented backdoor accounts that allow people to remotely log in and access sensitive information, researchers with an Austrian security firm have warned." The backdoor was secured shortly after the announcement. IP reputation and Emailreg.org On April 13, 2009, Emailreg.org published a notice clarifying that it is a whitelist of domains and had no impact on Barracuda block lists. On April 10, 2010, a blog entry appeared alleging that Barracuda Networks' spam blocking deliberately targets non-spamming IP addresses and tries to get them to sign up for the email whitelisting service Emailreg.org. In 2019, Emailreg.org announced that it was no longer accepting new customers but would continue services for existing customers until further notice. Emailreg.org discontinued services shortly thereafter and is no longer in operation. As of May 2, 2020, the same warning appears for some IP addresses. See also Comparison of file hosting services Comparison of file synchronization software Comparison of online backup services References External links American companies established in 2003 Companies based in Campbell, California Networking companies of the United States Networking hardware companies Networking software companies Computer security software companies Telecommunications equipment vendors Computer security companies Anti-spam Content-control software Backup software Deep packet inspection Companies formerly listed on the New York Stock Exchange One-click hosting Private equity portfolio companies Kohlberg Kravis Roberts companies Software companies established in 2003 2003 establishments in California 2013 initial public offerings 2018 mergers and acquisitions 2022 mergers and acquisitions Computer companies of the United States Computer hardware companies Software companies of the United States Thoma Bravo companies
Barracuda Networks
[ "Technology" ]
1,380
[ "Computer hardware companies", "Computers" ]
2,964,547
https://en.wikipedia.org/wiki/POL%20Oxygen
POL Oxygen was an international design, art and architecture quarterly magazine, published between 2001 and 2008. It contained extended profiles examining the lives, ideas and work of people working internationally. It was edited in Sydney, art directed in London, printed in Hong Kong and distributed worldwide. References External links POL Oxygen magazine website (archived in 2008) Art direction by Marcus Piper Defunct architecture magazines Defunct magazines published in Australia Magazines established in 2001 Magazines disestablished in 2008 Magazines published in Sydney Quarterly magazines published in Australia Defunct design magazines Defunct visual arts magazines published in Australia
POL Oxygen
[ "Engineering" ]
114
[ "Architecture stubs", "Architecture" ]
2,964,610
https://en.wikipedia.org/wiki/Thioketone
In organic chemistry, thioketones (also known as thiones or thiocarbonyls) are organosulfur compounds related to conventional ketones in which the oxygen has been replaced by a sulfur. Instead of a C=O group, thioketones have a C=S group, which is reflected by the prefix "thio-" in the name of the functional group. Thus the simplest thioketone is thioacetone, the sulfur analog of acetone. Unhindered alkylthioketones typically form polymers or rings. Structure and bonding The C=S bond length of thiobenzophenone is 1.63 Å, which is comparable to 1.64 Å, the C=S bond length of thioformaldehyde, measured in the gas phase. Due to steric interactions, the phenyl groups are not coplanar and the dihedral angle SC-CC is 36°. Unhindered dialkylthiones polymerize or oligomerize, but thiocamphor is a well-characterized red solid. Consistent with the double bond rule, most alkyl thioketones are unstable with respect to dimerization. The energy difference between the p orbitals of sulfur and carbon is greater than that between oxygen and carbon in ketones. The relative difference in energy and diffuseness of the atomic orbitals of sulfur compared with carbon results in poor overlap of the atomic orbitals, and the energy gap between the HOMO and LUMO is thus reduced for C=S molecular orbitals relative to C=O. The striking blue appearance of thiobenzophenone is attributed to π→π* transitions upon the absorption of red light. Thiocamphor is red. Preparative methods Thiones are usually prepared from ketones using reagents that exchange S and O atoms. A common reagent is phosphorus pentasulfide, although that reagent also tends to induce side-reactions. Lawesson's reagent is related. Another method uses hydrogen chloride combined with hydrogen sulfide. Bis(trimethylsilyl)sulfide has also been employed. Thiones can also be prepared from geminal dichlorides, but geminal dichlorides are typically prepared from ketones as well. There are no general methods to oxidize methylene groups to thioketones, reflecting sulfur's electronegativity, which is comparable to that of carbon. Thiobenzophenone [(C6H5)2CS] is a stable deep blue compound that dissolves readily in organic solvents. It photooxidizes in air to benzophenone and sulfur. Since its discovery, a variety of related thiones have been prepared. Thiosulfines Thiosulfines, also called thiocarbonyl S-sulfides, are compounds with the formula R2CSS. Although superficially appearing to be cumulenes, with the linkage R2C=S=S, they are more usefully classified as 1,3-dipoles and indeed participate in 1,3-dipolar cycloadditions. Thiosulfines are proposed to exist in equilibrium with dithiiranes, three-membered CS2 rings. Thiosulfines are often invoked as intermediates in mechanistic discussions of the chemistry of thiones. For example, thiobenzophenone decomposes upon oxidation to the 1,2,4-trithiolane (Ph2C)2S3, which arises via the cycloaddition of Ph2CSS to its parent Ph2CS. See also Thial, for a description of thioaldehydes. Thioketene Sulfene Selone (often called selenone) References Further reading Functional groups
Thioketone
[ "Chemistry" ]
801
[ "Functional groups", "Thioketones" ]
2,964,718
https://en.wikipedia.org/wiki/Electronic%20remittance%20advice
An electronic remittance advice (ERA) is an electronic data interchange (EDI) version of a medical insurance payment explanation. It provides details about providers' claims payment and, if a claim is denied, the required explanation. The explanations include denial codes and their descriptions, which appear at the bottom of the ERA. ERAs are provided by health plans to providers. In the United States the industry-standard ERA is HIPAA X12N 835 (HIPAA = Health Insurance Portability and Accountability Act; X12N = the insurance subcommittee of ASC X12; 835 is the specific transaction-set number for the ERA), which is sent from insurer to provider either directly or via a bank. See also Remittance advice References Citations Data interchange standards Health insurance
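To make the segment structure concrete, here is a minimal sketch of extracting claim-status and adjustment (denial) codes from an 835-style payload. The payload is a toy fragment invented for this example, and the specific status and reason codes are illustrative; real 835 files carry ISA/GS envelopes, many more segments, and configurable delimiters, and should be processed with a compliant EDI parser. The CLP (claim payment information) and CAS (claim adjustment) segment names do follow the X12 835 convention.

```python
# Toy 835-style fragment: segments end with "~", elements are "*"-separated.
# The codes shown are illustrative, not taken from a real remittance.
sample_835 = (
    "CLP*PATIENT001*1*150.00*100.00~"   # claim id, status, charged, paid
    "CAS*CO*45*50.00~"                  # adjustment: group CO, reason 45, $50.00
    "CLP*PATIENT002*4*200.00*0.00~"
    "CAS*PR*204*200.00~"
)

for segment in sample_835.split("~"):
    elements = segment.split("*")
    if elements[0] == "CLP":
        claim_id, status = elements[1], elements[2]
        print(f"Claim {claim_id}: status code {status}")
    elif elements[0] == "CAS":
        group, reason, amount = elements[1], elements[2], elements[3]
        print(f"  adjustment {group}-{reason}: ${amount}")
```

In a real workflow the group/reason pairs (e.g., CO-45) would be looked up against the published claim adjustment reason code lists to produce the human-readable denial descriptions the article mentions.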
Electronic remittance advice
[ "Technology" ]
158
[ "Computer standards", "Data interchange standards" ]
2,964,733
https://en.wikipedia.org/wiki/Trass
Trass is the local name of a volcanic tuff occurring in the Eifel, where it is worked for hydraulic mortar. It is a grey or cream-coloured fragmental rock, largely composed of pumiceous dust, and may be regarded as a trachytic tuff. It closely resembles the Italian pozzolana and is used for similar purposes. Mixed with lime and sand, or with Portland cement, it is extensively employed for hydraulic work, especially in the Netherlands, while the compact varieties have been used as a building material and as a fire-stone in ovens. Trass was formerly worked extensively in the Brohl valley and is now obtained from the valley of the Nette, near Andernach. See also Pozzolan Pozzolana Pozzolanic reaction Pumice References Natural materials Geology of Germany Masonry
Trass
[ "Physics", "Engineering" ]
171
[ "Matter", "Natural materials", "Construction", "Materials", "Masonry" ]
2,964,744
https://en.wikipedia.org/wiki/Slype
The term slype is a variant of slip in the sense of a narrow passage; in architecture, it is the name for the covered passage usually found in monasteries or cathedrals between the transept and the chapter house, as at St Andrews, Winchester, Gloucester, Exeter, Durham, St. Albans, Sherborne and Christ Church Cathedral, Oxford. At St. Mary's Abbey, Dublin, it is, with the chapter house, one of only two remaining rooms. References Rooms Church architecture
Slype
[ "Engineering" ]
100
[ "Rooms", "Architecture" ]
2,964,944
https://en.wikipedia.org/wiki/Jean%20Fr%C3%A9chet
Jean M.J. Fréchet (born August 1944) is a French-American chemist and professor emeritus at the University of California, Berkeley. He is best known for his work on polymers, including polymer-supported chemistry, chemically amplified photoresists, dendrimers, macroporous separation media, and polymers for therapeutics. Ranked among the top 10 chemists in 2021, he has authored nearly 900 scientific papers and 200 patents, including 96 US patents. His research areas include organic synthesis and polymer chemistry applied to nanoscience and nanotechnology, with emphasis on the design, fundamental understanding, synthesis, and applications of functional macromolecules. Fréchet is an elected fellow of the American Association for the Advancement of Science, the American Chemical Society, and the American Academy of Arts and Sciences, and an elected member of the US National Academy of Sciences, the US National Academy of Engineering, and the Academy of Europe (Academia Europaea). Education and academic career Fréchet received his first university degree at the Institut de Chimie et Physique Industrielles (now CPE) in Lyon, France, before coming to the US for studies in organic and polymer chemistry under Conrad Schuerch at the State University of New York College of Environmental Science and Forestry and at Syracuse University (Ph.D. 1971). He was on the chemistry faculty at the University of Ottawa in Canada from 1973 to 1987, when he became the IBM Professor of Polymer Chemistry at Cornell University. In 1997 Fréchet joined the chemistry faculty at the University of California, Berkeley, and was named the Henry Rapoport Chair of Organic Chemistry in 2003 and Professor of Chemical and Biological Engineering in 2005. From 2010 to 2019 he served as the first Vice President for Research, then Senior Vice-President for Research, Innovation, and Economic Development, at the King Abdullah University of Science and Technology. Research Fréchet's early work focused on polymer-supported chemistry, with the first approach to the solid-phase synthesis of oligosaccharides and pioneering work on polymeric reagents and polymer protecting groups. In 1979, working with C.G. Willson at IBM during a sabbatical leave, he invented chemically amplified photoresists for micro- and nanofabrication. This widely used, patented technology, which enables the extreme miniaturization of microelectronic devices, is now ubiquitous in the fabrication of the very powerful computing and communication equipment in worldwide use. The addition of photogenerated bases led to additional advances in chemically amplified resists. In 1990, working with Craig Hawker at Cornell, he developed the convergent synthesis of dendrimers as well as approaches to hyperbranched polymers. In 1992, working with F. Svec at Cornell, he reported the first preparation of macroporous polymer monoliths, which are now used in a variety of chemical separations. Later work at Berkeley saw the development of polymers and dendrimers as carriers for targeted therapeutics and successful approaches to new organic materials for transistors and solar cells. 
Honors and awards 2020 National Academy of Engineering Charles Stark Draper Prize in Engineering 2019 King Faisal International Prize in Chemistry 2013 Japan Prize for the "Development of chemically amplified resist polymer materials for innovative semiconductor manufacturing process" 2010 Grand Prix de la Maison de la Chimie (Paris) 2010 Erasmus Medal of the Academia Europaea 2010 University of California Department of Chemistry Teaching Award 2010 Fellow of the American Chemical Society 2009 Elected to the Academy of Europe (Academia Europaea) 2009 Nagoya Gold Medal 2009 Arun Guthikonda Memorial Award, Columbia University. 2009 Society of Polymer Science of Japan, International Award for the "Development of functional polymers from fundamentals to applications". 2009 Carothers Award for "Outstanding contributions and advances in industrial applications of chemistry". 2009 Herman Mark Award, American Chemical Society. 2008 D.Sc. (Honoris Causa), University of Liverpool, UK 2007 Dickson Prize in Science, Carnegie Mellon University 2007 Arthur C. Cope Award (American Chemical Society award for outstanding achievement in the field of Organic Chemistry) 2006 Macro Group UK Medal (joint Royal Society for Chemistry and Society of Chemical Industry) for Outstanding Achievement in the field of Macromolecular Chemistry 2005 Esselen Award for Chemistry in the Service of the Public 2005 Chemical Communications 40th Anniversary Award 2004 Docteur de L'Université, Université d'Ottawa, Canada 2003 Henry Rapoport Chair of Organic Chemistry, University of California, Berkeley 2002 Docteur (Honoris Causa), Université de Lyon I, France 2001 American Chemical Society, Salute to Excellence Award 2001 American Chemical Society, A.C. Cope Scholar Award 2000 Elected Fellow of the American Academy of Arts and Sciences 2000 Elected Member of the US National Academy of Engineering 2000 Elected Fellow of the PMSE Division of the American Chemical Society 2000 Elected Fellow of the American Association for the Advancement of Science 2000 Elected Member of the US National Academy of Sciences 2000 American Chemical Society, ACS Award in Polymer Chemistry 2000 Myron L. Bender & Muriel S. Bender Distinguished Summer Lectureship 1999 Society of Imaging Science and Technology, Kosar Memorial Award 1996 American Chemical Society, ACS Award in Applied Polymer Science 1995 Peter J. Debye Chair of Chemistry, Cornell University, NY 1994 American Chemical Society, Cooperative Research Award in Polymer Science 1987 IBM Professor of Polymer Chemistry 1986 American Chemical Society, Doolittle Award in Polymer Materials Science & Engineering 1986 Polymer Society of Japan Lecture Award 1983 IUPAC Canadian National Committee Award References External links Berkeley College of Chemistry, Emeriti Faculty, Jean M. J. Fréchet 1944 births Living people Cornell University faculty Members of the United States National Academy of Engineering Members of the United States National Academy of Sciences Fellows of the American Chemical Society 20th-century French chemists 21st-century French chemists French emigrants to the United States Syracuse University alumni State University of New York College of Environmental Science and Forestry alumni UC Berkeley College of Chemistry faculty Solid state chemists Organic chemists Members of Academia Europaea
Jean Fréchet
[ "Chemistry" ]
1,234
[ "Solid state chemists", "Organic chemists", "French organic chemists", "American organic chemists" ]
2,964,983
https://en.wikipedia.org/wiki/Induction%20furnace
An induction furnace is an electrical furnace in which the heat is applied by induction heating of metal. Induction furnace capacities range from less than one kilogram to one hundred tons, and are used to melt iron and steel, copper, aluminum, and precious metals. The advantage of the induction furnace is a clean, energy-efficient and well-controlled melting process, compared to most other means of metal melting. Most modern foundries use this type of furnace, and many iron foundries are replacing cupola furnaces with induction furnaces to melt cast iron, as the former emit much dust and other pollutants. Induction furnaces do not require an arc, as in an electric arc furnace, or combustion, as in a blast furnace. As a result, the temperature of the charge (the material placed into the furnace for heating, not to be confused with electric charge) is no higher than required to melt it; this can prevent the loss of valuable alloying elements. The one major drawback to induction furnace usage in a foundry is the lack of refining capacity: charge materials must be free of oxides and be of a known composition, and some alloying elements may be lost due to oxidation, so they must be re-added to the melt. Types In the coreless type, metal is placed in a crucible surrounded by a water-cooled alternating current solenoid coil. A channel-type induction furnace has a loop of molten metal, which forms a single-turn secondary winding through an iron core. Operation An induction furnace consists of a nonconductive crucible holding the charge of metal to be melted, surrounded by a coil of copper wire. A powerful alternating current flows through the wire. The coil creates a rapidly reversing magnetic field that penetrates the metal. The magnetic field induces eddy currents (circular electric currents) inside the metal, by electromagnetic induction. The eddy currents, flowing through the electrical resistance of the bulk metal, heat it by Joule heating. In ferromagnetic materials like iron, the material may also be heated by magnetic hysteresis, the reversal of the molecular magnetic dipoles in the metal. Once melted, the eddy currents cause vigorous stirring of the melt, assuring good mixing. An advantage of induction heating is that the heat is generated within the furnace's charge itself rather than applied by a burning fuel or other external heat source, which can be important in applications where contamination is an issue. Operating frequencies range from utility frequency (50 or 60 Hz) to 400 kHz or higher, usually depending on the material being melted, the capacity (volume) of the furnace and the melting speed required. Generally, the smaller the volume of the melt, the higher the frequency used; this is due to the skin depth, which is a measure of the distance an alternating current can penetrate beneath the surface of a conductor. For the same conductivity, higher frequencies give a shallower skin depth, that is, less penetration into the melt. Lower frequencies can generate stirring or turbulence in the metal. A preheated, one-ton furnace melting iron can melt cold charge to tapping readiness within an hour. Power supplies range from 10 kW to 42 MW, with melt sizes of 20 kg to 65 tons of metal, respectively. An operating induction furnace usually emits a hum or whine (due to fluctuating magnetic forces and magnetostriction), the pitch of which can be used by operators to identify whether the furnace is operating correctly and at what power level. 
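The frequency dependence just described follows from the standard skin-depth relation (not given in the article). For a melt of resistivity \(\rho\) and relative permeability \(\mu_r\), driven at frequency \(f\):

$$
\delta = \sqrt{\frac{2\rho}{\omega\mu}} = \sqrt{\frac{\rho}{\pi f \mu_0 \mu_r}}
$$

so the induced currents are confined to a surface layer whose thickness falls as \(1/\sqrt{f}\). As an illustrative estimate only: for molten iron above its Curie point (\(\mu_r \approx 1\), \(\rho \approx 1.4 \times 10^{-6}\ \Omega\cdot\text{m}\)), this gives \(\delta \approx 19\ \text{mm}\) at 1 kHz, consistent with smaller melts favoring higher operating frequencies.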
Refractory lining A disposable refractory lining separates the melt from the induction coil; it wears during melting and casting and must be replaced periodically. See also Electric arc furnace—for another type of electric furnace, used in larger foundries and mini-mill steelmaking operations References Further reading External links "How Induction Furnace Are Making It Hot For The Axis", Popular Science, November 1943. Detailed article on the basics with numerous illustrations Industrial furnaces Metallurgy
Induction furnace
[ "Chemistry", "Materials_science", "Engineering" ]
808
[ "Metallurgical processes", "Metallurgy", "Materials science", "Industrial furnaces", "nan" ]
2,965,243
https://en.wikipedia.org/wiki/Captive%20breeding
Captive breeding, also known as captive propagation, is the process of keeping plants or animals in controlled environments, such as wildlife reserves, zoos, botanic gardens, and other conservation facilities. It is sometimes employed to help species that are being threatened by the effects of human activities such as climate change, habitat loss, fragmentation, overhunting or fishing, pollution, predation, disease, and parasitism. For many species, relatively little is known about the conditions needed for successful breeding. Information about a species' reproductive biology may be critical to the success of a captive breeding program. In some cases a captive breeding program can save a species from extinction, but for success, breeders must consider many factors—including genetic, ecological, behavioral, and ethical issues. Most successful attempts involve the cooperation and coordination of many institutions. The efforts put into captive breeding can aid in education about conservation, because species in captivity are closer to the public than their wild conspecifics. Such successes, built on the continued breeding of species for generations in captivity, are also aided by extensive ex-situ and in-situ research efforts. History Captive breeding techniques began with the first human domestication of animals such as goats, and plants like wheat, at least 10,000 years ago. These practices were expanded with the rise of the first zoos, which started as royal menageries, such as the one at Hierakonpolis, capital of Egypt in the Predynastic Period. The first actual captive breeding programs were only started in the 1960s. These programs, such as the Arabian Oryx breeding program of the Phoenix Zoo in 1962, were aimed at the reintroduction of these species into the wild. Such programs expanded under the Endangered Species Act of 1973, signed during the Nixon administration, which focused on protecting endangered species and their habitats to preserve biodiversity. Since then, research and conservation have been housed in zoos, such as the Institute for Conservation Research at the San Diego Zoo, founded in 1975 and expanded in 2009, which have contributed to successful conservation efforts for species such as the Hawaiian Crow. Coordination The breeding of species of conservation concern is coordinated by cooperative breeding programs built around international studbooks and coordinators, who evaluate the roles of individual animals and institutions from a global or regional perspective. These studbooks contain information on birth date, gender, location, and lineage (if known), which helps determine survival and reproduction rates, the number of founders of the population, and inbreeding coefficients. A species coordinator reviews the information in studbooks and determines a breeding strategy that would produce the most advantageous offspring. If two compatible animals are found at different zoos, the animals may be transported for mating, but this is stressful, which could in turn make mating less likely. However, this is still a popular breeding method among European zoological organizations. Artificial fertilization (by shipping semen) is another option, but male animals can experience stress during semen collection, as can females during the artificial insemination procedure. Furthermore, this approach yields lower-quality semen, because shipping requires extending the life of the sperm for the transit time. 
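The inbreeding coefficients that studbooks are used to determine can be computed directly from the recorded pedigree. Below is a minimal sketch of the standard recursive ("tabular") kinship method; the pedigree and names are hypothetical, and real studbook software used by breeding programs handles far more bookkeeping (partially unknown ancestry, founder assumptions, mean-kinship rankings).

```python
# Minimal sketch: kinship and inbreeding coefficients from a toy studbook
# pedigree, using the standard recursive (tabular) method.
from functools import lru_cache

# individual -> (sire, dam); None marks an unknown parent (treated as founder)
PEDIGREE = {
    "A": (None, None),
    "B": (None, None),
    "C": ("A", "B"),
    "D": ("A", "B"),
    "E": ("C", "D"),   # offspring of a full-sibling pairing
}
BIRTH_ORDER = {name: i for i, name in enumerate(PEDIGREE)}  # parents precede offspring

@lru_cache(maxsize=None)
def kinship(x, y):
    """Probability that alleles drawn from x and y are identical by descent."""
    if x is None or y is None:
        return 0.0                                  # founders assumed unrelated
    if x == y:
        sire, dam = PEDIGREE[x]
        return 0.5 * (1.0 + kinship(sire, dam))     # self-kinship
    if BIRTH_ORDER[x] < BIRTH_ORDER[y]:             # recurse via the later-born
        x, y = y, x
    sire, dam = PEDIGREE[x]
    return 0.5 * (kinship(sire, y) + kinship(dam, y))

def inbreeding(x):
    """Wright's inbreeding coefficient F: the kinship of the parents."""
    return kinship(*PEDIGREE[x])

print(inbreeding("E"))   # 0.25, the classic value for full-sib mating
```

A coordinator's pairing choice can then favor candidate pairs with the lowest kinship, which is the same logic behind the marker-based selection of the most distantly related individuals described under Methods used below.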
There are regional programmes for the conservation of endangered species: Americas: Species Survival Plan SSP (Association of Zoos and Aquariums AZA, Canadian Association of Zoos and Aquariums CAZA), Saving Animals from Extinction SAFE (Association of Zoos and Aquariums AZA) Europe: European Endangered Species Programme EEP (European Association of Zoos and Aquaria EAZA) Australasia: Australasian Species Management Program ASMP (Zoo and Aquarium Association ZAA) Africa: African Preservation Program APP (African Association of Zoological Gardens and Aquaria PAAZAB) Japan: Conservation activities of the Japanese Association of Zoos and Aquariums JAZA South Asia: Conservation activities of the South Asian Zoo Association for Regional Cooperation SAZARC South East Asia: Conservation activities of the South East Asian Zoos Association SEAZA Challenges Genetics The objective of many captive populations is to maintain levels of genetic diversity similar to those found in wild populations. As captive populations are usually small and maintained in artificial environments, genetic factors such as adaptation, inbreeding and loss of diversity can be a major concern. Domestication adaptations Adaptive differences between plant and animal populations arise due to variations in environmental pressures. In the case of captive breeding prior to reintroduction into the wild, it is possible for species to evolve to adapt to the captive environment, rather than their natural environment. Reintroducing a plant or animal to an environment dissimilar to the one it originally came from can cause fixation of traits that are not suited to that environment, leaving the individual disadvantaged. Selection intensity, initial genetic diversity, and effective population size can impact how much the species adapts to its captive environment. Modeling studies indicate that the duration of a program (i.e., the time from the foundation of the captive population to the last release event) is an important determinant of reintroduction success: success is maximized for intermediate project durations, which allow the release of a sufficient number of individuals while minimizing the number of generations undergoing relaxed selection in captivity. Adaptation to captivity can be minimized by reducing the number of generations spent in captivity, by creating a captive environment similar to the natural one so as to minimize selection for captive adaptations, and by maximizing the number of immigrants from wild populations. Genetic diversity One consequence of small captive population size is the increased impact of genetic drift, where genes can become fixed or disappear completely by chance, thereby reducing genetic diversity. Other factors that can impact genetic diversity in a captive population are bottlenecks and initial population size. Bottlenecks, such as a rapid decline in the population or a small initial population, reduce genetic diversity. Loss can be minimized by establishing a population with a large enough number of founders to genetically represent the wild population, maximizing population size, maximizing the ratio of effective population size to actual population size, and minimizing the number of generations in captivity. Inbreeding Inbreeding is when organisms mate with closely related individuals, lowering heterozygosity in a population. Although inbreeding can be relatively common, when it results in a reduction in fitness it is known as inbreeding depression. 
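Both the drift and inbreeding effects above can be quantified with a standard population-genetics result (not stated in the article): in an idealized closed population of effective size \(N_e\), expected heterozygosity \(H\) decays each generation, and the inbreeding coefficient \(F\) rises correspondingly:

$$
H_t = H_0\left(1 - \frac{1}{2N_e}\right)^{t}, \qquad F_t = 1 - \frac{H_t}{H_0}
$$

For example, a captive population held at \(N_e = 50\) for 10 generations retains about \((1 - 1/100)^{10} \approx 90\%\) of its founding heterozygosity, which is why the guidance above stresses maximizing effective population size and minimizing the number of generations in captivity.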
The detrimental effects of inbreeding depression are especially prevalent in smaller populations and can therefore be extensive in captive populations. To keep these populations viable, it is important to monitor and reduce the effects of deleterious allele expression caused by inbreeding depression and to restore genetic diversity. Comparing inbred populations against non-inbred or less-inbred populations can help determine the extent of any detrimental effects present. Closely monitoring the possibility of inbreeding within the captive-bred population is also key to the success of reintroduction into the species' native habitat. Outbreeding Outbreeding is when organisms mate with unrelated individuals, increasing heterozygosity in a population. Although new diversity is often beneficial, if there are large genetic differences between the two individuals it can result in outbreeding depression. This is a reduction in fitness similar to that of inbreeding depression, but it arises from a number of different mechanisms, including taxonomic issues, chromosomal differences, sexual incompatibility, or adaptive differences between the individuals. A common cause is chromosomal ploidy differences and hybridization between individuals, leading to sterility. The best-known example is the orangutan, which, prior to taxonomic revisions in the 1980s, was commonly interbred in captive populations, producing hybrid orangutans with lower fitness. If chromosomal ploidy is ignored during reintroduction, restoration efforts will fail due to sterile hybrids in the wild. If there are large genetic differences between individuals originally from distant populations, those individuals should only be bred in circumstances where no other mates exist. Behavior changes Captive breeding can contribute to changes in behavior in animals that have been reintroduced to the wild. Released animals are commonly less capable of hunting or foraging for food, which leads to starvation, possibly because the young animals spent the critical learning period in captivity. Released animals often display more risk-taking behavior and fail to avoid predators. Golden lion tamarin mothers often die in the wild before having offspring because they cannot climb and forage. This leads to continuing population declines despite reintroduction, as the reintroduced animals are unable to produce viable offspring. Training can improve anti-predator skills, but its effectiveness varies. Salmon bred in captivity have shown similar declines in caution and are killed by predators when young. However, salmon that were reared in an enriched environment with natural prey showed less risk-taking behavior and were more likely to survive. A study on mice found that, after captive breeding had been in place for multiple generations, mice that were "released" to breed with wild mice bred amongst themselves instead of with the wild mice. This suggests that captive breeding may affect mating preferences, and has implications for the success of a reintroduction program. Human-mediated recovery of species can unintentionally promote maladaptive behaviors in wild populations. In 1980 the number of wild Chatham Island black robins was reduced to a single mating pair. Intense management of the population helped it recover, and by 1998 there were 200 individuals. During the recovery, scientists observed "rim laying", an egg-laying habit in which individuals laid eggs on the rim of the nest instead of the center. 
Rim-laid eggs never hatched. To combat this, land managers pushed the eggs to the center of the nest, which greatly increased reproduction. However, by allowing this maladaptive trait to persist, the intervention meant that over half the population eventually consisted of rim layers. Genetic studies found that this was an autosomal dominant Mendelian trait that was selected for as a result of human intervention. Another challenge for captive breeding is attempting to establish multi-partner mating systems in captive populations. It can be difficult to replicate the circumstances surrounding multiple-mate systems and allow them to occur naturally in captivity, due to limited housing space and a lack of information. When animals are brought into captivity, there is no guarantee that a pair will bond or that all the members of a population will participate in breeding. Housing space throughout facilities is limited, so allowing free mate choice may introduce genetic issues into the population. A lack of information about the effects of mating systems on captive populations can also present issues when attempting to breed: these mating systems are not always fully understood, and the effects captivity may have on them cannot be known until they are studied in greater depth. Successes The Phoenix Zoo began an Arabian Oryx breeding program in 1962 and was able to successfully breed over 200 individuals from a lineage of only 9 original founders. Members of this founding population were then sent to many other facilities worldwide, and many breeding herds were established. In 1982, the first of the population were reintroduced into Oman, and over the next two decades the population increased and successfully re-established itself in native regions. Arabian Oryx have now been reintroduced into areas such as Saudi Arabia, Oman, and Israel, where they number 1,100, showing a recovery thanks to captive breeding efforts. The De Wildt Cheetah and Wildlife Centre, established in South Africa in 1971, has a cheetah captive breeding program. Between 1975 and 2005, 242 litters were born, with a total of 785 cubs. The survival rate of cubs was 71.3% for the first twelve months and 66.2% for older cubs, demonstrating that cheetahs can be bred successfully (and their endangerment thereby decreased). It also indicated that failure at other breeding facilities may be due to "poor" sperm morphology. Przewalski's horse, the only horse species never to have been domesticated, was recovered from the brink of extinction by a captive breeding program and successfully reintroduced in the 1990s to Mongolia, with more than 750 wild-roaming Przewalski's horses as of 2020. The Galápagos tortoise population, which once fell as low as 12 remaining individuals, had recovered to more than 2,000 by 2014 through a captive breeding program. A further 8 tortoise species were supported by captive breeding programs in the island chain. Wild Tasmanian devils have declined by 90% due to a transmissible cancer called Devil Facial Tumor Disease. A captive insurance population program was started, but the captive breeding rates as of 2012 were lower than they needed to be. Keeley, Fanson, Masters, and McGreevy (2012) sought to "increase our understanding of the estrous cycle of the devil and elucidate potential causes of failed male-female pairings" by examining temporal patterns of fecal progestogen and corticosterone metabolite concentrations. 
They found that the majority of unsuccessful females were captive-born, suggesting that if the species' survival depended solely on captive breeding, the population would probably disappear. In 2010, the Oregon Zoo found that Columbia Basin pygmy rabbit pairings based on familiarity and preferences resulted in a significant increase in breeding success. In 2019, researchers trying to breed captive American paddlefish and Russian sturgeon separately inadvertently produced a hybrid of the two species, dubbed the sturddlefish. Research Captive breeding can also serve as a research tool for understanding the reproductive physiology and reproductive behaviors of species. To breed animals successfully, there must be an understanding of their mating systems, their reproductive physiology, and their behavior or mating rituals. Through captive breeding programs, these factors can be measured in a controlled setting and the results interpreted and used to aid both ex-situ and in-situ conservation. Through a greater understanding of these systems, captive breeding efforts can have greater success when attempting to reproduce a species. Much research on elephant reproductive physiology and estrous cycles has been conducted in captivity, yielding a greater understanding of how these factors play into breeding attempts. Behavioral research quantifies how estrus affects herd behavior and how this in turn affects the bulls of a herd. Such research can help facilities monitor for behavior changes in their herds and conduct successful breeding attempts through this understanding. Research into these physiological systems in turn increases the success of breeding attempts and allows more generations to be raised in captivity. Beyond physiological research, multi-generational research on different species is another important tool: genetic changes can be tracked through the lineages brought up in captivity. Studbooks are an important resource containing records of species lineages; they track data across breeding histories, allowing facilities to understand the genetic history of an individual, the births and deaths involved in the captive breeding of a certain species, and the parentage of individual animals. These studbooks come from years of research effort involving captive breeding programs, and they allow facilities to view the history surrounding certain individuals and to work together to evaluate the best plan of action to increase breeding success and genetic diversity within captive populations of a species. This genetic record keeping is also used to understand phylogeny and to better understand fitness changes that may occur over generations in captive populations. Such record keeping aids research in population genetics aimed at evaluating the best methods for sustaining high genetic variation within captive populations. Research conducted on captive breeding populations is also important when creating SAFEs and SSPs for a certain species. 
Studies in behavior are important when developing captive breeding programs because they allow facilities to understand an animal's response to captivity and to adapt proper housing conditions for the animals. Populations currently being propagated in captivity are important research tools for understanding how to carry out the successful propagation of a species; this knowledge can be passed to other facilities, allowing more breeding programs to be developed and increasing the genetic diversity of captive populations. Methods used To found a captive breeding population with adequate genetic diversity, breeders usually select individuals from different source populations—ideally, at least 20–30 individuals. Founding populations for captive breeding programs have often had fewer individuals than ideal because of their threatened state, leaving them more susceptible to challenges such as inbreeding depression. To overcome challenges of captive breeding such as adaptive differences, loss of genetic diversity, inbreeding depression, and outbreeding depression, and to obtain the desired results, captive breeding programs use many monitoring methods. Artificial insemination is used to produce the desired offspring from individuals that do not mate naturally, reducing the effects of mating closely related individuals, such as inbreeding. Methods such as those seen in panda pornography allow programs to mate chosen individuals by encouraging mating behavior. Because a central concern in captive breeding is minimizing the effects of breeding closely related individuals, microsatellite regions from an organism's genome can be used to estimate relatedness among founders so that the most distantly related individuals can be chosen to breed. This method has successfully been used in the captive breeding of the California condor and the Guam rail. The maximum avoidance of inbreeding (MAI) scheme allows control at a group level rather than an individual level by rotating individuals between groups to avoid inbreeding. Facilities can use intensive housing rather than group housing to make reproductive success easier to achieve and to create more genetic diversity within a population. Intensive housing is when a species is forced into monogamy so that only two individuals mate with each other, compared to group housing, where the entire population is kept in the same space to try to replicate multi-partner breeding systems. Intensive housing with enforced monogamy has been found to lower inbreeding and yield greater genetic diversity. Intensive housing was used with Tasmanian devil populations in captivity rather than allowing group mate choice; this helped increase the population's reproductive success in captivity and reduced inbreeding depression within the population. Using intensive housing to help establish a genetically healthy population in captivity can allow facilities to further increase conservation efforts for a species and combat genetic issues that may arise in the captive population. New technologies Assisted reproduction technology (ART): Artificial insemination Getting captive wild animals to breed naturally can be a difficult task. 
Giant pandas, for example, lose interest in mating once they are captured, and female giant pandas only experience estrus once a year, which lasts only 48 to 72 hours. Many researchers have turned to artificial insemination in an attempt to increase the populations of endangered animals. It may be used for many reasons, including to overcome physical breeding difficulties, to allow a male to inseminate a much larger number of females, to control the paternity of offspring, and to avoid injury incurred during natural mating. It also creates more genetically diverse captive populations, enabling captive facilities to easily share genetic material with each other without the need to move animals. Scientists at Justus Liebig University Giessen, Germany, in the working group of Michael Lierz, developed a novel technique for semen collection and artificial insemination in parrots, producing the world's first macaw by assisted reproduction. Cryopreservation Animal species can be preserved in gene banks, which consist of cryogenic facilities used to store live sperm, eggs, or embryos in ultracold conditions. The Zoological Society of San Diego has established a "frozen zoo" to store frozen tissue samples from the world's rarest and most endangered species using cryopreservation techniques. At present, samples from more than 355 species have been stored, including mammals, reptiles, and birds. Cryopreservation can be performed as oocyte cryopreservation before fertilization, or as embryo cryopreservation after fertilization. Cryogenically preserved specimens can potentially be used to revive breeds that are endangered or extinct, for breed improvement, crossbreeding, and research and development. This method can be used for virtually indefinite storage of material without deterioration over a much greater time period relative to all other methods of ex situ conservation. However, cryo-conservation can be an expensive strategy and requires long-term hygienic and economic commitment for germplasms to remain viable. Cryo-conservation can also face unique challenges based on the species, as some species have a reduced survival rate of frozen germplasm, but cryobiology is a field of active research and many studies concerning plants are underway. An example of the use of cryoconservation to prevent the extinction of a livestock breed is the case of the Hungarian Grey cattle, or Magyar Szürke. Hungarian Grey cattle were once a dominant breed in southeastern Europe with a population of 4.9 million head in 1884. They were mainly used for draft power and meat. However, the population had decreased to 280,000 head by the end of World War II and eventually reached a low of 187 females and 6 males from 1965 to 1970. The breed's decreased use was due primarily to the mechanization of agriculture and the adoption of major breeds, which yield higher milk production. The Hungarian government launched a project to preserve the breed, as it possesses valuable traits, such as stamina, calving ease, disease resistance, and easy adaptation to a variety of climates. The government program included various conservation strategies, including the cryopreservation of semen and embryos. The Hungarian government's conservation effort brought the population up to 10,310 in 2012, which shows significant improvement using cryoconservation.
Cloning The best current cloning techniques have an average success rate of 9.4 percent when working with familiar species such as mice, while cloning wild animals is usually less than 1 percent successful. In 2001, a cow named Bessie gave birth to a cloned Asian gaur, an endangered species, but the calf died after two days. In 2003, a banteng was successfully cloned, followed by three African wildcats from a thawed frozen embryo. These successes provided hope that similar techniques (using surrogate mothers of another species) might be used to clone extinct species. Anticipating this possibility, tissue samples from the last bucardo (Pyrenean ibex) were frozen in liquid nitrogen immediately after it died in 2000. Researchers are also considering cloning endangered species such as the giant panda and cheetah. However, cloning of animals is opposed by animal rights groups due to the number of cloned animals that suffer from malformations before they die. Interspecific pregnancy A potential technique for aiding in reproduction of endangered species is interspecific pregnancy: implanting embryos of an endangered species into the womb of a female of a related species and carrying them to term. It has been used for the Spanish ibex and the houbara bustard. Conservation education Captive breeding is an important tool in modern conservation education because it provides a framework for how we care about species and allows institutions to show the beauty contained in our natural environment. These practices of captive breeding can be used to explain the function of modern-day facilities and their importance in conservation. Through continued breeding efforts, populations can continue to be displayed in close proximity to the public, and their role in conservation can be explained. These explanations help show a side of the world many people would not otherwise engage with: conservation is not something that is inherently known about; it must be shown and taught to others to raise awareness of the issues around the globe. By allowing people to view these species in captivity, facilities can explain the issues the species face in the wild and advocate for the conservation of these species and their natural habitats. Institutions focus efforts on large charismatic species, such as elephants, giraffes, and rhinos, because these draw more visitors to institutions and garner more attention from the public. While many of these charismatic megafauna do draw more attention than other species, captive breeding programs and facilities involving other species can still be used to educate the public about a broader range of issues. Bristol Zoo Gardens in the United Kingdom has maintained a species of medicinal leech (Hirudo medicinalis) in its facility to use as an education exhibit. Leeches normally carry a negative connotation, but they have been an important tool in medicine. The display at Bristol Zoo Gardens provides an educational piece and tells the story of a woman who sold leeches to the locals around her for medicinal purposes. This display advocates for a smaller species that would not normally be covered by facilities; the leeches are well maintained there, and active conservation of the species is being done because of its significance to humans and the environment.
Facilities can use captive breeding in a number of ways, such as educating the public about captive breeding, which provides conservation advocacy; maintaining these populations also helps keep the conservation issues surrounding the species present in the minds of the general public. Ethical considerations Captive-breeding programs have had notable successes throughout history. One example is the American black-footed ferret: a wild population that had dwindled to only 18 animals by 1986 was eventually raised to 500. The Arabian oryx, a Middle Eastern antelope, was hunted over centuries, reducing its population by the late 1960s to merely eleven living animals; not wanting to lose such a symbolic animal of the Middle East, King Saud had these individuals rescued and donated to the Phoenix Zoo and to the San Diego Zoo and its (at the time) newly developed Wild Animal Park, prior to his death in 1969. From these actions, those eleven oryx were successfully bred back from the brink of extinction and would go on to be re-released in the deserts of Jordan, Oman, Bahrain, the United Arab Emirates, and Qatar, with the first animals set free starting in 1980. Currently, the wild animals number around 1,000 individuals, with a further 6,000–7,000 in zoos and breeding centres internationally. While captive breeding can be an ideal solution for preventing endangered animals from facing serious threats of extinction, there are still reasons why these programs can occasionally do more harm than good. Some detrimental effects include delays in understanding the optimal conditions required for reproduction, failure to reach self-sustaining levels or provide sufficient stock for release, loss of genetic diversity due to inbreeding, and poor success in reintroductions despite available captive-bred young. Although it has been shown that captive breeding programs can yield negative genetic effects that decrease the fitness of captive-bred organisms, there is no direct evidence that this negative effect also decreases the overall fitness of their wild-born descendants. It has been argued that animals should be released from captivity programs for four main reasons: a lack of sufficient space due to overly successful breeding programs, closure of facilities due to financial reasons, pressure from animal rights advocacy groups, and to aid the conservation of endangered species. Additionally, there are many ethical complications to reintroducing animals born in captivity back into the wild. For example, when scientists were reintroducing a rare species of toad back into the Mallorcan wild in 1993, a potentially deadly fungus that could kill frogs and toads was unintentionally introduced. It is also important to maintain the organism's original habitat, or to replicate that specific habitat, for the species' survival. There are ethical issues surrounding whether a species truly needs human intervention and whether the resources going toward the captive breeding of these species could be allocated to other areas. Some populations may not need intervention because they were never extinction-prone in the first place, such as the peregrine falcon. The population of peregrine falcons crashed in the 1950s and 1960s due to the effect of pesticides on egg production and species survival. Many facilities at the time in the U.S.
and in European countries brought in peregrine falcons in order to help the declining population and establish a steady population through captive breeding. It was later shown, through research on the reproductive success of peregrine falcons and an analysis of their population, that human intervention was not necessary for the population to recover and reach a steady point of equilibrium. This raises the question of whether captive breeding and population establishment should be pursued through human intervention, or whether efforts should instead be directed at the source of the problem. The efforts and finances used to help bring about new peregrine falcon populations could have been used to prevent some level of pollution or to help breeding efforts for extinction-prone species that truly need intervention. See also Breeding in the wild European Endangered Species Programme (EEP) Ex-situ conservation Panda pornography Species Survival Plan or SSP World Conference on Breeding Endangered Species in Captivity as an Aid to their Survival or WCBESCAS ZooBorns References Conservation biology Zoology Animals in captivity Animal breeding
Captive breeding
[ "Biology" ]
5,861
[ "Conservation biology", "Zoology" ]
2,965,318
https://en.wikipedia.org/wiki/Time%20reversal%20signal%20processing
Time reversal signal processing is a signal processing technique that has three main uses: creating an optimal carrier signal for communication, reconstructing a source event, and focusing high-energy waves to a point in space. A Time Reversal Mirror (TRM) is a device that can focus waves using the time reversal method. TRMs are also known as time reversal mirror arrays, since they are usually arrays of transducers. TRMs are well known and have been used for decades in the optical domain. They are also used in the ultrasonic domain. Overview If the source is passive, i.e. some type of isolated reflector, an iterative technique can be used to focus energy on it. The TRM transmits a plane wave which travels toward the target and is reflected off it. The reflected wave returns to the TRM, where it looks as if the target has emitted a (weak) signal. The TRM reverses and retransmits the signal as usual, and a more focused wave travels toward the target. As the process is repeated, the waves become more and more focused on the target. Yet another variation is to use a single transducer and an ergodic cavity. Intuitively, an ergodic cavity is one that will allow a wave originating at any point to reach any other point. An example of an ergodic cavity is an irregularly shaped swimming pool: if someone dives in, eventually the entire surface will be rippling with no clear pattern. If the propagation medium is lossless and the boundaries are perfect reflectors, a wave starting at any point will reach all other points an infinite number of times. This property can be exploited by using a single transducer and recording for a long time to get as many reflections as possible. Theory The time reversal technique is based upon a feature of the wave equation known as reciprocity: given a solution to the wave equation, the time reversal (using a negative time) of that solution is also a solution. This occurs because the standard wave equation only contains even-order derivatives. Some media are not reciprocal (e.g. very lossy or noisy media), but many very useful ones are approximately so, including sound waves in water or air, ultrasonic waves in human bodies, and electromagnetic waves in free space. The medium must also be approximately linear. Time reversal techniques can be modeled as a matched filter. If a delta function is the original signal, then the received signal at the TRM is the impulse response of the channel. The TRM sends the reversed version of the impulse response back through the same channel, effectively autocorrelating it. This autocorrelation function has a peak at the origin, where the original source was. The signal is concentrated in both space and time (in many applications, autocorrelation functions are functions of time only). Another way to think of a time reversal experiment is that the TRM is a "channel sampler". The TRM measures the channel during the recording phase, and uses that information in the transmission phase to optimally focus the wave back to the source. Experiments A notable researcher is Mathias Fink of École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris. His team has done numerous experiments with ultrasonic TRMs. An interesting experiment involved a single source transducer, a 96-element TRM, and 2000 thin steel rods located between the source and the array. The source sent a 1 μs pulse both with and without the steel scatterers. The source point was measured for both time width and spatial width in the retransmission step.
The spatial width was about 6 times narrower with the scatterers than without. Moreover, with the scatterers, the spatial width was less than the diffraction limit determined by the size of the TRM. This is possible because the scatterers increased the effective aperture of the array. Even when the scatterers were moved slightly (on the order of a wavelength) between the receive and transmit steps, the focusing was still quite good, showing that time reversal techniques can be robust in the face of a changing medium. In addition, José M. F. Moura of Carnegie Mellon University has led a research team working to extend the principles of time reversal to electromagnetic waves, and they have achieved resolution in excess of the Rayleigh resolution limit, proving the efficacy of time reversal techniques. Their efforts are focused on radar systems, trying to improve detection and imaging schemes in highly cluttered environments, where time reversal techniques seem to provide the greatest benefit. Applications The beauty of time reversal signal processing is that one need not know any details of the channel. The step of sending a wave through the channel effectively measures it, and the retransmission step uses this data to focus the wave. Thus one doesn't have to solve the wave equation to optimize the system; one only needs to know that the medium is reciprocal. Time reversal is therefore suited to applications with inhomogeneous media. An attractive aspect of time reversal signal processing is the fact that it makes use of multipath propagation. Many wireless communication systems must compensate and correct for multipath effects. Time reversal techniques use multipath to their advantage by using the energy from all paths. Fink imagines a cryptographic application based on the ergodic cavity configuration. The key would be composed of the locations of two transducers. One plays the message, the other records waves after they have bounced throughout the cavity; this recording will look like noise. When the recorded message is time reversed and played back, there is only one location to launch the waves from in order for them to focus. Given that the playback location is correct, only one other location will exhibit the focused message wave; all other locations should look noisy. See also Phase conjugation References External links Mathias Fink. Time Reversal of Ultrasonic Fields–Part 1: Basic Principles. IEEE Trans. Ultrasonics, Ferroelectrics, and Frequency Control. 39(5): 555–566. September 1992. Signal processing
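The matched-filter description above lends itself to a quick numerical illustration. Below is a minimal one-dimensional sketch in Python, assuming an invented sparse multipath channel (the echo delays and gains are arbitrary stand-ins rather than measurements): time-reversing the recorded impulse response and re-sending it through the same channel autocorrelates the response, producing a sharp focus at zero lag.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multipath channel: a sparse impulse response whose
# echo delays and gains are invented for this illustration.
n = 512
h = np.zeros(n)
delays = rng.choice(np.arange(20, 400), size=8, replace=False)
h[delays] = rng.normal(size=8)

# Recording phase: a source emits an impulse, so the TRM records h.
# Transmission phase: the TRM re-emits h reversed in time through the
# same channel, which autocorrelates the impulse response.
refocused = np.convolve(h[::-1], h)

# The autocorrelation peaks at zero lag (the center sample), the
# analogue of the wave refocusing sharply at the original source.
peak = int(np.argmax(np.abs(refocused)))
print(peak == n - 1)                                       # True: focus at zero lag
print(np.abs(refocused[peak]) / np.abs(refocused).mean())  # peak-to-background ratio
```

The printed peak-to-background ratio is the one-dimensional analogue of the spatial and temporal focusing described above, obtained without ever solving the wave equation for the channel.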
Time reversal signal processing
[ "Technology", "Engineering" ]
1,256
[ "Telecommunications engineering", "Computer engineering", "Signal processing" ]
2,965,380
https://en.wikipedia.org/wiki/Glossary%20of%20blogging
This is a list of blogging terms. Blogging, like any hobby, has developed something of a specialized vocabulary. The following is an attempt to explain a few of the more common phrases and words, including etymologies when not obvious. Blog-related terms A Atom A popular feed format developed as an alternative to RSS. Autocasting An automated form of podcasting that allows bloggers and blog readers to generate audio versions of text blogs from RSS feeds. Audioblog A blog where the posts consist mainly of voice recordings sent by mobile phone, sometimes with some short text messages added for metadata purposes. (cf. podcasting) B Beauty Blog Beauty blogs are niche blogs that cover cosmetics, makeup or skincare related topics, events, product launches, product reviews, nail-art, makeup trends, highly curated products, insider tips from tastemakers and celebrities, et cetera. Blawg A blog about law and legal issues. Bleg An entry in a blog requesting information or contributions; a portmanteau of "blog" and "beg". Also called "Lazyweb". Blog Carnival A blog article that contains links to other articles covering a specific topic. Most blog carnivals are hosted by a rotating list of frequent contributors to the carnival, and serve to both generate new posts by contributors and highlight new bloggers posting matter in that subject area. Blog client (weblog client) is software to manage (post, edit) blogs from the operating system with no need to launch a web browser. A typical blog client has an editor, a spell-checker, and a few more options that simplify content creation and editing. Blog publishing service Software that is used to create the blog. Some of the most popular are WordPress, Blogger, TypePad, Movable Type, and Joomla. Blogger Person who runs a blog. Also blogger.com, a popular blog hosting website. Rarely weblogger. Bloggernacle Blogs written by and for Mormons (a portmanteau of "blog" and "Tabernacle"). Generally refers to faithful Mormon bloggers and sometimes refers to a specific grouping of faithful Mormon bloggers. Bloggies One of the most popular blog awards. Blogroll A list of other blogs that a blogger might recommend by providing links to them (usually in a sidebar list). Blogosphere All blogs, or the blogging community. Blogware A category of software that consists of a specialized form of a Content Management System specifically designed for creating and maintaining weblogs. The BOBs The largest international blog awards. C Catblogging (traditionally "Friday catblogging", sometimes "Caturday") The practice of posting pictures of cats, in typical cat postures and expressions, on a blog. Collaborative blog A blog (usually focused on a single issue or political stripe) on which multiple users enjoy posting permission. Also known as a group blog. Comment spam Like e-mail spam. Robot "spambots" flood a blog with advertising in the form of bogus comments. A serious problem that requires bloggers and blog platforms to have tools to exclude some users or ban some addresses in comments. D Desktop Blogging Client An off-line blog management (posting, editing, and archiving) tool. Domain Name A domain name is the name of a blog/website. Google.com and Wikipedia.org are examples of blogs/websites' names. E Event blogging When marketers create new blogs for upcoming events. For that, they buy EMDs or exact match domains for the upcoming events to attract people who are searching for that event. Because their domain is rich with keywords, they get better rankings in search engines.
However, to establish an event blog, event bloggers may start working on their blogs 6 months or more before the event to make a decent amount of content and to get quality backlinks. F Fisking/To fisk To rebut a news report line-by-line, or the result of doing so. Flog A portmanteau of "fake" and "blog"; a form of astroturfing. Also a food blog; sometimes, a blog dedicated to food porn. Feeds RSS Feeds. H Health blog A blog that covers health topics, events, and/or related content of the health industry and the general community. In short, anything related to health. J J-blog A journalist blog. Also a blog with a Jewish focus. L Legal blog A blog about the law. Lifelog A blog that captures a person's entire life. List blog A blog consisting solely of list-style posts. Listicle A short form of writing that uses a list as its thematic structure but is fleshed out with sufficient copy to be published as an article. Litblog A blog that focuses primarily on the topic of literature. M Milblog A blog written by members or veterans of any branch of military service - Army, Navy, Air Force, or Marines. A contraction of military and blog. Moblog A portmanteau of "mobile" and "blog". A blog featuring posts sent mainly by mobile phone, using SMS or MMS messages. They are often photoblogs. Mommy blog A blog featuring discussions especially about home and family. Multiblog A blog constructed as a conversation between more than two people. P Permalink Permanent link. The unique URL of a single post. Use this when you want to link to a post somewhere. Phlog A type of blog utilising the Gopher protocol instead of HTTP; also a photoblog, a portmanteau of "photo" and "blog". Photoblog A blog mostly containing photos, posted constantly and chronologically. Pingback The alert in the TrackBack system that notifies the original poster of a blog post when someone else writes an entry concerning the original post. Podcasting Contraction of “iPod” and “broadcasting” (but not for iPods only). Posting audio and video material on a blog and its RSS feed, for digital players. Post or blog post A blog post is a piece of writing in the form of an article that's published on a blog by a blogger. Post Slug For blogs with common language URLs, the post slug is the portion of the URL that represents the post, such as "all-about-my-holiday" in www.example.com/all-about-my-holiday R RSS Really Simple Syndication is a family of Web feed formats used to publish frequently updated content such as blog entries, news headlines, or podcasts. RSS aggregator Software or online service allowing a blogger to read an RSS feed, especially the latest posts on their favorite blogs. Also called a reader or feedreader. RSS feed The file containing a blog's latest posts. It is read by an RSS aggregator or reader and shows at once when a blog has been updated. It may contain only the title of the post, the title plus the first few lines of the post, or the entire post. S Search engine friendly URLs or, for short, SEF URLs, implemented via URL mapping. Spam blog A blog that is composed of spam. A spam blog or "any blog whose creator doesn't add any written value." Slashdot effect The Slashdot effect can hit blogs or another website, and is caused by a major website (usually Slashdot, but also Digg, Metafilter, Boing Boing, Instapundit and others) sending huge amounts of temporary traffic that often slows down the server. Soldierblog See Milblog. Subscribe The term used when a blog's feed is added to a feed reader like Bloglines or Google.
Some blogging platforms have internal subscriptions; these allow readers to receive a notification when there are new posts in a blog. A subscriber is a person who is willing to receive a blogger's news and updates. T Templates Templates used on the "back end" of a blog that work together to handle information and present it on a blog. Theme CSS-based code that, when applied to the templates, results in visual element changes to the blog. The theme, as a whole, is also referred to as a blog design. TrackBack A system that allows a blogger to see who has seen the original post and has written another entry concerning it. The system works by sending a 'ping' between the blogs and therefore providing the alert. V Vlog A video blog; a vlogger is a video blogger (e.g. someone who records himself interviewing people of a certain field). W Warblog A blog devoted mostly or wholly to covering news events concerning an ongoing war. Weblog The unshortened version of 'blog'. References Works cited Blogs Internet terminology Neologisms Blogging Wikipedia glossaries using description lists
Glossary of blogging
[ "Technology" ]
1,822
[ "Computing terminology", "Glossaries of computers", "Internet terminology" ]
2,965,726
https://en.wikipedia.org/wiki/Markov%20number
A Markov number or Markoff number is a positive integer x, y or z that is part of a solution to the Markov Diophantine equation x² + y² + z² = 3xyz, studied by Andrey Markov (1879, 1880). The first few Markov numbers are 1, 2, 5, 13, 29, 34, 89, 169, 194, 233, 433, 610, 985, 1325, ... appearing as coordinates of the Markov triples (1, 1, 1), (1, 1, 2), (1, 2, 5), (1, 5, 13), (2, 5, 29), (1, 13, 34), (1, 34, 89), (2, 29, 169), (5, 13, 194), (1, 89, 233), (5, 29, 433), (1, 233, 610), (2, 169, 985), (13, 34, 1325), ... There are infinitely many Markov numbers and Markov triples. Markov tree There are two simple ways to obtain a new Markov triple from an old one (x, y, z). First, one may permute the 3 numbers x, y, z, so in particular one can normalize the triples so that x ≤ y ≤ z. Second, if (x, y, z) is a Markov triple then so is (x, y, 3xy − z). Applying this operation twice returns the same triple one started with. Joining each normalized Markov triple to the 1, 2, or 3 normalized triples one can obtain from this gives a graph starting from (1, 1, 1) as in the diagram. This graph is connected; in other words every Markov triple can be connected to (1, 1, 1) by a sequence of these operations. If one starts, as an example, with (1, 5, 13) we get its three neighbors (5, 13, 194), (1, 13, 34) and (1, 2, 5) in the Markov tree if z is set to 1, 5 and 13, respectively. For instance, starting with (1, 1, 2) and trading y and z before each iteration of the transform lists Markov triples with Fibonacci numbers. Starting with that same triplet and trading x and z before each iteration gives the triples with Pell numbers. All the Markov numbers on the regions adjacent to 2's region are odd-indexed Pell numbers (or numbers n such that 2n² − 1 is a square), and all the Markov numbers on the regions adjacent to 1's region are odd-indexed Fibonacci numbers. Thus, there are infinitely many Markov triples of the form (1, F2n−1, F2n+1), where Fk is the kth Fibonacci number. Likewise, there are infinitely many Markov triples of the form (2, P2n−1, P2n+1), where Pk is the kth Pell number. Other properties Aside from the two smallest singular triples (1, 1, 1) and (1, 1, 2), every Markov triple consists of three distinct integers. The unicity conjecture, as remarked by Frobenius in 1913, states that for a given Markov number c, there is exactly one normalized solution having c as its largest element: proofs of this conjecture have been claimed but none seems to be correct. Martin Aigner examines several weaker variants of the unicity conjecture. His fixed numerator conjecture was proved by Rabideau and Schiffler in 2020, while the fixed denominator conjecture and fixed sum conjecture were proved by Lee, Li, Rabideau and Schiffler in 2023. None of the prime divisors of a Markov number is congruent to 3 modulo 4, which implies that an odd Markov number is 1 more than a multiple of 4. Furthermore, if m is a Markov number then none of the prime divisors of 9m² − 4 is congruent to 3 modulo 4. An even Markov number is 2 more than a multiple of 32. In his 1982 paper, Don Zagier conjectured that the nth Markov number mn is asymptotically given by mn = (1/3)A^√n for a constant A ≈ 10.5101504. Moreover, he pointed out that x² + y² + z² = 3xyz + 4/9, an approximation of the original Diophantine equation, is equivalent to f(x) + f(y) = f(z) with f(t) = arcosh(3t/2). The conjecture was proved by Greg McShane and Igor Rivin in 1995 using techniques from hyperbolic geometry. The nth Lagrange number can be calculated from the nth Markov number with the formula Ln = √(9 − 4/mn²). The Markov numbers are sums of (non-unique) pairs of squares.
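Since every triple connects to (1, 1, 1) by the operations above, the tree can be searched mechanically. The following sketch is a minimal Python illustration (the function name and the heap-based traversal are choices made here, not taken from the literature); it reproduces the opening list of Markov numbers.

```python
from heapq import heappush, heappop

def markov_numbers(count):
    """Enumerate the first `count` Markov numbers by growing the Markov
    tree from (1, 1, 1): each coordinate may be replaced by
    3 * (product of the other two) - itself."""
    start = (1, 1, 1)
    heap = [(1, start)]          # keyed by the largest coordinate
    seen = {start}
    found = []
    while heap and len(found) < count:
        m, (x, y, z) = heappop(heap)
        if m not in found:       # dedupe: 1 and 2 head several triples
            found.append(m)
        for child in ((3*y*z - x, y, z), (x, 3*x*z - y, z), (x, y, 3*x*y - z)):
            t = tuple(sorted(child))
            if t not in seen:
                seen.add(t)
                heappush(heap, (t[2], t))
    return found

print(markov_numbers(10))   # [1, 2, 5, 13, 29, 34, 89, 169, 194, 233]
# every triple produced this way satisfies the Markov equation:
print(all(x*x + y*y + z*z == 3*x*y*z for (x, y, z) in [(1, 2, 5), (5, 13, 194)]))
```

The heap works because the two "upward" moves always increase a normalized triple's largest coordinate, so popping triples by largest coordinate yields the Markov numbers in increasing order.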
Markov's theorem showed that if f(x, y) = ax² + bxy + cy² is an indefinite binary quadratic form with real coefficients and discriminant D = b² − 4ac, then there are integers x, y for which f takes a nonzero value of absolute value at most √D/3, unless f is a Markov form: a constant times a quadratic form whose coefficients are determined by a Markov triple (p, q, r). Matrices Let tr denote the trace function over matrices. If X and Y are in SL2(ℂ), then tr(X)tr(Y)tr(XY) + tr(XYX⁻¹Y⁻¹) + 2 = tr(X)² + tr(Y)² + tr(XY)², so that if tr(XYX⁻¹Y⁻¹) = −2 then tr(X)² + tr(Y)² + tr(XY)² = tr(X)tr(Y)tr(XY). In particular if X and Y also have integer entries then tr(X)/3, tr(Y)/3, and tr(XY)/3 are a Markov triple. If X⋅Y⋅Z = I then tr(XY) = tr(Z), so more symmetrically if X, Y, and Z are in SL2(ℤ) with X⋅Y⋅Z = I and the commutator of two of them has trace −2, then their traces/3 are a Markov triple. See also Markov spectrum Notes References Diophantine equations Diophantine approximation Fibonacci numbers
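A quick numerical check of the trace identity above is straightforward. In the sketch below, the two integer matrices are chosen purely for illustration: they have determinant 1 and commutator trace −2, so their traces divided by 3 should give the Markov triple (1, 1, 2).

```python
import numpy as np

# Illustrative pair of integer matrices with determinant 1; they happen
# to satisfy tr(X Y X^-1 Y^-1) = -2.
X = np.array([[1.0, 1.0], [1.0, 2.0]])
Y = np.array([[2.0, 1.0], [1.0, 1.0]])
XY = X @ Y
comm = X @ Y @ np.linalg.inv(X) @ np.linalg.inv(Y)

lhs = np.trace(X) * np.trace(Y) * np.trace(XY) + np.trace(comm) + 2
rhs = np.trace(X)**2 + np.trace(Y)**2 + np.trace(XY)**2
print(np.isclose(lhs, rhs))            # True: the trace identity quoted above
print(int(round(np.trace(comm))))      # -2
x, y, z = np.trace(X) / 3, np.trace(Y) / 3, np.trace(XY) / 3
print((x, y, z), x*x + y*y + z*z == 3*x*y*z)   # (1.0, 1.0, 2.0) True
```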
Markov number
[ "Mathematics" ]
1,172
[ "Recurrence relations", "Fibonacci numbers", "Mathematical objects", "Equations", "Golden ratio", "Diophantine equations", "Mathematical relations", "Diophantine approximation", "Approximations", "Number theory" ]
2,965,742
https://en.wikipedia.org/wiki/Teardrop%20tattoo
The teardrop tattoo or tear tattoo is a symbolic tattoo of a tear that is placed underneath the eye. The teardrop is one of the most widely recognised prison tattoos and has various meanings. It can signify that the wearer has spent time in prison, or more specifically that the wearer was raped while incarcerated and tattooed by the rapist as a "property" mark and for humiliation, since facial tattoos cannot be concealed. The tattoo is sometimes worn by the female companions of prisoners in solidarity with their loved ones. Amy Winehouse had a teardrop drawn on her face in eyeliner after her husband Blake entered the Pentonville prison hospital following a suspected drug overdose. It can acknowledge the loss of a friend or family member: Basketball player Amar'e Stoudemire has had a teardrop tattoo since 2012 honouring his older brother Hazell Jr., who died in a car accident. In West Coast United States gang culture, the tattoo may signify that the wearer has killed someone and in some of those circles, the tattoo's meaning can change: an empty outline meaning the wearer attempted murder. Sometimes the exact meaning of the tattoo is known only by the wearer as in the case of Portuguese footballer Ricardo Quaresma, who has never explained his teardrop tattoos. See also Criminal tattoo Prison rape Prison tattooing References Symbols Tattoo designs Prison culture Prison rape
Teardrop tattoo
[ "Mathematics" ]
280
[ "Symbols" ]
2,965,801
https://en.wikipedia.org/wiki/Behavior-driven%20development
Behavior-driven development (BDD) involves naming software tests using domain language to describe the behavior of the code. BDD involves use of a domain-specific language (DSL) using natural-language constructs (e.g., English-like sentences) that can express the behavior and the expected outcomes. Proponents claim it encourages collaboration among developers, quality assurance experts, and customer representatives in a software project. It encourages teams to use conversation and concrete examples to formalize a shared understanding of how the application should behave. BDD is considered an effective practice especially when the problem space is complex. BDD is considered a refinement of test-driven development (TDD). BDD combines the techniques of TDD with ideas from domain-driven design and object-oriented analysis and design to provide software development and management teams with shared tools and a shared process to collaborate on software development. At a high level, BDD is an idea about how software development should be managed by both business interests and technical insight. Its practice involves use of specialized tools. Some tools specifically for BDD can be used for TDD. The tools automate the ubiquitous language. Overview BDD is a process by which DSL-structured natural-language statements are converted into executable tests. The result is tests that read like acceptance criteria for a given function. As such, BDD is an extension of TDD. BDD focuses on: Where to start in the process What to test and what not to test How much to test in one go What to call the tests How to understand why a test fails At its heart, BDD is about rethinking the approach to automated testing (including unit testing and acceptance testing) in order to avoid issues that naturally arise. For example, BDD suggests that unit test names be whole sentences starting with a conditional verb ("should" in English, for example) and should be written in order of business value. Acceptance tests should be written using the standard agile framework of a user story: "Being a [role/actor/stakeholder] I want a [feature/capability] yielding a [benefit]". Acceptance criteria should be written in terms of scenarios and implemented in classes: Given [initial context], when [event occurs], then [ensure some outcomes]. Starting from this point, many people developed BDD frameworks over a period of years, finally framing it in terms of a communication and collaboration framework for developers, QA and non-technical or business participants in a software project. Principles BDD suggests that software tests should be named in terms of desired behavior. Borrowing from agile software development, the "desired behavior" in this case consists of the requirements set by the business — that is, the desired behavior that has business value for whatever entity commissioned the software unit under construction. Within BDD practice, this is referred to as BDD being an "outside-in" activity. TDD does not differentiate tests in terms of high-level software requirements, low-level technical details or anything in between. One way of looking at BDD, therefore, is that it is an evolution of TDD which makes more specific choices. Behavioral specifications Another BDD suggestion relates to how the desired behavior should be specified. BDD suggests using a semi-formal format for behavioral specification which is borrowed from user story specifications from the field of object-oriented analysis and design.
The scenario aspect of this format may be regarded as an application of Hoare logic to behavioral specification of software using the domain-specific language. BDD suggests that business analysts and software developers should collaborate in this area and should specify behavior in terms of user stories, which are each explicitly documented. Each user story should, to some extent, follow the structure: Title An explicit title. Narrative A short introductory section with the following structure: As a: the person or role who will benefit from the feature; I want: the feature; so that: the benefit or value of the feature. Acceptance criteria A description of each specific scenario of the narrative with the following structure: Given: the initial context at the beginning of the scenario, in one or more clauses; When: the event that triggers the scenario; Then: the expected outcome, in one or more clauses. BDD does not prescribe exactly how this information should be formatted, but it does suggest that a team should decide on a relatively simple, standardized format with the above elements. It also suggests that the scenarios should be phrased declaratively rather than imperatively — in the business language, with no reference to elements of the UI through which the interactions take place. This format is referred to in Cucumber as the Gherkin language. Specification as a ubiquitous language BDD borrows the concept of the ubiquitous language from domain-driven design. A ubiquitous language is a (semi-)formal language that is shared by all members of a software development team — both software developers and non-technical personnel. The language in question is both used and developed by all team members as a common means of discussing the domain of the software in question. In this way BDD becomes a vehicle for communication between all the different roles in a software project. A common risk with software development includes communication breakdowns between developers and business stakeholders. BDD uses the specification of desired behavior as a ubiquitous language for the project team members. This is the reason that BDD insists on a semi-formal language for behavioral specification: some formality is a requirement for being a ubiquitous language. In addition, having such a ubiquitous language creates a domain model of specifications, so that specifications may be reasoned about formally. This model is also the basis for the different BDD-supporting software tools that are available. The example given above establishes a user story for a software system under development. This user story identifies a stakeholder, a business effect and a business value. It also describes several scenarios, each with a precondition, trigger and expected outcome. Each of these parts is exactly identified by the more formal part of the language (the term Given might be considered a keyword, for example) and may therefore be processed in some way by a tool that understands the formal parts of the ubiquitous language. Most BDD applications use text-based DSLs and specification approaches. However, graphical modeling of integration scenarios has also been applied successfully in practice, e.g., for testing purposes. Specialized tooling Much like TDD, BDD may involve using specialized tooling. BDD requires not only test code as does TDD, but also a document that describes behavior in a more human-readable language.
This requires a two-step process for executing the tests: reading and parsing the descriptions, then finding and executing the corresponding test implementation. This process makes BDD more laborious for developers. Proponents suggest that due to its human-readable nature the value of those documents extends to a relatively non-technical audience, and they can hence serve as a communication means for describing requirements ("features"). Tooling principles In principle, a BDD support tool is a testing framework for software, much like the tools that support TDD. However, where TDD tools tend to be quite free-format in what is allowed for specifying tests, BDD tools are linked to the definition of the ubiquitous language. The ubiquitous language allows business analysts to document behavioral requirements in a way that will also be understood by developers. The principle of BDD support tooling is to make these same requirements documents directly executable as a collection of tests. If this cannot be achieved for reasons related to the technical tool that enables the execution of the specifications, then either the style of writing the behavioral requirements must be altered or the tool must be changed. The exact implementation of behavioral requirements varies per tool, but agile practice has come up with the following general process: The tooling reads a specification document. The tooling directly understands completely formal parts of the ubiquitous language (such as the Given keyword in the example above). Based on this, the tool breaks each scenario up into meaningful clauses. Each individual clause in a scenario is transformed into some sort of parameter for a test for the user story. This part requires project-specific work by the software developers. The framework then executes the test for each scenario, with the parameters from that scenario. Story versus specification A separate subcategory of behavior-driven development is formed by tools that use specifications as an input language rather than user stories. Specification tools don't use user stories as an input format for test scenarios but rather use functional specifications for units that are being tested. These specifications often have a more technical nature than user stories and are usually less convenient for communication with business personnel than are user stories. An example of a specification for a stack might look like this: Specification: Stack When a new stack is created Then it is empty When an element is added to the stack Then that element is at the top of the stack When a stack has N elements And element E is on top of the stack Then a pop operation returns E And the new size of the stack is N-1 Such a specification may exactly specify the behavior of the component being tested, but is less meaningful to a business user. As a result, specification-based testing is seen in BDD practice as a complement to story-based testing and operates at a lower level. Specification testing is often seen as a replacement for free-format unit testing. The three amigos The "three amigos", also referred to as a "Specification Workshop", is a meeting where the product owner discusses the requirement in the form of specification by example with different stakeholders like the QA and development team. The key goal for this discussion is to trigger conversation and identify any missing specifications.
The discussion also gives a platform for QA, the development team and the product owner to converge and hear out each other's perspectives to enrich the requirement and also to make sure they are building the right product. The three amigos are: Business - The role of the business user is to define the problem only and not to venture into suggesting a solution. Development - The role of the developers involves suggesting ways to fix the problem. Testing - The role of testers is to question the solution, bring up as many different possibilities as possible for brainstorming through what-if scenarios, and help make the solution more precise to fix the problem. See also Specification by example Behat (PHP framework) Cynefin framework Concordion (Java framework) RSpec Gauge Jasmine (JavaScript testing framework) Squish GUI Tester (BDD GUI Testing Tool for JavaScript, Python, Perl, Ruby and Tcl) Use case Fitnesse has been used to roll out BDD References Software design Software development philosophies Software testing Articles with example Java code
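As a rough illustration of how BDD tooling maps specification clauses onto executable tests, the sketch below gives step definitions for the stack specification above using the Python behave tool. The module path, the exact step phrasings, and the choice of behave itself are assumptions of this example rather than part of any BDD standard.

```python
# steps/stack_steps.py -- hypothetical step definitions matched against a
# Gherkin feature file containing scenarios such as:
#   Scenario: A new stack is empty
#     When a new stack is created
#     Then it is empty
from behave import when, then

@when("a new stack is created")
def create_stack(context):
    context.stack = []

@then("it is empty")
def check_empty(context):
    assert len(context.stack) == 0

@when('element "{element}" is added to the stack')
def push_element(context, element):
    context.stack.append(element)

@then('element "{element}" is at the top of the stack')
def check_top(context, element):
    assert context.stack[-1] == element

@when("a pop operation is performed")
def pop_element(context):
    context.size_before = len(context.stack)
    context.popped = context.stack.pop()

@then('the pop returns "{element}" and the stack shrinks by one')
def check_pop(context, element):
    assert context.popped == element
    assert len(context.stack) == context.size_before - 1
```

When run, the tool parses the feature file, matches each clause against these decorated functions, and executes them in order, which is exactly the general process outlined above: the specification document is read, broken into clauses, and each clause is dispatched to project-specific test code.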
Behavior-driven development
[ "Engineering" ]
2,189
[ "Software engineering", "Software testing", "Software design", "Design" ]
2,966,204
https://en.wikipedia.org/wiki/241%20%28number%29
241 (two hundred [and] forty-one) is the natural number between 240 and 242. It is also a prime number. In mathematics 241 is the larger of the twin primes (239, 241). Twin primes are pairs of primes separated by 2. 241 is a regular prime and a lucky prime. Since 241 = 15 × 2⁴ + 1, it is a Proth prime. 241 is a repdigit in base 15 (111). 241 is the only known Lucas–Wieferich prime to (U, V) = (3, −1). References Integers
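The properties above are small enough to verify directly; the following sketch (with a simple trial-division primality helper written for this illustration) checks the twin-prime, Proth, and repdigit claims.

```python
def is_prime(n):
    """Trial-division primality test, sufficient for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(is_prime(239) and is_prime(241))      # True: (239, 241) is a twin prime pair
print(241 == 15 * 2**4 + 1 and 15 < 2**4)   # True: Proth form k*2^n + 1 with odd k < 2^n
print(int("111", 15) == 241)                # True: 241 is the repdigit 111 in base 15
```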
241 (number)
[ "Mathematics" ]
124
[ "Mathematical objects", "Number stubs", "Elementary mathematics", "Integers", "Numbers" ]
2,966,688
https://en.wikipedia.org/wiki/Hazardous%20Substances%20and%20New%20Organisms%20Act%201996
The Hazardous Substances and New Organisms Act (HSNO) is an Act of Parliament passed in New Zealand in 1996. The New Zealand Environmental Protection Authority (EPA) administers the Act. External links Text of the Act Hazardous Substances and New Organisms at the Ministry of Environment Environmental Protection Authority (EPA New Zealand) Statutes of New Zealand Environmental law in New Zealand 1996 in New Zealand law 1996 in the environment Hazardous materials
Hazardous Substances and New Organisms Act 1996
[ "Physics", "Chemistry", "Technology" ]
83
[ "Materials", "Hazardous materials", "Matter" ]
2,966,798
https://en.wikipedia.org/wiki/Space%20Access%20Society
The Space Access Society (SAS) is an organization dedicated to increasing the viability and reducing the cost of commercial access to space travel. It was founded by Henry Vanderbilt, who was the president from the organization's founding in 1992 until January 2006. Activities The SAS is primarily noted for two activities: Space policy activity and review reports, known as SAS updates, which were emailed and web-posted at irregular intervals. These included both factual current events and policy analysis, and were largely or entirely written by Henry Vanderbilt. Space Access conferences, held in the spring in Phoenix, Arizona, from 1994 to 2016. In 2019, the event took place in Fremont, California. There was also a Making Orbit conference held in Berkeley, California, in 1993. The Space Access conferences are well known in the reusable space launch development and space launch vehicle communities for bringing together key players at most of the companies working in the field, ranging from large conventional aerospace companies such as Boeing and Lockheed Martin to smaller companies such as Rotary Rocket, XCOR Aerospace, Pioneer Rocketplane, Armadillo Aerospace and the like. NASA and the Federal Aviation Administration Office of Commercial Space Transportation have also consistently sent representatives. Many Ansari X-Prize team members were consistent attendees. Presentations at the conference range from informal talks to viewgraphs and paper handouts. There are no conference proceedings, to encourage the free discussion of issues which participants may not want to go on documented record. Social networking among the industry leaders present is a major feature of the conferences as well. See also Private spaceflight Space advocacy Space colonization Space exploration Vision for Space Exploration References heise.de coverage of Space Access 2016 Conference held in Phoenix, Arizona HobbySpace.com coverage of Space Access 2008 Conference, Phoenix, Arizona HobbySpace.com coverage of Space Access 2007 Conference, Phoenix, Arizona HobbySpace.com coverage of Space Access 2006 Conference, Phoenix, Arizona External links Official website Non-profit organizations based in the United States Space colonization Space advocacy organizations Space access
Space Access Society
[ "Astronomy" ]
402
[ "Space advocacy organizations", "Astronomy organizations" ]
2,966,898
https://en.wikipedia.org/wiki/Outer%20billiards
Outer billiards is a dynamical system based on a convex shape in the plane. Classically, this system is defined for the Euclidean plane but one can also consider the system in the hyperbolic plane or in other spaces that suitably generalize the plane. Outer billiards differs from a usual dynamical billiard in that it deals with a discrete sequence of moves outside the shape rather than inside of it. Definitions The outer billiards map Let P be a convex shape in the plane. Given a point x0 outside P, there is typically a unique point x1 (also outside P) so that the line segment connecting x0 to x1 is tangent to P at its midpoint and a person walking from x0 to x1 would see P on the right. (See Figure.) The map F: x0 → x1 is called the outer billiards map. The inverse (or backwards) outer billiards map is also defined, as the map x1 → x0. One gets the inverse map simply by replacing the word right by the word left in the definition given above. The figure shows the situation in the Euclidean plane, but the definition in the hyperbolic plane is essentially the same. Orbits An outer billiards orbit is the set of all iterations of the point, namely ... x0 ↔ x1 ↔ x2 ↔ x3 ... That is, start at x0 and iteratively apply both the outer billiards map and the backwards outer billiards map. When P is a strictly convex shape, such as an ellipse, every point in the exterior of P has a well-defined orbit. When P is a polygon, some points might not have well-defined orbits, on account of the potential ambiguity of choosing the midpoint of the relevant tangent line. Nevertheless, in the polygonal case, almost every point has a well-defined orbit. An orbit is called periodic if it eventually repeats. An orbit is called aperiodic (or non-periodic) if it is not periodic. An orbit is called bounded (or stable) if some bounded region in the plane contains the whole orbit. An orbit is called unbounded (or unstable) if it is not bounded. Higher-dimensional spaces Defining an outer billiards system in a higher-dimensional space is beyond the scope of this article. Unlike the case of ordinary billiards, the definition is not straightforward. One natural setting for the map is a complex vector space. In this case, there is a natural choice of line tangent to a convex body at each point. One obtains these tangents by starting with the normals and using the complex structure to rotate 90 degrees. These distinguished tangent lines can be used to define the outer billiards map roughly as above. History Most people attribute the introduction of outer billiards to Bernhard Neumann in the late 1950s, though it seems that a few people cite an earlier construction in 1945, due to M. Day. Jürgen Moser popularized the system in the 1970s as a toy model for celestial mechanics. This system has been studied classically in the Euclidean plane, and more recently in the hyperbolic plane. One can also consider higher-dimensional spaces, though no serious study has yet been made. Bernhard Neumann informally posed the question as to whether or not one can have unbounded orbits in an outer billiards system, and Moser put it in writing in 1973. Sometimes this basic question has been called the Moser-Neumann question. This question, originally posed for shapes in the Euclidean plane and solved only recently, has been a guiding problem in the field. Moser-Neumann question Bounded orbits in the Euclidean plane In the 1970s, Jürgen Moser sketched a proof, based on K.A.M.
theory, that outer billiards relative to a 6-times-differentiable shape of positive curvature has all orbits bounded. In 1982, Raphael Douady gave the full proof of this result. A big advance in the polygonal case came over a period of several years when three teams of authors, Vivaldi-Shaidenko, Kolodziej, and Gutkin-Simanyi, each using different methods, showed that outer billiards relative to a quasirational polygon has all orbits bounded. The notion of quasirational is technical (see references) but it includes the class of regular polygons and convex rational polygons, namely those convex polygons whose vertices have rational coordinates. In the case of rational polygons, all the orbits are periodic. In 1995, Sergei Tabachnikov showed that outer billiards for the regular pentagon has some aperiodic orbits, thus clarifying the distinction between the dynamics in the rational and regular cases. In 1996, Philip Boyland showed that outer billiards relative to some shapes can have orbits which accumulate on the shape. In 2005, Daniel Genin showed that all orbits are bounded when the shape is a trapezoid, thus showing that quasirationality is not a necessary condition for the system to have all orbits bounded. (Not all trapezoids are quasirational.) Unbounded orbits in the Euclidean plane In 2007, Richard Schwartz showed that outer billiards has some unbounded orbits when defined relative to the Penrose kite, thus answering the original Moser-Neumann question in the affirmative. The Penrose kite is the convex quadrilateral from the kites-and-darts Penrose tilings. Subsequently, Schwartz showed that outer billiards has unbounded orbits when defined relative to any irrational kite. An irrational kite is a quadrilateral with the following property: One of the diagonals of the quadrilateral divides the region into two triangles of equal area and the other diagonal divides the region into two triangles whose areas are not rational multiples of each other. In 2008, Dmitry Dolgopyat and Bassam Fayad showed that outer billiards defined relative to the semidisk has unbounded orbits. The semidisk is the region one gets by cutting a disk in half. The proof of Dolgopyat-Fayad is robust, and also works for regions obtained by cutting a disk nearly in half, when the word nearly is suitably interpreted. Unbounded orbits in the hyperbolic plane In 2003, Filiz Doǧru and Sergei Tabachnikov showed that all orbits are unbounded for a certain class of convex polygons in the hyperbolic plane. The authors call such polygons large. (See the reference for the definition.) Filiz Doǧru and Samuel Otten then extended this work in 2011 by specifying the conditions under which a regular polygonal table in the hyperbolic plane has all orbits unbounded, that is, is large. Existence of periodic orbits In ordinary polygonal billiards, the existence of periodic orbits is a major unsolved problem. For instance, it is unknown if every triangular-shaped table has a periodic billiard path. More progress has been made for outer billiards, though the situation is far from well understood. As mentioned above, all the orbits are periodic when the system is defined relative to a convex rational polygon in the Euclidean plane. Moreover, it is a recent theorem of Chris Culter (written up by Sergei Tabachnikov) that outer billiards relative to any convex polygon has periodic orbits—in fact a periodic orbit outside of any given bounded region. Open questions Outer billiards is a subject still in its beginning phase.
Most problems are still unsolved. Here are some open problems in the area. Show that outer billiards relative to almost every convex polygon has unbounded orbits. Show that outer billiards relative to a regular polygon has almost every orbit periodic. The cases of the equilateral triangle and the square are trivial, and Tabachnikov answered this for the regular pentagon. These are the only cases known. More broadly, characterize the structure of the set of periodic orbits relative to the typical convex polygon. Understand the structure of periodic orbits relative to simple shapes in the hyperbolic plane, such as small equilateral triangles. See also Illumination problem References Dynamical systems
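For a convex polygon, the tangent point in the definition of the map is a vertex, so each step is a point reflection x1 = 2v − x0 through the supporting vertex v for which the polygon lies on the right of the ray from x0 through v. The sketch below is a minimal Python implementation under that reading; the function name, the tie-breaking tolerance, and the unit-square example are choices made for this illustration.

```python
import numpy as np

def outer_billiards_step(p, poly, eps=1e-12):
    """One step of the outer billiards map for a convex polygon.

    poly: vertices in counterclockwise order; p: a point outside poly.
    The supporting vertex v is the one with every vertex of poly on the
    right of the ray p -> v; the image point is the reflection 2v - p.
    """
    for v in poly:
        d = v - p
        # cross(d, w - p) <= 0 means vertex w lies on the right of the ray
        if all(d[0] * (w - p)[1] - d[1] * (w - p)[0] <= eps for w in poly):
            return 2 * v - p
    raise ValueError("no supporting vertex found; is p outside the polygon?")

# Illustrative example: the unit square, a rational polygon, so every
# orbit is periodic, consistent with the results quoted above.
square = [np.array(v, dtype=float) for v in [(0, 0), (1, 0), (1, 1), (0, 1)]]
orbit = [np.array([2.3, 0.7])]
for _ in range(8):
    orbit.append(outer_billiards_step(orbit[-1], square))
print(np.round(orbit, 3))
```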
Outer billiards
[ "Physics", "Mathematics" ]
1,694
[ "Mechanics", "Dynamical systems" ]
2,966,927
https://en.wikipedia.org/wiki/Astemizole
Astemizole (marketed under the brand name Hismanal, developmental code R43512) was a second-generation antihistamine drug with a long duration of action. Astemizole was discovered by Janssen Pharmaceutica in 1977. It was withdrawn from the market globally in 1999 because of rare but potentially fatal side effects (QTc interval prolongation and related arrhythmias due to hERG channel blockade). Pharmacology Astemizole is a histamine H1-receptor antagonist. It has anticholinergic and antipruritic effects. Astemizole is rapidly absorbed from the gastrointestinal tract and competitively binds to histamine H1 receptor sites in the gastrointestinal tract, uterus, blood vessels, and bronchial muscle. This suppresses the formation of edema and pruritus (caused by histamine). Despite some earlier reports that astemizole does not cross the blood–brain barrier, several studies have shown high permeability and high binding to protein folds associated with Alzheimer's disease. Astemizole may also act on histamine H3 receptors, thereby producing adverse effects. Astemizole also acts as a FIASMA (functional inhibitor of acid sphingomyelinase). Astemizole has been researched as a treatment for Creutzfeldt–Jakob disease (CJD). Toxicity Astemizole has an oral LD50 of approximately 2052 mg/kg (in mice). References External links Statement on Astemizole from Health Canada Phenethylamines Belgian inventions Benzimidazoles 4-Fluorophenyl compounds H1 receptor antagonists HERG blocker Janssen Pharmaceutica 4-Methoxyphenyl compounds Piperidines Withdrawn drugs
Astemizole
[ "Chemistry" ]
384
[ "Drug safety", "Withdrawn drugs" ]
2,967,256
https://en.wikipedia.org/wiki/Engel%20expansion
The Engel expansion of a positive real number x is the unique non-decreasing sequence of positive integers $(a_1, a_2, a_3, \ldots)$ such that $x = \frac{1}{a_1} + \frac{1}{a_1 a_2} + \frac{1}{a_1 a_2 a_3} + \cdots$. For instance, Euler's number e has the Engel expansion 1, 1, 2, 3, 4, 5, 6, 7, 8, ... corresponding to the infinite series $e = \frac{1}{1} + \frac{1}{1 \cdot 1} + \frac{1}{1 \cdot 1 \cdot 2} + \frac{1}{1 \cdot 1 \cdot 2 \cdot 3} + \cdots$. Rational numbers have a finite Engel expansion, while irrational numbers have an infinite Engel expansion. If x is rational, its Engel expansion provides a representation of x as an Egyptian fraction. Engel expansions are named after Friedrich Engel, who studied them in 1913. An expansion analogous to an Engel expansion, in which alternating terms are negative, is called a Pierce expansion. Engel expansions, continued fractions, and Fibonacci An Engel expansion can also be written as an ascending variant of a continued fraction: $x = \cfrac{1 + \cfrac{1 + \cfrac{1 + \cdots}{a_3}}{a_2}}{a_1}$. It has been claimed that ascending continued fractions such as this were studied as early as Fibonacci's Liber Abaci (1202). This claim appears to refer to Fibonacci's compound fraction notation in which a sequence of numerators and denominators sharing the same fraction bar represents an ascending continued fraction. If such a notation has all numerators 0 or 1, as occurs in several instances in Liber Abaci, the result is an Engel expansion. However, Engel expansion as a general technique does not seem to be described by Fibonacci. Algorithm for computing Engel expansions To find the Engel expansion of x, let $u_1 = x$ and define $a_k = \left\lceil \frac{1}{u_k} \right\rceil, \quad u_{k+1} = u_k a_k - 1,$ where $\lceil r \rceil$ is the ceiling function (the smallest integer not less than r). If $u_i = 0$ for any i, halt the algorithm. Iterated functions for computing Engel expansions Another equivalent method is to consider the map $g(x) = x \left\lceil \frac{1}{x} \right\rceil - 1$ and set $a_n = \left\lceil \frac{1}{g^{n-1}(x)} \right\rceil$, where $g^{n}$ denotes the n-fold iterate of g. A further variant, called the modified Engel expansion, is defined by a similar iteration. The transfer operator of the Engel map The Frobenius–Perron transfer operator of the Engel map acts on functions by summing over the inverse branches of the map. On the branch where $\lceil 1/x \rceil = n$, the map takes the form $g(x) = nx - 1$, so the inverse of the n-th component is $x = \frac{y+1}{n}$, which is found by solving $nx - 1 = y$ for x. Relation to the Riemann ζ function The Mellin transform of the Engel map is related to the Riemann zeta function. Example To find the Engel expansion of 1.175, we perform the following steps: $u_1 = 1.175$, $a_1 = \lceil 1/1.175 \rceil = 1$, $u_2 = 1.175 \cdot 1 - 1 = 0.175$; $a_2 = \lceil 1/0.175 \rceil = 6$, $u_3 = 0.175 \cdot 6 - 1 = 0.05$; $a_3 = \lceil 1/0.05 \rceil = 20$, $u_4 = 0.05 \cdot 20 - 1 = 0$. The series ends here. Thus, $1.175 = \frac{1}{1} + \frac{1}{1 \cdot 6} + \frac{1}{1 \cdot 6 \cdot 20}$ and the Engel expansion of 1.175 is (1, 6, 20). Engel expansions of rational numbers Every positive rational number has a unique finite Engel expansion. In the algorithm for Engel expansion, if ui is a rational number x/y, then $u_{i+1} = \frac{(-y) \bmod x}{y}$. Therefore, at each step, the numerator in the remaining fraction ui decreases and the process of constructing the Engel expansion must terminate in a finite number of steps. Every rational number also has a unique infinite Engel expansion: using the identity $\frac{1}{n} = \frac{1}{n+1} + \frac{1}{(n+1)^2} + \frac{1}{(n+1)^3} + \cdots,$ the final digit n in a finite Engel expansion can be replaced by an infinite sequence of (n + 1)s without changing its value. For example, 1/2 has the finite expansion (2) and the infinite expansion (3, 3, 3, ...). This is analogous to the fact that any rational number with a finite decimal representation also has an infinite decimal representation (see 0.999...). An infinite Engel expansion in which all terms are equal is a geometric series. Erdős, Rényi, and Szüsz asked for nontrivial bounds on the length of the finite Engel expansion of a rational number x/y; this question was answered by Erdős and Shallit, who proved that the number of terms in the expansion is $O(y^{1/3 + \varepsilon})$ for any ε > 0. The Engel expansion for arithmetic progressions Sums of Engel type whose terms are built from an arithmetic progression can be evaluated in closed form; in general the result can be expressed in terms of the lower incomplete gamma function.
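The ceiling algorithm above is short enough to sketch in code. The following is a minimal Python illustration (the function name engel_expansion is our own, and exact rational arithmetic via fractions.Fraction is an implementation choice, not part of the original algorithm):

```python
from fractions import Fraction

def engel_expansion(x, max_terms=20):
    """Engel expansion of a positive rational x via the ceiling algorithm:
    u_1 = x, a_k = ceil(1/u_k), u_{k+1} = u_k * a_k - 1; halt when u_k = 0."""
    u = Fraction(x)
    terms = []
    while u != 0 and len(terms) < max_terms:
        a = -((-u.denominator) // u.numerator)  # exact ceil(1/u)
        terms.append(a)
        u = u * a - 1
    return terms

print(engel_expansion(Fraction(47, 40)))  # 1.175 -> [1, 6, 20]
print(engel_expansion(Fraction(1, 2)))    # -> [2]
```

As the theory guarantees, the returned terms are non-decreasing, and the loop terminates for every positive rational input.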
Engel expansion for powers of q The Gauss identity for q-analogs can be used to express the Engel expansion for powers of q explicitly; furthermore, the expression can be written in closed form in terms of the second theta function. Engel expansions for some well-known constants π = (1, 1, 1, 8, 8, 17, 19, 300, 1991, 2492, ...); √2 = (1, 3, 5, 5, 16, 18, 78, 102, 120, 144, ...); e = (1, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, ...). More Engel expansions for constants can be found in the On-Line Encyclopedia of Integer Sequences. Growth rate of the expansion terms The coefficients ai of the Engel expansion typically exhibit exponential growth; more precisely, for almost all numbers in the interval (0,1], the limit $\lim_{n \to \infty} a_n^{1/n}$ exists and is equal to e. However, the subset of the interval for which this is not the case is still large enough that its Hausdorff dimension is one. The same typical growth rate applies to the terms in expansions generated by the greedy algorithm for Egyptian fractions. However, the set of real numbers in the interval (0,1] whose Engel expansions coincide with their greedy expansions has measure zero, and Hausdorff dimension 1/2. See also Euler's continued fraction formula Notes References External links Mathematical analysis Continued fractions Egyptian fractions
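The almost-everywhere growth rate $\lim_{n \to \infty} a_n^{1/n} = e$ can be probed numerically. A rough, self-contained Python sketch (a random rational with a very large denominator stands in for a "typical" real number; the term count and denominator size are arbitrary choices, and agreement with e is only approximate at finite n):

```python
import math
import random
from fractions import Fraction

def engel_terms(x, n):
    """First n Engel terms of x (stops early if the expansion terminates)."""
    u, terms = Fraction(x), []
    while u != 0 and len(terms) < n:
        a = -((-u.denominator) // u.numerator)  # exact ceil(1/u)
        terms.append(a)
        u = u * a - 1
    return terms

x = Fraction(random.randrange(1, 10**400), 10**400)
a = engel_terms(x, 30)
n = len(a)
print(a[-1] ** (1.0 / n), "vs e =", math.e)  # roughly comparable for large n
```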
Engel expansion
[ "Mathematics" ]
1,103
[ "Mathematical analysis", "Continued fractions", "Number theory" ]
2,967,608
https://en.wikipedia.org/wiki/Michael%20P.%20Collins
Michael Patrick Collins is a Canadian structural engineer whose research is focused on the design and evaluation of reinforced and prestressed concrete buildings, bridges, nuclear containment structures and offshore oil platforms. Biography Collins received his BE from the University of Canterbury in New Zealand in 1964 and his PhD from the University of New South Wales in Australia in 1968. He joined the University of Toronto in 1969, was appointed to the Bahen-Tanenbaum Chair in Civil Engineering in 1995 and was selected as a University Professor in 1999. He is currently working on his Doctorate of Science. Collins has concentrated his research effort on understanding how cracked reinforced concrete resists shear stress. Shear failures can cause concrete structures to collapse without warning and hence accurate analytical models for shear behaviour are critical for public safety. Unfortunately, most traditional shear design procedures rely upon empirical design rules which lack a rigorous theoretical basis and can be dangerous if applied to new situations. The Compression Field Theory, and subsequently the Modified Compression Field Theory, developed by Professor Collins and his colleagues at the University of Toronto, Division of Engineering Science, provides a rational basis for shear design and has received worldwide recognition. A Simplified Modified Compression Field Theory is currently the design standard in the Canadian code CAN/CSA A23.3-04, in place of the basic truss model, and is soon to be updated and included in the European building code. He is the author of over 80 technical papers, 8 of which have received a research prize. In 2005, Collins was chosen as one of 10 provincial finalists in TVOntario's first Best Lecturer competition. In 2011, Collins was elected as a Fellow of the Royal Society of Canada. References Canadian civil engineers Living people Structural engineers University of Canterbury alumni University of New South Wales alumni Academic staff of the University of Toronto Engineers from Toronto Scientists from Toronto 20th-century Canadian scientists 21st-century Canadian scientists 20th-century Canadian engineers 21st-century Canadian engineers Year of birth missing (living people) New Zealand emigrants to Canada Members of the Order of Canada People from Oakville, Ontario
Michael P. Collins
[ "Engineering" ]
405
[ "Structural engineering", "Structural engineers" ]
2,967,816
https://en.wikipedia.org/wiki/SeaRose%20FPSO
SeaRose FPSO is a floating production, storage and offloading vessel primarily located in the White Rose oil and gas field, approximately 350 kilometres (217 mi) east-southeast off the coast of Newfoundland, Canada in the North Atlantic Ocean. The White Rose field is currently operated by Cenovus Energy (as of 2021, after its acquisition of Husky Energy), with a 60% ownership interest. Suncor Energy owns a 35% interest and Nalcor owns the remaining 5%. SeaRose is approximately east of the successful Hibernia field and the more recent Terra Nova field. All three fields are in the Jeanne d'Arc Basin on the eastern edge of the famous Grand Banks fishing territory. SeaRose made her way from the Samsung Heavy Industries shipyard in Geoje, South Korea, to Marystown, Newfoundland, for final preparation in April 2004, a trip that took two months. In August 2005 she left Marystown for her work duty at Cenovus Energy's White Rose oil field. In January 2018, the C-NLOPB suspended White Rose operations because of Husky's failure to disconnect when an iceberg approached, contrary to Husky's ice management plan. As of June 2024, SeaRose was undergoing refit in dry dock in Belfast. References External links SeaRose at Ship Technology "SeaRose FPSO Husky's White Rose floater preparing to make sail" from Ocean Resources "SeaRose FPSO Arrives at White Rose Oil Field" from Rigzone "Husky Energy's SeaRose FPSO Arrives in Marystown" at Rigzone White Rose at Husky Energy White Rose field @ Offshore Technology Floating production storage and offloading vessels Service vessels of Canada Petroleum industry in Newfoundland and Labrador Petroleum industry in Canada 2004 ships Economy of Newfoundland and Labrador Ships built by Samsung Heavy Industries Oil platforms off Canada
SeaRose FPSO
[ "Chemistry" ]
374
[ "Floating production storage and offloading vessels", "Petroleum technology" ]
2,968,115
https://en.wikipedia.org/wiki/Cup%20%28unit%29
The cup is a cooking measure of volume, commonly associated with cooking and serving sizes. In the US, it is traditionally equal to half a US liquid pint (about 237 ml). Because actual drinking cups may differ greatly from the size of this unit, standard measuring cups may be used, with a metric cup commonly being rounded up to 240 millilitres (legal cup), but 250 ml is also used depending on the measuring scale. United States Customary cup In the United States, the customary cup is half of a US liquid pint. Legal cup The cup currently used in the United States for nutrition labelling is defined in United States law as 240 ml. Conversion table to US legal cup The following describes how the US legal cup can be expressed in other units of measure. Coffee cup A "cup" of coffee in the US is usually 4 fluid ounces (118 ml), brewed using 5 fluid ounces (148 ml) of water. Coffee carafes used with drip coffee makers, e.g. Black and Decker models, have markings for both water and brewed coffee as the carafe is also used for measuring water prior to brewing. A 12-cup carafe, for example, has markings for 4, 6, 8, 10, and 12 cups of water or coffee, which correspond to 20, 30, 40, 50, and 60 US fluid ounces (0.59, 0.89, 1.18, 1.48, and 1.77 litres) of water or 16, 24, 32, 40, and 48 US fluid ounces (0.47, 0.71, 0.95, 1.18, and 1.42 litres) of brewed coffee respectively, the difference being the volume absorbed by the coffee grounds and lost to evaporation during brewing. Commonwealth of Nations Metric cup Australia, Canada, New Zealand, and some other members of the Commonwealth of Nations, being former British colonies that have since metricated, employ a "metric cup" of 250 millilitres. Although derived from the metric system, it is not an SI unit. A "coffee cup" is 1.5 dL (i.e. 150 millilitres or 5.07 US customary fluid ounces), and is occasionally used in recipes; in older recipes, cup may mean "coffee cup". It is also used in the US to specify coffeemaker sizes (what can be referred to as a tasse à café). A "12-cup" US coffeemaker makes 57.6 US customary fluid ounces of coffee, which is equal to 6.8 metric cups of coffee. Canadian cup Canada now usually employs the metric cup of 250 ml, but its conventional cup was somewhat smaller than both American and imperial units. 1 Canadian cup = 8 imperial fluid ounces = 1/20 imperial gallon = 227.3 ml = 4/5 UK tumbler = 1 UK breakfast cup = 1 1/3 UK cups = 1 3/5 UK teacups = 3 1/5 UK coffee cups = 4 UK wine glasses ≈ 0.96 US customary cup ≈ 0.91 metric cup. 1 Canadian tablespoon = 1/2 imperial fluid ounce = 1 UK tablespoon ≈ 0.96 US customary tablespoon ≈ 0.95 international metric tablespoon ≈ 0.71 Australian metric tablespoon. 1 Canadian teaspoon = 1 UK teaspoon ≈ 0.96 US customary teaspoon ≈ 0.95 metric teaspoon. British cup In the United Kingdom, 1 cup is traditionally 6 imperial fluid ounces. The unit is named after a typical drinking cup. There are three related British culinary measurement units of volume bearing names with the word 'cup': the breakfast cup (8 imperial fluid ounces), the teacup (5 imperial fluid ounces), and the coffee cup (2 1/2 imperial fluid ounces). Further, there are two related British culinary measurement units of volume without the word 'cup' in their names: the tumbler (10 imperial fluid ounces) and the wine glass (2 imperial fluid ounces).
All six units are the traditional British equivalents of the US customary cup and the metric cup, used in situations where a US cook would use the US customary cup and a cook using metric units the metric cup. The breakfast cup is the most similar in size to the US customary cup and the metric cup. Which of these six units is used depends on the quantity or volume of the ingredient: there is division of labour between these six units, like the tablespoon and the teaspoon. British cookery books and recipes, especially those from the days before the UK's partial metrication, commonly use two or more of the aforesaid units simultaneously: for example, the same recipe may call for a 'tumblerful' of one ingredient and a 'wineglassful' of another one; or a 'breakfastcupful' or 'cupful' of one ingredient, a 'teacupful' of a second one, and a 'coffeecupful' of a third one. Unlike the US customary cup and the metric cup, a tumbler, a breakfast cup, a cup, a teacup, a coffee cup, and a wine glass are not measuring cups: they are simply everyday drinking vessels commonly found in British households and typically having the respective aforementioned capacities; due to long-term and widespread use, they have been transformed into measurement units for cooking. There is not a British imperial-unit-based culinary measuring cup. International Similar units in other languages and cultures are sometimes translated "cup", usually with various values around 1/5 to 1/4 of a litre. Latin American cup In Latin America, the amount of a "cup" (Spanish: taza) varies from country to country, using a cup of 200 ml (about 7.04 British imperial fluid ounces or 6.76 US customary fluid ounces), 250 ml (about 8.80 British imperial fluid ounces or 8.45 US customary fluid ounces), and the US legal or customary amount. Japanese cup The traditional Japanese unit equated with a "cup" size is the gō, legally defined in 1891 as a volume of approximately 180.4 ml (about 6.35 British imperial fluid ounces or 6.1 US customary fluid ounces), and still used for reckoning amounts of rice and sake. The Japanese later defined a "cup" as 200 ml. Russian cup The traditional Russian measurement system included two cup sizes: the "charka" (cup proper) and the "stakan" ("glass"). The charka was usually used for alcoholic drinks and is 123 ml (about 4.33 British imperial fluid ounces or 4.16 US customary fluid ounces), while the stakan, used for other liquids, was twice as big and is 246 ml (about 8.66 British imperial fluid ounces or 8.32 US customary fluid ounces). Since metrication, the charka was informally redefined as 100 ml (about 3.52 British imperial fluid ounces or 3.38 US customary fluid ounces), acquiring a new name of "stopka" (related to the traditional Russian measurement unit "stopa"), while there are currently two widely used glass sizes of 250 ml (about 8.80 British imperial fluid ounces or 8.45 US customary fluid ounces) and 200 ml (about 7.04 British imperial fluid ounces or 6.76 US customary fluid ounces). Dutch cup In the Netherlands, a "cup" (Dutch: kopje) traditionally amounts to 150 ml (about 5.28 British imperial fluid ounces or 5.07 US customary fluid ounces). However, in modern recipes, the US legal cup of 240 ml (about 8.45 British imperial fluid ounces or 8.12 US customary fluid ounces) is more commonly used. Dry measure In Europe, recipes normally weigh non-liquid ingredients in grams rather than measuring volume.
For example, where an American recipe might specify "1 cup of sugar and 2 cups of milk", a European recipe might specify "200 g sugar and 500 ml of milk". A precise conversion between the two measures takes into account the density of the ingredients, and some recipes specify both weight and volume to facilitate this conversion. Many European measuring cups have markings that indicate the weight of common ingredients for a given volume. See also Cooking weights and measures Notes References Measurement Units of volume Customary units of measurement in the United States Imperial units Metricated units Alcohol measurement Cooking weights and measures
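The volume-to-weight conversion described above is easy to make concrete. A minimal Python sketch (the cup sizes follow this article; the ingredient densities are illustrative assumptions, not authoritative values):

```python
# Cup sizes in millilitres, as given in this article.
CUP_ML = {
    "us_customary": 236.6,   # half a US liquid pint
    "us_legal": 240.0,
    "metric": 250.0,
    "uk": 170.5,             # 6 imperial fluid ounces
    "canadian": 227.3,       # 8 imperial fluid ounces
}

# Illustrative densities in g/ml (assumed values for this example).
DENSITY = {"water": 1.00, "milk": 1.03, "granulated_sugar": 0.85}

def cups_to_grams(cups, cup_kind, ingredient):
    """Convert a volume in cups to a weight in grams via density."""
    return cups * CUP_ML[cup_kind] * DENSITY[ingredient]

# "1 cup of sugar" in a US recipe, expressed in grams:
print(round(cups_to_grams(1, "us_customary", "granulated_sugar")))  # ~201 g
```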
Cup (unit)
[ "Physics", "Mathematics" ]
1,712
[ "Units of volume", "Physical quantities", "Metricated units", "Quantity", "Measurement", "Size", "Units of measurement" ]
2,968,531
https://en.wikipedia.org/wiki/Single%20address%20space%20operating%20system
In computer science, a single address space operating system (or SASOS) is an operating system that provides only one globally shared address space for all processes. In a single address space operating system, numerically identical (virtual memory) logical addresses in different processes all refer to exactly the same byte of data. In a traditional OS with private per-process address spaces, memory protection is based on address space boundaries ("address space isolation"). Single address-space operating systems instead make translation and protection orthogonal, which in no way weakens protection. The core advantage is that pointers (i.e. memory references) have global validity, meaning that a pointer's meaning is independent of the process using it. This allows sharing pointer-connected data structures across processes, and making them persistent, i.e. storing them on backing store. Some processor architectures have direct support for protection independent of translation. On such architectures, a SASOS may be able to perform context switches faster than a traditional OS. Such architectures include Itanium and Version 5 of the Arm architecture, as well as capability architectures such as CHERI. A SASOS should not be confused with a flat memory model, which provides no address translation and generally no memory protection. In contrast, a SASOS makes protection orthogonal to translation: it may be possible to name a data item (i.e. know its virtual address) while not being able to access it. SASOS projects using hardware-based protection include the following: Angel IBM i (formerly called OS/400) Iguana at NICTA, Australia Mungi at NICTA, Australia Nemesis Opal Scout Sombrero Related are OSes that provide protection through language-level type safety: Br1X Genera JX, a research Java OS Phantom OS Singularity Theseus OS Torsion See also Exokernel Hybrid kernel Kernel Microkernel Nanokernel Unikernel Flat memory model Virtual memory References Bibliography Operating systems
Single address space operating system
[ "Technology", "Engineering" ]
397
[ "Computing stubs", "Computer engineering stubs", "Computer engineering" ]
2,968,782
https://en.wikipedia.org/wiki/Weil%20reciprocity%20law
In mathematics, the Weil reciprocity law is a result of André Weil holding in the function field K(C) of an algebraic curve C over an algebraically closed field K. Given functions f and g in K(C), i.e. rational functions on C, then f((g)) = g((f)) where the notation has this meaning: (h) is the divisor of the function h, or in other words the formal sum of its zeroes and poles counted with multiplicity; and a function applied to a formal sum means the product (with multiplicities, poles counting as a negative multiplicity) of the values of the function at the points of the divisor. With this definition there must be the side-condition that the divisors of f and g have disjoint support (which can be removed). In the case of the projective line, this can be proved by manipulations with the resultant of polynomials. To remove the condition of disjoint support, for each point P on C a local symbol (f, g)P is defined, in such a way that the statement given is equivalent to saying that the product over all P of the local symbols is 1. When f and g both take the values 0 or ∞ at P, the definition is essentially in limiting or removable singularity terms, by considering (up to sign) $f^a g^b$ with a and b such that the function has neither a zero nor a pole at P. This is achieved by taking a to be the multiplicity of g at P, and −b the multiplicity of f at P. The definition is then $(f, g)_P = (-1)^{ab} (f^a g^b)(P)$. See for example Jean-Pierre Serre, Groupes algébriques et corps de classes, pp. 44–46, for this as a special case of a theory on mapping algebraic curves into commutative groups. There is a generalisation due to Serge Lang to abelian varieties (Lang, Abelian Varieties). References André Weil, Oeuvres Scientifiques I, p. 291 (in Lettre à Artin, a 1942 letter to Artin, explaining the 1940 Comptes Rendus note Sur les fonctions algébriques à corps de constantes finis) for a proof in the Riemann surface case Algebraic curves Theorems in algebraic geometry
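As a concrete sanity check of f((g)) = g((f)) on the projective line, one can pick rational functions whose zeroes and poles avoid infinity and have disjoint supports, and evaluate each function on the divisor of the other. A minimal Python sketch (the particular functions f and g are our own illustrative choices):

```python
from fractions import Fraction

# Rational functions on P^1 with zeros and poles away from infinity
# and with disjoint supports:
#   f = (x - 1)/(x - 2)   so the divisor (f) = (1) - (2)
#   g = (x - 3)/(x - 4)   so the divisor (g) = (3) - (4)
def f(x): return Fraction(x - 1, x - 2)
def g(x): return Fraction(x - 3, x - 4)

f_of_div_g = f(3) / f(4)   # f evaluated on the divisor of g
g_of_div_f = g(1) / g(2)   # g evaluated on the divisor of f

assert f_of_div_g == g_of_div_f == Fraction(4, 3)
print(f_of_div_g, g_of_div_f)  # both 4/3, as the reciprocity law predicts
```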
Weil reciprocity law
[ "Mathematics" ]
501
[ "Theorems in algebraic geometry", "Theorems in geometry" ]
2,969,334
https://en.wikipedia.org/wiki/FFTW
The Fastest Fourier Transform in the West (FFTW) is a software library for computing discrete Fourier transforms (DFTs) developed by Matteo Frigo and Steven G. Johnson at the Massachusetts Institute of Technology. FFTW is one of the fastest free software implementations of the fast Fourier transform (FFT). It implements the FFT algorithm for real and complex-valued arrays of arbitrary size and dimension. Library FFTW expeditiously transforms data by supporting a variety of algorithms and choosing the one (a particular decomposition of the transform into smaller transforms) it estimates or measures to be preferable in the particular circumstances. It works best on arrays of sizes with small prime factors, with powers of two being optimal and large primes being worst case (but still O(n log n)). To decompose transforms of composite sizes into smaller transforms, it chooses among several variants of the Cooley–Tukey FFT algorithm (corresponding to different factorizations and/or different memory-access patterns), while for prime sizes it uses either Rader's or Bluestein's FFT algorithm. Once the transform has been broken up into subtransforms of sufficiently small sizes, FFTW uses hard-coded unrolled FFTs for these small sizes that were produced (at compile time, not at run time) by code generation; these routines use a variety of algorithms including Cooley–Tukey variants, Rader's algorithm, and prime-factor FFT algorithms. For a sufficiently large number of repeated transforms it is advantageous to measure the performance of some or all of the supported algorithms on the given array size and platform. These measurements, which the authors refer to as "wisdom", can be stored in a file or string for later use. FFTW has a "guru interface" that intends "to expose as much as possible of the flexibility in the underlying FFTW architecture". This allows, among other things, multi-dimensional transforms and multiple transforms in a single call (e.g., where the data is interleaved in memory). FFTW has limited support for out-of-order transforms (using the Message Passing Interface (MPI) version). The data reordering incurs an overhead, which for in-place transforms of arbitrary size and dimension is non-trivial to avoid. It is undocumented for which transforms this overhead is significant. FFTW is licensed under the GNU General Public License. It is also licensed commercially (for a cost of up to $12,500) by MIT and is used in the commercial MATLAB matrix package for calculating FFTs. FFTW is written in the C language, but Fortran and Ada interfaces exist, as well as interfaces for a few other languages. While the library itself is C, the code is actually generated from a program called 'genfft', which is written in OCaml. In 1999, FFTW won the J. H. Wilkinson Prize for Numerical Software. See also FFTPACK References External links Numerical libraries FFT algorithms OCaml software Free mathematics software Massachusetts Institute of Technology software Software using the GNU General Public License
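FFTW's core workflow, plan first and then execute many times, can be sketched through the third-party pyfftw Python bindings. This is a hedged illustration: pyfftw is not FFTW's own C API, and the exact names used here (empty_aligned, FFTW, export_wisdom, import_wisdom) should be treated as assumptions about those bindings:

```python
import pyfftw

n = 1024
a = pyfftw.empty_aligned(n, dtype='complex128')  # aligned input array
b = pyfftw.empty_aligned(n, dtype='complex128')  # aligned output array

# Planning: with FFTW_MEASURE, FFTW times candidate decompositions of the
# transform for this size and platform and keeps the fastest plan.
plan = pyfftw.FFTW(a, b, flags=('FFTW_MEASURE',))

a[:] = 0.0
a[0] = 1.0
plan()  # execute the planned transform; the result is written into b

# The measured plans ("wisdom") can be exported and re-imported later,
# so the cost of planning is paid only once per size and platform.
wisdom = pyfftw.export_wisdom()
pyfftw.import_wisdom(wisdom)
```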
FFTW
[ "Mathematics" ]
651
[ "Free mathematics software", "Mathematical software" ]
2,969,494
https://en.wikipedia.org/wiki/Joseph%20Wedderburn
Joseph Henry Maclagan Wedderburn FRSE FRS (2 February 1882 – 9 October 1948) was a Scottish mathematician, who taught at Princeton University for most of his career. A significant algebraist, he proved that a finite division algebra is a field (Wedderburn's little theorem), and part of the Artin–Wedderburn theorem on simple algebras. He also worked on group theory and matrix algebra. His younger brother was the lawyer Ernest Wedderburn. Life Joseph Wedderburn was the tenth of fourteen children of Alexander Wedderburn of Pearsie, a physician, and Anne Ogilvie. He was educated at Forfar Academy then in 1895 his parents sent Joseph and his younger brother Ernest to live in Edinburgh with their paternal uncle, J R Maclagan Wedderburn, allowing them to attend George Watson's College. This house was at 3 Glencairn Crescent in the West End of the city. In 1898 Joseph entered the University of Edinburgh. In 1903, he published his first three papers, worked as an assistant in the Physical Laboratory of the University, obtained an MA degree with First Class Honours in mathematics, and was elected a Fellow of the Royal Society of Edinburgh, upon the proposal of George Chrystal, James Gordon MacGregor, Cargill Gilston Knott and William Peddie. Aged only 21 he remains one of the youngest Fellows ever. He then studied briefly at the University of Leipzig and the University of Berlin, where he met the algebraists Frobenius and Schur. A Carnegie Scholarship allowed him to spend the 1904–1905 academic year at the University of Chicago where he worked with Oswald Veblen, E. H. Moore, and most importantly, Leonard Dickson, who was to become the most important American algebraist of his day. Returning to Scotland in 1905, Wedderburn worked for four years at the University of Edinburgh as an assistant to George Chrystal, who supervised his D.Sc, awarded in 1908 for a thesis titled On Hypercomplex Numbers. He gained a PhD in algebra from the University of Edinburgh in 1908. From 1906 to 1908, Wedderburn edited the Proceedings of the Edinburgh Mathematical Society. In 1909, he returned to the United States to become a Preceptor in Mathematics at Princeton University; his colleagues included Luther P. Eisenhart, Oswald Veblen, Gilbert Ames Bliss, and George Birkhoff. Upon the outbreak of the First World War, Wedderburn enlisted in the British Army as a private. He was the first person at Princeton to volunteer for that war, and had the longest war service of anyone on the staff. He served with the Seaforth Highlanders in France, as Lieutenant (1914), then as Captain of the 10th Battalion (1915–18). While a Captain in the Fourth Field Survey Battalion of the Royal Engineers in France, he devised sound-ranging equipment to locate enemy artillery. He returned to Princeton after the war, becoming Associate Professor in 1921 and editing the Annals of Mathematics until 1928. While at Princeton, he supervised only three PhDs, one of them being Nathan Jacobson. In his later years, Wedderburn became an increasingly solitary figure and may even have suffered from depression. His isolation after his 1945 early retirement was such that his death from a heart attack was not noticed for several days. His Nachlass was destroyed, as per his instructions. Wedderburn received the MacDougall-Brisbane Gold Medal and Prize from the Royal Society of Edinburgh in 1921, and was elected to the Royal Society of London in 1933. 
Work In all, Wedderburn published about 40 books and papers, making important advances in the theory of rings, algebras and matrix theory. In 1905, Wedderburn published a paper that included three claimed proofs of a theorem stating that a noncommutative finite division ring could not exist. The proofs all made clever use of the interplay between the additive group of a finite division algebra A, and the multiplicative group A* = A − {0}. Parshall (1983) notes that the first of these three proofs had a gap not noticed at the time. Meanwhile, Wedderburn's Chicago colleague Dickson also found a proof of this result but, believing Wedderburn's first proof to be correct, Dickson acknowledged Wedderburn's priority. But Dickson also noted that Wedderburn constructed his second and third proofs only after having seen Dickson's proof. Parshall concludes that Dickson should be credited with the first correct proof. This theorem yields insights into the structure of finite projective geometries. In their paper on "Non-Desarguesian and non-Pascalian geometries" in the 1907 Transactions of the American Mathematical Society, Wedderburn and Veblen showed that in these geometries, Pascal's theorem is a consequence of Desargues' theorem. They also constructed finite projective geometries which are neither "Desarguesian" nor "Pascalian" (the terminology is Hilbert's). Wedderburn's best-known paper was his sole-authored "On hypercomplex numbers," published in the 1907 Proceedings of the London Mathematical Society, and for which he was awarded the D.Sc. the following year. This paper gives a complete classification of simple and semisimple algebras. He then showed that every finite-dimensional semisimple algebra can be constructed as a direct sum of simple algebras and that every simple algebra is isomorphic to a matrix algebra for some division ring. The Artin–Wedderburn theorem generalises these results to algebras with the descending chain condition. His best known book is his Lectures on Matrices (1934), which was praised by Jacobson. See also Hypercomplex numbers Wedderburn–Etherington number References Further reading Robert Hooke (1984) Recollections of Princeton, 1939–1941 Karen Parshall (1983) "In pursuit of the finite division algebra theorem and beyond: Joseph H M Wedderburn, Leonard Dickson, and Oswald Veblen," Archives of International History of Science 33: 274–99. Karen Parshall (1985) "Joseph H. M. Wedderburn and the structure theory of algebras," Archive for History of Exact Sciences 32: 223–349. Karen Parshall (1992) "New Light on the Life and Work of Joseph Henry Maclagan Wedderburn (1882–1948)," in Menso Folkerts et al. (eds.): Amphora: Festschrift für Hans Wußing zu seinem 65. Geburtstag, Birkhäuser Verlag, 523–537. 1882 births 1948 deaths 20th-century American mathematicians People from Forfar People educated at Forfar Academy People educated at George Watson's College Alumni of the University of Edinburgh Leipzig University alumni Humboldt University of Berlin alumni University of Chicago alumni Academics of the University of Edinburgh Princeton University faculty Fellows of the Royal Society of Edinburgh Fellows of the Royal Society Seaforth Highlanders officers Royal Engineers officers Algebraists British Army personnel of World War I Scottish emigrants to the United States Scottish mathematicians Military personnel from Angus, Scotland
Joseph Wedderburn
[ "Mathematics" ]
1,478
[ "Algebra", "Algebraists" ]
2,969,642
https://en.wikipedia.org/wiki/PolyAMPS
PolyAMPS, or poly(2-acrylamido-2-methyl-1-propanesulfonic acid) (trademark of the Lubrizol Corporation), is an organic polymer. It is water-soluble, forms gels when cross-linked, and acts as a strong anionic polyelectrolyte. It can be used for ion exchange resins. It can form hydrogels. See also 2-Acrylamido-2-methylpropane sulfonic acid (AMPS) References Acrylate polymers Polyelectrolytes
PolyAMPS
[ "Chemistry" ]
122
[ "Polymer stubs", "Organic chemistry stubs" ]
2,969,710
https://en.wikipedia.org/wiki/PolyAPTAC
PolyAPTAC, or poly(acrylamido-N-propyltrimethylammonium chloride), is an organic polymer. It is water-soluble, forms gels when cross-linked, and acts as a cationic polyelectrolyte. It can be used for ion exchange resins. It can form hydrogels. PolyMAPTAC, or poly[3-(methacryloylamino)propyl]trimethylammonium chloride, is similar. References Acrylate polymers Polyelectrolytes
PolyAPTAC
[ "Chemistry" ]
121
[ "Polymer stubs", "Organic chemistry stubs" ]
2,969,851
https://en.wikipedia.org/wiki/EcoHealth%20Alliance
EcoHealth Alliance (EHA) is a US-based non-governmental organization with a stated mission of protecting people, animals, and the environment from emerging infectious diseases. The nonprofit organization focuses on research aimed at preventing pandemics and promoting conservation in hotspot regions worldwide. The EcoHealth Alliance focuses on diseases caused by deforestation and increased interaction between humans and wildlife. The organization has researched the emergence of diseases such as Severe Acute Respiratory Syndrome (SARS), Nipah virus, Middle East respiratory syndrome (MERS), Rift Valley fever, the Ebola virus, and COVID-19. The EcoHealth Alliance also advises the World Organization for Animal Health (OIE), the International Union for Conservation of Nature (IUCN), the United Nations Food and Agriculture Organization (FAO), and the World Health Organization (WHO) on global wildlife trade, threats of disease, and the environmental damage posed by these. Following the outbreak of the COVID-19 pandemic, EcoHealth's ties with the Wuhan Institute of Virology were put into question in relation to investigations into the origin of COVID-19. Citing these concerns, the National Institutes of Health (NIH) withdrew funding from the organization in April 2020. Significant criticism followed this decision, including a joint letter signed by 77 Nobel laureates and 31 scientific societies. The NIH later reinstated funding to the organization as one of 11 institutions partnering in the Centers for Research in Emerging Infectious Diseases (CREID) initiative in August 2020, but all activities funded by the grant remain suspended. In 2022, the NIH terminated the EcoHealth Alliance grant, stating that "EcoHealth Alliance had not been able to hand over lab notebooks and other records from its Wuhan partner that relate to controversial experiments involving modified bat viruses, despite multiple requests." In 2023, an audit by the Office of Inspector General of the Department of Health and Human Services found that "NIH did not effectively monitor or take timely action to address" compliance problems with the EcoHealth Alliance. In December 2023, the EcoHealth Alliance denied allegations that it double-billed the NIH and United States Agency for International Development for research in China. In May 2024, the United States Department of Health and Human Services banned all federal funding for the EcoHealth Alliance. History Founded under the name Wildlife Preservation Trust International in 1971 by British naturalist, author, and television personality Gerald Durrell, it then became The Wildlife Trust in 1999. In the fall of 2010, the organization changed its name to EcoHealth Alliance. The rebrand reflected a change in the organization's focus, moving from a solely conservation-focused nonprofit, which concentrated mainly on the captive breeding of endangered species, to an environmental health organization with its foundation in conservation. The organization held an early professional conservation medicine meeting in 1996. In 2002, they published an edited volume on the field through Oxford University Press: Conservation Medicine: Ecological Health in Practice. In February 2008, they published a paper in Nature entitled "Global trends in emerging infectious diseases" which featured an early rendition of a global disease hotspot map. Using epidemiological, social, and environmental data from the past 50 years, the map outlined regions of the globe most at risk for emergent disease threats.
EcoHealth Alliance's funding comes mostly from U.S. federal agencies such as the Department of Defense, Department of Homeland Security, and U.S. Agency for International Development. Between 2011 and 2020, its annual budget fluctuated between US$9 and US$15 million per year. COVID-19 pandemic Following the outbreak of the COVID-19 pandemic, EcoHealth Alliance has been the subject of controversy and increased scrutiny due to its ties to the Wuhan Institute of Virology (WIV)—which has been at the center of speculation since early 2020 that SARS-CoV-2 may have escaped in a lab incident. Prior to the pandemic, EcoHealth Alliance was the only U.S.-based organization researching coronavirus evolution and transmission in China, where they partnered with the WIV, among others. EcoHealth president Peter Daszak co-authored a February 2020 letter in The Lancet condemning "conspiracy theories suggesting that COVID-19 does not have a natural origin". However, Daszak failed to disclose EcoHealth's ties to the WIV, which some observers noted as an apparent conflict of interest. In June 2021, The Lancet published an addendum in which Daszak disclosed his cooperation with researchers in China. In April 2020, the NIH ordered EcoHealth Alliance to cease spending the remaining $369,819 from its current NIH grant at the request of the Trump administration, pressuring them by stating "it must hand over information and materials from the Chinese research facility to resume funding for suspended grant" in reference to the Wuhan Institute of Virology. The canceled grant was supposed to run through 2024. Funding from NIH resumed in August 2020 after an uproar from "77 U.S. Nobel laureates and 31 scientific societies". Work conducted at the Wuhan Institute of Virology under an NIH grant to the EHA has been at the center of political controversies during the pandemic. One such controversy centered on whether any experiments conducted under the grant could be accurately described as "gain-of-function" (GoF) research. NIH officials (including Anthony Fauci) unequivocally denied during 2020 congressional hearings that the EHA had conducted GoF research with NIH funding. In October 2021, the EHA submitted a progress report detailing the results of a past experiment where some laboratory mice lost more weight than expected after being infected with a modified bat coronavirus. The NIH subsequently sent a letter to the congressional House Committee on Energy and Commerce describing this experiment, but did not refer to it as "gain-of-function." Whether such research qualifies as "gain-of-function" is a matter of considerable debate among relevant experts. In May 2024, the United States Department of Health and Human Services banned all federal funding for the EcoHealth Alliance, saying that the EcoHealth Alliance did not properly monitor research activities at the WIV and failed to report on their high-risk experiments. On 17 January 2025, the Department of Health and Human Services (HHS) issued formal, 5-year debarments for both Daszak and his group. EcoHealth had dismissed Daszak as president as of 6 January, according to an HHS notice.
Scientists in the field collect samples from local fauna in order to track the spread of potentially harmful pathogens and to stop them from becoming outbreaks. Scientists also train local technicians and veterinarians in animal sampling and information gathering. Active countries include Bangladesh, Cameroon, China, Democratic Republic of the Congo, Egypt, Ethiopia, Guinea, India, Indonesia, Jordan, Kenya, Liberia, Malaysia, Myanmar, Nepal, Sierra Leone, Sudan, South Sudan, Thailand, Uganda, and Vietnam. IDEEAL IDEEAL (Infectious Disease Emergence and Economics of Altered Landscapes Program) attempts to investigate the impact of deforestation and land-use change on the risk of zoonoses in Sabah, Malaysia. This project focuses on the local palm oil industry in particular. The study also offers to the country's corporate leaders and policymakers long-term alternatives to large-scale deforestation. The program is headquartered at the Malaysian Development Health Research Unit (DHRU), which was developed in collaboration with the Malaysian University of Sabah. Bat Conservation A growing body of research indicates that bats are an important factor in both ecosystem health and disease emergence. A number of hypotheses have been proposed for the high number of zoonoses that have come from bat populations in recent decades. One group of researchers hypothesized “that flight, a factor common to all bats but to no other mammals, provides an intensive selective force for coexistence with viral parasites through a daily cycle that elevates metabolism and body temperature analogous to the fever response in other mammals. On an evolutionary scale, this host-virus interaction might have resulted in the large diversity of zoonotic viruses in bats, possibly through bat viruses adapting to be more tolerant of the fever response and less virulent to their natural hosts.” Project Deep Forest According to the FAO (Food and Agriculture Organization), roughly 18 million acres of forest (roughly the size of Panama) are lost every year due to deforestation. Increased contact between humans and the animal species whose habitat is being destroyed has led to increases in zoonotic disease. EcoHealth Alliance scientists are testing species for pathogens in areas with very little, moderate, and complete deforestation in order to track potential outbreaks. This data is used to promote the preservation of natural lands and diminish the negative effects of land-use change. Project DEFUSE Project DEFUSE was a rejected DARPA grant application, which proposed to sample bat coronaviruses from various locations in China and Southeast Asia. To evaluate whether bat coronaviruses might spill over into the human population, the grantees proposed to create chimeric coronaviruses which were mutated in different locations, before evaluating their ability to infect human cells in the laboratory. One proposed alteration was to modify bat coronaviruses to insert a cleavage site for the Furin protease at the S1/S2 junction of the spike (S) viral protein. Another part of the grant aimed to create noninfectious protein-based vaccines containing just the spike protein of dangerous coronaviruses. These vaccines would then be administered to bats in caves in southern China to help prevent future outbreaks. Co-investigators on the rejected proposal included Ralph Baric from UNC, Linfa Wang from Duke–NUS Medical School in Singapore, and Shi Zhengli from the Wuhan Institute of Virology. 
See also Durrell Wildlife Conservation Trust Wildlife Preservation Canada References External links Durrell Wildlife Conservation Trust Wildlife Preservation Canada Environmental organizations based in the United States Environmental microbiology
EcoHealth Alliance
[ "Environmental_science" ]
2,146
[ "Environmental microbiology" ]
2,970,014
https://en.wikipedia.org/wiki/Beam%20emittance
In accelerator physics, emittance is a property of a charged particle beam. It refers to the area occupied by the beam in a position-and-momentum phase space. Each particle in a beam can be described by its position and momentum along each of three orthogonal axes, for a total of six position and momentum coordinates. When the position and momentum for a single axis are plotted on a two-dimensional graph, the average spread of the coordinates on this plot is the emittance. As such, a beam will have three emittances, one along each axis, which can be described independently. As particle momentum along an axis is usually described as an angle relative to that axis, an area on a position-momentum plot will have dimensions of length × angle (for example, millimeters × milliradians). Emittance is important for analysis of particle beams. As long as the beam is only subjected to conservative forces, Liouville's theorem shows that emittance is a conserved quantity. If the distribution over phase space is represented as a cloud in a plot, emittance is the area of the cloud. A variety of more exact definitions handle the fuzzy borders of the cloud and the case of a cloud that does not have an elliptical shape. In addition, the emittance along each axis is independent unless the beam passes through beamline elements (such as solenoid magnets) which correlate them. A low-emittance particle beam is a beam where the particles are confined to a small distance and have nearly the same momentum, which is a desirable property for ensuring that the entire beam is transported to its destination. In a colliding beam accelerator, keeping the emittance small means that the likelihood of particle interactions will be greater, resulting in higher luminosity. In a synchrotron light source, low emittance means that the resulting x-ray beam will be small, resulting in higher brightness. Definitions The coordinate system used to describe the motion of particles in an accelerator has three orthogonal axes, but rather than being centered on a fixed point in space, they are oriented with respect to the trajectory of an "ideal" particle moving through the accelerator with no deviation from the intended speed, position, or direction. Motion along this design trajectory is referred to as the longitudinal axis, and the two axes perpendicular to this trajectory (usually oriented horizontally and vertically) are referred to as transverse axes. The most common convention is for the longitudinal axis to be labelled z and the transverse axes to be labelled x and y. Emittance has units of length, but is usually referred to as "length × angle", for example, "millimeter × milliradians". It can be measured in all three spatial dimensions. Geometric transverse emittance When a particle moves through a circular accelerator or storage ring, the position and angle of the particle in the x direction will trace an ellipse in phase space. (All of this section applies equivalently to y and y′.) This ellipse can be described by the following equation: $\gamma x^2 + 2\alpha x x' + \beta x'^2 = \epsilon$, where x and x′ are the position and angle of the particle, and $\alpha$, $\beta$ and $\gamma$ are the Courant–Snyder (Twiss) parameters, calculated from the shape of the ellipse. The emittance is given by $\pi \epsilon$, and has units of length × angle. However, many sources will move the factor of π into the units of emittance rather than including the specific value, giving units of "length × angle × π". This formula is the single particle emittance, which describes the area enclosed by the trajectory of a single particle in phase space.
However, emittance is more useful as a description of the collective properties of the particles in a beam, rather than of a single particle. Since beam particles are not necessarily distributed uniformly in phase space, definitions of emittance for an entire beam will be based on the area of the ellipse required to enclose a specific fraction of the beam particles. If the beam is distributed in phase space with a Gaussian distribution, the emittance of the beam may be specified in terms of the root mean square value of x and the fraction of the beam to be included in the emittance. The equation for the emittance of a Gaussian beam is: $\epsilon = -2 \ln(1-F)\,\pi\,\frac{\sigma^2}{\beta}$, where $\sigma$ is the root mean square width of the beam, $\beta$ is the Courant–Snyder beta function, and F is the fraction of the beam to be enclosed in the ellipse, given as a number between 0 and 1. Here the factor of π is shown on the right of the equation, and would often be included in the units of emittance, rather than being multiplied into the computed value. The value chosen for F will depend on the application and the author, and a number of different choices exist in the literature. Some common choices and their equivalent definition of emittance are:

{| class="wikitable"
|-
! Emittance (in units of πσ²/β) !! F
|-
| 1/3 || 0.15
|-
| 1 || 0.39
|-
| 4 || 0.87
|-
| 6 || 0.95
|}

While the x and y axes are generally equivalent mathematically, in horizontal rings where the x coordinate represents the plane of the ring, consideration of dispersion can be added to the equation of the emittance. Because the magnetic force of a bending magnet is dependent on the energy of the particle being bent, particles of different energies will be bent along different trajectories through the magnet, even if their initial position and angle are the same. The effect of this dispersion on the beam emittance is accounted for by subtracting the dispersive contribution from the measured beam width: $\epsilon = \pi\,\frac{\sigma^2 - (D(s)\,\sigma_p/p_0)^2}{3\beta}$, where D(s) is the dispersion at location s, $p_0$ is the ideal particle momentum, and $\sigma_p$ is the root mean square of the momentum difference of the particles in the beam from the ideal momentum. (This definition assumes F = 0.15.) Longitudinal emittance The geometrical definition of longitudinal emittance is more complex than that of transverse emittance. While the x and y coordinates represent deviation from a reference trajectory which remains static, the z coordinate represents deviation from a reference particle, which is itself moving with a specified energy. This deviation can be expressed in terms of distance along the reference trajectory, time of flight along the reference trajectory (how "early" or "late" the particle is compared to the reference), or phase (for a specified reference frequency). In turn, the z′ coordinate is generally not expressed as an angle. Since z′ represents the change in z over time, it corresponds to the forward motion of the particle. This can be given in absolute terms, as a velocity, momentum, or energy, or in relative terms, as a fraction of the position, momentum, or energy of the reference particle. However, the fundamental concept of emittance is the same—the positions of the particles in a beam are plotted along one axis of a phase space plot, the rate of change of those positions over time is plotted on the other axis, and the emittance is a measure of the area occupied on that plot. One possible definition of longitudinal emittance is the area enclosed by a contour which tightly encloses the beam particles in phase space, where the longitudinal coordinate is taken to be the phase of the particles relative to a reference particle at the reference frequency.
Definitions of the longitudinal emittance often must be evaluated numerically, rather than analytically. RMS emittance The geometric definition of emittance assumes that the distribution of particles in phase space can be reasonably well characterized by an ellipse. In addition, the definitions using the root mean square of the particle distribution assume a Gaussian particle distribution. In cases where these assumptions do not hold, it is still possible to define a beam emittance using the moments of the distribution. Here, the RMS emittance ($\epsilon_{\mathrm{rms}}$) is defined to be $\epsilon_{\mathrm{rms}} = \sqrt{\langle x^2 \rangle \langle x'^2 \rangle - \langle x x' \rangle^2}$, where $\langle x^2 \rangle$ is the variance of the particle's position, $\langle x'^2 \rangle$ is the variance of the angle a particle makes with the direction of travel in the accelerator ($x' = dx/ds$, with s along the direction of travel), and $\langle x x' \rangle$ represents an angle-position correlation of particles in the beam. This definition is equivalent to the geometric emittance in the case of an elliptical particle distribution in phase space. The emittance may also be expressed as the determinant of the variance-covariance matrix of the beam's phase space coordinates, $\epsilon_{\mathrm{rms}} = \sqrt{\det \Sigma}$ with $\Sigma = \begin{pmatrix} \langle x^2 \rangle & \langle x x' \rangle \\ \langle x x' \rangle & \langle x'^2 \rangle \end{pmatrix}$, where it becomes clear that the quantity describes an effective area occupied by the beam in terms of its second order statistics. Depending on context, some definitions of RMS emittance will add a scaling factor to correspond to a fraction of the total distribution, to facilitate comparison with geometric emittances using the same fraction. RMS emittance in higher dimensions It is sometimes useful to talk about phase space area for either four-dimensional transverse phase space (i.e. x, x′, y, y′) or the full six-dimensional phase space of particles (i.e. x, x′, y, y′, z, z′). The RMS emittance generalizes to the full six-dimensional space as $\epsilon_{\mathrm{rms,6D}} = \sqrt{\det \Sigma_6}$, where $\Sigma_6$ is the 6 × 6 covariance matrix of the coordinates (x, x′, y, y′, z, z′). In the absence of correlations between different axes in the particle accelerator, most of these matrix elements become zero and we are left with a product of the emittance along each axis. Normalized emittance Although the previous definitions of emittance remain constant for linear beam transport, they do change when the particles undergo acceleration (an effect called adiabatic damping). In some applications, such as for linear accelerators, photoinjectors, and the accelerating sections of larger systems, it becomes important to compare beam quality across different energies. Normalized emittance, which is invariant under acceleration, is used for this purpose. Normalized emittance in one dimension is given by: $\epsilon_n = \sqrt{\langle x^2 \rangle \langle (\beta_x \gamma)^2 \rangle - \langle x\,\beta_x \gamma \rangle^2}$. The angle in the prior definition has been replaced with the normalized transverse momentum $\beta_x \gamma$, where $\gamma$ is the Lorentz factor and $\beta_x = v_x/c$ is the normalized transverse velocity. Normalized emittance is related to the previous definitions of emittance through the Lorentz factor and the normalized velocity in the direction of the beam's travel ($\beta_z$): $\epsilon_n = \beta_z \gamma\, \epsilon$. The normalized emittance does not change as a function of energy and so can be used to indicate beam degradation if the particles are accelerated. For speeds close to the speed of light, where $\beta_z$ is close to one, the emittance is approximately inversely proportional to the energy. In this case, the physical width of the beam will vary inversely with the square root of the energy. Higher dimensional versions of the normalized emittance can be defined in analogy to the RMS version by replacing all angles with their corresponding momenta. Measurement Quadrupole scan technique One of the most fundamental methods of measuring beam emittance is the quadrupole scan method.
The emittance of the beam for a particular plane of interest (i.e., horizontal or vertical) can be obtained by varying the field strength of a quadrupole (or quadrupoles) upstream of a monitor (i.e., a wire or a screen). The properties of a beam can be described by the following beam matrix: $\sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{12} & \sigma_{22} \end{pmatrix} = \begin{pmatrix} \langle x^2 \rangle & \langle x x' \rangle \\ \langle x x' \rangle & \langle x'^2 \rangle \end{pmatrix}$, where $x'$ is the derivative of x with respect to the longitudinal coordinate. The forces experienced by the beam as it travels down the beam line and passes through the quadrupole(s) are described using the transfer matrix of the beam line, including the quadrupole(s) and other beam line components such as drifts: $M = M_2 M_q M_1$. Here $M_1$ is the transfer matrix between the original beam position and the quadrupole(s), $M_q$ is the transfer matrix of the quadrupole(s), and $M_2$ is the transfer matrix between the quadrupole(s) and the monitor screen. During the quadrupole scan process, $M_1$ and $M_2$ stay constant, and $M_q$ changes with the field strength of the quadrupole(s). The final beam when it reaches the monitor screen at distance s from its original position can be described by another beam matrix $\sigma^{(s)}$. The final beam matrix can be calculated from the original beam matrix by matrix multiplication with the beam line transfer matrix: $\sigma^{(s)} = M \sigma M^T$, where $M^T$ is the transpose of M. Now, focusing on the (1,1) element of the final beam matrix throughout the matrix multiplications, we get the equation $\sigma^{(s)}_{11} = m_{11}^2 \sigma_{11} + 2 m_{11} m_{12} \sigma_{12} + m_{12}^2 \sigma_{22}$. Here the middle term has a factor of 2 because $\sigma_{12} = \sigma_{21}$. Now divide both sides of the above equation by $m_{12}^2$; the equation becomes $\frac{\sigma^{(s)}_{11}}{m_{12}^2} = \left( \frac{m_{11}}{m_{12}} \right)^2 \sigma_{11} + 2\, \frac{m_{11}}{m_{12}}\, \sigma_{12} + \sigma_{22}$, which is a quadratic equation of the variable $m_{11}/m_{12}$. Since the RMS emittance is defined as $\epsilon_{\mathrm{rms}} = \sqrt{\sigma_{11} \sigma_{22} - \sigma_{12}^2}$, the RMS emittance of the original beam can be calculated from its beam matrix elements. To obtain the emittance measurement, the following procedure is employed: For each value (or value combination) of the quadrupole(s), the beam line transfer matrix is calculated to determine values of $m_{11}$ and $m_{12}$. The beam propagates through the varied beam line, and is observed at the monitor screen, where the beam size is measured. Repeat steps 1 and 2 to obtain a series of values for $m_{11}/m_{12}$ and $\sigma^{(s)}_{11}/m_{12}^2$, and fit the results with a parabola $y = A u^2 + B u + C$, where $u = m_{11}/m_{12}$. Equate the parabola fit parameters with the original beam matrix elements: $A = \sigma_{11}$, $B = 2 \sigma_{12}$, $C = \sigma_{22}$. Calculate the RMS emittance of the original beam: $\epsilon_{\mathrm{rms}} = \sqrt{AC - B^2/4}$. If the length L of the quadrupole is short compared to its focal length $f = 1/(kL)$, where k is the normalized field strength of the quadrupole, its transfer matrix can be approximated by the thin lens approximation $M_q = \begin{pmatrix} 1 & 0 \\ -1/f & 1 \end{pmatrix}$. Then the RMS emittance can be calculated by fitting a parabola to values of measured beam size versus quadrupole strength k. By adding additional quadrupoles, this technique can be extended to a full 4-D reconstruction. Mask-based reconstruction Another fundamental method for measuring emittance is by using a predefined mask to imprint a pattern on the beam and sample the remaining beam at a screen downstream. Two such masks are pepper pots and TEM grids. By using the knowledge of the spacing of the features in the mask one can extract information about the beam size at the mask plane. By measuring the spacing between the same features on the sampled beam downstream, one can extract information about the angles in the beam. The quantities of merit can be extracted as described in Marx et al. The choice of mask is generally dependent on the charge of the beam; low-charge beams are better suited to the TEM grid mask than the pepper pot, as more of the beam is transmitted.
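The thin-lens version of this procedure is compact enough to sketch end to end. Below is an illustrative Python script using numpy; the drift length, quadrupole strengths, and initial beam matrix are invented example values, not measured data:

```python
import numpy as np

d = 1.5                                  # drift length, quad to screen (m), assumed
sigma = np.array([[4e-6, -2e-6],
                  [-2e-6, 4e-6]])        # assumed initial beam matrix

ks = np.linspace(-8, 8, 25)              # scanned quadrupole strengths 1/f (1/m)
u, y = [], []
for k in ks:
    Mq = np.array([[1, 0], [-k, 1]])     # thin-lens quadrupole
    Md = np.array([[1, d], [0, 1]])      # drift to the screen
    M = Md @ Mq
    s11 = (M @ sigma @ M.T)[0, 0]        # beam size squared at the screen
    u.append(M[0, 0] / M[0, 1])          # m11 / m12
    y.append(s11 / M[0, 1] ** 2)         # sigma_11 / m12^2

A, B, C = np.polyfit(u, y, 2)            # parabola fit y = A u^2 + B u + C
eps = np.sqrt(A * C - B * B / 4.0)       # rms emittance from the fit parameters
print(eps, np.sqrt(np.linalg.det(sigma)))  # fit reproduces sqrt(det sigma)
```

With noiseless synthetic data, as here, the fitted emittance matches the determinant of the input beam matrix exactly up to floating-point error; real measurements add noise, and the quality of the parabola fit then limits the accuracy.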
Emittance of electrons versus heavy particles To understand why the RMS emittance takes on a particular value in a storage ring, one needs to distinguish between electron storage rings and storage rings with heavier particles (such as protons). In an electron storage ring, radiation is an important effect, whereas when other particles are stored, it is typically a small effect. When radiation is important, the particles undergo radiation damping (which slowly decreases emittance turn after turn) and quantum excitation causing diffusion which leads to an equilibrium emittance. When no radiation is present, the emittances remain constant (apart from impedance effects and intrabeam scattering). In this case, the emittance is determined by the initial particle distribution. In particular if one injects a "small" emittance, it remains small, whereas if one injects a "large" emittance, it remains large. Acceptance The acceptance, also called admittance, is the maximum emittance that a beam transport system or analyzing system is able to transmit. This is the size of the chamber transformed into phase space and does not suffer from the ambiguities of the definition of beam emittance. Conservation of emittance Lenses can focus a beam, reducing its size in one transverse dimension while increasing its angular spread, but cannot change the total emittance. This is a result of Liouville's theorem. Ways of reducing the beam emittance include radiation damping, stochastic cooling, and electron cooling. Emittance and brightness Emittance is also related to the brightness of the beam. In microscopy brightness is very often used, because it includes the current in the beam and most systems are circularly symmetric. Consider the brightness of the incident beam at the sample, where indicates the beam current and represents the total emittance of the incident beam and the wavelength of the incident electron. The intrinsic emittance , describing a normal distribution in the initial phase space, is diffused by the emittance introduced by aberrations . The total emittance is approximately the sum in quadrature. Under the assumption of uniform illumination of the aperture with current per unit angle , we have the following emittance-brightness relation, See also Accelerator physics Etendue Mean transverse energy References Accelerator physics
Beam emittance
[ "Physics" ]
3,384
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
2,970,044
https://en.wikipedia.org/wiki/Radiation%20damping
Radiation damping in accelerator physics is a phenomenon where betatron oscillations and longitudinal oscillations of the particle are damped due to energy loss by synchrotron radiation. It can be used to reduce the beam emittance of a high-velocity charged particle beam. The two main ways of using radiation damping to reduce the emittance of a particle beam are the use of undulators and damping rings (often containing undulators), both relying on the same principle of inducing synchrotron radiation to reduce the particles' momentum, then replacing the momentum only in the desired direction of motion. Damping rings As particles are moving in a closed orbit, the lateral acceleration causes them to emit synchrotron radiation, thereby reducing the size of their momentum vectors (relative to the design orbit) without changing their orientation (ignoring quantum effects for the moment). In the longitudinal direction, the loss of particle momentum due to radiation is replaced by accelerating sections (RF cavities) that are installed in the beam path so that an equilibrium is reached at the design energy of the accelerator. Since this does not happen in the transverse direction, where the emittance of the beam is only increased by the quantization of radiation losses (quantum effects), the transverse equilibrium emittance of the particle beam will be smaller with large radiation losses, compared to small radiation losses. Because high orbit curvatures (low curvature radii) increase the emission of synchrotron radiation, damping rings are often small. If long beams with many particle bunches are needed to fill a larger storage ring, the damping ring may be extended with long straight sections. Undulators and wigglers When faster damping is required than can be provided by the turns inherent in a damping ring, it is common to add undulator or wiggler magnets to induce more synchrotron radiation. These are devices with periodic magnetic fields that cause the particles to oscillate transversely, equivalent to many small tight turns. These operate using the same principle as damping rings and this oscillation causes the charged particles to emit synchrotron radiation. The many small turns in an undulator have the advantage that the cone of synchrotron radiation is all in one direction, forward. This is easier to shield than the broad fan produced by a large turn. Energy loss The power radiated by a charged particle is given by a generalization of the Larmor formula derived by Liénard in 1898: P = e²γ⁶/(6πε₀c³) · (v̇² − (v × v̇)²/c²), where v is the velocity of the particle, v̇ the acceleration, e the elementary charge, ε₀ the vacuum permittivity, γ the Lorentz factor and c the speed of light. Note: p = γmv is the momentum and m is the mass of the particle. Linac and RF Cavities In the case of an acceleration parallel to the longitudinal axis (v × v̇ = 0), the radiated power can be calculated as below: dp/dt = γ³m v̇. Inserting in Larmor's formula gives P = e²γ⁶v̇²/(6πε₀c³) = e²(dp/dt)²/(6πε₀m²c³). Bending In the case of an acceleration perpendicular to the longitudinal axis (v ⊥ v̇): dp/dt = γm v̇. Inserting in Larmor's formula gives P = e²γ⁴v̇²/(6πε₀c³) = e²γ²(dp/dt)²/(6πε₀m²c³). (Hint: factor the bracket as γ⁶v̇²(1 − β²) and use 1 − β² = 1/γ².) Using magnetic field perpendicular to velocity With v̇ = evB/(γm), inserting in P gives P = e⁴γ²β²B²/(6πε₀m²c). Using radius of curvature ρ, with v̇ = v²/ρ, and inserting in P gives P = e²cβ⁴γ⁴/(6πε₀ρ²). Electron Here is a useful formula to calculate the power radiated by an electron accelerated by a magnetic field perpendicular to the velocity: P = e⁴B⊥²γ²β²/(6πε₀mₑ²c), with γ = E/(mₑc²), where B⊥ is the perpendicular magnetic field and mₑ the electron mass.
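As a quick numerical check of that last formula, the following Python sketch evaluates the radiated power of one electron; the beam energy and field are invented example values and the SI constants are rounded:

import math

e    = 1.602e-19     # elementary charge [C]
m_e  = 9.109e-31     # electron mass [kg]
c    = 2.998e8       # speed of light [m/s]
eps0 = 8.854e-12     # vacuum permittivity [F/m]

def synchrotron_power(E_GeV, B_T):
    """Power radiated by one electron in a perpendicular field B:
    P = e^4 B^2 gamma^2 beta^2 / (6 pi eps0 m^2 c)."""
    gamma = E_GeV * 1e9 * e / (m_e * c**2)
    beta2 = 1.0 - 1.0 / gamma**2
    return e**4 * B_T**2 * gamma**2 * beta2 / (6 * math.pi * eps0 * m_e**2 * c)

# Hypothetical storage-ring electron: 3 GeV in a 1 T dipole field.
print(f"{synchrotron_power(3.0, 1.0):.2e} W")   # ~5e-7 W per electron

Half a microwatt per electron sounds small, but summed over the many billions of electrons in a stored bunch it is what makes the RF replacement of longitudinal momentum, and hence the damping, effective.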
Using the classical electron radius rₑ = e²/(4πε₀mₑc²), this becomes P = (2/3) rₑ mₑc³ β⁴γ⁴/ρ², where ρ is the radius of curvature; the local curvature can also be derived from particle coordinates (using the common 6D phase space coordinate system x, x′, y, y′, s, …). Note: The transverse magnetic field is often normalized using the magnet rigidity Bρ = p/e. Field expansion (using a Laurent series): B_y + iB_x = Bρ Σ_k (b_k + i a_k)(x + iy)^k, where B is the transverse field expressed in [T], b_k and a_k are the multipole field strengths (normal and skew) expressed in [m^−(k+1)], x + iy is the particle position and k the multipole order: k = 0 for a dipole, k = 1 for a quadrupole, k = 2 for a sextupole, etc. See also Particle beam cooling References External links SLAC damping rings home page, including a non-technical description of the damping rings at SLAC. Studies Pertaining to a Small Damping Ring for the International Linear Collider, a report describing the constraints on minimum damping ring size. Accelerator physics Synchrotron radiation
Radiation damping
[ "Physics" ]
881
[ "Applied and interdisciplinary physics", "Accelerator physics", "Experimental physics" ]
2,970,168
https://en.wikipedia.org/wiki/Einstein%20relation%20%28kinetic%20theory%29
In physics (specifically, the kinetic theory of gases), the Einstein relation is a previously unexpected connection revealed independently by William Sutherland in 1904, Albert Einstein in 1905, and by Marian Smoluchowski in 1906 in their works on Brownian motion. The more general form of the equation in the classical case is D = μ k_B T, where D is the diffusion coefficient; μ is the "mobility", or the ratio of the particle's terminal drift velocity to an applied force, μ = v_d/F; k_B is the Boltzmann constant; T is the absolute temperature. This equation is an early example of a fluctuation-dissipation relation. Note that the equation above describes the classical case and should be modified when quantum effects are relevant. Two frequently used important special forms of the relation are: Einstein–Smoluchowski equation, for diffusion of charged particles: D = μ_q k_B T/q. Stokes–Einstein–Sutherland equation, for diffusion of spherical particles through a liquid with low Reynolds number: D = k_B T/(6πηr). Here q is the electrical charge of a particle; μ_q is the electrical mobility of the charged particle; η is the dynamic viscosity; r is the radius of the spherical particle. Special cases Electrical mobility equation (classical case) For a particle with electrical charge q, its electrical mobility μ_q is related to its generalized mobility μ by the equation μ_q = qμ. The parameter μ_q is the ratio of the particle's terminal drift velocity to an applied electric field. Hence, the equation in the case of a charged particle is given as D = μ_q k_B T/q, where D is the diffusion coefficient (m²/s), μ_q is the electrical mobility (m²/(V·s)), q is the electric charge of particle (C, coulombs), and T is the electron temperature or ion temperature in plasma (K). If the temperature is given in volts, which is more common for plasma: D = μ_q T/z, where z is the charge number of particle (unitless) and T is electron temperature or ion temperature in plasma (V). Electrical mobility equation (quantum case) For the case of Fermi gas or a Fermi liquid, relevant for the electron mobility in normal metals like in the free electron model, the Einstein relation should be modified: the ratio D/μ_q is then set by the Fermi energy E_F rather than by the thermal energy k_B T. Stokes–Einstein–Sutherland equation In the limit of low Reynolds number, the mobility μ is the inverse of the drag coefficient ζ. A damping constant γ = ζ/m is frequently used for the inverse momentum relaxation time (time needed for the inertia momentum to become negligible compared to the random momenta) of the diffusive object. For spherical particles of radius r, Stokes' law gives ζ = 6πηr, where η is the viscosity of the medium. Thus the Einstein–Smoluchowski relation results in the Stokes–Einstein–Sutherland relation D = k_B T/(6πηr). This has been applied for many years to estimating the self-diffusion coefficient in liquids, and a version consistent with isomorph theory has been confirmed by computer simulations of the Lennard-Jones system. In the case of rotational diffusion, the friction is ζ_r = 8πηr³, and the rotational diffusion constant is D_r = k_B T/(8πηr³). This is sometimes referred to as the Stokes–Einstein–Debye relation. Semiconductor In a semiconductor with an arbitrary density of states, i.e. a relation of the form n = n(η) between the density of holes or electrons n and the corresponding quasi Fermi level (or electrochemical potential) η, the Einstein relation is D = μ n/(q dn/dη), where μ is the electrical mobility (see the proof of the general case below).
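Before continuing with the semiconductor case, the Stokes–Einstein–Sutherland form above is easy to evaluate numerically. The following Python sketch is an illustration only; the particle radius, temperature and viscosity are invented example values (roughly water at room temperature):

import math

k_B = 1.380649e-23        # Boltzmann constant [J/K]

def stokes_einstein_D(T, eta, r):
    """Diffusion coefficient of a sphere of radius r in a fluid of
    viscosity eta at temperature T: D = k_B T / (6 pi eta r)."""
    return k_B * T / (6 * math.pi * eta * r)

# Hypothetical example: a 1 um-radius sphere in water at 298 K.
D = stokes_einstein_D(T=298.0, eta=8.9e-4, r=1e-6)
print(f"D ~ {D:.2e} m^2/s")   # ~2.4e-13 m^2/s

A diffusion coefficient of order 0.2 um²/s for a micron-sized sphere is the scale on which Brownian motion of colloidal particles is actually observed under a microscope, which is what made the relation testable by Perrin's experiments.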
As an example, assuming a parabolic dispersion relation for the density of states and Maxwell–Boltzmann statistics, which are often used to describe inorganic semiconductor materials, one can compute (see density of states): n = N_c exp((η − E_c)/(k_B T)), where N_c is the total density of available energy states, which gives the simplified relation: D = μ k_B T/q. Nernst–Einstein equation By replacing the diffusivities in the expressions of the electric ionic mobilities of the cations and anions from the expressions of the equivalent conductivity of an electrolyte, the Nernst–Einstein equation is derived: Λ = (F²/RT)(z₊²ν₊D₊ + z₋²ν₋D₋), where R is the gas constant and F the Faraday constant. Proof of the general case The proof of the Einstein relation can be found in many references, for example see the work of Ryogo Kubo. Suppose some fixed, external potential energy U(x) generates a conservative force F(x) = −dU/dx (for example, an electric force) on a particle located at a given position x. We assume that the particle would respond by moving with velocity v(x) = μF(x) (see Drag (physics)). Now assume that there are a large number of such particles, with local concentration n(x) as a function of the position. After some time, equilibrium will be established: particles will pile up around the areas with lowest potential energy U, but still will be spread out to some extent because of diffusion. At equilibrium, there is no net flow of particles: the tendency of particles to get pulled towards lower U, called the drift current, perfectly balances the tendency of particles to spread out due to diffusion, called the diffusion current (see drift-diffusion equation). The net flux of particles due to the drift current is J_drift(x) = μF(x)n(x), i.e., the number of particles flowing past a given position equals the particle concentration times the average velocity. The flow of particles due to the diffusion current is, by Fick's law, J_diffusion(x) = −D dn/dx, where the minus sign means that particles flow from higher to lower concentration. Now consider the equilibrium condition. First, there is no net flow, i.e. J_drift + J_diffusion = 0. Second, for non-interacting point particles, the equilibrium density n is solely a function of the local potential energy U, i.e. if two locations have the same U then they will also have the same n (e.g. see Maxwell-Boltzmann statistics as discussed below.) That means, applying the chain rule, dn/dx = (dn/dU)(dU/dx). Therefore, at equilibrium: 0 = J_drift + J_diffusion = −μn dU/dx − D (dn/dU)(dU/dx). As this expression holds at every position x, it implies the general form of the Einstein relation: D = −μ n/(dn/dU). The relation between n and U for classical particles can be modeled through Maxwell-Boltzmann statistics n(x) = A exp(−U(x)/(k_B T)), where A is a constant related to the total number of particles. Therefore dn/dU = −n/(k_B T). Under this assumption, plugging this equation into the general Einstein relation gives: D = μ k_B T, which corresponds to the classical Einstein relation. See also Smoluchowski factor Conductivity (electrolytic) Stokes radius Ion transport number References External links Einstein relation calculators ion diffusivity Statistical mechanics Relation
Einstein relation (kinetic theory)
[ "Physics" ]
1,214
[ "Statistical mechanics" ]
2,970,178
https://en.wikipedia.org/wiki/International%20Lefthanders%20Day
International Left Handers Day is an international day observed annually on August 13 to celebrate the uniqueness and differences of left-handed individuals. The day was first observed in 1976 by Dean R. Campbell, founder of the Left-handers Club. It was established to raise awareness about the challenges and experiences faced by left-handed individuals in a predominantly right-handed world. The holiday celebrates the uniqueness and differences of left-handed people, a subset of humanity comprising seven to ten percent of the world's population. The day also raises awareness of issues faced by left-handers, such as the special needs of left-handed children and the reported higher likelihood of left-handers developing schizophrenia. Several media outlets and commercial associations have made one-off posts and compilations of accomplished left-handed people in recognition of the holiday. Further reading References August observances International observances Handedness 1976 introductions
International Lefthanders Day
[ "Physics", "Chemistry", "Biology" ]
190
[ "Behavior", "Motor control", "Chirality", "Asymmetry", "Handedness", "Symmetry" ]
2,970,278
https://en.wikipedia.org/wiki/Added%20value
Added value in financial analysis of shares is to be distinguished from value added. It is used as a measure of shareholder value, calculated using the formula: Added value = the selling price of a product − the cost of bought-in materials and components. Added value can also be defined as the difference between a particular product's final selling price and the direct and indirect inputs used in making it. It can also be described as the process of increasing the perceived value of the product in the eyes of the consumers (formally known as the value proposition). The difference is profit for the firm and its shareholders after all the costs and taxes owed by the business have been paid for that financial year. Added value or any related measure may help investors decide whether a business is worth investing in, or whether better opportunities exist elsewhere (fixed deposits, debentures). Example A jewelry business could display products in an attractive display or offer a gift wrapping service. These changes could make customers more willing to pay a higher price for products that appear to be of higher quality. Other consultancy measures For other consultancy measures for shareholder value, see Economic value added Market value added References Financial ratios Corporate development Accounting terminology
Added value
[ "Mathematics" ]
264
[ "Financial ratios", "Quantity", "Metrics" ]
2,970,426
https://en.wikipedia.org/wiki/Putto
A putto (; plural putti ) is a figure in a work of art depicted as a chubby male child, usually naked and very often winged. Originally limited to profane passions in symbolism, the putto came to represent a sort of baby angel in religious art, often called a cherub (plural cherubim), though in traditional Christian theology a cherub is actually one of the most senior types of angel. The same figures were also seen in representations of classical myth, and increasingly in general decorative art. In Baroque art the putto came to represent the omnipresence of God. A putto representing a cupid is also called an amorino (plural amorini) or amoretto (plural amoretti). Etymology The more commonly found form putti is the plural of the Italian word putto. The Italian word comes from the Latin word putus, meaning "boy" or "child". Today, in Italian, putto means either toddler winged angel or, rarely, toddler boy. It may have been derived from the same Indo-European root as the Sanskrit word "putra" (meaning "boy child", as opposed to "son"), Avestan puθra-, Old Persian puça-, Pahlavi (Middle Persian) pus and pusar, all meaning "son", and the New Persian pesar "boy, son". History Putti, in the ancient classical world of art, were winged infants that were believed to influence human lives. In Renaissance art, the form of the putto was derived in various ways including the Greek Eros or Roman Amor/Cupid, the god of love and companion of Aphrodite or Venus; the Roman, genius, a type of guardian spirit; or sometimes the Greek, daemon, a type of messenger spirit, being halfway between the realms of the human and the divine. Revival of the putto in the Renaissance Putti are a classical motif found primarily on child sarcophagi of the 2nd century, where they are depicted fighting, dancing, participating in bacchic rites, playing sports, etc. The putto disappeared during the Middle Ages and was revived during the Quattrocento. The revival of the figure of the putto is generally attributed to Donatello, in Florence, in the 1420s, although there are some earlier manifestations (for example the tomb of Ilaria del Carretto, sculpted by Jacopo della Quercia in Lucca). Since then, Donatello has been called the originator of the putto because of the contribution to art he made in restoring the classical form of putto. He gave putti a distinct character by infusing the form with Christian meanings and using it in new contexts such as musician angels. Putti also began to feature in works showing figures from classical mythology, which became popular in the same period. Some of Donatello's putti are rather older than the usual toddler type, and also behaving in a less than angelic way. The bronze figure of Amore-Attis is the most extreme of these. These are often termed spiritelli, sometimes translated as "imps". Older putto-like figures are seen in other art; they are very typical as winged teenage boys in the borders of works by the Embriachi workshop from the years around 1400. Most Renaissance putti are essentially decorative and they ornament both religious and secular works, without usually taking any actual part in the events depicted in narrative paintings. There are two popular forms of the putto as the main subject of a work of art in 16th-century Italian Renaissance art: the sleeping putto and the standing putto with an animal or other object. 
Where putti are found Putti, cupids, and angels (see below) can be found in both religious and secular art from the 1420s in Italy, the turn of the 16th century in the Netherlands and Germany, the Mannerist period and late Renaissance in France, and throughout Baroque ceiling frescoes. Many artists have depicted them, but among the best-known are the sculptor Donatello and the painter Raphael. The two relaxed and curious putti who appear at the foot of Raphael's Sistine Madonna are often reproduced. They also experienced a major revival in the 19th century, where they gamboled through paintings by French academic painters, from advertisements to Gustave Doré’s illustrations for Orlando Furioso. Iconography of the putto The iconography of putti is deliberately unfixed, so that it is difficult to tell the difference between putti, cupids, and various forms of angels. They have no unique, immediately identifiable attributes, so that putti may have many meanings and roles in the context of art. Some of the more common associations are: Associations with Aphrodite, and so with romantic—or erotic—love Associations with Heaven Associations with peace, prosperity, mirth, and leisure Historiography The historiography of this subject matter is very short. Many art historians have commented on the importance of the putto in art, but few have undertaken a major study. One useful scholarly examination is Charles Dempsey's Inventing the Renaissance Putto. Gallery See also Puer Mingens – Artistic depictions of boys urinating Four Kumāras – A group of semi-divine sage boys in Hinduism Gohō dōji – Buddhist guardian deities in the form of young boys References External links Warburg Institute Iconographic Database (ca 1,400 images of Amoretti in secular contexts) Angels in art Italian words and phrases Renaissance art Visual motifs Eros Cupid Cherubim
Putto
[ "Mathematics" ]
1,148
[ "Symbols", "Visual motifs" ]
1,500,452
https://en.wikipedia.org/wiki/Steering%20pole
A steering pole is a light spar extending from the bow of a straight deck ship which aids the wheelsman in steering. Ancient literature indicates that steering poles have long been part of boat construction, and are referred to in ancient texts such as the Epic of Gilgamesh. References Shipbuilding
Steering pole
[ "Engineering" ]
59
[ "Shipbuilding", "Marine engineering" ]
1,500,456
https://en.wikipedia.org/wiki/Samuel%20Haughton
Samuel Haughton (21 December 1821 – 31 October 1897) was an Irish clergyman, medical doctor, and scientific writer. Biography The scientist Samuel Haughton was born in Carlow, the son of another Samuel Haughton (1786-1874) and grandson (by his second wife Jane Boake) of the three-times-married Samuel Pearson Haughton (1748-1828), a Quaker. Samuel Pearson Haughton was also father, by his third wife Mary Pim, of James "Vegetable" Haughton (1795–1873), a Unitarian, an active philanthropist, a strong supporter of Father Theobald Mathew, a vegetarian, and an anti-slavery worker and writer. The scientist Samuel Haughton had a distinguished career in Trinity College Dublin and in 1844 he was elected a fellow. Working on mathematical models under James MacCullagh, he was awarded the Cunningham Medal by the Royal Irish Academy in 1848. In 1847 he was ordained to the priesthood, but he did not preach. He was appointed professor of geology in Trinity College in 1851, and held the position for thirty years. Haughton began to study medicine in 1859 and earned his MD degree from Trinity College Dublin in 1862. Haughton became the registrar of the Medical School, where he focused on improving the status of the school and representing the university on the General Medical Council from 1878 to 1896. In 1858 he was elected a fellow of the Royal Society. He gained honorary degrees from Oxford, Cambridge and Edinburgh. In Trinity College Dublin he moved the first-ever motion at the Academic Council to admit women to the University on 10 March 1880. He proposed that 'In the opinion of the Council, the time has come when the Degrees in Arts of the University should be opened to women, by examination, on the same terms as men' (Thomson, 2004). Haughton, through his work as Professor of Geology and his involvement with the Royal Zoological Society, had witnessed the enthusiasm and contribution of women in the natural sciences. Although thwarted by opponents on the Council, he continued to campaign for the admission of women to TCD until his death in 1897. It was 1902 before his motion was finally passed, five years after his death. In 1866, Haughton developed the original equations for hanging as a humane method of execution, whereby the neck was broken at the time of the drop, so that the condemned person did not slowly strangle to death. "On hanging considered from a Mechanical and Physiological point of view" was published in the London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science, Vol. 32 No. 213 (July 1866), calling for a drop energy of 2,240 ft-lbs. From 1886 to 1888, he served as a member of the Capital Sentences Committee, the report of which suggested a Table of Drops based on 1,260 ft-lbs of energy. Haughton wrote papers on many subjects for journals in London and Dublin. His topics included the laws of equilibrium, the motion of solid and fluid bodies, sun-heat, radiation, climates and tides. His papers covered the granites of Leinster and Donegal and the cleavage and joint-planes of the Old Red Sandstone of Waterford. Haughton was president of the Royal Irish Academy from 1886 to 1891, and secretary of the Royal Zoological Society of Ireland for twenty years. In 1880 he gave the Croonian Lecture on animal mechanics to the Royal Society. Samuel Haughton was also involved in the Dublin and Kingstown Railway company, where he oversaw the building of its first locomotives. It was the first railway company in the world to build its own locomotives.
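As a rough worked reading of the drop-energy figures quoted earlier (assuming, as in the later official tables, that drop energy equals body weight multiplied by drop length): Haughton's figure of 2,240 ft-lbs would imply a drop of 2,240 / 160 = 14 feet for a 160 lb prisoner, while the committee's 1,260 ft-lbs would imply 1,260 / 160 ≈ 7.9 feet; the committee's table thus prescribed considerably shorter drops than Haughton's original proposal.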
Criticism of Darwin Haughton has the dubious honour of being the first person to comment on Darwin's theory when the joint papers of Darwin and Alfred Russel Wallace were read to the Linnean Society of London in 1858. They were presented by Darwin's close allies, the geologist Charles Lyell and the botanist Joseph Dalton Hooker. Haughton presumably saw the printed version of the papers and attacked the theory briefly in remarks made to the Geological Society of Dublin on 9 February 1859. These were reported in the society's journal, and a clipping of this found its way into Darwin's possession. Haughton wrote: This speculation of Mess. Darwin and Wallace would not be worthy of note were it not for the weight of authority of the names under whose auspices it has been brought forward. If it means what it says, it is a truism; if it means anything more, it is contrary to fact. Darwin later commented in his autobiography that this was the only response to the papers, summarising Haughton's verdict as 'all that [was] new in there was false, and what was true was old'. In an anonymous article written in 1860 in the Natural History Review, Haughton set out his opinion that Darwin's theory was founded almost entirely upon speculation, but also that this speculative theory belonged originally to Lamarck and that the differences between the two men's work were negligible: to establish a character for subtlety and skill, in drawing large conclusions on this subject from slender premises, the first requisite is, ignorance of what other speculators have attempted before us in the same field: and the second is, a firm confidence in our own special theory. Neither of these requisites can be considered wanting in those who are engaged in the task of reproducing Lamarck's theory of organic life, either as altogether new, or with but a tattered threadbare cloak, thrown over its original nakedness. Theistic evolution Haughton's work on animal mechanics led him to believe that the structure of species was designed by an intelligent creator. In his book Animal Mechanics (1873, page 238) he commented that the "Framer of the Universe" had designed all muscles so they could perform the maximum work possible under given external conditions. In the preface to his book, he was open to the possibility of "teleological evolution". Evolution was governed by a "Divine mind" and nothing was left to chance. Publications Manual of Geology (1865) Principles of Animal Mechanics (1873) Six Lectures on Physical Geography (1880) In conjunction with his friend, Joseph Allen Galbraith, he issued a series of Manuals of Mathematical and Physical Science. References Thomson, L. (2004). "The campaign for Admission". In Parkes, S. M. (ed.) A Danger to the Men? A History of Women in Trinity College Dublin 1904–2004. Lilliput Press, Dublin 19–54. External links 1821 births 1897 deaths Scientists from County Carlow 19th-century Irish geologists 19th-century Irish zoologists 19th-century Irish mathematicians Irish Unitarians Irish temperance activists Fellows of the Royal Society Presidents of the Royal Irish Academy Theistic evolutionists Alumni of Trinity College Dublin Activists from County Carlow
Samuel Haughton
[ "Biology" ]
1,416
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
1,500,520
https://en.wikipedia.org/wiki/Cross%20sea
A cross sea (also referred to as a squared sea or square waves) is a sea state of wind-generated ocean waves that form nonparallel wave systems. Cross seas have a large amount of directional spreading. This may occur when water waves from one weather system continue despite a shift in wind. Waves generated by the new wind run at an angle to the old. Two weather systems that are far from each other may create a cross sea when the waves from the systems meet at a place far from either weather system. Until the older waves have dissipated, they can present a perilous sea hazard. This sea state is fairly common and a large percentage of ship accidents have been found to occur in this state. Vessels fare better against large waves when sailing directly perpendicular to oncoming surf. In a cross sea scenario, that becomes impossible as sailing into one set of waves necessitates sailing parallel to the other. A cross swell is generated when the wave systems are longer-period swells, rather than short-period wind-generated waves. Notes References Water waves
Cross sea
[ "Physics", "Chemistry" ]
217
[ "Water waves", "Waves", "Physical phenomena", "Fluid dynamics" ]
1,500,631
https://en.wikipedia.org/wiki/Channel%20Definition%20Format
Channel Definition Format (CDF) was an XML file format formerly used in conjunction with Microsoft's Active Channel, Active Desktop and Smart Offline Favorites technologies. The format was designed to "offer frequently updated collections of information, or channels, from any web server for automatic delivery to compatible receiver programs." Active Channel allowed users to subscribe to channels and have scheduled updates delivered to their desktop. Smart Offline Favorites, like channels, enabled users to view webpages from the cache. History Submitted to the World Wide Web Consortium (W3C) in March 1997 for consideration as a web standard, CDF marked Microsoft's attempt to capitalize on the push technology trend led by PointCast. The most notable implementation of CDF was Microsoft's Active Desktop, an optional feature introduced with the Internet Explorer 4.0 browser in September 1997. Smart Offline Favorites was introduced in Internet Explorer 5.0. CDF prefigured aspects of the RSS file format introduced by Netscape in March 1999, and of web syndication at large. Unlike RSS, CDF was never widely adopted and its use remained very limited. As a consequence, Microsoft removed CDF support from Internet Explorer 7 in 2006. Example A generic CDF file: <?xml version="1.0" encoding="UTF-8"?> <CHANNEL HREF="http://domain/folder/pageOne.extension" BASE="http://domain/folder/" LASTMOD="1998-11-05T22:12" PRECACHE="YES" LEVEL="0"> <TITLE>Title of Channel</TITLE> <ABSTRACT>Synopsis of channel's contents.</ABSTRACT> <SCHEDULE> <INTERVALTIME DAY="14"/> </SCHEDULE> <LOGO HREF="wideChannelLogo.gif" STYLE="IMAGE-WIDE"/> <LOGO HREF="imageChannelLogo.gif" STYLE="IMAGE"/> <LOGO HREF="iconChannelLogo.gif" STYLE="ICON"/> <ITEM HREF="pageTwo.extension" LASTMOD="1998-11-05T22:12" PRECACHE="YES" LEVEL="1"> <TITLE>Page Two's Title</TITLE> <ABSTRACT>Synopsis of Page Two's contents.</ABSTRACT> <LOGO HREF="pageTwoLogo.gif" STYLE="IMAGE"/> <LOGO HREF="pageTwoLogo.gif" STYLE="ICON"/> </ITEM> </CHANNEL> See also Active Channel Active Desktop Push technology Semantic Web List of content syndication markup languages History of web syndication technology References External links Introduction to Active Channel Technology How to Create Channel Definition Format (CDF) Files 1997 W3 Submission of Channel Definition Format Internet Explorer Push technology Windows 98 Windows communication and services Web syndication formats XML-based standards
Channel Definition Format
[ "Technology" ]
626
[ "Computer standards", "XML-based standards" ]
1,500,768
https://en.wikipedia.org/wiki/Leopard%20%28rocket%29
Leopard is the name of a British two-stage experimental supersonic test rocket, which was launched eleven times from Aberporth between 1959 and 1962. The Leopard has a flight altitude of 20 kilometres, a launch mass of 1.5 tons and a length of 6 metres. The two-stage aerodynamic test vehicle consisted of one Rook solid rocket motor as the first stage, with a Gosling solid rocket motor added as a second stage, an evolution of the Rook vehicle capable of reaching higher velocities. The Gosling engine would go on to be used on missiles like the Thunderbird and the Bloodhound, while the Rook continued to fly until 1972, when it was decommissioned. References https://web.archive.org/web/20050309014028/http://www.astronautix.com/lvs/leopard.htm Rockets and missiles
Leopard (rocket)
[ "Astronomy" ]
176
[ "Rocketry stubs", "Astronomy stubs" ]
1,500,806
https://en.wikipedia.org/wiki/Rook%20%28rocket%29
Rook is the name of a British rocket. Twenty five Rook rockets were launched between 1959 and 1972. The launches took place from Aberporth in Wales and from Woomera in South Australia. The Rook has a maximum flight altitude of 20 kilometres, a launch mass of 1.2 tons and a length of 5 metres. External links https://web.archive.org/web/20080709063130/http://www.astronautix.com/lvs/rook.htm Rockets and missiles
Rook (rocket)
[ "Astronomy" ]
110
[ "Rocketry stubs", "Astronomy stubs" ]
1,501,024
https://en.wikipedia.org/wiki/Dyadic%20transformation
The dyadic transformation (also known as the dyadic map, bit shift map, 2x mod 1 map, Bernoulli map, doubling map or sawtooth map) is the mapping (i.e., recurrence relation) x_{n+1} = 2x_n mod 1 on the unit interval (equivalently, the shift on {0, 1}^∞, the set of sequences from the binary alphabet {0, 1}) produced by the rule x ↦ 2x mod 1. Equivalently, the dyadic transformation can also be defined as the iterated function map of the piecewise linear function T(x) = 2x for 0 ≤ x < 1/2 and T(x) = 2x − 1 for 1/2 ≤ x < 1. The name bit shift map arises because, if the value of an iterate is written in binary notation, the next iterate is obtained by shifting the binary point one bit to the right, and if the bit to the left of the new binary point is a "one", replacing it with a zero. The dyadic transformation provides an example of how a simple 1-dimensional map can give rise to chaos. This map readily generalizes to several others. An important one is the beta transformation, defined as T_β(x) = βx mod 1. This map has been extensively studied by many authors. It was introduced by Alfréd Rényi in 1957, and an invariant measure for it was given by Alexander Gelfond in 1959 and again independently by Bill Parry in 1960. Relation to the Bernoulli process The map T(x) = 2x mod 1 can be obtained as a homomorphism on the Bernoulli process. Let Ω = {H, T}^∞ be the set of all semi-infinite strings of the letters H and T. These can be understood to be the flips of a coin, coming up heads or tails. Equivalently, one can write Ω = {0, 1}^∞, the space of all (semi-)infinite strings of binary bits. The word "infinite" is qualified with "semi-", as one can also define a different space {0, 1}^Z consisting of all doubly-infinite (double-ended) strings; this will lead to the Baker's map. The qualification "semi-" is dropped below. This space has a natural shift operation, given by T(b_0, b_1, b_2, …) = (b_1, b_2, …), where (b_0, b_1, b_2, …) is an infinite string of binary digits. Given such a string, write x = Σ_{n=0}^∞ b_n/2^{n+1}. The resulting x is a real number in the unit interval 0 ≤ x ≤ 1. The shift induces a homomorphism, also called T, on the unit interval. Since T(b_0, b_1, b_2, …) = (b_1, b_2, …), one can easily see that T(x) = 2x mod 1. For the doubly-infinite sequence of bits Ω = {0, 1}^Z, the induced homomorphism is the Baker's map. The dyadic sequence is then just the sequence x_n = T^n(x). That is, x_n = Σ_{k=0}^∞ b_{n+k}/2^{k+1}. The Cantor set Note that the sum y = Σ_{n=0}^∞ 2b_n/3^{n+1} gives the Cantor function, as conventionally defined. This is one reason why the set {H, T}^∞ is sometimes called the Cantor set. Rate of information loss and sensitive dependence on initial conditions One hallmark of chaotic dynamics is the loss of information as simulation occurs. If we start with information on the first s bits of the initial iterate, then after m simulated iterations (m < s) we only have s − m bits of information remaining. Thus we lose information at the exponential rate of one bit per iteration. After s iterations, our simulation has reached the fixed point zero, regardless of the true iterate values; thus we have suffered a complete loss of information. This illustrates sensitive dependence on initial conditions: the mapping from the truncated initial condition has deviated exponentially from the mapping from the true initial condition. And since our simulation has reached a fixed point, for almost all initial conditions it will not describe the dynamics in the qualitatively correct way, i.e., as chaotic. Equivalent to the concept of information loss is the concept of information gain. In practice some real-world process may generate a sequence of values (x_n) over time, but we may only be able to observe these values in truncated form. Suppose for example that x_0 = 0.1001101, but we only observe the truncated value 0.1001. Our prediction for x_1 is 0.001.
If we wait until the real-world process has generated the true x_1 value 0.001101, we will be able to observe the truncated value 0.0011, which is more accurate than our predicted value 0.001. So we have received an information gain of one bit. Relation to tent map and logistic map The dyadic transformation is topologically semi-conjugate to the unit-height tent map. Recall that the unit-height tent map is given by f(x) = 2x for x ≤ 1/2 and f(x) = 2(1 − x) for x > 1/2. The conjugacy is explicitly given by h(y) = 1 − |2y − 1|, so that f(h(y)) = h(T(y)). That is, f ∘ h = h ∘ T. This is stable under iteration, as f^n ∘ h = f^{n−1} ∘ (f ∘ h) = f^{n−1} ∘ h ∘ T = … = h ∘ T^n. It is also conjugate to the chaotic r = 4 case of the logistic map. The r = 4 case of the logistic map is z_{n+1} = 4z_n(1 − z_n); this is related to the bit shift map in variable x by z_n = sin²(2πx_n). There is also a semi-conjugacy between the dyadic transformation (here named angle doubling map) and the quadratic polynomial. Here, the map doubles angles measured in turns. That is, the map is given by θ ↦ 2θ mod 1 (with θ measured in turns). Periodicity and non-periodicity Because of the simple nature of the dynamics when the iterates are viewed in binary notation, it is easy to categorize the dynamics based on the initial condition: If the initial condition is irrational (as almost all points in the unit interval are), then the dynamics are non-periodic; this follows directly from the definition of an irrational number as one with a non-repeating binary expansion. This is the chaotic case. If x_0 is rational the image of x_0 contains a finite number of distinct values within [0, 1) and the forward orbit of x_0 is eventually periodic, with period equal to the period of the binary expansion of x_0. Specifically, if the initial condition is a rational number with a finite binary expansion of k bits, then after k iterations the iterates reach the fixed point 0; if the initial condition is a rational number with a k-bit transient (k ≥ 0) followed by a q-bit sequence (q > 1) that repeats itself infinitely, then after k iterations the iterates reach a cycle of length q. Thus cycles of all lengths are possible. For example, the forward orbit of 11/24 is: 11/24 ↦ 11/12 ↦ 5/6 ↦ 2/3 ↦ 1/3 ↦ 2/3 ↦ 1/3 ↦ …, which has reached a cycle of period 2. Within any subinterval of [0, 1), no matter how small, there are therefore an infinite number of points whose orbits are eventually periodic, and an infinite number of points whose orbits are never periodic. This sensitive dependence on initial conditions is a characteristic of chaotic maps. Periodicity via bit shifts The periodic and non-periodic orbits can be more easily understood not by working with the map directly, but rather with the bit shift map T(b_0, b_1, b_2, …) = (b_1, b_2, …) defined on the Cantor space {0, 1}^∞. That is, the homomorphism x = Σ_{n=0}^∞ b_n/2^{n+1} is basically a statement that the Cantor set can be mapped into the reals. It is a surjection: every dyadic rational has not one, but two distinct representations in the Cantor set. For example, 0.1000… = 0.0111…. This is just the binary-string version of the famous 0.999... = 1 problem. The doubled representations hold in general: for any given finite-length initial sequence b_0, b_1, …, b_{k−1} of length k, one has (b_0, b_1, …, b_{k−1}, 1, 0, 0, 0, …) = (b_0, b_1, …, b_{k−1}, 0, 1, 1, 1, …). The initial sequence corresponds to the non-periodic part of the orbit, after which iteration settles down to all zeros (equivalently, all-ones). Expressed as bit strings, the periodic orbits of the map can be seen to be the rationals. That is, after an initial "chaotic" sequence of b_0, b_1, …, b_{k−1}, a periodic orbit settles down into a repeating string c_1, c_2, …, c_q of length q. It is not hard to see that such repeating sequences correspond to rational numbers. Writing y = 0.c_1c_2…c_q repeating (in base 2), one then clearly has y = (Σ_{j=1}^q c_j 2^{q−j})/(2^q − 1). Tacking on the initial non-repeating sequence, one clearly has a rational number.
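These eventually periodic rational orbits are easy to check numerically, provided exact rational arithmetic is used (a floating-point simulation loses one bit of information per step, as discussed above). A minimal Python sketch reproducing the orbit of 11/24 shown earlier:

from fractions import Fraction

def dyadic(x):
    """One step of the doubling map x -> 2x mod 1."""
    return (2 * x) % 1

x = Fraction(11, 24)
orbit = [x]
for _ in range(6):
    x = dyadic(x)
    orbit.append(x)
print([str(f) for f in orbit])
# ['11/24', '11/12', '5/6', '2/3', '1/3', '2/3', '1/3'] -- a transient,
# then the period-2 cycle {2/3, 1/3}

Running the same loop with x = 0.45833333 as a float instead of a Fraction drifts off the cycle after roughly fifty iterations, a direct demonstration of the one-bit-per-step information loss.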
In fact, every rational number can be expressed in this way: an initial "random" sequence, followed by a cycling repeat. That is, the periodic orbits of the map are in one-to-one correspondence with the rationals. This phenomenon is note-worthy, because something similar happens in many chaotic systems. For example, geodesics on compact manifolds can have periodic orbits that behave in this way. Keep in mind, however, that the rationals are a set of measure zero in the reals. Almost all orbits are not periodic! The aperiodic orbits correspond to the irrational numbers. This property also holds true in a more general setting. An open question is to what degree the behavior of the periodic orbits constrain the behavior of the system as a whole. Phenomena such as Arnold diffusion suggest that the general answer is "not very much". Density formulation Instead of looking at the orbits of individual points under the action of the map, it is equally worthwhile to explore how the map affects densities on the unit interval. That is, imagine sprinkling some dust on the unit interval; it is denser in some places than in others. What happens to this density as one iterates? Write as this density, so that . To obtain the action of on this density, one needs to find all points and write The denominator in the above is the Jacobian determinant of the transformation, here it is just the derivative of and so . Also, there are obviously only two points in the preimage of , these are and Putting it all together, one gets By convention, such maps are denoted by so that in this case, write The map is a linear operator, as one easily sees that and for all functions on the unit interval, and all constants . Viewed as a linear operator, the most obvious and pressing question is: what is its spectrum? One eigenvalue is obvious: if for all then one obviously has so the uniform density is invariant under the transformation. This is in fact the largest eigenvalue of the operator , it is the Frobenius–Perron eigenvalue. The uniform density is, in fact, nothing other than the invariant measure of the dyadic transformation. To explore the spectrum of in greater detail, one must first limit oneself to a suitable space of functions (on the unit interval) to work with. This might be the space of Lebesgue measurable functions, or perhaps the space of square integrable functions, or perhaps even just polynomials. Working with any of these spaces is surprisingly difficult, although a spectrum can be obtained. Borel space A vast amount of simplification results if one instead works with the Cantor space , and functions Some caution is advised, as the map is defined on the unit interval of the real number line, assuming the natural topology on the reals. By contrast, the map is defined on the Cantor space , which by convention is given a very different topology, the product topology. There is a potential clash of topologies; some care must be taken. However, as presented above, there is a homomorphism from the Cantor set into the reals; fortunately, it maps open sets into open sets, and thus preserves notions of continuity. To work with the Cantor set , one must provide a topology for it; by convention, this is the product topology. By adjoining set-complements, it can be extended to a Borel space, that is, a sigma algebra. The topology is that of cylinder sets. 
A cylinder set has the generic form where the are arbitrary bit values (not necessarily all the same), and the are a finite number of specific bit-values scattered in the infinite bit-string. These are the open sets of the topology. The canonical measure on this space is the Bernoulli measure for the fair coin-toss. If there is just one bit specified in the string of arbitrary positions, the measure is 1/2. If there are two bits specified, the measure is 1/4, and so on. One can get fancier: given a real number one can define a measure if there are heads and tails in the sequence. The measure with is preferred, since it is preserved by the map So, for example, maps to the interval and maps to the interval and both of these intervals have a measure of 1/2. Similarly, maps to the interval which still has the measure 1/2. That is, the embedding above preserves the measure. An alternative is to write which preserves the measure That is, it maps such that the measure on the unit interval is again the Lebesgue measure. Frobenius–Perron operator Denote the collection of all open sets on the Cantor set by and consider the set of all arbitrary functions The shift induces a pushforward defined by This is again some function In this way, the map induces another map on the space of all functions That is, given some , one defines This linear operator is called the transfer operator or the Ruelle–Frobenius–Perron operator. The largest eigenvalue is the Frobenius–Perron eigenvalue, and in this case, it is 1. The associated eigenvector is the invariant measure: in this case, it is the Bernoulli measure. Again, when Spectrum To obtain the spectrum of , one must provide a suitable set of basis functions for the space One such choice is to restrict to the set of all polynomials. In this case, the operator has a discrete spectrum, and the eigenfunctions are (curiously) the Bernoulli polynomials! (This coincidence of naming was presumably not known to Bernoulli.) Indeed, one can easily verify that where the are the Bernoulli polynomials. This follows because the Bernoulli polynomials obey the identity Note that Another basis is provided by the Haar basis, and the functions spanning the space are the Haar wavelets. In this case, one finds a continuous spectrum, consisting of the unit disk on the complex plane. Given in the unit disk, so that , the functions obey for This is a complete basis, in that every integer can be written in the form The Bernoulli polynomials are recovered by setting and A complete basis can be given in other ways, as well; they may be written in terms of the Hurwitz zeta function. Another complete basis is provided by the Takagi function. This is a fractal, differentiable-nowhere function. The eigenfunctions are explicitly of the form where is the triangle wave. One has, again, All of these different bases can be expressed as linear combinations of one-another. In this sense, they are equivalent. The fractal eigenfunctions show an explicit symmetry under the fractal groupoid of the modular group; this is developed in greater detail in the article on the Takagi function (the blancmange curve). Perhaps not a surprise; the Cantor set has exactly the same set of symmetries (as do the continued fractions.) This then leads elegantly into the theory of elliptic equations and modular forms. 
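The Bernoulli-polynomial eigenfunctions described above can be verified numerically. The following sketch applies the transfer operator of the doubling map, (Lf)(x) = (f(x/2) + f((x+1)/2))/2, to the first few Bernoulli polynomials and checks the eigenvalue 2^(−n); sympy's bernoulli function is used for the polynomials:

import sympy as sp

x = sp.symbols('x')

def transfer(f):
    """Frobenius-Perron operator of the doubling map:
    (Lf)(x) = (f(x/2) + f((x+1)/2)) / 2."""
    return sp.expand((f.subs(x, x/2) + f.subs(x, (x + 1)/2)) / 2)

for n in range(5):
    B = sp.bernoulli(n, x)                     # Bernoulli polynomial B_n(x)
    assert sp.simplify(transfer(B) - B / 2**n) == 0
    print(f"L B_{n} = 2^(-{n}) B_{n}  verified")

The check succeeds because of the multiplication theorem for Bernoulli polynomials, B_n(x/2) + B_n((x+1)/2) = 2^(1−n) B_n(x), which is exactly the eigenvalue identity stated above.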
Relation to the Ising model The Hamiltonian of the zero-field one-dimensional Ising model of spins with periodic boundary conditions can be written as Letting be a suitably chosen normalization constant and be the inverse temperature for the system, the partition function for this model is given by We can implement the renormalization group by integrating out every other spin. In so doing, one finds that can also be equated with the partition function for a smaller system with but spins, provided we replace and with renormalized values and satisfying the equations Suppose now that we allow to be complex and that for some . In that case we can introduce a parameter related to via the equation and the resulting renormalization group transformation for will be precisely the dyadic map: See also Bernoulli process Bernoulli scheme Gilbert–Shannon–Reeds model, a random distribution on permutations given by applying the doubling map to a set of n uniformly random points on the unit interval Notes References Dean J. Driebe, Fully Chaotic Maps and Broken Time Symmetry, (1999) Kluwer Academic Publishers, Dordrecht Netherlands Linas Vepstas, The Bernoulli Map, the Gauss-Kuzmin-Wirsing Operator and the Riemann Zeta, (2004) Chaotic maps
Dyadic transformation
[ "Mathematics" ]
3,235
[ "Functions and mappings", "Mathematical objects", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
1,501,173
https://en.wikipedia.org/wiki/Virtual%20DOS%20machine
Virtual DOS machines (VDM) refer to a technology that allows running 16-bit/32-bit DOS and 16-bit Windows programs when there is already another operating system running and controlling the hardware. Overview Virtual DOS machines can operate either exclusively through typical software emulation methods (e.g. dynamic recompilation) or can rely on the virtual 8086 mode of the Intel 80386 processor, which allows real mode 8086 software to run in a controlled environment by catching all operations which involve accessing protected hardware and forwarding them to the normal operating system (as exceptions). The operating system can then perform an emulation and resume the execution of the DOS software. VDMs generally also implement support for running 16- and 32-bit protected mode software (DOS extenders), which has to conform to the DOS Protected Mode Interface (DPMI). When a DOS program running inside a VDM needs to access a peripheral, Windows will either allow this directly (rarely), or will present the DOS program with a virtual device driver (VDD) which emulates the hardware using operating system functions. A VDM will systematically have emulations for the Intel 8259A interrupt controllers, the 8254 timer chips, the 8237 DMA controller, etc. Concurrent DOS 8086 emulation mode In January 1985 Digital Research together with Intel previewed Concurrent DOS 286 1.0, a version of Concurrent DOS capable of running real mode DOS programs in the 80286's protected mode. The method devised on B-1 stepping processor chips, however, in May 1985 stopped working on the C-1 and subsequent processor steppings shortly before Digital Research was about to release the product. Although with the E-1 stepping Intel started to address the issues in August 1985, so that Digital Research's "8086 emulation mode" worked again utilizing the undocumented LOADALL processor instruction, it was too slow to be practical. Microcode changes for the E-2 stepping improved the speed again. This early implementation can be seen as a predecessor to actual virtual DOS machines. Eventually, Concurrent DOS 286 was reworked from a potential desktop operating system to become FlexOS 286 for industrial use in 1986. It was also licensed by IBM for their 4680 OS in 1986. When Intel's 80386 with its virtual 8086 mode became available (as samples since October 1985 and in quantities since June 1986), Digital Research switched to use this to run real mode DOS programs in virtual DOS machines in protected mode under Concurrent DOS 386 1.0 (February 1987) and FlexOS 386 1.0 (June 1987). However, the architecture of these multiuser multitasking protected mode operating systems was not DOS-based by themselves. Concurrent DOS 386 was later developed to become Multiuser DOS (since 1991) and REAL/32 (since 1995). FlexOS 386 later became 4690 OS in 1993. DOS-based VDMs In contrast to these protected mode operating systems, DOS, by default, is a real-mode operating system, switching to protected mode and virtual 86 mode only on behalf of memory managers and DOS extenders in order to provide access to extended memory or map in memory into the first megabyte, which is accessible to normal DOS programs. DOS-based VDMs appeared with Microsoft's Windows/386 2.01 in September 1987. DOS-based virtual DOS machines were also present in Windows 3.0, 3.1x and Windows for Workgroups 3.1x running in 386 Enhanced Mode as well as in Windows 95, 98, 98 SE and ME. 
One of the characteristics of these solutions running on top of DOS is that the memory layout shown inside virtual DOS machines are virtual instances of the DOS system and DOS driver configuration run before the multitasker is loaded, and that requests which cannot be handled in protected mode are passed down into the system domain to be executed by the underlying DOS system. Similar to Windows 3.x 386 Enhanced Mode in architecture, EMM386 3.xx of Novell DOS 7, Caldera OpenDOS 7.01, DR-DOS 7.02 (and later) also uses DOS-based VDMs to support pre-emptive multitasking of multiple DOS applications, when the EMM386 /MULTI option is used. This component has been under development at Digital Research / Novell since 1991 under the codename "Vladivar" (originally a separate device driver KRNL386.SYS instead of a module of EMM386). While primarily developed for the next major version of DR DOS, released as Novell DOS 7 in 1994, it was also used in the never released DR DOS "Panther" and "Star Trek" project in 1992/1993. OS/2 MVDM Multiple virtual DOS machines (MVDM) are used in OS/2 2.0 and later since 1992. OS/2 MVDMs are considerably more powerful than NTVDM. For example, block devices are supported, and various DOS versions can be booted into an OS/2 MVDM. While the OS/2 1.x DOS box was based on DOS 3.0, OS/2 2.x MVDMs emulate DOS 5.0. Seamless integration of Windows 3.1 and later Win32s applications in OS/2 is a concept looking similar on surface to the seamless integration of XP Mode based on Windows Virtual PC in Windows 7. A redirector in a "guest" VDM or NTVDM allows access on the disks of the OS/2 or NT "host". Applications in a "guest" can use named pipes for communication with their "host". Due to a technical limitation, DOS and 16-bit Windows applications under OS/2 were unable to see more than 2 GB of hard drive space; this was fixed in ArcaOS 5.0.4. Windows NTVDM NTVDM is a system component of all IA-32 editions of the Windows NT family since 1993 with the release of Windows NT 3.1. It allows execution of 16-bit Windows and 16-bit / 32-bit DOS applications. The Windows NT 32-bit user-mode executable which forms the basis for a single DOS (or Windows 3.x) environment is called . In order to execute DOS programs, NTVDM loads which in turn loads , which executes a modified in order to run the application that was passed to NTVDM as command-line argument. The 16-bit real-mode system files are stripped down derivations of their MS-DOS 5.0 equivalents , and with all hard-wired assumptions on the FAT file system removed and using the invalid opcode 0xC4 0xC4 to bop down into the 32-bit NTVDM to handle the requests. Originally, NTDOS reported a DOS version of 30.00 to programs, but this was soon changed to report a version of 5.00 at and 5.50 at to allow more programs to run unmodified. This holds true even in the newest releases of Windows; many additional MS-DOS functions and commands introduced in MS-DOS versions 6.x and in Windows 9x are missing. 16-bit Windows applications by default all run in their own thread within a single NTVDM process. Although NTVDM itself is a 32-bit process and pre-emptively multitasked with respect to the rest of the system, the 16-bit applications within it are cooperatively multitasked with respect to each other. 
When the "Run in separate memory space" option is checked in the Run box or the application's shortcut file, each 16-bit Windows application gets its own NTVDM process and is therefore pre-emptively multitasked with respect to other processes, including other 16-bit Windows applications. NTVDM emulates BIOS calls and tables as well as the Windows 3.1 kernel and 16-bit API stubs. The 32-bit WoW translation layer thunks 16-bit API routines. 32-bit DOS emulation is present for DOS Protected Mode Interface (DPMI) and 32-bit memory access. This layer converts the necessary extended and expanded memory calls for DOS functions into Windows NT memory calls. is the emulation layer that emulates 16-bit Windows. Windows 2000 and Windows XP added Sound Blaster 2.0 emulation. 16-bit virtual device drivers and DOS block device drivers (e.g., RAM disks) are not supported. Inter-process communication with other subsystems can take place through OLE, DDE and named pipes. Since virtual 8086 mode is not available on non-x86-based processors (more specifically, MIPS, DEC Alpha, and PowerPC) NTVDM is instead implemented as a full emulator in these versions of NT, using code licensed from Insignia's SoftPC. Up to Windows NT 3.51, only 80286 emulation is available. With Windows NT 4.0, 486 emulation was added. NTVDM is not included with 64-bit versions of Windows or ARM32 based versions such as Windows RT or Windows 10 IoT Core. The last version of Windows to include the component is Windows 10, as Windows 11 dropped support for 32-bit processors. Commands The following commands are part of the Windows XP MS-DOS subsystem. APPEND DEBUG EDIT EDLIN EXE2BIN FASTOPEN FORCEDOS GRAPHICS LOADFIX LOADHIGH (LH) MEM NLSFUNC SETVER SHARE Security issue In January 2010, Google security researcher Tavis Ormandy revealed a serious security flaw in Windows NT's VDM implementation that allowed unprivileged users to escalate their privileges to SYSTEM level, noted as applicable to the security of all x86 versions of the Windows NT kernel since 1993. This included all 32-bit versions of Windows NT, 2000, XP, Server 2003, Vista, Server 2008, and Windows 7. Ormandy published a proof-of-concept exploit for the vulnerability. Prior to Microsoft's release of a security patch, the workaround for this issue was to turn off 16-bit application support, which prevented older programs (those written for DOS and Windows 3.1) from running. 64-bit versions of Windows are not affected since the NTVDM subsystem is not included. Once the Microsoft security patches had been applied to the affected operating systems the VDM could be safely reenabled. Limitations A limitation exists in the Windows XP 16-bit subsystem (but not in earlier versions of Windows NT) because of the raised per-session limit for GDI objects which causes GDI handles to be shifted to the right by two bits, when converting them from 32 to 16 bits. As a result, the actual handle cannot be larger than 14 bits and consequently 16-bit applications that happen to be served a handle larger than 16384 by the GDI system crash and terminate with an error message. In general, VDM and similar technologies do not satisfactorily run most older DOS games on today's computers. Emulation is only provided for the most basic peripherals, often implemented incompletely. For example, sound emulation in NTVDM is very limited. NT-family versions of Windows only update the real screen a few times per second when a DOS program writes to it, and they do not emulate higher resolution graphics modes. 
Because software mostly runs natively at the speed of the host CPU, all timing loops will expire prematurely. This either makes a game run much too fast or causes the software not even to notice the emulated hardware peripherals, because it does not wait long enough for an answer. Absence in x64 and AArch64 architectures In an x86-64 CPU, virtual 8086 mode is available as a sub-mode only in its legacy mode (for running 16- and 32-bit operating systems), not in the native 64-bit long mode. NTVDM is not supported on x86-64 editions of Windows, even for DOS programs, because NTVDM relies on VM86 CPU mode, rather than the Local Descriptor Table, to provide the 16-bit segments required for addressing, and VM86 mode is unavailable in long mode. NTVDM is also unavailable on AArch64 (or ARM64) versions of Windows (such as Windows RT), because Microsoft did not release a full emulator for this incompatible instruction set as it did on previous incompatible architectures. While NTVDM is not supported on x86-64 and AArch64 versions of Windows, DOS and 16-bit Windows applications can still be run using virtualization software, such as Windows XP Mode in non-home versions of Windows 7 or VMware Workstation. Other methods include using the ReactOS-derived NTVDM, a clean-room reimplementation of the emulator-based NTVDM that Windows NT 4.0 shipped for non-x86 platforms, or OTVDM (WineVDM), a 16-bit Windows interpreter based on MAME's i386 emulation and the 16-bit portion of the popular Windows compatibility layer, Wine (see the section on WineVDM below). WineVDM A VDM known as WineVDM (also known as OTVDM) is included in Wine and CrossOver for Linux and Mac OS X. It has also been ported to Windows itself, as 64-bit versions of Windows do not include the NTVDM subsystem (see above). See also Comparison of platform virtualization software DESQview 386 (since 1988) Wine (software) DOSBox DOSEMU Merge (software) List of Microsoft Windows components Hypervisor Windows on Windows (WoW) Virtual machine (VM) Notes References Further reading External links Virtual DOS Machine Structure Troubleshooting MS-DOS-based programs in Windows XP Troubleshooting an MS-DOS application which hangs the NTVDM subsystem in Windows XP and Windows Server 2003 Troubleshooting MS-DOS-based serial communication programs in Windows 2000 and later NTVDM from ReactOS, the custom standalone variant of NTVDM by Michael Stamper (able to run windowed text-mode MS-DOS software on 64-bit Windows NT systems; this NTVDM is invoked with the syntax ntvdm.exe program.exe, like the start command in Windows). MS-DOS Player for Win32-x64, an MS-DOS emulator that runs many command-line DOS programs such as compilers and other tools, and can also package them into one standalone executable file. vDOS, a DOS emulator designed for running the more "serious" DOS apps (not games) on 64-bit NT systems (effectively a replacement for NTVDM on modern systems). Virtualization DOS technology Windows administration DOS emulators Discontinued Windows components
Virtual DOS machine
[ "Engineering" ]
3,054
[ "Computer networks engineering", "Virtualization" ]
1,501,218
https://en.wikipedia.org/wiki/Baratol
Baratol is an explosive made of a mixture of TNT and barium nitrate, with a small quantity (about 1%) of paraffin wax used as a phlegmatizing agent. TNT typically makes up 25% to 33% of the mixture. Because of the high density of barium nitrate, Baratol has a density of at least 2.5 g/cm3. Baratol, which has a detonation velocity of only about 4,900 metres per second, was used as the slow-detonating explosive in the explosive lenses of some early atomic bomb designs, with Composition B often used as the fast-detonating component. The atomic bombs detonated at Trinity in 1945, the Soviet Joe 1 in 1949, and in India in 1974 all used Baratol and Composition B. Baratol was also used in the Mills bomb, a British hand grenade. References Explosives Trinitrotoluene British inventions
Baratol
[ "Chemistry" ]
192
[ "Explosive chemicals", "Trinitrotoluene", "Explosives", "Explosions" ]
1,501,233
https://en.wikipedia.org/wiki/Double%20fault
On the x86 architecture, a double fault exception occurs if the processor encounters a problem while trying to service a pending interrupt or exception. An example situation in which a double fault would occur is when an interrupt is triggered but the segment in which the interrupt handler resides is invalid. If the processor encounters a problem when calling the double fault handler, a triple fault is generated and the processor shuts down. As double faults can only happen due to kernel bugs, they are rarely caused by user-space programs in a modern protected-mode operating system, unless the program somehow gains kernel access (as some viruses and some low-level DOS programs do). Other processors, such as PowerPC or SPARC, generally save state to predefined and reserved machine registers. A double fault is then a situation where another exception happens while the processor is still using the contents of these registers to process the exception. SPARC processors have four levels of such registers, i.e. they have a 4-window register system. See also Triple fault Further reading Computer errors Central processing unit
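Because a double fault often means the current stack is itself unusable (for example after a stack-segment fault), x86-64 kernels conventionally give the #DF handler its own known-good stack through the Interrupt Stack Table. A minimal osdev-style sketch, assuming a hypothetical idt array, a hypothetical double_fault_stub entry point, and a kernel code selector of 0x08 (the descriptor layout itself follows the Intel manuals):

```c
#include <stdint.h>

/* x86-64 IDT gate descriptor (16 bytes), per the Intel SDM layout. */
struct idt_entry {
    uint16_t offset_low;
    uint16_t selector;      /* kernel code segment selector */
    uint8_t  ist;           /* bits 0-2: Interrupt Stack Table index */
    uint8_t  type_attr;     /* 0x8E: present, DPL 0, interrupt gate */
    uint16_t offset_mid;
    uint32_t offset_high;
    uint32_t reserved;
} __attribute__((packed));

extern struct idt_entry idt[256];       /* hypothetical kernel IDT   */
extern void double_fault_stub(void);    /* hypothetical asm handler  */

void install_double_fault_handler(void)
{
    uint64_t addr = (uint64_t)double_fault_stub;
    struct idt_entry *e = &idt[8];      /* vector 8 is #DF */

    e->offset_low  = (uint16_t)(addr & 0xFFFF);
    e->offset_mid  = (uint16_t)((addr >> 16) & 0xFFFF);
    e->offset_high = (uint32_t)(addr >> 32);
    e->selector    = 0x08;              /* assumed kernel CS */
    e->ist         = 1;                 /* switch to a fresh IST1 stack */
    e->type_attr   = 0x8E;
    e->reserved    = 0;
}
```

The IST switch is what lets the handler run (and report the fault) even when the faulting context's stack pointer is the problem, which would otherwise escalate straight to a triple fault.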
Double fault
[ "Technology" ]
212
[ "Computer errors" ]
1,501,313
https://en.wikipedia.org/wiki/SN%201006
SN 1006 was a supernova that is likely the brightest observed stellar event in recorded history, reaching an estimated −7.5 visual magnitude, and exceeding roughly sixteen times the brightness of Venus. Appearing between April 30 and May 1, 1006, in the constellation of Lupus, this "guest star" was described by observers across China, Japan, modern-day Iraq, Egypt, and Europe, and was possibly recorded in North American petroglyphs. Some reports state it was clearly visible in the daytime. Modern astronomers now consider its distance from Earth to be about 7,200 light-years or 2,200 parsecs. Historic reports Egyptian astrologer and astronomer Ali ibn Ridwan, writing in a commentary on Ptolemy's Tetrabiblos, stated that the "spectacle was a large circular body, 2 to 3 times as large as Venus. The sky was shining because of its light. The intensity of its light was a little more than a quarter that of Moon light" (or perhaps "than the light of the Moon when one-quarter illuminated"). Like all other observers, Ali ibn Ridwan noted that the new star was low on the southern horizon. Some astrologers interpreted the event as a portent of plague and famine. The most northerly sighting is recorded in the Annales Sangallenses maiores of the Abbey of Saint Gall in Switzerland, at a latitude of 47.5° north. Monks at St. Gall provided independent data as to its magnitude and location in the sky, writing that "[i]n a wonderful manner this was sometimes contracted, sometimes diffused, and moreover sometimes extinguished ... It was seen likewise for three months in the inmost limits of the south, beyond all the constellations which are seen in the sky". This description is often taken as probable evidence that the supernova was of type Ia. In The Book of Healing, Iranian philosopher Ibn Sina reported observing this supernova from northeastern Iran. He reported it as a transient celestial object which was stationary and/or tail-less (a star among the stars), that it remained for close to 3 months getting fainter and fainter until it disappeared, that it threw out sparks, that is, it was scintillating and very bright, and that the color changed with time. Some sources state that the star was bright enough to cast shadows; it was certainly seen during daylight hours for some time. According to Songshi, the official history of the Song dynasty (sections 56 and 461), the star seen on May 1, 1006, appeared to the south of constellation Di, between Lupus and Centaurus. It shone so brightly that objects on the ground could be seen at night. By December, it was again sighted in the constellation Di. The Chinese astrologer Zhou Keming, who was on his return to Kaifeng from his duty in Guangdong, interpreted the star to the emperor on May 30 as an auspicious star, yellow in color and brilliant in its brightness, that would bring great prosperity to the state over which it appeared. The reported color yellow should be taken with some suspicion, however, because Zhou may have chosen a favorable color for political reasons. There appear to have been two distinct phases in the early evolution of this supernova. There was first a three-month period at which it was at its brightest; after this period it diminished, then returned for a period of about eighteen months. 
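The comparison with Venus quoted above follows directly from Pogson's magnitude relation; taking Venus at about magnitude −4.5 (an illustrative value, as its brightness varies):

```latex
\frac{F_{\mathrm{SN}}}{F_{\mathrm{Venus}}}
  = 10^{\,0.4\,(m_{\mathrm{Venus}} - m_{\mathrm{SN}})}
  = 10^{\,0.4\,(-4.5 - (-7.5))}
  = 10^{1.2} \approx 16
```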
Petroglyphs by the Hohokam in White Tank Mountain Regional Park, Arizona, and by the Ancestral Puebloans in Chaco Culture National Historical Park, New Mexico, have been interpreted as the first known North American representations of the supernova, though other researchers remain skeptical. The White Tank Mountain Regional Park petroglyph depicts a "star-like object" over a scorpion symbol. It has been contested that the scorpion represents the constellation Scorpius, given a lack of evidence that the Native Americans interpreted the stars of that constellation as a scorpion. Earlier observations discovered from Yemen may indicate a sighting of SN 1006 on April 17, two weeks before its previously assumed earliest observation. Remnant The supernova remnant associated with this event was not identified until 1965, when Doug Milne and Frank Gardner used the Parkes radio telescope to demonstrate a connection to the known radio source PKS 1459−41. This is located near the star Beta Lupi, displaying a 30 arcmin circular shell. X-ray and optical emission from this remnant have also been detected, and during 2010 the H.E.S.S. gamma-ray observatory announced the detection of very-high-energy gamma-ray emission from the remnant. No associated neutron star or black hole has been found, which is the situation expected for the remnant of a Type Ia supernova (a class of explosion believed to completely disrupt its progenitor star). A survey in 2012 to find any surviving companions of the SN 1006 progenitor found no subgiant or giant companion stars, indicating that SN 1006 most likely had double degenerate progenitors; that is, the merging of two white dwarf stars. Remnant SNR G327.6+14.6 has an estimated distance of 2.2 kpc from Earth, making the true linear diameter approximately 20 parsecs. Effect on Earth Research has suggested that type Ia supernovae can irradiate the Earth with significant amounts of gamma-ray flux, compared with the typical flux from the Sun, up to distances on the order of 1 kiloparsec. SN 1006 lies well beyond 1 kiloparsec, and it did not appear to have significant effects on Earth. However, a signal of its outburst can be found in nitrate deposits in Antarctic ice. See also History of supernova observation List of supernova candidates List of supernova remnants List of supernovae References External links Cause of Supernova SN 1006 Revealed (27 Sept 2012 @ Universitat de Barcelona) Stories of SN 1006 in Chinese literature (PowerPoint) National Optical Observatory Press Release for March 2003 Simulation of SN 1006 as it appeared in the southern sky at midnight, May 1, 1006 Entry for supernova remnant of SN 1006 from the Galactic Supernova Remnant Catalogue X-ray image of supernova remnant of SN 1006, as seen with the Chandra X-ray Observatory Ancient rock art may depict exploding star Astronomy Picture of the Day (APOD), March 17, 2003 Astronomy Picture of the Day (APOD), July 4, 2008 Margaret Donsbach: The Scholar's Supernova Lupus (constellation) Supernova remnants Supernovae 1006 Historical supernovae
SN 1006
[ "Chemistry", "Astronomy" ]
1,378
[ "Supernovae", "History of astronomy", "Astronomical events", "Constellations", "Historical supernovae", "Explosions", "Lupus (constellation)" ]
1,501,608
https://en.wikipedia.org/wiki/Montmorillonite
Montmorillonite is a very soft phyllosilicate mineral that forms microscopic crystals, known as clay, when it precipitates from aqueous solution. It is named after Montmorillon in France. Montmorillonite, a member of the smectite group, is a 2:1 clay, meaning that it has two tetrahedral sheets of silica sandwiching a central octahedral sheet of alumina. The particles are plate-shaped with an average diameter around 1 μm and a thickness of 0.96 nm; magnification of about 25,000 times, using an electron microscope, is required to resolve individual clay particles. Members of this group include saponite, nontronite, beidellite, and hectorite. Montmorillonite is a subclass of smectite, a 2:1 phyllosilicate mineral characterized as having greater than 50% octahedral charge; its cation exchange capacity is due to isomorphous substitution of Mg for Al in the central alumina plane. The substitution of lower-valence cations in such instances leaves the nearby oxygen atoms with a net negative charge that can attract cations. In contrast, beidellite is smectite with greater than 50% tetrahedral charge originating from isomorphous substitution of Al for Si in the silica sheet. The individual crystals of montmorillonite clay are not tightly bound, so water can intervene and cause the clay to swell; montmorillonite is therefore a characteristic component of swelling soils. The water content of montmorillonite is variable, and it increases greatly in volume when it absorbs water. Chemically, it is a hydrated sodium calcium aluminium magnesium silicate hydroxide, (Na,Ca)0.33(Al,Mg)2(Si4O10)(OH)2·nH2O. Potassium, iron, and other cations are common substitutes, and the exact ratio of cations varies with source. It often occurs intermixed with chlorite, muscovite, illite, cookeite, and kaolinite. Cave conditions Montmorillonite can be concentrated and transformed within cave environments. The natural weathering of the cave can leave behind concentrations of aluminosilicates which were contained within the bedrock. Montmorillonite can form slowly in solutions of aluminosilicates. High concentrations and long periods of time can aid in its formation. Montmorillonite can then transform to palygorskite under dry conditions and to halloysite-10Å (endellite) in acidic conditions (pH 5 or lower). Halloysite-10Å can further transform into halloysite-7Å by drying. Uses Montmorillonite is used in the oil drilling industry as a component of drilling mud, making the mud slurry viscous, which helps to keep the drill bit cool and to remove drilled solids. It is also used as a soil additive to hold soil water in drought-prone soils, used in the construction of earthen dams and levees, and to prevent the leakage of fluids. It is also used as a component of foundry sand and as a desiccant to remove moisture from air and gases. Montmorillonite clays have been extensively used in catalytic processes. Cracking catalysts have used montmorillonite clays for over 60 years. Other acid-based catalysts use acid-treated montmorillonite clays. Similar to many other clays, montmorillonite swells with the addition of water. Montmorillonites expand considerably more than other clays due to water penetrating the interlayer molecular spaces and concomitant adsorption. The amount of expansion is due largely to the type of exchangeable cation contained in the sample. The presence of sodium as the predominant exchangeable cation can result in the clay swelling to several times its original volume.
Hence, sodium montmorillonite has come to be used as the major constituent in nonexplosive agents for splitting rock in natural stone quarries in an effort to limit the amount of waste, or for the demolition of concrete structures where the use of explosive charges is unacceptable. This swelling property makes montmorillonite-containing bentonite useful also as an annular seal or plug for water wells and as a protective liner for landfills. Other uses include as an anticaking agent in animal feed, in papermaking to minimize deposit formation, and as a retention and drainage aid component. Montmorillonite has also been used in cosmetics. Sodium montmorillonite is also used as the base of some cat litter products, due to its adsorbent and clumping properties. Montmorillonite can be used to remove arsenic from wastewater. Calcined clay products Montmorillonite can be calcined to produce arcillite, a porous material. This calcined clay is sold as a soil conditioner for playing fields and in other soil products, such as bonsai soil, as an alternative to akadama. Medicine and pharmacology Montmorillonite is effective as an adsorbent of heavy metals; however, the impact this has on human health is unknown. Heavy-metal adsorption is assumed to apply only where the clay is in direct contact, so it is unlikely to help when ingested, as the clay almost certainly does not pass through the intestinal mucous membranes. For external use, montmorillonite has been used to treat contact dermatitis. Pet food Montmorillonite clay is added to some dog and cat foods as an anti-caking agent and because it may provide some resistance to environmental toxins, though research on the subject is not yet conclusive. In a fine powder form, it can also be used as a flocculant in ponds. Tossed onto the surface, it drops through the water, attracting minute suspended particles that "cloud" the water, and then settles to the bottom, clearing the water. Koi and goldfish (carp) then actually feed on the "clump", which can aid in the digestion of the fish. It is sold in pond supply shops. Discovery Montmorillonite was first described in 1847 for an occurrence in Montmorillon in the department of Vienne, France, more than 50 years before the discovery of bentonite in the US. It is found in many locations worldwide and known by other names. Recently, a new source of montmorillonite has been explored in the Sulaiman Mountains of Pakistan. See also References Papke, Keith G. Montmorillonite, Bentonite and Fuller's Earth Deposits in Nevada, Nevada Bureau of Mines Bulletin 76, Mackay School of Mines, University of Nevada-Reno, 1970. Mineral Galleries Mineral web Smectite group Magnesium minerals Sodium minerals Calcium minerals Desiccants Medicinal clay Cave minerals Luminescent minerals Monoclinic minerals Minerals in space group 12 Catalysts
Montmorillonite
[ "Physics", "Chemistry" ]
1,410
[ "Catalysis", "Catalysts", "Luminescence", "Luminescent minerals", "Desiccants", "Materials", "Chemical kinetics", "Matter" ]
1,501,761
https://en.wikipedia.org/wiki/Exopolymer
An exopolymer is a biopolymer that is secreted by an organism into the environment (i.e. external to the organism). These exopolymers include the biofilms produced by bacteria to anchor them and protect them from environmental conditions. One type of exopolymer, Transparent Exopolymers (TEP), found in both marine and freshwater ecosystems, are planktonic acidic polysaccharides of a gel-like consistency, originally defined by their ability to be stained visible with acidic Alcian Blue. Their free-floating character sets TEPs apart from other extracellular polymeric substance subgroups, in which exopolymers exist as cell coatings, dissolved slime, or part of biofilm matrices. The formation of Transparent Exopolymer Particles (TEP) is mainly due to the abiotic coagulation of dissolved carbohydrates, which are secreted by phytoplankton communities. TEP have the ability to form larger aggregates because of their strong surface-active properties or "stickiness". This particular property of TEP allows them to act as a glue matrix for other solid particles, including detritus. Transparent Exopolymer Particles (TEP) are also a carbon source for bacteria, playing a significant role in shaping the food web structure and the ocean's carbon cycle. Additionally, the conversion of dissolved organic carbon (DOC) to particulate organic carbon (POC) is an aggregation process that is due to TEP formation. References Biomolecules Polymers
Exopolymer
[ "Chemistry", "Materials_science", "Biology" ]
324
[ "Natural products", "Biochemistry", "Biotechnology stubs", "Organic compounds", "Biochemistry stubs", "Biomolecules", "Molecular biology", "Polymer chemistry", "Polymers", "Structural biology" ]
1,501,800
https://en.wikipedia.org/wiki/Destructive%20distillation
Destructive distillation is a chemical process in which decomposition of unprocessed material is achieved by heating it to a high temperature; the term generally applies to processing of organic material in the absence of air or in the presence of limited amounts of oxygen or other reagents, catalysts, or solvents, such as steam or phenols. It is an application of pyrolysis. The process breaks up or "cracks" large molecules. Coke, coal gas, gaseous carbon, coal tar, ammonia liquor, and coal oil are examples of commercial products historically produced by the destructive distillation of coal. Destructive distillation of any particular inorganic feedstock produces only a small range of products as a rule, but destructive distillation of many organic materials commonly produces very many compounds, often hundreds, although not all products of any particular process are of commercial importance. The distillates are generally of lower molecular weight. Some fractions, however, polymerise or condense small molecules into larger molecules, including heat-stable tarry substances and chars. Cracking feedstocks into liquid and volatile compounds, and polymerising, or the forming of chars and solids, may both occur in the same process, and any class of the products might be of commercial interest. Currently the major industrial application of destructive distillation is to coal. Historically the process of destructive distillation and other forms of pyrolysis led to the discovery of many chemical compounds or the elucidation of their structures before contemporary organic chemists had developed the processes to synthesise or specifically investigate the parent molecules. It was especially in the early days that investigation of the products of destructive distillation, like those of other destructive processes, played parts in enabling chemists to deduce the chemical nature of many natural materials. Well-known examples include the deduction of the structures of pyranoses and furanoses. History In his encyclopedic work Natural History (Naturalis Historia), the Roman naturalist and author Pliny the Elder (23/24–79 CE) describes how, in the destructive distillation of pine wood, two liquid fractions are produced: a lighter one (aromatic oils) and a heavier one (pitch). The lighter fraction is released in the form of gases, which are condensed and collected. Process The process of pyrolysis can be conducted in a distillation apparatus (retort) to form the volatile products for collection. The mass of the product will represent only a part of the mass of the feedstock, because much of the material remains as char, ash, and non-volatile tars. In contrast, combustion consumes most of the organic matter, and the net weight of the products amounts to roughly the same mass as the fuel and oxidant consumed. Destructive distillation and related processes are in effect the modern industrial descendants of traditional charcoal-burning crafts. As such they are of industrial significance in many regions, such as Scandinavia. The modern processes are sophisticated and require careful engineering to produce the most valuable possible products from the available feedstocks. Applications Destructive distillation of wood produces methanol and acetic acid, together with a solid residue of charcoal. Destructive distillation of a tonne of coal can produce 700 kg of coke, 100 liters of ammonia liquor, 50 liters of coal tar and 400 m3 of coal gas.
Destructive distillation is an increasingly promising method for recycling monomers derived from waste polymers. Destructive distillation of natural rubber resulted in the discovery of isoprene, which led to the creation of synthetic rubbers such as neoprene. See also Dry distillation Pyrolysis Thermolysis Cracking (chemistry) References External links What is destructive distillation? Distillation Pyrolysis
Destructive distillation
[ "Chemistry" ]
773
[ "Separation processes", "Pyrolysis", "Oil shale technology", "Organic reactions", "Distillation", "Synthetic fuel technologies" ]
1,501,948
https://en.wikipedia.org/wiki/Crystal%20momentum
In solid-state physics, crystal momentum or quasimomentum is a momentum-like vector associated with electrons in a crystal lattice. It is defined by the associated wave vectors $\mathbf{k}$ of this lattice, according to $\mathbf{p}_{\text{crystal}} = \hbar \mathbf{k}$ (where $\hbar$ is the reduced Planck constant). Frequently, crystal momentum is conserved like mechanical momentum, making it useful to physicists and materials scientists as an analytical tool. Lattice symmetry origins A common method of modeling crystal structure and behavior is to view electrons as quantum mechanical particles traveling through a fixed infinite periodic potential $V(\mathbf{r})$ such that $V(\mathbf{r} + \mathbf{R}) = V(\mathbf{r})$, where $\mathbf{R}$ is an arbitrary lattice vector. Such a model is sensible because crystal ions that form the lattice structure are typically on the order of tens of thousands of times more massive than electrons, making it safe to replace them with a fixed potential structure, and the macroscopic dimensions of a crystal are typically far greater than a single lattice spacing, making edge effects negligible. A consequence of this potential energy function is that it is possible to shift the initial position of an electron by any lattice vector $\mathbf{R}$ without changing any aspect of the problem, thereby defining a discrete symmetry. Technically, an infinite periodic potential implies that the lattice translation operator commutes with the Hamiltonian, assuming a simple kinetic-plus-potential form. These conditions imply Bloch's theorem, which states $\psi_{n\mathbf{k}}(\mathbf{r}) = e^{i\mathbf{k}\cdot\mathbf{r}} u_{n\mathbf{k}}(\mathbf{r})$ with $u_{n\mathbf{k}}(\mathbf{r} + \mathbf{R}) = u_{n\mathbf{k}}(\mathbf{r})$, or that an electron in a lattice, which can be modeled as a single particle wave function $\psi_{n\mathbf{k}}(\mathbf{r})$, finds its stationary state solutions in the form of a plane wave multiplied by a periodic function $u_{n\mathbf{k}}(\mathbf{r})$. The theorem arises as a direct consequence of the aforementioned fact that the lattice symmetry translation operator commutes with the system's Hamiltonian. One of the notable aspects of Bloch's theorem is that it shows directly that steady state solutions may be identified with a wave vector $\mathbf{k}$, meaning that this quantum number remains a constant of motion. Crystal momentum is then conventionally defined by multiplying this wave vector by the reduced Planck constant: $\mathbf{p}_{\text{crystal}} = \hbar \mathbf{k}$. While this is in fact identical to the definition one might give for regular momentum (for example, by treating the effects of the translation operator by the effects of a particle in free space), there are important theoretical differences. For example, while regular momentum is completely conserved, crystal momentum is only conserved to within a lattice vector. For example, an electron can be described not only by the wave vector $\mathbf{k}$, but also with any other wave vector $\mathbf{k}'$ such that $\mathbf{k}' = \mathbf{k} + \mathbf{G}$, where $\mathbf{G}$ is an arbitrary reciprocal lattice vector. This is a consequence of the fact that the lattice symmetry is discrete as opposed to continuous, and thus its associated conservation law cannot be derived using Noether's theorem. Physical significance The phase modulation of the Bloch state, $e^{i\mathbf{k}\cdot\mathbf{r}}$, is the same as that of a free particle with momentum $\hbar \mathbf{k}$, i.e. $\mathbf{k}$ gives the state's periodicity, which is not the same as that of the lattice. This modulation contributes to the kinetic energy of the particle (whereas the modulation is entirely responsible for the kinetic energy of a free particle). In regions where the band is approximately parabolic, the crystal momentum is equal to the momentum of a free particle with momentum $\hbar \mathbf{k}$ if we assign the particle an effective mass that is related to the curvature of the parabola.
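The parabolic-band statement above can be made explicit; in the standard form (with $E_0$ and $k_0$ marking the band extremum), the effective mass is read off from the curvature:

```latex
E(k) \approx E_0 + \frac{\hbar^2 (k - k_0)^2}{2 m^*},
\qquad
m^* = \hbar^2 \left( \frac{\mathrm{d}^2 E}{\mathrm{d} k^2} \right)^{-1}
```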
Relation to velocity Crystal momentum corresponds to the physically measurable concept of velocity according to $\mathbf{v}_n(\mathbf{k}) = \frac{1}{\hbar} \nabla_{\mathbf{k}} E_n(\mathbf{k})$. This is the same formula as the group velocity of a wave. More specifically, due to the Heisenberg uncertainty principle, an electron in a crystal cannot have both an exactly-defined k and an exact position in the crystal. It can, however, form a wave packet centered on momentum k (with slight uncertainty), and centered on a certain position (with slight uncertainty). The center position of this wave packet changes as the wave propagates, moving through the crystal at the velocity v given by the formula above. In a real crystal, an electron moves in this way—traveling in a certain direction at a certain speed—for only a short period of time, before colliding with an imperfection in the crystal that causes it to move in a different, random direction. These collisions, called electron scattering, are most commonly caused by crystallographic defects, the crystal surface, and random thermal vibrations of the atoms in the crystal (phonons). Response to electric and magnetic fields Crystal momentum also plays a seminal role in the semiclassical model of electron dynamics, where it follows from the acceleration theorem that it obeys the equations of motion (in cgs units) $\hbar \dot{\mathbf{k}} = -e \left( \mathbf{E} + \frac{1}{c}\, \mathbf{v} \times \mathbf{B} \right)$. Here perhaps the analogy between crystal momentum and true momentum is at its most powerful, for these are precisely the equations that a free-space electron obeys in the absence of any crystal structure. Crystal momentum also earns its chance to shine in these types of calculations, for, in order to calculate an electron's trajectory of motion using the above equations, one need only consider external fields, while attempting the calculation from a set of equations of motion based on true momentum would require taking into account individual Coulomb and Lorentz forces of every single lattice ion in addition to the external field. Applications Angle-resolved photo-emission spectroscopy (ARPES) In angle-resolved photo-emission spectroscopy (ARPES), irradiating light on a crystal sample results in the ejection of an electron away from the crystal. Throughout the course of the interaction, one is allowed to conflate the two concepts of crystal and true momentum and thereby gain direct knowledge of a crystal's band structure. That is to say, an electron's crystal momentum inside the crystal becomes its true momentum after it leaves, and the true momentum may be subsequently inferred from the equation $\hbar k_{\parallel} = \sqrt{2 m E_{\text{kin}}}\,\sin\theta$ by measuring the angle $\theta$ and kinetic energy $E_{\text{kin}}$ at which the electron exits the crystal, where $m$ is a single electron's mass. Because crystal symmetry in the direction normal to the crystal surface is lost at the crystal boundary, crystal momentum in this direction is not conserved. Consequently, the only directions in which useful ARPES data can be gleaned are directions parallel to the crystal surface. References Electronic band structures Moment (physics) Momentum
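To make the ARPES relation above concrete with illustrative numbers (a photoelectron of kinetic energy 20 eV detected 30° from the surface normal; the numerical prefactor bundles the electron mass and $\hbar$):

```latex
k_{\parallel} \approx 0.512\,\sqrt{E_{\mathrm{kin}}\,[\mathrm{eV}]}\;\sin\theta\ \ \mathrm{\AA}^{-1}
  = 0.512 \times \sqrt{20} \times \sin 30^{\circ}
  \approx 1.1\ \mathrm{\AA}^{-1}
```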
Crystal momentum
[ "Physics", "Chemistry", "Materials_science", "Mathematics" ]
1,204
[ "Electron", "Physical quantities", "Quantity", "Electronic band structures", "Condensed matter physics", "Momentum", "Moment (physics)" ]
1,501,977
https://en.wikipedia.org/wiki/Receptor-mediated%20endocytosis
Receptor-mediated endocytosis (RME), also called clathrin-mediated endocytosis, is a process by which cells absorb metabolites, hormones, proteins – and in some cases viruses – by the inward budding of the plasma membrane (invagination). This process forms vesicles containing the absorbed substances and is strictly mediated by receptors on the surface of the cell. Only the receptor-specific substances can enter the cell through this process. Process Although receptors and their ligands can be brought into the cell through a few mechanisms (e.g. caveolin and lipid raft), clathrin-mediated endocytosis remains the best studied. Clathrin-mediated endocytosis of many receptor types begins with the ligands binding to receptors on the cell plasma membrane. The ligand and receptor will then recruit adaptor proteins and clathrin triskelions to the plasma membrane around where invagination will take place. Invagination of the plasma membrane then occurs, forming a clathrin-coated pit. Other receptors can themselves nucleate a clathrin-coated pit, which forms around the receptor. A mature pit will be cleaved from the plasma membrane through the use of membrane-binding and fission proteins such as dynamin (as well as other BAR domain proteins), forming a clathrin-coated vesicle that then sheds its clathrin coat and typically fuses with a sorting endosome. Once fused, the endocytosed cargo (receptor and/or ligand) can then be sorted to lysosomal, recycling, or other trafficking pathways. Function The function of receptor-mediated endocytosis is diverse. It is widely used for the specific uptake of certain substances required by the cell (examples include LDL via the LDL receptor or iron via transferrin). The role of receptor-mediated endocytosis is well recognized in the downregulation of transmembrane signal transduction, but it can also promote sustained signal transduction. The activated receptor becomes internalised and is transported to late endosomes and lysosomes for degradation. However, receptor-mediated endocytosis is also actively implicated in transducing signals from the cell periphery to the nucleus. This became apparent when it was found that the association and formation of specific signaling complexes via clathrin-mediated endocytosis is required for the effective signaling of hormones (e.g. EGF). Additionally, it has been proposed that the directed transport of active signaling complexes to the nucleus might be required to enable signaling, because random diffusion is too slow, and mechanisms permanently downregulating incoming signals are strong enough to shut down signaling completely without additional signal-transducing mechanisms. Experiments Using fluorescent or EM-visible dyes to tag specific molecules in living cells, it is possible to follow the internalization of cargo molecules and the evolution of a clathrin-coated pit by fluorescence microscopy and immuno-electron microscopy. Since the process is non-specific, the ligand can be a carrier for larger molecules. If the target cell has a known specific pinocytotic receptor, drugs can be attached and will be internalized. To achieve internalisation of nanoparticles into cells, such as T cells, antibodies can be used to target the nanoparticles to specific receptors on the cell surface (such as CCR5). This is one method of improving drug delivery to immune cells.
The development of photoswitchable peptide inhibitors of protein-protein interactions involved in clathrin-mediated endocytosis (Traffic Lights peptides) and photoswitchable small-molecule inhibitors of dynamin (Dynazos) has been reported. These photopharmacological compounds allow spatiotemporal control of endocytosis with light. Characteristics Induction within minutes of exposure to excess ligand. The formation of these vesicles is sensitive to inhibition by wortmannin. The initiation of vesicle formation can be delayed/inhibited by temperature variations. See also Bulk endocytosis Endocytosis Non-specific, adsorptive pinocytosis Phagocytosis Pinocytosis Viropexis References External links CytoChemistry.net – A lecture on RME with some nice pictures Cellular processes
Receptor-mediated endocytosis
[ "Biology" ]
894
[ "Cellular processes" ]
1,501,989
https://en.wikipedia.org/wiki/Orbital%20plane%20of%20reference
In celestial mechanics, the orbital plane of reference (or orbital reference plane) is the plane used to define orbital elements (positions). The two main orbital elements that are measured with respect to the plane of reference are the inclination and the longitude of the ascending node. Depending on the type of body being described, there are four different kinds of reference planes that are typically used: The ecliptic or invariable plane for planets, asteroids, comets, etc. within the Solar System, as these bodies generally have orbits that lie close to the ecliptic. The equatorial plane of the orbited body for satellites orbiting with small semi-major axes The local Laplace plane for satellites orbiting with intermediate-to-large semi-major axes The plane tangent to celestial sphere for extrasolar objects On the plane of reference, a zero-point must be defined from which the angles of longitude are measured. This is usually defined as the point on the celestial sphere where the plane crosses the prime hour circle (the hour circle occupied by the First Point of Aries), also known as the equinox. See also Fundamental plane Plane (geometry) References Reference Planes Spherical astronomy Orbits Planes (geometry)
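In vector form, the two elements measured against the reference plane follow from standard geometry; a short sketch, writing $\hat{z}$ for the reference-plane normal, $\vec{h} = \vec{r} \times \vec{v}$ for the orbital angular momentum (so $\vec{h}$ is normal to the orbital plane), and taking the zero-point direction as the $x$-axis:

```latex
\cos i = \frac{\hat{z} \cdot \vec{h}}{\lVert \vec{h} \rVert},
\qquad
\vec{N} = \hat{z} \times \vec{h},
\qquad
\Omega = \operatorname{atan2}(N_y,\ N_x)
```

Here $\vec{N}$ points toward the ascending node, so $\Omega$ is the longitude of the ascending node measured in the reference plane from the zero point.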
Orbital plane of reference
[ "Mathematics" ]
241
[ "Planes (geometry)", "Mathematical objects", "Infinity" ]
1,502,597
https://en.wikipedia.org/wiki/Sex-chromosome%20dosage%20compensation
Dosage compensation is the process by which organisms equalize the expression of genes between members of different biological sexes. Across species, different sexes are often characterized by different types and numbers of sex chromosomes. In order to neutralize the large difference in gene dosage produced by differing numbers of sex chromosomes among the sexes, various evolutionary branches have acquired various methods to equalize gene expression among the sexes. Because sex chromosomes contain different numbers of genes, different species of organisms have developed different mechanisms to cope with this inequality. Replicating the actual gene is impossible; thus organisms instead equalize the expression from each gene. For example, in humans, female (XX) cells randomly silence the transcription of one X chromosome, and transcribe all information from the other, expressed X chromosome. Thus, human females have the same number of expressed X-linked genes per cell as do human males (XY), both sexes having essentially one X chromosome per cell, from which to transcribe and express genes. Different lineages have evolved different mechanisms to cope with the differences in gene copy numbers between the sexes that are observed on sex chromosomes. Some lineages have evolved dosage compensation, an epigenetic mechanism which restores expression of X or Z specific genes in the heterogametic sex to the same levels observed in the ancestor prior to the evolution of the sex chromosome. Other lineages equalize the expression of the X- or Z-specific genes between the sexes, but not to the ancestral levels, i.e. they possess incomplete compensation with "dosage balance". One example of this is X-inactivation, which occurs in humans. The third documented type of gene dose regulatory mechanism is incomplete compensation without balance (sometimes referred to as incomplete or partial dosage compensation). In this system, gene expression of sex-specific loci is reduced in the heterogametic sex, i.e. the females in ZZ/ZW systems and the males in XX/XY systems. There are three main mechanisms of achieving dosage compensation which are widely documented in the literature and which are common to most species. These include random inactivation of one female X chromosome (as observed in humans and Mus musculus; this is called X-inactivation), a two-fold increase in the transcription of a single male X chromosome (as observed in Drosophila melanogaster), and decreased transcription by half in both of the X chromosomes of a hermaphroditic organism (as observed in Caenorhabditis elegans). These mechanisms have been widely studied and manipulated in model organisms commonly used in the laboratory research setting. However, there are also other, less common forms of dosage compensation, which are not as widely researched and are sometimes specific to only one species (as observed in certain bird and monotreme species). Random inactivation of one ♀ X One logical way to equalize gene expression amongst males and females that follow an XX/XY sex differentiation scheme would be to decrease or altogether eliminate the expression of one of the X chromosomes in an XX, or female, homogametic individual, such that both males and females then express only one X chromosome. This is the case in many mammalian organisms, including humans and mice.
The evidence for this mechanism of dosage compensation was discovered prior to scientists' understanding of what its implications were. In 1949, Murray Barr and Ewert Bertram published data describing the presence of "nucleolar satellites", which they observed were present in the mature somatic tissue of different female species. Further characterization of these satellites revealed that they were actually packages of condensed heterochromatin, but a decade would pass before scientists grasped the significance of this specialized DNA. Then, in 1959, Susumu Ohno proved that these satellite-like structures found exclusively in female cells were actually derived from female X chromosomes. He called these structures Barr bodies after one of the investigators who originally documented their existence. Ohno's studies of Barr bodies in female mammals with multiple X chromosomes revealed that such females used Barr bodies to inactivate all but one of their X chromosomes. Thus, Ohno described the "n-1" rule to predict the number of Barr bodies in a female with n X chromosomes in her karyotype. Simultaneously, Mary F. Lyon began investigating manipulations of X-linked traits that had phenotypically visible consequences, particularly in mice, whose fur color is a trait intimately linked to the X chromosome. Building on work done by Ohno and his colleagues, Lyon eventually proved that either the maternal or paternal X chromosome is randomly inactivated in every cell of the female body in the species she was studying, which explained the heterogeneous fur patterns she observed in her mosaic mice. This process is known as X-inactivation, and is sometimes referred to as "lyonization". This discovery can be easily extrapolated to explain the mixed color patterns observed in the coats of tortoiseshell cats. The fur patterns characteristic of tortoiseshell cats are found almost exclusively in females, because only they randomly inactivate one X chromosome in every somatic hair cell. Thus, presuming that hair-color-determining genes are X-linked, it makes sense that whether the maternal or paternal X chromosome is inactivated in a particular hair cell can result in differential fur color expression. Extending Lyon's discoveries, in 1962 Ernest Beutler used female fibroblast cell lineages grown in culture to demonstrate the heritability of lyonization, or random X-inactivation. By analyzing the differential expression of two existing, viable alleles of the X-linked gene for the enzyme glucose-6-phosphate dehydrogenase (G6PD), Beutler observed that the inactivation of the gene was heritable across passaged generations of the cells. This pattern of dosage compensation, caused by random X-inactivation, is regulated across development in female mammals, following concerted patterns throughout development; for example, at the beginning of most female mammal development, both X chromosomes are initially expressed, but gradually undergo epigenetic processes to eventually achieve random inactivation of one X. In germ cells, inactivated X chromosomes are then once again activated to ensure their expression in gametes produced by female mammals. Thus, dosage compensation in mammals is largely achieved through the silencing of one of two female X chromosomes via X-inactivation. This process involves histone tail modifications, DNA methylation patterns, and reorganization of large-scale chromatin structure encoded by the Xist gene.
In spite of these extensive modifications, not all genes along the X chromosome are subject to X-inactivation; active expression at some loci is required for homologous recombination with the pseudo-autosomal region (PAR) of the Y chromosome during meiosis. Additionally, 10-25% of human X chromosome genes, and 3-7% of mouse X chromosome genes outside of the PARs, show weak expression from the inactive X chromosome. Random X-inactivation demands that the cell can determine whether it contains more than one active X chromosome before acting to silence any extraneous X chromosome(s). This process is known as "counting". The exact molecular mechanism of counting is still unknown, but a popular model posits that autosomes produce factors that repress X-inactivation, while the X chromosomes produce products that promote X-inactivation. These two conflicting forces are balanced such that, if there is more than one X chromosome, X-inactivation will occur, but if there is only one, the autosomal products will successfully prevent the process. Not all random X-inactivation is entirely random. Some alleles, generally mutations in the X-inactivation center on the X chromosome, have been demonstrated to confer a bias towards inactivation for the chromosome on which they sit. Truly random X-inactivation may also appear to be non-random if one X chromosome carries a deleterious mutation. This can result in fewer cells expressing the lower-fitness X chromosome being present in the body, as these cells are selected against. Two-fold increased transcription of a single ♂ X Another mechanism common for achieving equal X-related genetic expression between males and females involves two-fold increased transcription of a single male X chromosome. Thus, heterogametic male organisms with one X chromosome may match the level of expression achieved in homogametic females with two active X chromosomes. This mechanism is observed in Drosophila. The concept of dosage compensation actually originated from an understanding of organisms in which males upregulated X-linked genes two-fold, and was much later extended to account for the observation of the once mysterious Barr bodies. As early as 1932, H.J. Muller carried out a set of experiments which allowed him to track the expression of eye color in flies, which is an X-linked trait. Muller introduced a mutant gene that caused loss of pigmentation in fly eyes, and subsequently noted that males with only one copy of the mutant gene had similar pigmentation to females with two copies of the mutant gene. This led Muller to coin the phrase "dosage compensation" to describe the observed phenomenon of gene expression equalization. Despite these advances, it was not until Ardhendu Mukherjee and W. Beermann performed more advanced autoradiography experiments in 1965 that scientists could confirm that transcription of genes in the single male X chromosome was double that observed in the two female X chromosomes. Mukherjee and Beermann confirmed this by designing a cellular autoradiography experiment that allowed them to visualize the incorporation of tritiated uridine into the ribonucleic acid of the X chromosomes. Their studies showed equal levels of incorporation in the single male X chromosome and the two female X chromosomes. Thus, the investigators concluded that the two-fold increase in the rate of RNA synthesis in the X chromosome of the male relative to those of the female could account for Muller's hypothesized dosage compensation.
In the case of two-fold increased transcription of a single male X chromosome, there is no use for a Barr body, and the male organism must use different genetic machinery to increase the transcriptional output of its single X chromosome. It is common in such organisms for the Y chromosome to be necessary for male fertility, but not for it to play an explicit role in sex determination. In Drosophila, for example, the sex lethal (SXL) gene acts as a key regulator of sexual differentiation and maturation in somatic tissue; in XX animals, SXL is activated to repress increased transcription, while in XY animals SXL is inactive and allows male development to proceed via increased transcription of the single X. Several binding sites exist on the Drosophila X chromosome for the dosage compensation complex (DCC), a ribonucleoprotein complex; these binding sites have varying levels of affinity, presumably for varying expression of specific genes. The Male Specific Lethal complex, composed of protein and RNA, binds and selectively modifies hundreds of X-linked genes, increasing their transcription to levels comparable to female D. melanogaster. In organisms that use this method of dosage compensation, the presence of one or more X chromosomes must be detected early on in development, as failure to initiate the appropriate dosage compensation mechanisms is lethal. Male specific lethal proteins (MSLs) are a family of four proteins that bind to the X chromosome exclusively in males. The name "MSL" is used because mutations in these genes cause the inability to effectively upregulate X-linked genes appropriately, and are thus lethal to males only and not their female counterparts. SXL regulates pre-messenger RNA in males to differentially splice MSLs and result in the appropriate increase in X chromosome transcription observed in male Drosophila. The immediate target of SXL is male specific lethal-2 (MSL-2). Current dogma suggests that the binding of SXL at multiple sites along the MSL-2 transcript in females prevents proper MSL-2 translation, and thus, as previously stated, represses the possibility for X-linked genetic upregulation in females. However, all other transcription factors in the MSL family—maleless, MSL-1, and MSL-3—are able to act when SXL is not expressed, as is the case in males. These factors act to increase male X chromosome transcriptional activity. Histone acetylation and the consequent upregulation of X-linked genes in males are dictated by the MSL complex. Specifically, special roX non-coding RNAs on the MSL complexes facilitate binding to the single male X chromosome, and dictate acetylation of specific loci along the X chromosome as well as the formation of euchromatin. Though these RNAs bind at specific sites along the male X chromosome, their effects spread along the length of the chromosome and have the ability to influence large-scale chromatin modifications. This spreading epigenetic regulation along the male X chromosome is thought to have implications for understanding the transfer of epigenetic activity along long genomic stretches. Decreased transcription of both hermaphroditic Xs by half Other species that do not follow the previously discussed conventions of XX females and XY males must find alternative ways to equalize X-linked gene expression among differing sexes. For example, in Caenorhabditis elegans (or C.
elegans), sex is determined by the ratio of X chromosomes relative to autosomes; worms with two X chromosomes (XX worms) develop as hermaphrodites, whereas those with only one X chromosome (XO worms) develop as males. This system of sex determination is unique, because there is no male specific chromosome, as is the case in XX/XY sex determination systems. However, as is the case with the previously discussed mechanisms of dosage compensation, failure to express X-linked genes appropriately can still be lethal. In this XX/XO sex determination system, gene expression on the X chromosome is equalized by downregulating expression of genes on both X chromosomes of hermaphroditic XX organisms by half. In these XX organisms, the dosage compensation complex (DCC) is assembled on both X chromosomes to allow for this tightly regulated change in transcription levels. The DCC is often compared to the condensin complex, which is conserved across the mitotic and meiotic processes of many species. This complex is crucial to the condensation and segregation of chromosomes during both meiosis and mitosis. Because data substantiates the theory that dosage compensation in other species is caused by chromatin-wide modifications, many theorize that the DCC in particular functions similar to the condensin complex in its ability to condense or remodel the chromatin of the X chromosome. The role of the DCC in this form of dosage compensation was postulated by Barbara J. Meyer in the 1980s, and its individual components and their cooperative function were later parsed out by her lab. Notably, in 1999, data from Meyer's lab showed that SDC-2 is a particularly important transcriptional factor for targeting the DCC to the X chromosome and for assembling DCC components onto the X chromosomes in XX embryos. More recently, Meyer's lab has shown that proteins known as X-linked signal elements (XSEs) operate in concert with SDC-2 to differentially repress and activate other genes in the dosage compensation pathway. By selectively mutating a panel of genes hypothesized to contribute to dosage compensation in worms, Meyer's group demonstrated which XSEs specifically play a role in determining normal dosage compensation. They found that during embryonic development, several X-linked genes—including sex-1, sex-2, fox-1, and ceh-39—act in a combinatorial fashion to selectively repress transcriptional activity of the xol-1 gene in hermaphrodites. Xol-1 expression is tightly regulated during early development, and is considered the most upstream gene in sex determination of C. elegans. In fact, xol-1 is often referred to in the literature as the master sex regulatory gene of C. elegans. XX C. elegans embryos have much lower xol-1 expression than their XO counterparts, resulting from overall increases in the amount of SEX-1, SEX-2, CEH-39, and FOX-1 transcription produced in the female embryos. This consequent decrease in xol-1 expression then allows higher SDC-2 expression levels, which aids in the formation and function of the DCC complex in the XX hermaphroditic worms, and in turn results in equalized expression of X-linked genes in the hermaphrodite. Though all of the above-mentioned XSEs act to reduce xol-1 expression, experimentally reducing expression levels of these individual XSEs has been shown to have a minimal effect on sex determination and successful dosage compensation. 
This could be in part because these genes encode different proteins that act cooperatively rather than in an isolated fashion; for example, SEX-1 is a nuclear hormone receptor, while FOX-1 is an RNA-binding protein with properties capable of inducing post-transcriptional modifications in the xol-1 target. However, reducing the level of more than one XSE in different combinational permutations seems to have an additive effect on ensuring proper sex determination and resultant dosage compensation mechanics. This supports the hypothesis that these XSEs act together to achieve the desired sex determination and dosage compensation fate. Thus, in this model organism, the achieved level of X-chromosome expression is directly correlated to the activation of multiple XSEs that ultimately function to repress xol-1 expression in a developing worm embryo. Other species-specific methods The ZZ/ZW sex system is used by most birds, as well as some reptiles and insects. In this system the Z is the larger chromosome, so the males (ZZ) must silence some genetic material to compensate for the female's (ZW) smaller W chromosome. Instead of silencing the entire chromosome as humans do, male chickens (the model ZZ organism) seem to engage in selective Z silencing, in which they silence only certain genes on the extra Z chromosome. Thus, male chickens express an average of 1.4-1.6 times the Z chromosome DNA expressed by female chickens. The Z chromosome expression of male zebra finches and chickens is higher than the autosomal expression rates, whereas X chromosome expression in female humans is equal to autosomal expression rates, illustrating clearly that both male chickens and male zebra finches practice incomplete silencing. Few other ZZ/ZW systems have been analyzed as thoroughly as the chicken; however, a recent study on silkworms revealed similar levels of unequal compensation across male Z chromosomes. Z-specific genes were over-expressed in males when compared to females, and a few genes had equal expression in both male and female Z chromosomes. In chickens, most of the dosage compensated genes exist on the Zp, or short, arm of the chromosome while the non-compensated genes are on the Zq, or long, arm of the chromosome. The compensated (silenced) genes on Zp resemble a region on the primitive platypus sex chromosome, suggesting an ancestor to the XX/XY system.
However, more recent studies have shown that the genes on the Z chromosome which are not inactivated in birds may play an important role in recruiting dosage compensation machinery to the Z chromosome in ZZ organisms. In particular, one of these genes, ScII, has been demonstrated to be an ortholog of xol-1, the master sex regulatory gene of C. elegans. Thus, the function of the selective silencing may be to spare from dosage compensation those genes crucial for sex determination or homologous pairing. Recent studies are focusing on how epigenetic mechanisms could contribute to dosage compensation in birds, with a particular emphasis on methylation. It is already known that some regions on the Z chromosome of birds, called MHM (male hypermethylated) regions, are heavily methylated. So far, only two such regions have been well studied: one located at around 27.3 Mb and the other at 73.16–73.17 Mb (designated MHM2). The first MHM region discovered consists of tandem repeats of a BamHI 2.2-kb sequence and shows a high degree of cytosine methylation at CpG islands (stretches of DNA rich in cytosine-phosphate-guanine dinucleotides) on both copies of the Z chromosome in males, and less so on the Z chromosome of females. This region is transcribed only in females and produces a long non-coding RNA, which gathers at the transcription site next to the DMRT1 gene. The second MHM region, at 73.16 Mb, is not as extensively studied due to its recent discovery. It appears to be smaller in size and contains three long non-coding RNA sequences with higher expression in females. Findings also suggest that the mechanism is more gene-specific, as certain genetic variants, called methylation quantitative trait loci (meQTLs), can affect methylation. These meQTLs are hypothesized to impact a larger part of the Z chromosome in males and are mostly located on autosomes, affecting the Z chromosome in trans. Monotremes Monotremes are an order of basal mammals, comprising the platypus and four species of echidna, all of which lay eggs. Monotremes use an XX/XY system, but unlike other mammals they have more than two sex chromosomes: the male short-beaked echidna, for example, has nine sex chromosomes (5 Xs and 4 Ys), and the male platypus has 5 Xs and 5 Ys. The platypus is the monotreme species whose mechanism of sex determination has been most extensively studied, and there is some contention in academia about the evolutionary origin and proper taxonomy of platypuses. A recent study revealed that four platypus X chromosomes, as well as a Y chromosome, are homologous to some regions on the avian Z chromosome. Specifically, platypus X1 shares homology with the chicken Z chromosome, and both share homology with human chromosome 9. This homology is important when considering the mechanism of dosage compensation in monotremes. In 50% of female platypus cells, only one of the alleles on these X chromosomes is expressed, while in the remaining 50% multiple alleles are expressed. This, combined with the homology to the chicken Z chromosome and human chromosome 9, implies that this level of incomplete silencing may be the ancestral form of dosage compensation.
Regardless of their ambiguous evolutionary history, platypuses have been empirically determined to follow an XY sex-determination system, with females possessing five pairs of X chromosomes as the homogametic sex, and males possessing five X and five Y chromosomes as the heterogametic sex. Because the platypus genome has yet to be completely sequenced (including one of the X chromosomes), the definitive mechanism of dosage compensation that platypuses follow is still under investigation. Research from the laboratory of Jennifer Graves used qPCR and SNP analysis of BACs containing various X-chromosome genes to determine whether multiple alleles of particular X-linked genes were being expressed at once, or were instead being dosage compensated. Her group found that in female platypuses, some X-linked genes expressed an allele from only one X chromosome, while other genes expressed multiple alleles. This appears to be a system similar to the selective silencing method of dosage compensation observed in birds. However, about half of all X-linked genes also seemed to stochastically express only one active copy, reminiscent of the random X-inactivation observed in humans. These findings suggest that platypuses may employ a hybrid form of dosage compensation that combines features of mammals and birds. Understanding the evolution of such a system may have implications for solidifying the true ancestral lineage of monotremes. Plants In addition to humans and flies, some plants also make use of XX/XY dosage compensation systems. Silene latifolia plants are either male (XY) or female (XX), with the Y chromosome smaller and expressing fewer genes than the X chromosome. Two separate studies have shown male S. latifolia expression of X-linked genes to be about 70% of that in females. If S. latifolia did not practice dosage compensation, the expected level of X-linked gene expression in males would be 50% that of females; the plant therefore practices some degree of dosage compensation, but because male expression does not reach 100% of female expression, it has been suggested that the dosage compensation system of S. latifolia is still evolving. Additionally, in plant species that lack dimorphic sex chromosomes, dosage compensation can occur when aberrant meiotic events or mutations result in either aneuploidy or polyploidy. Genes on the affected chromosome may be upregulated or downregulated to compensate for the change in the normal number of chromosomes present. Reptiles Research into dosage compensation has been carried out in six species of toxicoferan reptiles and in one species of softshell turtle. Two species of caenophidian snake (one belonging to the family Viperidae and the other to the Colubridae) have been investigated; both exhibit female-heterogametic (ZZ/ZW) sex determination and have incomplete compensation without dosage balance. The Komodo dragon exhibits incomplete compensation without dosage balance in its independently evolved ZZ/ZW system. Incomplete compensation without dosage balance was also seen in the XX/XY system of Basiliscus vittatus and in the multiple neo-sex chromosomes (with male heterogamety) of the pygopodid gecko Lialis burtonis. The green anole (Anolis carolinensis; Dactyloidea) has XX/XY sex determination and, unlike the other squamates studied to date, complete dosage compensation with dosage balance.
A lack of dosage balance in the expression of Z-linked genes was also found in the Florida softshell turtle (Apalone ferox), which has ZZ/ZW sex chromosomes. X chromosome inactivation and embryonic stem cells XCI is initiated very early during female embryonic development, or upon differentiation of female embryonic stem (ES) cells, and results in the inactivation of one X chromosome in every female somatic cell. The process begins around the two- to eight-cell stage and is maintained in the developing extra-embryonic tissues of the embryo, including the fetal placenta. Xist RNA induces heterochromatinization of the X chromosome by attracting chromatin modifiers involved in gene silencing. Xist RNA is tightly associated with the Xi and is required for X chromosome inactivation to occur in cis. Knockout studies in female ES cells and mice have shown that X chromosomes bearing a deletion of the Xist gene are unable to inactivate the mutated X. Most human female ES cell lines display an inactivated X chromosome already in the undifferentiated state, characterized by XIST expression, XIST coating, and accumulated markers of heterochromatin on the Xi. It is widely thought that human embryos do not employ XCI prior to implantation. Female embryos show an accumulation of Xist RNA on one of the two X chromosomes beginning around the 8-cell stage. Xist RNA accumulates through the morula and blastocyst stages and is associated with transcriptional silencing of the Xist-coated chromosomal region, indicating that dosage compensation has occurred. Recently, however, it has become increasingly apparent that XCI of the paternal X chromosome is already present from the 4-cell stage onward, rather than the 8-cell stage, in all cells of preimplantation mouse embryos. Xist, Xite, and Tsix and their roles in X-inactivation Xite and Xist are both long non-coding RNAs that regulate and facilitate the process of X-inactivation and are important in silencing genes on the X chromosome that is being inactivated. They work in combination with Tsix, an antisense non-coding RNA that downregulates the effects of Xist on the chromosome from which it is expressed; at the start of X-inactivation, Tsix is expressed from the maternal X chromosome. These three RNAs regulate the transient pairing of the two X chromosomes, making both chromosomes available for the inhibitory actions that follow. Tsix and Xite have basic lncRNA functions in addition to their roles in X-inactivation, and their regulation of the X-X pair helps ensure mutually exclusive silencing, so that exactly one of the two X chromosomes is inactivated. Xite and Tsix are both essential to these orientational processes in cis and in trans, as loss of Tsix and Xite in trans perturbs pairing and counting. Once Xist is turned off and no longer regulates the process, Tsix expression slowly decreases as well, until both RNAs are no longer controlled by the Xic. Xite is a locus harboring intergenic transcription start sites at hypersensitive sites of allelic differences. When X-inactivation begins, Xite transcription increases and signals for the downregulation of Tsix in cis on the future silent X chromosome, while promoting the persistence of Tsix on the active X chromosome. Xite thus plays a major role in the asymmetry of Tsix expression, generating the inequality between the two X chromosomes that orients each to be acted upon by the appropriate lncRNA, either Tsix or Xist.
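The logic of this mutually exclusive choice can be illustrated with a deliberately simple model. The sketch below is my illustration, not material from the source: it treats the Xist advantage of one chromosome over the other as a self-amplifying quantity nudged by molecular noise, so that each cell commits to silencing exactly one X. The feedback strength, noise level, and step count are arbitrary illustrative parameters.

```python
import random

def choose_future_xi(steps=100, feedback=0.2, noise=0.02, rng=None):
    """Toy symmetry-breaking model of mutually exclusive X-inactivation.

    'a' is the Xist advantage of the maternal X over the paternal X.
    Mutual Xist/Tsix antagonism is abstracted as self-amplification of
    whichever chromosome pulls ahead; random noise breaks the initial tie.
    """
    rng = rng or random.Random()
    a = 0.0
    for _ in range(steps):
        a += feedback * a + rng.gauss(0.0, noise)
    return "maternal" if a > 0 else "paternal"

# A population of cells making independent choices ends up roughly 50:50,
# producing a mosaic of maternal-Xi and paternal-Xi cells.
rng = random.Random(42)
choices = [choose_future_xi(rng=rng) for _ in range(10_000)]
print(choices.count("maternal") / len(choices))  # ~0.5
```

Imprinted inactivation, as in marsupials or mouse extra-embryonic tissue, would correspond in this picture to starting the advantage with a strong initial bias rather than at zero.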
Neo-sex chromosomes and dosage compensation The monarch butterfly Danaus plexippus belongs to the order Lepidoptera and has 30 chromosomes, one of which is a neo-sex chromosome resulting from a fusion between one of the sex chromosomes and an autosome. A study using a combination of methods (Hi-C assembly, coverage analysis, and ChIP-seq) found that the neo-Z segment exhibits complete dosage compensation, achieved by increased transcription in ZW females. Interestingly, the ancestral Z segment exhibits dosage balance, with transcription levels equal between the sexes but lower than the expected ancestral level; this is achieved by decreased transcription in ZZ males. See also 2R hypothesis Barr body Gene dosage X-inactivation Tsix XY sex-determination system epigenetics References Further reading Genetics
Sex-chromosome dosage compensation
[ "Biology" ]
6,704
[ "Genetics" ]
1,502,660
https://en.wikipedia.org/wiki/X-inactivation
X-inactivation (also called Lyonization, after English geneticist Mary Lyon) is a process by which one of the copies of the X chromosome is inactivated in therian female mammals. The inactive X chromosome is silenced by being packaged into a transcriptionally inactive structure called heterochromatin. As nearly all female mammals have two X chromosomes, X-inactivation prevents them from having twice as many X chromosome gene products as males, who only possess a single copy of the X chromosome (see dosage compensation). The choice of which X chromosome will be inactivated in a particular embryonic cell is random in placental mammals such as humans, but once an X chromosome is inactivated it will remain inactive throughout the lifetime of the cell and its descendants in the organism (its cell line). The result is that the choice of inactivated X chromosome across all the cells of the organism is randomly distributed, often with about half the cells having the paternal X chromosome inactivated and half the maternal; commonly, however, X-inactivation is unevenly distributed across the cell lines within one organism (skewed X-inactivation). Unlike the random X-inactivation of placental mammals, inactivation in marsupials applies exclusively to the paternally derived X chromosome. Mechanism Cycle of X-chromosome activation in rodents The description below applies only to rodents and does not reflect X-inactivation in the majority of mammals. X-inactivation is part of the activation cycle of the X chromosome throughout the female life. The egg and the fertilized zygote initially use maternal transcripts, and the whole embryonic genome is silenced until zygotic genome activation. Thereafter, all mouse cells undergo an early, imprinted inactivation of the paternally derived X chromosome in 4–8 cell stage embryos. The extraembryonic tissues (which give rise to the placenta and other tissues supporting the embryo) retain this early imprinted inactivation, and thus only the maternal X chromosome is active in these tissues. In the early blastocyst, this initial, imprinted X-inactivation is reversed in the cells of the inner cell mass (which give rise to the embryo), and in these cells both X chromosomes become active again. Each of these cells then independently and randomly inactivates one copy of the X chromosome. This inactivation event is irreversible during the lifetime of the individual, with the exception of the germline: in the female germline before meiotic entry, X-inactivation is reversed, so that after meiosis all haploid oocytes contain a single active X chromosome. Overview The Xi marks the inactive, Xa the active, X chromosome; XP denotes the paternal, and XM the maternal, X chromosome. When the egg (carrying XM) is fertilized by a sperm (carrying a Y or an XP), a diploid zygote forms. From zygote, through the adult stage, to the next generation of eggs, the X chromosome undergoes the following changes:
1. The XiP XiM zygote undergoes zygotic genome activation, leading to:
2. XaP XaM, which undergoes imprinted (paternal) X-inactivation, leading to:
3. XiP XaM, which undergoes X-activation in the early blastocyst stage, leading to:
4. XaP XaM, which undergoes random X-inactivation in the embryonic lineage (inner cell mass) at the blastocyst stage, leading to:
5. XiP XaM or XaP XiM, which undergoes X-reactivation in primordial germ cells before meiosis, leading to:
6. XaM XaP diploid germ cells in meiotic arrest.
As meiosis I is completed only at ovulation, human germ cells remain in this stage (step 6) from the first weeks of development until puberty. The completion of meiosis (step 7) then yields XaM and XaP haploid germ cells (eggs). The X activation cycle has been best studied in mice, but there are multiple studies in humans; as most of the evidence comes from mice, the scheme above represents the events in mice. The completion of meiosis is simplified here for clarity. Steps 1–4 can be studied in in vitro fertilized embryos and in differentiating stem cells; X-reactivation (step 5) happens in the developing embryo, and the subsequent steps (6–7) occur inside the female body and are therefore much harder to study. Timing The timing of each process depends on the species, and in many cases the precise time is actively debated. Inheritance of inactivation status across cell generations The descendants of each cell which inactivated a particular X chromosome will also inactivate that same chromosome. This phenomenon, which can be observed in the coloration of tortoiseshell cats when females are heterozygous for the X-linked pigment gene, should not be confused with mosaicism, a term that specifically refers to differences in the genotype of various cell populations in the same individual; X-inactivation is an epigenetic change that results in a different phenotype, not a change at the genotypic level. For an individual cell or lineage the inactivation is therefore skewed or "non-random", and this can give rise to mild symptoms in female "carriers" of X-linked genetic disorders. Selection of one active X chromosome Typical females possess two X chromosomes, and in any given cell one chromosome will be active (designated as Xa) and one will be inactive (Xi). However, studies of individuals with extra copies of the X chromosome show that in cells with more than two X chromosomes there is still only one Xa, and all the remaining X chromosomes are inactivated. This indicates that the default state of the X chromosome in females is inactivation, but one X chromosome is always selected to remain active. X-chromosome inactivation is understood to be a random process, occurring at about the time of gastrulation in the epiblast (the cells that will give rise to the embryo). The maternal and paternal X chromosomes have an equal probability of inactivation. This would suggest that women should suffer from X-linked disorders approximately 50% as often as men (because women have two X chromosomes, while men have only one); in actuality, however, the occurrence of these disorders in females is much lower than that. One explanation for this disparity is that 12–20% of genes on the inactivated X chromosome remain expressed, providing women with added protection against defective genes coded by the X chromosome. Some suggest that this disparity must be evidence of preferential (non-random) inactivation. Preferential inactivation of the paternal X chromosome occurs in both marsupials and in the cell lineages that form the membranes surrounding the embryo, whereas in placental mammals either the maternally or the paternally derived X chromosome may be inactivated in different cell lines. The time window of X-chromosome inactivation bears on this difference: in placental mammals, inactivation occurs in the epiblast during gastrulation, which gives rise to the embryo.
Inactivation occurs on a cellular level, resulting in mosaic expression in which patches of cells have an inactive maternal X chromosome while other patches have an inactive paternal X chromosome. For example, a female heterozygous for haemophilia (an X-linked disease) would have about half of her liver cells functioning properly, which is typically enough to ensure normal blood clotting. Chance could result in significantly more dysfunctional cells, but such statistical extremes are unlikely, as the binomial sketch below illustrates. Genetic differences on the chromosome may also render one X chromosome more likely to undergo inactivation. Also, if one X chromosome has a mutation hindering its growth or rendering it nonviable, cells which randomly inactivated that X will have a selective advantage over cells which randomly inactivated the normal allele. Thus, although inactivation is initially random, cells that inactivate a normal allele (leaving the mutated allele active) will eventually be overgrown and replaced by functionally normal cells, nearly all of which have the same X chromosome activated. It is hypothesized that there is an autosomally encoded "blocking factor" which binds to the X chromosome and prevents its inactivation. The model postulates that the blocking factor is limiting, so once the available blocking factor molecule binds to one X chromosome, the remaining X chromosome(s) are not protected from inactivation. This model is supported by the existence of a single Xa in cells with many X chromosomes and by the existence of two active X chromosomes in cell lines with twice the normal number of autosomes. Sequences at the X inactivation center (XIC), present on the X chromosome, control the silencing of the X chromosome. The hypothetical blocking factor is predicted to bind to sequences within the XIC. Expression of X-linked disorders in heterozygous females The effect of female X heterozygosity is apparent in some localized traits, such as the unique coat pattern of a calico cat. It can be more difficult, however, to fully understand the expression of non-localized traits in these females, such as the expression of disease. Since males have only one copy of the X chromosome, all expressed X-chromosomal genes (or alleles, in the case of multiple variant forms for a given gene in the population) are located on that copy of the chromosome. Females, however, will primarily express the genes or alleles located on the X-chromosomal copy that remains active. Consider the situation for one gene or multiple genes causing individual differences in a particular phenotype (i.e., causing the variation observed in the population for that phenotype): in homozygous females it does not particularly matter which copy of the chromosome is inactivated, as the alleles on both copies are the same. However, in females that are heterozygous at the causal genes, the inactivation of one copy of the chromosome over the other can have a direct impact on their phenotypic value. Because of this phenomenon, there is an observed increase in phenotypic variation in females that are heterozygous at the involved gene or genes compared with females that are homozygous at that gene or those genes. There are many different ways in which the phenotypic variation can play out. In many cases, heterozygous females may be asymptomatic or present only minor symptoms of a given disorder, such as with X-linked adrenoleukodystrophy. The differentiation of phenotype in heterozygous females is furthered by the presence of X-inactivation skewing.
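How unlikely are those chance extremes? The sketch below is my illustration, not material from the source: it models each embryonic progenitor cell as independently inactivating either X with probability 1/2, so the number of cells keeping the functional allele active follows a binomial distribution. The progenitor count of 100 is a hypothetical round number; real progenitor pool sizes at the time of X-inactivation vary by tissue and species.

```python
from math import comb

def prob_at_most(n, k):
    """P(at most k of n cells keep the functional X active),
    with each cell choosing either X independently at probability 1/2."""
    return sum(comb(n, i) for i in range(k + 1)) / 2**n

n = 100  # hypothetical progenitor pool size
print(prob_at_most(n, 30))  # P(<=30% of cells functional) ~ 4e-05
print(prob_at_most(n, 40))  # P(<=40% of cells functional) ~ 0.03
```

With a pool of even a hundred cells, a purely random choice rarely strays far from 50:50, which is part of why marked skewing in real carriers is often attributed to selection or biased choice in addition to chance.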
Typically, each X chromosome is silenced in half of the cells, but this process is skewed when preferential inactivation of one chromosome occurs. It is thought that skewing happens either by chance or through a physical characteristic of a chromosome that causes it to be silenced more or less often, such as an unfavorable mutation. On average each X chromosome is inactivated in half of the cells, but 5–20% of women display X-inactivation skewing. In cases where skewing is present, a broad range of symptom expression can occur, varying from minor to severe depending on the skewing proportion. An extreme case of this was seen where monozygotic female twins showed extreme variance in the expression of Menkes disease (an X-linked disorder), resulting in the death of one twin while the other remained asymptomatic. It is thought that X-inactivation skewing could be caused by issues in the mechanism that causes inactivation, or by issues in the chromosome itself. However, the link between phenotype and skewing is still being questioned and should be examined on a case-by-case basis. A study looking at both symptomatic and asymptomatic females who were heterozygous for Duchenne and Becker muscular dystrophies (DMD) found no apparent link between transcript expression and skewed X-inactivation; the study suggests that both mechanisms are independently regulated and that other, unknown factors are at play. Chromosomal component The X-inactivation center (or simply XIC) on the X chromosome is necessary and sufficient to cause X-inactivation. Chromosomal translocations which place the XIC on an autosome lead to inactivation of the autosome, and X chromosomes lacking the XIC are not inactivated. The XIC contains four non-translated RNA genes, Xist, Tsix, Jpx and Ftx, which are involved in X-inactivation. The XIC also contains binding sites for both known and unknown regulatory proteins. Xist and Tsix RNAs The X-inactive specific transcript (Xist) gene encodes a large non-coding RNA that is responsible for mediating the specific silencing of the X chromosome from which it is transcribed. The inactive X chromosome is coated by Xist RNA, whereas the Xa is not. X chromosomes that lack the Xist gene cannot be inactivated. Artificially placing and expressing the Xist gene on another chromosome leads to silencing of that chromosome. Prior to inactivation, both X chromosomes weakly express Xist RNA from the Xist gene. During the inactivation process, the future Xa ceases to express Xist, whereas the future Xi dramatically increases Xist RNA production. On the future Xi, the Xist RNA progressively coats the chromosome, spreading out from the XIC; the Xist RNA does not localize to the Xa. The silencing of genes along the Xi occurs soon after coating by Xist RNA. Like Xist, the Tsix gene encodes a large RNA which is not believed to encode a protein. The Tsix RNA is transcribed antisense to Xist, meaning that the Tsix gene overlaps the Xist gene and is transcribed on the opposite strand of DNA from the Xist gene. Tsix is a negative regulator of Xist; X chromosomes lacking Tsix expression (and thus having high levels of Xist transcription) are inactivated much more frequently than normal chromosomes. Like Xist, prior to inactivation, both X chromosomes weakly express Tsix RNA from the Tsix gene. Upon the onset of X-inactivation, the future Xi ceases to express Tsix RNA (and increases Xist expression), whereas the Xa continues to express Tsix for several days.
Rep A is a long non-coding RNA that works with another long non-coding RNA, Xist, to bring about X-inactivation. Rep A inhibits the function of Tsix, the antisense transcript of Xist, in conjunction with eliminating the expression of Xite. It promotes methylation of the Tsix region by attracting PRC2, thereby contributing to the inactivation of one of the X chromosomes. Silencing The inactive X chromosome does not express the majority of its genes, unlike the active X chromosome. This is due to the silencing of the Xi by repressive heterochromatin, which compacts the Xi DNA and prevents the expression of most genes. Compared to the Xa, the Xi has high levels of DNA methylation, low levels of histone acetylation, low levels of histone H3 lysine-4 methylation, and high levels of histone H3 lysine-9 methylation and of the H3 lysine-27 methylation mark, which is placed by the PRC2 complex recruited by Xist; all of these modifications are associated with gene silencing. PRC2 regulates chromatin compaction and chromatin remodeling in several processes, including the DNA damage response. Additionally, a histone variant called macroH2A (H2AFY) is exclusively found on nucleosomes along the Xi. Barr bodies DNA packaged in heterochromatin, such as the Xi, is more condensed than DNA packaged in euchromatin, such as the Xa. The inactive X forms a discrete body within the nucleus called a Barr body. The Barr body is generally located on the periphery of the nucleus, is late-replicating within the cell cycle, and, as it contains the Xi, carries heterochromatin modifications and the Xist RNA. Expressed genes on the inactive X chromosome A fraction of the genes along the X chromosome escape inactivation on the Xi. The Xist gene is expressed at high levels on the Xi and is not expressed on the Xa. Many other genes escape inactivation; some are expressed equally from the Xa and Xi, and others, while expressed from both chromosomes, are still predominantly expressed from the Xa. Up to one quarter of genes on the human Xi are capable of escape. Studies in the mouse suggest that in any given cell type, 3% to 15% of genes escape inactivation, and that the identity of escaping genes varies between tissues. Many of the genes which escape inactivation lie along regions of the X chromosome which, unlike the majority of the X chromosome, contain genes also present on the Y chromosome. These regions are termed pseudoautosomal regions, as individuals of either sex receive two copies of every gene in them (like an autosome), unlike the majority of genes along the sex chromosomes. Since no dosage compensation is needed for such genes, it is postulated that these regions of DNA have evolved mechanisms to escape X-inactivation. The genes of the pseudoautosomal regions of the Xi do not have the typical modifications of the Xi and have little Xist RNA bound. The existence of genes along the inactive X which are not silenced explains the defects in humans with atypical numbers of the X chromosome, such as Turner syndrome (X0, in which haploinsufficiency of escape genes such as SHOX contributes to the phenotype) or Klinefelter syndrome (XXY). Theoretically, X-inactivation should eliminate the differences in gene dosage between affected individuals and individuals with a typical chromosome complement. In affected individuals, however, X-inactivation is incomplete, and the dosage of the non-silenced genes will differ as they escape X-inactivation, similar to an autosomal aneuploidy.
The precise mechanisms that control escape from X-inactivation are not known, but silenced and escape regions have been shown to have distinct chromatin marks. It has been suggested that escape from X-inactivation might be mediated by expression of long non-coding RNA (lncRNA) within the escaping chromosomal domains. Uses in experimental biology Stanley Michael Gartler used X-chromosome inactivation to demonstrate the clonal origin of cancers. Examining normal tissues and tumors from females heterozygous for isoenzymes of the sex-linked G6PD gene demonstrated that tumor cells from such individuals express only one form of G6PD, whereas normal tissues are composed of a nearly equal mixture of cells expressing the two different phenotypes. This pattern suggests that a single cell, and not a population, grows into a cancer. However, this pattern does not hold for many cancer types, suggesting that some cancers may be polyclonal in origin. In addition, measuring the methylation (inactivation) status of the polymorphic human androgen receptor gene (HUMARA), located on the X chromosome, is considered the most accurate method to assess clonality in female cancer biopsies. A great variety of tumors has been tested by this method; some, such as renal cell carcinoma, were found to be monoclonal, while others (e.g. mesothelioma) were reported to be polyclonal. Researchers have also investigated using X-chromosome inactivation to silence the activity of autosomal chromosomes. For example, Jiang et al. inserted a copy of the Xist gene into one copy of chromosome 21 in stem cells derived from an individual with trisomy 21 (Down syndrome). The inserted Xist gene induces Barr body formation, triggers stable heterochromatin modifications, and silences most of the genes on the extra copy of chromosome 21. In these modified stem cells, the Xist-mediated gene silencing seems to reverse some of the defects associated with Down syndrome. History In 1959 Susumu Ohno showed that the two X chromosomes of mammals were different: one appeared similar to the autosomes; the other was condensed and heterochromatic. This finding suggested, independently to two groups of investigators, that one of the X chromosomes underwent inactivation. In 1961, Mary Lyon proposed the random inactivation of one female X chromosome to explain the mottled phenotype of female mice heterozygous for coat color genes. The Lyon hypothesis also accounted for the findings that one copy of the X chromosome in female cells was highly condensed, and that mice with only one copy of the X chromosome developed as infertile females. This suggested to Ernest Beutler, studying females heterozygous for glucose-6-phosphate dehydrogenase (G6PD) deficiency, that there were two populations of erythrocytes in such heterozygotes: deficient cells and normal cells, depending on whether the inactivated X chromosome (in the nucleus of the red cell's precursor cell) carries the normal or the defective G6PD allele. See also Sex-determination system Dosage compensation Barr body Heterochromatin Epigenetics Skewed X-inactivation Developmental disorders thought to be related to X-inactivation: Early infantile epileptic encephalopathy type 9 Frontonasal dysplasia References Further reading External links Molecular genetics Genetics Epigenetics Female Sex-determination systems
X-inactivation
[ "Chemistry", "Biology" ]
4,592
[ "Female", "Genetics", "Sex", "Sex-determination systems", "Molecular genetics", "Molecular biology" ]
1,502,748
https://en.wikipedia.org/wiki/La%20Noumbi
La Noumbi is a floating production storage and offloading (FPSO) unit operated by Perenco. The vessel, converted from the former Finnish Aframax crude oil tanker Tempera by Keppel Corporation, replaced an older FPSO unit in the Yombo field off the Republic of Congo in 2018. Built at Sumitomo Heavy Industries in Japan in 2002, Tempera was the first ship to utilize the double acting tanker (DAT) concept, in which the vessel is designed to travel ahead in open water and astern in severe ice conditions. Tempera and her sister ship Mastera, built in 2003, were used mainly to transport crude oil, year-round, from the Russian oil terminal in Primorsk to Neste Oil refineries in Porvoo and Naantali. In 2015, Neste sold Tempera to the oil and gas company Perenco for conversion to an FPSO. Concept Although icebreaking cargo ships had been built in the past, their hull forms were always compromises between open water performance and icebreaking capability. A good icebreaking bow, designed to break the ice by bending it under the ship's weight, has very poor open water characteristics and is subjected to slamming in heavy weather, while a hydrodynamically efficient bulbous bow greatly increases ice resistance. However, as early as the late 1800s, captains operating ships in icebound waters discovered that it was sometimes easier to break through ice by running their vessels astern. This was because the forward-facing propellers generated a water flow that lowered the resistance by reducing friction between the ship's hull and the ice. These findings resulted in the adoption of bow propellers in older icebreakers operating in the Great Lakes and the Baltic Sea, but because forward-facing propellers have a very low propulsion efficiency and the steering ability of a ship is greatly reduced when running astern, astern operation could not be considered a main operating mode for merchant ships. For this reason it was not until the development of electric podded propulsion, ABB's Azipod, that the concept of double acting ships became feasible. The superiority of podded propulsion in icebreaking merchant ships, especially when running astern, was proved when the Finnish product tankers Uikku and Lunni were converted to Azipod propulsion in 1993 and 1994, respectively. Although the ships had an icebreaking bow and were never designed to break ice astern, after the conversion their resistance in level ice when running astern was only 40% of that when breaking ice ahead. History Development and construction Following the successful operation of the Azipod-converted tankers Uikku and Lunni on the Northern Sea Route, Kværner Masa-Yards Arctic Research Centre developed the first double acting tanker concept in the early 1990s. The 90,000 DWT tankers were designed to transport oil and gas condensate from the Pechora Sea in the Russian Arctic, where winter ice conditions can be considered moderate and the ships would operate mainly in astern mode, first to Murmansk and then to Rotterdam, with most of the distance travelled in open water year-round. Other early double acting concepts included a similar ship with an icebreaking bow that would be utilized in summer, when the ship was traveling in areas with low ice concentration but with a risk of colliding with multi-year ice blocks.
In the early 2000s, Fortum Shipping, the transportation arm of the Finnish energy company Fortum, started a major fleet renewal program to increase the efficiency and reduce the average age of its vessels. The program also included replacing the company's old tankers, such as the 90,000-ton Natura, that were used to transport crude oil to the company's oil refineries in the Gulf of Finland. The old ships had traffic restrictions during the worst part of the winter because of their lower ice class of 1C and could not deliver their cargo all the way to the refineries in Porvoo and Naantali because they were denied icebreaker assistance. When this happened, the oil had to be transferred to smaller ships of higher ice class at the edge of the ice, a practice that was both uneconomical and hazardous. To solve these problems, Kværner Masa-Yards Arctic Research Centre developed a new 100,000 DWT Aframax tanker concept together with Fortum Shipping, which ordered two vessels from Sumitomo Heavy Industries in 2001. The new ships were designed to the highest Finnish-Swedish ice class, 1A Super, and to be capable of operating in all ice conditions encountered in the Baltic Sea. The possibility of operating in the Pechora Sea was also taken into account in the design process. Extensive ice model tests confirmed the vessel's operational capability in level ice, rubble fields, ice channels and ridges. The world's first double acting tanker, and the largest 1A Super class oil tanker at that time, Tempera, was delivered from the Yokosuka shipyard in late August 2002. She was awarded the Ship of the Year 2002 award by the Society of Naval Architects of Japan (SNAJ). The second ship, Mastera, was delivered the following year. Both ships were named after the company's oil products. While the price of the contract was not made public, the company later admitted that the 60–70 million euro estimate was "quite close to the truth". The ships were owned by ABB Credit, which leased them to Fortum for ten years; the leasing business was later sold to SEB Leasing. Tempera (2002–2018) From the beginning, Tempera and Mastera were used primarily for year-round transportation of crude oil from the Russian oil terminal of Primorsk to the company's own oil terminals in Porvoo and Naantali, where they have been the only ships capable of operating without delays or problems during the harshest winters. Occasionally they have carried cargoes in the Gulf of Bothnia and even outside the Baltic Sea, depending on the amount of oil in the refineries' storage tanks. Tempera has also visited Murmansk, where she loaded crude oil from FSO Belokamenka. However, due to draft restrictions the ships could not carry a full cargo of 100,000 tons to the port of Naantali until April 2010: they had to stop at Porvoo on the way and unload 20,000 tons of oil to reduce the draft of the vessel. In 2005 Fortum's oil division was transferred to the re-established Neste Oil, and the management of the ships, including the double acting tankers, was handed over to the subsidiary company Neste Shipping. Throughout their career, Tempera and Mastera were the only double acting tankers operating in the Baltic Sea. While other double acting ships have been built in recent years, the tankers operated by Neste Shipping are the only ones equipped with a bulbous bow designed primarily with open water performance in mind; the tankers and container ships built for the Russian Arctic have a more traditional icebreaking bow due to the more severe ice conditions.
On 29 May 2015, Neste announced a decision to sell Tempera to Perenco. La Noumbi (2018 onwards) In 2017, Tempera left the Baltic Sea and sailed around the Cape of Good Hope, headed for Singapore, where she was converted to a floating production storage and offloading (FPSO) unit at a Keppel Corporation shipyard in 2017–2018, after which she replaced FPSO Conkouati at the Yombo field off the Republic of Congo. The conversion included installing additional accommodation capacity as well as production-related equipment. The converted vessel was renamed La Noumbi on 26 July 2018. Design (as oil tanker) General characteristics Tempera is long overall and between perpendiculars. The moulded breadth and depth of her hull are and , respectively, and from keel to mast she measured . Her gross tonnage is 64,259 and net tonnage 30,846, and the deadweight tonnage corresponding to the draft at summer freeboard, , is 106,034 tons, slightly less than that of Mastera. In ballast Tempera draws only of water. The foreship of Tempera is designed for open water performance, with a bulbous bow to maximize hydrodynamic efficiency. The ship is, like any other ice-strengthened vessel, also capable of running ahead in light ice conditions. The stern, however, is shaped like an icebreaker's bow, and Tempera is designed to operate independently in the most severe ice conditions of the Baltic Sea. For this purpose the ship is equipped with two bridges, for navigating in both directions. The ship is served by a crew of 15 to 20, depending on operating conditions during winter and maintenance work during summer. Tempera is classified by Lloyd's Register of Shipping. Cargo tanks and handling Tempera has six pairs of heated, partially epoxy-coated cargo tanks and one pair of fully coated slop tanks, all divided by a longitudinal center bulkhead and protected by a double hull, with a combined capacity of at 98% filling. For cargo handling she has three electrically driven cargo oil pumps with a capacity of 3,500 m3/h × 130 m and one cargo oil stripping pump rated for 300 m3/h × 130 m. The cargo can be loaded in 10 hours and discharged in 12 hours. Each cargo tank has two automated tank cleaning machines and each slop tank one, as well as openings for portable tank cleaning machines. The ship's ballast water capacity of is divided into sixteen segregated ballast tanks: six pairs in the double hull around the cargo tanks, two fore peak tanks and two aft peak tanks. She has two electrically driven ballast pumps rated at 2,500 m3/h × 35 m and 3,000 m3/h × 70 m. The ballast capacity is needed to maintain correct trim, especially during drydocking: the empty ship has an aft trim of , and an uneven weight distribution may damage the hull girder. Power and propulsion Tempera has a diesel-electric powertrain with four main generating sets, two nine-cylinder Wärtsilä 9L38B and two six-cylinder 6L38B four-stroke medium-speed diesel engines, with a combined output of . The main engines are equipped with exhaust gas economizers. In addition, Tempera has one auxiliary diesel generator that can be used when the ship is in port. The auxiliary generator, a six-cylinder Wärtsilä 6L26A, has an output of . While underway at , the fuel consumption of the ship's main engines is 56 tons of heavy fuel oil per day when loaded and 40 tons per day in ballast. Her tanks can store of heavy fuel oil for the main engines, of diesel oil for the auxiliary generator, steam boilers and inert gas system, and of lubrication oil.
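As a rough plausibility check on those consumption figures, the sketch below (my illustration, not material from the source) computes steaming endurance from daily fuel consumption. The bunker capacity value is a purely hypothetical placeholder, since the actual figure is missing from the text above.

```python
# Hypothetical endurance estimate. The 56 t/day (loaded) and 40 t/day
# (ballast) consumption figures come from the text; HFO_CAPACITY_T does not
# and is an invented placeholder value.

HFO_CAPACITY_T = 3_000        # tonnes of heavy fuel oil (hypothetical)
CONSUMPTION_LOADED_T = 56.0   # t/day, from the text
CONSUMPTION_BALLAST_T = 40.0  # t/day, from the text

def endurance_days(capacity_t, consumption_t_per_day, reserve=0.10):
    """Days of steaming on one bunkering, keeping a 10% safety reserve."""
    return capacity_t * (1.0 - reserve) / consumption_t_per_day

print(f"loaded:  {endurance_days(HFO_CAPACITY_T, CONSUMPTION_LOADED_T):.0f} days")
print(f"ballast: {endurance_days(HFO_CAPACITY_T, CONSUMPTION_BALLAST_T):.0f} days")
```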
Tempera and her sister ship are the first tankers propelled by ABB Azipod electric azimuth thrusters, capable of rotating 360 degrees around the vertical axis. The pulling-type VI2500 pods in these two ships, with a nominal output of 16 MW and fixed-pitch propellers turning at 86 rpm, are the most powerful ice-strengthened Azipod units ABB has ever produced. The forward-facing propeller increases propulsion efficiency due to the optimal water flow to the propeller and thus improves fuel efficiency. In addition, an azimuthing thruster's ability to direct thrust in any direction results in excellent manoeuvrability, exceeding that of ships using traditional mechanical shaftlines and rudders. The turning circle of Mastera and Tempera is only half a kilometer at full speed, half that of a traditional oil tanker of the same size; this is a significant safety factor, as the stopping distance of a traditional tanker can be up to . For maneuvering at low speeds in harbours, Tempera is also equipped with two 1,750 kW bow thrusters. Icebreaking capability The icebreaking capability of the double acting tankers proved superior to that of other ships from the beginning: in shuttle service between Primorsk, Russia, and the Finnish refineries the tankers require no icebreaker assistance and have even acted as icebreakers for other merchant ships, which have utilized the wide channel opened by the Aframax tanker. However, this has not been intentional: when the world's largest 1A Super class ice tanker, Stena Arctica, also owned by Neste Shipping, became stuck in ice outside the port of Primorsk during the winter of 2009–2010, a decision was made to leave the assisting to Russian icebreakers. The ships have performed beyond expectations both in level ice up to thick, which can be broken in continuous motion at , and in ridges up to deep, which can be penetrated either by allowing the forward-facing propeller to mill (crush) the ice or by breaking the ridge apart with the propeller wash. While the vessels have occasionally been immobilized by pack ice, they have been able to free themselves by using the rotating propeller pod to clear the ice around the hull. References 2002 ships Double acting ships Floating production storage and offloading vessels Ships built by Sumitomo Heavy Industries
La Noumbi
[ "Chemistry" ]
2,703
[ "Floating production storage and offloading vessels", "Petroleum technology" ]
1,502,835
https://en.wikipedia.org/wiki/Magnetohydrodynamic%20generator
A magnetohydrodynamic generator (MHD generator) is a magnetohydrodynamic converter that transforms thermal energy and kinetic energy directly into electricity. An MHD generator, like a conventional generator, relies on moving a conductor through a magnetic field to generate electric current. The MHD generator uses hot conductive ionized gas (a plasma) as the moving conductor; the mechanical dynamo, in contrast, uses the motion of mechanical devices to accomplish this. MHD generators differ from traditional electric generators in that they operate without moving parts (e.g. no turbines), so there is no limit on the upper temperature at which they can operate. They have the highest known theoretical thermodynamic efficiency of any electrical generation method. MHD has been developed for use in combined cycle power plants to increase the efficiency of electric generation, especially when burning coal or natural gas. The hot exhaust gas from an MHD generator can heat the boilers of a steam power plant, increasing overall efficiency. Practical MHD generators have been developed for fossil fuels, but these were overtaken by less expensive combined cycles in which the exhaust of a gas turbine or molten carbonate fuel cell heats steam to power a steam turbine. MHD dynamos are the complement of MHD accelerators, which have been applied to pump liquid metals, seawater, and plasmas. Natural MHD dynamos are an active area of research in plasma physics and are of great interest to the geophysics and astrophysics communities, since the magnetic fields of the Earth and Sun are produced by these natural dynamos. Background In a conventional thermal power plant, like a coal-fired power station or nuclear power plant, the energy released by the chemical or nuclear reactions is absorbed in a working fluid, usually water. In a coal plant, for instance, the coal burns in an open chamber which is surrounded by tubes carrying water. The heat from the combustion is absorbed by the water, which boils into steam. The steam is then sent into a steam turbine, which extracts energy from the steam by turning it into rotational motion. The steam is slowed and cooled as it passes through the turbine. The rotational motion then turns an electrical generator. The efficiency of this overall cycle, known as the Rankine cycle, is a function of the temperature difference between the working fluid entering the turbine and the fluid leaving it. The maximum temperature at the turbine inlet is a function of the energy source, and the minimum temperature at the outlet is a function of the surrounding environment's ability to absorb waste heat. For many practical reasons, coal plants generally extract about 35% of the heat energy from the coal; the rest is ultimately dumped into the cooling system or escapes through other losses. MHD generators can extract more energy from the fuel source than turbine-generator systems. They do this by skipping the step where the heat is transferred to another working fluid. Instead, they use the hot exhaust directly as the working fluid. In the case of a coal plant, the exhaust is directed through a nozzle that increases its velocity, essentially a rocket nozzle, and then through a magnetic system that directly generates electricity. In a conventional generator, rotating magnets move past a material filled with nearly free electrons, typically copper wire (or vice versa, depending on the design). In the MHD system, the electrons in the exhaust gas move past a stationary magnet.
Ultimately the effect is the same: the working fluid is slowed and cools as its kinetic energy is transferred to electrons, and is thereby converted to electrical power. MHD can only be used with power sources that produce large amounts of fast-moving plasma, like the gas from burning coal. This means it is not suitable for systems that work at lower temperatures or do not produce an ionized gas, like a solar power tower or a nuclear reactor. In the early days of development of nuclear power, one alternative design was the gaseous fission reactor, which did produce plasma, and this led to some interest in MHD for this role. This style of reactor was never built, however, and interest from the nuclear industry waned. The vast majority of work on MHD for electrical generation has been related to coal-fired plants. Principle The Lorentz force law describes the effect on a charged particle moving in a constant magnetic field. The simplest form of this law is given by the vector equation F = qv × B, where F is the force acting on the particle, q is the charge of the particle, v is the velocity of the particle, and B is the magnetic field. The vector F is perpendicular to both v and B according to the right-hand rule. Power generation Typically, for a large power station to approach the operational efficiency of computer models, steps must be taken to increase the electrical conductivity of the conductive substance. Heating a gas to its plasma state, or adding other easily ionizable substances like the salts of alkali metals, can help to accomplish this. In practice, a number of issues must be considered in the implementation of an MHD generator: generator efficiency, economics, and toxic byproducts. These issues are affected by the choice of one of the three MHD generator designs: the Faraday generator, the Hall generator, and the disc generator. Faraday generator The Faraday generator is named for Michael Faraday's 1832 attempt to measure the current induced by the flow of the River Thames through the Earth's magnetic field. A simple Faraday generator consists of a wedge-shaped pipe or tube of some non-conductive material. When an electrically conductive fluid flows through the tube in the presence of a significant perpendicular magnetic field, a voltage is induced in the fluid, which can be drawn off as electrical power by placing electrodes on the sides, at 90-degree angles to the magnetic field. There are limitations on the density and type of field used in this example. The amount of power that can be extracted is proportional to the cross-sectional area of the tube and the speed of the conductive flow. The conductive substance is also cooled and slowed by this process; MHD generators typically reduce the temperature of the conductive substance from plasma temperatures to just over 1000 °C. The main practical problem of a Faraday generator is that differential voltages and currents in the fluid can short through the electrodes on the sides of the duct. The generator can also experience losses from the Hall effect current, which make the Faraday duct inefficient. Most further refinements of MHD generators have tried to solve this problem. The optimal magnetic field on duct-shaped MHD generators is a sort of saddle shape, and to produce this field a large generator requires an extremely powerful magnet. Many research groups have tried to adapt superconducting magnets to this purpose, with varying success.
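The proportionality between extracted power, flow speed, and field strength can be made concrete with the standard textbook idealization of a segmented Faraday channel. The sketch below is an addition of mine rather than material from this article, and the numerical values are purely illustrative.

```python
# Textbook idealization of a segmented Faraday MHD channel. With fluid
# conductivity sigma (S/m), flow speed u (m/s), magnetic field B (T) and
# load factor K = E/(u*B) (0 = short circuit, 1 = open circuit), the
# electrical power density drawn from the flow is
#     p = sigma * u^2 * B^2 * K * (1 - K)   [W/m^3],
# which is maximized at K = 1/2.

def faraday_power_density(sigma, u, b, k=0.5):
    """Ideal electrical power density (W/m^3) of a Faraday duct."""
    return sigma * u**2 * b**2 * k * (1.0 - k)

# Illustrative numbers: a seeded combustion plasma with sigma ~ 10 S/m
# flowing at 1000 m/s through a 4 T field, at the maximum-power point:
p = faraday_power_density(10.0, 1000.0, 4.0)
print(f"{p / 1e6:.0f} MW per cubic metre of channel")  # -> 40 MW/m^3
```

The quadratic dependence on both u and B in this expression is why the text repeatedly emphasizes hot, seeded (highly conductive) plasmas and powerful superconducting magnets.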
Hall generator The typical solution has been to use the Hall effect to create a current that flows with the fluid. This design has arrays of short, segmented electrodes on the sides of the duct. The first and last electrodes in the duct power the load, and each of the other electrodes is shorted to an electrode on the opposite side of the duct. These shorts of the Faraday current induce a powerful magnetic field within the fluid, but in a chord of a circle at right angles to the Faraday current. This secondary, induced field makes the current flow in a rainbow shape between the first and last electrodes. Losses are less than in a Faraday generator, and voltages are higher because there is less shorting of the final induced current. However, this design has problems because the speed of the material flow requires the middle electrodes to be offset to "catch" the Faraday currents. As the load varies, the fluid flow speed varies, misaligning the Faraday current with its intended electrodes and making the generator's efficiency very sensitive to its load. Disc generator The third and, currently, the most efficient design is the Hall effect disc generator. This design currently holds the efficiency and energy density records for MHD generation. A disc generator has fluid flowing between the center of a disc and a duct wrapped around the edge. The magnetic excitation field is produced by a pair of circular Helmholtz coils above and below the disk. The Faraday currents flow in a perfect dead short around the periphery of the disk, and the Hall effect currents flow between ring electrodes near the center duct and ring electrodes near the periphery duct. The wide, flat gas flow reduces the current path length and hence the resistance of the moving fluid, which increases efficiency. Another significant advantage of this design is that the magnets are more efficient. First, they produce simple parallel field lines. Second, because the fluid is processed in a disk, the magnet can be closer to the fluid, and in this geometry magnetic field strength increases as the 7th power of distance. Finally, the generator is compact, so the magnet is smaller and consumes a much smaller percentage of the generated power. Generator efficiency The efficiency of the direct energy conversion in MHD power generation increases with the magnetic field strength and the plasma conductivity, which depends directly on the plasma temperature, and more precisely on the electron temperature. As very hot plasmas can only be used in pulsed MHD generators (for example using shock tubes), due to fast thermal erosion of materials, it was envisaged to use nonthermal plasmas as working fluids in steady MHD generators, in which only the free electrons are heated strongly (to 10,000–20,000 kelvins) while the main gas (neutral atoms and ions) remains at a much lower temperature, typically 2500 kelvins. The goal was to preserve the materials of the generator (walls and electrodes) while improving the limited conductivity of such poor conductors to the level of a plasma in thermodynamic equilibrium, i.e. one completely heated to more than 10,000 kelvins, a temperature that no material could withstand. Evgeny Velikhov discovered, theoretically in 1962 and experimentally in 1963, that an ionization instability, later called the Velikhov instability or electrothermal instability, quickly arises in any MHD converter using magnetized nonthermal plasmas with hot electrons when a critical Hall parameter is reached, which depends on the degree of ionization and the magnetic field.
This instability greatly degrades the performance of nonequilibrium MHD generators. The loss of the initially predicted high efficiencies crippled MHD programs all over the world, as no solution to mitigate the instability was found at the time. Without implementing solutions to overcome the electrothermal instability, practical MHD generators had to limit the Hall parameter or use moderately heated thermal plasmas instead of cold plasmas with hot electrons, which severely lowers efficiency. As of 1994, the 22% efficiency record for closed-cycle disc MHD generators was held by the Tokyo Institute of Technology. The peak enthalpy extraction in these experiments reached 30.2%. Typical open-cycle Hall and duct coal MHD generators are lower, near 17%. These efficiencies make MHD unattractive, by itself, for utility power generation, since conventional Rankine cycle power plants can reach 40%. However, the exhaust of an MHD generator burning fossil fuel is almost as hot as a flame. By routing its exhaust gases into a heat exchanger for a turbine Brayton cycle or a steam-generator Rankine cycle, MHD can convert fossil fuels into electricity with an overall estimated efficiency of up to 60 percent, compared to the 40 percent of a typical coal plant. A magnetohydrodynamic generator might also be the first stage of a gas core reactor. Material and design issues MHD generators have problems in regard to materials, both for the walls and the electrodes. Materials must not melt or corrode at very high temperatures. Exotic ceramics were developed for this purpose, selected to be compatible with the fuel and the ionization seed. The exotic materials and the difficult fabrication methods contribute to the high cost of MHD generators. MHDs also work better with stronger magnetic fields. The most successful magnets have been superconducting, and placed very close to the channel. A major difficulty was refrigerating these magnets while insulating them from the channel. The problem is worse because the magnets work better when they are closer to the channel. There are also risks of damage to the hot, brittle ceramics from differential thermal cracking: the magnets are usually near absolute zero, while the channel is at several thousand degrees. For MHDs, both alumina (Al2O3) and magnesium oxide (MgO) were reported to work for the insulating walls. Magnesium oxide degrades near moisture. Alumina is water-resistant and can be fabricated to be quite strong, so in practice most MHDs have used alumina for the insulating walls. For the electrodes of clean MHDs (i.e. those burning natural gas), one good material was a mix of 80% CeO2, 18% ZrO2, and 2% Ta2O5. Coal-burning MHDs have highly corrosive environments with slag. The slag both protects and corrodes MHD materials. In particular, migration of oxygen through the slag accelerates the corrosion of metallic anodes. Nonetheless, very good results have been reported with stainless steel electrodes at 900 K. Another, perhaps superior, option is a spinel ceramic, FeAl2O4–Fe3O4. The spinel was reported to have electronic conductivity and no resistive reaction layer, but some diffusion of iron into the alumina. The diffusion of iron could be controlled with a thin layer of very dense alumina and water cooling in both the electrodes and the alumina insulators. Attaching the high-temperature electrodes to conventional copper bus bars is also challenging. The usual methods establish a chemical passivation layer and cool the busbar with water.
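The combined-cycle arithmetic behind the 60 percent estimate is simple to make explicit. The sketch below is an illustration I have added, not material from this article: it uses the ideal two-stage relation for a topping cycle feeding a bottoming cycle and ignores duct, seed-recovery, and other real-world losses.

```python
# Ideal topping + bottoming combination: the topping stage converts a
# fraction e_top of the input heat, and the bottoming cycle converts a
# fraction e_bottom of the heat that remains in the exhaust.

def combined_efficiency(e_top, e_bottom):
    """Ideal combined efficiency of a two-stage conversion chain."""
    return e_top + (1.0 - e_top) * e_bottom

# Using figures quoted in the text: the 22% closed-cycle disc MHD record
# on top of a 40% conventional Rankine plant:
print(f"{combined_efficiency(0.22, 0.40):.1%}")  # 53.2%
# A topping stage of roughly 33% would be needed to approach 60%:
print(f"{combined_efficiency(0.33, 0.40):.1%}")  # 59.8%
```

On this idealized accounting, even a modest MHD topping stage lifts a 40% steam plant above 50%, which is the sense in which MHD is attractive in combined cycles despite being unattractive by itself.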
Economics

MHD generators have not been used for large-scale mass energy conversion because other techniques with comparable efficiency have a lower lifecycle investment cost. Advances in natural gas turbines achieved similar thermal efficiencies at lower cost by having the turbine exhaust drive a Rankine-cycle steam plant. To get more electricity from coal, it is cheaper simply to add more low-temperature steam-generating capacity.

A coal-fueled MHD generator is a type of Brayton power cycle, similar to the power cycle of a combustion turbine. However, unlike the combustion turbine, there are no moving mechanical parts; the electrically conducting plasma itself is the moving electrical conductor. The side walls and electrodes merely withstand the pressure within, while the anode and cathode conductors collect the electricity that is generated. All Brayton cycles are heat engines, and an ideal Brayton cycle has an efficiency equal to that of an ideal Carnot cycle; hence the potential for high energy efficiency in an MHD generator. All Brayton cycles have a higher potential efficiency the higher the firing temperature. A combustion turbine's maximum temperature is limited by the strength of its air-, water- or steam-cooled rotating airfoils, and this upper bound on temperature limits its energy efficiency. An open-cycle MHD generator has no rotating parts, so its Brayton-cycle temperature is not bounded in the same way; an MHD generator therefore has an inherently higher potential energy efficiency.

The temperatures at which linear coal-fueled MHD generators can operate are limited by factors that include: (a) the combustion fuel, oxidizer, and oxidizer preheat temperature, which limit the maximum temperature of the cycle; (b) the ability to protect the sidewalls and electrodes from melting; (c) the ability to protect the electrodes from electrochemical attack from the hot slag coating the walls, combined with the high currents or arcs that impinge on the electrodes as they carry off the direct current from the plasma; and (d) the capability of the electrical insulators between each electrode. Coal-fired MHD plants with oxygen/air and high oxidant preheats would probably provide potassium-seeded plasmas of about 4200 °F at 10 atmospheres pressure, beginning expansion at Mach 1.2. These plants would recover MHD exhaust heat for oxidant preheat and for combined-cycle steam generation. One DOE-funded feasibility study with aggressive assumptions about where the technology could go, the 1000 MWe Advanced Coal-Fired MHD/Steam Binary Cycle Power Plant Conceptual Design, published in June 1989, showed that a large coal-fired MHD combined-cycle plant could attain an HHV energy efficiency approaching 60 percent, well in excess of other coal-fueled technologies, so the potential for low operating costs exists. However, no testing at those aggressive conditions or sizes has yet occurred, and there are no large MHD generators now under test. There is simply an inadequate reliability track record to provide confidence in a commercial coal-fueled MHD design.

U25B MHD testing in Russia, using natural gas as fuel, used a superconducting magnet and had an output of 1.4 megawatts. A series of coal-fired MHD generator tests funded by the U.S. Department of Energy (DOE) in 1992 produced MHD power from a larger superconducting magnet at the Component Development and Integration Facility (CDIF) in Butte, Montana.
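The firing-temperature argument in the Economics discussion above can be made concrete with the Carnot bound, eta <= 1 - T_cold / T_hot. A minimal sketch; the temperatures are illustrative assumptions (4200 °F, quoted above for the seeded plasma, is about 2590 K; roughly 1700 K stands in for a blade-cooled turbine inlet), not design data:

# Minimal sketch: Carnot limit for two firing temperatures (kelvins).

def carnot_limit(t_hot_k, t_cold_k=300.0):
    return 1.0 - t_cold_k / t_hot_k

turbine_inlet = 1700.0                        # assumed cooled-blade turbine inlet
mhd_plasma = (4200 - 32) * 5 / 9 + 273.15     # 4200 F is about 2589 K

print(f"turbine: {carnot_limit(turbine_inlet):.0%}")  # about 82%
print(f"MHD:     {carnot_limit(mhd_plasma):.0%}")     # about 88%
# Real cycles fall far below these bounds, but the ordering
# (hotter firing temperature, higher ceiling) is what favors MHD.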
Neither the Russian U25B tests nor the CDIF tests ran long enough to verify the commercial durability of the technology, and neither test facility was at a large enough scale for a commercial unit.

Superconducting magnets are used in the larger MHD generators to eliminate one of the large parasitic losses: the power needed to energize the electromagnet. Superconducting magnets, once charged, consume no power and can develop intense magnetic fields of 4 teslas and higher. The only parasitic loads for the magnets are the power to maintain refrigeration and the small losses in the non-superconducting connections.

Because of the high temperatures, the non-conducting walls of the channel must be constructed from an exceedingly heat-resistant substance such as yttrium oxide or zirconium dioxide to retard oxidation. Similarly, the electrodes must be both conductive and heat-resistant at high temperatures. The AVCO coal-fueled MHD generator at the CDIF was tested with water-cooled copper electrodes capped with platinum, tungsten, stainless steel, and electrically conducting ceramics.

Toxic byproducts

MHD reduces the overall production of fossil fuel wastes because it increases plant efficiency. In MHD coal plants, the patented commercial "Econoseed" process developed by the U.S. (see below) recycles potassium ionization seed from the fly ash captured by the stack-gas scrubber. However, this equipment is an additional expense. If molten metal is the armature fluid of an MHD generator, care must be taken with the coolant of the electromagnets and the channel. The alkali metals commonly used as MHD fluids react violently with water. Also, the chemical byproducts of heated, electrified alkali metals and channel ceramics may be poisonous and environmentally persistent.

History

The first practical MHD power research was funded in 1938 in the U.S. by Westinghouse in its Pittsburgh, Pennsylvania laboratories, headed by the Hungarian Bela Karlovitz. The initial patent on MHD is by B. Karlovitz, U.S. Patent No. 2,210,918, "Process for the Conversion of Energy", August 13, 1940. World War II interrupted development. In 1962, the First International Conference on MHD Power was held in Newcastle upon Tyne, UK, by Dr. Brian C. Lindley of the International Research and Development Company Ltd. The group set up a steering committee to arrange further conferences and disseminate ideas. In 1964, the group held a second conference in Paris, France, in consultation with the European Nuclear Energy Agency (ENEA). Since membership in the ENEA was limited, the group persuaded the International Atomic Energy Agency to sponsor a third conference, in Salzburg, Austria, in July 1966. Negotiations at this meeting converted the steering committee into a periodic reporting group, the ILG-MHD (international liaison group, MHD), under the ENEA and, from 1967, also under the International Atomic Energy Agency. Further research in the 1960s by R. Rosa established the practicality of MHD for fossil-fueled systems. In the 1960s, AVCO Everett Aeronautical Research began a series of experiments, ending with the Mk. V generator of 1965, which generated 35 MW but used about 8 MW to drive its magnet. In 1966, the ILG-MHD had its first formal meeting in Paris, France, and began issuing a periodic status report in 1967. This pattern persisted, in this institutional form, until 1976. Toward the end of the 1960s, interest in MHD declined because nuclear power was becoming more widely available.
In the late 1970s, as interest in nuclear power declined, interest in MHD increased. In 1975, UNESCO became persuaded that MHD might be an efficient way to utilise world coal reserves and, in 1976, sponsored the ILG-MHD. In 1976, it became clear that no nuclear reactor in the next 25 years would use MHD, so the International Atomic Energy Agency and the ENEA (both nuclear agencies) withdrew support from the ILG-MHD, leaving UNESCO as its primary sponsor.

Former Yugoslavia development

Engineers at the former Yugoslavia's Institute of Thermal and Nuclear Technology (ITEN), Energoinvest Co., Sarajevo, built and patented the first experimental magnetohydrodynamic power-generator facility in 1989.

U.S. development

In the 1980s, the U.S. Department of Energy began a multiyear program, culminating in a 1992 50 MW demonstration coal combustor at the Component Development and Integration Facility (CDIF) in Butte, Montana. This program also included significant work at the Coal-Fired In-Flow Facility (CFIFF) at the University of Tennessee Space Institute. The program combined four parts:

An integrated MHD topping cycle, with channel, electrodes, and current control units developed by AVCO, later known as Textron Defense of Boston. This system was a Hall effect duct generator heated by pulverized coal, with a potassium ionization seed. AVCO had developed the famous Mk. V generator and had significant experience.

An integrated bottoming cycle, developed at the CDIF.

A facility to regenerate the ionization seed, developed by TRW. Potassium sulphate is separated from the fly ash collected by the scrubbers and converted back to the carbonate to regain the potassium.

A method to integrate MHD into preexisting coal plants. The Department of Energy commissioned two studies: Westinghouse Electric performed one based on the Scholtz Plant of Gulf Power in Sneads, Florida, and the MHD Development Corporation produced another based on the J.E. Corrette Plant of the Montana Power Company in Billings, Montana.

Initial prototypes at the CDIF operated for short durations with various coals: Montana Rosebud and a high-sulphur, corrosive coal, Illinois No. 6. A great deal of engineering, chemistry, and materials science was completed. After the final components were developed, operational testing was completed with 4,000 hours of continuous operation: 2,000 on Montana Rosebud and 2,000 on Illinois No. 6. The testing ended in 1993.

Japanese development

The Japanese program in the late 1980s concentrated on closed-cycle MHD. The belief was that it would have higher efficiencies and smaller equipment, especially at the clean, small, economical plant capacities near 100 megawatts (electrical) suited to Japanese conditions. Open-cycle coal-powered plants are generally thought to become economic above 200 megawatts.

The first major series of experiments was FUJI-1, a blow-down system powered from a shock tube at the Tokyo Institute of Technology. These experiments extracted up to 30.2% of the enthalpy and achieved power densities near 100 megawatts per cubic meter. This facility was funded by Tokyo Electric Power, other Japanese utilities, and the Department of Education. Some authorities believe this system was a disc generator with a helium and argon carrier gas and a potassium ionization seed. In 1994, there were detailed plans for FUJI-2, a 5 MWe continuous closed-cycle facility powered by natural gas, to be built using the experience of FUJI-1.
The basic MHD design was to be a system with inert gases using a disc generator. The aim was an enthalpy extraction of 30% and an MHD thermal efficiency of 60%. FUJI-2 was to be followed by a retrofit of a 300 MWe natural gas plant.

Australian development

From the 1980s, Professor Hugo Messerle at the University of Sydney researched coal-fueled MHD. This resulted in a 28 MWe topping facility that was operated outside Sydney. Messerle also wrote a key reference work on MHD as part of a UNESCO education program.

Italian development

The Italian program began in 1989 with a budget of about US$20 million and had three main development areas:

MHD modelling.

Superconducting magnet development. The goal in 1994 was a prototype 2 m long, storing 66 MJ, for an MHD demonstration 8 m long. The field was to be 5 teslas, with a taper of 0.15 T/m. The geometry was to resemble a saddle shape, with cylindrical and rectangular windings of niobium–titanium in copper.

Retrofits to natural gas power plants. One was to be at the Enichem-Anic factory in Ravenna; in this plant, the combustion gases from the MHD would pass to the boiler. The other was a 230 MW (thermal) installation for a power station in Brindisi that would pass steam to the main power plant.

Chinese development

A joint U.S.–China national program ended in 1992 by retrofitting the coal-fired No. 3 plant in Asbach. A further eleven-year program was approved in March 1994. It established centres of research in:

The Institute of Electrical Engineering of the Chinese Academy of Sciences, Beijing, concerned with MHD generator design.

The Shanghai Power Research Institute, concerned with overall system and superconducting magnet research.

The Thermoenergy Research Engineering Institute at Southeast University in Nanjing, concerned with later developments.

The 1994 study proposed a 10 MW (electrical, 108 MW thermal) generator with the MHD and bottoming-cycle plants connected by steam piping, so that either could operate independently.

Russian developments

In 1971, the natural-gas-fired U-25 plant was completed near Moscow, with a designed capacity of 25 megawatts. By 1974 it delivered 6 megawatts of power. By 1994, Russia had developed and operated the coal-fired facility U-25 at the High-Temperature Institute of the Russian Academy of Sciences in Moscow. U-25's bottoming plant was operated under contract with the Moscow utility and fed power into Moscow's grid. There was substantial interest in Russia in developing a coal-powered disc generator. In 1986 the first industrial power plant with an MHD generator was built, but the project was cancelled in 1989 before the MHD unit was launched; the plant later joined the Ryazan Power Station as a seventh unit of conventional construction.

See also

Computational magnetohydrodynamics
Electrohydrodynamics
Electromagnetic pump
Ferrofluid
Magnetic flow meter
Magnetohydrodynamic turbulence
MHD sensor
Plasma stability
Shocks and discontinuities (magnetohydrodynamics)

References

Further reading

Hugo K. Messerle, Magnetohydrodynamic Power Generation, 1994, John Wiley, Chichester, part of the UNESCO Energy Engineering Series. (This is the source of the historical and generator design information.)
Shioda, S., "Results of Feasibility Studies on Closed-Cycle MHD Power Plants", Proc. Plasma Tech. Conf., 1991, Sydney, Australia, pp. 189–200.
R.J. Rosa, Magnetohydrodynamic Energy Conversion, 1987, Hemisphere Publishing, Washington, D.C.
G.J. Womac, MHD Power Generation, 1969, Chapman and Hall, London.
External links

MHD generator research at the University of Tennessee Space Institute (archive) - 2004
Model of an MHD generator at the Institute of Computational Modelling, Akademgorodok, Russia - 2003
The Magnetohydrodynamic Engineering Laboratory of the University of Bologna, Italy - 2003
High Efficiency Magnetohydrodynamic Power Generation - 2015

Chemical engineering Electrical generators Energy conversion American inventions Plasma technology and applications Power station technology
Magnetohydrodynamic generator
[ "Physics", "Chemistry", "Technology", "Engineering" ]
5,936
[ "Electrical generators", "Machines", "Plasma physics", "Plasma technology and applications", "Chemical engineering", "Physical systems", "nan" ]
1,502,929
https://en.wikipedia.org/wiki/Calomel
Calomel is a mercury chloride mineral with formula Hg2Cl2 (see mercury(I) chloride). It was used as a medicine from the 16th to the early 20th century, despite frequently causing mercury poisoning in patients. The name derives from the Greek kalos (beautiful) and melas (black), because it turns black on reaction with ammonia. This was known to alchemists.

Calomel occurs as a secondary mineral which forms as an alteration product in mercury deposits. It occurs with native mercury, amalgam, cinnabar, mercurian tetrahedrite, eglestonite, terlinguaite, montroydite, kleinite, moschelite, kadyrelite, kuzminite, chursinite, kelyanite, calcite, limonite and various clay minerals. The type locality is Moschellandsburg, Alsenz-Obermoschel, Rhineland-Palatinate, Germany.

History

The substance later known as calomel was first documented in ancient Persia by the physician Rhazes in the year 850. Only a few of the compounds he mentioned can be positively identified as calomel, as not every alchemist disclosed what compounds they used in their drugs. Calomel first entered Western medical literature in 1608, when Oswald Croll wrote about its preparation in his Tyrocinium Chymicum. It was not called calomel until 1655, when the name was coined by Théodore de Mayerne, who had published its preparation and formula in the Pharmacopoeia Londinensis in 1618.

By the 19th century, calomel was viewed as a panacea, or miracle drug, and was used against almost every disease, including syphilis, bronchitis, cholera, ingrown toenails, teething, gout, tuberculosis, influenza, and cancer. During the 18th and early 19th centuries pharmacists used it sparingly, but by the late 1840s it was being prescribed in heroic doses, due in part to the research of Benjamin Rush, who coined the term "heroic dose" for a massive dose taken about four times daily. This stance was supported by Samuel Cartwright, who believed that large doses were "gentlest" on the body.

As calomel rose in popularity, more research was done into how it worked. J. Annesley was one of the first to write about the differing effects of calomel when taken in small or large doses. Through experimentation on dogs, Annesley concluded that calomel acted more like a laxative on the whole body rather than acting specifically on the vascular system or liver, as previous physicians had believed. In 1853, Samuel Jackson described the harmful effects of calomel on children in his publication for the Transactions of Physicians of Philadelphia, noting that it caused gangrene of the skin, loss of teeth, and deterioration of the gums.

On May 4, 1863, William A. Hammond, the United States surgeon general, ordered that calomel no longer be used in the army, as it was being abused by soldiers and physicians alike. This caused much debate in the medical field and eventually led to his removal as surgeon general. Calomel continued to be used well into the 1890s and even into the early 20th century. Its popularity eventually began to wane as more research was done and scientists discovered that the mercury in the compound was poisoning patients. Calomel was the main one of the three components of the British Army's "pill number 9" during the First World War.

Electrochemistry

Calomel is used as the interface between metallic mercury and a chloride solution in a saturated calomel electrode, which is used in electrochemistry to measure pH and electrical potentials in solutions.
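For instance, potentials measured against a saturated calomel electrode (SCE) are routinely converted to the standard-hydrogen-electrode (SHE) scale by adding the SCE's own potential. A minimal sketch, assuming the commonly tabulated value of about +0.241 V for a saturated KCl calomel electrode at 25 °C (a figure not given in this article):

# Minimal sketch: convert a potential measured vs. a saturated calomel
# electrode (SCE) to the standard-hydrogen-electrode (SHE) scale.

E_SCE_VS_SHE = 0.241  # volts; tabulated 25 C value, assumed here

def sce_to_she(e_vs_sce):
    """Potential vs. SHE for a reading taken against an SCE reference."""
    return e_vs_sce + E_SCE_VS_SHE

# A working electrode reading -0.500 V vs. SCE:
print(f"{sce_to_she(-0.500):+.3f} V vs. SHE")  # prints -0.259 V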
In most electrochemical measurements, it is necessary to keep one of the electrodes in an electrochemical cell at a constant potential. This so-called reference electrode allows control of the potential of a working electrode.

Chemical properties

Calomel is a powder that is white when pure, and it has been used as a pigment in 17th-century South American painting and in European medieval manuscripts. When it is exposed to light or contains impurities, it takes on a darker tint. Calomel is made up of mercury and chlorine, with the chemical formula Hg2Cl2.

Depending on how calomel was administered, it affected the body in different ways. Taken orally, calomel damaged mainly the lining of the gastrointestinal tract. Mercury salts (such as calomel) are insoluble in water and therefore do not absorb well through the wall of the small intestine. Some of the calomel in the digestive system will likely be oxidized into a form of mercury that can be absorbed through the intestine, but most of it will not. Oral calomel was actually the safest form of the drug to take, especially in low doses; most of the calomel ingested is excreted through urine and stool. Powdered forms of calomel were much more toxic, as their vapors damaged the brain. Once inhaled, the calomel enters the bloodstream, where the mercury binds with the amino acids methionine, cysteine, homocysteine and taurine. This is because of the sulfur group these amino acids contain, for which mercury has a high affinity. Mercury is able to pass through the blood–brain barrier and build up in the brain. It can also pass through the placenta, damaging the unborn child if a pregnant mother takes calomel.

Calomel was manufactured in two ways: sublimation and precipitation. Early manufacture was by sublimation, which tends to yield a very fine white powder. There was some controversy over the sublimation of calomel: many argued that the more times calomel was sublimed, the purer it became, while opponents believed that repeated sublimation made calomel lose some of its therapeutic ability. In 1788 the chemist Carl Wilhelm Scheele devised a method for making precipitated calomel. This rapidly became popular in the pharmaceutical industry because it was both a cheaper and a safer form of production, and precipitation also tended to form very pure calomel salts.

Medicine

Calomel was a popular medicine during the Victorian period and was widely used as a treatment for a variety of ailments during the American Civil War. The medication was available in two forms: blue pills and blue masses. The blue pill was an oral form of calomel containing mercury that was often mixed with a sweet substance, such as licorice or sugar, in order to be taken by mouth. The blue mass was a solid form of calomel from which a piece could be pinched off and administered by a physician or other medical provider. Neither form of the medication came with standardized dosing; there was no way of knowing how much mercurous chloride each dose contained.

Uses

Calomel was marketed as a purgative agent to relieve congestion and constipation; however, physicians at the time had no idea what the medication's mechanism of action was. They learned how calomel worked through trial and error. It was observed that small doses of calomel acted as a stimulant, often leading to bowel movements, while larger doses caused sedation.
During the 19th century, calomel was used to treat numerous illnesses and diseases such as mumps and typhoid fever, especially those that affect the gastrointestinal tract, such as constipation, dysentery, and vomiting. As mercury softened the gums, calomel was the principal constituent of teething powders until the mid-twentieth century. Babies given calomel for teething often suffered from acrodynia.

Side effects

It became popular in the late 18th century to give calomel in extremely high doses, as Benjamin Rush normalized the heroic dose. As a result, many patients experienced painful and sometimes life-threatening side effects. In high doses, calomel led to mercury poisoning, which had the potential to cause permanent deformities and even death. Some patients experienced gangrene of the mouth caused by the mercury in the medicine, in which the tissue of the cheeks and gums inside the mouth broke down and died. Some patients lost teeth, while others were left with facial deformities. High doses of calomel often led to extreme cramping, vomiting, and bloody diarrhea; at the time, however, this was taken as a sign that the calomel was working to purge the system and rid the body of disease. Calomel was often administered as a treatment for dysentery, where its effects would worsen the severe diarrhea associated with the disease and accelerate dehydration. One victim was Alvin Smith, the eldest brother of Joseph Smith, founder of the Church of Jesus Christ of Latter-day Saints; Alvin was suffering from a "bilious colic", that is, severe abdominal pain. Calomel was also used by Charles Darwin to treat his mysterious chronic gastrointestinal illness, which has recently been attributed to Crohn's disease.

Discontinuation

By the mid-19th century, some physicians had begun to question the usefulness of calomel. In 1863, the Surgeon General of the U.S. Army forbade calomel from inclusion in army medical supplies, a decision that angered many practicing doctors. The use of calomel gradually died out over the course of the late 19th and early 20th centuries, although it persisted longer in the American South and American West.

Citations

General bibliography

Palache, C.; Berman, H.; Frondel, C. (1960). Dana's System of Mineralogy, Volume II: Halides, Nitrates, Borates, Carbonates, Sulfates, Phosphates, Arsenates, Tungstates, Molybdates, Etc. (Seventh Edition). John Wiley and Sons, Inc., New York, pp. 25–28.

Halide minerals Mercury(I) minerals Withdrawn drugs History of medicine Mercury poisoning
Calomel
[ "Chemistry" ]
2,140
[ "Drug safety", "Withdrawn drugs" ]
1,503,112
https://en.wikipedia.org/wiki/Domed%20city
A domed city is a hypothetical structure that encloses a large urban area under a single roof. In most descriptions, the dome is airtight and pressurized, creating a habitat that can be controlled for air temperature, composition and quality, typically because the external atmosphere (or lack thereof) is inimical to habitation for one or more reasons. Domed cities have been a fixture of science fiction and futurology since the early 20th century; they offer inspiration for potential utopias and may be situated on Earth, a moon or another planet.

Origin

In the early 19th century, the social reformer Charles Fourier proposed that an ideal city should be connected by glass galleries. Such ideas inspired several architectural projects throughout the 19th and 20th centuries, the most famous of which is the Crystal Palace, built in Hyde Park in 1851.

In fiction

Domed cities appear frequently in underwater environments. In Robert Ellis Dudgeon's novel Colymbia (1873), glass domes are used for underwater conversation. In William Delisle Hay's novel Three Hundred Years Hence (1881), whole cities are covered by domes beneath the sea. Survivors of Atlantis are found living in an underwater glass-domed city in André Laurie's novel Atlantis (1895). The same idea is found later in David M. Parry's The Scarlet Empire (1906) and Stanton A. Coblentz's The Sunken World (1928). In William Gibson's Sprawl trilogy, the namesake of the series is a massive supercity in the USA, stretching from Boston to Atlanta and housed in a series of geodesic domes.

Authors have used domed cities in response to many problems, sometimes to the benefit of the people living in them and sometimes not. Air pollution and other environmental destruction are a common motive, particularly in stories of the middle to late 20th century, as in the Pure trilogy of books by Julianna Baggott. In some works, the domed city represents the last stand of a human race that is either dead or dying. The 1976 film Logan's Run shows both of these themes: the characters have a comfortable life within a domed city, but the city also serves to control the populace and to ensure that humanity never again outgrows its means. The domed city in fiction has been interpreted as a symbolic womb that both nourishes and protects humanity. Where other science fiction stories emphasize the vast expanse of the universe, the domed city places limits on its inhabitants, with the subtext that chaos will ensue if they interact with the world outside. In some works, cities are "domed" in order to quarantine their inhabitants.

Engineering proposals

During the 1960s and 1970s, the domed city concept was widely discussed outside the confines of science fiction. In 1960, the visionary engineer Buckminster Fuller described the Dome over Manhattan, a 3 km geodesic dome spanning Midtown Manhattan that would regulate weather and reduce air pollution. A domed city was proposed in 1979 for Winooski, Vermont, and in 2010 for Houston. Seward's Success, Alaska, was a domed city proposed in 1968 and designed to hold over 40,000 people along with commercial, recreational and office space. Intended to capitalize on the economic boom following the discovery of oil in northern Alaska, the project was canceled in 1972 due to delays in constructing the Trans-Alaska Pipeline. In order to test whether an artificial closed ecological system was feasible, Biosphere 2 (a complex of interconnected domes and glass pyramids) was constructed in the late 1980s.
Its original experiment housed eight people and remains the largest such system attempted to date. In 2010, a domed city for 100,000 people, known as Eco-city 2020, was proposed for the Mir mine in Siberia. In 2014, the ruler of Dubai announced plans for a climate-controlled domed city, named the Mall of the World, covering an area of 48 million square feet (4.5 square kilometers), but as of 2016 the project had been redesigned without the dome.

See also

Closed ecosystems

Notes

Space colonization Science fiction themes Fictional populated places Architecture related to utopias
Domed city
[ "Technology", "Engineering" ]
831
[ "Exploratory engineering", "Proposed arcologies", "Architecture related to utopias", "Architecture" ]
1,503,166
https://en.wikipedia.org/wiki/Terminal%20yield
In formal language theory, the terminal yield (or fringe) of a tree is the sequence of leaves encountered in an ordered walk of the tree.

Parse trees and/or derivation trees are encountered in the study of phrase structure grammars such as context-free grammars or linear grammars. The leaves of a derivation tree for a formal grammar G are the terminal symbols of that grammar, and the internal nodes the nonterminal or variable symbols. One can read off the corresponding terminal string by performing an ordered tree traversal and recording the terminal symbols in the order they are encountered. The resulting sequence of terminals is a string of the language L(G) generated by the grammar G.

Formal languages
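A minimal sketch of the read-off procedure described above: an ordered depth-first walk that collects leaf symbols. The Node shape is a hypothetical illustration, not a standard library type:

# Terminal yield (fringe) of an ordered tree via a left-to-right
# depth-first walk.

class Node:
    def __init__(self, symbol, children=()):
        self.symbol = symbol            # terminal or nonterminal label
        self.children = list(children)  # empty for leaves

def terminal_yield(node):
    """Return the sequence of leaf symbols in left-to-right order."""
    if not node.children:               # a leaf: record its terminal symbol
        return [node.symbol]
    out = []
    for child in node.children:         # ordered walk preserves word order
        out.extend(terminal_yield(child))
    return out

# Derivation tree for the string "ab" under S -> A B, A -> a, B -> b:
tree = Node("S", [Node("A", [Node("a")]), Node("B", [Node("b")])])
print("".join(terminal_yield(tree)))    # prints: ab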
Terminal yield
[ "Mathematics" ]
143
[ "Formal languages", "Mathematical logic" ]
1,503,224
https://en.wikipedia.org/wiki/Local%20boundedness
In mathematics, a function is locally bounded if it is bounded around every point. A family of functions is locally bounded if for any point in their domain all the functions are bounded around that point, and by the same number.

Locally bounded function

A real-valued or complex-valued function f defined on some topological space X is called locally bounded if for any x0 in X there exists a neighborhood A of x0 such that f(A) is a bounded set. That is, for some number M > 0 one has |f(x)| ≤ M for all x in A. In other words, for each x0 one can find a constant, depending on x0, which is larger than all the values of the function in the neighborhood of x0. Compare this with a bounded function, for which the constant does not depend on x0. Obviously, if a function is bounded then it is locally bounded. The converse is not true in general (see below).

This definition can be extended to the case when f takes values in some metric space (Y, d). Then the inequality above needs to be replaced with d(f(x), y) ≤ M for all x in A, where y is some point in the metric space. The choice of y does not affect the definition; choosing a different y will at most increase the constant M for which this inequality is true.

Examples

The function f : R → R defined by f(x) = 1/(x^2 + 1) is bounded, because 0 ≤ f(x) ≤ 1 for all x. Therefore, it is also locally bounded.

The function f : R → R defined by f(x) = 2x + 3 is not bounded, as it becomes arbitrarily large. However, it is locally bounded because for each a one has |f(x)| ≤ M in the neighborhood (a − 1, a + 1), where M = 2|a| + 5.

The function f : R → R defined by f(x) = 1/x for x ≠ 0 and f(0) = 0 is neither bounded nor locally bounded. In any neighborhood of 0 this function takes values of arbitrarily large magnitude.

Any continuous function is locally bounded. Here is a proof for functions of a real variable. Let f be continuous at x0; we will show that f is locally bounded at x0. Taking ε = 1 in the definition of continuity, there exists δ > 0 such that |f(x) − f(x0)| < 1 for all x with |x − x0| < δ. Now by the triangle inequality, |f(x)| < |f(x0)| + 1, which means that f is locally bounded at x0 (taking M = |f(x0)| + 1 and the neighborhood (x0 − δ, x0 + δ)). This argument generalizes easily to the case where the domain of f is any topological space.

The converse of the above result is not true, however; that is, a discontinuous function may be locally bounded. For example, consider the function f : R → R given by f(0) = 1 and f(x) = 0 for all x ≠ 0. Then f is discontinuous at 0 but locally bounded; it is locally constant apart from at zero, where we can take M = 1 and the neighborhood (−1, 1), for example.

Locally bounded family

A set (also called a family) U of real-valued or complex-valued functions defined on some topological space X is called locally bounded if for any x0 in X there exists a neighborhood A of x0 and a positive number M such that |f(x)| ≤ M for all x in A and all f in U. In other words, all the functions in the family must be locally bounded, and around each point they need to be bounded by the same constant. This definition can also be extended to the case when the functions in the family U take values in some metric space, by again replacing the absolute value with the distance function.

Examples

The family of functions fn(x) = x/n, where n = 1, 2, ..., is locally bounded. Indeed, if x0 is a real number, one can choose the neighborhood A to be the interval (x0 − 1, x0 + 1). Then for all x in this interval and for all n ≥ 1 one has |fn(x)| ≤ M with M = |x0| + 1. Moreover, the family is uniformly bounded, because neither the neighborhood A nor the constant M depends on the index n.

The family of functions fn(x) = 1/(x^2 + n^2) is locally bounded, if n is greater than zero. For any x0 one can choose the neighborhood A to be all of R. Then we have |fn(x)| ≤ M with M = 1. Note that the value of M does not depend on the choice of x0 or its neighborhood. This family is then not only locally bounded, it is also uniformly bounded.

The family of functions fn(x) = x + n is not locally bounded. Indeed, for any x0 the values fn(x0) cannot be bounded as n tends toward infinity.
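The two definitions above can be restated compactly; a LaTeX sketch, with symbols chosen here for illustration:

% Local boundedness of a single function f : X -> R (or C),
% and of a family U of such functions (sketch).
\[
  f \text{ locally bounded} \iff
  \forall x_0 \in X \;\, \exists\, A \ni x_0 \text{ open},\ \exists M > 0 :
  \ |f(x)| \le M \quad \forall x \in A,
\]
\[
  U \text{ locally bounded} \iff
  \forall x_0 \in X \;\, \exists\, A \ni x_0 \text{ open},\ \exists M > 0 :
  \ |f(x)| \le M \quad \forall x \in A,\ \forall f \in U.
\]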
Topological vector spaces

Local boundedness may also refer to a property of topological vector spaces, or of functions from a topological space into a topological vector space (TVS).

Locally bounded topological vector spaces

A subset B of a topological vector space (TVS) X is called bounded if for each neighborhood U of the origin in X there exists a real number s > 0 such that B ⊆ tU for every t > s. A locally bounded TVS is a TVS that possesses a bounded neighborhood of the origin. By Kolmogorov's normability criterion, this is true of a locally convex space if and only if the topology of the TVS is induced by some seminorm. In particular, every locally bounded TVS is pseudometrizable.

Locally bounded functions

A function f : X → Y between topological vector spaces is said to be a locally bounded function if every point of X has a neighborhood whose image under f is bounded.

The following theorem relates local boundedness of functions with the local boundedness of topological vector spaces:

Theorem. A topological vector space X is locally bounded if and only if the identity map id : X → X is locally bounded.

See also

External links

PlanetMath entry for Locally Bounded
nLab entry for Locally Bounded Category

Theory of continuous functions Functional analysis Mathematical analysis
Local boundedness
[ "Mathematics" ]
922
[ "Mathematical analysis", "Functions and mappings", "Functional analysis", "Theory of continuous functions", "Mathematical objects", "Topology", "Mathematical relations" ]
1,503,460
https://en.wikipedia.org/wiki/McNeill%27s%20law
In human geography, McNeill's law is the process outlined in William H. McNeill's book Plagues and Peoples. It concerns the role of microbial disease in the conquering of people-groups. In particular, it describes how diseases such as smallpox, measles, typhus, scarlet fever, and sexually transmitted diseases have significantly reduced native populations so that they are unable to resist colonization.

Concept

According to McNeill's law, the microbiological aspect of conquest and invasion has been the deciding principle, or one of the deciding principles, in both the expansion of certain empires (as during the emigration to the Americas) and the containment of others (as during the Crusades). The argument is that less civilized peoples were easily subjugated due to the immunological advantages of those coming from civilized countries. Evidence presented to support the hypothesis involves the manner in which diseases associated with Europeans were rebuffed in their forays into disease-experienced countries such as China and Japan. McNeill's law also maintains that parasites are not only natural but also social, in the sense that these organisms are part of the social continuum and that human social evolution is inextricably linked with genetic transformations.

Instances in history

The first people-group fully wiped out due to European expansion (with the possible exception of the Arawaks) was the Guanches of the Canary Islands. Despite their ferocity, superior knowledge of the land and even a possible tactical superiority, they were eventually wiped out through the concentrated efforts of the Spanish and Portuguese. McNeill's law would place the deciding factor squarely on the introduction of deadly diseases and parasites from the mainland to the previously geographically isolated islanders. This is the likely explanation, as what records still exist show numerous deaths by disease on the islands and a declining birth rate, leading eventually to the almost complete end of the Guanches as a race. Other instances include the devastation of the Incas by smallpox.

References

Human geography Epidemiology
McNeill's law
[ "Environmental_science" ]
414
[ "Epidemiology", "Environmental social science", "Human geography" ]
1,503,549
https://en.wikipedia.org/wiki/Chrispijn%20van%20den%20Broeck
Chrispijn van den Broeck (1523 – c. 1591) was a Flemish painter, draughtsman, print designer and designer of temporary decorations. He was a scion of a family of artists which had its origins in Mechelen and later moved to Antwerp. He is known for his religious compositions and portraits as well as his extensive output of designs for prints. He was active in Antwerp, which he left for some time because of the persecution of persons adhering to his religious convictions.

Life

Chrispijn van den Broeck was born in Mechelen as the son of Jan van den Broeck, a painter. His family included artists who were active in Mechelen. The family also used the Latinized name 'Paludanus', based on the Latin translation ('palus') of the Dutch word 'broeck', which forms part of the family name and means a marsh or swamp. He was likely a relative of the sculptor and painter Willem van den Broecke and the painter Hendrick van den Broeck. He was probably trained by his father.

He moved to Antwerp some time before 1555, the year in which he was first registered as a master painter of the Antwerp Guild of St. Luke. Chrispijn was then working in the workshop of the leading history painter Frans Floris. Frans Floris was one of the Romanist painters active in Antwerp. The Romanists were Netherlandish artists who had trained in Italy and, upon their return to their home countries, painted in a style that assimilated Italian influences into the Northern painting tradition. Van den Broeck remained in Frans Floris' workshop until the master's death in 1570. Together with Frans Pourbus the Elder, he was one of the collaborators who helped finish Floris' paintings after the master had become incapacitated by the alcoholism into which he had sunk in his later years. According to the contemporary Flemish art historian and artist Karel van Mander, Chrispijn van den Broeck and Frans Pourbus completed an altarpiece for the Grand Prior of Spain left incomplete at the time of Floris' death.

Van den Broeck became a citizen of Antwerp in 1559. He married Barbara de Bruyn. Their daughter Barbara van den Broeck (1560–?) became an engraver who mainly created reproductions after her father's work. Chrispijn may have lived in Italy for some time, but there is no evidence of this. He received a pupil named Niclaes Ficet in his Antwerp workshop in 1577. In 1584, van den Broeck resided for a short time in Middelburg to escape the political and religious unrest in Antwerp. His name was last mentioned in the Guild records of 1589, in connection with a payment. His wife is mentioned as a widow on 6 February 1591. Chrispijn van den Broeck must therefore have died sometime between 1589 and 6 February 1591, most likely in Antwerp.

Work

Van Mander stated that Chrispijn van den Broeck was 'a good inventor... clever at large nudes and just as good an architect'. The latter may refer to his involvement in temporary constructions and decorations during festivities in the city, such as the theatre competition called the Landjuweel held in Antwerp in 1561 and the Joyous Entries into Antwerp of 1570 and 1582. About 23 paintings are attributed to Chrispijn van den Broeck, some of which are signed. His work is regularly mentioned in 16th and 17th century Antwerp inventories, which indicates that his output must have been larger.
While there is no evidence that the artist visited Italy, his work shows the influence of the Venetian Jacopo Bassano in the use of large, solid figures placed within a landscape. As a pupil of the Romanist Frans Floris, who did study in Italy, he may have received the Italian influence through his master. He may also have seen prints after Italian artworks. Van den Broeck further adopted Floris' technique of applying a brown preparatory ground underneath the main colours of his paintings. As a result, his works typically display a brown hue. His palette favours pink, brown, grey and yellow tones.

Van den Broeck's painting Two Young Men (Fitzwilliam Museum) is a double portrait of two cheerful young men or adolescent boys. They are wearing fancy clothes in the Italian fashion, which were likely also worn by fashionable young men in 16th-century Flanders. Their embrace and smiling glances show that the relationship between the two men is close. The boy in black seems to be offering his friend an apple while he looks at the viewer with a smile; the other boy looks with a smile at the boy in black. While an apple was often used as a symbol of physical love, it would be wrong to assume the painting depicts two homosexual lovers; the boys' physical likeness indicates that they are more likely brothers. Based on the symbols used throughout the painting, its subject appears to be death. Two dark owl heads peek out over the shoulders of the boy in black, while a crow's or raven's head in profile, with its sharp beak pointed towards the boy in black, juts out from the right side of the head of the boy in red. Both the owl and the raven are traditional symbols of death. The stone panel at the top of the picture bears the artist's initials and recalls funerary sculpture.

A total of 146 drawings have been attributed with certainty to van den Broeck. Of these, 89 are designs for engravings. His earliest drawing is dated 1560. It is possible that his tendency to accentuate the contours of forms made his drawings well suited as designs for prints. Whether van den Broeck himself also etched or engraved is unknown. From 1566 onwards van den Broeck created design drawings for publications by Christoffel Plantin, such as Benito Arias Montano's Humanae salutis monumenta, published in 1571. Van den Broeck also worked for the print publishers Gerard de Jode, Adriaen Huybrechts, Hans van Luijck, Willem van Haecht the Elder and Plantin's successor Jan Moretus I. His designs were engraved by engravers such as Abraham de Bruyn, Jan Collaert the Elder and Johannes Wierix. He designed the illustration of the allegory of the Low Countries used in Lodovico Guicciardini's Descrizione di tutti i Paesi Bassi (1567).

References

External links

1523 births 1591 deaths Flemish Renaissance painters Flemish portrait painters Flemish history painters Painters from Antwerp Artists from Mechelen
Chrispijn van den Broeck
[ "Engineering" ]
1,379
[]