Dataset columns: id (int64, 39 to 79M) · url (string, 31 to 227 chars) · text (string, 6 to 334k chars) · source (string, 1 to 150 chars) · categories (list, 1 to 6 items) · token_count (int64, 3 to 71.8k) · subcategories (list, 0 to 30 items)
313,830
https://en.wikipedia.org/wiki/Bulldozer
A bulldozer or dozer (also called a crawler) is a large, motorized machine equipped with a metal blade at the front for pushing material: soil, sand, snow, rubble, or rock during construction work. It travels most commonly on continuous tracks, though specialized models riding on large off-road tires are also produced. Its most popular accessory is a ripper, a large hook-like device mounted singly or in multiples at the rear to loosen dense materials. Bulldozers are used heavily in large- and small-scale construction, road building, mining and quarrying, on farms, in heavy industry factories, and in military applications in both peace and wartime. The word "bulldozer" refers only to a motorized unit fitted with a blade designed for pushing. The word is sometimes used inaccurately for other heavy equipment such as a front-end loader, which is designed for carrying rather than pushing material. The term originally referred only to the blade attachment but is now commonly applied to any crawler tractor with a front-mounted blade.

Description
Typically, bulldozers are large and powerful tracked heavy equipment. The tracks give them excellent traction and mobility through very rough terrain. Wide tracks also help distribute the vehicle's weight over a large area (decreasing ground pressure), thus preventing it from sinking in sandy or muddy ground. Extra-wide tracks are known as swamp tracks or low ground pressure (LGP) tracks. Bulldozers have transmission systems designed to take advantage of the track system and provide excellent tractive force. These traits allow bulldozers to excel in road building, construction, mining, forestry, land clearing, infrastructure development, and any other projects requiring highly mobile, powerful, and stable earth-moving equipment. A variant is the all-wheel-drive wheeled bulldozer, which generally has four large rubber-tired wheels, hydraulically operated articulated steering, and a hydraulically actuated blade mounted forward of the articulation joint. The bulldozer's primary tools are the blade and the ripper.

Blade
Bulldozer blades come in three types:
straight ("S blade"): short, with no lateral curve or side wings; can be used for fine grading.
universal ("U blade"): tall and very curved, with large side wings to maximize load.
combination ("S-U", or semi-U): shorter, with less curvature and smaller side wings; typically used for pushing large rocks, as at a quarry.
Blades can be fitted straight across the frame, or at an angle. All can be lifted; some, with additional hydraulic cylinders, can be tilted to vary the angle to one side. Sometimes a bulldozer is used to push or pull another piece of earth-moving equipment known as a "scraper" to increase productivity. The towed Fresno Scraper, invented in 1883 by James Porteous, was the first design to enable this to be done economically, removing the soil from an area being cut and depositing it where needed as fill. Dozer blades with a reinforced center section for pushing are known as "bull blades". Dozer blades are added to combat engineering vehicles and other military equipment, such as artillery tractors like the Type 73 or the M8 tractor, to clear battlefield obstacles and prepare firing positions. Dozer blades may be mounted on main battle tanks to clear antitank obstacles or mines, and to dig improvised shelters.

Ripper
A ripper is a long, claw-like shank that may be mounted singly or in multiples on the rear of a bulldozer to loosen hard and impacted materials.
Usually a single shank is preferred for heavy ripping. The ripper is equipped with a replaceable tungsten steel alloy tip, known as a boot. Ripping can not only loosen soil (such as podzol hardpan) in agricultural and construction applications but also break shaly rock or pavement into easily handled small rubble. A variant of the ripper is the stumpbuster, a single spike protruding horizontally, used to split tree stumps.

Variants
Armored bulldozers
Bulldozers employed in combat-engineering roles are often fitted with armor to protect the driver from firearms and debris, enabling them to operate in combat zones. The most widely documented example is the Israeli military's militarized Caterpillar D9, used for earth moving, clearing terrain obstacles, opening routes, and detonating explosive charges. The IDF used armoured bulldozers extensively during Operation Rainbow, where they were used to uproot Gaza Strip smuggling tunnels and destroy residential neighbourhoods, water wells and pipes, and agricultural land to expand the military buffer zone along the Philadelphi Route. This drew criticism of both the practice and the suppliers of armoured bulldozers from human-rights organizations such as the EWASH coalition and Human Rights Watch; the latter urged Caterpillar to cease selling bulldozers to the IDF. The use of bulldozers was seen as necessary by Israeli authorities to uproot smuggling tunnels, destroy houses used by Palestinian gunmen, and expand the buffer zone. Some forces' engineer doctrines differentiate between a low-mobility armoured dozer (LMAD) and a high-mobility armoured dozer (HMAD). The LMAD depends on a flatbed to move it to its employment site, whereas the HMAD has a more robust engine and drive system designed to give it road mobility with a moderate range and speed. HMADs, however, normally lack the full cross-country mobility of a dozer-blade-equipped tank or armoured personnel carrier. Some bulldozers have been fitted with armor by civilian operators to prevent bystanders or police from interfering with the work performed by the bulldozer, as in the case of strikes or demolition of condemned buildings. This has also been done by civilians in disputes with the authorities, such as Marvin Heemeyer, who outfitted his Komatsu D355A bulldozer with homemade composite armor and used it to demolish government buildings.

Remote-controlled dozers
In recent years, innovations in construction technology have made remote-controlled bulldozers a reality. Heavy machinery can now be controlled from up to 1,000 feet away, which contributes to the safety of workers on the jobsite by keeping them at a secure distance from potentially dangerous tasks while still giving them sufficient control over the dozers to get the job done. Though these machines are still in their early stages, many construction companies are using them successfully.

History
The first bulldozers were adapted from Holt farm tractors that were used to plough fields. The versatility of tractors in soft ground for logging and road building contributed to the development of the armored tank in World War I. In 1923, farmer James Cummings and draftsman J. Earl McLeod made the first designs for the bulldozer. A replica is on display at the city park in Morrowville, Kansas, where the two built the first bulldozer. On December 18, 1923, Cummings and McLeod filed U.S. patent #1,522,378, later issued on January 6, 1925, for an "Attachment for Tractors."
By the 1920s, tracked vehicles had become common, particularly the Caterpillar 60. Rubber-tired vehicles came into use in the 1940s. To dig canals, raise earthen dams, and do other earth-moving jobs, these tractors were equipped with a large, thick metal plate in front. (The blade got its curved shape later.) In some early models, the driver sat on top in the open without a cabin. The three main types of bulldozer blades are a U-blade for pushing and carrying soil relatively long distances, a straight blade for "knocking down" and spreading piles of soil, and a brush rake for removing brush and roots. These attachments (home-built or built by small manufacturers of attachments for wheeled and crawler tractors and trucks) appeared by 1929, but widespread acceptance of the bull-grader does not seem to have come before the mid-1930s. The addition of power down-force provided by hydraulic cylinders, instead of just the weight of the blade, made them the preferred excavation machine for large and small contractors alike by the 1940s, by which time the term "bulldozer" referred to the entire machine and not just the attachment.

Over the years, bulldozers got bigger and more powerful in response to the demand for equipment suited to ever larger earthworks. Firms such as Caterpillar, Komatsu, Clark Equipment Co, Case, Euclid, Allis Chalmers, Liebherr, LiuGong, Terex, Fiat-Allis, John Deere, Massey Ferguson, BEML, XGMA, and International Harvester manufactured large tracked-type earthmoving machines, while R.G. LeTourneau and Caterpillar manufactured large rubber-tired bulldozers. Bulldozers grew more sophisticated as time passed. Improvements include drivetrains analogous to an automatic rather than a manual transmission in automobiles, such as in the early Euclid C-6 and TC-12 or the Model C Tournadozer; blade movement controlled by hydraulic cylinders or electric motors instead of the early models' cable winch/brake; and automatic grade control. Hydraulic cylinders enabled the application of down force, more precise manipulation of the blade, and automated controls. In the very snowy winter of 1946–47 in the United Kingdom, in at least one case a remote, cut-off village running out of food was supplied by a bulldozer towing a large sled of necessary supplies. A more recent innovation is the outfitting of bulldozers with GPS technology, such as that manufactured by Topcon Positioning Systems, Inc., Trimble Inc., or Leica Geosystems, for precise grade control and (potentially) "stakeless" construction. In response to the many and often varying claims about these systems, the Kellogg Report in 2010 published a detailed comparison of all the manufacturers' systems, evaluating more than 200 features for dozers alone. The best-known maker of bulldozers is Caterpillar; Komatsu, Liebherr, Case, Hitachi, Volvo, and John Deere are present-day competitors. Although these machines began as modified farm tractors, they became the mainstay of big civil construction projects and found their way into use by military construction units worldwide. The best-known model, the Caterpillar D9, was also used to clear mines and demolish enemy structures.

Manufacturers
Industry statistics based on 2010 production published by Off-Highway Research showed Shantui was the largest producer of bulldozers, making over 10,000 units that year, or two in five crawler-type dozers made worldwide.
The next-largest producer by number of units is Caterpillar Inc., which produced 6,400 units. Komatsu introduced the D575A in 1981, the D575A-2 in 1991, and the D575A-3 in 2002, which the company touts as the biggest bulldozer in the world.

History of the word
19th century: a term used in engineering for a horizontal forging press.
Around the 1870s: in the US, a "bulldose" was a large dose (namely, one large enough to be literally or figuratively effective against a bull) of any sort of medicine or punishment. By the late 1870s, "to bulldoze" and "bulldozing" were being used throughout the United States to describe intimidation "by violent and unlawful means", which sometimes meant a severe whipping or coercion, or other intimidation, such as at gunpoint. It had a particular meaning in the Southern United States as a whipping or other punishment for African Americans intended to suppress black voter turnout in the 1876 United States presidential election.
1886: "bulldozer" meant a large-caliber pistol, or the person who wielded it.
Late 19th century: "bulldozing" meant using brute force to push over or through any obstacle, with reference to two bulls pushing against each other's heads in a fight over dominance.
1930s: applied to the vehicle. Blades appeared as early as 1929 but were known as "bull grader" blades, and the term "bulldozer blade" did not come into widespread use until the mid-1930s. "Bulldozer" now refers to the whole machine, not just the attachment.
In contemporary usage, "bulldozer" is sometimes shortened to "dozer", and the verb "bulldozing" to "dozing", thus making a homophone with the pre-existing verb "dozing".

Gallery

See also
Acco super bulldozer, the largest bulldozer manufactured
Athanas, the 'bulldozer shrimp' (named for the way it pushes sand about)
Land scraper or land leveler, an earth-moving machine that is pulled behind a tractor rather than pushed

References

External links
The mechanism of a bulldozer (short illustrated explanations, with flash animations, suitable for kids)
Old engine bulldozer pages and photos
When Bulldozers Roamed the Earth

Construction equipment Demolition Engineering vehicles Heavy equipment Tracked vehicles American inventions
Bulldozer
[ "Engineering" ]
2,705
[ "Demolition", "Construction equipment", "Construction", "Engineering vehicles", "Bulldozers", "Industrial machinery" ]
313,833
https://en.wikipedia.org/wiki/Swiss%20Army%20knife
The Swiss Army knife (SAK) is a pocketknife, generally multi-tooled, now manufactured by Victorinox. The term "Swiss Army knife" was coined by American soldiers after World War II because they had trouble pronouncing the German word "Offiziersmesser", meaning "officer's knife". The Swiss Army knife generally has a drop-point main blade plus other types of blades and tools, such as a screwdriver, a can opener, a saw blade, a pair of scissors, and many others. These are folded into the handle of the knife through a pivot-point mechanism. The handle is traditionally a red colour, with either a Victorinox or Wenger "cross" logo or, for Swiss military issue knives, the coat of arms of Switzerland. Other colours, textures, and shapes have appeared over the years. Originating in Ibach, Switzerland, the Swiss Army knife was first produced in 1891, when the Karl Elsener company, which later became Victorinox, won the contract to produce the Swiss Army's Modell 1890 knife from the previous German manufacturer. In 1893, the Swiss cutlery company Paul Boéchat & Cie, which later became Wenger SA, received its first contract from the Swiss military to produce Modell 1890 knives; the two companies split the initial contract for provision of the knives and operated as separate enterprises from 1908. In 2005, Victorinox acquired Wenger. As an icon of the culture of Switzerland, both the design and the versatility of the knife enjoy worldwide recognition, and the term "Swiss Army knife" has acquired usage as a figure of speech indicating a multifaceted skillset.

History
Origins
The Swiss Army knife was not the first multi-use pocket knife. In 1851, in Moby-Dick (chapter 107), Herman Melville mentions the "Sheffield contrivances, assuming the exterior – though a little swelled – of a common pocket knife; but containing, not only blades of various sizes, but also screwdrivers, cork-screws, tweezers, bradawls, pens, rulers, nail files and countersinkers." During the late 1880s, the Swiss Army decided to purchase a new folding pocket knife for its soldiers. This knife was to be suitable for use by the army in opening canned food and for maintenance of the Swiss service rifle, the Schmidt–Rubin, which required a screwdriver for assembly and disassembly. In January 1891, the knife received the official designation Modell 1890. The knife had a blade, reamer, can opener, screwdriver, and grips made of dark oak wood (said by some to have later been partly replaced with ebony). At that time no Swiss company had the necessary production capacity, so the initial order for 15,000 knives was placed with the German knife manufacturer Wester & Co. of Solingen, Germany. These knives were delivered in October 1891. In 1891, Karl Elsener, then owner of a company that made surgical equipment, set out to manufacture the knives in Switzerland itself. At the end of 1891, Elsener began production of the Modell 1890 knives in direct competition with the Solingen company. He incurred financial losses doing so, as Wester & Co was able to produce the knives at a lower cost. Elsener was on the verge of bankruptcy when, in 1896, he developed an improved knife, intended for use by officers, with tools attached on both sides of the handle by a special spring mechanism, allowing him to use the same spring to hold them in place. This new knife was patented on 12 June 1897, with a second, smaller cutting blade, a corkscrew, and wood-fibre grips, under the name of Schweizer Offiziers- und Sportmesser ("Swiss officer's and sports knife").
While the Swiss military did not commission the knife, it was successfully marketed internationally, restoring Elsener's company to prosperity. Elsener used a variation on the Swiss coat of arms to identify his knives beginning in 1909; with slight modifications, this is still the company logo. Also in 1909, on the death of his mother, Elsener used his mother's name, Victoria, as a brand name in her honour. In 1921, following the invention of stainless steel (acier inoxydable in French, shortened to inox), Karl Elsener's son renamed the company Victorinox, combining Victoria and inox. In 1893, the second industrial cutler of Switzerland, Paul Boéchat & Cie, headquartered in Delémont in the French-speaking region of Jura, started selling a similar product. Its general manager, Théodore Wenger, acquired the company and renamed it the Wenger Company.

Victorinox and Wenger
In 1908 the Swiss government split the contract between Victorinox and Wenger, placing half the orders with each. By mutual agreement, Wenger advertised "the Genuine Swiss Army Knife" and Victorinox used the slogan "the Original Swiss Army Knife". On 26 April 2005, Victorinox acquired Wenger, once again becoming the sole supplier of knives to the military of Switzerland. Victorinox at first kept the Wenger brand intact, but on 30 January 2013 the company announced that the Wenger brand of knives would be abandoned in favour of Victorinox. The press release stated that Wenger's factory in Delémont would continue to produce knives and that all employees at the site would retain their jobs, and further elaborated that an assortment of items from the Wenger line-up would remain in production under the Victorinox brand name. Wenger's US headquarters would be merged with Victorinox's location in Monroe, Connecticut, and Wenger's watch and licensing business would continue as a separate brand, SwissGear. Up until 2008, Victorinox AG and Wenger SA supplied about 50,000 knives to the military of Switzerland each year, and manufactured many more for export, mostly to the United States. Commercial knives can be distinguished by their cross logos: the Victorinox cross logo is surrounded by a shield, while the Wenger cross logo is surrounded by a slightly rounded square. Victorinox registered the words "Swiss Army" and "Swiss Military" as trademarks in the US and in October 2018 was sued at the Bern cantonal commercial court by the Swiss Confederacy (represented by Armasuisse, the authority representing the actual Swiss military). After an initial hearing, Victorinox agreed to cede the United States registration of the term "Swiss Military" to Armasuisse in return for an exclusive licence to market perfumes under the same name.

Features, tools, and parts
Tools and components
There are various models of the Swiss Army knife with different tool combinations. Though Victorinox does not provide custom knives, it has produced many different variations to suit individual users, and the Wenger company produced even more model variations.
Common main layer tools:
Large blade (with 'VICTORINOX SWISS MADE' tang stamp on Victorinox blades since 2005)
Small blade
Nail file
Scissors (sharpened to a 65° angle)
Wood saw
Metal file or metal saw with nail file
Magnifying glass
Phillips screwdriver
Fish scaler / hook disgorger / ruler in cm and inches
Pliers / wire cutter / wire crimper
Can opener / 3 mm slotted screwdriver
Bottle opener / 6 mm slotted screwdriver with wire stripper

Other main layer tools:
LED light
USB flash drive
Hoof cleaner
Shackle opener / marlinspike
Electrician's blade / wire scraper
Pruning blade
Pharmaceutical spatula (cuticle pusher)
CyberTool (bit driver)
Combination tool containing cap opener / can opener / 5 mm slotted screwdriver with wire stripper

Back layer tools:
Corkscrew or Phillips driver
Reamer
Multipurpose hook with nail file
2 mm slotted screwdriver
Chisel
Mini screwdriver (screws within the corkscrew)
Keyring

Scale tools:
Tweezers
Toothpick
Pressurised ballpoint pen (retractable on smaller models; can be used to set DIP switches)
Stainless steel pin
Digital clock / alarm / timer / altimeter / thermometer / barometer

Three Victorinox SAK models had a butane lighter: the SwissFlame, the CampFlame and the SwissChamp XXLT, first introduced in 2002 and discontinued in 2005. The models were never sold in the United States due to a lack of safety features. They used a standard piezoelectric ignition system for easy ignition with an adjustable flame, and were designed for operation at altitudes up to above sea level and continuous operation of 10 minutes. In January 2010, Victorinox announced the Presentation Master models, released in April 2010. The technological tools included a laser pointer and a detachable flash drive with fingerprint reader. Victorinox now sells an updated version called the Slim Jetsetter, with "a premium software package that provides ultra secure data encryption, automatic backup functionality, secure web surfing capabilities, file and email synchronization between the drive and multiple computers, Bluetooth pairing and much more. On the hardware side of things, biometric fingerprint technology, laser pointers, LED lights, Bluetooth remote control and of course, the original Swiss Army Knife implements – blade, scissors, nail file, screwdriver, key ring and ballpoint pen are standard. Not every feature is available on every model within the collection." In 2006, Wenger produced a knife called "The Giant" that included every implement the company ever made, with 87 tools and 141 different functions. It was recognized by Guinness World Records as the world's most multifunctional penknife. It retails for about €798 or US$1,000, though some vendors charge much higher prices. In the same year, Victorinox released the SwissChamp XAVT, consisting of 118 parts and 80 functions, with a retail price of $425. The Guinness Book of Records recognizes a unique 314-blade Swiss Army-style knife made in 1991 by master cutler Hans Meister as the world's largest penknife, weighing .

Locking mechanisms
Some Swiss Army knives have locking blades to prevent accidental closure. Wenger was the first to offer a "PackLock" for the main blade on several of its standard 85 mm models. Several large Wenger and Victorinox models have a locking blade secured by a slide lock that is operated with an unlocking button integrated in the scales.
Some Victorinox 111 mm series knives have a double liner lock that secures both the cutting blade and the large slotted screwdriver/cap opener/wire stripper combination tool designed for prying.

Design and materials
Rivets and flanged bushings made from brass hold together all machined steel parts, the other tools, the separators and the scales. The rivets are made by cutting and pointing appropriately sized bars of solid brass. The separators between the tools have been made from aluminium alloy since 1951, which makes the knives lighter; previously these separating layers were made of nickel silver. The martensitic stainless steel alloy used for the cutting blades is optimized for high toughness and corrosion resistance; it has a composition of 15% chromium, 0.60% silicon, 0.52% carbon, 0.50% molybdenum, and 0.45% manganese, and is designated X55CrMo14 or DIN 1.4110 according to Victorinox. After a hardening process at 1040 °C and annealing at 160 °C, the blades achieve an average hardness of 56 HRC. This hardness is suitable for practical use and easy resharpening, but lower than that achieved in stainless steel alloys used for blades optimized for high wear resistance. According to Victorinox, the martensitic stainless steel alloy used for the other parts is X39Cr13 (aka DIN 1.4031, AISI/ASTM 420) and for the springs X20Cr13 (aka DIN 1.4021, still within AISI/ASTM 420). The steel used for the wood saws, scissors and nail files has a hardness of HRC 53; the screwdrivers, tin openers and awls have a hardness of HRC 52; and the corkscrew and springs have a hardness of HRC 49. The metal saws and files, in addition to special case hardening, are also subjected to a hard chromium plating process so that iron and steel can also be filed and cut.

Although red cellulose acetate butyrate (CAB; generally known trade names are Cellidor, Tenite and Tenex) scales are most common, many colors and alternative scale materials, such as the more resilient nylon, and aluminum, are available. Many textures, colors and shapes now appear on the Swiss Army knife. Since 2006 the scales on some knife models can have textured rubber non-slip inlays incorporated, intended to provide sufficient grip with moist or wet hands; the rubber also gives some impact protection to such edged scales. Modifications range from professionally produced custom models combining novel materials, colors, finishes and occasionally new tools, such as firesteels or tool 'blades' mounting replaceable surgical scalpel blades, to replacement of the standard scales (handles) with new versions in natural materials such as buffalo horn. In addition to 'limited edition' production runs, numerous examples of basic to professional-level customizations of standard knives, such as retrofitted pocket clips, one-off scales created using 3D printing techniques, decoration using anodization, and new scale materials, can be found by searching for "SAK mods".

Assembly
During assembly, all components are placed on several brass rivets. The first components are generally an aluminium separator and a flat steel spring. Once a layer of tools is installed, another separator and spring are placed for the next layer of tools. This process is repeated until all the desired tool layers and the finishing separator are installed. Once the knife is built, the metal parts are fastened by adding brass flanged bushings to the rivets. The excess length of the rivets is then cut off to make them flush with the bushings.
Finally, the remaining length of the rivets is flattened into the flanged bushings. After the assembly of the metal parts, the blades on smaller knives are sharpened to a 15° angle per side, resulting in a 30° V-shaped cutting edge. On larger knives the blades are sharpened to a 20° angle, resulting in a 40° V-shaped cutting edge. Chisel-ground blades are sharpened to a 24° angle, resulting in a 24° asymmetric cutting edge where only one side is ground and the other is deburred and remains flat. The blades are then checked with a laser-reflecting goniometer to verify the angle of the cutting edges. Finally, the scales are applied. Slightly undersized holes incorporated into their inner surface enclose the bushings, which have a truncated-cone cross-section and are slightly undercut, forming a one-way interference fit when pressed into the generally softer and more elastic scale material. The result is a tight adhesive-free connection that nonetheless permits new identical-pattern scales to be quickly and easily applied.

Sizes
Victorinox models are available in several lengths when closed, with the thickness of the knives varying depending on the number of tool layers included; certain models offer the most variety of tool configurations in the Victorinox line, with as many as 15 layers. Wenger models were likewise available in several closed lengths, with thickness varying by the number of tool layers and as many as 10 layers in the most varied models.

Knives issued by the Swiss Armed Forces
Since their first issue as personal equipment in 1891, the Soldatenmesser (soldier knives) issued by the Swiss Armed Forces have been revised several times. There are five main Modelle (models), whose model numbers refer to the year of introduction into the military supply chain. Several main models have been revised over time and therefore exist in different Ausführungen (versions), also denoted by the year of introduction. The issued models of the Swiss Armed Forces are:
Modell 1890
Modell 1890 Ausführung 1901
Modell 1908
Modell 1951
Modell 1951 Ausführung 1954
Modell 1951 Ausführung 1957
Modell 1961
Modell 1961 Ausführung 1965
Modell 1961 Ausführung 1978
Modell 1961 Ausführung 1994
Soldatenmesser 08 (Soldier Knife 08)

Soldier knives are issued to every recruit or member of the Swiss Armed Forces, and the knives issued to officers have never differed from those issued to non-commissioned officers and privates. A model incorporating a corkscrew and scissors was produced as an officer's tool but was deemed not "essential for survival"; officers were free to purchase it individually on their own account.

Soldier knife model 1890
The Soldier Knife model 1890 had a spear-point blade, reamer, can opener, screwdriver and grips made of oak wood scales (handles) that were treated with rapeseed oil for greater toughness and water repellency, which made them black in color. The wooden grips of the Modell 1890 tended to crack and chip, so in 1901 they were changed to a hard reddish-brown fiber similar in appearance to wood. The knife was long, thick and weighed .

Soldier knife model 1908
The Soldier Knife model 1908 had a clip-point blade rather than the 1890's spear-point blade, still with the fiber scales, carbon steel tools, nickel-silver bolster, liners, and divider. The knife was long, thick and weighed . The contract with the Swiss Army split production equally between the Victorinox and Wenger companies.
Soldier knife model 1951
The Soldier Knife model 1951 had fiber scales; nickel-silver bolsters, liners, and divider; and a spear-point blade. This was the first Swiss Armed Forces issue model in which the tools were made of stainless steel. The screwdriver now had a scraper arc on one edge. The knife was long, thick and weighed .

Soldier knife model 1961
The Soldier Knife model 1961 has a knurled alox handle with the Swiss crest, a drop-point blade, a reamer, a blade combining bottle opener, screwdriver, and wire stripper, and a combined can opener and small screwdriver. The knife was thick and weighed . The 1961 model also contains a brass spacer, which allows the knife, with the screwdriver and the reamer extended simultaneously, to be used in assembling the SIG 550 and SIG 510 assault rifles: the knife serves as a restraint for the firing pin during assembly of the lock. The Soldier Knife model 1961 was manufactured only by Victorinox and Wenger and was the first issued knife bearing the Swiss coat of arms on the handle.

Soldier knife 08
In 2007 the Swiss government made a request for new, updated soldier knives for the Swiss military, for distribution in late 2008. The evaluation phase of the new soldier knife began in February 2008, when Armasuisse issued an invitation to tender. A total of seven suppliers from Switzerland and other countries were invited to participate in the evaluation process. Functional models submitted by suppliers underwent practical testing by military personnel in July 2008, while laboratory tests were used to assess compliance with technical requirements. A cost-benefit analysis was conducted and the model with the best price/performance ratio was awarded the contract, an order for 75,000 soldier knives plus cases. Victorinox won the tender with a knife based on the One-Hand German Army Knife issued by the German Bundeswehr, which it released in the civilian model lineup, with the addition of a toothpick and tweezers stored in the nylon grip scales (side cover plates), as the One-Hand Trekker/Trailmaster model. Mass production of the new Soldatenmesser 08 (Soldier Knife 08) for the Swiss Armed Forces started in December 2008, and the knife was first issued to the Swiss Armed Forces beginning with the first basic training sessions of 2009. The Soldier Knife 08 has an ergonomic dual-density handle with non-slip TPU thermoplastic elastomer inlays incorporated in the green polyamide 6 grip shells, a double liner locking system, a one-hand-opening, locking, partly wavy-serrated, chisel-ground (optimized for right-handed use) drop-point blade sharpened to a 24° angle, a wood saw, a can opener with small slotted screwdriver, a locking bottle opener with large slotted screwdriver and wire stripper/bender, a reamer sharpened to a 48° angle, a Phillips (PH2) screwdriver, and a split keyring. The Soldier Knife 08 was not manufactured by Wenger.

Knives issued by other militaries
The armed forces of more than 20 nations have issued or approved the use of various versions of Swiss Army knives made by Victorinox, among them the forces of Germany, France, the Netherlands, Norway, Malaysia and the United States (NSN 1095-01-653-1166 Knife, Combat).

Space program
The Swiss Army knife has been present on space missions carried out by NASA since the late 1970s.
In 1978, NASA sent a letter of confirmation to Victorinox regarding a purchase of 50 knives of the Master Craftsman model. In 1985, Edward M. Payton, brother of astronaut Gary E. Payton, sent a letter to Victorinox asking about getting a Master Craftsman knife after seeing the one his brother used in space. There are other stories of repairs conducted in space using a Swiss Army knife.

Cultural impact
The Swiss Army knife has been added to the collections of the Museum of Modern Art in New York and Munich's State Museum of Applied Art for its design. The term "Swiss Army" is currently a registered trademark owned by Victorinox AG and its subsidiary, Wenger SA. In both the original television series MacGyver and its 2016 reboot, the character Angus MacGyver frequently uses different Swiss Army knives to solve problems and construct simple objects. The term "Swiss Army knife" has entered popular culture as a metaphor for usefulness and adaptability. The multi-purpose nature of the tool has also inspired a number of other gadgets. A particularly large Wenger knife model, the Wenger 16999, has inspired a large number of humorous reviews on Amazon; this model was recognized by Guinness World Records as 'The World's Most Multifunctional Penknife'. When Judge Roger Benitez of the U.S. District Court for the Southern District of California overturned California's 30-year-old ban on assault weapons in Miller v. Bonta, he compared the Swiss Army knife to the AR-15 rifle in the first sentence of his opinion: "Like the Swiss Army Knife, the popular AR-15 rifle is a perfect combination of home defense weapon and homeland defense equipment." In response, California Governor Gavin Newsom stated that the comparison "completely undermines the credibility of this decision".

Gallery

See also
Gerber multitool
Leatherman
Pocketknife
Swiss Army Man, a 2016 film that uses absurdist humor to manipulate a man's corpse like a multi-tool

References

Further reading
The Knife and its History – written on the occasion of the centennial anniversary of Victorinox; printed in Switzerland in 1984. Begins with 117 pages covering the history of world cutlery, beginning in the Stone Age, with many black-and-white prints from old books, followed by 72 pages on the history of the Victorinox company with color photos of the factory, production, and knives. A German edition also exists, Das Messer und seine Geschichte. A large-format hardback.
Swiss Army Knife Handbook: The Official History and Owner's Guide, by Kathryn Kane. Printed in the US, 1988. Practical information on the tools, modifications, and uses. 93 pages, paperback booklet. Published by the Swiss Army Knife Society.
Die Lieferanten von Schweizer Soldatenmessern seit 1891, by Martin Frosch. A binder-format work in German, with drawings, dealing mainly with the technical details of the Soldier model up through 1988.
A Collector's Guide to Victorinox 58 mm Pocket Knives. Published about 1990 by the author, Daniel J. Jacquart, president of the Victorinox SAK Society. 173 pages enumerating the models, scale materials, and colors. Binder format with black-and-white photos.
A Fervour Over Knives: Celebrating the centennial of Wenger. Printed in Switzerland in 1993. Eight pages on the history of cutlery; 28 pages on the Delémont region of the 19th century, its iron, forges, waters, and businesses; and 97 pages on the Wenger company, with striking color photographs of production and knives. Large-format hardback.
Swiss Army Knives: A Collectors Companion, by Derek Jackson.
Published in London, printed in the United Arab Emirates, 1999; a 2nd edition printed in China, 2003. 35 pages on the history of cutlery and 157 pages on Victorinox knives, with a brief history of the company, almost no mention of Wenger, and no history of models or development of tools. Much of the material is reproduced from Victorinox's The Knife and its History. A first boxed edition included a Soldier with Carl Elsener's signature engraved on the blade; the second edition was sometimes accompanied by one of a limited run (1 of 5,000) of 2008 Soldiers, the last of the Model 1961.
A Friend in Need, printed by Victorinox. The first edition has no title and no date; a second edition is dated 2003. 60 pages (2nd edition 56 pages) of true stories about lives saved, emergencies handled, and situations resolved with the SAK.
The Swiss Army Knife Owner's Manual, by Michael M. Young, 2011. Published by the author, printed in the US. A 224-page paperback with 96 color photos and several drawings. Chapters on the history of the Victorinox and Wenger companies and factories, the development of the Soldier and Officer models, charts of the main models made by both companies, care and safe use, improvised uses, results of physical tests, repairs and modifications, and true stories.
Les couteaux du soldat de l'Armée suisse, by Robert Moix, 2013. An informative summary in French, with many photos, of the many types and various manufacturers of the pocketknife issued to the Swiss Army.

External links
Victorinox manufacturer's website
SAKWiki

Products introduced in 1891 Camping equipment Mechanical hand tools Military equipment of Germany Military equipment of Malaysia Military equipment of Switzerland Military equipment of the United States Pocket knives Swiss inventions Victorinox
Swiss Army knife
[ "Physics" ]
5,561
[ "Mechanics", "Mechanical hand tools" ]
313,845
https://en.wikipedia.org/wiki/Formal%20concept%20analysis
In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties, and each sub-concept in the hierarchy represents a subset of the objects (as well as a superset of the properties) of the concepts above it. The term was introduced by Rudolf Wille in 1981, and builds on the mathematical theory of lattices and ordered sets developed by Garrett Birkhoff and others in the 1930s. Formal concept analysis finds practical application in fields including data mining, text mining, machine learning, knowledge management, the semantic web, software development, chemistry and biology.

Overview and history
The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One possibility of a very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data visualization and interpretation. A data table that represents a heterogeneous relation between objects and attributes, tabulating pairs of the form "object g has attribute m", is considered as a basic data type; it is referred to as a formal context. In this theory, a formal concept is defined to be a pair (A, B), where A is a set of objects (called the extent) and B is a set of attributes (the intent) such that the extent A consists of all objects that share the attributes in B, and dually the intent B consists of all attributes shared by the objects in A. In this way, formal concept analysis formalizes the semantic notions of extension and intension.

The formal concepts of any formal context can, as explained below, be ordered in a hierarchy called, more formally, the context's "concept lattice". The concept lattice can be graphically visualized as a "line diagram", which may then be helpful for understanding the data. Often, however, these lattices get too large for visualization; then the mathematical theory of formal concept analysis may be helpful, e.g., for decomposing the lattice into smaller pieces without information loss, or for embedding it into another structure that is easier to interpret.

The theory in its present form goes back to the early 1980s and a research group led by Rudolf Wille, Bernhard Ganter and Peter Burmeister at the Technische Universität Darmstadt. Its basic mathematical definitions, however, were already introduced in the 1930s by Garrett Birkhoff as part of general lattice theory. Other previous approaches to the same idea arose from various French research groups, but the Darmstadt group normalised the field and systematically worked out both its mathematical theory and its philosophical foundations. The latter refer in particular to Charles S. Peirce, but also to the Port-Royal Logic.

Motivation and philosophical background
In his article "Restructuring Lattice Theory" (1982), initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with lattice theory of the time and with pure mathematics in general: the production of theoretical results, often achieved by "elaborate mental gymnastics", was impressive, but the connections between neighboring domains, even between parts of a theory, were getting weaker.
The aim of restructuring lattice theory traces back to the educationalist Hartmut von Hentig, who in 1972 pleaded for restructuring the sciences in view of better teaching and in order to make the sciences mutually available and more generally open to criticism (i.e., also without specialized knowledge). Hence, by its origins formal concept analysis aims at interdisciplinarity and democratic control of research. It corrects the starting point that lattice theory inherited from the development of formal logic in the 19th century: then, and later in model theory, a concept as a unary predicate had been reduced to its extent. Now the philosophy of concepts should again become less abstract, by also considering the intent. Hence, formal concept analysis is oriented towards the categories of extension and intension from linguistics and classical conceptual logic. Formal concept analysis aims at the clarity of concepts according to Charles S. Peirce's pragmatic maxim, by unfolding observable, elementary properties of the subsumed objects. In his late philosophy, Peirce assumed that logical thinking aims at perceiving reality, via the triad of concept, judgement and conclusion. Mathematics is an abstraction of logic, develops patterns of possible realities and therefore may support rational communication. Against this background, Wille gave his definition of formal concept analysis.

Example
The data in the example is taken from a semantic field study in which different kinds of bodies of water were systematically categorized by their attributes; it has been simplified for the purpose here. The data table represents a formal context, and the line diagram next to it shows its concept lattice. Formal definitions follow below.

The line diagram consists of circles, connecting line segments, and labels. Circles represent formal concepts. The lines allow one to read off the subconcept-superconcept hierarchy. Each object and attribute name is used as a label exactly once in the diagram, with objects below and attributes above concept circles. This is done in such a way that an attribute can be reached from an object via an ascending path if and only if the object has the attribute. In the diagram shown, e.g., the object reservoir has the attributes stagnant and constant, but not the attributes temporary, running, natural, maritime. Accordingly, puddle has exactly the characteristics temporary, stagnant and natural. The original formal context can be reconstructed from the labelled diagram, as can the formal concepts: the extent of a concept consists of those objects from which an ascending path leads to the circle representing the concept, and the intent consists of those attributes to which there is an ascending path from that concept circle (in the diagram). In this diagram the concept immediately to the left of the label reservoir has the intent stagnant and natural and the extent puddle, maar, lake, pond, tarn, pool, lagoon, and sea.

Formal contexts and concepts
A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and B ⊆ M of attributes, one defines two derivation operators as follows:
A′ = {m ∈ M | gIm for all g ∈ A}, i.e., the set of all attributes shared by all objects from A, and dually
B′ = {g ∈ G | gIm for all m ∈ B}, i.e., the set of all objects sharing all attributes from B.
Applying either derivation operator and then the other constitutes two closure operators: A ↦ A″ = (A′)′ for A ⊆ G (extent closure), and B ↦ B″ = (B′)′ for B ⊆ M (intent closure). The derivation operators define a Galois connection between sets of objects and of attributes.
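The derivation operators are straightforward to express in code. Below is a minimal sketch in Python, using a small hard-coded toy context loosely modelled on the bodies-of-water example; the context data and the function names (prime_objects, prime_attributes, is_formal_concept) are illustrative assumptions, not part of any FCA library.

```python
# Minimal sketch of FCA derivation operators (illustrative, not a library API).
# A formal context is modeled as a set of (object, attribute) incidence pairs.

context = {
    ("puddle", "temporary"), ("puddle", "stagnant"), ("puddle", "natural"),
    ("lake", "stagnant"), ("lake", "natural"), ("lake", "constant"),
    ("reservoir", "stagnant"), ("reservoir", "constant"),
}
objects = {g for g, _ in context}
attributes = {m for _, m in context}

def prime_objects(A):
    """A' = set of attributes shared by all objects in A."""
    return {m for m in attributes if all((g, m) in context for g in A)}

def prime_attributes(B):
    """B' = set of objects having all attributes in B."""
    return {g for g in objects if all((g, m) in context for m in B)}

def is_formal_concept(A, B):
    """(A, B) is a formal concept iff A' = B and B' = A."""
    return prime_objects(A) == B and prime_attributes(B) == A

A = {"puddle"}
print(prime_objects(A))                    # {'temporary', 'stagnant', 'natural'}
print(prime_attributes(prime_objects(A)))  # extent closure A'' = {'puddle'}
print(is_formal_concept({"lake"}, {"stagnant", "natural", "constant"}))  # True
```

Applying one operator and then the other, as in the last lines, realizes the closure operators A ↦ A″ and B ↦ B″ described above.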
Because of this Galois connection, a concept lattice is sometimes called in French a treillis de Galois (Galois lattice). With these derivation operators, Wille gave an elegant definition of a formal concept: a pair (A, B) is a formal concept of a context provided that A ⊆ G, B ⊆ M, A′ = B, and B′ = A. Equivalently and more intuitively, (A, B) is a formal concept precisely when:
every object in A has every attribute in B,
for every object in G that is not in A, there is some attribute in B that the object does not have,
for every attribute in M that is not in B, there is some object in A that does not have that attribute.
For computing purposes, a formal context may be naturally represented as a (0,1)-matrix K in which the rows correspond to the objects, the columns correspond to the attributes, and each entry ki,j equals 1 if "object i has attribute j". In this matrix representation, each formal concept corresponds to a maximal submatrix (not necessarily contiguous) all of whose elements equal 1. It is, however, misleading to consider a formal context as boolean, because the negated incidence ("object g does not have attribute m") is not concept-forming in the same way as defined above. For this reason, the values 1 and 0 or TRUE and FALSE are usually avoided when representing formal contexts, and a symbol like × is used to express incidence.

Concept lattice of a formal context
The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ⊆ A2. Equivalently, (A1, B1) ≤ (A2, B2) whenever B1 ⊇ B2. In this order, every set of formal concepts has a greatest common subconcept, or meet; its extent consists of those objects that are common to all extents of the set. Dually, every set of formal concepts has a least common superconcept, the intent of which comprises all attributes that all objects of that set of concepts have. These meet and join operations satisfy the axioms defining a lattice, in fact a complete lattice. Conversely, it can be shown that every complete lattice is the concept lattice of some formal context (up to isomorphism).

Attribute values and negation
Real-world data is often given in the form of an object-attribute table where the attributes have "values". Formal concept analysis handles such data by transforming it into the basic type of a ("one-valued") formal context; the method is called conceptual scaling. The negation of an attribute m is an attribute ¬m, the extent of which is just the complement of the extent of m, i.e., (¬m)′ = G \ m′. It is in general not assumed that negated attributes are available for concept formation, but pairs of attributes which are negations of each other often occur naturally, for example in contexts derived from conceptual scaling. For possible negations of formal concepts, see the section on concept algebras below.

Implications
An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G, M, I) is a formal context and A, B are subsets of the set M of attributes (i.e., A, B ⊆ M), the implication A → B is valid if A′ ⊆ B′. For each finite formal context, the set of all valid implications has a canonical basis, an irredundant set of implications from which all valid implications can be derived by natural inference (the Armstrong rules).
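Checking the validity of a single implication reduces to one subset test with the derivation operators. The following hedged sketch reuses the toy context and the illustrative prime_attributes helper from the example above; computing the canonical basis itself requires more machinery (e.g., the next-closure algorithm) and is not shown.

```python
# Continues the toy-context sketch above (context, prime_attributes as defined there).

def implication_valid(A, B):
    """A -> B holds in the context iff every object having all attributes
    of A also has all attributes of B, i.e. A' is a subset of B'."""
    return prime_attributes(A) <= prime_attributes(B)

print(implication_valid({"temporary"}, {"natural"}))  # True: the only temporary object (puddle) is natural
print(implication_valid({"stagnant"}, {"constant"}))  # False: puddle is stagnant but not constant
```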
This is used in attribute exploration, a knowledge acquisition method based on implications.

Arrow relations
Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: for g ∈ G and m ∈ M with (g, m) ∉ I, let g ↙ m if and only if, whenever g′ ⊆ h′ and g′ ≠ h′ for an object h, it follows that (h, m) ∈ I; and dually, let g ↗ m if and only if, whenever m′ ⊆ n′ and m′ ≠ n′ for an attribute n, it follows that (g, n) ∈ I. Since only non-incident object-attribute pairs can be related, these relations can conveniently be recorded in the table representing a formal context. Many lattice properties can be read off from the arrow relations, including distributivity and several of its generalizations. They also reveal structural information and can be used for determining, e.g., the congruence relations of the lattice.

Extensions of the theory
Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence then expresses that an object has an attribute under a certain condition. Although triadic concepts can be defined in analogy to the formal concepts above, the theory of the trilattices formed by them is much less developed than that of concept lattices, and seems to be difficult. Voutsadakis has studied the n-ary case.

Fuzzy concept analysis: extensive work has been done on a fuzzy version of formal concept analysis.

Concept algebras: modelling negation of formal concepts is somewhat problematic because the complement of a formal concept (A, B) is in general not a concept. However, since the concept lattice is complete, one can consider the join (A, B)Δ of all concepts (C, D) that satisfy C ⊆ G \ A, or dually the meet (A, B)𝛁 of all concepts satisfying D ⊆ M \ B. These two operations are known as weak negation and weak opposition, respectively. They can be expressed in terms of the derivation operators: weak negation can be written as ((G \ A)″, (G \ A)′), and weak opposition as ((M \ B)′, (M \ B)″). The concept lattice equipped with the two additional operations Δ and 𝛁 is known as the concept algebra of a context. Concept algebras generalize power sets. Weak negation on a concept lattice L is a weak complementation, i.e., an order-reversing map satisfying appropriate axioms; weak opposition is a dual weak complementation. A (bounded) lattice such as a concept algebra, which is equipped with a weak complementation and a dual weak complementation, is called a weakly dicomplemented lattice. Weakly dicomplemented lattices generalize distributive orthocomplemented lattices, i.e., Boolean algebras.

Temporal concept analysis
Temporal concept analysis (TCA) is an extension of formal concept analysis aiming at a conceptual description of temporal phenomena. It provides animations in concept lattices obtained from data about changing objects, and offers a general way of understanding change of concrete or abstract objects in continuous, discrete or hybrid space and time. TCA applies conceptual scaling to temporal databases. In the simplest case, TCA considers objects that change in time like a particle in physics, which at each time is at exactly one place. That happens in temporal data where the attributes 'temporal object' and 'time' together form a key of the database. Then the state (of a temporal object at a time, in a view) is formalized as a certain object concept of the formal context describing the chosen view. In this simple case, a typical visualization of a temporal system is a line diagram of the concept lattice of the view, into which the trajectories of temporal objects are embedded.
TCA generalizes the above-mentioned case by considering temporal databases with an arbitrary key. This leads to the notion of distributed objects, which at any given time may be at many places, as for example a high-pressure zone on a weather map. The notions of 'temporal objects', 'time' and 'place' are represented as formal concepts in scales. A state is formalized as a set of object concepts. This leads to a conceptual interpretation of the ideas of particles and waves in physics.

Algorithms and tools
There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where some pseudo-code can also be found. Since the number of formal concepts may be exponential in the size of the formal context, the complexity of the algorithms is usually given with respect to the output size. Concept lattices with a few million elements can be handled without problems.

Many FCA software applications are available today. Their main purpose varies from formal context creation to formal concept mining and generating the concept lattice of a given formal context together with the corresponding implications and association rules. Most of these tools are academic open-source applications, such as:
ConExp
ToscanaJ
Lattice Miner
Coron
FcaBedrock
GALACTIC

Related analytical techniques
Bicliques
A formal context can naturally be interpreted as a bipartite graph. The formal concepts then correspond to the maximal bicliques in that graph, so the mathematical and algorithmic results of formal concept analysis may be used for the theory of maximal bicliques. The notion of bipartite dimension (of the complemented bipartite graph) translates to that of Ferrers dimension (of the formal context) and of order dimension (of the concept lattice), and has applications, e.g., for Boolean matrix factorization.

Biclustering and multidimensional clustering
Given an object-attribute numerical data table, the goal of biclustering is to group together some objects having similar values of some attributes. For example, in gene expression data, it is known that genes (objects) may share a common behavior only for a subset of biological situations (attributes): one should accordingly produce local patterns to characterize biological processes, and these patterns should possibly overlap, since a gene may be involved in several processes. The same remark applies to recommender systems, where one is interested in local patterns characterizing groups of users that strongly share almost the same tastes for a subset of items. A bicluster in a binary object-attribute data table is a pair (A, B) consisting of an inclusion-maximal set of objects A and an inclusion-maximal set of attributes B such that almost all objects from A have almost all attributes from B and vice versa. Of course, formal concepts can be considered as "rigid" biclusters in which all objects have all attributes and vice versa; hence it is not surprising that some bicluster definitions coming from practice are just definitions of a formal concept. Relaxed FCA-based versions of biclustering and triclustering include OA-biclustering and OAC-triclustering (here O stands for object, A for attribute, C for condition); to generate patterns, these methods apply the prime operators only once, to a single entity (e.g., an object) or to a pair of entities (e.g., attribute-condition), respectively.
A bicluster of similar values in a numerical object-attribute data table is usually defined as a pair consisting of an inclusion-maximal set of objects and an inclusion-maximal set of attributes having similar values for those objects. Such a pair can be represented as an inclusion-maximal rectangle in the numerical table, modulo permutations of rows and columns. It has been shown that biclusters of similar values correspond to triconcepts of a triadic context in which the third dimension is given by a scale that represents numerical attribute values by binary attributes. This fact can be generalized to the n-dimensional case, in which n-dimensional clusters of similar values in n-dimensional data are represented by (n+1)-dimensional concepts. This reduction allows one to use standard definitions and algorithms from multidimensional concept analysis for computing multidimensional clusters.

Knowledge spaces
In the theory of knowledge spaces it is assumed that in any knowledge space the family of knowledge states is union-closed. The complements of knowledge states therefore form a closure system and may be represented as the extents of some formal context.

Hands-on experience with formal concept analysis
Formal concept analysis can be used as a qualitative method for data analysis. Since the early beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), in fields including medicine and cell biology, genetics, ecology, software engineering, ontology, information and library sciences, office administration, law, linguistics, and political science. Many more examples are described, e.g., in Formal Concept Analysis. Foundations and Applications, and in papers at regular conferences such as the International Conference on Formal Concept Analysis (ICFCA), Concept Lattices and their Applications (CLA), and the International Conference on Conceptual Structures (ICCS).

See also
Association rule learning
Cluster analysis
Commonsense reasoning
Conceptual analysis
Conceptual clustering
Conceptual space
Concept learning
Correspondence analysis
Description logic
Factor analysis
Formal semantics (natural language)
General Concept Lattice
Graphical model
Grounded theory
Inductive logic programming
Pattern theory
Statistical relational learning
Schema (genetic algorithms)

Notes

References

External links
A Formal Concept Analysis Homepage
Demo
Formal Concept Analysis. ICFCA International Conference Proceedings: 2007 (5th), 2008 (6th), 2009 (7th), 2010 (8th), 2011 (9th), 2012 (10th), 2013 (11th), 2014 (12th), 2015 (13th), 2017 (14th), 2019 (15th), 2021 (16th)

Machine learning Lattice theory Data mining Formal semantics (natural language) Ontology (information science) Semantic relations
Formal concept analysis
[ "Mathematics", "Engineering" ]
4,130
[ "Lattice theory", "Machine learning", "Fields of abstract algebra", "Artificial intelligence engineering", "Order theory" ]
313,925
https://en.wikipedia.org/wiki/Ichthyostega
Ichthyostega (from Greek ikhthýs, 'fish', and stégē, 'roof') is an extinct genus of limbed tetrapodomorphs from the Late Devonian of what is now Greenland. It was among the earliest four-limbed vertebrates in the fossil record and was one of the first with weight-bearing adaptations for terrestrial locomotion. Ichthyostega possessed lungs and limbs that helped it navigate through shallow water in swamps. Although Ichthyostega is often labelled a 'tetrapod' because of its limbs and fingers, it evolved long before true crown group tetrapods and could more accurately be referred to as a stegocephalian or stem tetrapod. Likewise, while undoubtedly of amphibian build and habit, it is not a true member of the group in the narrow sense, as the first modern amphibians (members of the group Lissamphibia) appeared in the Triassic Period. Until finds of other early stegocephalians and closely related fishes in the late 20th century, Ichthyostega stood alone as a transitional fossil between fish and tetrapods, combining fish and tetrapod features. Newer research has shown that it had an unusual anatomy, functioning more akin to a seal than a salamander, as previously assumed. History In 1932 Gunnar Säve-Söderbergh described four Ichthyostega species from the Late Devonian of East Greenland and one species belonging to the genus Ichthyostegopsis, I. wimani. These species could be synonymous (in which case only I. stensioei would remain), because their morphological differences are not very pronounced. The species differ in skull proportions, skull punctuation and skull bone patterns. The comparisons were done on 14 specimens collected in 1931 by the Danish East Greenland Expedition. Additional specimens were collected between 1933 and 1955. Description Ichthyostega was a fairly large animal for its time, as it was broadly built and about 1.5 m (4.9 ft) long. The skull was low, with dorsally placed eyes and large labyrinthodont teeth. The posterior margin of the skull formed an operculum covering the gills. The spiracle was situated in an otic notch behind each eye. Computed tomography has revealed that Ichthyostega had a specialized ear, including a stapes with a unique morphology compared to other tetrapods or to any fish hyomandibula. Postcranial skeleton The legs were large compared to those of contemporary relatives. It had seven digits on each hind leg, along with a peculiar, poorly ossified mass which lies anteriorly adjacent to the digits. The exact number of digits on the forelimb is not yet known, since fossils of the hand have not been found. While in water, the foot would have functioned more like a fleshy paddle than a fin. The vertebral column and ribcage of Ichthyostega were unusual and highly specialized relative to both its contemporaries and later tetrapods. The thoracic vertebrae at the front of the trunk and the short neck have tall neural spines that lean backwards. They attach to pointed ribs which increase in size and acquire prominent overlapping flanges. Past the sixth or seventh flanged rib, the ribs abruptly decrease in size and lose their flanges. The lumbar vertebrae at the back of the trunk have strong muscle scars and neural spines which are bent forwards and decrease in size towards the hips. The sacral vertebrae above the hips have fan-shaped neural spines that transition from forward-leaning to backward-leaning as they approach the tail. The vertebrae right behind the hips have unusually large ribs similar to those of the thoracic region.
The caudal vertebrae have slender spines that lean backwards. The tail of Ichthyostega retained a low fin supported by bony lepidotrichia (fin rays). The tail fin was not as deep as in Acanthostega, and would have been less useful for swimming. Ichthyostega is related to Acanthostega gunnari, which is also from what is now East Greenland. Ichthyostega's skull seems more fish-like than that of Acanthostega, but its pelvic girdle morphology seems stronger and better adapted to life on land. Ichthyostega also had more supportive ribs and stronger vertebrae with more developed zygapophyses. Whether or not these traits were independently evolved in Ichthyostega is debated. They do, however, show that Ichthyostega may have ventured onto land on occasion, unlike contemporaneous limbed vertebrates such as Elginerpeton and Obruchevichthys. Classification Traditionally, Ichthyostega was considered part of an order named for it, the "Ichthyostegalia". However, this group represents a paraphyletic grade of primitive stem-tetrapods and is not used by many modern researchers. Phylogenetic analysis has placed Ichthyostega in an intermediate position among other primitive stegocephalian stem-tetrapods. The evolutionary tree of early stegocephalians below follows the results of one such analysis performed by Swartz in 2012. Paleobiology Early limbed vertebrates like Ichthyostega and Acanthostega differed from earlier tetrapodomorphs such as Eusthenopteron or Panderichthys in their increased adaptations for life on land. Though tetrapodomorphs possessed lungs, they used gills as their primary means of discharging carbon dioxide. Tetrapodomorphs used their bodies and tails for locomotion and their fins for steering and braking; Ichthyostega may have used its forelimbs for locomotion on land and its tail for swimming. Its massive ribcage was made up of overlapping ribs, and the animal possessed a stronger skeletal structure, a largely fishlike spine, and forelimbs apparently powerful enough to pull the body from the water. These anatomical modifications may have been a result of selection to overcome the lack of buoyancy experienced on land. The hindlimbs were smaller than the forelimbs and unlikely to have borne full weight in an adult, while the broad, overlapping ribs would have inhibited side-to-side movements. The forelimbs had the required range of movement to push the body up and forward, probably allowing the animal to drag itself across flat land by synchronous (rather than alternate) "crutching" movements, much like those of a mudskipper or a seal. It was incapable of typical quadrupedal gaits as the forelimbs lacked the necessary rotary motion range. See also Evolutionary history of life Hynerpeton List of transitional fossils List of prehistoric amphibians Ymeria References Further reading External links Tree of Life Site on early tetrapods Getting a Leg Up on Land Scientific American Nov. 21, 2005, article by Jennifer A. Clack. BBC News: Ancient walking mystery deepens 3D computer model, forelimb maximal joint range, and hindlimb maximal joint range of Ichthyostega on YouTube, videos by Stephanie E. Pierce, Jennifer A. Clack, & John R. Hutchinson Stegocephalians Prehistoric lobe-finned fish genera Late Devonian sarcopterygians Devonian sarcopterygians of North America Transitional fossils Fossil taxa described in 1932 Fossils of Greenland Late Devonian genus first appearances Late Devonian genus extinctions Evolution of tetrapods
Ichthyostega
[ "Biology" ]
1,551
[ "Phylogenetics", "Evolution of tetrapods" ]
313,953
https://en.wikipedia.org/wiki/Lenticel
A lenticel is a porous tissue consisting of cells with large intercellular spaces in the periderm of the secondarily thickened organs and the bark of woody stems and roots of gymnosperms and dicotyledonous flowering plants. It functions as a pore, providing a pathway for the direct exchange of gases between the internal tissues and the atmosphere through the bark, which is otherwise impermeable to gases. The name lenticel derives from its lenticular (lens-like) shape. The shape of lenticels is one of the characteristics used for tree identification. Evolution Before there was much evidence for the existence and functionality of lenticels, the fossil record showed the first primary mechanism of aeration in early vascular plants to be the stomata. However, in woody plants, while the respiratory function of stomata is retained in the living epidermis of leaves and green stems, that function is lost where the epidermis of trunks and branches is displaced by vascular and cork cambial activity and by secondary growth. In such parts the entire epidermis may be shed as it is replaced by a suberized periderm or bark, in which the respiratory functions of the stomata may be replaced by lenticels, at least until the bark becomes too thick. The extinct arboreal plants of the genera Lepidodendron and Sigillaria were the first to show distinct aeration structures of this kind. "Parichnoi" (singular: parichnos) are canal-like structures that, in association with foliar traces of the stem, connected the stem's outer and middle cortex to the mesophyll of the leaf. Parichnoi are thought to have eventually given rise to lenticels, as they helped solve the issue of long-range oxygen transport in these woody plants during the Carboniferous period. They also acquired secondary connections as they evolved to become transversely elongated, efficiently aerating the maximum number of vertical rays as well as the central core tissue of the stem. The evolutionary significance of parichnoi was their functionality in the absence of cauline stomata, although they could also be damaged or destroyed by pressure, much as stomatal tissue can. Evidently, in both conifers and Lepidodendroids, the parichnoi, as the primary lenticular structure, appear as paired structures on either side of leaf scars. The development and increase in the number of these primitive lenticels were key to providing a system that was open for aeration and gas exchange in these plants. Structure and development In plant bodies that produce secondary growth, lenticels promote gas exchange of oxygen, carbon dioxide, and water vapor. Lenticel formation usually begins beneath stomatal complexes during primary growth, preceding the development of the first periderm. The formation of lenticels seems to be directly related to the growth and strength of the shoot and to the hydrose (internal moisture) of the tissue. As stems and roots mature, lenticel development continues in the new periderm (for example, periderm that forms at the bottom of cracks in the bark). Lenticels are found as raised circular, oval, or elongated areas on stems and roots. In woody plants, lenticels commonly appear as rough, cork-like structures on young branches. Underneath them, porous tissue creates a number of large intercellular spaces between cells. This tissue fills the lenticel and arises from cell division in the phellogen or substomatal ground tissue.
Discoloration of lenticels may also occur, as in mangoes, and may be due to the amount of lignin in cell walls. In oxygen-deprived conditions, where respiration is a daily challenge, different species may possess specialized structures on which lenticels are found. For example, in a common mangrove species, lenticels appear on pneumatophores (specialized roots), where the parenchyma cells that connect to the aerenchyma structure increase in size and go through cell division. In contrast, lenticels in grapes are located on the pedicels, and their function varies with temperature. If they are blocked, hypoxia and subsequent ethanol accumulation may result and lead to cell death. Fruits Lenticels are also present on many fruits, quite noticeably on many apples and pears. On European pears, they can serve as an indicator of when to pick the fruit, as light lenticels on the immature fruit darken and become brown and shallow from the formation of cork cells. Certain bacterial and fungal infections can penetrate fruits through their lenticels, with susceptibility sometimes increasing with the fruit's age. While the term lenticel is usually associated with the breakage of periderm tissue that is associated with gas exchange, it also refers to the lightly colored spots found on apples (a type of pome fruit). "Lenticel" seems to be the most appropriate term to describe both structures in light of their similar function in gas exchange. Pome lenticels can be derived from no-longer-functioning stomata, epidermal breaks from the removal of trichomes, and other epidermal breaks that usually occur in the early development of young pome fruits. The closing of pome lenticels can arise when the cuticle over the stomatal opening or the substomatal layer seals. Closing can also begin if the substomatal cells become suberized, like cork. The number of lenticels usually varies between species of apples; the range may be from 450 to 800 in Winesap apples or from 1500 to 2500 in Spitzenburg apples. This wide range may be due to the water availability during the early stages of development of each apple type. "Lenticel breakdown" is a worldwide skin disorder of apples in which lenticels develop dark pits 1–8 mm in diameter shortly after processing and packing. It is most common on the 'Gala' (Malus × domestica) variety, particularly the 'Royal Gala', and also occurs in 'Fuji', 'Granny Smith', 'Golden Delicious', and 'Delicious' varieties. It is more common in arid regions, and is thought to be related to relative humidity and temperature. The effect can be mitigated by spraying the fruit with lipophilic coatings prior to harvest. Tubers Lenticels are also present on potato tubers. Gallery See also Complementary cells Notes References Plant morphology Trees
Lenticel
[ "Biology" ]
1,351
[ "Plant morphology", "Plants" ]
314,028
https://en.wikipedia.org/wiki/Body%20orifice
A body orifice is any opening in the body of an animal. External In a typical mammalian body such as the human body, the external body orifices are: The nostrils, for breathing and the associated sense of smell The mouth, for eating, drinking, breathing, and vocalizations such as speech The ear canals, for the sense of hearing The nasolacrimal ducts, to carry tears from the lacrimal sac into the nasal cavity The anus, for defecation The urinary meatus, for urination in males and females and ejaculation in males In females, the vagina, for menstruation, copulation and birth The nipple orifices Other animals may have some other body orifices: cloaca, in birds, reptiles, amphibians, and a few mammals, such as monotremes. siphon in mollusks, arthropods, and some other animals Internal Internal orifices include the orifices of the outflow tracts of the heart, between the heart valves. See also Internal urethral orifice Mucosa Mucocutaneous boundary Meatus Body cavity References Anatomy
Body orifice
[ "Biology" ]
241
[ "Anatomy" ]
314,123
https://en.wikipedia.org/wiki/The%20Thackery%20T.%20Lambshead%20Pocket%20Guide%20to%20Eccentric%20%26%20Discredited%20Diseases
The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases (2003) is an anthology of fantasy medical conditions edited by Jeff VanderMeer and Mark Roberts, and published by Night Shade Books. A second edition was published by Spectra in 2005. Contents The Guide claims to be the 83rd edition of a medical reference text first compiled by the fictional Dr. Thackery T. Lambshead in 1915. It contains generally humorous entries (in varying degrees of darkness) describing fantastical diseases, which together detail the "secret medical history" of the 20th century. Each entry presents a history of the disease, its characteristic symptoms, and any cure that might exist, all written in a faux encyclopedic style. Many diseases are accompanied by illustrations. The book also includes essays about the adventures of the titular Thackery T. Lambshead, "a sort of medical Indiana Jones." Appendices to the book include a history of its many editions, and biographies of the many contributing authors, named as "doctors," in which their writing careers are recast as medical careers. Reception In 2004, the book was shortlisted for a Hugo Award for Best Related Book and a World Fantasy Award for Best Anthology. A sequel anthology, The Thackery T. Lambshead Cabinet of Curiosities, co-edited by Jeff and Ann VanderMeer, was released in 2011. The encyclopedia makes an appearance in the novel Monstrocity by Jeffrey Thomas. It is also referenced in VanderMeer's own collection City of Saints and Madmen. Contributors The anthology includes entries by almost 70 different authors. These contributors include: Alan M. Clark Alan Moore Andrew J. Wilson Brendan Connell Brian Evenson Brian Stableford China Miéville Cory Doctorow David Langford Dawn Andrews Elliot Fintushel G. Eric Schaller Gahan Wilson Gary Couzens Harvey Jacobs Iain Rowan Jack Slay, Jr. Jay Caselburg Jeff Topham Jeffrey Ford Jeffrey Thomas John Coulthart Kage Baker K.J. Bishop L. Timmel Duchamp Lance Olsen Liz Williams Martin Newell Michael Barry Michael Bishop Michael Cisco Michael Cobley Michael Moorcock Mike O'Driscoll Nathan Ballingrud Neil Gaiman Neil Williamson Paul Di Filippo R.M. Berry Rachel Pollack Rhys Hughes Richard Calder Rikki Ducornet Robert Freeman Wexler Sara Gwenllian Jones Shelley Jackson Stepan Chapman Steve Rasnic Tem Steve Redwood Tamar Yellin Tim Lebbon References External links Four Additional (Unofficial) Entries to the Guide - A tribute by author Jordan Inman 2003 books Fantasy anthologies Fictional diseases and disorders Fictional encyclopedias Eccentricity (behavior)
The Thackery T. Lambshead Pocket Guide to Eccentric & Discredited Diseases
[ "Biology" ]
547
[ "Behavior", "Human behavior", "Eccentricity (behavior)" ]
314,124
https://en.wikipedia.org/wiki/Jody%20Williams
Jody Williams (born October 9, 1950) is an American political activist known for her work in banning anti-personnel landmines, her defense of human rights (especially those of women), and her efforts to promote new understandings of security in today's world. She was awarded the Nobel Peace Prize in 1997 for her work toward the banning and clearing of anti-personnel mines. Education Williams earned a Master in International Relations from the Paul H. Nitze School of Advanced International Studies (a division of Johns Hopkins University) in Washington, D.C. (1984), an MA in teaching Spanish and English as a second language from the School for International Training (SIT) in Brattleboro, Vermont (1976), and a BA from the University of Vermont (1972). Advocacy Williams served as the founding coordinator of the International Campaign to Ban Landmines (ICBL) from early 1992 until February 1998. Before that work, she spent eleven years on various projects related to the wars in Nicaragua and El Salvador, where, according to the Encyclopedia of Human Rights, she "spent the 1980s performing life-threatening human rights work." In a cooperative effort with governments, UN bodies and the International Committee of the Red Cross, Williams served as a chief strategist and spokesperson for the ICBL, which she developed from two non-governmental organizations (NGOs) with a staff of one – herself – to an international powerhouse of 1,300 NGOs in ninety countries. From its small beginning and official launch in 1992, Williams and the ICBL achieved the campaign's goal of an international treaty banning antipersonnel landmines during a diplomatic conference held in Oslo in September 1997. The Ottawa Treaty that banned landmines is credited to her and the ICBL. Three weeks later, she and the ICBL were awarded the Nobel Peace Prize. At that time, she became the tenth woman – and third American woman – in its almost hundred-year history to receive the Prize. In November 2004, after discussions with Iranian Peace Laureate Shirin Ebadi and Professor Wangari Maathai of Kenya, Williams established the Nobel Women's Initiative, which was launched in January 2006; Williams has since served as its chair. The initiative brought together six of the female Peace Laureates, who seek to use their influence to promote the work of women working for peace with justice and equality. (Aung San Suu Kyi is an honorary member.) In 2020, Williams called upon Chevron Corporation to pay the cleanup costs awarded in 2011 to the residents of the Lago Agrio oil field, which have been in litigation ever since. Williams is quoted as saying, "The image of peace with a dove flying over a rainbow and people holding hands singing kumbaya ends up infantilizing people who believe that sustainable peace is possible. If you think that singing and looking at a rainbow will suddenly make peace appear then you're not capable of meaningful thought, or understanding the difficulties of the world." In 2019, Williams called for a treaty to end violence against women, in support of the Every Woman Coalition. Academic career Since 2007, Williams has been the Sam and Cele Keeper Professor in Peace and Social Justice in the Graduate College of Social Work at the University of Houston. Before that she had been a distinguished visiting professor of global justice at the college since 2003. Recognition Williams continues to be recognized for her contributions to human rights and global security. She is the recipient of fifteen honorary degrees.
In 2004, in its first such listing, she was named by Forbes magazine as one of the 100 most powerful women in the world. She is one of the female Nobel Prize laureates to be recognized as a "Woman of the Year" by Glamour magazine, which has also honored Hillary Clinton, Katie Couric and Barbara Walters. Publications Williams' work includes articles for magazines and newspapers, for example the Wall Street Journal, International Herald Tribune, The Independent (UK), The Irish Times, The Toronto Globe and Mail, Los Angeles Times, La Jornada (Mexico), The Review of the International Red Cross, and Columbia University's Journal of Politics and Society. She contributed chapters to numerous books: The Personal Philosophies of Remarkable Men and Women, edited by Jay Allison and Dan Gediman; A Memory, A Monologue, A Rant, and A Prayer; Lessons from our Fathers, by Keith McDermott; Girls Like Us: 40 Extraordinary Women Celebrate Girlhood in Story, Poetry and Song, by Gina Misiroglu; and The Way We Will be 50 Years from Today: 60 of the World's Greatest Minds Share Their Visions of the Next Half-Century. Williams co-authored a seminal book on the landmine crisis in 1995, After the Guns Fall Silent: The Enduring Legacy of Landmines. Her book Banning Landmines: Disarmament, Citizen Diplomacy and Human Security, released at the end of March 2008, analyzes the Mine Ban Treaty and its impact on other human security-related work. In March 2013, her memoir, My Name Is Jody Williams: A Vermont Girl's Winding Path to the Nobel Peace Prize, was released. See also List of female Nobel laureates List of peace activists PeaceJam References External links "An Individual's Impact on Social and Political Change" One on One – Jody Williams, interview by Riz Khan on Al Jazeera English, March 2011 (video, 25 mins). 1950 births Living people Mine action Nobel Peace Prize laureates American Nobel laureates American anti-war activists American human rights activists American women human rights activists University of Vermont alumni SIT Graduate Institute alumni Paul H. Nitze School of Advanced International Studies alumni University of Houston faculty Women Nobel laureates People from Brattleboro, Vermont American nonviolence advocates
Jody Williams
[ "Technology" ]
1,173
[ "Women Nobel laureates", "Women in science and technology" ]
314,139
https://en.wikipedia.org/wiki/Triptych
A triptych is a work of art (usually a panel painting) that is divided into three sections, or three carved panels that are hinged together and can be folded shut or displayed open. It is therefore a type of polyptych, the term for all multi-panel works. The middle panel is typically the largest and it is flanked by two smaller related works, although there are triptychs of equal-sized panels. The form can also be used for pendant jewelry. Beyond its association with art, the term is sometimes used more generally to connote anything with three parts, particularly if integrated into a single unit. Etymology The word triptych was formed in English by compounding the prefix tri- with the word diptych. Diptych is borrowed from the Latin diptycha, which itself is derived from the Late Greek díptycha, the neuter plural of díptychos ('folded in two'). In art The triptych form appears in early Christian art, and was a popular standard format for altar paintings from the Middle Ages onwards. Its geographical range was from the eastern Byzantine churches to the Celtic churches in the west. During the Byzantine period, triptychs were often used for private devotional use, along with other relics such as icons. Renaissance painters such as Hans Memling and Hieronymus Bosch used the form. Sculptors also used it. Triptych forms also allow ease of transport. From the Gothic period onward, both in Europe and elsewhere, altarpieces in churches and cathedrals were often in triptych form. One such cathedral with an altarpiece triptych is Llandaff Cathedral. The Cathedral of Our Lady in Antwerp, Belgium, contains two examples by Rubens, and Notre Dame de Paris is another example of the use of the triptych in architecture. The form is echoed by the structure of many ecclesiastical stained glass windows. The triptych form's transportability was exploited during World War Two, when a private citizens' committee in the United States commissioned painters and sculptors to create portable three-panel hinged altarpieces for use by Christian and Jewish U.S. troops for religious services. By the end of the war, 70 artists had created 460 triptychs. Among the most prolific were Violet Oakley, Nina Barr Wheeler, and Hildreth Meiere. The triptych format has been used in non-Christian faiths, including Judaism, Islam, and Buddhism. For example, the triptych Hilje-j-Sherif displayed at the National Museum of Oriental Art, Rome, Italy, and a page of the Qur'an at the Museum of Turkish and Islamic Arts in Istanbul, Turkey, exemplify Ottoman religious art adapting the motif. Likewise, Tibetan Buddhists have used it in traditional altars. Although strongly identified as a religious altarpiece form, triptychs outside that context have been created, some of the best-known examples being works by Max Beckmann and Francis Bacon. When Bacon's 1969 triptych, Three Studies of Lucian Freud, was sold in 2013 for $142.4 million, it was the highest price ever paid for an artwork at auction up to that time. That record was broken in May 2015, when Pablo Picasso's 1955 painting Les Femmes d'Alger sold for $179.4 million. In photography A photographic triptych is a common style used in modern commercial artwork. The photographs are usually arranged with a plain border between them. The work may consist of separate images that are variants on a theme, or may be one larger image split into three. Examples Stefaneschi Triptych by Giotto, c. 1330 Annunciation with St. Margaret and St. Ansanus by Simone Martini, 1333 The Mérode Altarpiece by Robert Campin, late 1420s The Garden of Earthly Delights, Triptych of the Temptation of St. Anthony and The Haywain Triptych by Hieronymus Bosch The Portinari Altarpiece by Hugo van der Goes, c. 1475 The Buhl Altarpiece, c. 1495 The Raising of the Cross by Peter Paul Rubens, 1610 or 1611 The Aino Myth triptych by Akseli Gallen-Kallela, 1891 The Pioneer by Frederick McCubbin, 1904 Departure by Max Beckmann, 1932–33 Three Studies for Figures at the Base of a Crucifixion by Francis Bacon, 1944 Gallery See also References External links The Institution of the Eucharist at the Last Supper with St. Peter and St. Paul, Metropolitan Museum of Art On the triptych as a writing instrument Example of triptych features and restoration Articles containing video clips Altarpieces Artistic techniques Church architecture Iconography Optical illusions Picture framing Romanesque art Rotational symmetry Sculpture Symmetry Synagogue architecture Triptychs Visual motifs Binocular rivalry
Triptych
[ "Physics", "Mathematics" ]
984
[ "Physical phenomena", "Optical illusions", "Visual motifs", "Symbols", "Optical phenomena", "Geometry", "Symmetry", "Rotational symmetry" ]
314,151
https://en.wikipedia.org/wiki/Moissanite
Moissanite is naturally occurring silicon carbide and its various crystalline polymorphs. It has the chemical formula SiC and is a rare mineral, discovered by the French chemist Henri Moissan in 1893. Silicon carbide, or moissanite, is useful for commercial and industrial applications due to its hardness, optical properties and thermal conductivity. Background The mineral moissanite was discovered by Henri Moissan while examining rock samples from a meteor crater located in Canyon Diablo, Arizona, in 1893. At first, he mistakenly identified the crystals as diamonds, but in 1904 he identified them as silicon carbide. Artificial silicon carbide had been synthesized in the lab by Edward G. Acheson in 1891, just two years before Moissan's discovery. The mineral form of silicon carbide was named in honor of Moissan later in his life. Geological occurrence In its natural form, moissanite remains very rare. Until the 1950s, no source of moissanite other than presolar grains in carbonaceous chondrite meteorites had been encountered. Then, in 1958, moissanite was found in the Green River Formation in Wyoming and, the following year, as inclusions in kimberlite, an ultramafic rock derived from the upper mantle, from a diamond mine in Yakutia in the Russian Far East. Yet the existence of moissanite in nature was questioned as late as 1986 by the American geologist Charles Milton. Discoveries show that it occurs naturally as inclusions in diamonds, xenoliths, and other ultramafic rocks such as lamproite. Meteorites Analysis of silicon carbide grains found in the Murchison meteorite has revealed anomalous isotopic ratios of carbon and silicon, indicating an origin from outside the Solar System. 99% of these silicon carbide grains originate around carbon-rich asymptotic giant branch stars. Silicon carbide is commonly found around these stars, as deduced from their infrared spectra. The discovery of silicon carbide in the Canyon Diablo meteorite and other places was long delayed because of contamination by carborundum (synthetic SiC) from man-made abrasive tools. Physical properties The crystalline structure is held together with strong covalent bonding similar to that of diamond, which allows moissanite to withstand high pressures up to 52.1 gigapascals. Colors vary widely and are graded in the D-to-K range on the diamond color grading scale. Sources All applications of silicon carbide today use synthetic material, as the natural material is very scarce. The idea that a silicon-carbon bond might in fact exist in nature was first proposed by the Swedish chemist Jöns Jacob Berzelius as early as 1824 (Berzelius 1824). In 1891, Edward Goodrich Acheson produced a viable mineral substitute for diamond as an abrasive and cutting material. This was possible because moissanite is one of the hardest substances known, with a hardness just below that of diamond and comparable with those of cubic boron nitride and boron. Pure synthetic moissanite can also be made from thermal decomposition of the preceramic polymer poly(methylsilyne), requiring no binding matrix, e.g., cobalt metal powder. Single-crystalline silicon carbide, in certain forms, has been used for the fabrication of high-performance semiconductor devices. As natural sources of silicon carbide are rare, and only certain atomic arrangements are useful for gemological applications, North Carolina–based Cree Research, Inc., founded in 1987, developed a commercial process for producing large single crystals of silicon carbide.
Cree is the world leader in the growth of single-crystal silicon carbide, mostly for electronics use. In 1995 C3 Inc., a company helmed by Charles Eric Hunter, formed Charles & Colvard to market gem-quality moissanite. Charles & Colvard was the first company to produce and sell synthetic moissanite, under U.S. patent US5723391 A, first filed by C3 Inc. in North Carolina. Applications Moissanite was introduced to the jewelry market as a diamond alternative in 1998 after Charles & Colvard (formerly known as C3 Inc.) received patents to create and market lab-grown silicon carbide gemstones, becoming the first firm to do so. By 2018 all patents on the original process worldwide had expired. Charles & Colvard currently makes and distributes moissanite jewelry and loose gems under the trademarks Forever One, Forever Brilliant, and Forever Classic. Other manufacturers market silicon carbide gemstones under trademarked names such as Amora. On the Mohs scale of mineral hardness (with diamond as the upper extreme, 10), moissanite is rated as 9.25. As a diamond alternative, moissanite has some optical properties exceeding those of diamond. It is marketed as a lower-priced alternative to diamond that does not involve the expensive mining practices used for the extraction of natural diamonds. As some of its properties are quite similar to those of diamond, moissanite may be passed off as counterfeit diamond. Testing equipment based on measuring thermal conductivity, in particular, may give results similar to those for diamond. In contrast to diamond, moissanite exhibits thermochromism: heating it gradually will cause it to change color temporarily. A more practical test is a measurement of electrical conductivity, which will show higher values for moissanite. Moissanite is birefringent (i.e., light sent through the material splits into separate beams that depend on the source polarization), which can easily be seen; diamond is not. Because of its hardness, it can be used in high-pressure experiments as a replacement for diamond (see diamond anvil cell). Since large diamonds are usually too expensive to be used as anvils, moissanite is more often used in large-volume experiments. Synthetic moissanite is also interesting for electronic and thermal applications because its thermal conductivity is similar to that of diamond. High-power silicon carbide electronic devices are expected to find use in the design of protection circuits used for motors, actuators, and energy storage or pulse power systems. It also exhibits thermoluminescence, making it useful in radiation dosimetry. See also Charles & Colvard Cubic zirconia Diamond Engagement ring Fair trade Glossary of meteoritics References External links Carbide minerals Hexagonal minerals Minerals in space group 186 Meteorite minerals Native element minerals Gemstones Green River Formation
Moissanite
[ "Physics" ]
1,373
[ "Materials", "Gemstones", "Matter" ]
314,204
https://en.wikipedia.org/wiki/Chemometrics
Chemometrics is the science of extracting information from chemical systems by data-driven means. Chemometrics is inherently interdisciplinary, using methods frequently employed in core data-analytic disciplines such as multivariate statistics, applied mathematics, and computer science to address problems in chemistry, biochemistry, medicine, biology and chemical engineering. In this way, it mirrors other interdisciplinary fields, such as psychometrics and econometrics. Background Chemometrics is applied to solve both descriptive and predictive problems in the experimental natural sciences, especially in chemistry. In descriptive applications, properties of chemical systems are modeled with the intent of learning the underlying relationships and structure of the system (i.e., model understanding and identification). In predictive applications, properties of chemical systems are modeled with the intent of predicting new properties or behavior of interest. In both cases, the datasets can be small but are often large and complex, involving hundreds to thousands of variables, and hundreds to thousands of cases or observations. Chemometric techniques are particularly heavily used in analytical chemistry and metabolomics, and the development of improved chemometric methods of analysis also continues to advance the state of the art in analytical instrumentation and methodology. It is an application-driven discipline, and thus while the standard chemometric methodologies are very widely used industrially, academic groups are dedicated to the continued development of chemometric theory, methods and applications. Origins Although one could argue that even the earliest analytical experiments in chemistry involved a form of chemometrics, the field is generally recognized to have emerged in the 1970s as computers became increasingly exploited for scientific investigation. The term 'chemometrics' was coined by Svante Wold in a 1971 grant application, and the International Chemometrics Society was formed shortly thereafter by Svante Wold and Bruce Kowalski, two pioneers in the field. Wold was a professor of organic chemistry at Umeå University, Sweden, and Kowalski was a professor of analytical chemistry at the University of Washington, Seattle. Many early applications involved multivariate classification, numerous quantitative predictive applications followed, and by the late 1970s and early 1980s a wide variety of data- and computer-driven chemical analyses were occurring. Multivariate analysis was a critical facet even in the earliest applications of chemometrics. Data from infrared and UV/visible spectroscopy often comprise thousands of measurements per sample. Mass spectrometry, nuclear magnetic resonance, atomic emission/absorption and chromatography experiments are also all by nature highly multivariate. The structure of these data was found to be conducive to using techniques such as principal components analysis (PCA), partial least-squares (PLS), orthogonal partial least-squares (OPLS), and two-way orthogonal partial least squares (O2PLS). This is primarily because, while the datasets may be highly multivariate, there is strong and often linear low-rank structure present.
PCA and PLS have over time been shown to be very effective at empirically modeling the more chemically interesting low-rank structure, exploiting the interrelationships or 'latent variables' in the data, and providing alternative compact coordinate systems for further numerical analysis such as regression, clustering, and pattern recognition. Partial least squares in particular was heavily used in chemometric applications for many years before it began to find regular use in other fields. Through the 1980s three dedicated journals appeared in the field: Journal of Chemometrics, Chemometrics and Intelligent Laboratory Systems, and Journal of Chemical Information and Modeling. These journals continue to cover both fundamental and methodological research in chemometrics. At present, most routine applications of existing chemometric methods are commonly published in application-oriented journals (e.g., Applied Spectroscopy, Analytical Chemistry, Analytica Chimica Acta, Talanta). Several important books/monographs on chemometrics were also first published in the 1980s, including the first edition of Malinowski's Factor Analysis in Chemistry; Sharaf, Illman and Kowalski's Chemometrics; Massart et al.'s Chemometrics: a textbook; and Multivariate Calibration by Martens and Naes. Some large chemometric application areas have gone on to represent new domains, such as molecular modeling and QSAR, cheminformatics, the '-omics' fields of genomics, proteomics, metabonomics and metabolomics, process modeling and process analytical technology. An account of the early history of chemometrics was published as a series of interviews by Geladi and Esbensen. Techniques Multivariate calibration Many chemical problems and applications of chemometrics involve calibration. The objective is to develop models which can be used to predict properties of interest based on measured properties of the chemical system, such as pressure, flow and temperature, or infrared, Raman, NMR and mass spectra. Examples include the development of multivariate models relating 1) multi-wavelength spectral response to analyte concentration, 2) molecular descriptors to biological activity, 3) multivariate process conditions/states to final product attributes. The process requires a calibration or training data set, which includes reference values for the properties of interest for prediction, and the measured attributes believed to correspond to these properties. For case 1), for example, one can assemble data from a number of samples, including concentrations for an analyte of interest for each sample (the reference) and the corresponding infrared spectrum of that sample. Multivariate calibration techniques such as partial least-squares regression or principal component regression (among countless other methods) are then used to construct a mathematical model that relates the multivariate response (spectrum) to the concentration of the analyte of interest, and such a model can be used to efficiently predict the concentrations of new samples. Techniques in multivariate calibration are often broadly categorized as classical or inverse methods. The principal difference between these approaches is that in classical calibration the models are solved such that they are optimal in describing the measured analytical responses (e.g., spectra) and can therefore be considered optimal descriptors, whereas in inverse methods the models are solved to be optimal in predicting the properties of interest (e.g., concentrations, optimal predictors).
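As a minimal sketch of inverse calibration in practice, the following Python example fits a partial least-squares model to simulated two-component "spectra" with scikit-learn. All data, peak shapes, and parameter choices here are invented for illustration; a real calibration would use measured spectra and validated reference values.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    wavelengths = np.linspace(0, 1, 100)
    peak_analyte = np.exp(-((wavelengths - 0.3) ** 2) / 0.005)
    peak_interferent = np.exp(-((wavelengths - 0.5) ** 2) / 0.01)

    # 30 training samples: each spectrum is a noisy mixture of the two peaks.
    conc = rng.uniform(0, 1, size=30)        # analyte reference values
    interf = rng.uniform(0, 1, size=30)      # unmodeled interferent levels
    X = np.outer(conc, peak_analyte) + np.outer(interf, peak_interferent)
    X += rng.normal(scale=0.01, size=X.shape)

    pls = PLSRegression(n_components=2)      # two latent variables
    pls.fit(X, conc)

    # Predict the analyte concentration of a new, unseen mixture.
    x_new = 0.7 * peak_analyte + 0.2 * peak_interferent
    print(pls.predict(x_new.reshape(1, -1)))  # close to 0.7 despite interference

Note how the second latent variable lets the model compensate for the interferent, which is exactly the kind of mathematically provided selectivity discussed below.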
Inverse methods usually require less physical knowledge of the chemical system, and at least in theory provide superior predictions in the mean-squared-error sense, and hence inverse approaches tend to be more frequently applied in contemporary multivariate calibration. The main advantage of the use of multivariate calibration techniques is that fast, cheap, or non-destructive analytical measurements (such as optical spectroscopy) can be used to estimate sample properties which would otherwise require time-consuming, expensive or destructive testing (such as LC-MS). Equally important is that multivariate calibration allows for accurate quantitative analysis in the presence of heavy interference by other analytes. The selectivity of the analytical method is provided as much by the mathematical calibration as by the analytical measurement modality. For example, near-infrared spectra, which are extremely broad and non-selective compared to other analytical techniques (such as infrared or Raman spectra), can often be used successfully in conjunction with carefully developed multivariate calibration methods to predict concentrations of analytes in very complex matrices. Classification, pattern recognition, clustering Supervised multivariate classification techniques are closely related to multivariate calibration techniques in that a calibration or training set is used to develop a mathematical model capable of classifying future samples. The techniques employed in chemometrics are similar to those used in other fields – multivariate discriminant analysis, logistic regression, neural networks, regression/classification trees. The use of rank-reduction techniques in conjunction with these conventional classification methods is routine in chemometrics, for example discriminant analysis on principal components or partial least squares scores. A family of techniques, referred to as class-modelling or one-class classifiers, builds models for an individual class of interest. Such methods are particularly useful in the case of quality control and authenticity verification of products. Unsupervised classification (also termed cluster analysis) is also commonly used to discover patterns in complex data sets, and again many of the core techniques used in chemometrics are common to other fields such as machine learning and statistical learning. Multivariate curve resolution In chemometric parlance, multivariate curve resolution seeks to deconstruct data sets with limited or absent reference information and system knowledge. Some of the earliest work on these techniques was done by Lawton and Sylvestre in the early 1970s. These approaches are also called self-modeling mixture analysis, blind source/signal separation, and spectral unmixing. For example, from a data set comprising fluorescence spectra from a series of samples each containing multiple fluorophores, multivariate curve resolution methods can be used to extract the fluorescence spectra of the individual fluorophores, along with their relative concentrations in each of the samples, essentially unmixing the total fluorescence spectrum into the contributions from the individual components. The problem is usually ill-determined due to rotational ambiguity (many possible solutions can equivalently represent the measured data), so the application of additional constraints is common, such as non-negativity, unimodality, or known interrelationships between the individual components (e.g., kinetic or mass-balance constraints).
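One simple way to experiment with the non-negativity constraint mentioned above is to factor a mixture data matrix with non-negative matrix factorization, sketched below in Python with scikit-learn. The synthetic component spectra and concentrations are assumptions made for illustration; dedicated MCR-ALS implementations provide the fuller set of constraints (unimodality, kinetic models, and so on).

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(1)
    channels = np.linspace(0, 1, 80)
    s1 = np.exp(-((channels - 0.25) ** 2) / 0.004)   # pure spectrum 1
    s2 = np.exp(-((channels - 0.60) ** 2) / 0.008)   # pure spectrum 2

    # 20 mixture "samples", each a non-negative combination of s1 and s2.
    C_true = rng.uniform(0, 1, size=(20, 2))
    D = C_true @ np.vstack([s1, s2])
    D += np.abs(rng.normal(scale=0.005, size=D.shape))  # keep D non-negative

    # Factor D ~ C @ S with both factors constrained to be non-negative.
    model = NMF(n_components=2, init="nndsvda", max_iter=500)
    C_est = model.fit_transform(D)   # estimated concentration profiles
    S_est = model.components_        # estimated pure-component spectra

Because of the rotational and intensity ambiguities noted above, C_est and S_est recover the true profiles only up to scaling and permutation.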
Other techniques Experimental design remains a core area of study in chemometrics, and several monographs are specifically devoted to experimental design in chemical applications. Sound principles of experimental design have been widely adopted within the chemometrics community, although many complex experiments are purely observational, and there can be little control over the properties and interrelationships of the samples and sample properties. Signal processing is also a critical component of almost all chemometric applications, particularly the use of signal pretreatments to condition data prior to calibration or classification. The techniques employed commonly in chemometrics are often closely related to those used in related fields. Signal pre-processing may affect the way in which outcomes of the final data processing can be interpreted. Performance characterization and figures of merit Like most arenas in the physical sciences, chemometrics is quantitatively oriented, so considerable emphasis is placed on performance characterization, model selection, verification and validation, and figures of merit. The performance of quantitative models is usually specified by the root mean squared error in predicting the attribute of interest, and the performance of classifiers as true-positive rate/false-positive rate pairs (or a full ROC curve). A recent report by Olivieri et al. provides a comprehensive overview of figures of merit and uncertainty estimation in multivariate calibration, including multivariate definitions of selectivity, sensitivity, SNR and prediction interval estimation. Chemometric model selection usually involves the use of tools such as resampling (including bootstrap, permutation, and cross-validation). Multivariate statistical process control (MSPC), modeling and optimization account for a substantial amount of historical chemometric development. Spectroscopy has been used successfully for online monitoring of manufacturing processes for 30–40 years, and this process data is highly amenable to chemometric modeling. Specifically in terms of MSPC, multiway modeling of batch and continuous processes is increasingly common in industry and remains an active area of research in chemometrics and chemical engineering. Process analytical chemistry, as it was originally termed, or process analytical technology, the newer term, continues to draw heavily on chemometric methods and MSPC. Multiway methods are heavily used in chemometric applications. These are higher-order extensions of more widely used methods. For example, while the analysis of a table (matrix, or second-order array) of data is routine in several fields, multiway methods are applied to data sets that involve 3rd, 4th, or higher orders. Data of this type are very common in chemistry; for example, a liquid-chromatography / mass spectrometry (LC-MS) system generates a large matrix of data (elution time versus m/z) for each sample analyzed. The data across multiple samples thus comprise a data cube. Batch process modeling involves data sets that have time vs. process variables vs. batch number. The multiway mathematical methods applied to these sorts of problems include PARAFAC, trilinear decomposition, and multiway PLS and PCA.
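As a hedged illustration of the multiway data just described, the sketch below builds a synthetic third-order batch-process cube and "unfolds" it batch-wise into an ordinary matrix so that two-way PCA can be applied; the dimensions and data are invented. Genuine multiway methods such as PARAFAC model the trilinear structure directly rather than unfolding it away, so this is only the simplest entry point.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    # Hypothetical cube: 10 batches x 50 time points x 8 process variables.
    cube = rng.normal(size=(10, 50, 8))

    # Batch-wise unfolding: one long row of time-variable measurements
    # per batch, giving a 10 x 400 two-way matrix.
    unfolded = cube.reshape(10, 50 * 8)

    pca = PCA(n_components=2)
    scores = pca.fit_transform(unfolded)  # one score point per batch
    print(scores.shape)                   # (10, 2)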
References Further reading External links An Introduction to Chemometrics (archived website) IUPAC Glossary for Chemometrics Homepage of Chemometrics, Sweden Homepage of Chemometrics (a starting point) Chemometric Analysis for Spectroscopy General resource on advanced chemometric methods and recent developments Computational chemistry Metrics Analytical chemistry Cheminformatics
Chemometrics
[ "Chemistry", "Mathematics" ]
2,666
[ "Metrics", "Quantity", "Theoretical chemistry", "Computational chemistry", "nan", "Cheminformatics" ]
314,205
https://en.wikipedia.org/wiki/Polynomial%20long%20division
In algebra, polynomial long division is an algorithm for dividing a polynomial by another polynomial of the same or lower degree, a generalized version of the familiar arithmetic technique called long division. It can be done easily by hand, because it separates an otherwise complex division problem into smaller ones. Sometimes using a shorthand version called synthetic division is faster, with less writing and fewer calculations. Another abbreviated method is polynomial short division (Blomqvist's method). Polynomial long division is an algorithm that implements the Euclidean division of polynomials, which starting from two polynomials A (the dividend) and B (the divisor) produces, if B is not zero, a quotient Q and a remainder R such that A = BQ + R, and either R = 0 or the degree of R is lower than the degree of B. These conditions uniquely define Q and R, which means that Q and R do not depend on the method used to compute them. The result R = 0 occurs if and only if the polynomial A has B as a factor. Thus long division is a means for testing whether one polynomial has another as a factor, and, if it does, for factoring it out. For example, if a root r of A is known, it can be factored out by dividing A by (x – r). Example Polynomial long division Find the quotient and the remainder of the division of , the dividend, by , the divisor. The dividend is first rewritten like this: The quotient and remainder can then be determined as follows: Divide the first term of the dividend by the highest term of the divisor (meaning the one with the highest power of x, which in this case is x). Place the result above the bar (x3 ÷ x = x2). Multiply the divisor by the result just obtained (the first term of the eventual quotient). Write the result under the first two terms of the dividend (). Subtract the product just obtained from the appropriate terms of the original dividend (being careful that subtracting something having a minus sign is equivalent to adding something having a plus sign), and write the result underneath Then, "bring down" the next term from the dividend. Repeat the previous three steps, except this time use the two terms that have just been written as the dividend. Repeat step 4. This time, there is nothing to "bring down". The polynomial above the bar is the quotient q(x), and the number left over (5) is the remainder r(x). The long division algorithm for arithmetic is very similar to the above algorithm, in which the variable x is replaced (in base 10) by the specific number 10. Polynomial short division Blomqvist's method is an abbreviated version of the long division above. This pen-and-paper method uses the same algorithm as polynomial long division, but mental calculation is used to determine remainders. This requires less writing, and can therefore be a faster method once mastered. The division is at first written in a similar way as long multiplication with the dividend at the top, and the divisor below it. The quotient is to be written below the bar from left to right. Divide the first term of the dividend by the highest term of the divisor (x3 ÷ x = x2). Place the result below the bar. x3 has been divided leaving no remainder, and can therefore be marked as used by crossing it out. The result x2 is then multiplied by the second term in the divisor −3 = −3x2. Determine the partial remainder by subtracting −2x2 − (−3x2) = x2. Mark −2x2 as used and place the new remainder x2 above it. Divide the highest term of the remainder by the highest term of the divisor (x2 ÷ x = x). 
Place the result (+x) below the bar. x2 has been divided leaving no remainder, and can therefore be marked as used. The result x is then multiplied by the second term in the divisor −3 = −3x. Determine the partial remainder by subtracting 0x − (−3x) = 3x. Mark 0x as used and place the new remainder 3x above it. Divide the highest term of the remainder by the highest term of the divisor (3x ÷ x = 3). Place the result (+3) below the bar. 3x has been divided leaving no remainder, and can therefore be marked as used. The result 3 is then multiplied by the second term in the divisor −3 = −9. Determine the partial remainder by subtracting −4 − (−9) = 5. Mark −4 as used and place the new remainder 5 above it. The polynomial below the bar is the quotient q(x), and the number left over (5) is the remainder r(x). Pseudocode The algorithm can be represented in pseudocode as follows, where +, −, and × represent polynomial arithmetic, and lead(r) / lead(d) represents the polynomial obtained by dividing the two leading terms:

    function n / d is
        require d ≠ 0
        q ← 0
        r ← n               // At each step n = d × q + r
        while r ≠ 0 and degree(r) ≥ degree(d) do
            t ← lead(r) / lead(d)    // Divide the leading terms
            q ← q + t
            r ← r − t × d
        return (q, r)

This works equally well when degree(n) < degree(d); in that case the result is just the trivial (0, n). This algorithm describes exactly the above paper and pencil method: d is written on the left of the ")"; q is written, term after term, above the horizontal line, the last term being the value of t; the region under the horizontal line is used to compute and write down the successive values of r. Euclidean division For every pair of polynomials (A, B) such that B ≠ 0, polynomial division provides a quotient Q and a remainder R such that A = BQ + R, and either R = 0 or degree(R) < degree(B). Moreover, (Q, R) is the unique pair of polynomials having this property. The process of getting the uniquely defined polynomials Q and R from A and B is called Euclidean division (sometimes division transformation). Polynomial long division is thus an algorithm for Euclidean division. Applications Factoring polynomials Sometimes one or more roots of a polynomial are known, perhaps having been found using the rational root theorem. If one root r of a polynomial P(x) of degree n is known, then polynomial long division can be used to factor P(x) into the form (x − r)Q(x), where Q(x) is a polynomial of degree n − 1. Q(x) is simply the quotient obtained from the division process; since r is known to be a root of P(x), it is known that the remainder must be zero. Likewise, if several roots r, s, . . . of P(x) are known, a linear factor (x − r) can be divided out to obtain Q(x), and then (x − s) can be divided out of Q(x), etc. Alternatively, the quadratic factor (x − r)(x − s) can be divided out of P(x) to obtain a quotient of degree n − 2. This method is especially useful for cubic polynomials, and sometimes all the roots of a higher-degree polynomial can be obtained. For example, if the rational root theorem produces a single (rational) root of a quintic polynomial, it can be factored out to obtain a quartic (fourth degree) quotient; the explicit formula for the roots of a quartic polynomial can then be used to find the other four roots of the quintic. There is, however, no general way to solve a quintic by purely algebraic methods; see Abel–Ruffini theorem.
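For readers who want to run the algorithm, the following Python function is a direct transcription of the pseudocode above, with polynomials represented as coefficient lists ordered from highest to lowest degree. The representation and the function name are choices made for this sketch rather than anything prescribed by the algorithm itself.

    def poly_divmod(n, d):
        # Euclidean division of polynomials given as coefficient lists,
        # highest degree first, e.g. x^3 - 2x^2 - 4 -> [1, -2, 0, -4].
        if not any(d):
            raise ZeroDivisionError("polynomial division by zero")
        q = [0] * max(len(n) - len(d) + 1, 1)
        r = list(n)
        while len(r) >= len(d) and any(r):
            t = r[0] / d[0]                      # divide the leading terms
            q[len(q) - (len(r) - len(d)) - 1] = t
            for i, coef in enumerate(d):         # r <- r - t * d
                r[i] -= t * coef
            r.pop(0)                             # leading coefficient is now 0
        return q, r

    # The worked example above: x^3 - 2x^2 - 4 divided by x - 3
    # gives quotient x^2 + x + 3 and remainder 5.
    print(poly_divmod([1, -2, 0, -4], [1, -3]))  # ([1.0, 1.0, 3.0], [5.0])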
Finding tangents to polynomial functions Polynomial long division can be used to find the equation of the line that is tangent to the graph of the function defined by the polynomial P(x) at a particular point x = r. If R(x) is the remainder of the division of P(x) by (x − r)2, then the equation of the tangent line at x = r to the graph of the function y = P(x) is y = R(x), regardless of whether or not r is a root of the polynomial. Example For instance, to find the line tangent to the curve y = x3 − 12x2 − 42 at x = 1, begin by dividing the polynomial by (x − 1)2 = x2 − 2x + 1; the remainder is −21x − 32, so the tangent line is y = −21x − 32. Cyclic redundancy check A cyclic redundancy check uses the remainder of polynomial division to detect errors in transmitted messages. See also Polynomial remainder theorem Synthetic division, a more concise method of performing Euclidean polynomial division Ruffini's rule Euclidean domain Gröbner basis Greatest common divisor of two polynomials References Polynomials Computer algebra Division (mathematics)
Polynomial long division
[ "Mathematics", "Technology" ]
1,839
[ "Polynomials", "Computer algebra", "Computational mathematics", "Computer science", "Algebra" ]
4,052,033
https://en.wikipedia.org/wiki/Toyota%20hybrid%20vehicles
By the end of 2006 there were about 15 hybrid vehicles from various car makers available in the U.S. Toyota had sold its first million hybrids by May 2007, and had sold a total of two million hybrids by the end of August 2009. Comparisons Below is a comparison of the Toyota hybrid models. Note: Miles per gallon estimates are those provided by the United States Environmental Protection Agency (EPA) and are the 2008 revision of the original numbers. Hybrid access to US HOV lanes varies by US state. Factors can include the total/average miles per gallon rating from the EPA, the type of technology used, and/or the date of vehicle registration with the relevant state authorities. (Several states have begun restricting HOV lane access by hybrid and clean-fuel vehicles due to crowding.) Traction battery power is the amount of power available from the electric portion of the powertrain without the aid of the internal combustion engine (ICE). This is generally limited by the traction battery rather than the electric motor(s). See also Hybrid Synergy Drive Hybrid electric vehicles in the United States List of hybrid vehicles Notes References External links about.com hybrid comparison allabouthybridcars comparison Hybrid Synergy Drive movie from Toyota United States Environmental Protection Agency Fuel Economy Site 2016 Toyota Prius Specifications Revealed "Evaluation of the 2007 Toyota Camry Hybrid Synergy Drive system" from Oak Ridge National Laboratory has an extensive comparison between the 2004 Prius and 2007 Camry Hybrid systems Comparison Toyota engines Hybrid electric cars Toyota
Toyota hybrid vehicles
[ "Technology" ]
298
[ "nan" ]
4,052,447
https://en.wikipedia.org/wiki/Neuroimmunology
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to the development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems, including the physiological functioning of the two systems in health and disease, malfunction of either or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis. Background Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system, production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes. From the US National Institutes of Health: "Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune system in both health and disease. Immune cells and neuroimmune molecules such as cytokines, chemokines, and growth factors modulate brain function through multiple signaling pathways throughout the lifespan. Immunological, physiological and psychological stressors engage cytokines and other immune molecules as mediators of interactions with neuroendocrine, neuropeptide, and neurotransmitter systems. For example, brain cytokine levels increase following stress exposure, while treatments designed to alleviate stress reverse this effect. "Neuroinflammation and neuroimmune activation have been shown to play a role in the etiology of a variety of neurological disorders such as stroke, Parkinson's and Alzheimer's disease, multiple sclerosis, pain, and AIDS-associated dementia. However, cytokines and chemokines also modulate CNS function in the absence of overt immunological, physiological, or psychological challenges. For example, cytokines and cytokine receptor inhibitors affect cognitive and emotional processes. Recent evidence suggests that immune molecules modulate brain systems differently across the lifespan. Cytokines and chemokines regulate neurotrophins and other molecules critical to neurodevelopmental processes, and exposure to certain neuroimmune challenges early in life affects brain development. In adults, cytokines and chemokines affect synaptic plasticity and other ongoing neural processes, which may change in aging brains. Finally, interactions of immune molecules with the hypothalamic-pituitary-gonadal system indicate that sex differences are a significant factor determining the impact of neuroimmune influences on brain function and behavior." Recent research demonstrates that reduction of lymphocyte populations can impair cognition in mice, and that restoration of lymphocytes restores cognitive abilities.
Epigenetics Overview Epigenetic medicine encompasses a new branch of neuroimmunology that studies the brain and behavior, and has provided insights into the mechanisms underlying brain development, evolution, neuronal and network plasticity and homeostasis, senescence, the etiology of diverse neurological diseases and neural regenerative processes. It is leading to the discovery of environmental stressors that dictate initiation of specific neurological disorders and of specific disease biomarkers. The goal is to "promote accelerated recovery of impaired and seemingly irrevocably lost cognitive, behavioral, sensorimotor functions through epigenetic reprogramming of endogenous regional neural stem cells". Neural stem cell fate Several studies have shown that regulation of stem cell maintenance and the subsequent fate determinations are quite complex. The complexity of determining the fate of a stem cell can be best understood by knowing the "circuitry employed to orchestrate stem cell maintenance and progressive neural fate decisions". Neural fate decisions include the utilization of multiple neurotransmitter signal pathways along with the use of epigenetic regulators. The advancement of neuronal stem cell differentiation and glial fate decisions must be orchestrated timely to determine subtype specification and subsequent maturation processes including myelination. Neurodevelopmental disorders Neurodevelopmental disorders result from impairments of growth and development of the brain and nervous system and lead to many disorders. Examples of these disorders include Asperger syndrome, traumatic brain injury, communication, speech and language disorders, genetic disorders such as fragile-X syndrome, Down syndrome, ADHD, epilepsy, and fetal alcohol syndrome. Studies have shown that autism spectrum disorders (ASDs) may present due to basic disorders of epigenetic regulation. Other neuroimmunological research has shown that deregulation of correlated epigenetic processes in ASDs can alter gene expression and brain function without causing classical genetic lesions which are more easily attributable to a cause and effect relationship. These findings are some of the numerous recent discoveries in previously unknown areas of gene misexpression. Neurodegenerative disorders Increasing evidence suggests that neurodegenerative diseases are mediated by erroneous epigenetic mechanisms. Neurodegenerative diseases include Huntington's disease and Alzheimer's disease. Neuroimmunological research into these diseases has yielded evidence including the absence of simple Mendelian inheritance patterns, global transcriptional dysregulation, multiple types of pathogenic RNA alterations, and many more. In one experiment, treatment of Huntington's disease with inhibitors of histone deacetylases (HDACs), enzymes that remove acetyl groups from lysine, and with DNA/RNA-binding anthracyclines that affect nucleosome positioning, showed positive effects on behavioral measures, neuroprotection, nucleosome remodeling, and associated chromatin dynamics. Another new finding on neurodegenerative diseases is that overexpression of HDAC6 suppresses the neurodegenerative phenotype associated with Alzheimer's disease pathology in associated animal models. Other findings show that additional mechanisms are responsible for the "underlying transcriptional and post-transcriptional dysregulation and complex chromatin abnormalities in Huntington's disease".
Neuroimmunological disorders The nervous and immune systems have many interactions that dictate overall body health. The nervous system is under constant monitoring from both the adaptive and innate immune system. Throughout development and adult life, the immune system detects and responds to changes in cell identity and neural connectivity. Deregulation of both adaptive and innate immune responses, impairment of crosstalk between these two systems, as well as alterations in the deployment of innate immune mechanisms can predispose the central nervous system (CNS) to autoimmunity and neurodegeneration. Other evidence has shown that development and deployment of the innate and acquired immune systems in response to stressors on functional integrity at the cellular and systemic levels, and the evolution of autoimmunity, are mediated by epigenetic mechanisms. Autoimmunity has been increasingly linked to targeted deregulation of epigenetic mechanisms, and therefore, use of epigenetic therapeutic agents may help reverse complex pathogenic processes. Multiple sclerosis (MS) is one type of neuroimmunological disorder that affects many people. MS features CNS inflammation, immune-mediated demyelination and neurodegeneration. Myalgic encephalomyelitis (also known as chronic fatigue syndrome) is a multi-system disease that causes dysfunction of neurological, immune, endocrine and energy-metabolism systems. Though many patients show neuroimmunological degeneration, the exact roots of ME/CFS are unknown. Symptoms of ME/CFS include significantly lowered ability to participate in regular activities, difficulty standing or sitting upright, inability to talk, sleep problems, excessive sensitivity to light, sound, or touch, and/or thinking and memory problems (impaired cognitive functioning). Other common symptoms are muscle or joint pain, sore throat, or night sweats. There is no cure, but symptoms may be treated. Patients who are sensitive to mold may show improvement in symptoms after moving to drier areas. Some patients have less severe ME, whereas others may be bedridden for life. PTSD has been linked to neuroimmune dysfunction, with the link being greater in individuals with more severe anhedonia. Major themes of research The interactions of the CNS and immune system are fairly well known. In studies of burn-induced organ dysfunction, vagus nerve stimulation has been found to attenuate organ and serum cytokine levels. Burns generally induce abacterial cytokine generation, and parasympathetic stimulation after burns may decrease cardiodepressive mediator generation. Multiple groups have produced experimental evidence supporting proinflammatory cytokine production as the central element of the burn-induced stress response. Still other groups have shown that vagus nerve signaling has a prominent impact on various inflammatory pathologies. These studies have laid the groundwork for inquiries into whether vagus nerve stimulation may influence postburn immunological responses and thus can ultimately be used to limit organ damage and failure from burn-induced stress. Basic understanding of neuroimmunological diseases has changed significantly during the last ten years. New data broadening the understanding of new treatment concepts has been obtained for a large number of neuroimmunological diseases, none more so than multiple sclerosis, since many efforts have been undertaken recently to clarify the complexity of the pathomechanisms of this disease.
Accumulating evidence from animal studies suggests that some aspects of depression and fatigue in MS may be linked to inflammatory markers. Studies have demonstrated that Toll-like receptor 4 (TLR4) is critically involved in neuroinflammation and T cell recruitment in the brain, contributing to exacerbation of brain injury. Research into the link between smell, depressive behavior, and autoimmunity has turned up interesting findings, including the facts that inflammation is common in all of the diseases analyzed, that depressive symptoms appear early in the course of most diseases, that smell impairment is also apparent early in the development of neurological conditions, and that all of the diseases involve the amygdala and hippocampus. How the immune system functions and what factors contribute to responses are being heavily investigated, along with the aforementioned coincidences. Neuroimmunology is also an important topic to consider during the design of neural implants. Neural implants are being used to treat many diseases, and it is key that their design and surface chemistry do not elicit an immune response. Future directions The nervous system and immune system require the appropriate degrees of cellular differentiation, organizational integrity, and neural network connectivity. These operational features of the brain and nervous system may make signaling difficult to duplicate in severely diseased scenarios. There are currently three classes of therapies that have been utilized in both animal models of disease and in human clinical trials. These three classes include DNA methylation inhibitors, HDAC inhibitors, and RNA-based approaches. DNA methylation inhibitors are used to activate previously silenced genes. HDACs are a class of enzymes that carry out a broad set of biochemical modifications and can affect DNA demethylation and synergize with other therapeutic agents. The final therapy includes using RNA-based approaches to enhance stability, specificity, and efficacy, especially in diseases that are caused by RNA alterations. Emerging concepts concerning the complexity and versatility of the epigenome may suggest ways to target genomewide cellular processes. Other studies suggest that seminal regulator targets may eventually be identified, allowing alterations to the massive epigenetic reprogramming during gametogenesis. Many future treatments may extend beyond being purely therapeutic and may be preventive, perhaps in the form of a vaccine. Newer high-throughput technologies, when combined with advances in imaging modalities such as in vivo optical nanotechnologies, may give rise to even greater knowledge of genomic architecture, nuclear organization, and the interplay between the immune and nervous systems. See also Immune system Immunology Gut–brain axis Neural top down control of physiology Neuroimmune system Neurology Psychosomatic illness References Further reading (Written for the highly technical reader) Mind-Body Medicine: An Overview, US National Institutes of Health, Center for Complementary and Integrative Health technical. (Written for the general public) External links Online Resources Psychoneuroimmunology, Neuroimmunomodulation (6 chapters from this Cambridge UP book are freely available) More than 100, freely available, published research articles on neuroimmunology and related topics by Professor Michael P. Pender, Neuroimmunology Research Unit, The University of Queensland Branches of immunology Clinical neuroscience Neurology
Neuroimmunology
[ "Biology" ]
2,676
[ "Branches of immunology" ]
4,052,453
https://en.wikipedia.org/wiki/Behavioral%20modeling
The behavioral approach to systems theory and control theory was initiated in the late 1970s by J. C. Willems as a result of resolving inconsistencies present in classical approaches based on state-space, transfer function, and convolution representations. This approach is also motivated by the aim of obtaining a general framework for system analysis and control that respects the underlying physics. The main object in the behavioral setting is the behavior – the set of all signals compatible with the system. An important feature of the behavioral approach is that it does not distinguish a priori between input and output variables. Apart from putting system theory and control on a rigorous basis, the behavioral approach unified the existing approaches and brought new results on controllability for nD systems, control via interconnection, and system identification. Dynamical system as a set of signals In the behavioral setting, a dynamical system is a triple Σ = (T, W, B) where T is the "time set" – the time instances over which the system evolves, W is the "signal space" – the set in which the variables whose time evolution is modeled take on their values, and B ⊆ W^T is the "behavior" – the set of signals that are compatible with the laws of the system (W^T denotes the set of all signals, i.e., functions from T into W). w ∈ B means that w is a trajectory of the system, while w ∉ B means that the laws of the system forbid the trajectory w to happen. Before the phenomenon is modeled, every signal in W^T is deemed possible, while after modeling, only the outcomes in B remain as possibilities. Special cases: T = ℝ – continuous-time systems; T = ℤ – discrete-time systems; W = ℝ^q – most physical systems; W a finite set – discrete event systems. Linear time-invariant differential systems System properties are defined in terms of the behavior. The system Σ = (T, W, B) is said to be "linear" if W is a vector space and B is a linear subspace of W^T, and "time-invariant" if the time set consists of the real or natural numbers and σ^t B ⊆ B for all t ∈ T, where σ^t denotes the t-shift, defined by (σ^t f)(t') := f(t' + t). In these definitions linearity articulates the superposition law, while time-invariance articulates that the time-shift of a legal trajectory is in its turn a legal trajectory. A "linear time-invariant differential system" is a dynamical system whose behavior B is the solution set of a system of constant coefficient linear ordinary differential equations R(d/dt)w = 0, where R is a matrix of polynomials with real coefficients. The coefficients of R are the parameters of the model. In order to define the corresponding behavior, we need to specify when we consider a signal w : ℝ → ℝ^q to be a solution of R(d/dt)w = 0. For ease of exposition, often infinitely differentiable solutions are considered. There are other possibilities, such as taking distributional solutions, or locally integrable solutions, with the ordinary differential equations interpreted in the sense of distributions. The behavior defined is B = {w ∈ C^∞(ℝ, ℝ^q) : R(d/dt)w = 0}. This particular way of representing the system is called a "kernel representation" of the corresponding dynamical system. There are many other useful representations of the same behavior, including transfer function, state space, and convolution. For accessible sources regarding the behavioral approach, see the references below. Observability of latent variables A key question of the behavioral approach is whether a quantity w1 can be deduced given an observed quantity w2 and a model. If w1 can be deduced given w2 and the model, w2 is said to be observable.
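As a concrete illustration of a kernel representation in the discrete-time case (T = ℕ, W = ℝ), the short Python sketch below checks whether a finite trajectory is compatible with a scalar difference-equation law R(σ)w = 0, where σ is the shift operator. The particular polynomial R, the tolerance, and the helper name in_behavior are illustrative assumptions of this sketch, not taken from the article.

    # A hypothetical discrete-time example: the behavior defined by the
    # kernel representation R(sigma) w = 0 with R(x) = x^2 - x - 1,
    # i.e. the law w(t+2) - w(t+1) - w(t) = 0.
    R = [1.0, -1.0, -1.0]   # coefficients of sigma^2, sigma^1, sigma^0

    def in_behavior(w, R, tol=1e-9):
        """Check a finite trajectory w against the law R(sigma) w = 0."""
        lag = len(R) - 1    # order of the difference equation
        return all(
            abs(sum(R[i] * w[t + lag - i] for i in range(len(R)))) < tol
            for t in range(len(w) - lag)
        )

    print(in_behavior([1, 1, 2, 3, 5, 8], R))   # True: a legal trajectory
    print(in_behavior([1, 2, 3, 4, 5, 6], R))   # False: forbidden by the law

In the language above, the first signal lies in the behavior B while the second is excluded by the laws of the system; the continuous-time case replaces the shift σ with d/dt and works with infinitely differentiable solutions.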
In terms of mathematical modeling, the to-be-deduced quantity or variable is often referred to as the latent variable, and the observed variable is the manifest variable. Such a system is then called an observable (latent variable) system. References Additional sources Paolo Rapisarda and Jan C. Willems, 2006. Recent Developments in Behavioral System Theory, July 24–28, 2006, MTNS 2006, Kyoto, Japan J.C. Willems. Terminals and ports. IEEE Circuits and Systems Magazine Volume 10, issue 4, pages 8–16, December 2010 J.C. Willems and H.L. Trentelman. On quadratic differential forms. SIAM Journal on Control and Optimization Volume 36, pages 1702–1749, 1998 J.C. Willems. Paradigms and puzzles in the theory of dynamical systems. IEEE Transactions on Automatic Control Volume 36, pages 259–294, 1991 J.C. Willems. Models for dynamics. Dynamics Reported Volume 2, pages 171–269, 1989 Systems theory Dynamical systems
Behavioral modeling
[ "Physics", "Mathematics" ]
883
[ "Mechanics", "Dynamical systems" ]
4,052,784
https://en.wikipedia.org/wiki/Neuroimmune%20system
The neuroimmune system is a system of structures and processes involving the biochemical and electrophysiological interactions between the nervous system and immune system which protect neurons from pathogens. It serves to protect neurons against disease by maintaining selectively permeable barriers (e.g., the blood–brain barrier and blood–cerebrospinal fluid barrier), mediating neuroinflammation and wound healing in damaged neurons, and mobilizing host defenses against pathogens. The neuroimmune system and peripheral immune system are structurally distinct. Unlike the peripheral system, the neuroimmune system is composed primarily of glial cells; among all the hematopoietic cells of the immune system, only mast cells are normally present in the neuroimmune system. However, during a neuroimmune response, certain peripheral immune cells are able to cross various blood or fluid–brain barriers in order to respond to pathogens that have entered the brain. For example, there is evidence that following injury macrophages and T cells of the immune system migrate into the spinal cord. Components of the complement system have also been documented as being produced directly in the central nervous system. Structure The key cellular components of the neuroimmune system are glial cells, including astrocytes, microglia, and oligodendrocytes. Unlike other hematopoietic cells of the peripheral immune system, mast cells naturally occur in the brain where they mediate interactions between gut microbes, the immune system, and the central nervous system as part of the microbiota–gut–brain axis. G protein-coupled receptors that are present in both CNS and immune cell types and which are responsible for a neuroimmune signaling process include: chemokine receptors (CXCR4); cannabinoid receptors (CB1, CB2, GPR55); trace amine-associated receptors (TAAR1); and μ-opioid receptors (all subtypes). Neuroimmunity is additionally mediated by the enteric nervous system, namely the interactions of enteric neurons and glial cells. These engage with enteroendocrine cells and local macrophages, sensing signals from the gut lumen, including those from the microbiota. These signals prompt local immune responses and transmit to the CNS through humoral and neural pathways. Interleukins and signals from immune cells can access the hypothalamus via the neurovascular unit or circumventricular organs. Cellular physiology The study of the neuroimmune system comprises an understanding of the immune and neurological systems and the cross-regulatory impacts of their functions. Cytokines regulate immune responses, possibly through activation of the hypothalamic-pituitary-adrenal (HPA) axis. Cytokines have also been implicated in the coordination between the nervous and immune systems. Binding of the immune-cell-released cytokine IL-1β to the neural receptor IL-1R has been documented; this binding results in an electrical impulse that creates the sensation of pain. Growing evidence suggests that auto-immune T-cells are involved in neurogenesis. Studies have shown that during times of adaptive immune system response, hippocampal neurogenesis is increased, and conversely that auto-immune T-cells and microglia are important for neurogenesis (and so memory and learning) in healthy adults. The neuroimmune system uses complementary processes of both sensory neurons and immune cells to detect and respond to noxious or harmful stimuli.
For example, invading bacteria may simultaneously activate inflammasomes, which process interleukins (IL-1β), and depolarize sensory neurons through the secretion of hemolysins. Hemolysins create pores causing a depolarizing release of potassium ions from inside the eukaryotic cell and an influx of calcium ions. Together this results in an action potential in sensory neurons and the activation of inflammasomes. Injury and necrosis also cause a neuroimmune response. The release of adenosine triphosphate (ATP) from damaged cells binds to and activates both P2X7 receptors on macrophages of the immune system, and P2X3 receptors of nociceptors of the nervous system. This causes a combined response: an action potential, due to the depolarization created by the flux of calcium and potassium ions, and the activation of inflammasomes. The produced action potential is also responsible for the sensation of pain, and the immune system produces IL-1β as a result of the ATP P2X7 receptor binding. Although inflammation is typically thought of as an immune response, there is an orchestration of neural processes involved with the inflammatory process of the immune system. Following injury or infection, there is a cascade of inflammatory responses such as the secretion of cytokines and chemokines that couple with the secretion of neuropeptides (such as substance P) and neurotransmitters (such as serotonin). Together, this coupled neuroimmune response has an amplifying effect on inflammation. Neuroimmune responses Neuron-glial cell interaction Neurons and glial cells work in conjunction to combat intruding pathogens and injury. Chemokines play a prominent role as mediators of neuron-glial cell communication, since both cell types express chemokine receptors. For example, the chemokine fractalkine has been implicated in communication between microglia and dorsal root ganglion (DRG) neurons in the spinal cord. Fractalkine has been associated with hypersensitivity to pain when injected in vivo, and has been found to upregulate inflammatory mediating molecules. Glial cells can effectively recognize pathogens in both the central nervous system and in peripheral tissues. When glial cells recognize foreign pathogens through the use of cytokine and chemokine signaling, they are able to relay this information to the CNS. The result is an increase in depressive symptoms. Chronic activation of glial cells, however, leads to neurodegeneration and neuroinflammation. Microglial cells are among the most prominent types of glial cells in the brain. One of their main functions is phagocytosing cellular debris following neuronal apoptosis. Following apoptosis, dead neurons secrete chemical signals that bind to microglial cells and cause them to devour harmful debris from the surrounding nervous tissue. Microglia and the complement system are also associated with synaptic pruning, as their secretions of cytokines, growth factors, and complement components all aid in the removal of obsolete synapses. Astrocytes are another type of glial cell that, among other functions, modulate the entry of immune cells into the CNS via the blood–brain barrier (BBB). Astrocytes also release various cytokines and neurotrophins that allow for immune cell entry into the CNS; these recruited immune cells target both pathogens and damaged nervous tissue. Reflexes Withdrawal reflex The withdrawal reflex is a reflex that protects an organism from harmful stimuli.
This reflex occurs when noxious stimuli activate nociceptors that send an action potential to nerves in the spine, which then innervate effector muscles and cause a sudden jerk to move the organism away from the dangerous stimuli. The withdrawal reflex involves both the nervous and immune systems. When the action potential travels back down the spinal nerve network, another impulse travels to peripheral sensory neurons that secrete amino acids and neuropeptides like calcitonin gene-related peptide (CGRP) and substance P. These chemicals act by increasing the redness and swelling of damaged tissues and the attachment of immune cells to endothelial tissue, thereby increasing the passage of immune cells across capillaries. Reflex response to pathogens and toxins Neuroimmune interactions also occur when pathogens, allergens, or toxins invade an organism. The vagus nerve connects to the gut and airways and elicits nerve impulses to the brainstem in response to the detection of toxins and pathogens. This electrical impulse that travels down from the brain stem travels to mucosal cells and stimulates the secretion of mucus; this impulse can also cause ejection of the toxin by muscle contractions that cause vomiting or diarrhea. Neuroimmune connections and the vagus nerve have also been highlighted more recently as essential to maintaining homeostasis in the context of novel viruses such as SARS-CoV-2. This is especially relevant when considering the role of the vagus nerve in regulating systemic inflammation via the cholinergic anti-inflammatory pathway. Reflex response to parasites The neuroimmune system is involved in reflexes associated with parasitic invasions of hosts. Nociceptors are also associated with the body's reflexes to pathogens as they are in strategic locations, such as airways and intestinal tissues, to induce muscle contractions that cause scratching, vomiting, and coughing. These reflexes are all designed to eject pathogens from the body. For example, scratching is induced by pruritogens that stimulate nociceptors on epidermal tissues. These pruritogens, like histamine, also cause other immune cells to secrete further pruritogens in an effort to cause more itching to physically remove parasitic invaders. In terms of intestinal and bronchial parasites, vomiting, coughing, sneezing, and diarrhea can also be caused by nociceptor stimulation in infected tissues, and nerve impulses originating from the brain stem that innervate respective smooth muscles. Eosinophils, in response to capsaicin, can trigger further sensory sensitization to the molecule. Patients with chronic cough also have an enhanced cough reflex to pathogens even if the pathogen has been expelled. In both cases, the release of eosinophils and other immune molecules causes a hypersensitization of sensory neurons in bronchial airways that produces enhanced symptoms. It has also been reported that increased immune cell secretions of neurotrophins in response to pollutants and irritants can restructure the peripheral network of nerves in the airways to allow for a more primed state for sensory neurons. Clinical significance It has been demonstrated that prolonged psychological stress may be linked with increased risk of viral respiratory infection. Studies in animals indicate that psychological stress raises glucocorticoid levels and, eventually, increases susceptibility to streptococcal skin infections. The neuroimmune system plays a role in Alzheimer's disease.
In particular, microglia may be protective by promoting phagocytosis and removal of amyloid-β (Aβ) deposits, but also become dysfunctional as disease progresses, producing neurotoxins, ceasing to clear Aβ deposits, and producing cytokines that further promote Aβ deposition. It has been shown that in Alzheimer's disease, amyloid-β directly activates microglia and other monocytes to produce neurotoxins. Astrocytes have also been implicated in multiple sclerosis (MS). Astrocytes are responsible for demyelination and the destruction of oligodendrocytes that is associated with the disease. This demyelinating effect is a result of the secretion of cytokines and matrix metalloproteinases (MMP) from activated astrocyte cells onto neighboring neurons. Astrocytes that remain in an activated state form glial scars that also prevent the re-myelination of neurons, as they are a physical impediment to oligodendrocyte progenitor cells (OPCs). The neuroimmune system is essential for increasing plasticity following a CNS injury via an increase in excitability and a decrease in inhibition, which leads to synaptogenesis and a restructuring of neurons. The neuroimmune system may play a role in recovery outcomes after a CNS injury. The neuroimmune system is also involved in asthma and chronic cough, as both are a result of the hypersensitized state of sensory neurons due to the release of immune molecules and positive feedback mechanisms. Preclinical and clinical studies have shown that cellular (microglia/macrophages, leukocytes, astrocytes, and mast cells, etc.) and molecular neuroimmune responses contribute to secondary brain injury after intracerebral hemorrhage. See also References Further reading External links Figure 7.1: Neuroimmune mechanisms of methamphetamine-induced CNS toxicity Immune system
Neuroimmune system
[ "Biology" ]
2,668
[ "Immune system", "Organ systems" ]
4,053,240
https://en.wikipedia.org/wiki/Trent%20Vanegas
Trent Vanegas (born 12 July 1974) is an American blogger from Michigan who is best known for his celebrity gossip blog, Pink is the New Blog (PITNB for short), which he launched in 2004. Early life and education Vanegas is originally from the Detroit, Michigan area. He attended Wayne State University and graduated from the University of Oklahoma in 1997. Career He spent five years as a high school history teacher at the University Liggett School in Grosse Pointe Woods, Michigan before starting the blog. Pink is the New Blog Pink is the New Blog started in June 2004. Vanegas started the blog for the purpose of getting into the habit of writing every day. The website focuses on celebrity gossip (often assigning his own monikers), including Britney Spears, Kevin Federline, Lindsay Lohan, Hilary Duff, Paris Hilton, Nicole Richie, and Jake Gyllenhaal, as well as Vanegas' own daily adventures and exploits. The website's signature is in the large block pink letters that are used to add comments to paparazzi photos. Some celebrities like John Mayer have copied the idea to give props to Vanegas and his site by using the large pink blocks in the pictures they take themselves. Vanegas also has pictures of himself with celebrities and celebrities with their pinkisthenewblog.com stickers. The blog has been published via Blogger since its inception, with Blogspot as its host URL. In 2005, PITNB had fewer than 200 hits per day according to a February 2006 New York Magazine article. However, a November 2005 New York Times article said the blog was receiving 70,000 visitors per day. In 2006, PITNB was reported to have 200,000 hits per month. In 2008, the site moved to WordPress and was relaunched with a new layout. In November 2011, PITNB received over 1 million unique hits per week. Personal life Vanegas currently resides in Los Angeles, California. References External links Pink is the New Blog Living people American bloggers Writers from Detroit American LGBTQ writers 1974 births LGBTQ people from Michigan American gossip columnists
Trent Vanegas
[ "Technology" ]
428
[ "Computing stubs", "World Wide Web stubs" ]
4,053,506
https://en.wikipedia.org/wiki/Sanguinarine
Sanguinarine is a polycyclic quaternary alkaloid. It is extracted from some plants, including the bloodroot plant, from whose scientific name, Sanguinaria canadensis, its name is derived; the Mexican prickly poppy (Argemone mexicana); Chelidonium majus; and Macleaya cordata. Toxicity Sanguinarine is a toxin that kills animal cells through its action on the Na+/K+-ATPase transmembrane protein. Epidemic dropsy is a disease that results from ingesting sanguinarine. If applied to the skin, sanguinarine may kill the cells where it is applied and cause a massive scab of dead flesh, called an eschar. For this reason, sanguinarine is termed an escharotic. It is said to be 2.5 times more toxic than dihydrosanguinarine. Alternative medicine Native Americans once used sanguinarine in the form of bloodroot as a medical remedy, believing it had curative properties as an emetic and respiratory aid and for a variety of ailments. In Colonial America, sanguinarine from bloodroot was used as a wart remedy. Later, in 1869, William Cook's The Physiomedical Dispensatory included information on the preparation and uses of sanguinarine. During the 1920s and 1930s, sanguinarine was the chief component of "Pinkard's Sanguinaria Compound," a drug sold by Dr. John Henry Pinkard. Pinkard advertised the compound as "a treatment, remedy, and cure for pneumonia, coughs, weak lungs, asthma, kidney, liver, bladder, or any stomach troubles, and effective as a great blood and nerve tonic." In 1931, several samples of the compound were seized by federal officials who determined Pinkard's claims to be fraudulent. Pinkard pleaded guilty in court and accepted a fine of $25.00. More recently, sanguinarine from bloodroot has been promoted by many alternative medicine companies as a treatment or cure for cancer; however, the U.S. Food and Drug Administration warns that products containing bloodroot, or other sanguinarine-based plants, have no proven anti-cancer effects, and that they should be avoided on those grounds. Meanwhile, the Australian Therapeutic Goods Administration also advises consumers not to purchase or use products marketed as containing Sanguinaria canadensis to cure or treat cancer, including certain types of skin cancer. Indeed, oral use of such products has been associated with oral leukoplakia, a possible precursor of oral cancer. In addition, the escharotic form of sanguinarine, applied to the skin for skin cancers, may leave cancerous cells alive in the skin while creating a significant scar. For this reason it is not recommended as a skin cancer treatment. Biosynthesis In plants, sanguinarine biosynthesis begins with 4-hydroxyphenylacetaldehyde and dopamine. These two compounds are combined to form norcoclaurine. Next, methyl groups are added to form N-methylcoclaurine. The enzyme CYP80B1 subsequently adds a hydroxyl group, forming 3'-hydroxy-N-methylcoclaurine. The addition of another methyl group transforms this compound into reticuline. Notably, biosynthesis of sanguinarine up to this point is virtually identical to that of morphine. However, instead of being converted to codeinone (as in the biosynthesis of morphine), reticuline is converted to scoulerine via berberine bridge enzyme (BBE). As such, this is the commitment step in the sanguinarine pathway. Although it is unknown exactly how scoulerine proceeds down the biosynthetic pathway, it is eventually converted to dihydrosanguinarine.
The precursor to sanguinarine, dihydrosanguinarine, is converted to the final toxin via the action of dihydrobenzophenanthridine oxidase. See also Berberine, a plant-derived compound having a chemical classification similar to that of sanguinarine. Chelidonine References Isoquinoline alkaloids Quinoline alkaloids Quaternary ammonium compounds Alkaloids found in Papaveraceae Toxins
Sanguinarine
[ "Chemistry", "Environmental_science" ]
925
[ "Toxicology", "Isoquinoline alkaloids", "Alkaloids by chemical classification", "Quinoline alkaloids", "Toxins" ]
4,053,538
https://en.wikipedia.org/wiki/Phosphatidylserine
Phosphatidylserine (abbreviated Ptd-L-Ser or PS) is a phospholipid and is a component of the cell membrane. It plays a key role in cell cycle signaling, specifically in relation to apoptosis. It is a key pathway for viruses to enter cells via apoptotic mimicry. Its exposure on the outer surface of a membrane marks the cell for destruction via apoptosis. Structure Phosphatidylserine is a phospholipid—more specifically a glycerophospholipid—which consists of two fatty acids attached in ester linkage to the first and second carbon of glycerol and serine attached through a phosphodiester linkage to the third carbon of the glycerol. Phosphatidylserine sourced from plants differs in fatty acid composition from that sourced from animals. It is commonly found in the inner (cytoplasmic) leaflet of biological membranes. It is almost entirely found in the inner monolayer of the membrane, with less than 10% of it in the outer monolayer. Biosynthesis Phosphatidylserine (PS) is the major acidic phospholipid class that accounts for 13–15% of the phospholipids in the human cerebral cortex. In the plasma membrane, PS is localized exclusively in the cytoplasmic leaflet where it forms part of protein docking sites necessary for the activation of several key signaling pathways. These include the Akt, protein kinase C (PKC), and Raf-1 signaling pathways, which are known to stimulate neuronal survival, neurite growth, and synaptogenesis. Modulation of the PS level in the plasma membrane of neurons has a significant impact on these signaling processes. Composition Phosphatidylserine is formed in bacteria (such as E. coli) through a displacement of cytidine monophosphate (CMP) through a nucleophilic attack by the hydroxyl functional group of serine. CMP is formed from CDP-diacylglycerol by PS synthase. Phosphatidylserine can eventually become phosphatidylethanolamine through the action of the enzyme PS decarboxylase (forming carbon dioxide as a byproduct). Similar to bacteria, yeast can form phosphatidylserine in an identical pathway. In mammals, phosphatidylserine is instead derived from phosphatidylethanolamine or phosphatidylcholine through one of two Ca2+-dependent head-group exchange reactions in the endoplasmic reticulum. Both reactions require a serine but produce an ethanolamine or choline, respectively. These are promoted by phosphatidylserine synthase 1 (PSS1) or 2 (PSS2). Conversely, phosphatidylserine can also give rise to phosphatidylethanolamine and phosphatidylcholine, although in animals the pathway to generate phosphatidylcholine from phosphatidylserine only operates in the liver. Functions Cognitive function PS has been studied for its potential in improving memory, learning, and concentration. Supplementation with PS has been shown to have no effect in enhancing cognitive performance in the elderly and individuals with cognitive decline. Apoptosis PS plays a crucial role in the process of apoptosis (programmed cell death). During apoptosis, PS translocates from the inner leaflet of the cell membrane to the outer leaflet, serving as a signal for phagocytic cells to engulf the dying cell. Dietary sources The average daily phosphatidylserine intake in a Western diet is estimated to be 130 mg. Phosphatidylserine may be found in meat and fish. Only small amounts are found in dairy products and vegetables, with the exception of white beans and soy lecithin. Phosphatidylserine is found in soy lecithin at about 3% of total phospholipids. Table 1. Phosphatidylserine content in different foods.
Supplementation Health claims A panel of the European Food Safety Authority concluded that a cause and effect relationship cannot be established between the consumption of phosphatidylserine and "memory and cognitive functioning in the elderly", "mental health/cognitive function" and "stress reduction and enhanced memory function". This conclusion follows because bovine brain cortex- and soy-based phosphatidylserine are different substances and might, therefore, have different biological activities. Therefore, the results of studies using phosphatidylserine from different sources cannot be generalized. Cognition In May 2003, the Food and Drug Administration gave "qualified health claim" status to phosphatidylserine thus allowing labels to state "consumption of phosphatidylserine may reduce the risk of dementia and cognitive dysfunction in the elderly" along with the disclaimer "very limited and preliminary scientific research suggests that phosphatidylserine may reduce the risk of cognitive dysfunction in the elderly." According to the FDA, there is a lack of scientific agreement amongst qualified experts that a relationship exists between phosphatidylserine and cognitive function. More recent reviews have suggested that the relationship may be more robust, though the mechanism remains unclear. A 2020 review of three clinical trials found that phosphatidylserine is likely effective for enhancing cognitive function in older people with mild cognitive impairment. Some studies have suggested that whether the phosphatidylserine is plant- or animal-derived may have significance, with the FDA's statement applying specifically to soy-derived products. Safety Initially, phosphatidylserine supplements were derived from bovine cortex. However, due to the risk of potential transfer of infectious diseases such as bovine spongiform encephalopathy (or "mad cow disease"), soy-derived supplements became an alternative. A 2002 safety report determined supplementation in elderly people at a dosage of 200 mg three times daily to be safe. Some manufacturers of phosphatidylserine use sunflower lecithin instead of soy lecithin as a source of raw material production. References External links DrugBank info page Phospholipids Membrane biology
Phosphatidylserine
[ "Chemistry" ]
1,325
[ "Signal transduction", "Membrane biology", "Phospholipids", "Molecular biology" ]
4,053,672
https://en.wikipedia.org/wiki/Evolutionary%20developmental%20psychology
Evolutionary developmental psychology (EDP) is a research paradigm that applies the basic principles of evolution by natural selection to understand the development of human behavior and cognition. It involves the study of both the genetic and environmental mechanisms that underlie the development of social and cognitive competencies, as well as the epigenetic (gene-environment interactions) processes that adapt these competencies to local conditions. EDP considers both the reliably developing, species-typical features of ontogeny (developmental adaptations), as well as individual differences in behavior, from an evolutionary perspective. While evolutionary views tend to regard most individual differences as the result of either random genetic noise (evolutionary byproducts) and/or idiosyncrasies (for example, peer groups, education, neighborhoods, and chance encounters) rather than products of natural selection, EDP asserts that natural selection can favor the emergence of individual differences via "adaptive developmental plasticity." From this perspective, human development follows alternative life-history strategies in response to environmental variability, rather than following one species-typical pattern of development. EDP is closely linked to the theoretical framework of evolutionary psychology (EP), but is also distinct from EP in several domains, including research emphasis (EDP focuses on adaptations of ontogeny, as opposed to adaptations of adulthood) and the consideration of proximate ontogenetic and environmental factors (i.e., how development happens) in addition to the more ultimate factors (i.e., why development happens) that are the focus of mainstream evolutionary psychology. History Development and evolution Like mainstream evolutionary psychology, EDP is rooted in Charles Darwin's theory of natural selection. Darwin himself emphasized development, using the process of embryology as evidence to support his theory. From The Descent of Man: "Man is developed from an ovule...which differs in no respect from the ovules of other animals. The embryo itself at a very early period can hardly be distinguished from that of other members of the vertebrate kingdom." Darwin also published his observations of the development of one of his own sons in 1877, noting the child's emotional, moral, and linguistic development. Despite this early emphasis on developmental processes, theories of evolution and theories of development have long been viewed as separate, or even opposed to one another (for additional background, see nature versus nurture). Since the advent of the modern evolutionary synthesis, evolutionary theory has been primarily "gene-centric", and developmental processes have often been seen as incidental. Evolutionary biologist Richard Dawkins's appraisal of development in 1973 illustrates this shift: "The details of embryological developmental processes, interesting as they may be, are irrelevant to evolutionary considerations." Similarly, sociobiologist E. O. Wilson regarded ontogenetic variation as "developmental noise". As a consequence of this shift in perspective, many biologists interested in topics such as embryology and developmental systems subsequently branched off into evolutionary developmental biology. Evolutionary perspectives in developmental psychology Despite the minimization of development in evolutionary theory, early developmental psychology was influenced by evolution.
Both Darwin's theory of evolution and Karl Ernst von Baer's developmental principles of ontogeny shaped early thought in developmental psychology. Wilhelm T. Preyer, a pioneer of child psychology, was heavily inspired by Darwin's work and approached the mental development of children from an evolutionary perspective. However, evolutionary theory has had a limited impact on developmental psychology as a whole, and some authors argue that even its early influence was minimal. Developmental psychology, as with the social sciences in general, has long been resistant to evolutionary theories of development (with some notable exceptions, such as John Bowlby's work on attachment theory). Evolutionary approaches to human behavior were, and to some extent continue to be, considered a form of genetic determinism and dismissive of the role of culture and experience in shaping human behavior (see Standard social science model). One group of developmental psychologists who have embraced evolutionary perspectives are nativists, who argue that infants possess innate cognitive mechanisms (or modules) which allow them to acquire crucial information, such as language (for a prominent example, see universal grammar). Evolutionary developmental psychology Evolutionary developmental psychology can be viewed as a more focused theoretical framework derived from the larger field of evolutionary psychology (EP). Mainstream evolutionary psychology grew out of earlier movements which applied the principles of evolutionary biology to understand the mind and behavior such as sociobiology, ethology, and behavioral ecology, differing from these earlier approaches by focusing on identifying psychological adaptations rather than adaptive behavior. While EDP theory generally aligns with that of mainstream EP, it is distinguished by a conscious effort to reconcile theories of both evolution and development. EDP theory diverges from mainstream evolutionary psychology in both the degree of importance placed on the environment in influencing behavior, and in how evolution has shaped the development of human psychology. Advocates of EDP assert that evolutionary psychologists, while acknowledging the role of the environment in shaping behavior and making claims as to its effects, rarely develop explicit models (i.e., predictions of how the environment might shape behavior) to support their claims. EDP seeks to distinguish itself from mainstream evolutionary psychology in this way by embracing a developmental systems approach, and emphasizing that function at one level of organization (e.g., the genetic level) affects organization at adjacent levels of organization. Developmental systems theorists such as Robert Lickliter point out that the products of development are both genetic and epigenetic, and have questioned the strictly gene-centric view of evolution. However, some authors have rebutted the claim that mainstream evolutionary psychologists do not integrate developmental theory into their theoretical programs, and have further questioned the value of developmental systems theory (see Criticism). Additionally, evolutionary developmental psychologists emphasize research on psychological development and behaviors across the lifespan. Pioneers of EDP contrast their work with that of mainstream evolutionary psychologists, who they argue focus primarily on adults, especially on behaviors related to socializing and mating.
Evolutionary developmental psychologists have worked to integrate evolutionary and developmental theories, attempting to synthesize the two without discarding the theoretical foundations of either. This effort is evident in the types of questions which researchers working in the EDP paradigm ask; in reference to Nikolaas Tinbergen's four categories of questions, EP typically focuses on evolutionary ("Why") questions, while EDP explicitly integrates proximate questions ("How"), with the assumption that a greater understanding of the former category will yield insights into the latter. (Tinbergen's four questions concern mechanism, ontogeny, phylogeny, and adaptive function.) Basic assumptions The following list summarizes the broad theoretical assumptions of EDP. From "Evolutionary Developmental Psychology," in The Handbook of Evolutionary Psychology: (1) All evolutionarily-influenced characteristics in the phenotype of adults develop, and this requires examining not only the functioning of these characteristics in adults but also their ontogeny. (2) All evolved characteristics develop via continuous and bidirectional gene-environment interactions that emerge dynamically over time. (3) Infants and children are prepared by natural selection to process some information more readily than others. (4) Development is constrained by genetic, environmental, and cultural factors. (5) Infants and children show a high degree of developmental plasticity and adaptive sensitivity to context. (6) An extended childhood is needed in which to learn the complexities of human social communities. (7) Many aspects of childhood serve as preparations for adulthood and were selected over the course of evolution (deferred adaptations). (8) Some characteristics of infants and children were selected to serve an adaptive function at specific times in development and not as preparations for adulthood (ontogenetic adaptations). Developmental adaptations EDP assumes that natural selection creates adaptations for specific stages of development, rather than only specifying adult states. Frequently, EDP researchers seek to identify such adaptations, which have been subdivided into deferred adaptations, ontogenetic adaptations, and conditional adaptations. Deferred adaptations Some behaviors or traits exhibited during childhood or adolescence may have been selected to serve as preparations for adult life, a type of adaptation that evolutionary developmental psychologists have named "deferred adaptations". Sex differences in children's play may be an example of this type of adaptation: higher frequencies of "rough-and-tumble" play among boys, as well as content differences in fantasy play (cross-culturally, girls engage in more "parenting" play than boys), seem to serve as early preparation for the roles that men and women play in many extant contemporary societies, and, presumably, played over human evolutionary history. Ontogenetic adaptations In contrast to deferred adaptations, which function to prepare individuals for future environments (i.e., adulthood), ontogenetic adaptations adapt individuals to their current environment. These adaptations serve a specific function during a particular period of development, after which they are discarded. Ontogenetic adaptations can be physiological (for example, fetal mammals derive nutrition and oxygen from the placenta before birth, but no longer utilize the placenta after birth) and psychological. David F.
Bjorklund has argued that the imitation of facial gestures by infants, which has a predictable developmental window and seemingly different functions at different ages, shows evidence of being an ontogenetic adaptation. Conditional adaptations EDP emphasizes that children display considerable developmental plasticity, and proposes a special type of adaptation to facilitate adaptive developmental plasticity, called a conditional adaptation. Conditional adaptations detect and respond to relevant environmental cues, altering developmental pathways in ways which better adapt an individual to their particular environment. These adaptations allow organisms to implement alternative and contingent life history strategies, depending on environmental factors. Related research Social learning and the evolution of childhood The social brain (or Machiavellian) hypothesis posits that the emergence of a complex social environment (e.g., larger group sizes) served as a key selection pressure in the evolution of human intelligence. Among primates, larger brains result in an extension of the juvenile period, and some authors argue that humans evolved (and/or expanded) novel developmental stages, childhood and adolescence, in response to increasing social complexity and sophisticated social learning. While many species exhibit social learning to some degree and seemingly possess behavioral traditions (i.e., culture), humans can transmit cultural information across many generations with very high fidelity. High fidelity cultural learning is what many have argued is necessary for cumulative cultural evolution, and has only been definitively observed in humans, although arguments have been made for chimpanzees, orangutans, and New Caledonian crows. Developmentally-oriented researchers have proposed that over-imitation of behavioral models facilitates cultural learning, a phenomenon which emerges in children by age three and is seemingly absent in chimpanzees. Cooperation and prosociality Behaviors that benefit other members of one's social group, particularly those which appear costly to the prosocial or "altruistic" individual, have received considerable attention from disciplines interested in the evolution of behavior. Michael Tomasello has argued that cooperation and prosociality are evolved characteristics of human behavior, citing the emergence of "helping" behavior early in development (observed among 18–24-month-old infants) as one piece of evidence. Researchers investigating the ontogeny and evolution of human cooperation design experiments intended to reveal the prosociality of infants and young children, then compare children's performance with that of other animals, typically chimpanzees. While some of the helping behaviors exhibited by infants and young children have also been observed in chimpanzees, preschool-age children tend to display greater prosociality than both human-raised and semi-free-ranging adult chimps. Life history strategies and developmental plasticity EDP researchers emphasize that evolved strategies are context dependent, in the sense that a strategy which is optimal in one environment will often be sub-optimal in another environment. They argue that this will result in natural selection favoring "adaptive developmental plasticity," allowing an organism to alter its developmental trajectory in response to environmental cues.
Related to this is the idea of a life history strategy, which can be conceptualized as a chain of resource-allocation decisions (e.g., allocating resources towards growth or towards reproduction) that an organism makes. Biologists have used life history theory to characterize between-species variation in resource allocation in terms of a fast-slow continuum (see r/K selection theory), and, more recently, some anthropologists and psychologists have applied this continuum to understand within-species variation in trade-offs between reproductive and somatic effort. Some authors argue that childhood environment and early life experiences are highly influential in determining an individual's life history strategy. Factors such as exposure to violence, harsh child-rearing, and environmental unpredictability (e.g., frequent moving, unstable family composition) have been shown to correlate with the proposed behavioral indicators of "fast" life history strategies (e.g., early sexual maturation, unstable couple relationships, impulsivity, and reduced cooperation), where current reproduction is prioritized over future reproduction. Criticism John Tooby, Leda Cosmides, and H. Clark Barrett have rejected claims that mainstream evolutionary psychology neglects development, arguing that their discipline is, in reality, exceptionally interested in and highly considerate of development. In particular, they cite cross-cultural studies as a sort of natural developmental "experiment" which can reveal the influence of culture in shaping developmental outcomes. The authors assert that the arguments of developmental systems theorists consist largely of truisms of which evolutionary psychologists are well aware, and that developmental systems theory has no scientific value because it fails to generate any predictions. Debra Lieberman similarly objected to the characterization of evolutionary psychology as ignorant of developmental principles. Lieberman argued that both developmental systems theorists and evolutionary psychologists share a common goal of uncovering species-typical cognitive architecture, as well as the ontogeny of that architecture. See also Developmental psychology Differential susceptibility Dual inheritance theory Epigenetic theory Evolutionary educational psychology Evolutionary psychology FOXP2 and human evolution Human behavioral ecology Life history theory Nature and nurture Wikipedia:Research resources/Evolution and human behavior References Relevant journals Evolution and Development Research relevant to the interface of evolutionary and developmental biology Evolutionary Psychology (journal) (2014) Further reading Burgess, R. L. & MacDonald (Eds.) (2004). Evolutionary Perspectives on Human Development, 2nd ed. Thousand Oaks, CA: Sage Publications. Ellis, B.J., & Bjorklund, D.F. (Eds.) (2005). Origins of the social mind: Evolutionary psychology and child development. New York: Guilford Press. Ellis, B.J., Essex, M.J., & Boyce, W.T. (2005). Biological sensitivity to context: II. Empirical explorations of an evolutionary-developmental theory. Development and Psychopathology 17, 303–328. Full text Flinn M.V. (2004). Culture and developmental plasticity: Evolution of the social brain. In K. MacDonald and R. L. Burgess (Eds.), Evolutionary Perspectives on Human Development. Chapter 3, pp. 73–98. Thousand Oaks, CA: Sage. Full text Flinn, M.V. & Ward, C.V. (2004). Ontogeny and Evolution of the Social Child. In B. Ellis & D. 
Bjorklund (Eds.), Origins of the social mind: Evolutionary psychology and child development. Chapter 2, pp. 19–44. London: Guilford Press. Full text Geary, D. C. (2005). Folk knowledge and academic learning. In B. J. Ellis & D. F. Bjorklund (Eds.), Origins of the social mind. (pp. 493–519). New York: Guilford Publications. Full text Geary, D. C. (2004). Evolution and cognitive development. In R. Burgess & K. MacDonald (Eds.), Evolutionary perspectives on human development (pp. 99–133). Thousand Oaks, CA: Sage Publications. Full text MacDonald, K. (2005). Personality, Evolution, and Development. In R. Burgess and K. MacDonald (Eds.), Evolutionary Perspectives on Human Development, 2nd edition, pp. 207–242. Thousand Oaks, CA: Sage. Full text MacDonald, K., & Hershberger, S. (2005). Theoretical Issues in the Study of Evolution and Development. In R. Burgess and K. MacDonald (Eds.), Evolutionary Perspectives on Human Development, 2nd edition, pp. 21–72. Thousand Oaks, CA: Sage. Full text Robert, J. S. Taking old ideas seriously: Evolution, development, and human behavior. New Ideas in Psychology. Developmental psychology Evolutionary psychology Human development
Evolutionary developmental psychology
[ "Biology" ]
3,416
[ "Behavioural sciences", "Behavior", "Human development", "Developmental psychology" ]
4,054,298
https://en.wikipedia.org/wiki/M%C4%81rti%C5%86%C5%A1%20Rubenis
Mārtiņš Rubenis (born 26 September 1978) is a retired Latvian luger who competed between 1998 and 2014. He won the bronze medal in the men's singles event at the 2006 Winter Olympics in Turin, becoming the first Latvian (i.e. representing the Republic of Latvia, as opposed to the Soviet Union) to win a medal at the Winter Olympics. He won his second bronze medal at the 2014 Winter Olympics in Sochi in the Team Relay event. In total he competed in five Olympics. Rubenis has also won the gold medal at the 1998 World Junior Championships, as well as the silver and bronze medals at the 2003 and 2004 World Championships respectively. He also won three medals in the Team Relay event at the FIL European Luge Championships, with golds in 2008 and 2010 and a bronze in 2006. Rubenis retired after the 2014 Winter Olympics. He announced his retirement after the men's event, in which he finished 10th, yet a few days later he won a bronze medal as part of the Latvian relay team. As a result, he and his team-mates in the relay squad were featured on a commemorative stamp issued by Latvian Post. Following his retirement, he was appointed as coach of the Latvian national luge team, and additionally uses his skills as a mechanical engineer to design sleds for the team, having already made his own sleds whilst competing. He also became a member of the Latvian Olympic Committee, having previously served as an athlete representative to the International Luge Federation. Rubenis is a musician and DJ and a member of the DJ group Värka Kru. Awards 2011 – The Three-Star Order 2014 – The Cross of Recognition Achievements 1998 – 1st place in World Junior championship 2000 – 11th place in World championship 2000 – 18th place Overall World Cup 2001 – 29th place in World championship 2001 – 25th place Overall World Cup 2002 – 15th place in European championship 2002 – 34th place Overall World Cup 2003 – 2nd place in World championship – team competition 2003 – 2nd place in World championship 2003 – 18th place Overall World Cup 2004 – 3rd place in World championship 2004 – 13th place Overall World Cup 2004 – 12th place Overall Challenge Cup 2005 – 11th place in World championship 2005 – 11th place Overall World Cup 2005 – 9th place Overall Challenge Cup 2006 – 3rd place in European championship – team competition 2006 – 7th place in European championship Olympic Games results 1998 – Nagano 14th place 2002 – Salt Lake City after crash – DNF 2006 – Torino 3rd place 2010 – Vancouver 11th place 2014 – Sochi 10th place 2014 – Sochi 3rd place Team Relay References FIL-Luge profile External links 1978 births Living people Latvian male lugers 21st-century Latvian sportsmen Olympic lugers for Latvia Olympic medalists in luge Olympic bronze medalists for Latvia Lugers at the 1998 Winter Olympics Lugers at the 2002 Winter Olympics Lugers at the 2006 Winter Olympics Lugers at the 2010 Winter Olympics Lugers at the 2014 Winter Olympics Medalists at the 2006 Winter Olympics Medalists at the 2014 Winter Olympics Sportspeople from Riga Recipients of the Cross of Recognition Latvian sports coaches Mechanical engineers Falun Gong practitioners
Mārtiņš Rubenis
[ "Engineering" ]
633
[ "Mechanical engineers", "Mechanical engineering" ]
4,054,664
https://en.wikipedia.org/wiki/NTLM
In a Windows network, NT (New Technology) LAN Manager (NTLM) is a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product. The NTLM protocol suite is implemented in a Security Support Provider, which combines the LAN Manager authentication protocol, NTLMv1, NTLMv2 and NTLM2 Session protocols in a single package. Whether these protocols are used or can be used on a system is governed by Group Policy settings, for which different versions of Windows have different default settings. NTLM passwords are considered weak because they can be brute-forced very easily with modern hardware. Protocol NTLM is a challenge–response authentication protocol which uses three messages to authenticate a client in a connection-oriented environment (connectionless is similar), and a fourth additional message if integrity is desired. First, the client establishes a network path to the server and sends a NEGOTIATE_MESSAGE advertising its capabilities. Next, the server responds with CHALLENGE_MESSAGE, which is used to establish the identity of the client. Finally, the client responds to the challenge with an AUTHENTICATE_MESSAGE. The NTLM protocol uses one or both of two hashed password values, both of which are also stored on the server (or domain controller). Through a lack of salting these are password equivalent, meaning that anyone who obtains the hash value from the server can authenticate without knowing the actual password. The two are the LM hash (a DES-based function applied to the first 14 characters of the password converted to the traditional 8-bit PC charset for the language) and the NT hash (MD4 of the little-endian UTF-16 Unicode password). Both hash values are 16 bytes (128 bits) each. The NTLM protocol also uses one of two one-way functions, depending on the NTLM version; NT LanMan and NTLM version 1 use the DES-based LanMan one-way function (LMOWF), while NTLMv2 uses the NT MD4-based one-way function (NTOWF). NTLMv1 The server authenticates the client by sending an 8-byte random number, the challenge. The client performs an operation involving the challenge and a secret shared between client and server, specifically one of the two password hashes described above. The client returns the 24-byte result of the computation. In fact, in NTLMv1 the computations are usually made using both hashes and both 24-byte results are sent. The server verifies that the client has computed the correct result, and from this infers possession of the secret, and hence the authenticity of the client. Both hashes produce 16-byte quantities. Five bytes of zeros are appended to obtain 21 bytes. The 21 bytes are separated into three 7-byte (56-bit) quantities. Each of these 56-bit quantities is used as a key to DES-encrypt the 64-bit challenge. The three encryptions of the challenge are concatenated to form the 24-byte response. Both the response using the LM hash and the NT hash are returned as the response, but this is configurable.

C = 8-byte server challenge, random
K1 | K2 | K3 = NTLM-Hash | 5-bytes-0
response = DES(K1,C) | DES(K2,C) | DES(K3,C)

NTLMv2 NTLMv2, introduced in Windows NT 4.0 SP4 (and natively supported in Windows 2000), is a challenge–response authentication protocol. 
It is intended as a cryptographically strengthened replacement for NTLMv1, enhancing NTLM security by hardening the protocol against many spoofing attacks and adding the ability for a server to authenticate to the client. NTLMv2 sends two responses to an 8-byte server challenge. Each response contains a 16-byte HMAC-MD5 hash of the server challenge, a fully or partially randomly generated client challenge, and an HMAC-MD5 hash of the user's password and other identifying information. The two responses differ in the format of the client challenge. The shorter response uses an 8-byte random value for this challenge. In order to verify the response, the server must receive the client challenge as part of the response. For this shorter response, the 8-byte client challenge appended to the 16-byte response makes a 24-byte package, which is consistent with the 24-byte response format of the earlier NTLMv1 protocol. In certain non-official documentation (e.g. DCE/RPC Over SMB, Leighton) this response is termed LMv2. The second response sent by NTLMv2 uses a variable-length client challenge which includes (1) the current time in NT Time format, (2) an 8-byte random value (CC2 in the box below), (3) the domain name and (4) some fixed-format fields. The response must include a copy of this client challenge, and is therefore variable length. In non-official documentation, this response is termed NTv2. Both LMv2 and NTv2 hash the client and server challenge with the NT hash of the user's password and other identifying information. The exact formula is to begin with the NT hash, which is stored in the SAM or AD, and continue to hash in, using HMAC-MD5, the username and domain name. In the box below, X stands for the fixed contents of a formatting field.

SC = 8-byte server challenge, random
CC = 8-byte client challenge, random
CC* = (X, time, CC2, domain name)
v2-Hash = HMAC-MD5(NT-Hash, user name, domain name)
LMv2 = HMAC-MD5(v2-Hash, SC, CC)
NTv2 = HMAC-MD5(v2-Hash, SC, CC*)
response = LMv2 | CC | NTv2 | CC*

NTLM2 Session The NTLM2 Session protocol is similar to MS-CHAPv2. It consists of authentication from NTLMv1 combined with session security from NTLMv2. Briefly, the NTLMv1 algorithm is applied, except that an 8-byte client challenge is appended to the 8-byte server challenge and MD5-hashed. The low-order 8-byte half of the hash result is the challenge utilized in the NTLMv1 computation. The client challenge is returned in one 24-byte slot of the response message; the 24-byte calculated response is returned in the other slot. This is a strengthened form of NTLMv1 which maintains the ability to use existing Domain Controller infrastructure yet avoids a dictionary attack by a rogue server. For a fixed X, the server computes a table where location Y has value K such that Y=DES_K(X). Without the client participating in the choice of challenge, the server can send X, look up response Y in the table and get K. This attack can be made practical by using rainbow tables. However, existing NTLMv1 infrastructure allows that the challenge/response pair is not verified by the server, but sent to a Domain Controller for verification. Using NTLM2 Session, this infrastructure continues to work if the server substitutes for the challenge the hash of the server and client challenges. 
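To make the response computations above concrete, the following Python sketch implements the NTLMv1 and NTLMv2 formulas as described. It is illustrative only, not a vetted implementation: PyCryptodome is an assumed dependency for DES, MD4 availability in hashlib depends on the underlying OpenSSL build, and the upper-casing of the user name in the v2 hash follows the MS-NLMP specification rather than being spelled out in the text above.

import hashlib
import hmac

from Crypto.Cipher import DES  # PyCryptodome, an assumed dependency


def nt_hash(password: str) -> bytes:
    # NT hash: MD4 of the little-endian UTF-16 password.
    # Note: MD4 may be disabled in newer OpenSSL builds backing hashlib.
    return hashlib.new("md4", password.encode("utf-16-le")).digest()


def _des_key(key7: bytes) -> bytes:
    # Expand a 7-byte key to DES's 8-byte format: 7 key bits per output
    # byte, parity bit zeroed (DES implementations ignore parity).
    bits = "".join(f"{b:08b}" for b in key7)
    return bytes(int(bits[i:i + 7] + "0", 2) for i in range(0, 56, 7))


def ntlmv1_response(password_hash: bytes, challenge: bytes) -> bytes:
    # K1 | K2 | K3 = hash | 5-bytes-0; response = DES(K1,C)|DES(K2,C)|DES(K3,C)
    padded = password_hash + b"\x00" * 5
    keys = (padded[0:7], padded[7:14], padded[14:21])
    return b"".join(DES.new(_des_key(k), DES.MODE_ECB).encrypt(challenge)
                    for k in keys)


def ntlmv2_response(password: str, user: str, domain: str,
                    server_challenge: bytes, client_blob: bytes) -> bytes:
    # v2-Hash = HMAC-MD5(NT-Hash, user name + domain name);
    # NTv2 = HMAC-MD5(v2-Hash, SC | CC*); response = NTv2 | CC*.
    v2_hash = hmac.new(nt_hash(password),
                       (user.upper() + domain).encode("utf-16-le"),
                       hashlib.md5).digest()
    ntv2 = hmac.new(v2_hash, server_challenge + client_blob,
                    hashlib.md5).digest()
    return ntv2 + client_blob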
NTLMv1
Client<-Server: SC
Client->Server: H(P,SC)
Server->DomCntl: H(P,SC), SC
Server<-DomCntl: yes or no

NTLM2 Session
Client<-Server: SC
Client->Server: H(P,H'(SC,CC)), CC
Server->DomCntl: H(P,H'(SC,CC)), H'(SC,CC)
Server<-DomCntl: yes or no

Availability and use of NTLM Since 2010, Microsoft no longer recommends NTLM in applications: Implementers should be aware that NTLM does not support any recent cryptographic methods, such as AES or SHA-256. It uses cyclic redundancy checks (CRC) or MD5 for integrity, and RC4 for encryption. Deriving a key from a password is as specified in RFC1320 and FIPS46-2. Therefore, applications are generally advised not to use NTLM. Despite these recommendations, NTLM is still widely deployed on systems. A major reason is to maintain compatibility with older systems. However, it can be avoided in some circumstances. Microsoft has added the NTLM hash to its implementation of the Kerberos protocol to improve interoperability (in particular, the RC4-HMAC encryption type). According to an independent researcher, this design decision allows Domain Controllers to be tricked into issuing an attacker with a Kerberos ticket if the NTLM hash is known. Microsoft adopted Kerberos as the preferred authentication protocol for Windows 2000 and subsequent Active Directory domains. Kerberos is typically used when a server belongs to a Windows Server domain. Microsoft recommends developers neither to use Kerberos nor the NTLM Security Support Provider (SSP) directly. Your application should not access the NTLM security package directly; instead, it should use the Negotiate security package. Negotiate allows your application to take advantage of more advanced security protocols if they are supported by the systems involved in the authentication. Currently, the Negotiate security package selects between Kerberos and NTLM. Negotiate selects Kerberos unless it cannot be used by one of the systems involved in the authentication. Use of the NTLM Security Support Provider The NTLM SSP is used in the following situations:

The client is authenticating to a server that doesn't belong to a domain, or no Active Directory domain exists (commonly referred to as "workgroup" or "peer-to-peer"). The server must have the "password-protected sharing" feature enabled, which is not enabled by default and which is mutually exclusive with HomeGroup on some versions of Windows. When server and client both belong to the same HomeGroup, a protocol similar to Kerberos, Public Key Cryptography based User to User Authentication, will be used instead of NTLM. HomeGroup is probably the easiest way to share resources on a small network, requiring minimal setup, even compared to configuring a few additional users to be able to use password-protected sharing, which may mean it is used much more than password-protected sharing on small networks and home networks.

The server is a device that supports SMB, such as NAS devices and network printers, for which the NTLM SSP may offer the only supported authentication method. Some implementations of SMB or older distributions of e.g. Samba may cause Windows to negotiate NTLMv1 or even LM for outbound authentication with the SMB server, allowing the device to work even though it may be loaded with outdated, insecure software, regardless of whether it is a new device.

The server is a member of a domain but Kerberos cannot be used. 
The client is authenticating to a server using an IP address (and no reverse name resolution is available).

The client is authenticating to a server that belongs to a different Active Directory forest that has a legacy NTLM trust instead of a transitive inter-forest trust.

A firewall would otherwise restrict the ports required by Kerberos (typically TCP 88).

Use of protocol versions After it has been decided either by the application developer or by the Negotiate SSP that the NTLM SSP be used for authentication, Group Policy dictates the ability to use each of the protocols that the NTLM SSP implements. There are six authentication levels, governed by the LMCompatibilityLevel setting (a sketch for reading it follows below):

Send LM & NTLM responses: Clients use LM and NTLM authentication, and never use NTLMv2 session security; DCs accept LM, NTLM, and NTLMv2 authentication.
Send LM & NTLM - use NTLMv2 session security if negotiated: Clients use LM and NTLM authentication, and use NTLMv2 session security if the server supports it; DCs accept LM, NTLM, and NTLMv2 authentication.
Send NTLM response only: Clients use NTLM authentication only, and use NTLMv2 session security if the server supports it; DCs accept LM, NTLM, and NTLMv2 authentication.
Send NTLMv2 response only: Clients use NTLMv2 authentication only, and use NTLMv2 session security if the server supports it; DCs accept LM, NTLM, and NTLMv2 authentication.
Send NTLMv2 response only\refuse LM: Clients use NTLMv2 authentication only, and use NTLMv2 session security if the server supports it; DCs refuse LM (accept only NTLM and NTLMv2 authentication).
Send NTLMv2 response only\refuse LM & NTLM: Clients use NTLMv2 authentication only, and use NTLMv2 session security if the server supports it; DCs refuse LM and NTLM (accept only NTLMv2 authentication).

Here DC means Domain Controller, though the term is misleading in this context: any computer acting as a server and authenticating a user fulfills the role of the DC, for example a Windows computer with a local account such as Administrator when that account is used during a network logon. Prior to Windows NT 4.0 Service Pack 4, the SSP would negotiate NTLMv1 and fall back to LM if the other machine did not support it. Starting with Windows NT 4.0 Service Pack 4, the SSP would negotiate NTLMv2 Session whenever both client and server supported it. Up to and including Windows XP, this used either 40- or 56-bit encryption on non-U.S. computers, since the United States had severe restrictions on the export of encryption technology at the time. Starting with Windows XP SP3, 128-bit encryption could be added by installing an update, and on Windows 7, 128-bit encryption became the default. In Windows Vista and above, LM has been disabled for inbound authentication. Windows NT-based operating systems up through and including Windows Server 2003 store two password hashes, the LAN Manager (LM) hash and the Windows NT hash. Starting in Windows Vista, the capability to store both remains, but the LM hash is turned off by default. This means that LM authentication no longer works if the computer running Windows Vista acts as the server. Prior versions of Windows (back as far as Windows NT 4.0 Service Pack 4) could be configured to behave this way, but it was not the default. Weakness and vulnerabilities NTLM remains vulnerable to the pass the hash attack, which is a variant on the reflection attack which was addressed by Microsoft security update MS08-068. For example, Metasploit can be used in many cases to obtain credentials from one machine which can be used to gain control of another machine. 
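Returning to the protocol-version policy above: on a live system the configured level can be inspected programmatically. A minimal sketch, assuming the conventional LmCompatibilityLevel value under the Lsa registry key (the path and value name should be verified for a given Windows build):

import winreg  # Windows-only standard-library module

KEY = r"SYSTEM\CurrentControlSet\Control\Lsa"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
    try:
        # Value is an integer 0-5, matching the six levels listed above.
        level, _ = winreg.QueryValueEx(key, "LmCompatibilityLevel")
        print(f"LmCompatibilityLevel = {level}")
    except FileNotFoundError:
        # Not explicitly configured; the OS default for this version applies.
        print("LmCompatibilityLevel not set")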
The Squirtle toolkit can be used to leverage web site cross-site scripting attacks into attacks on nearby assets via NTLM. In February 2010, Amplia Security discovered several flaws in the Windows implementation of the NTLM authentication mechanism which broke the security of the protocol, allowing attackers to gain read/write access to files and remote code execution. One of the attacks presented included the ability to predict pseudo-random numbers and challenges/responses generated by the protocol. These flaws had been present in all versions of Windows for 17 years. The security advisory explaining these issues included fully working proof-of-concept exploits. All these flaws were fixed by MS10-012. In 2012, it was demonstrated that every possible 8-character NTLM password hash can be cracked in under 6 hours. In 2019, this time was reduced to roughly 2.5 hours by using more modern hardware. Also, rainbow tables are available for eight- and nine-character NTLM passwords. Shorter passwords can be recovered by brute-force methods. In 2019, EvilMog published a tool called the ntlmv1-multitool to format NTLMv1 challenge responses in a hashcat-compatible cracking format. With hashcat and sufficient GPU power the NTLM hash can be derived using a known-plaintext attack by cracking the DES keys with hashcat mode 14000, as demonstrated by atom on the hashcat forums. Note that the password-equivalent hashes used in pass-the-hash attacks and password cracking must first be "stolen" (such as by compromising a system with permissions sufficient to access hashes). Also, these hashes are not the same as the NTLMSSP_AUTH "hash" transmitted over the network during a conventional NTLM authentication. Compatibility with Linux NTLM implementations for Linux include Cntlm and winbind (part of Samba), which allow Linux applications to use NTLM proxies. FreeBSD also supports storing passwords via Crypt (C) in the insecure NT-Hash form. See also LAN Manager NTLMSSP Integrated Windows Authentication Kerberos References External links Online NTLM hash crack using Rainbow tables NT LAN Manager (NTLM) Authentication Protocol Specification Cntlm – NTLM, NTLMSR, NTLMv2 Authentication Proxy and Accelerator Personal HTTP(S) and SOCKS5 proxy for NTLM-unaware applications (Windows/Linux/UNIX) The NTLM Authentication Protocol and Security Support Provider A detailed analysis of the NTLM protocol. MSDN article explaining the protocol and that it has been renamed MSDN page on NTLM authentication Libntlm – a free implementation. NTLM Authorization Proxy Server software that allows users to authenticate via an MS Proxy Server. Installing NTLM authentication – NTLM set-up instructions for Samba and Midgard on Linux NTLM version 2 (NTLMv2) and the LMCompatibilityLevel setting that governs it Jespa – Java Active Directory Integration Full NTLM security service provider with server-side NETLOGON validation (commercial but free up to 25 users) EasySSO - NTLM Authenticator for JIRA NTLM Authenticator utilising Jespa library to provide IWA for Atlassian products. 
ntlmv2-auth NTLMv2 API and Servlet Filter for Java A ntlm message generator tool WAFFLE – Java/C# Windows Authentication Framework objectif-securite (Rainbow tables for ophcrack) Px for Windows - An HTTP proxy server to automatically authenticate through an NTLM proxy NTLMv1 Multi Tool - A tool for formatting NTLMv1 challenge responses into a format that can be cracked with hashcat Microsoft Windows security technology Computer network security Computer access control protocols
NTLM
[ "Engineering" ]
3,905
[ "Cybersecurity engineering", "Computer networks engineering", "Computer network security" ]
4,055,011
https://en.wikipedia.org/wiki/Eastern%20falanouc
The eastern falanouc (Eupleres goudotii) is a rare mongoose-like mammal in the carnivoran family Eupleridae, endemic to Madagascar. It is classified alongside the western falanouc (Eupleres major), recognized only in 2010, in the genus Eupleres. Falanoucs have several peculiarities. They have no anal or perineal glands (unlike their closest relative, the fanaloka), nonretractile claws, and a unique dentition: the canines and premolars are backwards-curving and flat. This is thought to be related to their prey, mostly invertebrates such as worms, slugs, snails, and larvae. It lives primarily in the lowland rainforests of eastern Madagascar, while E. major is found in northwest Madagascar. It is solitary and territorial, but whether it is nocturnal or diurnal is unknown. It is small (about 50 centimetres long with a 24-centimetre-long tail) and shy (clawing, not biting, in self-defence). It most closely resembles the mongooses with its long snout and low body, though its colouration is plain and brown (most mongooses have colouring schemes such as striping, banding, or other variations on the hands and feet). Its life cycle displays periods of fat buildup during April and May, before the dry months of June and July. It has a brief courting period and weaning period, the young being weaned before the next mating season. Its reproductive cycle is fast. The offspring (one per litter) are born in burrows with open eyes and can move with the mother through dense foliage at only two days old. In nine weeks, the already well-developed young are on solid food, and shortly thereafter they leave their mothers. Though it is fast in gaining mobility (so as to follow its mother on forages), it grows at a slower rate than comparably sized carnivorans. "Falanoucs are threatened by habitat loss, humans, dogs and an introduced competitor, the small Indian civet (Viverricula indica)." The small Indian civet is also a carnivore, and it shows substantial spatial and temporal overlap with Eupleres goudotii where it has been introduced into the same ecosystem. This overlap has been shown to potentially have a negative impact on native carnivore populations such as E. goudotii, because the two species compete for similar resources. References Sources Macdonald, David (ed). The Encyclopedia of Mammals. (New York, 1984) External links Eupleres goudotii - Animal Diversity Web Images and Video - ARKive.org EDGE species eastern falanouc Mammals of Madagascar Endemic fauna of Madagascar eastern falanouc
Eastern falanouc
[ "Biology" ]
590
[ "EDGE species", "Biodiversity" ]
4,055,437
https://en.wikipedia.org/wiki/Picocell
A picocell is a small cellular base station typically covering a small area, such as in-building (offices, shopping malls, train stations, stock exchanges, etc.), or more recently in-aircraft. In cellular networks, picocells are typically used to extend coverage to indoor areas where outdoor signals do not reach well, or to add network capacity in areas with very dense phone usage, such as train stations or stadiums. Picocells provide coverage and capacity in areas difficult or expensive to reach using the more traditional macrocell approach. Overview In cellular wireless networks, such as GSM, the picocell base station is typically a low-cost, small (typically the size of a ream of A4 paper), reasonably simple unit that connects to a base station controller (BSC). Multiple picocell 'heads' connect to each BSC: the BSC performs radio resource management and hand-over functions, and aggregates data to be passed to the mobile switching centre (MSC) or the gateway GPRS support node (GGSN). Connectivity between the picocell heads and the BSC typically consists of in-building wiring. Although originally deployed systems (1990s) used plesiochronous digital hierarchy (PDH) links such as E1/T1 links, more recent systems use Ethernet cabling. Aircraft use satellite links. More recent work has developed the concept towards a head unit containing not only a picocell, but also many of the functions of the BSC and some of the MSC. This form of picocell is sometimes called an access point base station or 'enterprise femtocell'. In this case, the unit contains all the capability required to connect directly to the Internet, without the need for the BSC/MSC infrastructure. This is a potentially more cost-effective approach. Picocells offer many of the benefits of "small cells" (similar to femtocells) in that they improve data throughput for mobile users and increase capacity in the mobile network. In particular, the integration of picocells with macrocells through a heterogeneous network can be useful for seamless hand-offs and increased mobile data capacity. Picocells are available for most cellular technologies including GSM, CDMA, UMTS and LTE from manufacturers including ip.access, ZTE, Huawei and Airwalk. Range Typically the range of a microcell is less than two kilometers, a picocell is 200 meters or less, and a femtocell is on the order of 10 meters. AT&T nevertheless markets its femtocell-range product as a "microcell"; "AT&T 3G MicroCell" is used as a trademark and not necessarily as a reference to microcell technology. See also Femtocell Macrocell Microcell Small cell References Mobile telecommunications
Picocell
[ "Technology" ]
646
[ "Mobile telecommunications" ]
4,055,584
https://en.wikipedia.org/wiki/Rotary%20vane%20pump
A rotary vane pump is a type of positive-displacement pump that consists of vanes mounted to a rotor that rotates inside a cavity. In some cases, these vanes can have variable length and/or be tensioned to maintain contact with the walls as the pump rotates. This type of pump is considered less suitable than other pumps for high-viscosity and high-pressure fluids. Rotary vane pumps can endure short periods of dry operation, and are considered good for low-viscosity fluids. Types The simplest vane pump has a circular rotor rotating inside a larger circular cavity. The centers of these two circles are offset, causing eccentricity. Vanes are mounted in slots cut into the rotor. The vanes are allowed a certain limited range of movement within these slots such that they can maintain contact with the wall of the cavity as the rotor rotates. The vanes may be encouraged to maintain such contact through means such as springs, gravity, or centrifugal force. A small amount of oil may be present within the mechanism to help create a better seal between the tips of the vanes and the cavity's wall. The contact between the vanes and the cavity wall divides the cavity into "vane chambers" that do the pumping work. On the suction side of the pump, the vane chambers increase in volume and are thus filled with fluid forced in by the inlet pressure (the pressure of the system being pumped, sometimes just the atmosphere). On the discharge side of the pump, the vane chambers decrease in volume, compressing the fluid and thus forcing it out of the outlet. The action of the vanes pulls through the same volume of fluid with each rotation. Multi-stage rotary-vane vacuum pumps, which force the fluid through a series of two or more rotary-vane pump mechanisms to enhance the pressure, can attain vacuum pressures as low as 10−6 bar (0.1 Pa). Uses Vane pumps are commonly used as high-pressure hydraulic pumps and in automobiles, including supercharging, power-steering, air conditioning, and automatic-transmission pumps. Pumps for mid-range pressures include applications such as carbonators for fountain soft-drink dispensers and espresso coffee machines. Furthermore, vane pumps can be used in low-pressure gas applications such as secondary air injection for auto exhaust emission control, or in low-pressure chemical vapor deposition systems. Rotary-vane pumps are also a common type of vacuum pump, with two-stage pumps able to reach pressures well below 10−6 bar. These are found in such applications as providing braking assistance in large trucks and diesel-powered passenger cars (whose engines do not generate intake vacuum) through a braking booster, in most light aircraft to drive gyroscopic flight instruments, in evacuating refrigerant lines during installation of air conditioners, in laboratory freeze dryers, and in vacuum experiments in physics. In the vane pump, the pumped gas and the oil are mixed within the pump, and so they must be separated externally. Therefore, the inlet and the outlet have a large chamber, perhaps with swirl, where the oil drops fall out of the gas. Sometimes the inlet has louvers cooled by the room air (the pump is usually 40 K hotter) to condense cracked pumping oil and water, and let it drop back into the inlet. When these pumps are used in high-vacuum systems (where the inflow of gas into the pump becomes very low), a significant concern is contamination of the entire system by molecular oil back-streaming. 
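Because the vane chambers sweep the same volume on every rotation, the ideal (slip-free) delivery of a fixed-displacement vane pump scales linearly with shaft speed. A minimal sketch of that relationship; the displacement and speed figures are illustrative assumptions, not data from any particular pump:

def ideal_flow_l_per_min(displacement_cm3_per_rev: float, rpm: float) -> float:
    # Positive-displacement behaviour: the delivered volume per revolution
    # is fixed, so ideal flow = displacement x shaft speed. Real pumps
    # deliver slightly less because of internal leakage ("slip").
    return displacement_cm3_per_rev * rpm / 1000.0

# Example: an assumed 20 cm^3/rev pump at 1500 rpm -> 30 L/min ideal flow.
print(ideal_flow_l_per_min(20.0, 1500.0))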
History Like many simple mechanisms, it is unclear when the rotary vane pump was invented. Agostino Ramelli's 1588 book Le diverse et artificiose machine del capitano Agostino Ramelli ("The Various and Ingenious Machines of Captain Agostino Ramelli") contains a description and an engraving of a rotary vane pump along with other types of rotary pumps, which suggests that the design was known at the time. In more recent times, vane pumps also show up in 19th-century patent records. In 1858, a US patent was granted to one W. Pierce for "a new and useful Improvement in Rotary Pumps", which acknowledged as prior art sliding blades "used in connection with an eccentric inner surface". In 1874, a Canadian patent was granted to Charles C. Barnes of Sackville, New Brunswick. There have been various improvements since, including a variable vane pump for gases (1909). Variable-displacement vane pump One of the major advantages of the vane pump is that the design readily lends itself to become a variable-displacement pump, rather than a fixed-displacement pump such as a spur-gear or a gerotor pump. The centerline distance from the rotor to the eccentric ring determines the pump's displacement. By allowing the eccentric ring to pivot or translate relative to the rotor, the displacement can be varied. It is even possible for a vane pump to pump in reverse if the eccentric ring moves far enough, although performance cannot be optimized to pump in both directions. This makes the design attractive for hydraulic-control oil pumps. A variable-displacement vane pump is used as an energy-saving device and has been used in many applications, including automotive transmissions, for over 30 years. Materials Externals (head, casing) – cast iron, ductile iron, steel, brass, plastic, and stainless steel Vane, pushrods – carbon graphite, PEEK End plates – carbon graphite Shaft seal – component mechanical seals, industry-standard cartridge mechanical seals, and magnetically driven pumps Packing – available from some vendors, but not usually recommended for thin liquid service See also Guided-rotor compressor Powerplus supercharger Dry rotary vane pump diagram References External links H. Eugene Bassett's articulated displacer compressor Vane Pump Animation Pumps Canadian inventions
Rotary vane pump
[ "Physics", "Chemistry" ]
1,201
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
4,055,589
https://en.wikipedia.org/wiki/Proprietary%20hardware
Proprietary hardware is computer hardware whose interface is controlled by the proprietor, often under patent or trade-secret protection. Historically, most early computer hardware was designed as proprietary until the 1980s, when the IBM PC changed this paradigm. Earlier, in the 1970s, many vendors had tried to challenge IBM's monopoly in the mainframe computer market by reverse engineering and producing hardware components electrically compatible with expensive equipment and (usually) able to run the same software. Those vendors were nicknamed plug compatible manufacturers (PCMs). See also Micro Channel architecture, a commonly cited historical example of proprietary hardware Vendor lock-in Proprietary device drivers Proprietary firmware Proprietary software References Computer peripherals
Proprietary hardware
[ "Technology" ]
131
[ "Computer peripherals", "Computing stubs", "Components", "Computer hardware stubs" ]
4,055,626
https://en.wikipedia.org/wiki/Sorption%20pump
The sorption pump is a vacuum pump that creates a vacuum by adsorbing molecules on a very porous material like molecular sieve, which is cooled by a cryogen, typically liquid nitrogen. The ultimate pressure is about 10−2 mbar. With special techniques this can be lowered to 10−7 mbar. The main advantages are the absence of oil or other contaminants, low cost, and vibration-free operation because there are no moving parts. The main disadvantages are that it cannot operate continuously and cannot effectively pump hydrogen, helium and neon, all gases with a condensation temperature lower than that of liquid nitrogen. The main application is as a roughing pump for a sputter-ion pump in ultra-high vacuum experiments, for example in surface physics. Construction A sorption pump is usually constructed in stainless steel, aluminium or borosilicate glass. It can be a simple Pyrex flask filled with molecular sieve or an elaborate metal construction consisting of a metal flask containing perforated tubing and heat-conducting fins. A pressure relief valve can be installed. The design only influences the pumping speed and not the ultimate pressure that can be reached. The design details are a trade-off between fast cooling using heat-conducting fins and high gas conductance using perforated tubing. The typical molecular sieve used is a synthetic zeolite with a pore diameter around 0.4 nanometer (Type 4A) and a surface area of about 500 m2/g. The sorption pump contains between 300 g and 1.2 kg of molecular sieve. A 15-liter system will be pumped down to about 10−2 mbar by 300 g of molecular sieve. Operation The sorption pump is a cyclic pump and its cycle has 3 phases: sorption, desorption and regeneration. In the sorption phase the pump is actually used to create a vacuum. This is achieved by cooling the pump body to low temperatures, typically by immersing it in a Dewar flask filled with liquid nitrogen. Gases will now either condense or be adsorbed by the large surface of the molecular sieve. In the desorption phase the pump is allowed to warm up to room temperature, and the gases escape through the pressure relief valve or other opening to the atmosphere. If the pump has been used to pump toxic, flammable or other dangerous gases, one has to be careful to vent safely into the atmosphere, as all gases pumped during the sorption phase will be released during the desorption phase. In the regeneration phase the pump body is heated to 300 °C to drive off water vapor that does not desorb at room temperature and accumulates in the molecular sieve. It typically takes 2 hours to fully regenerate a pump. The pump can be used in a cycle of sorption and desorption until it loses too much efficiency and is regenerated, or in a cycle where sorption and desorption are always followed by regeneration. After filling a sorption pump with new molecular sieve it should always be regenerated, as the new molecular sieve is probably saturated with water vapor. Also, when a pump is not in use it should be closed off from the atmosphere to prevent water vapor saturation. Performance improvement Pumping capacity can be improved by prepumping the system with another simple and clean vacuum pump like a diaphragm pump, or even a water aspirator or compressed-air venturi pump. Sequential or multistage pumping can be used to attain lower pressures. In this case two or more pumps are connected in parallel to the vacuum vessel. Each pump has a valve to isolate it from the vacuum vessel. At the start of the pump-down all valves are open. 
The first pump is cooled down while the others are still hot. When the first pump has reached its ultimate pressure it is closed off and the next pump is cooled down. Final pressures are in the 10−4 mbar region. What is left is mainly helium, because it is barely pumped at all: the final pressure almost equals the partial pressure of helium in air. A sorption pump pumps all gases effectively with the exception of hydrogen, helium and neon, which do not condense at liquid nitrogen temperatures and are not efficiently adsorbed by the molecular sieve because of their small molecular size. This problem can be solved by purging the vacuum system with dry pure nitrogen before pump-down. In a purged system with aspirator rough pumping, ultimate pressures of 10−4 mbar for a single sorption pump and 10−7 mbar for sequential pumping can be reached. A typical source of dry pure nitrogen would be the head space of a liquid nitrogen Dewar. It has been suggested that by applying a dynamic pumping technique, hydrogen, helium and neon can also be pumped without resorting to dry nitrogen purging. This is done by precooling the pump with the valve to the vacuum vessel closed. The valve is opened when the pump is cold, and the inrush of adsorbable gases will carry all other gases into the pump. The valve is closed before hydrogen, helium or neon can back-migrate into the vacuum vessel. Sequential pumping can also be applied. No final pressures are given. Continuous pumping may be simulated by using two pumps in parallel and letting one pump the system while the other, temporarily sealed off from the system, is in the desorption phase and venting to the atmosphere. When that pump is well desorbed it is cooled down and reconnected to the system; the other pump is then sealed off and goes into desorption. This becomes a continuous cycle. 
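To put the quoted capacity figures in perspective, the gas load for the 15-liter example above can be estimated with the ideal gas law; the starting pressure and temperature below are illustrative assumptions:

R = 83.14  # gas constant in mbar*L/(mol*K)

def gas_load_mol(volume_l: float, pressure_mbar: float, temp_k: float) -> float:
    # Ideal gas law, n = pV / (RT): the amount of gas the sieve must
    # capture to take the vessel from the starting pressure to near zero.
    return pressure_mbar * volume_l / (R * temp_k)

# Assumed: 15 L vessel, atmospheric start (1013 mbar), room temperature.
n = gas_load_mol(15.0, 1013.0, 293.0)
print(f"{n:.2f} mol")  # ~0.62 mol, roughly 18 g of air for 300 g of sieve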
Sorption pump
[ "Physics", "Engineering" ]
1,161
[ "Vacuum pumps", "Vacuum systems", "Vacuum", "Matter" ]
4,055,635
https://en.wikipedia.org/wiki/Siblicide
Siblicide (attributed by behavioural ecologist Doug Mock to Barbara M. Braun) is the killing of an infant individual by its close relatives (full or half siblings). It may occur directly between siblings or be mediated by the parents, and is driven by the direct fitness benefits to the perpetrator and sometimes its parents. Siblicide has mainly, but not only, been observed in birds. (The word is also used as a unifying term for fratricide and sororicide in the human species; unlike these more specific terms, it leaves the sex of the victim unspecified.) Siblicidal behavior can be either obligate or facultative. Obligate siblicide is when a sibling almost always ends up being killed. Facultative siblicide means that siblicide may or may not occur, based on environmental conditions. In birds, obligate siblicidal behavior results in the older chick killing the other chick(s). In facultatively siblicidal animals, fighting is frequent, but does not always lead to the death of a sibling; this type of behavior often exists in patterns for different species. For instance, in the blue-footed booby, a sibling may be hit by a nest mate only once a day for a couple of weeks and then attacked at random, leading to its death. More birds are facultatively siblicidal than obligately siblicidal. This is perhaps because siblicide takes a great amount of energy and is not always advantageous. Siblicide generally only occurs when resources, specifically food sources, are scarce. Siblicide is advantageous for the surviving offspring because they have eliminated most or all of their competition. It is also somewhat advantageous for the parents because the surviving offspring most likely have the strongest genes, and therefore likely have the highest fitness. Some parents encourage siblicide, while others prevent it. If resources are scarce, the parents may encourage siblicide because only some offspring will survive anyway, so they want the strongest offspring to survive. By letting the offspring kill each other, the parents save time and energy that would otherwise be wasted on feeding offspring that most likely would not survive. Models The insurance egg hypothesis (IEH) has become the most widely supported explanation for avian siblicide as well as for the overproduction of eggs in siblicidal birds. The IEH states that the extra egg(s) produced by the parent serve as an "insurance policy" in case the first egg fails (either it did not hatch or the chick died soon after hatching). When both eggs hatch successfully, the second chick is the so-called marginal offspring; it is marginal in the sense that it can add to or subtract from the evolutionary success of its family members. It can increase reproductive and evolutionary success in two primary ways. Firstly, it represents an extra unit of parental success if it survives along with its siblings. In the context of Hamilton's inclusive fitness theory, the marginal chick increases the total number of offspring successfully produced by the parent and therefore adds to the gene pool that the parent bird passes to the next generation. Secondly, it can serve as a replacement for any of its siblings that fail to hatch or die prematurely. Inclusive fitness is defined as an animal's individual reproductive success, plus the positive and/or negative effects that animal has on its siblings' reproductive success, multiplied by the animal's degree of kinship. 
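The inclusive-fitness bookkeeping just described can be made concrete with a small sketch; the numbers below are purely illustrative assumptions, not data from any study:

def inclusive_fitness(own_rs, effects_on_kin):
    # own_rs: the animal's individual reproductive success.
    # effects_on_kin: list of (effect on a relative's reproductive
    # success, coefficient of relatedness r to that relative).
    return own_rs + sum(effect * r for effect, r in effects_on_kin)

# Illustrative only: a chick gains 1.0 unit of direct success by killing
# a full sibling (r = 0.5) that would otherwise have produced 1.2 units.
net = inclusive_fitness(1.0, [(-1.2, 0.5)])
print(net)  # 0.4 > 0, so siblicide can pay despite the kin cost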
In instances of siblicide, the victim is usually the youngest sibling. This sibling's reproductive value can be measured by how much it enhances or detracts from the success of other siblings; therefore this individual is considered to be marginal. The marginal sibling can act as an additional element of parental success if it, as well as its siblings, survives. If an older sibling happens to die unexpectedly, the marginal sibling is there to take its place; this acts as insurance against the death of another sibling, whose value depends on the likelihood of the older sibling dying. Parent–offspring conflict is a theory which states that offspring can take actions to advance their own fitness while decreasing the fitness of their parents, and that parents can increase their own fitness while simultaneously decreasing the fitness of their offspring. This is one of the driving forces of siblicide because it increases the fitness of the offspring by decreasing the amount of competition they have. Parents may either discourage or accept siblicide, depending on whether it increases the probability of their offspring surviving to reproduce. Mathematical representation The cost and effect siblicide has on a brood's reproductive success can be broken down algebraically. Let $E$ be some measure of the total parental care or parental investment (PI) in the entire brood, with an absolute maximum possible value $E_{\max}$ (hence parental effort is constrained to $0 \le E \le E_{\max}$). Parents investing $E$ units of care in the current batch of offspring can expect a future reproductive success $f(E)$ satisfying

$f(E) = f_0$ for $E = 0$
$0 < f(E) < f_0$ for $0 < E < E_{\max}$
$f(E) = 0$ for $E = E_{\max}$

where $f_0$ is the parents' future reproductive success when they make no reproductive attempt (reproduction postponed to the next season), and a shape parameter $s$ determines how steeply $f(E)$ falls, i.e. the relationship between parental investment and the cost of reproduction; one simple form consistent with this description is $f(E) = f_0\,(1 - (E/E_{\max})^s)$. The function models the risk / cost to the parents' own survival into the next breeding season, given the extra exertion to protect and provide food for their young; it indicates that as parental care increases, the future reproductive success of the parent decreases. The parents' future reproductive success is modeled as an exhaustible asset, which drops to zero (no possibility of the parents breeding again later) if they provide self-sacrificial care ($E = E_{\max}$), whereas the parents' own future prospects remain the same, or nearly the same, if they provide no care, or very little care ($E \approx 0$). The probability that the offspring thrive to join the breeding population after receiving $E$ units of parental care is a function $P(E)$ satisfying

$P(E) = 0$ for $0 \le E \le E_{\min}$
$0 < P(E) < 1$, increasing with $E$, for $E > E_{\min}$

where $E_{\min}$ is the minimum amount of parental care required for the season's offspring to have any chance of growing up to themselves become breeding adults. The relation indicates that with inadequate, or merely adequate, care ($E \le E_{\min}$) the whole brood will surely fail to survive to become reproducing adults, but that with more than adequate care ($E > E_{\min}$) the probability of the offspring living and breeding in the next season rises (only becoming certain with a hypothetically "infinite" amount of parental care, $E \to \infty$). $E_{\min}$ is thus the minimum effort required from the parents to give their offspring any non-zero chance of their brood / litter maturing to themselves become breeding adults. If $E_{\min} \approx E_{\max}$ then the parents just barely have a chance of producing any offspring, and have only one chance to breed in their lifetime, like many seasonal insects. 
If $E_{\max}$ comfortably exceeds $E_{\min}$, then the parents might raise several successful offspring while still themselves having a fair chance of breeding again; in that case, effort just above $E_{\min}$ would represent a minimalist strategy, where the parents spend little effort and the underfed offspring just barely have any chance of survival, but the parents conserve their own chance of breeding again later. At the other extreme, $E = E_{\max}$ would represent a parental "go for broke" strategy, where the parents will be unable to breed any more, but ensure maximal brood survival (e.g. salmon or octopuses laying myriad eggs, but the parents always dying soon after they breed). There is some kind of middle ground, where the parents raise as many offspring as possible, with some risk to their own future, but not so much that they completely squander their own chance of breeding again. Examples In birds Cattle egrets, Bubulcus ibis, exhibit asynchronous hatching and androgen loading in the first two eggs of their normal three-egg clutch. This results in older chicks being more aggressive and having a developmental head start. If food is scarce the third chick often dies or is killed by the larger siblings, and so parental effort is distributed between the remaining chicks, which are hence more likely to survive to reproduce. The extra "excess" egg is possibly laid either to exploit the possibility of elevated food abundance (as seen in the blue-footed booby, Sula nebouxii) or to insure against the chance of sterility in one egg. This is suggested by studies of the common grackle, Quiscalus quiscula, and the masked booby, Sula dactylatra. The theory of kin selection may be seen as a genetically mediated altruistic response within closely related individuals whereby the fitness conferred by the altruist to the recipient outweighs the cost to itself or the sibling/parent group. The fact that such a sacrifice occurs indicates an evolutionary tendency in some taxa toward improved vertical gene transmission in families, or a higher percentage of the unit reaching a reproductive age in a resource-limited environment. The closely related masked and Nazca boobies are both obligately siblicidal species, while the blue-footed booby is a facultatively siblicidal species. In a facultatively siblicidal species, aggression occurs between siblings but is not always lethal, whereas in an obligately siblicidal species, aggression between siblings always leads to the death of one of the offspring. All three species have an average brood size of two eggs, which are laid within approximately four days of each other. In the few days before the second egg hatches, the first-born chick, known as the senior chick or A-chick, enjoys a period of growth and development during which it has full access to resources provided by the parent bird. Therefore, when the junior chick (B-chick) hatches, there is a significant disparity in size and strength between it and its older sibling. In these three booby species, hatching order indicates chick hierarchy in the nest. The A-chick is dominant to the B-chick, which in turn is dominant to the C-chick, etc. (when there are more than two chicks per brood). Masked booby and Nazca booby dominant A-chicks always begin pecking their younger sibling(s) as soon as they hatch; moreover, assuming it is healthy, the A-chick usually pecks its younger sibling to death or pushes it out of the nest scrape within the first two days that the junior chick is alive. Blue-footed booby A-chicks also express their dominance by pecking their younger sibling. 
However, unlike the obligately siblicidal masked and Nazca booby chicks, their behavior is not always lethal. A study by Lougheed and Anderson (1999) reveals that blue-footed booby senior chicks only kill their siblings in times of food shortage. Furthermore, even when junior chicks are killed, it does not happen immediately. According to Anderson, the average age of death of the junior chick in a masked booby brood is 1.8 days, while the average age of death of the junior chick in a blue-footed booby brood may be as high as 18 days. The difference in age of death of the junior chick in each booby species is indicative of the type of siblicide that the species practices. Facultatively siblicidal blue-footed booby A-chicks only kill their nest mate(s) when necessary. Obligately siblicidal masked and Nazca booby A-chicks kill their sibling whether resources are plentiful or not; in other words, siblicidal behavior occurs independently of environmental factors. Blue-footed boobies are less likely to commit siblicide, and when they do, it occurs later after hatching than in masked boobies. In one study, the chicks of blue-footed and masked boobies were switched to see if the rates of siblicide would be affected by the foster parents. The masked booby chicks placed under the care of blue-footed booby parents committed siblicide less often than they normally would. Similarly, the blue-footed booby chicks placed with the masked booby parents committed siblicide more often than they normally did, indicating that parental intervention also affects the offspring's behavior. In another experiment, which tested the effect of a synchronous brood on siblicide, three groups were created: one in which all the eggs were synchronous, one in which the eggs hatched asynchronously, and one in which asynchronous hatching was exaggerated. It was found that the synchronous brood fought more, was less likely to survive than the control group, and resulted in lower parental efficiency. The exaggerated asynchronous brood also had a lower survivorship rate than the control brood and forced parents to bring more food to the nest each day, even though not as many offspring survived. In other animals Siblicide (brood reduction) in spotted hyenas (Crocuta crocuta) resulted in the champions (the surviving cubs) achieving a long-term growth rate similar to that of singletons, and thus significantly increased their expected survival. The incidence of siblicide increased as the average cohort growth rate declined. When both cubs were alive, total maternal input in siblicidal litters was significantly lower than in non-siblicidal litters. Once siblicide had occurred, the growth rates of siblicide survivors substantially increased, indicating that mothers do not reduce their maternal input after siblicide has occurred. Also, facultative siblicide can evolve when the fitness benefits gained by the dominant offspring from the removal of a sibling exceed the costs it incurs, in terms of its inclusive fitness, from the death of that sibling. Some mammals sometimes commit siblicide to gain a larger portion of the parents' care. In spotted hyenas, pups of the same sex exhibit siblicide more often than male-female twins. Sex ratios may be manipulated in this way, and a female's dominant status and the transmission of her genes may be ensured through a single son or daughter that inherits them alone, receiving much more parental nursing and facing decreased sexual competition. 
Siblicidal "survival of the fittest" is also exhibited in parasitic wasps, which lay multiple eggs in a host, after which the strongest larva kills its rival sibling. Another example is when mourning cloak larvae will eat non-hatched eggs. In sand tiger sharks, the first embryo to hatch from its egg capsule kills and consumes its younger siblings while still in the womb. In humans Siblicide can also be seen in humans in the form of twins in the mother's womb. One twin may grow to be an average weight, while the other is underweight. This is a result of one twin taking more nutrients from the mother than the other twin. In cases of identical twins, they may even have twin-to-twin transfusion syndrome (TTTS). This means that the twins share the same placenta and blood and nutrients can then move between twins. The twins may also be suffering from intrauterine growth restriction (IUGR), meaning that there is not enough room for both of the twins to grow. All of these factors can limit the growth of one of the twins while promoting the growth of the other. While one of the twins may not die because of these factors, it is entirely possible that their health will be compromised and lead to complications after their birth. Siblicide in humans can also manifest itself in the form of murder. This type of killing (siblicide) is rarer than other types of killings. Genetic relatedness may be an important moderator of conflict and homicide among family members, including siblings. Siblings may be less likely to kill a full sibling because that would be a decrease in their own fitness. The cost of killing a sibling is much higher than the fitness costs associated with the death of a sibling-in-law because the killer wouldn't be losing 50% of their genes. Siblicide was found to be more common in early to middle adulthood as opposed to adolescence. However, there is still a tendency for the killer to be the younger party when the victim and killer were of the same sex. The older individual was most likely to be the killer if the incident were to occur at a younger age. See also Fratricide, the killing of a brother Infanticide (zoology), a related behaviour Intrauterine cannibalism Nazca booby (displays obligate siblicide) Parent–offspring conflict Sibling abuse Sibling rivalry Sororicide, the killing of a sister References Further reading Killings by type Fratricides Homicide Selection Sibling Sibling rivalry Sociobiology Sororicides
Siblicide
[ "Biology" ]
3,425
[ "Evolutionary processes", "Behavior", "Selection", "Behavioural sciences", "Sociobiology" ]
4,055,697
https://en.wikipedia.org/wiki/NGC%201309
NGC 1309 is a spiral galaxy located approximately 120 million light-years away, appearing in the constellation Eridanus. It was discovered by German-British astronomer William Herschel on 3 October 1785. NGC 1309 is about 75,000 light-years across, about three-quarters the width of the Milky Way. Its shape is classified as SA(s)bc, meaning that it has moderately wound spiral arms and no ring. Bright blue areas of star formation can be seen in the spiral arms, while the yellowish central nucleus contains older-population stars. NGC 1309 is one of over 200 members of the Eridanus Group of galaxies. Supernova 2002fk SN 2002fk was discovered jointly by Reiki Kushida of the Yatsugatake South Base Observatory, Nagano Prefecture, Japan, and Jun-jie Wang and Yu-Lei Qiu of the Beijing Astronomical Observatory on 17 September 2002. When discovered it was at magnitude ~15.0; it was estimated to have reached a maximum magnitude of ~13.0 before fading away. It was a Type Ia supernova (i.e., the progenitor star was a white dwarf). White dwarfs are older stars that have used up almost all of their main fuel (the lighter elements such as hydrogen and helium). SN 2002fk's spectra showed no indications of hydrogen, helium or carbon; instead, ionized calcium, silicon, iron and nickel were found. Supernova 2012Z SN 2012Z was discovered jointly by Brad Cenko, Weidong Li, and Alex Filippenko using the Katzman Automatic Imaging Telescope on 29 January 2012 as part of the Lick Observatory Supernova Search. The scientists hypothesized that this was a type Iax supernova and that it may have left behind a remnant "zombie star". In February 2022, a study with new observations confirmed that the star survived the explosion and is even brighter than before. See also List of NGC objects (1001–2000) References External links Unbarred spiral galaxies Eridanus (constellation) Eridanus Group 1309 012626 -03-09-028 03197-1534 17851003 Discoveries by William Herschel
NGC 1309
[ "Astronomy" ]
452
[ "Eridanus (constellation)", "Constellations" ]
4,055,832
https://en.wikipedia.org/wiki/Nonlocal%20Lagrangian
In field theory, a nonlocal Lagrangian is a Lagrangian, a type of functional, containing terms that are nonlocal in the fields, i.e. not polynomials or functions of the fields or their derivatives evaluated at a single point in the space of dynamical parameters (e.g. space-time). An example of such a nonlocal Lagrangian is the Wess–Zumino–Witten action. Actions obtained from nonlocal Lagrangians are called nonlocal actions. The actions appearing in the fundamental theories of physics, such as the Standard Model, are local actions; nonlocal actions play a part in theories that attempt to go beyond the Standard Model and also in some effective field theories. Nonlocalization of a local action is also an essential aspect of some regularization procedures. Noncommutative quantum field theory also gives rise to nonlocal actions. References Quantum measurement Quantum field theory
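To make the notion concrete, the following schematic contrast is illustrative only; the field φ, mass m, coupling g and kernel K are placeholders introduced here, not equations from this article. A local Lagrangian density depends on the field and finitely many of its derivatives at a single point x:

\mathcal{L}_{\mathrm{local}}(x) = \tfrac{1}{2}\,\partial_\mu \phi(x)\,\partial^\mu \phi(x) - \tfrac{m^2}{2}\,\phi(x)^2

whereas a nonlocal Lagrangian density may couple the field at two distinct points through an integral kernel:

\mathcal{L}_{\mathrm{nonlocal}}(x) = \tfrac{1}{2}\,\partial_\mu \phi(x)\,\partial^\mu \phi(x) - \tfrac{g}{2}\,\phi(x) \int \mathrm{d}^4 y \; K(x-y)\,\phi(y)

For a generic kernel K, the interaction term cannot be rewritten as a function of φ and finitely many of its derivatives evaluated at x alone, which is precisely what makes the resulting action nonlocal.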
Nonlocal Lagrangian
[ "Physics" ]
200
[ "Quantum field theory", "Quantum measurement", "Quantum mechanics", "Quantum physics stubs" ]
4,055,891
https://en.wikipedia.org/wiki/High-energy%20X-rays
High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the Cornell High Energy Synchrotron Source, SPring-8, and the beamlines ID15 and BM18 at the European Synchrotron Radiation Facility (ESRF). Their main benefit is deep penetration into matter, which makes them a probe for thick samples in physics and materials science and permits an in-air sample environment and operation. Scattering angles are small, and the forward-directed diffraction allows simple detector setups. High-energy (megavolt) X-rays are also used in cancer therapy, using beams generated by linear accelerators to treat tumors. Advantages High-energy X-rays (HEX-rays) between 100 and 300 keV offer unique advantages over conventional hard X-rays, which lie in the range of 5–20 keV. They can be listed as follows:
High penetration into materials due to a strongly reduced photo-absorption cross section. Photo-absorption depends strongly on the atomic number of the material and on the X-ray energy. Volumes several centimeters thick can be accessed in steel, and millimeters in lead-containing samples (illustrated numerically in the sketch following the Applications overview).
No radiation damage to the sample; such damage can pin incommensurations or destroy the chemical compound to be analyzed.
The Ewald sphere has a curvature ten times smaller than in the low-energy case and allows whole regions of the reciprocal lattice to be mapped, similar to electron diffraction.
Access to diffuse scattering. This is absorption- rather than extinction-limited at low energies, while volume enhancement takes place at high energies. Complete 3D maps over several Brillouin zones can easily be obtained.
High momentum transfers are naturally accessible due to the high momentum of the incident wave. This is of particular importance for studies of liquid, amorphous and nanocrystalline materials as well as pair distribution function analysis.
Realization of the materials oscilloscope.
Simple diffraction setups due to operation in air.
Diffraction in the forward direction for easy registration with a 2D detector.
Forward scattering and penetration make sample environments easy and straightforward.
Negligible polarization effects due to the relatively small scattering angles.
Special non-resonant magnetic scattering.
LLL interferometry.
Access to high-energy spectroscopic levels, both electronic and nuclear.
Neutron-like, but complementary, studies combined with high-precision spatial resolution.
Cross sections for Compton scattering are similar to coherent scattering or absorption cross sections.
Applications With these advantages, HEX-rays can be applied to a wide range of investigations. An overview, which is far from complete:
Structural investigations of real materials, such as metals, ceramics, and liquids. In particular, in-situ studies of phase transitions at elevated temperatures up to the melting point of any metal. Phase transitions, recovery, chemical segregation, recrystallization, twinning and domain formation are a few aspects that can be followed in a single experiment.
Materials in chemical or operating environments, such as electrodes in batteries, fuel cells, high-temperature reactors, electrolytes, etc. The penetration and a well-collimated pencil beam allow focusing on the region and material of interest while it undergoes a chemical reaction.
Study of 'thick' layers, such as the oxidation of steel during its production and rolling process, which are too thick for classical reflectometry experiments.
Interfaces and layers in complicated environments, such as the intermetallic reaction of a Zincalume surface coating on industrial steel in the liquid bath.
In-situ studies of industrial-style strip-casting processes for light metals. A casting setup can be installed on a beamline and probed with the HEX-ray beam in real time.
Bulk studies in single crystals, which differ from studies in near-surface regions limited by the penetration of conventional X-rays. It has been found and confirmed in almost all studies that critical scattering and correlation lengths are strongly affected by this effect.
Combination of neutron and HEX-ray investigations on the same sample, such as contrast variations due to the different scattering lengths.
Residual stress analysis in the bulk with unique spatial resolution in centimeter-thick samples, in-situ under realistic load conditions.
In-situ studies of thermo-mechanical deformation processes such as forging, rolling, and extrusion of metals.
Real-time texture measurements in the bulk during a deformation, phase transition or annealing, such as in metal processing.
Structures and textures of geological samples, which may contain heavy elements and be thick.
High-resolution triple-crystal diffraction for the investigation of single crystals, with all the advantages of high penetration and studies from the bulk.
Compton spectroscopy for the investigation of the momentum distribution of the valence electron shells.
Imaging and tomography with high energies. Dedicated sources can be strong enough to obtain 3D tomograms in a few seconds. The combination of imaging and diffraction is possible due to the simple geometries, for example tomography combined with residual stress measurement or structural analysis.
See also Bremsstrahlung Cyclotron radiation Electromagnetic radiation Electron–positron annihilation Gamma ray Gamma-ray generation Ionization Synchrotron light source Synchrotron radiation X-radiation X-ray fluorescence X-ray generator X-ray tube References Further reading External links Applied and interdisciplinary physics Gamma rays Materials testing Synchrotron radiation Synchrotron-related techniques X-rays
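As a rough quantitative illustration of the penetration advantage listed above, the following sketch applies the Beer–Lambert law, I/I0 = exp(−μt), to iron. The mass attenuation coefficients are rounded, illustrative values of the kind tabulated by NIST, not data taken from this article.

import math

# Rounded, illustrative mass attenuation coefficients for iron (cm^2/g);
# real values of this kind are tabulated by NIST and fall steeply with energy.
MU_RHO_IRON = {20: 25.7, 100: 0.372, 200: 0.146}   # keV -> cm^2/g
RHO_IRON = 7.87                                    # density, g/cm^3

def transmission(energy_kev, thickness_cm):
    """Beer-Lambert transmission I/I0 = exp(-(mu/rho) * rho * t)."""
    mu_linear = MU_RHO_IRON[energy_kev] * RHO_IRON  # linear attenuation, 1/cm
    return math.exp(-mu_linear * thickness_cm)

for energy in (20, 100, 200):
    frac = transmission(energy, 1.0)
    print(f"{energy:3d} keV through 1 cm of iron: I/I0 = {frac:.1e}")

Under these assumed coefficients, a conventional 20 keV beam is attenuated to practically nothing in a centimeter of steel, while a 200 keV beam retains roughly a third of its intensity, which is the effect the bulk-sensitive applications above rely on.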
High-energy X-rays
[ "Physics", "Materials_science", "Engineering" ]
1,144
[ "Applied and interdisciplinary physics", "X-rays", "Spectrum (physical sciences)", "Electromagnetic spectrum", "Materials science", "Materials testing", "Gamma rays" ]
4,055,903
https://en.wikipedia.org/wiki/Glutamate%20transporter
Glutamate transporters are a family of neurotransmitter transporter proteins that move glutamate – the principal excitatory neurotransmitter – across a membrane. The family of glutamate transporters is composed of two primary subclasses: the excitatory amino acid transporter (EAAT) family and the vesicular glutamate transporter (VGLUT) family. In the brain, EAATs remove glutamate from the synaptic cleft and extrasynaptic sites via glutamate reuptake into glial cells and neurons, while VGLUTs move glutamate from the cell cytoplasm into synaptic vesicles. Glutamate transporters also transport aspartate and are present in virtually all peripheral tissues, including the heart, liver, testes, and bone. They exhibit stereoselectivity for L-glutamate but transport both L-aspartate and D-aspartate. The EAATs are membrane-bound secondary transporters that superficially resemble ion channels. These transporters play the important role of regulating concentrations of glutamate in the extracellular space by transporting it along with other ions across cellular membranes. After glutamate is released as the result of an action potential, glutamate transporters quickly remove it from the extracellular space to keep its levels low, thereby terminating synaptic transmission. Without the activity of glutamate transporters, glutamate would build up and kill cells in a process called excitotoxicity, in which excessive amounts of glutamate act as a toxin to neurons by triggering a number of biochemical cascades. The activity of glutamate transporters also allows glutamate to be recycled for repeated release. Classes There are two general classes of glutamate transporters, those that are dependent on an electrochemical gradient of sodium ions (the EAATs) and those that are not (VGLUTs and xCT). The cystine-glutamate antiporter (xCT) is localised to the plasma membrane of cells, whilst vesicular glutamate transporters (VGLUTs) are found in the membrane of glutamate-containing synaptic vesicles. Na+-dependent EAATs are also dependent on transmembrane K+ and H+ concentration gradients, and so are also known as 'sodium and potassium coupled glutamate transporters'. Na+-dependent transporters have also been called 'high-affinity glutamate transporters', though their glutamate affinity actually varies widely. EAATs are antiporters that carry one molecule of glutamate in along with three Na+ ions and one H+ ion, while exporting one K+ ion. EAATs are integral transmembrane proteins that traverse the plasma membrane eight times. Mitochondria also possess mechanisms for taking up glutamate that are quite distinct from membrane glutamate transporters. EAATs In humans (as well as in rodents), five subtypes have been identified and named EAAT1-5 (SLC1A3, SLC1A2, SLC1A1, SLC1A6, and SLC1A7). Subtypes EAAT1-2 are found in membranes of glial cells (astrocytes, microglia, and oligodendrocytes). However, low levels of EAAT2 are also found in the axon terminals of hippocampal CA3 pyramidal cells. EAAT2 is responsible for over 90% of glutamate reuptake within the central nervous system (CNS). The EAAT3-4 subtypes are exclusively neuronal, and are expressed in axon terminals, cell bodies, and dendrites. Finally, EAAT5 is found only in the retina, where it is principally localized to photoreceptors and bipolar neurons. When glutamate is taken up into glial cells by the EAATs, it is converted to glutamine and subsequently transported back into the presynaptic neuron, converted back into glutamate, and taken up into synaptic vesicles by action of the VGLUTs. This process is named the glutamate–glutamine cycle. VGLUTs Three types of vesicular glutamate transporters are known, VGLUTs 1–3 (SLC17A7, SLC17A6, and SLC17A8 respectively), along with the novel glutamate/aspartate transporter sialin. These transporters pack the neurotransmitter into synaptic vesicles so that it can be released into the synapse. VGLUTs are dependent on the proton gradient that exists in the secretory system (vesicles being more acidic than the cytosol). VGLUTs have only between one-hundredth and one-thousandth the affinity for glutamate that EAATs have. Also, unlike EAATs, they do not appear to transport aspartate. VGluT3 VGluT3 (vesicular glutamate transporter 3), encoded by the SLC17A8 gene, is a member of the vesicular glutamate transporter family. It has been implicated in neurological diseases and in pain. Neurons can express VGluT3 even when they use a neurotransmitter other than glutamate, as in the specific case of central 5-HT neurons. The role of this unconventional transporter still remains largely unknown, but it has been demonstrated that, in the auditory system, VGluT3 is involved in fast excitatory glutamatergic transmission, much like the other two vesicular glutamate transporters, VGluT1 and VGluT2. VGluT3 ablation has behavioral and physiological consequences, because the transporter modulates a wide range of neuronal and physiological processes, including anxiety, mood regulation, impulsivity, aggressive behavior, pain perception, the sleep–wake cycle, appetite, body temperature and sexual behavior. In knockout studies, no significant change was found in aggression or depression-like behaviors; in contrast, the loss of VGluT3 resulted in a specific anxiety-related phenotype. Sensory nerve fibers differ in how they detect pain hypersensitivity, in both their sensory modalities and conduction velocities, but it is still unknown which sensory types relate to the different forms of inflammatory and neuropathic pain hypersensitivity. VGluT3 has been implicated in mechanical hypersensitivity after inflammation, but its role in neuropathic pain remains under debate. VGluT3 also shows extensive somatic expression throughout development, which could be involved in non-synaptic modulation by glutamate in the developing retina, and could influence trophic and extra-synaptic neuronal signaling by glutamate in the inner retina. Molecular Structure of EAATs Like all glutamate transporters, EAATs are trimers, with each protomer consisting of two domains: a central scaffold domain and a peripheral transport domain. The transport conformational path is as follows. First comes the outward-facing conformation (OF, open), which allows the glutamate to bind. The HP2 region then closes after uptake (OF, closed), and an elevator-like movement carries the substrate to the intracellular side of the membrane. It is worth noting that this elevator motion consists of several conformational changes that have yet to be categorized and identified. After the elevator motion brings the substrate to the intracellular side of the membrane, the EAAT adopts the inward-facing (IF, closed) state, in which the transport domain is lowered but the HP2 gate is still closed, with the glutamate still bound to the transporter. Lastly, the HP2 gate opens and the glutamate diffuses into the cytoplasm of the cell.
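The ionic coupling described above (three Na+ and one H+ in, one K+ out, per glutamate) sets a thermodynamic limit on how steep a glutamate gradient an EAAT can maintain. The following back-of-the-envelope sketch computes that equilibrium limit; the ion concentrations and membrane potential are generic textbook values assumed for illustration, not measurements from this article.

import math

# Generic textbook values; assumptions for illustration only.
NA_OUT, NA_IN = 145.0, 12.0      # mM
K_OUT, K_IN = 4.5, 140.0         # mM
PH_OUT, PH_IN = 7.4, 7.2
VM = -0.070                      # membrane potential, volts
F, R, T = 96485.0, 8.314, 310.0  # C/mol, J/(mol*K), K

# Per cycle: 1 glutamate(-), 3 Na+ and 1 H+ move inward; 1 K+ moves outward.
# Net charge moved inward per cycle: (-1) + 3 + 1 - 1 = +2.
Z_NET = 2

max_ratio = ((NA_OUT / NA_IN) ** 3
             * 10 ** (PH_IN - PH_OUT)      # equals [H+]out / [H+]in
             * (K_IN / K_OUT)
             * math.exp(-Z_NET * F * VM / (R * T)))

print(f"Max [glu]in/[glu]out at equilibrium: {max_ratio:.1e}")

With these assumed values the transporter could in principle sustain an inward glutamate gradient on the order of a million-fold, which is why EAATs can keep extracellular glutamate so low.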
Pathology Overactivity of glutamate transporters may result in inadequate synaptic glutamate and may be involved in schizophrenia and other mental illnesses. During injury processes such as ischemia and traumatic brain injury, the action of glutamate transporters may fail, leading to a toxic buildup of glutamate. In fact, their activity may actually reverse when there is inadequate adenosine triphosphate to power ATPase pumps, resulting in the loss of the electrochemical ion gradient. Since the direction of glutamate transport depends on the ion gradient, these transporters then release glutamate instead of removing it, which results in neurotoxicity due to overactivation of glutamate receptors. Loss of the Na+-dependent glutamate transporter EAAT2 is suspected to be associated with neurodegenerative diseases such as Alzheimer's disease, Huntington's disease, and ALS–parkinsonism dementia complex. Degeneration of motor neurons in the disease amyotrophic lateral sclerosis has likewise been linked to loss of EAAT2 from patients' brains and spinal cords. Addiction to certain drugs (e.g., cocaine, heroin, alcohol, and nicotine) is correlated with a persistent reduction in the expression of EAAT2 in the nucleus accumbens (NAcc); the reduced expression of EAAT2 in this region is implicated in addictive drug-seeking behavior. In particular, the long-term dysregulation of glutamate neurotransmission in the NAcc of addicts is associated with an increased vulnerability to relapse after re-exposure to the addictive drug or its associated drug cues. Drugs that help to normalize the expression of EAAT2 in this region, such as N-acetylcysteine, have been proposed as an adjunct therapy for the treatment of addiction to cocaine, nicotine, alcohol, and other drugs. See also Dopamine transporters Norepinephrine transporters Serotonin transporters NMDA receptors AMPA receptors Kainate receptors Metabotropic glutamate receptors References External links Amphetamine Membrane proteins Neurotransmitter transporters Solute carrier family Glutamate (neurotransmitter)
Glutamate transporter
[ "Biology" ]
2,215
[ "Protein classification", "Membrane proteins" ]
4,055,928
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory. From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics. For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory; see interpretation (model theory). In database theory, structures with no functions are studied as models for relational databases, in the form of relational models. History In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it. Definition Formally, a structure can be defined as a triple 𝒜 = (A, σ, I) consisting of a domain A, a signature σ, and an interpretation function I that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ, one can refer to it as a σ-structure. Domain The domain of a structure is an arbitrary set; it is also called the underlying set of the structure, its carrier (especially in universal algebra), its universe (especially in model theory, cf. universe), or its domain of discourse. In classical first-order logic, the definition of a structure prohibits the empty domain. Sometimes the notation dom(𝒜) or |𝒜| is used for the domain of 𝒜, but often no notational distinction is made between a structure and its domain (that is, the same symbol refers both to the structure and its domain). Signature The signature σ of a structure consists of: a set of function symbols and relation symbols, along with a function ar that ascribes to each symbol s a natural number ar(s). The natural number ar(s) of a symbol s is called the arity of s, because it is the arity of the interpretation of s. Since the signatures that arise in algebra often contain only function symbols, a signature with no relation symbols is called an algebraic signature. A structure with such a signature is also called an algebra; this should not be confused with the notion of an algebra over a field. Interpretation function The interpretation function I of 𝒜 assigns functions and relations to the symbols of the signature. To each function symbol f of arity n is assigned an n-ary function f^𝒜 = I(f) on the domain. To each relation symbol R of arity n is assigned an n-ary relation R^𝒜 = I(R) ⊆ Aⁿ on the domain. A nullary (0-ary) function symbol c is called a constant symbol, because its interpretation I(c) can be identified with a constant element of the domain.
When a structure (and hence an interpretation function) is given by context, no notational distinction is made between a symbol s and its interpretation I(s). For example, if f is a binary function symbol of 𝒜, one simply writes f : 𝒜² → 𝒜 rather than f^𝒜 : |𝒜|² → |𝒜|. Examples The standard signature σ_f for fields consists of two binary function symbols + and ×, where additional symbols can be derived, such as a unary function symbol − (uniquely determined by +) and the two constant symbols 0 and 1 (uniquely determined by + and × respectively). Thus a structure (algebra) for this signature consists of a set of elements A together with two binary functions, that can be enhanced with a unary function, and two distinguished elements; but there is no requirement that it satisfy any of the field axioms. The rational numbers Q, the real numbers R and the complex numbers C, like any other field, can be regarded as σ_f-structures in an obvious way. In all three cases we have the standard signature σ_f. The interpretation function on Q is: +^Q is addition of rational numbers, ×^Q is multiplication of rational numbers, −^Q is the function that takes each rational number x to −x, 0^Q is the number 0, and 1^Q is the number 1; the interpretations on R and C are similarly defined. But the ring of integers Z, which is not a field, is also a σ_f-structure in the same way. In fact, there is no requirement that any of the field axioms hold in a σ_f-structure. A signature for ordered fields needs an additional binary relation such as < or ≤, and therefore structures for such a signature are not algebras, even though they are of course algebraic structures in the usual, loose sense of the word. The ordinary signature for set theory includes a single binary relation ∈. A structure for this signature consists of a set of elements and an interpretation of the ∈ relation as a binary relation on these elements. Induced substructures and closed subsets 𝒜 is called an (induced) substructure of ℬ if 𝒜 and ℬ have the same signature σ, the domain of 𝒜 is contained in the domain of ℬ, and the interpretations of all function and relation symbols agree on the domain of 𝒜. The usual notation for this relation is 𝒜 ⊆ ℬ. A subset B ⊆ A of the domain of a structure 𝒜 is called closed if it is closed under the functions of 𝒜, that is, if the following condition is satisfied: for every natural number n, every n-ary function symbol f (in the signature of 𝒜) and all elements b₁, …, bₙ ∈ B, the result f(b₁, …, bₙ) of applying f to the n-tuple (b₁, …, bₙ) is again an element of B. For every subset B ⊆ A there is a smallest closed subset of A that contains B. It is called the closed subset generated by B, or the hull of B, and denoted by ⟨B⟩ or ⟨B⟩_𝒜. The operator ⟨·⟩ is a finitary closure operator on the set of subsets of A. If 𝒜 = (A, σ, I) and B ⊆ A is a closed subset, then (B, σ, I′) is an induced substructure of 𝒜, where I′ assigns to every symbol of σ the restriction to B of its interpretation in 𝒜. Conversely, the domain of an induced substructure is a closed subset. The closed subsets (or induced substructures) of a structure form a lattice. The meet of two subsets is their intersection. The join of two subsets is the closed subset generated by their union. Universal algebra studies the lattice of substructures of a structure in detail. Examples Let σ_f be again the standard signature for fields. When regarded as σ_f-structures in the natural way, the rational numbers form a substructure of the real numbers, and the real numbers form a substructure of the complex numbers. The rational numbers are the smallest substructure of the real (or complex) numbers that also satisfies the field axioms. The set of integers gives an even smaller substructure of the real numbers, which is not a field.
Indeed, the integers are the substructure of the real numbers generated by the empty set, using this signature. The notion in abstract algebra that corresponds to a substructure of a field, in this signature, is that of a subring, rather than that of a subfield. The most obvious way to define a graph is a structure with a signature σ consisting of a single binary relation symbol E. The vertices of the graph form the domain of the structure, and for two vertices a and b, E(a, b) means that a and b are connected by an edge. In this encoding, the notion of induced substructure is more restrictive than the notion of subgraph. For example, let G be a graph consisting of two vertices connected by an edge, and let H be the graph consisting of the same vertices but no edges. H is a subgraph of G, but not an induced substructure. The notion in graph theory that corresponds to induced substructures is that of induced subgraphs. Homomorphisms and embeddings Homomorphisms Given two structures 𝒜 and ℬ of the same signature σ, a (σ-)homomorphism from 𝒜 to ℬ is a map h : A → B that preserves the functions and relations. More precisely: For every n-ary function symbol f of σ and any elements a₁, …, aₙ ∈ A, the following equation holds: h(f^𝒜(a₁, …, aₙ)) = f^ℬ(h(a₁), …, h(aₙ)). For every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following implication holds: R^𝒜(a₁, …, aₙ) ⟹ R^ℬ(h(a₁), …, h(aₙ)), where R^𝒜, R^ℬ is the interpretation of the relation symbol R of the object theory in the structure 𝒜, ℬ, respectively. A homomorphism h from 𝒜 to ℬ is typically denoted as h : 𝒜 → ℬ, although technically the function h is between the domains A, B of the two structures 𝒜, ℬ. For every signature σ there is a concrete category σ-Hom which has σ-structures as objects and σ-homomorphisms as morphisms. A homomorphism h : 𝒜 → ℬ is sometimes called strong if: For every n-ary relation symbol R of the object theory and any elements b₁, …, bₙ ∈ B such that R^ℬ(b₁, …, bₙ), there are a₁, …, aₙ ∈ A such that R^𝒜(a₁, …, aₙ) and bᵢ = h(aᵢ) for i = 1, …, n. The strong homomorphisms give rise to a subcategory of the category σ-Hom that was defined above. Embeddings A (σ-)homomorphism h : 𝒜 → ℬ is called a (σ-)embedding if it is one-to-one and for every n-ary relation symbol R of σ and any elements a₁, …, aₙ ∈ A, the following equivalence holds: R^𝒜(a₁, …, aₙ) ⟺ R^ℬ(h(a₁), …, h(aₙ)) (where as before R^𝒜, R^ℬ refers to the interpretation of the relation symbol R of the object theory σ in the structure 𝒜, ℬ, respectively). Thus an embedding is the same thing as a strong homomorphism which is one-to-one. The category σ-Emb of σ-structures and σ-embeddings is a concrete subcategory of σ-Hom. Induced substructures correspond to subobjects in σ-Emb. If σ has only function symbols, σ-Emb is the subcategory of monomorphisms of σ-Hom. In this case induced substructures also correspond to subobjects in σ-Hom. Example As seen above, in the standard encoding of graphs as structures the induced substructures are precisely the induced subgraphs. However, a homomorphism between graphs is the same thing as a homomorphism between the two structures coding the graph. In the example of the previous section, even though the subgraph H of G is not induced, the identity map id : H → G is a homomorphism. This map is in fact a monomorphism in the category σ-Hom, and therefore H is a subobject of G which is not an induced substructure. Homomorphism problem The following problem is known as the homomorphism problem: Given two finite structures 𝒜 and ℬ of a finite relational signature, find a homomorphism h : 𝒜 → ℬ or show that no such homomorphism exists. Every constraint satisfaction problem (CSP) has a translation into the homomorphism problem. Therefore, the complexity of CSP can be studied using the methods of finite model theory.
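As a minimal illustration, restricted for brevity to the special case of a single binary relation symbol (i.e. directed graphs), the homomorphism problem can be solved by brute force; the function and variable names below are invented for this sketch.

from itertools import product

def find_homomorphism(dom_a, rel_a, dom_b, rel_b):
    """Return some h: dom_a -> dom_b with (x, y) in rel_a implying
    (h(x), h(y)) in rel_b, or None if no such map exists."""
    for image in product(dom_b, repeat=len(dom_a)):
        h = dict(zip(dom_a, image))
        if all((h[x], h[y]) in rel_b for (x, y) in rel_a):
            return h
    return None

# A triangle (symmetric 3-cycle) and a single undirected edge.
triangle = ([0, 1, 2], {(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)})
edge = (["a", "b"], {("a", "b"), ("b", "a")})

print(find_homomorphism(*edge, *triangle))   # found, e.g. {'a': 0, 'b': 1}
print(find_homomorphism(*triangle, *edge))   # None: a triangle is not 2-colourable

The second query is exactly the CSP of 2-colouring: a homomorphism into the two-element edge structure exists if and only if the source graph is bipartite, which a triangle is not.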
Another application is in database theory, where a relational model of a database is essentially the same thing as a relational structure. It turns out that a conjunctive query on a database can be described by another structure in the same signature as the database model. A homomorphism from the relational model to the structure representing the query is the same thing as a solution to the query. This shows that the conjunctive query problem is also equivalent to the homomorphism problem. Structures and first-order logic Structures are sometimes referred to as "first-order structures". This is misleading, as nothing in their definition ties them to any specific logic, and in fact they are suitable as semantic objects both for very restricted fragments of first-order logic such as that used in universal algebra, and for second-order logic. In connection with first-order logic and model theory, structures are often called models, even when the question "models of what?" has no obvious answer. Satisfaction relation Each first-order structure 𝒜 has a satisfaction relation 𝒜 ⊨ φ defined for all formulas φ in the language consisting of the language of 𝒜 together with a constant symbol for each element of A, which is interpreted as that element. This relation is defined inductively using Tarski's T-schema. A structure 𝒜 is said to be a model of a theory T if the language of 𝒜 is the same as the language of T and every sentence in T is satisfied by 𝒜. Thus, for example, a "ring" is a structure for the language of rings that satisfies each of the ring axioms, and a model of ZFC set theory is a structure in the language of set theory that satisfies each of the ZFC axioms. Definable relations An n-ary relation R on the universe (i.e. domain) A of the structure 𝒜 is said to be definable (or explicitly definable, cf. Beth definability, or ∅-definable, or definable with parameters from ∅, cf. below) if there is a formula φ(x₁, …, xₙ) such that R = {(a₁, …, aₙ) ∈ Aⁿ : 𝒜 ⊨ φ(a₁, …, aₙ)}. In other words, R is definable if and only if there is a formula φ such that (a₁, …, aₙ) ∈ R ⟺ 𝒜 ⊨ φ(a₁, …, aₙ) is correct. An important special case is the definability of specific elements. An element a of A is definable in 𝒜 if and only if there is a formula φ(x) such that a is the only element of A for which 𝒜 ⊨ φ(a). Definability with parameters A relation R is said to be definable with parameters (or |𝒜|-definable) if there is a formula φ with parameters from 𝒜 such that R is definable using φ. Every element of a structure is definable using the element itself as a parameter. Some authors use definable to mean definable without parameters, while other authors mean definable with parameters. Broadly speaking, the convention that definable means definable without parameters is more common amongst set theorists, while the opposite convention is more common amongst model theorists. Implicit definability Recall from above that an n-ary relation R on the universe A of 𝒜 is explicitly definable if there is a formula φ such that R = {(a₁, …, aₙ) ∈ Aⁿ : 𝒜 ⊨ φ(a₁, …, aₙ)}. Here the formula φ used to define the relation R must be over the signature of 𝒜, and so φ may not mention R itself, since R is not in the signature of 𝒜. If there is a formula φ in the extended language containing the language of 𝒜 and a new symbol R, and the relation R is the only relation on 𝒜 such that 𝒜 ⊨ φ, then R is said to be implicitly definable over 𝒜. By Beth's theorem, every implicitly definable relation is explicitly definable. Many-sorted structures Structures as defined above are sometimes called one-sorted structures to distinguish them from the more general many-sorted structures. A many-sorted structure can have an arbitrary number of domains. The sorts are part of the signature, and they play the role of names for the different domains.
Many-sorted signatures also prescribe on which sorts the functions and relations of a many-sorted structure are defined. Therefore, the arities of function symbols or relation symbols must be more complicated objects, such as tuples of sorts, rather than natural numbers. Vector spaces, for example, can be regarded as two-sorted structures in the following way. The two-sorted signature of vector spaces consists of two sorts V (for vectors) and S (for scalars) and function symbols along the following lines (reconstructed here from the description below): a vector addition of arity (V, V; V), scalar addition and multiplication of arity (S, S; S), a scalar multiplication of arity (S, V; V), a constant 0 of sort V for the zero vector, and constants 0 and 1 of sort S. If V is a vector space over a field F, the corresponding two-sorted structure consists of the vector domain V, the scalar domain F, and the obvious functions, such as the vector zero 0 ∈ V, the scalar zero 0 ∈ F, or scalar multiplication F × V → V. Many-sorted structures are often used as a convenient tool even when they could be avoided with a little effort. But they are rarely defined in a rigorous way, because it is straightforward and tedious (hence unrewarding) to carry out the generalization explicitly. In most mathematical endeavours, not much attention is paid to the sorts. A many-sorted logic however naturally leads to a type theory. As Bart Jacobs puts it: "A logic is always a logic over a type theory." This emphasis in turn leads to categorical logic because a logic over a type theory categorically corresponds to one ("total") category, capturing the logic, being fibred over another ("base") category, capturing the type theory. Other generalizations Partial algebras Both universal algebra and model theory study classes of (structures or) algebras that are defined by a signature and a set of axioms. In the case of model theory these axioms have the form of first-order sentences. The formalism of universal algebra is much more restrictive; essentially it only allows first-order sentences that have the form of universally quantified equations between terms, e.g. ∀x ∀y (x + y = y + x). One consequence is that the choice of a signature is more significant in universal algebra than it is in model theory. For example, the class of groups, in the signature consisting of the binary function symbol × and the constant symbol 1, is an elementary class, but it is not a variety. Universal algebra solves this problem by adding a unary function symbol ⁻¹. In the case of fields this strategy works only for addition. For multiplication it fails because 0 does not have a multiplicative inverse. An ad hoc attempt to deal with this would be to define 0⁻¹ = 0. (This attempt fails, essentially because with this definition 0 × 0⁻¹ = 1 is not true.) Therefore, one is naturally led to allow partial functions, i.e., functions that are defined only on a subset of their domain. However, there are several obvious ways to generalize notions such as substructure, homomorphism and identity. Structures for typed languages In type theory, there are many sorts of variables, each of which has a type. Types are inductively defined; given two types δ and σ there is also a type σ → δ that represents functions from objects of type σ to objects of type δ. A structure for a typed language (in the ordinary first-order semantics) must include a separate set of objects of each type, and for a function type the structure must have complete information about the function represented by each object of that type. Higher-order languages There is more than one possible semantics for higher-order logic, as discussed in the article on second-order logic.
When using full higher-order semantics, a structure need only have a universe for objects of type 0, and the T-schema is extended so that a quantifier over a higher-order type is satisfied by the model if and only if it is disquotationally true. When using first-order semantics, an additional sort is added for each higher-order type, as in the case of a many-sorted first-order language. Structures that are proper classes In the study of set theory and category theory, it is sometimes useful to consider structures in which the domain of discourse is a proper class instead of a set. These structures are sometimes called class models to distinguish them from the "set models" discussed above. When the domain is a proper class, each function and relation symbol may also be represented by a proper class. In Bertrand Russell's Principia Mathematica, structures were also allowed to have a proper class as their domain. See also Notes References External links Semantics section in Classical Logic (an entry of Stanford Encyclopedia of Philosophy) Mathematical logic Mathematical structures Model theory Universal algebra
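To make the satisfaction relation and definability discussed above concrete, here is a small sketch of a Tarski-style evaluator over a finite one-sorted structure with a single relation; the tuple encoding of formulas is invented for this example.

def satisfies(dom, rels, phi, env):
    """Inductive satisfaction check, following the T-schema style clauses."""
    kind = phi[0]
    if kind == "rel":                    # ("rel", name, var1, var2, ...)
        _, name, *args = phi
        return tuple(env[v] for v in args) in rels[name]
    if kind == "not":
        return not satisfies(dom, rels, phi[1], env)
    if kind == "and":
        return satisfies(dom, rels, phi[1], env) and satisfies(dom, rels, phi[2], env)
    if kind == "exists":                 # ("exists", var, body)
        _, var, body = phi
        return any(satisfies(dom, rels, body, {**env, var: a}) for a in dom)
    raise ValueError(f"unknown connective {kind}")

# Structure: domain {0, 1, 2} with the strict order < as its one relation.
dom = [0, 1, 2]
rels = {"<": {(a, b) for a in dom for b in dom if a < b}}

# phi(x) = "there exists y with x < y" defines the set of non-maximal elements.
phi = ("exists", "y", ("rel", "<", "x", "y"))
print([a for a in dom if satisfies(dom, rels, phi, {"x": a})])  # [0, 1]

The final line computes the relation defined by phi(x) in this structure, i.e. {0, 1}, illustrating definability without parameters in the sense described above.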
Structure (mathematical logic)
[ "Mathematics" ]
3,862
[ "Mathematical structures", "Mathematical logic", "Mathematical objects", "Universal algebra", "Fields of abstract algebra", "Model theory" ]
4,055,998
https://en.wikipedia.org/wiki/Robot%20software
Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier. Some robot software aims at developing intelligent mechanical devices. Common tasks include feedback loops, control, pathfinding, data filtering, locating and sharing data. Introduction While it is a specific type of software, it is still quite diverse. Each manufacturer has their own robot software. While the vast majority of software is about manipulation of data and seeing the result on-screen, robot software is for the manipulation of objects or tools in the real world. Industrial robot software Software for industrial robots consists of data objects and lists of instructions, known as program flow (list of instructions). For example, "Go to Jig1" is an instruction to the robot to go to positional data named Jig1. Of course, programs can also contain implicit data, for example "Tell axis 1 to move 30 degrees". Data and program usually reside in separate sections of the robot controller memory. One can change the data without changing the program and vice versa. For example, one can write a different program using the same Jig1, or one can adjust the position of Jig1 without changing the programs that use it. Examples of programming languages for industrial robots Due to the highly proprietary nature of robot software, most manufacturers of robot hardware also provide their own software. While this is not unusual in other automated control systems, the lack of standardization of programming methods for robots does pose certain challenges. For example, there are over 30 different manufacturers of industrial robots, so 30 different robot programming languages are also required. There are enough similarities between the different robots that it is possible to gain a broad-based understanding of robot programming without having to learn each manufacturer's proprietary language. One method of controlling robots from multiple manufacturers is to use a post processor and off-line programming software. With this method, it is possible to handle brand-specific robot programming languages from a universal programming language, such as Python. However, compiling and uploading fixed off-line code to a robot controller does not allow the robotic system to be state-aware, so it cannot adapt its motion and recover as the environment changes. Unified real-time adaptive control for any robot is currently possible with a few different third-party tools. Some examples of published robot programming languages are shown below. Task in plain English:
Move to P1 (a general safe position)
Move to P2 (an approach to P3)
Move to P3 (a position to pick the object)
Close gripper
Move to P4 (an approach to P5)
Move to P5 (a position to place the object)
Open gripper
Move to P1 and finish
VAL was one of the first robot 'languages' and was used in Unimate robots. Variants of VAL have been used by other manufacturers including Adept Technology. Stäubli currently use VAL3. Example program:
PROGRAM PICKPLACE
1. MOVE P1
2. MOVE P2
3. MOVE P3
4. CLOSEI 0.00
5. MOVE P4
6. MOVE P5
7. OPENI 0.00
8. MOVE P1
.END
Example of a Stäubli VAL3 program:
begin
  movej(p1,tGripper,mNomSpeed)
  movej(appro(p3,trAppro),tGripper,mNomSpeed)
  movel(p3,tGripper,mNomSpeed)
  close(tGripper)
  movej(appro(p5,trAppro),tGripper,mNomSpeed)
  movel(p5,tGripper,mNomSpeed)
  open(tGripper)
  movej(p1,tGripper,mNomSpeed)
end
Here trAppro is a Cartesian transformation variable. Used with the appro command, it means that points P2 and P4 do not need to be taught; instead, the approach to the pick and place positions is computed dynamically during trajectory generation. Epson RC+ (example for a vacuum pickup):
Function PickPlace
  Jump P1
  Jump P2
  Jump P3
  On vacuum
  Wait .1
  Jump P4
  Jump P5
  Off vacuum
  Wait .1
  Jump P1
Fend
ROBOFORTH (a language based on FORTH):
: PICKPLACE P1 P3 GRIP WITHDRAW P5 UNGRIP WITHDRAW P1 ;
(With ROBOFORTH you can specify approach positions for places, so you do not need P2 and P4.) Clearly, the robot should not continue the next move until the gripper is completely closed. Confirmation or allowed time is implicit in the above examples of CLOSEI and GRIP, whereas the On vacuum command requires a time delay to ensure satisfactory suction. Other robot programming languages Visual programming language The LEGO Mindstorms EV3 programming language is a simple language for its users to interact with. It is a graphical user interface (GUI) written with LabVIEW. The approach is to start with the program rather than the data. The program is constructed by dragging icons into the program area and adding or inserting them into the sequence. For each icon, you then specify the parameters (data). For example, for the motor drive icon you specify which motors move and by how much. When the program is written, it is downloaded into the Lego NXT 'brick' (microcontroller) for testing. Scripting languages A scripting language is a high-level programming language that is used to control a software application, and is interpreted in real time, or "translated on the fly", instead of being compiled in advance. A scripting language may be a general-purpose programming language or it may be limited to specific functions used to augment the running of an application or system program. Some scripting languages, such as RoboLogix, have data objects residing in registers, and the program flow represents the list of instructions, or instruction set, that is used to program the robot. Programming languages are generally designed for building data structures and algorithms from scratch, while scripting languages are intended more for connecting, or "gluing", components and instructions together. Consequently, the scripting language instruction set is usually a streamlined list of program commands that are used to simplify the programming process and provide rapid application development. Parallel languages Another interesting approach is worthy of mention. All robotic applications need parallelism and event-based programming. Parallelism is where the robot does two or more things at the same time. This requires appropriate hardware and software. Most programming languages rely on threads or complex abstraction classes to handle parallelism and the complexity that comes with it, like concurrent access to shared resources. URBI provides a higher level of abstraction by integrating parallelism and events in the core of the language semantics.
whenever(face.visible)
{
  headPan.val += camera.xfov * face.x &
  headTilt.val += camera.yfov * face.y
}
The above code will move the headPan and headTilt motors in parallel to make the robot head follow the human face visible in the video taken by its camera, whenever a face is seen by the robot. Robot application software Regardless of which language is used, the end result of robot software is to create robotic applications that help or entertain people. Applications include command-and-control and tasking software. Command-and-control software includes robot control GUIs for tele-operated robots, point-and-click command software for autonomous robots, and scheduling software for mobile robots in factories. Tasking software includes simple drag-and-drop interfaces for setting up delivery routes, security patrols and visitor tours; it also includes custom programs written to deploy specific applications. General-purpose robot application software is deployed on widely distributed robotic platforms. Safety considerations Programming errors represent a serious safety consideration, particularly in large industrial robots. The power and size of industrial robots mean they are capable of inflicting severe injury if programmed incorrectly or used in an unsafe manner. Due to the mass and high speeds of industrial robots, it is always unsafe for a human to remain in the work area of the robot during automatic operation. The system can begin motion at unexpected times, and a human will be unable to react quickly enough in many situations, even if prepared to do so. Thus, even if the software is free of programming errors, great care must be taken to make an industrial robot safe for human workers or human interaction, such as loading or unloading parts, clearing a part jam, or performing maintenance. The ANSI/RIA R15.06-1999 American National Standard for Industrial Robots and Robot Systems - Safety Requirements (revision of ANSI/R15.06-1992) book from the Robotic Industries Association is the accepted standard on robot safety. This includes guidelines for both the design of industrial robots and the implementation or integration and use of industrial robots on the factory floor. Numerous safety concepts such as safety controllers, maximum speed during a teach mode, and the use of physical barriers are covered. See also Behavior-based robotics and Subsumption architecture Developmental robotics Epigenetic robotics Evolutionary robotics Industrial robot Cognitive robotics Robot control RoboLogix Automated planning and scheduling Cybernetics Artificial intelligence Robotics suite Telerobotics / Telepresence Robotic automation software Swarm robotics platforms References External links Linux Devices. ANSI/RIA R15.06-1999 American National Standard for Industrial Robots and Robot Systems - Safety Requirements (revision of ANSI/RIA R15.06-1992)
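To illustrate the "universal programming language" approach mentioned above, here is the same pick-and-place task sketched in Python against a hypothetical vendor-neutral robot API; the Robot class and its method names are invented for this example and do not correspond to any particular manufacturer's library.

import time

class Robot:
    """Stand-in for a vendor-specific driver; prints instead of moving."""
    def move_to(self, point):
        print(f"moving to {point}")
    def close_gripper(self):
        print("closing gripper")
        time.sleep(0.1)   # allow time for the gripper to fully close
    def open_gripper(self):
        print("opening gripper")
        time.sleep(0.1)   # allow time for the part to be released

def pick_place(robot):
    robot.move_to("P1")    # general safe position
    robot.move_to("P2")    # approach to the pick position
    robot.move_to("P3")    # pick position
    robot.close_gripper()  # must complete before the next move starts
    robot.move_to("P4")    # approach to the place position
    robot.move_to("P5")    # place position
    robot.open_gripper()
    robot.move_to("P1")    # return to the safe position and finish

pick_place(Robot())

In a real deployment, the Robot stub would be replaced by a post processor or driver that translates these calls into the controller's own language (VAL3, RC+, ROBOFORTH, and so on), which is exactly the brand-abstraction role described above.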
Robot software
[ "Engineering" ]
1,967
[ "Robotics software", "Robotics engineering" ]
4,056,473
https://en.wikipedia.org/wiki/Petite%20size
In fashion and clothing, a petite size is a standard clothing size designed specifically for women 163 cm (5 ft 4 in) and under. This categorization is not based solely on a woman's height, but also takes into account the proportions of her body. Petite sizes cater to body shapes that typically have shorter limb lengths, narrower shoulders, and smaller bust sizes. This standard is predominantly recognized in the U.S., but is also utilized in some other regions around the world. Many clothing stores, including both specialty boutiques and major retail chains, offer a range of petite-sized styles to accommodate the needs of women 163 cm (5 ft 4 in) or shorter. These styles aim to provide a better fit than regular sizes, which are often tailored based on the proportions of taller individuals. Petite clothing may include tops, bottoms, dresses, and outerwear, as well as specialty items like petite activewear and swimwear. Some brands also offer petite plus sizes, catering to women who are both shorter in height and larger in body size. Frequency The average height of an American woman is roughly 162 cm (just under 5 ft 4 in), close to the petite cutoff; average heights in the UK and throughout Europe are broadly similar. History The word 'petite' is the feminine form of the French adjective petit, which translates to 'small' or 'short' in English. Petite sizing originated in the 1940s, when US fashion designer Hannah Troy noticed that many women did not fit into standard-size clothing. She studied the measurements of women who had completed military service during World War II and found that only 8% fit the proportions of standard sizing, with most women being 'short in the waist'. She developed a clothing range called 'Troyfigure', which was based on a 'junior' fit but with a more mature style. This range became very popular and is considered the beginning of petite fashion. The word 'petite' was chosen by Troy because it "just had a nice ring to it". See also Children's clothing Clothing sizes US standard clothing size EN 13402 References NHANES survey CDC Anthropometric Reference Data for Children and Adults: U.S. Population, 1999–2002 - Page 20, Table 19. Sizes in clothing Fashion design
Petite size
[ "Physics", "Mathematics", "Engineering" ]
451
[ "Sizes in clothing", "Fashion design", "Physical quantities", "Quantity", "Size", "Design" ]
4,056,685
https://en.wikipedia.org/wiki/Man%20and%20the%20Moon
"Man and the Moon" is an episode of Disneyland, which originally aired on December 28, 1955. It was directed by Disney animator Ward Kimball. The show begins with a humorous look with a man's fascination with the Moon through animation. This segment features characteristics of the Moon depicted from William Shakespeare and children's nursery rhymes to lunar superstitions and scientific research. Then Kimball comes on with some information on the Moon, supplemented by graphics. Kimball then introduces Dr. Wernher von Braun, who discusses plans for a trip around the Moon. Dr. Wernher von Braun was employed as a technical consultant on this film by Walt Disney, and on a number of other Disney films. He had a great knowledge of rockets, as he had helped to develop the V-2 rocket while working for Nazi Germany. Finally, a live action simulation from inside and outside the crewed ship Lunar Recon Ship RM-1 dramatizes what such an expedition might be like, including an almost-disastrous hit by a very small meteor. Towards the end, this film presents what seems to be a bit of "sci-fi"; as the RM-1, crossing the Moon's night side, approaches the night/day terminator, high radiation is suddenly detected, and a flare fired over the area reveals what looks like a rectangular double wall, or the ruins thereof, extending out from a crater; strangely, none of the crew remark on it, and the unusual radiation is never mentioned again. This episode later reaired in 1959 under a new title: "Tomorrow the Moon". This episode was preceded by "Man in Space" and followed by "Mars and Beyond". It was repeated on June 13, 1956, and September 25, 1959. Home media The episode was released on May 18, 2004, on Walt Disney Treasures: Tomorrow Land. See also "Man Will Conquer Space Soon!" "Our Friend the Atom" References External links Walt Disney anthology television series episodes 1955 American television episodes Works about the Moon Spaceflight Space advocacy Television episodes directed by Ward Kimball Wernher von Braun Works about astronauts
Man and the Moon
[ "Astronomy" ]
429
[ "Spaceflight", "Outer space" ]
4,056,695
https://en.wikipedia.org/wiki/Binary%20moment%20diagram
A binary moment diagram (BMD) is a generalization of the binary decision diagram (BDD) to linear functions over domains such as booleans (like BDDs), but also to integers or to real numbers. BMDs can deal with Boolean functions with complexity comparable to BDDs, but they also handle easily some functions that a BDD deals with very inefficiently, most notably multiplication. The most important properties of BMDs are that, as with BDDs, each function has exactly one canonical representation, and many operations can be efficiently performed on these representations. The main features that differentiate BMDs from BDDs are the use of linear instead of pointwise diagrams, and weighted edges. The rules that ensure the canonicity of the representation are:
Decisions over variables higher in the ordering may only point to decisions over variables lower in the ordering.
No two nodes may be identical (in normalization, all references to one of two such nodes should be replaced by references to the other).
No node may have all decision parts equivalent to 0 (links to such nodes should be replaced by links to their always part).
No edge may have weight zero (all such edges should be replaced by direct links to 0).
The weights of the edges should be coprime. Without this rule or some equivalent of it, a function could have many representations; for example, 2x + 2 could be represented as 2 · (1 + x) or 1 · (2 + 2x).
Pointwise and linear decomposition In pointwise decomposition, as in BDDs, at each branch point we store the results of all branches separately. An example of such a decomposition for the integer function 2x + y is to store the result for each assignment separately: 0 for (x, y) = (0, 0), 1 for (0, 1), 2 for (1, 0), and 3 for (1, 1). In linear decomposition we provide instead a default value and a difference: 2x + y is represented by a default value of 0, plus 2 if x is true, plus 1 if y is true. It can easily be seen that the latter (linear) representation is much more efficient in the case of additive functions: when we add many elements, the latter representation needs only O(n) elements, while the former (pointwise), even with sharing, needs exponentially many.
Edge weights Another extension is the use of weights on the edges. The value of a function at a given node is the sum of the true nodes below it (the node under always, and possibly the decided node), times the edges' weights. For example, the product of two three-bit numbers, (4x₁ + 2x₂ + x₃) · (4y₁ + 2y₂ + y₃), can be represented as:
Node 1 (result): always 1× value of node 2; if x₁, add 4× value of node 4
Node 2: always 1× value of node 3; if x₂, add 2× value of node 4
Node 3: always 0; if x₃, add 1× value of node 4
Node 4: always 1× value of node 5; if y₁, add +4
Node 5: always 1× value of node 6; if y₂, add +2
Node 6: always 0; if y₃, add +1
Without weighted edges, a much more complex representation would be required:
Node 1 (result): always value of node 2; if x₁, value of node 4
Node 2: always value of node 3; if x₂, value of node 7
Node 3: always 0; if x₃, value of node 10
Node 4: always value of node 5; if y₁, add +16
Node 5: always value of node 6; if y₂, add +8
Node 6: always 0; if y₃, add +4
Node 7: always value of node 8; if y₁, add +8
Node 8: always value of node 9; if y₂, add +4
Node 9: always 0; if y₃, add +2
Node 10: always value of node 11; if y₁, add +4
Node 11: always value of node 12; if y₂, add +2
Node 12: always 0; if y₃, add +1
References Graph data structures Formal methods
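A small sketch of the size difference between the two decompositions discussed above; the dictionary-based representations are illustrative stand-ins for real diagram nodes, not an actual BMD package.

from itertools import product

def pointwise(weights):
    """Pointwise (BDD-style) table: one entry per assignment, 2**n in total."""
    n = len(weights)
    return {bits: sum(w * b for w, b in zip(weights, bits))
            for bits in product((0, 1), repeat=n)}

def linear(weights):
    """Linear (BMD-style) form: a default value plus one weighted term per variable."""
    return {"default": 0, **{f"x{i + 1}": w for i, w in enumerate(weights)}}

# f(x1, ..., x10) = 1*x1 + 2*x2 + ... + 10*x10, an additive function.
weights = list(range(1, 11))
print(len(pointwise(weights)))  # 1024 entries: exponential in the number of variables
print(len(linear(weights)))     # 11 entries: default value plus one weight per variable

For the 2x + y example above, the linear form is simply {default: 0, x: 2, y: 1}, while the pointwise table must store all four assignments.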
Binary moment diagram
[ "Engineering" ]
690
[ "Software engineering", "Formal methods" ]
4,057,221
https://en.wikipedia.org/wiki/Anatomical%20terms%20of%20motion
Motion, the process of movement, is described using specific anatomical terms. Motion includes movement of organs, joints, limbs, and specific sections of the body. The terminology used describes this motion according to its direction relative to the anatomical position of the body parts involved. Anatomists and others use a unified set of terms to describe most of the movements, although other, more specialized terms are necessary for describing unique movements such as those of the hands, feet, and eyes. In general, motion is classified according to the anatomical plane it occurs in. Flexion and extension are examples of angular motions, in which two axes of a joint are brought closer together or moved further apart. Rotational motion may occur at other joints, for example the shoulder, and is described as internal or external. Other terms, such as elevation and depression, describe movement above or below the horizontal plane. Many anatomical terms derive from Latin terms with the same meaning. Classification Motions are classified according to the anatomical planes they occur in, although movement is more often than not a combination of different motions occurring simultaneously in several planes. Motions can be split into categories relating to the nature of the joints involved: Gliding motions occur between flat surfaces, such as in the intervertebral discs or between the carpal bones of the wrist and the metacarpal bones of the hand. Angular motions occur over synovial joints and cause the angles between bones to either increase or decrease. Rotational motions move a structure in a rotational motion along a longitudinal axis, such as turning the head to look to either side. Apart from this, motions can also be divided into: Linear motions (or translatory motions), which move in a line between two points. Rectilinear motion is motion in a straight line between two points, whereas curvilinear motion is motion following a curved path. Angular motions (or rotary motions), which occur when an object moves around another object, increasing or decreasing the angle between them. The different parts of the object do not move the same distance. Examples include movement of the knee, where the lower leg changes angle compared to the femur, or movements of the ankle. The study of movement in the human body is known as kinesiology. A categoric list of movements and the muscles involved can be found at list of movements of the human body. Abnormal motion The prefix hyper- is sometimes added to describe movement beyond the normal limits, such as in hypermobility, hyperflexion or hyperextension. The range of motion describes the total extent of movement that a joint is able to perform. For example, if a part of the body such as a joint is overstretched or "bent backwards" because of exaggerated extension motion, then it can be described as hyperextended. Hyperextension increases the stress on the ligaments of a joint, and is not always because of a voluntary movement. It may be a result of accidents, falls, or other causes of trauma. It may also be used in surgery, such as in temporarily dislocating joints for surgical procedures. Or it may be used as a pain compliance method to force a person to take a certain action, such as allowing a police officer to take him into custody. General motion These are general terms that can be used to describe most movements the body makes. Most terms have a clear opposite, and so are treated in pairs.
Flexion and extension Flexion and extension are movements that affect the angle between two parts of the body. These terms come from the Latin words with the same meaning. Flexion is a bending movement that decreases the angle between a segment and its proximal segment. For example, bending the elbow, or clenching a hand into a fist, are examples of flexion. When a person is sitting down, the knees are flexed. When a joint can move forward and backward, such as the neck and trunk, flexion is movement in the anterior direction. When the chin is against the chest, the neck is flexed, and the trunk is flexed when a person leans forward. Flexion of the shoulder or hip is movement of the arm or leg forward. Extension is the opposite of flexion, a straightening movement that increases the angle between body parts. For example, when standing up, the knees are extended. When a joint can move forward and backward, such as the neck and trunk, extension is movement in the posterior direction. Extension of the hip or shoulder moves the arm or leg backward. Even for other upper extremity joints – elbow and wrist – backward movement results in extension. The knee, ankle, and wrist are exceptions, where the distal end has to move in the anterior direction for it to be called extension. For the toes, flexion is curling them downward whereas extension is uncurling them or raising them. Abduction and adduction Abduction is the motion of a structure away from the midline while adduction is motion towards the center of the body. The center of the body is defined as the midsagittal or longitudinal plane. These terms come from Latin words with similar meanings: ab- is a Latin prefix indicating "away from", ad- indicates "towards", and ducere means "to draw or pull". Abduction is a motion that pulls a structure or part away from the midline of the body, carried out by one or more abductor muscles. In the case of fingers and toes, it is spreading the digits apart, away from the centerline of the hand or foot. For example, raising the arms up, such as when tightrope-walking, is an example of abduction at the shoulder. When the legs are splayed at the hip, such as when doing a star jump or doing a split, the legs are abducted at the hip. Adduction is a motion that pulls a structure or part towards the midline of the body, or towards the midline of a limb, carried out by one or more adductor muscles. In the case of fingers and toes, it is bringing the digits together, towards the centerline of the hand or foot. Dropping the arms to the sides, and bringing the knees together, are examples of adduction. Adduction of the wrist is also known as ulnar deviation, which moves the hand towards the ulnar styloid (or, towards the little finger). Abduction of the wrist is also called radial deviation, which moves the hand towards the radial styloid (or, towards the thumb). Elevation and depression Elevation and depression are movements above and below the horizontal. The words derive from the Latin terms with similar meanings. Elevation is movement in a superior direction. For example, shrugging is an example of elevation of the scapula. Depression is movement in an inferior direction, the opposite of elevation. Rotation Rotation of body parts may be internal or external, that is, towards or away from the center of the body. Internal rotation (medial rotation or intorsion) is rotation towards the axis of the body, carried out by internal rotators.
External rotation (lateral rotation or extorsion) is rotation away from the center of the body, carried out by external rotators. Internal and external rotators make up the rotator cuff, a group of muscles that help to stabilize the shoulder joint. Other Anterograde and retrograde flow refer to movement of blood or other fluids in a normal (anterograde) or abnormal (retrograde) direction. Circumduction is a conical movement of a body part, such as a ball and socket joint or the eye. Circumduction is a combination of flexion, extension, adduction and abduction. Circumduction can be best performed at ball and socket joints, such as the hip and shoulder, but may also be performed by other parts of the body such as fingers, hands, feet, and head. For example, circumduction occurs when spinning the arm when performing a serve in tennis or bowling a cricket ball. Reduction is a motion returning a bone to its original state, such as a shoulder reduction following shoulder dislocation, or reduction of a hernia. Special motion Hands and feet Flexion and extension of the foot Dorsiflexion and plantar flexion refer to extension or flexion of the foot at the ankle. These terms refer to flexion in the direction of the "back" of the foot, which is the upper surface of the foot when standing, and flexion in the direction of the sole of the foot. These terms are used to resolve confusion, as technically extension of the joint is dorsiflexion, which could be considered counter-intuitive as the motion reduces the angle between the foot and the leg. Dorsiflexion is where the toes are brought closer to the shin. This decreases the angle between the dorsum of the foot and the leg. For example, when walking on the heels the ankle is described as being in dorsiflexion. Similarly, dorsiflexion helps in assuming a deep squat position. Plantar flexion or plantarflexion is the movement which decreases the angle between the sole of the foot and the back of the leg; for example, the movement when depressing a car pedal or standing on tiptoes. Flexion and extension of the hand Palmarflexion and dorsiflexion refer to flexion (palmarflexion) or extension (dorsiflexion) of the hand at the wrist. These terms refer to flexion between the hand and the body's dorsal surface, which in the anatomical position is considered the back of the arm, and flexion between the hand and the body's palmar surface, which in the anatomical position is considered the anterior side of the arm. The directions of these terms are opposite to those in the foot because of the embryological rotation of the limbs in opposite directions. Palmarflexion is flexion of the wrist towards the palm and ventral side of the forearm. Dorsiflexion is hyperextension of the wrist joint, towards the dorsal side of the forearm. Pronation and supination Pronation and supination refer generally to the prone (facing down) or supine (facing up) positions. In the extremities, they are the rotation of the forearm or foot so that in the standard anatomical position the palm or sole is facing anteriorly when in supination and posteriorly when in pronation. As an example, when a person is typing on a computer keyboard, their hands are pronated; when washing their face, they are supinated. Pronation at the forearm is a rotational movement in which the hand and upper arm are turned so the thumbs point towards the body. When the forearm and hand are supinated, the thumbs point away from the body.
Pronation of the foot is turning of the sole outwards, so that weight is borne on the medial part of the foot. Supination of the forearm occurs when the forearm or palm is rotated outwards. Supination of the foot is turning of the sole of the foot inwards, shifting weight to the lateral edge. Inversion and eversion Inversion and eversion are movements that tilt the sole of the foot away from (eversion) or towards (inversion) the midline of the body. Eversion is the movement of the sole of the foot away from the median plane. Inversion is the movement of the sole towards the median plane. For example, inversion describes the motion when an ankle is twisted. Eyes Unique terminology is also used to describe the eye. For example: A version is an eye movement involving both eyes moving synchronously and symmetrically in the same direction. Torsion is eye movement that affects the vertical axis of the eye, such as the movement made when looking into the nose. Jaw and teeth Occlusion is motion of the mandible towards the maxilla that brings the teeth into contact. Protrusion and retrusion are sometimes used to describe the anterior (protrusion) and posterior (retrusion) movement of the jaw. Other Other terms include: Nutation and counternutation are movements of the sacrum defined by the rotation of the promontory downwards and anteriorly, as with lumbar extension (nutation); or upwards and posteriorly, as with lumbar flexion (counternutation). Opposition is the movement that involves grasping of the thumb and fingers. Protraction and retraction are anterior (protraction) or posterior (retraction) movements, such as of the arm at the shoulders, although these terms have been criticised as non-specific. Reciprocal motion is alternating motion in opposing directions. Reposition is restoring an object to its natural condition. See also Anatomical terms of location Anatomical terms of muscle Anatomical terms of bone Anatomical terms of neuroanatomy Notes References Sources External links Hypermuscle: Muscles in Action at med.umich.edu
Anatomical terms of motion
[ "Biology" ]
2,629
[ "Behavior", "Anatomical terms of motion", "Motor control" ]
4,057,273
https://en.wikipedia.org/wiki/Microsoft%20Management%20Console
Microsoft Management Console (MMC) is a component of Microsoft Windows that provides system administrators and advanced users an interface for configuring and monitoring the system. It was first introduced in 1998 with the Option Pack for Windows NT 4.0 and later came pre-bundled with Windows 2000 and its successors. Snap-ins and consoles The management console can host Component Object Model components called snap-ins. Most of Microsoft's administration tools are implemented as MMC snap-ins. Third parties can also implement their own snap-ins using the MMC's application programming interfaces published on the Microsoft Developer Network's web site. Snap-ins are registered in the [HKEY_CLASSES_ROOT]\{CLSID} and [HKEY_LOCAL_MACHINE\Software\Microsoft\MMC\Snapins] registry keys. A snap-in combined with MMC is called a management saved console: a file with the .msc extension that can be launched using the syntax mmc path\filename.msc [/a] [/64] [/32]. Common snap-ins The most widely used MMC component, Computer Management, appears in the "Administrative Tools" folder in the Control Panel, under "System and Security" in Category View. Computer Management actually consists of a collection of MMC snap-ins, including the Device Manager, Disk Defragmenter, Internet Information Services (if installed), Disk Management, Event Viewer, Local Users and Groups (except in the home editions of Windows), Shared Folders, the Services snap-in for managing Windows services, Certificates, and other tools. Computer Management can also be pointed at another Windows machine altogether, allowing for monitoring and configuration of other computers on the local network that the user has access to. Other MMC snap-ins in common use include: Microsoft Exchange Server (up to version 2010) Active Directory Users and Computers, Domains and Trusts, and Sites and Services Group Policy Management, including the Local Security Policy snap-in; included on all versions of Windows 2000 and later (Home editions of Microsoft Windows disable this snap-in) Performance snap-in, for monitoring system performance and metrics Version history MMC 1.0, shipped with Windows NT 4.0 Option Pack. MMC 1.1, shipped with SQL Server 7.0 and Systems Management Server 2.0, and also made available as a download for Windows 9x and Windows NT. New features: Snap-in taskpads Wizard-style property sheets Ability to load extensions to a snap-in at run-time HTML Help support MMC 1.2, shipped with Windows 2000. New features: Support for Windows Installer and Group Policy Filtered views Exporting list views to a text file Persistence of user-set column layouts (i.e. widths, ordering, visibility and sorting of lists) MMC 2.0, shipped with Windows XP and Windows Server 2003. New features: Operating system-defined visual styles Automation object model, allowing the capabilities of an MMC snap-in to be used programmatically from outside MMC itself (e.g. from a script) 64-bit snap-ins Console Taskpads View Extensions Multilanguage User Interface help files MMC 3.0, shipped with Windows Vista, Windows Server 2003 SP2, Windows XP SP3 and every subsequent version of Windows up to Windows 11. Also downloadable for Windows XP SP2 and Windows Server 2003 SP1.
New features: A new "Actions" pane, displayed on the right-hand side of the MMC user interface, that lists the available actions for the currently selected node Support for developing snap-ins with the .NET Framework, including Windows Forms Reduced amount of code required to create a snap-in Improved debugging capabilities Asynchronous user interface model (MMC 3.0 snap-ins only) True Color Icon Support (Windows Vista only) New Add/Remove Snap-in UI DEP is always enforced; all snap-ins must be DEP-aware. See also List of Microsoft Windows components Microsoft Windows Windows PowerShell References External links Microsoft Management Console documentation Windows components Microsoft application programming interfaces System administration Windows 2000
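As a concrete illustration of the launch syntax described above, a few of the standard saved consoles that ship with Windows can be started directly from a command prompt. The switch meanings are those listed earlier (/a for author mode, /32 and /64 to select MMC bitness); the .msc file names below are the standard ones, though exact availability varies by Windows edition.

```
:: Saved consoles (.msc files) are launched through mmc.exe:
mmc compmgmt.msc        :: Computer Management
mmc devmgmt.msc         :: Device Manager
mmc eventvwr.msc /a     :: Event Viewer, opened in author mode for editing
mmc diskmgmt.msc /32    :: Disk Management, forcing the 32-bit MMC
```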
Microsoft Management Console
[ "Technology" ]
854
[ "Information systems", "System administration" ]
4,057,287
https://en.wikipedia.org/wiki/Maximum%20term%20method
The maximum-term method is a consequence of the large numbers encountered in statistical mechanics. It states that under appropriate conditions the logarithm of a summation is essentially equal to the logarithm of the maximum term in the summation. These conditions are (see also the proof below) that (1) the number of terms in the sum is large and (2) the terms themselves scale exponentially with this number. A typical application is the calculation of a thermodynamic potential from a partition function. These functions often contain terms with factorials, which scale as $N! \approx N^N e^{-N}$ (Stirling's approximation). Proof Consider the sum $S = \sum_{N=1}^{M} T_N$, where $T_N > 0$ for all $N$. Since all the terms are positive, the value of $S$ must be greater than the value of the largest term, $T_{\max}$, and less than the product of the number of terms and the value of the largest term. So we have $T_{\max} \le S \le M\,T_{\max}$. Taking the logarithm gives $\ln T_{\max} \le \ln S \le \ln T_{\max} + \ln M$. As frequently happens in statistical mechanics, we assume that $\ln T_{\max}$ is $O(M)$: see Big O notation. For large $M$, $\ln M$ is negligible with respect to $M$ itself. Then we can see that $\ln S$ is bounded from above and below by $\ln T_{\max}$ plus a correction of at most $\ln M$, and so $\ln S \approx \ln T_{\max}$. References D.A. McQuarrie, Statistical Mechanics. New York: Harper & Row, 1976. T.L. Hill, An Introduction to Statistical Thermodynamics. New York: Dover Publications, 1987. Physical chemistry Statistical mechanics
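A quick numerical check (not part of the original article) makes the bound concrete. For the binomial sum $\sum_k \binom{M}{k} = 2^M$, the logarithm of the largest term tracks $\ln S$ to within the $\ln(\text{number of terms})$ correction from the proof above:

```python
from math import comb, log

# Numeric illustration of the maximum-term method on S = sum_k C(M, k) = 2**M.
# The proof gives: ln T_max <= ln S <= ln T_max + ln(number of terms).
for M in (10, 100, 1000):
    terms = [comb(M, k) for k in range(M + 1)]
    ln_S = log(sum(terms))        # equals M * ln 2
    ln_Tmax = log(max(terms))     # largest term is C(M, M // 2)
    print(M, round(ln_S, 2), round(ln_Tmax, 2),
          round(ln_S - ln_Tmax, 2), "<=", round(log(M + 1), 2))
```

As M grows, the gap ln S − ln T_max grows only logarithmically while both logarithms grow linearly in M, which is exactly the regime in which the method is valid.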
Maximum term method
[ "Physics", "Chemistry" ]
292
[ "Statistical mechanics stubs", "Applied and interdisciplinary physics", "nan", "Statistical mechanics", "Physical chemistry", "Physical chemistry stubs" ]
4,057,311
https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27informatique%20et%20de%20math%C3%A9matiques%20appliqu%C3%A9es%20de%20Grenoble
The École nationale supérieure d'informatique et de mathématiques appliquées, or Ensimag, is a prestigious French grande école located in Grenoble, France. Ensimag is part of the Institut polytechnique de Grenoble (Grenoble INP). The school specializes in computer science, applied mathematics and telecommunications. Students are usually admitted to Ensimag competitively following two years of undergraduate studies in classes préparatoires aux grandes écoles. Studies at Ensimag are of three years' duration and lead to the French degree of "Diplôme National d'Ingénieur" (equivalent to a master's degree). Ensimag was founded in 1960 by French mathematician Jean Kuntzmann. About 250 students graduate from Ensimag each year in its different degrees, and the school counts more than 5500 alumni worldwide. Ensimag graduate specializations Ensimag's curriculum offers a variety of compulsory and elective advanced courses, making up specific profiles. Most of the common core courses are taught in the first year and the first semester of the second year, allowing students to acquire the basics in applied mathematics and informatics. Students then choose a graduate specialization. International Master's programs Master of Science in Informatics at Grenoble (MoSIG) Since September 2008, an English-language joint degree program with the University of Grenoble provides a highly competitive, two-year graduate Master's degree program. Master in Communication Systems Engineering Offered jointly by Ensimag and Politecnico di Torino (Italy), this four-semester course aims to train engineers to specialize in the design and management of communication systems, ranging from simple point-to-point transmissions to diversified telecommunications networks. Research at Ensimag Ensimag students can perform research work as part of their curriculum in second year, as well as a second-year internship and their end of studies project in a research laboratory. 15% of Ensimag graduates choose to pursue a Ph.D. Rankings The school is one of the top French engineering institutions. In the field of computer science, Ensimag was ranked first in France by Codingame, as measured by the position of its students in the national admission examinations and by the ranking of companies hiring its students and specialized media. References External links (fr) The official Ensimag website (en) The official Ensimag website (en) Ensimag English-language Master's degree programs Informatique et de mathématiques appliquées de Grenoble Grenoble Tech Ensimag Universities and colleges established in 1960 1960 establishments in France
École nationale supérieure d'informatique et de mathématiques appliquées de Grenoble
[ "Technology" ]
605
[ "Information technology", "Information technology education" ]
4,057,413
https://en.wikipedia.org/wiki/Selection%20coefficient
Selection coefficient, usually denoted by the letter s, is a measure used in population genetics to quantify the relative fitness of a genotype compared to other genotypes. Selection coefficients are central to the quantitative description of evolution, since fitness differences determine the change in genotype frequencies attributable to selection. The selection coefficient is typically calculated using fitness values. The fitness (w) of a genotype is a measure of its reproductive success, often expressed as a fraction of the maximum reproductive success in the population. The formula to calculate the selection coefficient for a genotype is $s = 1 - w$, where $w$ is the relative fitness of the genotype, ranging between 0 and 1. Suppose we have two genotypes, A and B, with relative fitness values of 1 (most fit, standard reference) and 0.8. The selection coefficient for A is $s_A = 1 - 1 = 0$ (no selection against this genotype); the selection coefficient for B is $s_B = 1 - 0.8 = 0.2$ (this indicates that the B genotype has a 20% reduction in fitness compared to the A genotype). For example, the lactose-tolerant allele spread from very low frequencies to high frequencies in less than 9000 years of farming, with an estimated selection coefficient of 0.09–0.19 for a Scandinavian population. Though this selection coefficient might seem like a very small number, over evolutionary time the favored alleles accumulate in the population and become more and more common, potentially reaching fixation. See also Evolutionary pressure References Population genetics Evolutionary biology
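A hedged sketch (not from the source article) of a deterministic haploid selection model shows how a selection coefficient of this magnitude drives an allele toward fixation. Here p is the frequency of the favored allele with fitness 1, the alternative allele has fitness 1 − s, and the starting and target frequencies are illustrative assumptions:

```python
# One round of selection rescales allele frequencies by relative fitness,
# normalized by the population mean fitness.
def generations_to_frequency(s, p0=0.01, target=0.99):
    p, gens = p0, 0
    while p < target:
        mean_fitness = p * 1.0 + (1 - p) * (1 - s)
        p = p * 1.0 / mean_fitness   # frequency of the favored allele
        gens += 1
    return gens

for s in (0.09, 0.19):   # the range estimated above for lactose tolerance
    print(s, generations_to_frequency(s), "generations")
```

Under these assumptions the favored allele goes from 1% to 99% in on the order of 50–100 generations, i.e. a few thousand years at human generation times, consistent with the sub-9000-year spread cited above.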
Selection coefficient
[ "Biology" ]
294
[ "Evolutionary biology" ]
4,058,023
https://en.wikipedia.org/wiki/Vanity%20sizing
Vanity sizing, or size inflation, is the phenomenon of ready-to-wear clothing of the same nominal size becoming bigger in physical size over time. This has been documented primarily in the United States and the United Kingdom. The use of US standard clothing sizes by manufacturers as the official guidelines for clothing sizes was abandoned in 1983. In the United States, although clothing size standards exist (i.e., ASTM), most companies no longer use them. Size inconsistency has existed since at least 1937. In Sears' 1937 catalog, a dress with a given bust measurement was labeled a size 14; by 1967 the same bust measurement was labeled a size 8, and by 2011 a size 0. Some argue that vanity sizing is designed to satisfy wearers' wishes to appear thin and feel better about themselves. This is consistent with the theory of compensatory self-enhancement: seeing a smaller label promotes a more positive self-image. In the 2000s, American designer Nicole Miller introduced size 0 because of her brand's strong California presence and to satisfy the requests of many Asian American customers in that state; the brand also introduced subzero sizes for naturally petite women. The increasing physical size of clothing with the same nominal size was a further reason for introducing sizes 0, 00, and subzero. The UK's Chief Medical Officer has suggested that vanity sizing has contributed to the normalisation of obesity in society. In 2003, a study that measured over 1,000 pairs of women's pants found that pants from more expensive brands tended to be smaller than those from cheaper brands with the same nominal size. US pattern sizing measurements: 1931–2015. US misses standard sizing measurements: 1958–2011. Men's clothing Although more common in women's apparel, vanity sizing occurs in men's clothing as well. For example, men's pants are traditionally marked with two numbers, "waist" (waist circumference) and "inseam" (distance from the crotch to the hem of the pant). While the nominal inseam is fairly accurate, the nominal waist may be quite a bit smaller than the actual waist, in US sizes. In 2010, Abram Sauer of Esquire measured several pairs of dress pants with the same nominal waist size at different US retailers and found that the actual measurements varied widely, running larger than labeled. The phenomenon has also been noticed in the United Kingdom, where a 2011 study found misleading labels on more than half of checked items of clothing; the worst offenders in that study understated waist circumferences considerably. London-based market analyst Mintel says that the number of men reporting varying waistlines from store to store doubled between 2005 and 2011. Effects on consumers Vanity sizing is a common fashion industry practice that involves labeling clothes with smaller sizes than their actual measurements would indicate. Experts believe that this practice targets consumers' preferences and perceptions. Although it may seem like a marketing tactic to boost sales, it potentially affects consumers' psychological well-being, purchasing behavior, and self-image. Research studies show that vanity sizing is a key factor in a consumer's ideal body image and self-esteem. One study claims that smaller-size labels can promote more positive mental imagery about one's self-image, leading consumers to view themselves as thinner and more attractive.
One example the article provides is a hypothetical situation in which a consumer is presented with two t-shirts that look the same, the only difference being the size label: one marked medium and one marked large. The article explains that consumers would be more willing to pick the t-shirt labeled medium because it makes them feel better about their figure: "Consumers' decisions are influenced by framing; that is, the way that the good is presented to the consumers." However, this may depend on an individual's self-esteem about their appearance; those with lower self-esteem prefer smaller labels more. In another article, five studies were conducted, and all concluded that larger clothing sizes drew a more negative response from consumers. Nevertheless, it is also important to consider the impact of vanity sizing on the plus-size women's community. Finding clothes that fit and match personal style is challenging for this group of women. In an academic paper that analyzes marketing for the plus-size community, the author notes that "For most retailers plus size consumers are not their main target market unless they exclusively sell plus size clothes. But for the most part plus size consumers do fit into some kind of target market on every other attribute except for sizing." This can be frustrating for this community, making the group feel excluded and exposing the ethical problem of failing to serve different communities. In another article, which focuses on the plus-size community's satisfaction with retail clothing, the author states, "Additionally, 62% of plus-size women experience difficulty finding desirable clothing styles, and 56% report that it is challenging to find good quality plus-size clothing." Embracing diversity in clothing sizing and promoting inclusivity is crucial to addressing the issues that maintain sizing discrimination. Not only does vanity sizing play a part in how consumers view themselves, but it can also shape a consumer's purchasing habits. Oftentimes, consumers lean toward clothing labels with smaller sizes based on how those clothes complement their figure. Retailers may adopt vanity sizing practices, which can make smaller labeled sizes more appealing to particular consumers. Another study tests whether perceived deception connects consumer cynicism with consumer outcomes. The article discusses how wearing vanity sizes boosts consumers' self-esteem and adds value to the product that would not be present in items labeled with their actual size. Larger clothing sizes may influence consumers to purchase more clothing items to improve their self-esteem, and at times shoppers may choose sizes that make them feel better about themselves. The flip side of vanity sizing, the study concluded, is that it only sometimes stops people from buying clothes; it can make people want to spend more overall, because buying clothes can make them feel better about themselves. This suggests a connection between shopping habits and one's ideal body figure. While vanity sizing may seem advantageous for retailers, customers who feel deceived by it may lose trust, altering their perception of a brand.
An article analyzing the psychological process behind vanity sizing says that retailers must be truthful about labeled information because this information is essential for consumers: inaccurate sizing can lead to negative views of retailers and can undermine consumers' future reliance on sizing information. The article adds that retailers should be truthful about sizing information if they want to sustain positive customer relationships, since practices around size labeling can otherwise produce dissatisfaction with a purchase or reduced trust. Retailers must therefore be transparent in their sizing practices to address consumers' distrust and perceived deception; consumers tend to appreciate such transparency, which builds trust and avoids perceptions of deception. Vanity sizing most often affects women's clothing brands, especially moderately priced designer brands targeting younger adult female consumers. One article tests the idea that women's apparel sizes vary depending on price. The study found that moderately expensive women's apparel tends to be larger than discount brands, while designer brands are more expensive and tend to be smaller than non-designer brands. In contrast, the study found that children's and men's apparel brands show no evidence of vanity sizing. The fashion industry's sizing standards may reflect gender disparities or pose challenges when conforming to marketing strategies or ideal societal body images. See also Body image The Association for Women with Large Feet EN 13402, emerging European and international clothing standard from 2007, based on body measurements in centimeters US standard clothing size, an inch-based standard based on body measurements, which gained little traction and was replaced by vanity sizing from the 1980s Inclusive sizing Notes References Further reading External links Press release. Clothing controversies Fashion design Sizes in clothing Advertising techniques
Vanity sizing
[ "Physics", "Mathematics", "Engineering" ]
1,719
[ "Sizes in clothing", "Fashion design", "Physical quantities", "Quantity", "Size", "Design" ]
4,058,047
https://en.wikipedia.org/wiki/Grand%20antiprism
In geometry, the grand antiprism or pentagonal double antiprismoid is a uniform 4-polytope (4-dimensional uniform polytope) bounded by 320 cells: 20 pentagonal antiprisms and 300 tetrahedra. It is an anomalous, non-Wythoffian uniform 4-polytope, discovered in 1965 by Conway and Guy. Topologically, under its highest symmetry, the pentagonal antiprisms have D5d symmetry and there are two types of tetrahedra, one with S4 symmetry and one with Cs symmetry. Alternate names Pentagonal double antiprismoid (Norman W. Johnson) Gap (Jonathan Bowers: short for grand antiprism) Structure 20 stacked pentagonal antiprisms occur in two disjoint rings of 10 antiprisms each. The antiprisms in each ring are joined to each other via their pentagonal faces. The two rings are mutually perpendicular, in a structure similar to a duoprism. The 300 tetrahedra join the two rings to each other, and are laid out in a 2-dimensional arrangement topologically equivalent to the 2-torus and the ridge of the duocylinder. These can be further divided into three sets: 100 face mate to one ring, 100 face mate to the other ring, and 100 are centered at the exact midpoint of the duocylinder and edge mate to both rings. This latter set forms a flat torus and can be "unrolled" into a flat 10×10 square array of tetrahedra that meet only at their edges and vertices. In addition, the 300 tetrahedra can be partitioned into 10 disjoint Boerdijk–Coxeter helices of 30 cells each that close back on each other. The two pentagonal antiprism tubes, plus the 10 BC helices, form an irregular discrete Hopf fibration of the grand antiprism that Hopf maps to the faces of a pentagonal antiprism. The two tubes map to the two pentagonal faces and the 10 BC helices map to the 10 triangular faces. The structure of the grand antiprism is analogous to that of the 3-dimensional antiprisms. However, the grand antiprism is the only convex uniform analogue of the antiprism in 4 dimensions (although the 16-cell may be regarded as a regular analogue of the digonal antiprism). The only nonconvex uniform 4-dimensional antiprism analogue uses pentagrammic crossed-antiprisms instead of pentagonal antiprisms, and is called the pentagrammic double antiprismoid. Vertex figure The vertex figure of the grand antiprism is a sphenocorona or dissected regular icosahedron: a regular icosahedron with two adjacent vertices removed. In their place, 8 triangles are replaced by a pair of trapezoids with edge lengths φ, 1, 1, 1 (where φ is the golden ratio), joined together along their edge of length φ, to give a tetradecahedron whose faces are the 2 trapezoids and the 12 remaining equilateral triangles. Construction The grand antiprism can be constructed by diminishing the 600-cell: subtracting 20 pyramids whose bases are three-dimensional pentagonal antiprisms. Conversely, the two rings of pentagonal antiprisms in the grand antiprism may be triangulated by 10 tetrahedra joined to the triangular faces of each antiprism, and a circle of 5 tetrahedra between every pair of antiprisms, joining the 10 tetrahedra of each, yielding 150 tetrahedra per ring. These combined with the 300 tetrahedra that join the two rings together yield the 600 tetrahedra of the 600-cell. This diminishing may be realized by removing two rings of 10 vertices from the 600-cell, each lying in mutually orthogonal planes. Each ring of removed vertices creates a stack of pentagonal antiprisms on the convex hull.
This relationship is analogous to how a pentagonal antiprism can be constructed from an icosahedron by removing two opposite vertices, thereby removing 5 triangles from each of the opposite 'poles' of the icosahedron and leaving the 10 equatorial triangles and two pentagons on the top and bottom. (The snub 24-cell can also be constructed by another diminishing of the 600-cell, removing 24 icosahedral pyramids. Equivalently, this may be realized as taking the convex hull of the vertices remaining after 24 vertices, corresponding to those of an inscribed 24-cell, are removed from the 600-cell.) Alternatively, the grand antiprism can be constructed from the decagonal ditetragoltriate (the convex hull of two perpendicular nonuniform 10-10 duoprisms whose two decagons are in the golden ratio) via an alternation process. The decagonal prisms alternate into pentagonal antiprisms, and the rectangular trapezoprisms alternate into tetrahedra, with two new regular tetrahedra (representing a non-corealmic triangular bipyramid) created at the deleted vertices. This is the only uniform solution for the p-gonal double antiprismoids, alongside its conjugate, the pentagrammic double antiprismoid from the decagrammic ditetragoltriate. Projections These are two perspective projections, projecting the polytope into a hypersphere, and applying a stereographic projection into 3-space. See also 600-cell Snub 24-cell Uniform 4-polytope Duoprism Duocylinder Notes References Kaleidoscopes: Selected Writings of H.S.M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 23) H.S.M. Coxeter, Regular and Semi-Regular Polytopes II, [Math. Zeit. 188 (1985) 559-591] 2.8 The Grand Antiprism John H. Conway, Heidi Burgiel, Chaim Goodman-Strauss, The Symmetries of Things 2008, (Chapter 26) The Grand Antiprism Grand Antiprism and Quaternions Mehmet Koca, Mudhahir Al-Ajmi, Nazife Ozdes Koca (2009); Mehmet Koca et al. 2009 J. Phys. A: Math. Theor. 42 495201 External links In the Belly of the Grand Antiprism (middle section, describing the analogy with the icosahedron and the pentagonal antiprism) Uniform 4-polytopes
Grand antiprism
[ "Physics" ]
1,402
[ "Uniform 4-polytopes", "Uniform polytopes", "Symmetry" ]
4,058,119
https://en.wikipedia.org/wiki/Two%20Generals%27%20Problem
In computing, the Two Generals' Problem is a thought experiment meant to illustrate the pitfalls and design challenges of attempting to coordinate an action by communicating over an unreliable link. In the experiment, two generals are only able to communicate with one another by sending a messenger through enemy territory. The experiment asks how they might reach an agreement on the time to launch an attack, while knowing that any messenger they send could be captured. The Two Generals' Problem appears often as an introduction to the more general Byzantine Generals problem in introductory classes about computer networking (particularly with regard to the Transmission Control Protocol, where it shows that TCP cannot guarantee state consistency between endpoints and why this is the case), though it applies to any type of two-party communication where failures of communication are possible. A key concept in epistemic logic, this problem highlights the importance of common knowledge. Some authors also refer to this as the Two Generals' Paradox, the Two Armies Problem, or the Coordinated Attack Problem. The Two Generals' Problem was the first computer communication problem to be proven to be unsolvable. An important consequence of this proof is that generalizations like the Byzantine Generals problem are also unsolvable in the face of arbitrary communication failures, thus providing a base of realistic expectations for any distributed consistency protocols. Definition Two armies, each led by a different general, are preparing to attack a fortified city. The armies are encamped near the city, each in its own valley. A third valley separates the two camps, and the only way for the two generals to communicate is by sending messengers through that valley. Unfortunately, the valley is occupied by the city's defenders and there's a chance that any given messenger sent through the valley will be captured. While the two generals have agreed that they will attack, they haven't agreed upon a time for an attack. It is required that the two generals have their armies attack the city simultaneously to succeed, lest the lone attacking army die trying. They must thus communicate with each other to decide on a time to attack and to agree to attack at that time, and each general must know that the other general knows that they have agreed to the attack plan. Because acknowledgement of message receipt can be lost as easily as the original message, a potentially infinite series of messages is required to come to consensus. The thought experiment involves considering how they might go about coming to a consensus. In its simplest form, one general is known to be the leader, decides on the time of the attack, and must communicate this time to the other general. The problem is to come up with algorithms that the generals can use, including sending messages and processing received messages, that can allow them to correctly conclude: Yes, we will both attack at the agreed-upon time. Allowing that it is quite simple for the generals to come to an agreement on the time to attack (i.e. one successful message with a successful acknowledgement), the subtlety of the Two Generals' Problem is in the impossibility of designing algorithms for the generals to use to safely agree to the above statement. Illustrating the problem The first general may start by sending a message: "Attack at 0900 on August 4." However, once dispatched, the first general has no idea whether or not the messenger got through.
This uncertainty may lead the first general to hesitate to attack due to the risk of being the sole attacker. To be sure, the second general may send a confirmation back to the first: "I received your message and will attack at 0900 on August 4." However, the messenger carrying the confirmation could face capture, and the second general may hesitate, knowing that the first might hold back without the confirmation. Further confirmations may seem like a solution—let the first general send a second confirmation: "I received your confirmation of the planned attack at 0900 on August 4." However, this new messenger from the first general is liable to be captured, too. Thus, it quickly becomes evident that no matter how many rounds of confirmation are made, there is no way to guarantee the second requirement that each general is sure the other has agreed to the attack plan. Both generals will always be left wondering whether their last messenger got through. Proof Because this protocol is deterministic, suppose there is a sequence of a fixed number of messages, one or more successfully delivered and one or more not. The assumption is that there should be a shared certainty for both generals to attack. Consider the last such message that was successfully delivered. If that last message had not been successfully delivered, then one general at least (presumably the receiver) would decide not to attack. From the viewpoint of the sender of that last message, however, the sequence of messages sent and delivered is exactly the same as it would have been, had that message been delivered. Since the protocol is deterministic, the general sending that last message will still decide to attack. We've now created a situation where the suggested protocol leads one general to attack and the other not to attack—contradicting the assumption that the protocol was a solution to the problem. A non-deterministic protocol with a potentially variable message count can be compared to an edge-labeled finite tree, where each node in the tree represents an explored example up to a specified point. A protocol that terminates before sending any messages is represented by a tree containing only a root node. The edges from a node to each child are labeled with the messages sent in order to reach the child state. Leaf nodes represent points at which the protocol terminates. Suppose there exists a non-deterministic protocol P which solves the Two Generals' Problem. Then, by a similar argument to the one used for fixed-length deterministic protocols above, P' must also solve the Two Generals' Problem, where the tree representing P' is obtained from that for P by removing all leaf nodes and the edges leading to them. Since P is finite, it then follows that the protocol that terminates before sending any messages would solve the problem. But clearly, it does not. Therefore, a non-deterministic protocol that solves the problem cannot exist. Engineering approaches A pragmatic approach to dealing with the Two Generals' Problem is to use schemes that accept the uncertainty of the communications channel and not attempt to eliminate it, but rather mitigate it to an acceptable degree. For example, the first general could send 100 messengers, anticipating that the probability of all being captured is low. With this approach, the first general will attack no matter what, and the second general will attack if any message is received. 
Alternatively, the first general could send a stream of messages and the second general could send acknowledgments to each, with each general feeling more comfortable with every message received. As seen in the proof, however, neither can be certain that the attack will be coordinated. There is no algorithm that they can use (e.g. attack if more than four messages are received) that will be certain to prevent one from attacking without the other. Also, the first general can send a marking on each message saying it is message 1, 2, 3 ... of n. This method will allow the second general to know how reliable the channel is and send an appropriate number of messages back to ensure a high probability of at least one message being received. If the channel can be made to be reliable, then one message will suffice and additional messages do not help. The last is as likely to get lost as the first. Assuming that the generals must sacrifice lives every time a messenger is sent and intercepted, an algorithm can be designed to minimize the number of messengers required to achieve the maximum amount of confidence the attack is coordinated. To save them from sacrificing hundreds of lives to achieve very high confidence in coordination, the generals could agree to use the absence of messengers as an indication that the general who began the transaction has received at least one confirmation and has promised to attack. Suppose it takes a messenger 1 minute to cross the danger zone; allowing 200 minutes of silence to pass after confirmations have been received will achieve extremely high confidence while not sacrificing messenger lives. In this case, messengers are used only when a party has not received the attack time. At the end of 200 minutes, each general can reason: "I have not received an additional message for 200 minutes; either 200 messengers failed to cross the danger zone, or the other general has confirmed and committed to the attack and has confidence I will too". History The Two Generals' Problem and its impossibility proof was first published by E. A. Akkoyunlu, K. Ekanadham, and R. V. Huber in 1975 in "Some Constraints and Trade-offs in the Design of Network Communications", where it is described starting on page 73 in the context of communication between two groups of gangsters. This problem was given the name the Two Generals Paradox by Jim Gray in 1978 in "Notes on Data Base Operating Systems" starting on page 465. This reference is widely given as a source for the definition of the problem and the impossibility proof, though both were published previously as mentioned above. References See also Consensus algorithm Distributed computing problems Theory of computation Thought experiments
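A short Monte Carlo sketch (an illustration, not part of the original problem statement) quantifies the pragmatic scheme above, under the assumption that each messenger independently gets through with probability p. General 1 always attacks and sends n messengers; general 2 attacks if at least one arrives, so the run is uncoordinated exactly when all n are captured:

```python
import random

def uncoordinated_rate(n_messengers, p=0.5, trials=100_000):
    """Estimate how often general 2 never learns the attack time."""
    failures = 0
    for _ in range(trials):
        delivered = any(random.random() < p for _ in range(n_messengers))
        if not delivered:          # all messengers captured
            failures += 1
    return failures / trials

for n in (1, 5, 10, 100):
    print(n, uncoordinated_rate(n))  # approaches (1 - p)**n, never exactly 0
```

The failure rate can be driven arbitrarily low but never to zero, which is precisely the gap between the engineering mitigation and the impossibility result proved above.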
Two Generals' Problem
[ "Mathematics" ]
1,887
[ "Mathematical problems", "Distributed computing problems", "Computational problems" ]
4,058,157
https://en.wikipedia.org/wiki/Roughing%20filter
Roughing filters provide pretreatment for turbid water, or simple, low-maintenance treatment when high water quality is not needed. External links SANDEC page Blue filter Inc. (commercial site) Rejuvenation of SSF using HRF technique Appropriate technology Water filters
Roughing filter
[ "Chemistry" ]
57
[ "Water treatment", "Water filters", "Filters" ]
4,059,023
https://en.wikipedia.org/wiki/Search%20engine
A search engine is a software system that provides hyperlinks to web pages and other relevant information on the Web in response to a user's query. The user inputs a query within a web browser or a mobile app, and the search results are often a list of hyperlinks, accompanied by textual summaries and images. Users also have the option of limiting the search to specific types of results, such as images, videos, or news. For a search provider, its engine is part of a distributed computing system that can encompass many data centers throughout the world. The speed and accuracy of an engine's response to a query are based on a complex system of indexing that is continuously updated by automated web crawlers. This can include data mining the files and databases stored on web servers, but some content is not accessible to crawlers. There have been many search engines since the dawn of the Web in the 1990s, but Google Search became the dominant one in the 2000s and has remained so. It currently has a 90% global market share. Other search engines with smaller market shares include Bing at 4%, Yandex at 2%, and Yahoo at 1%; search engines not listed each have less than a 3% market share. The business of websites improving their visibility in search results, known as search engine marketing and search engine optimization, has thus largely focused on Google. History Pre-1990s In 1945, Vannevar Bush described an information retrieval system that would allow a user to access a great expanse of information, all at a single desk. He called it a memex. He described the system in an article titled "As We May Think" that was published in The Atlantic Monthly. The memex was intended to give a user the capability to overcome the ever-increasing difficulty of locating information in ever-growing centralized indices of scientific work. Vannevar Bush envisioned libraries of research with connected annotations, which are similar to modern hyperlinks. Link analysis eventually became a crucial component of search engines through algorithms such as Hyper Search and PageRank. 1990s: Birth of search engines The first internet search engines predate the debut of the Web in December 1990: WHOIS user search dates back to 1982, and the Knowbot Information Service multi-network user search was first implemented in 1989. The first well documented search engine that searched content files, namely FTP files, was Archie, which debuted on 10 September 1990. Prior to September 1993, the World Wide Web was entirely indexed by hand. There was a list of webservers edited by Tim Berners-Lee and hosted on the CERN webserver. One snapshot of the list in 1992 remains, but as more and more web servers went online the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!". The first tool used for searching content (as opposed to users) on the Internet was Archie. The name stands for "archive" without the "v". It was created by Alan Emtage, a computer science student at McGill University in Montreal, Quebec, Canada. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually. The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead.
Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor. In the summer of 1993, no search engine existed for the web, though numerous specialized catalogs were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3Catalog, the web's first primitive search engine, released on September 2, 1993. In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called "Wandex". The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format. JumpStation (created in December 1993 by Jonathon Fletcher) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered. One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any web page, which has become the standard for all major search engines since. It was also the first search engine that was widely known by the public. Also, in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor. The first popular search engine on the Web was Yahoo! Search. The first product from Yahoo!, founded by Jerry Yang and David Filo in January 1994, was a Web directory called Yahoo! Directory. In 1995, a search function was added, allowing users to search Yahoo! Directory. It became one of the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages. Soon after, a number of search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Information seekers could also browse the directory instead of doing a keyword-based search. In 1996, Robin Li developed the RankDex site-scoring algorithm for search engine results page ranking and received a US patent for the technology. It was the first search engine that used hyperlinks to measure the quality of websites it was indexing, predating the very similar algorithm patent filed by Google two years later in 1998.
Larry Page referenced Li's work in some of his U.S. patents for PageRank. Li later used his Rankdex technology for the Baidu search engine, which was founded by him in China and launched in 2000. In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead, Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite. Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the Internet. Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in March 2000. 2000s–present: Post dot-com bubble Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an algorithm called PageRank, as was explained in the paper The Anatomy of a Large-Scale Hypertextual Web Search Engine written by Sergey Brin and Larry Page, the founders of Google. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link there, on the premise that good or desirable pages are linked to more than others. Larry Page's patent for PageRank cites Robin Li's earlier RankDex patent as an influence. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal. In fact, the Google search engine became so popular that spoof engines emerged such as Mystery Seeker. By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! switched to Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions. Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999, the site began to display listings from Looksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot). Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology. Active search engine crawlers include those of Google, Sogou, Baidu, Bing, Gigablast, Mojeek, DuckDuckGo and Yandex. Approach A search engine maintains the following processes in near real time: Web crawling Indexing Searching Web search engines get their information by web crawling from site to site. The "spider" checks for the standard filename robots.txt, addressed to it.
The robots.txt file contains directives for search spiders, telling them which pages to crawl and which pages not to crawl. After checking for robots.txt and either finding it or not, the spider sends certain information back to be indexed depending on many factors, such as the titles, page content, JavaScript, Cascading Style Sheets (CSS), headings, or its metadata in HTML meta tags. After a certain number of pages crawled, amount of data indexed, or time spent on the website, the spider stops crawling and moves on. "[N]o web crawler may actually crawl the entire reachable web. Due to infinite websites, spider traps, spam, and other exigencies of the real web, crawlers instead apply a crawl policy to determine when the crawling of a site should be deemed sufficient. Some websites are crawled exhaustively, while others are crawled only partially".

Indexing means associating words and other definable tokens found on web pages to their domain names and HTML-based fields. The associations are made in a public database, made available for web search queries. A query from a user can be a single word, multiple words or a sentence. The index helps find information relating to the query as quickly as possible. Some of the techniques for indexing and caching are trade secrets, whereas web crawling is a straightforward process of visiting all sites on a systematic basis.

Between visits by the spider, the cached version of the page (some or all the content needed to render it) stored in the search engine working memory is quickly sent to an inquirer. If a visit is overdue, the search engine can just act as a web proxy instead. In this case, the page may differ from the search terms indexed. The cached page holds the appearance of the version whose words were previously indexed, so a cached version of a page can be useful to the website when the actual page has been lost, but this problem is also considered a mild form of linkrot.

Typically when a user enters a query into a search engine it is a few keywords. The index already has the names of the sites containing the keywords, and these are instantly obtained from the index. The real processing load is in generating the web pages that are the search results list: every page in the entire list must be weighted according to information in the indexes. Then the top search result item requires the lookup, reconstruction, and markup of the snippets showing the context of the keywords matched. These are only part of the processing each search results web page requires, and further pages (next to the top) require more of this post-processing.

Beyond simple keyword lookups, search engines offer their own GUI- or command-driven operators and search parameters to refine the search results. These give the user the controls needed for the feedback loop of filtering and re-weighting the results, given the initial pages returned by the first search. For example, since 2007 the Google.com search engine has allowed users to filter by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range. It is also possible to weight by date because each page has a modification time. Most search engines support the use of the Boolean operators AND, OR and NOT to help end users refine the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search.
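A minimal sketch of how these Boolean operators can be evaluated against an inverted index, assuming a toy corpus and whitespace tokenization (the documents and names below are illustrative, not drawn from any real engine):

    from collections import defaultdict

    docs = {
        1: "web crawlers fetch pages for the index",
        2: "search engine crawling and indexing",
        3: "boolean search over an inverted index",
    }

    # Build the inverted index: token -> set of document ids containing it.
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for token in text.lower().split():
            index[token].add(doc_id)

    all_ids = set(docs)

    def AND(a, b):
        return a & b

    def OR(a, b):
        return a | b

    def NOT(a):
        return all_ids - a

    # Evaluate the query "search AND index" as a set intersection.
    print(sorted(AND(index["search"], index["index"])))  # -> [3]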
The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases searched for.

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing texts it locates. This second form relies much more heavily on the computer itself to do the bulk of the work.

Most Web search engines are commercial ventures supported by advertising revenue, and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search-related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.

Local search
Local search is the process of optimizing the efforts of local businesses, with a focus on keeping their listings consistent across searches. It is important because many people determine where they plan to go and what to buy based on their searches.

Market share
Google is by far the world's most used search engine, with a market share of 90%; the next most used search engines are Bing at 4%, Yandex at 2% and Yahoo! at 1%. Other search engines not listed have less than a 3% market share each. In 2024, Google's dominance was ruled an illegal monopoly in a case brought by the US Department of Justice.

Russia and East Asia
In Russia, Yandex has a market share of 62.6%, compared to Google's 28.3%. Yandex is the second most used search engine on smartphones in Asia and Europe. In China, Baidu is the most popular search engine. South Korea-based search portal Naver is used for 62.8% of online searches in the country. Yahoo! Japan and Yahoo! Taiwan are the most popular choices for Internet searches in Japan and Taiwan, respectively. China is one of the few countries where Google is not in the top three web search engines for market share. Google was previously more popular in China, but withdrew significantly after a disagreement with the government over censorship and a cyberattack. Bing, however, is in the top three web search engines with a market share of 14.95%, while Baidu leads with 49.1%.

Europe
Most countries' markets in the European Union are dominated by Google, except for the Czech Republic, where Seznam is a strong competitor. The search engine Qwant is based in Paris, France, from where it attracts most of its 50 million monthly registered users.
Search engine bias
Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide and the underlying assumptions about the technology. These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results) and of political processes (e.g., the removal of search results to comply with local laws). For example, Google will not surface certain neo-Nazi websites in France and Germany, where Holocaust denial is illegal.

Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries. Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons. Several scholars have studied the cultural changes triggered by search engines, and the representation of certain controversial topics in their results, such as terrorism in Ireland, climate change denial, and conspiracy theories.

Customized results and filter bubbles
Concern has been raised that search engines such as Google and Bing provide customized results based on the user's activity history, leading to what Eli Pariser termed in 2011 filter bubbles, a form of echo chamber. The argument is that search engines and social media platforms use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint. According to Pariser, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Since this problem was identified, competing search engines have emerged that seek to avoid it by not tracking or "bubbling" users, such as DuckDuckGo. However, many scholars have questioned Pariser's view, finding that there is little evidence for the filter bubble. On the contrary, a number of studies trying to verify the existence of filter bubbles have found only minor levels of personalisation in search, that most people encounter a range of views when browsing online, and that Google News tends to promote mainstream established news outlets.

Religious search engines
The global growth of the Internet and electronic media in the Arab and Muslim world during the last decade has encouraged Islamic adherents in the Middle East and the Asian sub-continent to attempt their own search engines: filtered search portals that enable users to perform safe searches. Going beyond the usual safe-search filters, these Islamic web portals categorize websites as either "halal" or "haram", based on interpretation of Sharia law. ImHalal came online in September 2011. Halalgoogling came online in July 2013. These use haram filters on the collections from Google and Bing (and others).
While a lack of investment and the slow pace of technology in the Muslim world have hindered progress and thwarted the success of an Islamic search engine targeting Islamic adherents as its main consumers, projects like Muxlim (a Muslim lifestyle site) received millions of dollars from investors such as Rite Internet Ventures, and it also faltered. Other religion-oriented search engines are Jewogle, the Jewish version of Google, and the Christian search engine SeekFind.org, which filters out sites that attack or degrade the faith.

Search engine submission
Web search engine submission is a process in which a webmaster submits a website directly to a search engine. While search engine submission is sometimes presented as a way to promote a website, it generally is not necessary because the major search engines use web crawlers that will eventually find most web sites on the Internet without assistance. Webmasters can either submit one web page at a time, or they can submit the entire site using a sitemap, but it is normally only necessary to submit the home page, as search engines are able to crawl a well-designed website. There are two remaining reasons to submit a web site or web page to a search engine: to add an entirely new web site without waiting for a search engine to discover it, and to have a web site's record updated after a substantial redesign.

Some search engine submission software not only submits websites to multiple search engines, but also adds links to websites from its own pages. This could appear helpful in increasing a website's ranking, because external links are one of the most important factors determining a website's ranking. However, John Mueller of Google has stated that this "can lead to a tremendous number of unnatural links for your site" with a negative impact on site ranking.

Technology
Archie
The first web search engine was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal. The author originally wanted to call the program "archives", but had to shorten it to comply with the Unix world standard of assigning programs and files short, cryptic names such as grep, cat, troff, sed, awk, perl, and so on.

The primary method of storing and retrieving files was via the File Transfer Protocol (FTP). This was (and still is) a system that specified a common way for computers to exchange files over the Internet. It works like this: some administrator decides that he wants to make files available from his computer. He sets up a program on his computer, called an FTP server. When someone on the Internet wants to retrieve a file from this computer, he or she connects to it via another program called an FTP client. Any FTP client program can connect with any FTP server program as long as the client and server programs both fully follow the specifications set forth in the FTP protocol.

Initially, anyone who wanted to share a file had to set up an FTP server in order to make the file available to others. Later, "anonymous" FTP sites became repositories for files, allowing all users to post and retrieve them. Even with archive sites, many important files were still scattered on small FTP servers. These files could be located only by the Internet equivalent of word of mouth: somebody would post an e-mail to a message list or a discussion forum announcing the availability of a file. Archie changed all that.
It combined a script-based data gatherer, which fetched site listings of anonymous FTP files, with a regular expression matcher for retrieving file names matching a user query. In other words, Archie's gatherer scoured FTP sites across the Internet and indexed all of the files it found. Its regular expression matcher provided users with access to its database.

Veronica
In 1993, the University of Nevada System Computing Services group developed Veronica. It was created as a type of searching device similar to Archie but for Gopher files. Another Gopher search service, called Jughead, appeared a little later, probably for the sole purpose of rounding out the comic-strip triumvirate. Jughead is an acronym for Jonzy's Universal Gopher Hierarchy Excavation and Display, although, like Veronica, it is probably safe to assume that the creator backed into the acronym. Jughead's functionality was largely identical to Veronica's, although it appears to have been a little rougher around the edges.

The Lone Wanderer
The World Wide Web Wanderer, developed by Matthew Gray in 1993, was the first robot on the Web and was designed to track the Web's growth. Initially, the Wanderer counted only Web servers, but shortly after its introduction, it started to capture URLs as it went along. The database of captured URLs became the Wandex, the first web database.

Matthew Gray's Wanderer created quite a controversy at the time, partially because early versions of the software ran rampant through the Net and caused a noticeable netwide performance degradation. This degradation occurred because the Wanderer would access the same page hundreds of times a day. The Wanderer soon amended its ways, but the controversy over whether robots were good or bad for the Internet remained.

In response to the Wanderer, Martijn Koster created Archie-Like Indexing of the Web, or ALIWEB, in October 1993. As the name implies, ALIWEB was the HTTP equivalent of Archie, and because of this, it is still unique in many ways. ALIWEB does not have a web-searching robot. Instead, webmasters of participating sites post their own index information for each page they want listed. The advantage of this method is that users get to describe their own site, and a robot does not run about eating up Net bandwidth. The disadvantages of ALIWEB are more of a problem today. The primary disadvantage is that a special indexing file must be submitted. Most users do not understand how to create such a file, and therefore they do not submit their pages. This leads to a relatively small database, which means that users are less likely to search ALIWEB than one of the large bot-based sites. This Catch-22 has been somewhat offset by incorporating other databases into the ALIWEB search, but it still does not have the mass appeal of search engines such as Yahoo! or Lycos.

Excite
Excite, initially called Architext, was started by six Stanford undergraduates in February 1993. Their idea was to use statistical analysis of word relationships in order to provide more efficient searches through the large amount of information on the Internet. Their project was fully funded by mid-1993. Once funding was secured, they released a version of their search software for webmasters to use on their own web sites. At the time, the software was called Architext, but it now goes by the name of Excite for Web Servers.

Excite, launched in 1995, was the first serious commercial search engine. It was developed at Stanford and was purchased by @Home for $6.5 billion.
In 2001, Excite and @Home went bankrupt and InfoSpace bought Excite for $10 million. Some of the first analysis of web searching was conducted on search logs from Excite.

Yahoo!
In April 1994, two Stanford University Ph.D. candidates, David Filo and Jerry Yang, created some pages that became rather popular. They called the collection of pages Yahoo! Their official explanation for the name choice was that they considered themselves to be a pair of yahoos. As the number of links grew and their pages began to receive thousands of hits a day, the team created ways to better organize the data. In order to aid in data retrieval, Yahoo! (www.yahoo.com) became a searchable directory. The search feature was a simple database search engine. Because Yahoo! entries were entered and categorized manually, Yahoo! was not really classified as a search engine. Instead, it was generally considered to be a searchable directory. Yahoo! has since automated some aspects of the gathering and classification process, blurring the distinction between engine and directory.

The Wanderer captured only URLs, which made it difficult to find things that were not explicitly described by their URL. Because URLs are rather cryptic to begin with, this did not help the average user. Searching Yahoo! or the Galaxy was much more effective because they contained additional descriptive information about the indexed sites.

Lycos
In July 1994, Michael Mauldin, on leave from Carnegie Mellon University, developed the Lycos search engine.

Types of web search engines
Search engines on the web are sites enriched with the facility to search content stored on other sites. There are differences in the way various search engines work, but they all perform three basic tasks:
Finding and selecting full or partial content based on the keywords provided.
Maintaining an index of the content and referencing the locations they find.
Allowing users to look for words or combinations of words found in that index.
The process begins when a user enters a query statement into the system through the interface provided.

There are basically three types of search engines: those powered by robots (called crawlers, ants or spiders), those powered by human submissions, and those that are a hybrid of the two.

Crawler-based search engines use automated software agents (called crawlers) that visit a Web site, read the information on the actual site, read the site's meta tags, and also follow the links that the site connects to, performing indexing on all linked Web sites as well. The crawler returns all that information to a central repository, where the data is indexed. The crawler will periodically return to the sites to check for any information that has changed. The frequency with which this happens is determined by the administrators of the search engine.

Human-powered search engines rely on humans to submit information that is subsequently indexed and catalogued. Only information that is submitted is put into the index.

In both cases, when you query a search engine to locate information, you are actually searching through the index that the search engine has created; you are not actually searching the Web. These indices are giant databases of information that is collected, stored, and subsequently searched. This explains why sometimes a search on a commercial search engine, such as Yahoo! or Google, will return results that are, in fact, dead links.
Since the search results are based on the index, if the index has not been updated since a Web page became invalid, the search engine treats the page as still an active link even though it no longer is. It will remain that way until the index is updated.

So why will the same search on different search engines produce different results? Part of the answer is that not all indices are going to be exactly the same. It depends on what the spiders find or what the humans submitted. But more importantly, not every search engine uses the same algorithm to search through the indices. The algorithm is what the search engines use to determine the relevance of the information in the index to what the user is searching for.

One of the elements that a search engine algorithm scans for is the frequency and location of keywords on a Web page. Those with higher frequency are typically considered more relevant. But search engine technology is becoming sophisticated in its attempt to discourage what is known as keyword stuffing, or spamdexing.

Another common element that algorithms analyze is the way that pages link to other pages in the Web. By analyzing how pages link to each other, an engine can both determine what a page is about (if the keywords of the linked pages are similar to the keywords on the original page) and whether that page is considered "important" and deserving of a boost in ranking. Just as the technology is becoming increasingly sophisticated to ignore keyword stuffing, it is also becoming more savvy to webmasters who build artificial links into their sites in order to build an artificial ranking.

Modern web search engines are highly intricate software systems that employ technology that has evolved over the years. There are a number of sub-categories of search engine software that are separately applicable to specific 'browsing' needs. These include web search engines (e.g. Google), database or structured data search engines (e.g. Dieselpoint), and mixed search engines or enterprise search. The more prevalent search engines, such as Google and Yahoo!, utilize hundreds of thousands of computers to process trillions of web pages in order to return fairly well-targeted results. Due to this high volume of queries and text processing, the software is required to run in a highly distributed environment with a high degree of redundancy.

Another category of search engines is scientific search engines, which search scientific literature. The best known example is Google Scholar. Researchers are working on improving search engine technology by making engines understand the content of articles, such as extracting theoretical constructs or key research findings.

See also
References
Further reading
Bing Liu (2007). Web Data Mining: Exploring Hyperlinks, Contents and Usage Data. Springer.
Bar-Ilan, J. (2004). The use of Web search engines in information science research. ARIST, 38, 231–288.
Yeo, ShinJoung (2023). Behind the Search Box: Google and the Global Internet Industry. U of Illinois Press. ISBN 0252087127.
External links
Search engine software
History of the Internet
Internet terminology
Internet properties established in 1993
Canadian inventions
Search engine
[ "Technology" ]
7,028
[ "Computing terminology", "Internet terminology" ]
4,059,082
https://en.wikipedia.org/wiki/Kadowaki%E2%80%93Woods%20ratio
The Kadowaki–Woods ratio is the ratio of A, the coefficient of the quadratic term of the resistivity, to γ², the square of the coefficient of the linear term of the specific heat. This ratio is found to be a constant for transition metals and for heavy-fermion compounds, although at different values.

In 1968 M. J. Rice pointed out that the coefficient A should vary predominantly as the square of the linear electronic specific heat coefficient γ; in particular he showed that the ratio A/γ² is material independent for the pure 3d, 4d and 5d transition metals. Heavy-fermion compounds are characterized by very large values of A and γ. Kadowaki and Woods showed that A/γ² is material-independent within the heavy-fermion compounds, and that it is about 25 times larger than in the aforementioned transition metals.

It was shown by K. Miyake, T. Matsuura and C. M. Varma that in local Fermi liquids the quasiparticle mass and lifetime are linked in a way consistent with the A/γ² ratio. This suggests that the Kadowaki–Woods ratio reflects a relation between quasiparticle mass and lifetime renormalisation as a function of electron-electron interaction strength. According to the theory of electron-electron scattering, the ratio A/γ² does indeed contain several non-universal factors, including the square of the strength of the effective electron-electron interaction. Since in general the interactions differ in nature from one group of materials to another, the same values of A/γ² are only expected within a particular group.

In 2005 Hussey proposed a re-scaling of A/γ² to account for unit cell volume, dimensionality, carrier density and multi-band effects. In 2009 Jacko, Fjaerestad, and Powell demonstrated f_dx(n)A/γ² to have the same value in transition metals, heavy fermions, organics and oxides, with A varying over 10 orders of magnitude, where f_dx(n) may be written in terms of the dimensionality of the system, the electron density and, in layered systems, the interlayer spacing or the interlayer hopping integral.

See also
Wilson ratio
References
Correlated electrons
Condensed matter physics
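For concreteness, a sketch of the quantities involved, using the standard low-temperature forms (the notation is assumed here, not quoted from the references):

    \rho(T) \approx \rho_0 + A\,T^2, \qquad
    \frac{C(T)}{T} \approx \gamma + \beta\,T^2, \qquad
    R_{\mathrm{KW}} = \frac{A}{\gamma^2},

so A is read off from the low-temperature resistivity and γ from the Sommerfeld term of the specific heat.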
Kadowaki–Woods ratio
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
465
[ "Matter", "Materials science stubs", "Fermions", "Phases of matter", "Materials science", "Condensed matter physics", "Correlated electrons", "Condensed matter stubs", "Subatomic particles" ]
4,059,423
https://en.wikipedia.org/wiki/WS-MetadataExchange
WS-MetadataExchange is a web services protocol specification, published by BEA Systems, IBM, Microsoft, and SAP. WS-MetadataExchange is part of the WS-Federation roadmap and is designed to work in conjunction with WS-Addressing, WSDL and WS-Policy to allow retrieval of metadata about a Web Services endpoint. It uses a SOAP message to request metadata, and so goes beyond the basic technique of appending "?wsdl" to a service name's URL.

See also
List of web service specifications
Web services
References
External links
W3C Working Draft of WS-MetadataExchange
WS-MetadataExchange Specification
Web service specifications
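A rough sketch of such a metadata request in Python follows. The endpoint URL is a placeholder, and the namespaces and SOAP action shown follow the 2004/09 member submission as best understood; a given service may require a different protocol version, so the envelope should be treated as illustrative rather than normative.

    import urllib.request

    ENDPOINT = "http://example.com/service"  # hypothetical metadata-capable endpoint

    # A minimal GetMetadata request in a SOAP 1.2 envelope with
    # WS-Addressing headers; namespaces per the 2004/09 submission.
    envelope = f"""<?xml version="1.0" encoding="utf-8"?>
    <s:Envelope xmlns:s="http://www.w3.org/2003/05/soap-envelope"
                xmlns:a="http://www.w3.org/2005/08/addressing">
      <s:Header>
        <a:Action>http://schemas.xmlsoap.org/ws/2004/09/mex/GetMetadata/Request</a:Action>
        <a:To>{ENDPOINT}</a:To>
      </s:Header>
      <s:Body>
        <GetMetadata xmlns="http://schemas.xmlsoap.org/ws/2004/09/mex"/>
      </s:Body>
    </s:Envelope>"""

    req = urllib.request.Request(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "application/soap+xml; charset=utf-8"},
    )
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))  # returned metadata (e.g. WSDL), if supported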
WS-MetadataExchange
[ "Technology" ]
149
[ "Computing stubs", "Computer network stubs" ]
4,060,313
https://en.wikipedia.org/wiki/Digital%20economy%20rankings
Digital economy rankings was published as a one-off exercise by the Economist Intelligence Unit as the follow-up to their previous e-readiness rankings. This was done to reflect the increasing influence of ICT in economic (and social) progress. The report was titled "Beyond e-readiness". The Economist has not published a follow-up to the 2010 report, leaving it substantially outdated. A much more comprehensive and up-to-date index is the UN's ICT Development Index.

See also
e-Government
Government Broadband Index
ICT Development Index
References
Information economy
Digital divide
International rankings
Economist Intelligence Unit
IT infrastructure
Digital economy rankings
[ "Technology" ]
123
[ "Information technology", "IT infrastructure" ]
14,337,630
https://en.wikipedia.org/wiki/May%20spectral%20sequence
In mathematics, the May spectral sequence is a spectral sequence introduced by J. Peter May. It is used for calculating the initial term of the Adams spectral sequence, which is in turn used for calculating the stable homotopy groups of spheres. The May spectral sequence is described in detail in the references.

References
Spectral sequences
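Schematically, and as a sketch under standard conventions (not quoted from the references), the spectral sequence computes Ext over the Steenrod algebra A from Ext over its associated graded with respect to the May filtration:

    E_2 = \operatorname{Ext}_{\operatorname{gr} A}(\mathbb{F}_p, \mathbb{F}_p)
          \;\Longrightarrow\; \operatorname{Ext}_{A}(\mathbb{F}_p, \mathbb{F}_p),

the abutment being the initial (E_2) term of the Adams spectral sequence mentioned above.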
May spectral sequence
[ "Mathematics" ]
61
[ "Topology stubs", "Topology" ]
14,337,655
https://en.wikipedia.org/wiki/Charlestown%2C%20Fife
Charlestown (also known as Charlestown-on-Forth) is a village in Fife, Scotland. It lies on the north shore of the Firth of Forth, around west of Limekilns and south-west of Dunfermline. The village is known for its historic 18th century lime kilns and its Georgian planned housing.

History
Charlestown was established in 1756 by Charles Bruce, 5th Earl of Elgin. The planned village is laid out in the shape of the letters C and E, for Charles Elgin. It was established as a harbour town for the shipment of coal mined on Lord Elgin's Fife estates, and for the production of lime. The harbour's outer basin was built around 1840. In 1887, on the occasion of Queen Victoria's Golden Jubilee, the Queen's Hall was built at the village centre, to designs by Robert Rowand Anderson. Shipbuilding was carried on at Charlestown in the 19th century, as well as ship-breaking. Some of the German Imperial Fleet were brought here from Scapa Flow after World War I to be broken up.

The Lime Kilns
The fourteen massive lime kilns, built of dressed sandstone, are a remarkable feature of Charlestown. They are regarded as one of the most important Industrial Revolution remains in Scotland, and indeed the United Kingdom, being Scottish Category A listed buildings. Built into the hillside below the village, they form a stone facade 110 metres long by 10 metres high. They are in a generally stable state of preservation, with many features relating to the operation of the kilns still in situ. Most of the kilns were re-faced, probably in the 19th century.

The kilns were built by Charles Bruce, 5th Earl of Elgin, in the late 18th century. Building dates quoted vary, but Pevsner states that the first nine were built 1777 to 1778 and the last five in 1792. They were the largest group of lime kilns in Scotland, producing a third of all lime production, and were particularly important to agriculture for soil improvement, but also for building work to produce mortar, plaster and other lime-based products. Through the late eighteenth and nineteenth centuries, the kilns were part of a major industrial complex of the time, including coal mining, ironworking and salt extraction. Coal and limestone were brought in from the local quarries, also on the Earl's estate. The adjacent harbour was likewise built by the Earl and used for transporting the lime products, limestone and, importantly, coal. There were local railway lines, as well as a railway link to Dunfermline. Around the lime kilns there were many ancillary buildings; these have almost entirely gone. The operation ran down from the 1930s and finally closed in 1956. The site is owned by the Broomhall Estate.

The Planned Village
Another distinctive feature of Charlestown is the early planned village, established again by Charles Bruce, the 5th Earl, in 1756. It is, however, quite an unusual "plan", as the street pattern was reputedly laid out as the letters C and E, for Charles Elgin. The village was part of the Improving Movement in Scotland that led to the establishment of some 500 planned villages and small towns throughout the country between the mid 18th century and the mid 19th century, although McWilliam, writing earlier in 1975, gives a lower figure for the same period of some 200 towns. Charlestown, however, has the distinction of perhaps being one of the first industrial villages in Scotland, as against the numerous farming- and fishing-based planned villages.
This was because it encompassed not only the housing accommodation for the workers, but an integrated operation of coalmining, limestone quarrying, tramways, lime kilns, the harbour and other ancillary operations.

Building of the original village commenced in 1756, and comprises the North Row, the South Row, the Double Row and also the shorter Hall Row and Lochaber to the east. The western half of the North and South Rows face the village green. Most house building was complete by 1771, although some houses were not completed until the early 19th century. The houses were originally single-storey, built in groups of six, some of them later extended to a second floor. Various other buildings were also erected by the Earl, for example Easter Cottage, built in 1760, a school in 1768 and the Sutlery, being a stable and granary, in 1770.

The houses are all Scottish B Grade listed buildings (excepting nos. 36, 37 and 52 to 55) within the Charlestown Conservation Area. While all were originally "estate" cottages belonging to the Broomhall Estate, after the decline and closure of most of the works in 1935 many were sold to the tenants. The original unity and appearance of the terraces is now somewhat compromised by alterations to doors and windows, high privet hedges and many rear extensions, as identified in the Conservation Plan.

Paths
Some of the off-road paths in the village reflect aspects of the past; for example, "Shell Road" and "Lime Brae" indicate the routes over which these materials were transported in the past; "Craw Road" and "Rocks Road" refer to the avian inhabitants and the underfoot surface respectively; "The Run" refers to the route by which surplus water was run off from the upper part of the village and down to the sea.

Cricket
Charlestown is the home of Broomhall Cricket Club, named after Broomhall, the nearby home of Lord Elgin. They have a 1st XI and a 2nd XI that play in the Scottish East League run by the East of Scotland Cricket Association, and have junior, midweek and Sunday teams as well. They play at The Cairns, Charlestown.

The Scottish Lime Centre
The Scottish Lime Centre Trust (SLCT) was established in 1994 by Pat Gibbons (Mrs Patricia), a pioneer in the re-introduction of lime in building repairs in Scotland. She was the founder and first Director, an architect with many years' experience of building conservation in Scotland; previously she had been a Senior Architect with Historic Scotland. The Director since 2005 is Roz Artis. Housed in an historic Charlestown building, the former Estate workshop, the Centre enjoys an international reputation for its work in promoting and training in the use of lime in building. The aims and objectives of the Trust are to:
Promote for the public benefit the appropriate repair of Scotland's traditional and historic buildings
Advance education through the provision of advice, training and practical experience in the use of lime for the repair and conservation of such buildings
Promote and further the preservation and development of Scottish building traditions, crafts and skills.

References
Villages in Fife
Firth of Forth
Lime kilns
Model villages
Industrial Revolution in Scotland
1756 establishments in Scotland
Populated places established in 1756
Charlestown, Fife
[ "Chemistry", "Engineering" ]
1,354
[ "Lime kilns", "Kilns" ]
14,337,857
https://en.wikipedia.org/wiki/Urgent%20computing
Urgent computing is prioritized and immediate access to supercomputers and grids for emergency computations, such as severe weather prediction, during matters of immediate concern. Applications that provide decision makers with information during critical emergencies cannot waste time waiting in job queues and need access to computational resources as soon as possible. Systems for urgent computing commonly use dedicated resources to ensure immediate and dedicated access to urgent computations. However, recent studies have shown the possibility of using shared resources to make urgent computing more economical.

References
Grid computing
Supercomputers
Urgent computing
[ "Technology" ]
108
[ "Supercomputers", "Computing stubs", "Supercomputing", "Computer hardware stubs" ]
14,338,276
https://en.wikipedia.org/wiki/DP%20code
DP is a free software package for physicists implementing ab initio linear-response TDDFT (time-dependent density functional theory) in frequency-reciprocal space and on a plane-wave basis set. It allows one to calculate both dielectric spectra, such as EELS (electron energy-loss spectroscopy), IXSS (inelastic X-ray scattering spectroscopy) and CIXS (coherent inelastic X-ray scattering spectroscopy), and also optical spectra, e.g. optical absorption, reflectivity and refractive index. The systems range from periodic/crystalline solids to surfaces, clusters, molecules and atoms made of insulating, semiconducting and metallic elements. It implements the RPA (random phase approximation) and the TDLDA or ALDA (adiabatic local-density approximation), plus other non-local approximations, including or neglecting local-field effects. It is distributed under an open-source license that is free for academic scientific use.

See also
ABINIT
EXC code
YAMBO code
PWscf
Quantum chemistry computer programs
References
External links
DP code web site
Density functional theory software
Physics software
Computational chemistry software
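As a sketch of the linear-response formalism such codes implement (standard textbook notation, assumed rather than taken from the DP documentation), the interacting density-density response function χ is obtained from the independent-particle response χ₀ through a Dyson-like equation:

    \chi = \chi_0 + \chi_0\,(v + f_{\mathrm{xc}})\,\chi ,

where v is the Coulomb interaction and f_xc the exchange-correlation kernel; setting f_xc = 0 gives the RPA, the adiabatic LDA kernel gives TDLDA/ALDA, and local-field effects enter through the off-diagonal reciprocal-space components of these matrices.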
DP code
[ "Physics", "Chemistry" ]
235
[ "Quantum chemistry stubs", "Quantum chemistry", "Computational chemistry software", "Chemistry software", "Theoretical chemistry stubs", "Computational physics", "Computational chemistry", "Density functional theory software", "Physical chemistry stubs", "Physics software" ]
14,338,608
https://en.wikipedia.org/wiki/Neural%20backpropagation
Neural backpropagation is the phenomenon in which, after the action potential of a neuron creates a voltage spike down the axon (normal propagation), another impulse is generated from the soma and propagates towards the apical portions of the dendritic arbor or dendrites (from which much of the original input current originated). In addition to active backpropagation of the action potential, there is also passive electrotonic spread. While there is ample evidence to prove the existence of backpropagating action potentials, the function of such action potentials and the extent to which they invade the most distal dendrites remain highly controversial.

Mechanism
When the graded excitatory postsynaptic potentials (EPSPs) depolarize the soma to spike threshold at the axon hillock, the axon first experiences a propagating impulse through the electrical properties of its voltage-gated sodium and voltage-gated potassium channels. An action potential occurs in the axon first, as research illustrates that sodium channels at the dendrites exhibit a higher threshold than those on the membrane of the axon (Rapp et al., 1996). Moreover, the higher threshold of the voltage-gated sodium channels on the dendritic membranes helps prevent them from triggering an action potential from synaptic input. Instead, only when the soma depolarizes enough from accumulating graded potentials and fires an axonal action potential will these channels be activated to propagate a signal traveling backwards (Rapp et al., 1996). Generally, EPSPs from synaptic activation are not large enough to activate the dendritic voltage-gated calcium channels (each usually on the order of a few millivolts), so backpropagation is typically believed to happen only when the cell is activated to fire an action potential. These sodium channels on the dendrites are abundant in certain types of neurons, especially mitral and pyramidal cells, and quickly inactivate.

Initially, it was thought that an action potential could only travel down the axon in one direction (towards the axon terminal where it ultimately signaled the release of neurotransmitters). However, recent research has provided evidence for the existence of backwards-propagating action potentials (Staley 2004). To elaborate, neural backpropagation can occur in one of two ways. First, during the initiation of an axonal action potential, the cell body, or soma, can become depolarized as well. This depolarization can spread through the cell body towards the dendritic tree, where there are voltage-gated sodium channels. The depolarization of these voltage-gated sodium channels can then result in the propagation of a dendritic action potential. Such backpropagation is sometimes referred to as an echo of the forward-propagating action potential (Staley 2004). It has also been shown that an action potential initiated in the axon can create a retrograde signal that travels in the opposite direction (Hausser 2000). This impulse travels up the axon, eventually causing the cell body to become depolarized, thus triggering the dendritic voltage-gated calcium channels. As described in the first process, the triggering of dendritic voltage-gated calcium channels leads to the propagation of a dendritic action potential.

It is important to note that the strength of backpropagating action potentials varies greatly between different neuronal types (Hausser 2000).
Some types of neuronal cells show little to no decrease in the amplitude of action potentials as they invade and travel through the dendritic tree, while other neuronal cell types, such as cerebellar Purkinje neurons, exhibit very little action potential backpropagation (Stuart 1997). Additionally, there are other neuronal cell types that manifest varying degrees of amplitude decrement during backpropagation. It is thought that this is because each neuronal cell type contains varying numbers of the voltage-gated channels required to propagate a dendritic action potential.

Regulation and inhibition
Generally, synaptic signals that are received by the dendrite are combined in the soma in order to generate an action potential that is then transmitted down the axon toward the next synaptic contact. Thus, the backpropagation of action potentials poses a threat of initiating an uncontrolled positive feedback loop between the soma and the dendrites. For example, as an action potential is triggered, its dendritic echo could enter the dendrite and potentially trigger a second action potential. If left unchecked, an endless cycle of action potentials triggered by their own echo would be created. In order to prevent such a cycle, most neurons have a relatively high density of A-type K+ channels.

A-type K+ channels belong to the superfamily of voltage-gated ion channels and are transmembrane channels that help maintain the cell's membrane potential (Cai 2007). Typically, they play a crucial role in returning the cell to its resting membrane potential following an action potential by allowing an inhibitory current of K+ ions to quickly flow out of the neuron. The presence of these channels in such high density in the dendrites explains their inability to initiate an action potential, even during synaptic input. Additionally, the presence of these channels provides a mechanism by which the neuron can suppress and regulate the backpropagation of action potentials through the dendrite (Vetter 2000). Pharmacological antagonists of these channels promoted the frequency of backpropagating action potentials, which demonstrates their importance in keeping the cell from excessive firing (Waters et al., 2004). Results have indicated a linear increase in the density of A-type channels with increasing distance into the dendrite away from the soma. The increase in the density of A-type channels results in a dampening of the backpropagating action potential as it travels into the dendrite. Essentially, inhibition occurs because the A-type channels facilitate the outflow of K+ ions in order to maintain the membrane potential below threshold levels (Cai 2007). Such inhibition limits EPSPs and protects the neuron from entering a never-ending positive feedback loop between the soma and the dendrites.

History
Since the 1950s, evidence has existed that neurons in the central nervous system generate an action potential, or voltage spike, that travels both through the axon to signal the next neuron and backpropagates through the dendrites, sending a retrograde signal to its presynaptic signaling neurons. This current decays significantly with travel length along the dendrites, so effects are predicted to be more significant for neurons whose synapses are near the postsynaptic cell body, with magnitude depending mainly on sodium-channel density in the dendrite. It is also dependent on the shape of the dendritic tree and, more importantly, on the rate of signal currents to the neuron.
On average, a backpropagating spike loses about half its voltage after traveling nearly 500 micrometres.

Backpropagation occurs actively in the neocortex, hippocampus, substantia nigra, and spinal cord, while in the cerebellum it occurs relatively passively. This is consistent with observations that synaptic plasticity is much more apparent in areas like the hippocampus, which controls spatial memory, than in the cerebellum, which controls more unconscious and vegetative functions.

The backpropagating current also causes a voltage change that increases the concentration of Ca2+ in the dendrites, an event which coincides with certain models of synaptic plasticity. This change also affects future integration of signals, leading to at least a short-term response difference between the presynaptic signals and the postsynaptic spike.

Functions
While many questions have yet to be answered in regard to neural backpropagation, a number of hypotheses exist regarding its function. Some proposed functions include involvement in synaptic plasticity, involvement in dendrodendritic inhibition, boosting synaptic responses, resetting membrane potential, retrograde actions at synapses and conditional axonal output. Backpropagation is believed to help form LTP (long-term potentiation) and Hebbian plasticity at hippocampal synapses. Since artificial LTP induction, using microelectrode stimulation, voltage clamp, etc., requires the postsynaptic cell to be slightly depolarized when EPSPs are elicited, backpropagation can serve as the means of depolarization of the postsynaptic cell.

Backpropagating action potentials can induce long-term potentiation by behaving as a signal that informs the presynaptic cell that the postsynaptic cell has fired. Moreover, spike-timing-dependent plasticity is known as the narrow time frame within which coincident firing of both the pre- and postsynaptic neurons will induce plasticity. Neural backpropagation occurs in this window to interact with NMDA receptors at the apical dendrites by assisting in the removal of the voltage-sensitive Mg2+ block (Waters et al., 2004). This process permits the large influx of calcium which provokes a cascade of events to cause potentiation. Current literature also suggests that backpropagating action potentials are responsible for the release of retrograde neurotransmitters and trophic factors which contribute to the short-term and long-term efficacy between two neurons. Since the backpropagating action potentials essentially exhibit a copy of the neuron's axonal firing pattern, they help establish synchrony between the pre- and postsynaptic neurons (Waters et al., 2004). Importantly, backpropagating action potentials are necessary for the release of brain-derived neurotrophic factor (BDNF). BDNF is an essential component for inducing synaptic plasticity and development (Kuczewski N., Porcher C., Ferrand N., 2008). Moreover, backpropagating action potentials have been shown to induce BDNF-dependent phosphorylation of cyclic AMP response element-binding protein (CREB), which is known to be a major component in synaptic plasticity and memory formation (Kuczewski N., Porcher C., Lessmann V., et al., 2008).

Algorithm
While a backpropagating action potential can presumably cause changes in the weight of the presynaptic connections, there is no simple mechanism for an error signal to propagate through multiple layers of neurons, as in the computer backpropagation algorithm.
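For contrast with the biological phenomenon, a minimal sketch of that computer algorithm follows (a two-layer network trained on toy data; all values and shapes are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: 4 samples, 3 features, binary targets.
    X = rng.normal(size=(4, 3))
    y = np.array([[0.0], [1.0], [1.0], [0.0]])

    # Two-layer network: 3 -> 5 -> 1, sigmoid activations.
    W1 = rng.normal(size=(3, 5))
    W2 = rng.normal(size=(5, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(1000):
        # Forward pass.
        h = sigmoid(X @ W1)      # hidden activations, shape (4, 5)
        out = sigmoid(h @ W2)    # predictions, shape (4, 1)

        # Backward pass: propagate the error signal layer by layer.
        d_out = (out - y) * out * (1 - out)   # output-layer delta
        d_h = (d_out @ W2.T) * h * (1 - h)    # hidden-layer delta

        W2 -= lr * h.T @ d_out
        W1 -= lr * X.T @ d_h

    print(np.round(out.ravel(), 2))  # predictions move toward the targets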
However, simple linear topologies have shown that effective computation is possible through signal backpropagation in this biological sense.

References
Vetter, P., et al. Propagation of Action Potentials in Dendrites Depends on Dendritic Morphology. The American Physiological Society, 2000; 926–937.
Neural circuitry
Neuroscience
Computational neuroscience
Neural backpropagation
[ "Biology" ]
2,309
[ "Neuroscience" ]
14,338,696
https://en.wikipedia.org/wiki/JREAP
The Joint Range Extension Applications Protocol (JREAP) enables tactical data messages to be transmitted over long-distance networks, e.g. satellite links, thereby extending the range of Tactical Data Links (TDLs). JREAP is documented in U.S. Military Standard (MIL-STD) 3011 and NATO Standardization Agreement (STANAG) 5518, "Interoperability Standard for the Joint Range Extension Applications Protocol (JREAP)."

Purpose
JREAP was developed due to the need to communicate data over long distances without degradation of the message format or content. JREAP takes a message in its original format and changes the protocol so that the message can be transmitted over beyond-line-of-sight media. JREAP is the protocol and message structure for the transmission and reception of pre-formatted messages over communications media other than those for which the messages were designed. JREAP provides a foundation for Joint Range Extension (JRE) of Link 16 and other tactical data links to overcome the line-of-sight limitations of radio terminals such as the Joint Tactical Information Distribution System (JTIDS) and the Multifunctional Information Distribution System (MIDS), and extends coverage of these data links through the use of long-haul media.

Versions
JREAP A
JREAP A uses an Announced Token Passing protocol for half-duplex communications. This protocol may be used when several terminals share the same JRE media and take turns transmitting, or in a broadcast situation when one transmits and the rest receive. It is targeted at data rates down to 2400 bits per second on a serial data interface, with a TSEC/KG-84A/KIV-7 or a compatible encryption device used for data security. It is designed for use with media such as 25-kHz UHF TDMA/DAMA SATCOM, EHF Low Data Rate (LDR) Forced Mode Network Operations, and 5 and 25 kHz UHF non-DAMA SATCOM.

JREAP B
JREAP B is a synchronous or asynchronous point-to-point mode of the JREAP. This mode is similar in design to the Half-Duplex Announced Token Passing protocol used by JREAP A. This mode can be used with SHF and EHF LDR point-to-point mode synchronous connections, STU-III operations via phone lines, and other point-to-point media connections. This JREAP application presumes full-duplex data-transparent communication media.

JREAP C
JREAP C makes use of the Internet Protocol (IP) in conjunction with either the User Datagram Protocol (UDP) or the Transmission Control Protocol (TCP). The IP suite is a standard set of protocols that is deployed worldwide in commercial as well as military networks. By using JREAP encapsulation over IP, JRE can be performed over IP-based networks that meet operational requirements for security, speed of service and so on.

See also
Global Information Grid
Network-centric warfare
S-TADIL J (J-Series messages over satellite links)
SIMPLE (M-Series and J-Series messages over IP-based networks)
References
Military communications
Application layer protocols
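Since JREAP C rides on ordinary IP transports, the transport layer can be illustrated with plain sockets. The sketch below sends an opaque payload over UDP and TCP; the address, port and payload bytes are placeholders, and the real JREAP C framing is defined in the restricted MIL-STD-3011, so nothing here reflects the actual message format.

    import socket

    payload = b"\x00\x01\x02\x03"  # placeholder bytes, not a real JREAP message
    peer = ("192.0.2.10", 10001)   # hypothetical JRE unit address and port

    # Connectionless transmission over UDP:
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, peer)

    # Connection-oriented alternative over TCP:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(peer)
        s.sendall(payload)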
JREAP
[ "Engineering" ]
665
[ "Military communications", "Telecommunications engineering" ]
14,339,638
https://en.wikipedia.org/wiki/Cost%20contingency
When estimating the cost for a project, product or other item or investment, there is always uncertainty as to the precise content of all items in the estimate, how work will be performed, what work conditions will be like when the project is executed and so on. These uncertainties are risks to the project. Some refer to these risks as "known-unknowns" because the estimator is aware of them, and based on past experience, can even estimate their probable costs. The estimated costs of the known-unknowns is referred to by cost estimators as cost contingency.

Contingency "refers to costs that will probably occur based on past experience, but with some uncertainty regarding the amount. The term is not used as a catchall to cover ignorance. It is poor engineering and poor philosophy to make second-rate estimates and then try to satisfy them by using a large contingency account. The contingency allowance is designed to cover items of cost which are not known exactly at the time of the estimate but which will occur on a statistical basis."

The cost contingency which is included in a cost estimate, bid, or budget may be classified as to its general purpose, that is, what it is intended to provide for. For a class 1 construction cost estimate, usually needed for a bid estimate, the contingency may be classified as an estimating and contracting contingency. This is intended to provide compensation for "estimating accuracy based on quantities assumed or measured, unanticipated market conditions, scheduling delays and acceleration issues, lack of bidding competition, subcontractor defaults, and interfacing omissions between various work categories." Additional classifications of contingency may be included at various stages of a project's life, including design contingency, or design definition contingency, or design growth contingency, and change order contingency (although these may be more properly called allowances).

AACE International has defined contingency as "An amount added to an estimate to allow for items, conditions, or events for which the state, occurrence, or effect is uncertain and that experience shows will likely result, in aggregate, in additional costs. Typically estimated using statistical analysis or judgment based on past asset or project experience. Contingency usually excludes:
Major scope changes such as changes in end product specification, capacities, building sizes, and location of the asset or project
Extraordinary events such as major strikes and natural disasters
Management reserves
Escalation and currency effects
Some of the items, conditions, or events for which the state, occurrence, and/or effect is uncertain include, but are not limited to, planning and estimating errors and omissions, minor price fluctuations (other than general escalation), design developments and changes within the scope, and variations in market and environmental conditions. Contingency is generally included in most estimates, and is expected to be expended".

A key phrase above is that it is "expected to be expended". In other words, it is an item in an estimate like any other, and should be estimated and included in every estimate and every budget. Because management often thinks contingency money is "fat" that is not needed if a project team does its job well, it is a controversial topic.

Methods to estimate contingency
In general, there are four classes of methods used to estimate contingency.
These include the following:
Expert judgment
Predetermined guidelines (with varying degrees of judgment and empiricism used)
Simulation analysis (primarily risk analysis judgment incorporated in a simulation such as Monte Carlo)
Parametric modeling (an empirically based algorithm, usually derived through regression analysis, with varying degrees of judgment used)
While all are valid methods, the method chosen should be consistent with the first principles of risk management, in that the method must start with risk identification, and only then are the probable costs of those risks quantified. In best practice, the quantification will be probabilistic in nature (Monte Carlo simulation is a common method used for quantification). Typically, the method results in a distribution of possible cost outcomes for the project, product, or other investment. From this distribution, a cost value can be selected that has the desired probability of a cost underrun or cost overrun. Usually a value is selected with equal chance of overrunning or underrunning. The difference between the cost estimate without contingency and the selected cost from the distribution is the contingency. For more information, AACE International has catalogued many professional papers on this complex topic.

Control account
Contingency is included in budgets as a control account. As risks occur on a project, and money is needed to pay for them, the contingency can be transferred to the appropriate accounts that need it. The transfer and its reason are recorded. In risk management, risks are continually reassessed during the course of a project, as are the needs for cost contingency.

See also
Lists
Glossary of construction cost estimating
Glossary of project management
Related fields
Construction bidding
Cost engineering
Industrial engineering
Pre-construction services
Project management
Total cost management
Related topics
Retentions in the British construction industry
Mobilization payment, an advance payment to a contractor at the start of a project to assist in the beginning of operations.
Professional organizations
AACE International
American Society of Professional Estimators
Project Management Institute
Royal Institution of Chartered Surveyors
References
External links
ACostE Association of Cost Engineers
ICEAA International Cost Estimating and Analysis Association
Cost engineering
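A minimal sketch of the simulation approach described under "Methods to estimate contingency" above; the risks, probabilities and cost figures are invented for illustration:

    import random

    random.seed(1)

    BASE_ESTIMATE = 1_000_000  # point estimate without contingency, in dollars

    # Identified risks: (probability of occurring, low, most likely, high cost).
    risks = [
        (0.30, 20_000, 50_000, 120_000),   # schedule delay
        (0.15, 50_000, 90_000, 200_000),   # subcontractor default
        (0.50, 10_000, 25_000, 60_000),    # minor design growth
    ]

    def one_trial():
        total = BASE_ESTIMATE
        for p, low, mode, high in risks:
            if random.random() < p:
                total += random.triangular(low, high, mode)
        return total

    outcomes = sorted(one_trial() for _ in range(10_000))

    # Pick the cost with an equal chance of overrun or underrun (the P50 value).
    p50 = outcomes[len(outcomes) // 2]
    contingency = p50 - BASE_ESTIMATE
    print(f"P50 cost: {p50:,.0f}  contingency: {contingency:,.0f}")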
Cost contingency
[ "Engineering" ]
1,132
[ "Cost engineering" ]
14,339,687
https://en.wikipedia.org/wiki/Variable%20Message%20Format
Variable Message Format, abbreviated as "VMF" and documented in MIL-STD-6017, is a message format used in communicating tactical military information. A message formatted using VMF can be sent via many communication methods. As it defines neither a communication method, a communications medium, nor a protocol, it is not a Tactical Data Link (TDL). Restriction The standard is designated distribution class C, meaning that it may only be distributed to federal employees and contractors. Contractors may obtain a copy from their government point of contact (POC). However, the standard for the header is openly available. Format The VMF application header is defined by MIL-STD-2045-47001. The VMF message body consists of "K" Series messages. See also MIL-STD-6011 (TADIL-A) Link 4 (TADIL-C) TADIL-J JTIDS Link 1 Link 11 - (Link 11B) Link 16 Link 22 MIDS ACARS External links MIL-STD-2045/47001C References Military communications Military installations of NATO NATO
Variable Message Format
[ "Engineering" ]
228
[ "Military communications", "Telecommunications engineering" ]
14,340,077
https://en.wikipedia.org/wiki/Ultrahyperbolic%20equation
In the mathematical field of differential equations, the ultrahyperbolic equation is a partial differential equation (PDE) for an unknown scalar function u of 2n variables x₁, …, xₙ, y₁, …, yₙ of the form
∂²u/∂x₁² + ⋯ + ∂²u/∂xₙ² − ∂²u/∂y₁² − ⋯ − ∂²u/∂yₙ² = 0.
More generally, if Q is any quadratic form in 2n variables with signature (n, n), then any PDE whose principal part is Q(∂/∂x₁, …, ∂/∂xₙ, ∂/∂y₁, …, ∂/∂yₙ) is said to be ultrahyperbolic. Any such equation can be put in the form above by means of a change of variables. The ultrahyperbolic equation has been studied from a number of viewpoints. On the one hand, it resembles the classical wave equation. This has led to a number of developments concerning its characteristics, one of which is due to Fritz John: the John equation. In 2008, Walter Craig and Steven Weinstein proved that under a nonlocal constraint, the initial value problem is well-posed for initial data given on a codimension-one hypersurface. And later, in 2022, a research team at the University of Michigan extended the conditions for solving ultrahyperbolic wave equations to complex-time (kime), demonstrated space-kime dynamics, and showed data science applications using tensor-based linear modeling of functional magnetic resonance imaging data. The equation has also been studied from the point of view of symmetric spaces, and elliptic differential operators. In particular, the ultrahyperbolic equation satisfies an analog of the mean value theorem for harmonic functions. Notes References Differential operators
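As an illustrative check (ours, not from the article), the following SymPy sketch takes the case n = 2 and verifies that an exponential u = exp(a₁x₁ + a₂x₂ + b₁y₁ + b₂y₂) solves the equation exactly when a₁² + a₂² = b₁² + b₂²:

```python
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
a1, a2, b1, b2 = sp.symbols('a1 a2 b1 b2')

# Exponential trial solution for the 2+2-dimensional case.
u = sp.exp(a1*x1 + a2*x2 + b1*y1 + b2*y2)

# Apply the ultrahyperbolic operator u_x1x1 + u_x2x2 - u_y1y1 - u_y2y2.
lhs = (sp.diff(u, x1, 2) + sp.diff(u, x2, 2)
       - sp.diff(u, y1, 2) - sp.diff(u, y2, 2))

# Dividing out u leaves the symbol of the operator: it vanishes
# precisely when a1**2 + a2**2 == b1**2 + b2**2.
print(sp.simplify(lhs / u))  # -> a1**2 + a2**2 - b1**2 - b2**2
```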
Ultrahyperbolic equation
[ "Mathematics" ]
287
[ "Mathematical analysis", "Differential operators", "Mathematical analysis stubs" ]
14,340,833
https://en.wikipedia.org/wiki/Nucleolin
Nucleolin is a protein that in humans is encoded by the NCL gene. Gene The human NCL gene is located on chromosome 2 and consists of 14 exons with 13 introns and spans approximately 11 kb. Intron 11 of the NCL gene encodes a small nucleolar RNA, termed U20. Function Nucleolin is the major nucleolar protein of growing eukaryotic cells. It is found associated with intranucleolar chromatin and pre-ribosomal particles. It induces chromatin decondensation by binding to histone H1. It is thought to play a role in pre-rRNA transcription and ribosome assembly. It may also play a role in transcriptional elongation. It binds RNA oligonucleotides with 5'-UUAGGG-3' repeats more tightly than the telomeric single-stranded DNA 5'-TTAGGG-3' repeats. Nucleolin is also able to act as a transcriptional coactivator with Chicken Ovalbumin Upstream Promoter Transcription Factor II (COUP-TFII). Clinical significance Midkine and pleiotrophin bind to cell-surface nucleolin as a low-affinity receptor. This binding can inhibit HIV infection. Nucleolin at the cell surface is the receptor for the respiratory syncytial virus (RSV) fusion protein. Interference with the nucleolin–RSV fusion protein interaction has been shown to be therapeutic against RSV infection in cell cultures and animal models. Interactions Nucleolin has been shown to interact with: MTDH, CSNK2A2, Centaurin, alpha 1, HuR, NPM1, P53, PPP1CB, S100A11, Sjögren syndrome antigen B, TOP1, and Telomerase reverse transcriptase. References Further reading Proteins
Nucleolin
[ "Chemistry" ]
385
[ "Biomolecules by chemical classification", "Proteins", "Molecular biology" ]
14,341,044
https://en.wikipedia.org/wiki/Preselector
A preselector is an electronic device that connects between a radio antenna and a radio receiver. The preselector is a band-pass filter that blocks troublesome out-of-tune frequencies from passing through from the antenna into the radio receiver (or preamplifier) that otherwise would be directly connected to the antenna. Purpose A preselector improves the performance of nearly any receiver, but is especially helpful to receivers with broadband front-ends that are prone to overload, such as scanners, wideband software-defined radio receivers, ordinary consumer-market shortwave and AM broadcast receivers – particularly with receivers operating below 10–20 MHz where static is pervasive. Sometimes faint signals that occupy a very narrow frequency span (such as radiotelegraph or 'CW') can be heard more clearly if the receiving bandwidth is made narrower than the narrowest that a general-purpose receiver may be able to tune; likewise, signals which individually use a fairly wide span of frequencies, such as broadcast AM, can be made less noisy by narrowing the bandwidth of the signal, even though making the span of received frequencies narrower than was transmitted will sacrifice some audio fidelity. A good preselector often can reduce a radio's receive bandwidth to a narrower frequency span than many general-purpose radios can manage on their own. A preselector typically is tuned to have a narrow bandwidth, centered on the receiver's operating frequency. The preselector passes through unchanged the signal on its tuned frequency (or only slightly diminished) but it reduces or removes off-frequency signals, cutting down or eliminating unwanted interference. Extra filtering can be useful because the first input stage ("front end") of receivers contains at least one RF amplifier, which has power limits ("dynamic range"). Most radios' front ends amplify all radio frequencies delivered to the antenna connection. So off-frequency signals constitute a load on the RF amplifier, wasting part of its dynamic range on unused and unwanted signals. "Limited dynamic range" means that the amplifier circuits have a limit to the total amount of incoming RF signal they can amplify without overloading; symptoms of overload are nonlinearity ("distortion") and ultimately clipping ("buzz"). When the front end overloads, the performance of the receiver is severely reduced, and in extreme cases the receiver can be damaged. In situations with noisy and crowded bands, or where there is loud interference from nearby, high-power stations, the dynamic range of the receiver can quickly be exceeded. Extra filtering by the preselector limits frequency range and power demands that are applied to all later stages of the receiver, only loading it with signals within the preselected band. Preselect filter bank Similar to conventional radios, spectrum analyzers, heavy-duty network analyzers, and other RF measuring equipment can incorporate switchable banks of preselector circuits to reject out-of-band signals that could result in spurious signals at the frequencies being analyzed. Automatically switched filter banks can likewise be incorporated into various broadband, general purpose receivers.
Multifunction preselectors A preselector may be engineered with extra features, so that in addition to attenuating interference from unwanted frequencies it can provide additional services which may be helpful for a receiver:
It can limit input signal voltage to protect a sensitive receiver from damage caused by static discharge, nearby voltage spikes, and overload from nearby transmitters' signals.
It can provide a DC path to ground, to drain off noisy static charge that tends to collect on the antenna.
It can also incorporate a small radio frequency amplifier stage to boost the filtered signal.
None of these extra conveniences are necessary for the function of preselection, and in particular, for the typical noisy frequency bands where a preselector is needed, an amplifier in the preselector has no useful function. On the other hand, when an antenna preamplifier (preamp) is actually needed, it can be made "tunable" by incorporating a front-end preselector circuit to improve its performance. The integrated device is both a preamplifier and a preselector, and either name is correct. This ambiguity sometimes leads to confusion – conflating preselection with amplification. Ordinary, regular preselectors (that are just preselectors) contain no amplifier: They are entirely passive devices. A standard, ordinary preselector sometimes has the word "passive" prefixed – hence "passive preselector" means "ordinary preselector". The adjective is redundant, but emphasizes to those only familiar with tunable preamplifiers that the preselector is normal, has no internal amplifier, and requires no power supply. Since all ordinary preselectors are "passive", adding the redundant word is pedantic. In the noisy longwave, mediumwave, and shortwave bands where preselectors are typically used, they function with "modern" (post-1950) receivers with no noticeable loss of signal strength. Bandwidth vs. signal strength trade-off With all preselectors there is some very small loss at the tuned frequency; usually, most of the loss is in the inductor (the tuning coil). Increasing the inductance gives the preselector a narrower bandwidth (or higher Q, or greater selectivity) and slightly raises the loss, which nonetheless remains very small. Most preselectors have separate settings for one inductor and one capacitor (at least). So with at least two adjustments available to tune to just one frequency, there are often a variety of possible settings that will tune the preselector to frequencies in its middle-range. For the narrowest bandwidth (highest Q), the preselector is tuned using the highest inductance and lowest capacitance for the desired frequency, but this produces the greatest loss. It also requires retuning the preselector more often while searching for faint signals, to keep the preselector's pass band overlapping the radio's receiving frequency. For lowest loss (and widest bandwidth), the preselector is tuned using the lowest inductance and highest capacitance (and the lowest Q, or least selectivity) for the desired frequency. The wider bandwidth allows more interference through from nearby frequencies, but reduces the need to retune the preselector while tuning the receiver, since any one low-inductance setting for the preselector will pass a broader span of nearby frequencies.
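To illustrate the trade-off numerically, the sketch below (our own, not from the article) compares two L/C combinations that tune a simple parallel LC tank to the same frequency; it assumes, as a simplification, that the coil's series loss resistance r is the same 5 ohms in both cases, whereas a real coil's loss usually grows with inductance.

```python
import math

def lc_tank(L_henries: float, C_farads: float, r_ohms: float):
    """Resonant frequency and approximate Q of an LC tank whose
    only loss is the coil's series resistance r (a simplification)."""
    f0 = 1.0 / (2.0 * math.pi * math.sqrt(L_henries * C_farads))
    q = (2.0 * math.pi * f0 * L_henries) / r_ohms   # Q = X_L / r
    bandwidth = f0 / q
    return f0, q, bandwidth

# Two ways to tune roughly 7.1 MHz with a 5-ohm coil loss:
for L, C in [(10e-6, 50e-12), (2.5e-6, 200e-12)]:
    f0, q, bw = lc_tank(L, C, r_ohms=5.0)
    print(f"L={L*1e6:.1f} uH  C={C*1e12:.0f} pF  "
          f"f0={f0/1e6:.2f} MHz  Q={q:.0f}  BW={bw/1e3:.0f} kHz")
```

Under these assumptions the 10 µH setting yields a Q near 90 and a pass band under 100 kHz, while the 2.5 µH setting yields a Q near 22 and a pass band several times wider, matching the high-L/narrow versus low-L/wide behavior described above.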
Different from an antenna tuner Although a preselector is placed in between the radio and the antenna, in the same electrical location as a feedline matching unit, it serves a different purpose: A transmatch or "antenna" tuner connects two transmission lines with different impedances and only incidentally blocks out-of-tune frequencies (if it blocks any at all). A transmatch matches transmitter impedance to feedline impedance and phase, so that signal power from the radio transmitter smoothly transfers into the antenna's feed cable; a properly adjusted transmatch prevents transmitted power from being reflected back into the transmitter ("backlash current"). Some antenna tuner circuits can both impedance match and preselect, for example the Series Parallel Capacitor (SPC) tuner, and many 'tuned-transformer'-type matching circuits used in balanced line tuners (BLT) can be adjusted to also function as band-pass filters. See also Antenna tuner Band-pass filter Footnotes References External links Radio electronics Receiver (radio) Wireless tuning and filtering
Preselector
[ "Engineering" ]
1,595
[ "Radio electronics", "Wireless tuning and filtering", "Receiver (radio)" ]
14,341,419
https://en.wikipedia.org/wiki/Electrohydrogenesis
Electrohydrogenesis or biocatalyzed electrolysis is the name given to a process for generating hydrogen gas from organic matter being decomposed by bacteria. This process uses a modified fuel cell to contain the organic matter and water. A small voltage of 0.2–0.8 V is applied; the original article reports that an overall energy efficiency of 288% can be achieved (this is computed relative to the amount of electricity used; waste heat lowers the overall efficiency). This work was reported by Cheng and Logan. See also Biohydrogen Electrochemical reduction of carbon dioxide Electromethanogenesis Fermentative hydrogen production Microbial fuel cell References External links Biocatalyzed electrolysis Hydrogen production Environmental engineering Biotechnology
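A back-of-the-envelope illustration of how an efficiency above 100% is possible when only the electrical input is counted; the numbers below are our own assumptions, chosen merely to land near the reported figure.

```python
# Energy efficiency relative to electrical input only: the chemical
# energy of the organic substrate is not counted, which is why the
# figure can exceed 100%. Quantities below are illustrative assumptions.
H2_HHV_KJ_PER_MOL = 286.0          # higher heating value of hydrogen
mol_h2_produced = 1.0              # assumed hydrogen output of one run
electrical_input_kj = 99.0         # assumed electrical energy supplied

efficiency = (mol_h2_produced * H2_HHV_KJ_PER_MOL) / electrical_input_kj
print(f"Efficiency relative to electricity used: {efficiency:.0%}")  # ~289%
```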
Electrohydrogenesis
[ "Chemistry", "Engineering", "Biology" ]
150
[ "Chemical engineering", "Biotechnology", "Civil engineering", "nan", "Environmental engineering" ]
14,341,696
https://en.wikipedia.org/wiki/Haar%20%28fog%29
In meteorology, haar or sea fret is a cold sea fog. It occurs most often on the east coast of Scotland between April and September, when warm air passes over the cold North Sea. The term is also known as harr, hare, harl, har and hoar. Causes Haar is typically formed over the sea and is blown to the land by the wind. This commonly occurs when warmer moist air moves over the relatively cooler North Sea, causing the moisture in the air to condense, forming haar. Sea breezes and easterly winds then bring the haar into the east coast of Scotland and North-East England, where it can continue for several miles inland. This is common in the UK summer, when heating of the land creates a sea breeze that brings haar in from the sea, significantly reducing temperatures compared to those just a few miles inland. Nomenclature The term haar is used along certain lands bordering the North Sea, primarily eastern Scotland and the north-east of England. Variants of the term in Scots and northern English include har, hare, harl, harr and hoar. Its origin is related to Middle Dutch haren, referring to a cold, sharp wind. In Yorkshire and Northumberland it is commonly referred to as a sea roke. References Environment of Scotland Fog Scottish coast Climate of Scotland North Sea Scots language Scottish words and phrases
Haar (fog)
[ "Physics" ]
285
[ "Visibility", "Fog", "Physical quantities" ]
14,342,207
https://en.wikipedia.org/wiki/Outline%20of%20computer%20engineering
The following outline is provided as an overview of and topical guide to computer engineering:
Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware–software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.
Main articles on computer engineering
Computer
Computer architecture
Computer hardware
Computer software
Computer science
Engineering
Electrical engineering
Software engineering
History of computer engineering
General
Timeline of computing: 2400 BC – 1949, 1950–1979, 1980–1989, 1990–1999, 2000–2009
History of computing hardware up to third generation (1960s)
History of computing hardware from 1960s to current
History of computer hardware in Eastern Bloc countries
History of personal computers
History of laptops
History of software engineering
History of compiler writing
History of the Internet
History of the World Wide Web
History of video games
History of the graphical user interface
Timeline of computing
Timeline of operating systems
Timeline of programming languages
Timeline of artificial intelligence
Timeline of cryptography
Timeline of algorithms
Timeline of quantum computing
Product specific
Timeline of DOS operating systems
Classic Mac OS
History of macOS
History of Microsoft Windows
Timeline of the Apple II series
Timeline of Apple products
Timeline of file sharing
Timeline of OpenBSD
Hardware
Digital electronics
Very-large-scale integration
Hardware description language
Application-specific integrated circuit
Electrical network
Microprocessor
Software
Assembly language
Operating system
Database
Software engineering
System design
Computer architecture
Microarchitecture
Multiprocessing
Computer performance by orders of magnitude
Interdisciplinary fields
Human–computer interaction
Computer network
Digital signal processing
Control theory
See also
Computer science
List of basic information technology topics
References
External links
Computer Engineering at The Princeton Review
Computer Engineering Conference Calendar
Computer engineering
Computer engineering topics, basic
Outline of computer engineering
[ "Technology", "Engineering" ]
409
[ "Computing-related lists", "Electrical engineering", "Computer engineering" ]
14,343,009
https://en.wikipedia.org/wiki/TBX21
T-box transcription factor TBX21, also called T-bet (T-box expressed in T cells), is a protein that in humans is encoded by the TBX21 gene. Though long thought of only as a master regulator of the type 1 immune response, T-bet has recently been shown to be implicated in the development of various immune cell subsets and maintenance of mucosal homeostasis. Function This gene is a member of a phylogenetically conserved family of genes that share a common DNA-binding domain, the T-box. T-box genes encode transcription factors involved in the regulation of developmental processes. This gene is the human ortholog of the mouse Tbx21/Tbet gene. Studies in mouse show that Tbx21 protein is a Th1 cell-specific transcription factor that controls the expression of the hallmark Th1 cytokine, interferon-gamma (IFNg). Expression of the human ortholog also correlates with IFNg expression in Th1 and natural killer cells, suggesting a role for this gene in initiating Th1 lineage development from naive Th precursor cells. The function of T-bet is best known in T helper cells (Th cells). In naïve Th cells the gene is not constitutively expressed, but can be induced via two independent signalling pathways, the IFNg–STAT1 and IL-12–STAT4 pathways. Both need to cooperate to reach a stable Th1 phenotype. The Th1 phenotype is also stabilised by repression of regulators of other Th cell phenotypes (Th2 and Th17). In a typical scenario it is thought that IFNg and T cell receptor (TCR) signalling initiate the expression of T-bet, and once TCR signalling stops, signalling via the IL-12 receptor can come into play, as it had been blocked by TCR-signalling-mediated repression of one of its receptor subunits (IL12Rb2). IL-2 signalling enhances the expression of IL-12R. The two-step expression of T-bet can be viewed as a safety mechanism of sorts, which ensures that cells commit to the Th1 phenotype only when desired. T-bet controls transcription of many genes, for example proinflammatory cytokines like lymphotoxin-a, tumour necrosis factor and ifng, which is a hallmark cytokine of type 1 immunity. Certain chemokines are also regulated by T-bet, namely xcl1, ccl3, ccl4 and the chemokine receptors cxcr3 and ccr5. The expression of T-bet-controlled genes is facilitated by two distinct mechanisms: chromatin remodelling via enzyme recruitment, and direct binding to enhancer sequences promoting transcription or a 3D gene structure supporting transcription. T-bet also recruits other transcription factors like HLX, RUNX1, and RUNX3, which aid it in setting the Th1 transcription profile. Apart from promoting the type 1 immune response (Th1), T-bet also suppresses the other types of immune response. The type 2 immune response (Th2) phenotype is repressed by sequestering its master regulator, GATA3, away from its target genes. Gata3 expression is further silenced by promotion of silencing epigenetic changes in its region. In addition, the Th2-specific cytokines are also silenced by binding of T-bet and RUNX3 to the il4 silencer region. The type 17 immune response (Th17) phenotype is suppressed by RUNX1 recruitment, which prevents RUNX1 from mediating expression of Th17-specific genes such as rorc, the Th17 master regulator. Rorc is also silenced by epigenetic changes promoted by T-bet and STAT4. T-bet also performs functions in cytotoxic T cells and B cells. In cytotoxic T cells it promotes IFNg and granzyme B expression and, in cooperation with another transcription factor, EOMES, their maturation.
The role of T-bet in B cells seems to be to direct the cell towards a type 1 immune response expression profile, which involves secretion of the antibodies IgG1 and IgG3 and is usually elevated during viral infections. These populations of B cells differ from standard ones by their lack of the receptors CD21 and CD27; given that these cells have undergone antibody class switching, they are regarded as memory B cells. These cells have been shown to secrete IFNg and, in vitro, to polarise naïve T helper cells towards the Th1 phenotype. Populations of T-bet positive B cells were also identified in various autoimmune diseases like systemic lupus erythematosus, Crohn's disease, multiple sclerosis and rheumatoid arthritis. Role in mucosal homeostasis It has been identified that T-bet contributes to the maintenance of mucosal homeostasis and the mucosal immune response. Mice lacking adaptive immune cells and T-bet (RAG−/−, T-bet−/−) developed a disease similar to human ulcerative colitis (hence the name TRUC), which was later attributed to the outgrowth of Gram-negative bacteria, namely Helicobacter typhlonius. The dysbiosis appears to be a consequence of multiple factors: firstly, the innate lymphoid cell 1 (ILC1) population and a subset of ILC3s are missing, because the expression of T-bet is needed for their maturation; secondly, T-bet ablation causes increased levels of TNF, as its expression is not repressed in dendritic cells and the immune system is biased away from Th1. Role in disease Atherosclerosis Atherosclerosis is an autoimmune disease caused by inflammation and associated infiltration of immune cells in fatty deposits in arteries called atherosclerotic plaques. Th1 cells are responsible for production of proinflammatory cytokines contributing to the progression of the disease by promoting expression of adhesive (e.g., ICAM1) and homing molecules (mainly CCR5) needed for cellular migration. Experimental vaccination of patients with peptides derived from apolipoprotein B, part of low-density lipoprotein, which is deposited on arterial walls, has shown increases in regulatory T cells (TREGs) and cytotoxic T cells. The vaccination has shown reduced Th1 differentiation, though the mechanism behind this remains unresolved. Currently it is hypothesised that the decrease in Th1 differentiation is caused by the destruction of dendritic cells presenting autoantigens by cytotoxic T cells and increased differentiation of TREGs suppressing the immune response. Taken together, T-bet might serve as a potential target in treatment of atherosclerosis. Asthma The transcription factor encoded by TBX21 is T-bet, which regulates the development of naive T lymphocytes. Asthma is a disease of chronic inflammation, and it is known that transgenic mice born without TBX21 spontaneously develop abnormal lung function consistent with asthma. It is thought that TBX21, therefore, may play a role in the development of asthma in humans as well. Experimental autoimmune encephalomyelitis Initially it was thought that experimental autoimmune encephalomyelitis (EAE) is caused by autoreactive Th1 cells. T-bet-deficient mice were resistant to EAE. However, later research has discovered that not only Th1 but also Th17 and ThGM-CSF cells are the cause of immunopathology. Interestingly, IFNg, a main product of T-bet, has shown a bidirectional effect in EAE.
Injection of IFNg during the acute stage worsens the course of the disease, presumably by strengthening the Th1 response; however, injection of IFNg in the chronic stage has shown a suppressive effect on EAE symptoms. Currently it is thought that IFNg stops T helper cells from committing, for example, to the Th17 phenotype, stimulates indoleamine 2,3-dioxygenase transcription (the kynurenine or "kyn" pathway) in certain dendritic cells, stimulates cytotoxic T cells, downregulates T cell trafficking and limits their survival. T-bet and its controlled genes remain a possible target in the treatment of neurological autoimmune diseases. References Further reading External links Transcription factors
TBX21
[ "Chemistry", "Biology" ]
1,760
[ "Induced stem cells", "Gene expression", "Transcription factors", "Signal transduction" ]
14,343,049
https://en.wikipedia.org/wiki/Lymphotoxin%20beta
Lymphotoxin-beta (LT-beta), formerly known as tumor necrosis factor C (TNF-C), is a protein that in humans is encoded by the LTB gene. Function Lymphotoxin beta is a type II membrane protein of the TNF family. It anchors lymphotoxin-alpha to the cell surface through heterotrimer formation. The predominant form on the lymphocyte surface is the lymphotoxin-alpha 1/beta 2 complex (i.e. one alpha molecule and two beta molecules), and this complex is the primary ligand for the lymphotoxin-beta receptor. The minor complex is lymphotoxin-alpha 2/beta 1. LTB is an inducer of the inflammatory response system and is involved in normal development of lymphoid tissue. Lymphotoxin-beta isoform b is unable to complex with lymphotoxin-alpha, suggesting a function for lymphotoxin-beta which is independent of lymphotoxin-alpha. Alternative splicing results in multiple transcript variants encoding different isoforms. The pro-tumorigenic function of membrane LT is clearly established: mice with overexpression of LTα or LTβ showed increased tumor growth and metastasis in several models of cancer. However, these studies utilized mice with complete LTα gene deficiency, which did not allow the effects of soluble versus membrane-associated LT to be distinguished. Interactions LTB has been shown to interact with Lymphotoxin alpha. References Further reading Cytokines
Lymphotoxin beta
[ "Chemistry" ]
323
[ "Cytokines", "Signal transduction" ]
14,343,089
https://en.wikipedia.org/wiki/Silt%20density%20index
The silt density index (SDI) is a measure of the fouling capacity of water in reverse osmosis systems. The test measures the rate at which a 0.45-micrometre filter is plugged when subjected to a constant water pressure of 30 psi (207 kPa). The SDI gives the percent drop per minute in the flow rate of the water through the filter, averaged over a period of time such as 15 minutes. Typically, spiral-wound reverse osmosis systems will need an SDI less than 5, and hollow fiber reverse osmosis systems will need an SDI less than 3. In these kinds of systems, deep-well waters (with a typical SDI of 3) could be used straight from the source. If fed from surface waters (with a typical SDI greater than 6), the water will need to be filtered before use. Seawater desalination plants utilising reverse osmosis systems also need very efficient filtering due to the typically high but variable SDI of seawater. References Water treatment Membrane technology
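A minimal sketch of the usual 15-minute SDI calculation, assuming the common definition in membrane practice where t_initial and t_final are the seconds needed to collect a fixed sample volume at the start and end of the test:

```python
def silt_density_index(t_initial_s: float, t_final_s: float,
                       elapsed_min: float = 15.0) -> float:
    """SDI = 100 * (1 - t_initial / t_final) / elapsed_min,
    the average percent flow decline per minute through the filter."""
    return 100.0 * (1.0 - t_initial_s / t_final_s) / elapsed_min

# Example: 30 s to collect the sample initially, 60 s after 15 minutes.
print(silt_density_index(30.0, 60.0))  # -> 3.33..., near the deep-well range
```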
Silt density index
[ "Chemistry", "Engineering", "Environmental_science" ]
209
[ "Hydrology", "Separation processes", "Water treatment", "Water pollution", "Hydrology stubs", "Membrane technology", "Environmental engineering", "Water technology" ]
14,343,191
https://en.wikipedia.org/wiki/BLUF%20%28communication%29
BLUF (bottom line up front) is the practice of beginning a message with its key information (the "bottom line"). This provides the reader with the most important information first. By extension, that information is also called a BLUF. It differs from an abstract or executive summary in that it is simpler and more concise, similar to a thesis statement, and it resembles the inverted pyramid practice in journalism and the so-called "deductive" presentation of information, in which conclusions precede the material that justifies them, in contrast to "inductive" presentation, which lays out arguments before the conclusions drawn from them. BLUF is a standard in U.S. military communication whose aim is to make military messages precise and powerful. It differs from an older, more-traditional style in which conclusions and recommendations are included at the end, following the arguments and considerations of facts. The BLUF concept is not exclusive to writing since it can also be used in conversations and interviews. Purpose BLUF is used for effective communication. Studies show that organizations with effective communications produced a 47% greater return to shareholders over five years. BLUF aims to enable the receiver of a message to make faster decisions, especially for people who are busy, time-constrained, or overloaded with lots of information. BLUF helps manage a reader's load, as most readers' priority is to get through all text or copy quickly and efficiently. This way, the reader can grasp the main idea or the whole thought of a write-up. The BLUF approach can help writers better organize their thoughts by starting with the big idea that they want to convey. For writers, announcing the BLUF convention while the article is in draft form might give the author a sense of clarity because the essay's purpose is stated early on, and having it written out can keep the writer on task. BLUF communications place the main point of a message at the beginning and then follow it up with the context. In addition, it is used to enforce speed and clarity in delivering reports and emails. It is followed by essential background information that summarizes or enumerates the considerations (events or prior decisions) that led to the bottom line. For example: BLUF: I need you to approve both the design and content of the attached flyer by noon on August 10. This flyer is for an upcoming conference at which we are exhibiting. I have included information about the upcoming classes we are offering, our contact information, and a list of the services we offer. By putting the bottom line up front, the example above gives the receiver what is expected of them and the task's level of priority. Origin The phrase "bottom line up front" comes from a 100-page document entitled "Army Regulation 25–50: Information Management: Records Management: Preparing and Managing Correspondence". One of the standards for army writing for correspondence includes the use of BLUF, as cited in the following text: "Army writing will be concise, organized, and to the point. Two essential requirements include putting the main point at the beginning of the correspondence (bottom line up front) and using the active voice (for example, "You are entitled to jump pay for the time you spent in training last year")." In a 2017 guidance document on how the U.S. Defense Department answers inquiries from Capitol Hill, Defense Secretary Jim Mattis said he expects the department to improve its communication with Congress "at every level".
According to an August 22 memo from Capt. Hallock Mohler, Mattis' executive secretary, responses to congressional inquiries directed to the secretary or deputy secretary "must be completed within five calendar days". If more time is needed, "an interim response or reply will be sent indicating when to expect a final one." Mattis' guidance includes the following two directions: "Answer the question asked. Address the issue raised. Do not avoid the question or answer a different question. If you can't answer the question or address the issue, state why." And "Give members of Congress the Bottom Line Up Front; be direct and to the point using clear, concise, and straightforward language." Use in the military The various support units of the military also use BLUF to convey their studies and findings, as shown in the following abstract from a medical journal: "Bottom Line Up Front: In this perspective essay, ENS Ofir Nevo and Dr Laura Lambert briefly discuss the concept of an outward mindset and how they have applied it in the context of medical education. ENS Nevo shares his story of deciding to attend medical school at the Uniformed Services University, as part of his desire and commitment to serve others. Early on, the requirements of medical school created intense demands that began to disconnect him from the commitment and connection that first drew him to a medical career. ENS Nevo describes how an awareness of the choice of mindset helped him address these challenges and stay better connected to his purpose and calling. A case analysis by Lambert further explores how the awareness and practice of an outward mindset may help students, residents, and attendings see how they can improve their own well-being and connection to the people that brought them to medicine in the first place. Their experiences demonstrate how outward mindset principles can be a valuable tool for empowering students and physicians with a perspective that invites new solutions for the challenges of life and work." Similarly, military lawyers use BLUF to summarize the key points of their reports: "Bottom line up front: no two states have identical national laws; even our understanding and application of the laws of International Humanitarian Law (IHL) (Geneva and Hague being the cornerstones) are not uniform. As judge advocates (JA) and legal advisors (LEGADs), we have a central role in identifying and understanding the relevant national positions within combined forces, the implications for the force, and advising how to minimize the operational or tactical impact, in order to ensure mission accomplishment." Military officers also use a short BLUF to convey a positive review, such as "Bottom line up front, you should read this book. That said, while I do recommend it, it is not without some serious issues." They may likewise convey a negative review, as follows: "Here's the bottom line up front: As physicist Wolfgang Pauli famously quipped, "This is not right. It isn't even wrong." Lt Col Robert Spalding's article "America's Two Air Forces" (Summer 2009) is deeply flawed in both premise and argument. Meaningful analysis of our aircraft requirements demands sound methodology and critical assessments that minimize internal biases. Unfortunately, the author falls far short on both counts. He describes a requirement for a bifurcated US Air Force equipped to meet the demands of peer-competitor threats and irregular warfare, and then asserts that current aircraft procurement plans will fail to meet either requirement.
While there is some merit in his general assessment of roles and missions for his "Two Air Forces" (kudos to his discussion of irregular warfare), the analysis offered – which is inadequate and often specious – fails to support his conclusions. I will address his assertion that our Air Force should focus on a peer-competitor force structure." Effective writing Among its many rules, the now-rescinded DA pamphlet mandated structuring written staff products with the main point, or bottom line, at the beginning. The pamphlet said Army writers should give the bottom line up front, or BLUF, because "the greatest weakness in ineffective writing is that it doesn't quickly transmit a focused message." This is applicable not only in military letters but also in emails, conversations, digital media, and more. In writing When writing a document for business and academic purposes, BLUF helps in writing the message and argumentation because it features a main "what" and "so what" prominently. Stating the key judgment and significance up front sets up the argument, ensures the message is clear, and highlights why the reader should care about the document. In order to create reader-friendly prose, writers structure their paragraphs using the BLUF format to better aid the reader's ability to recall the paragraph's main idea or content. BLUF-structured topic sentences are applicable when writing literature reviews, experimental results, and argumentative essays. The BLUF style can also be routinely seen in executive summaries, reports, subject lines in e-mails, and abstracts in scholarly articles. The bottom line is placed at the onset because executives tend to focus on problem solving. It may be applied directly to the format of a résumé to prevent it from being too long or wordy. In certain technical writing, BLUF may be considered desirable. It has also been advocated for scholarly articles. BLUF gives brevity in communication. This conciseness comes from placing the conclusion, the summarized vital information and required actions, at the start. In journalistic writing, BLUF resembles the inverted pyramid structure, for the latter also aims to serve readers well by arranging the story elements in descending order of importance. Like the inverted pyramid structure, in which the story's conclusion is already contained in its lead, writing in the BLUF structure adheres to brevity and clarity so that readers can understand the message right away without sacrificing essential facts and without having to reread the message. Army writing is effective when it is functional and satisfies the writer's and the intended readers' purposes effectively (adequate to accomplish a purpose; producing the intended or expected result). The BLUF model is designed to state upfront the purpose of the message and the required action to be taken. It is intended to respond quickly to the five Ws: who, what, where, when, and why. This kind of writing requires precision and direct statements that enforce fast and clear communications. Subject lines in such emails use keywords in all caps to note the email's purpose, such as info (for informational purposes only), request (seeks permission or approval by the recipient), and action (the recipient must take some action). The following is an example of a BLUF message from the Air Force Handbook: "BLUF: Effective 29 October 2013, all Air Force Doctrine Documents (AFDDs) have been rescinded and replaced by core doctrine volumes and doctrine annexes."
Another example comes from the U.S. offensive in Iraq in 2007: "The battle against al-Qa'eda in Diyala, Operation Arrowhead Ripper, is expected to last for weeks. The end state is to destroy the al-Qa'eda influences in this province and eliminate their threat against the people." Along with military professionals, analysts from the intelligence community also use BLUF. Intelligence analysts often start an assessment with their bottom line up front. Their analytic reports are often drafted for busy policymakers who possess limited time for consuming information and therefore prefer the main points and judgements plainly presented at the beginning. For example, CIA Reports and Estimate 34-39's bottom-line-up-front judgment assessed Soviet intentions in Latin America, especially what the Soviets would attempt: This assessment suggests that potential Soviet actions in Latin America may increase and do so nefariously. These types of BLUF judgments were often discussed at U.S. National Security Council meetings from after WWII up to the early 1950s. Another example would be the Muslim insurgency in Mindanao, with the following BLUF: "Substantive resolution of a decades-long Muslim insurgency in Mindanao is unlikely anytime soon despite the signing of peace agreements between Manila and the separatist militants. Several entrenched obstacles to resolution suggest that the conflict will continue to drain the Philippines' scarce security resources, thus limiting its ability to pursue greater military cooperation with its security partners, and that parts of Mindanao will remain a haven for terrorists. Endemic poverty, corruption, powerful political opposition, factionalism, Manila's weakness in resources and capacity, and inflexibility on outcome hamper both sides." The importance of BLUF in the intelligence community may be summarized as follows: "These busy men and women rely on clear, concise, and accurate intelligence reporting to make daily decisions that affect U.S. national security, U.S. policies, and the lives of U.S. servicemen and -women. Arranging your intelligence reporting in the BLUF format helps them efficiently locate and comprehend the information they need." In a Harvard Business Review article, Kabir Sehgal enumerated three main ways to format emails with military precision: (1) Subject with key words – key words specify the nature of the email (e.g. Action, Sign, Info, Decision, etc.); (2) Bottom line up front (BLUF) – emails should be short, basically answering the five Ws: who, what, when, where, and why; (3) Be economical – short enough to understand while conveying all the details. Use of the active voice is mandated rather than the passive voice. It has been recommended that BLUF be used in writing policy papers and memos. This is because policymakers have short attention spans, given that they have much work to do. They may not appreciate lengthy prose and verbosity. They only want the essential information, so as not to get bogged down in details. In writing policy papers and memos, military professionals, intelligence analysts, policy analysts, and the like need to include any second-order or third-order effect in their BLUF. The upfront inclusion of the consequences of the direct result of an action or change will entice busy policymakers to read the whole memo or set it aside to read later. To illustrate, Title IX's college sports regulation makes sure women and men have the same rights.
Women and men must be equal in both athletic scholarships, and the male-female ratio of athletes needs to match the school's student ratio. This is a first-order effect – more women get to play sports and receive scholarships. However, due to financial constraints, football is the only sport that makes money, and there is not a women's sport with an equivalent number of players. Hence, if the school wants to have a football team, they will also have to have five women's sports teams before adding another men's sport. A second-order effect is that schools are (economically) forced to drop some of the less common men's sports teams. A third-order effect may be that the sport loses popularity over time (wrestling is an example). Applying this to the policy world, the two examples show this BLUF structure with a second-order effect: (1) The Philippine President will probably sign key legislation for the peace accord, but opposition elements are likely to challenge the law in court and thwart implementation. (2) An Islamic militant group is publicizing the terrorist activities of its supporters in the region as part of a media campaign to promote the group's network there, which encourages foreign fighters to travel to the region. In conversation In conversation, the BLUF model can be used to keep conversation or answers to questions concise and focused on the immediate topic, in order to help a person state the main point (such as in an interview). The BLUF approach helps top-level managers and senior military officials in decision-making especially under severe time constraints, when faced with numerous issues on a given timeframe, and when communication of essential information is necessary in dealing with high-pressure situations. BLUF is also useful when conversing at the organizational level. In the BLUF framework, for effective communication, it is necessary to identify the purpose of the communication and share that purpose with the audience (e.g. bosses, workers, and colleagues). In this framework, instead of reporting a detailed chronology of all the events that led up to this point, people first report the BLUF or conclusion, then explain the premises that led to the conclusion. In digital communication At present, 66% of consumers open their messages through mobile devices. Digital communication follows a different set of norms than direct mail, as emails require brevity and clarity of message content; hence, the BLUF framework is applicable to optimizing writing for mobile email consumption. An email patterned in BLUF declares the purpose of the email and the action required. The subject of the email states exactly what the email is about. The body of the message should quickly answer the five Ws: who, what, where, when, and why. The first few sentences explain the purpose and reason for the email and continue with supporting details. The message conveyed must make clear whether it is simply for information or requires action. This helps email recipients grasp and retain the message. Thus, an effective BLUF distills the most important information for the reader (receiver of the message). The nature of BLUF writing is short and concise; hence, it helps reduce time spent, especially in the decision-making process. Below is an example of a traditional narrative email between colleagues who try to solve a problem: Jim, Over the course of working on the new project, we've encountered some challenges working with the data. When we try to take table A from Database 1 and load it into Database 2, we are getting an error.
So far, we've tried a few methods we found online here and here but nothing seems to work. Do you have any experience with this type of data transfer? If not do you know anyone else that has experience converting Oracle data to SQL Server? As the example reflects, the sender's actual query only arrives at the end. Further, the message gives no information on what kind of error occurred, offers the receiver several links rather than elaborating on the methods tried, and places a technical detail after the request for help. In contrast, the BLUF email version is: Jim, Do you know who can help us convert Oracle data to SQL Server? This is for the new project and we've encountered some challenges... In the second message, the receiver immediately knows whether he can help or needs the assistance of another colleague. This will lead to a faster decision-making process. The Persimmon Group has revealed that nearly 30% of office workdays are dedicated to reading and answering email and workers spend about 40% of their time in meetings. This has been rising due to the continuous integration of technology as part of business processes. During the COVID-19 pandemic, the National Bureau of Economic Research found that the number of meetings per person increased by 13.5%. Employees spent about 11.5% less time in meetings during the post-lockdown period. Beyond textual discussions, BLUF in digital communication also means conveying data. This includes making a presentation filled with facts and figures. A presentation can begin with a "BLUF slide"—a compelling visual image that encapsulates the overall thesis. Before presenting research data to marketers, for instance, presenters may show a timeline of a company's sales before and after it experienced a public relations crisis. In planning and project management The BLUF model can also be used in planning and management to ensure the purpose of plans is kept in mind, decision-maker support is more readily attainable, and impact may more easily and accurately be measured. It is considered the best way for intelligence analysts to communicate with policymakers and commanders, who are often too busy to read and carefully digest every word of the intelligence products they rely upon to make decisions. Summarizing each paragraph at its beginning allows decision makers to quickly skim intelligence products without sacrificing clarity. Because materials that are not in the BLUF format—such as academic texts—may contain paragraphs with several important ideas located at the beginning, middle, or end, readers who skim these publications may inadvertently miss important information. The BLUF approach to sales talk, for example, is also called the elevator speech. It entails that the messenger should be able to pitch a story as the elevator travels from one floor to another, which is approximately 30 seconds or less. The following are some tips on using BLUF in project management:
In order to synthesize the details well, the messenger should have a great deal of topic mastery or familiarity with the whole story.
Inputs must be comprehensive yet concise; wordiness and fillers must be avoided.
For BLUF-structured speech to work, all aspects of the analysis must be understood, such as "the critical success factors, the risks, the assumptions, and so on".
Since BLUF is audience-centric, salient points must be addressed clearly while taking into consideration the needs and background of the listener.
A clearly defined purpose must be kept in mind when structuring a speech in BLUF format: simple and measurable enough for a decision to be made possible.
A BLUF allows messengers to "think through relevant views and understand these ideas as our stakeholders see them".
In psychological assessment BLUF has been used in one program to help quickly assess the most pressing problem facing a patient. The BLUF method is most useful as part of the cognitive-behavioral approaches in primary care to help the physician understand information that may be beneficial for the patient. Communicating the results in a format that subjects can easily understand is paramount. In a medical team setting, each member values speed and brevity. Simon and Folen (2001) suggest using the bottom line up front (BLUF) format—the recommendation first, followed by the backup reasoning or rationale in clear and straightforward terms. A parallel process should be used with the patients. Building on earlier works, Conoley, Padula, Payton, and Daniels (1994), using archived footage of sessions, found that a patient was most likely to implement a recommended treatment if the following three conditions were matched: the recommendation needed to match the problem, should not be too difficult to follow [emphasis added], and should build on a patient's strengths. Patients being counseled tend to follow a treatment plan if, among other things, the recommendation is explained first and followed up with the justification, which are typical features of a BLUF. Patients are more likely to carry out a tailored therapy when its benefits are explicitly stated immediately. In SEO writing Search engine optimization (SEO) writing is an integral part of online marketing. It differs from other forms of writing since it contains keywords that will help in ranking the website. These keywords should appear naturally in the article. SEO writing is also more structured and may involve complicated instructions for better search engine visibility. When more people view the website, there is a chance for the conversion rate to increase. The BLUF model is relevant in SEO writing since it tells the readers what they want to know right away. While the primary goal of SEO writing is to rank higher in the search engines, the primary targets are still human readers. They need to grasp the information immediately. For instance, in a parenting article, the conclusion may be presented in the first paragraph. With the BLUF model, readers will feel the need to read until the end. It is beneficial in SEO writing since the goal is to make the readers go through the information until the end, and eventually heed the calls to action (CTAs). Creative CTAs are also used to point users toward the next step of partnering with a business. BLUF writing in SEO does not mean there will be no conclusion towards the end of the article. The idea is to write a more succinct version in the starting paragraph. In healthcare communication BLUF communication may also be used in healthcare. Data show that poor communication accounts for 30 percent of all medical malpractice claims filed from 2009 to 2013. Thirty-seven percent of these claims involved serious adverse events, such as debilitating conditions (e.g., extended hospital stays, loss of limbs, psychological trauma) and death. These errors result from miscommunications among the members of the medical team, such as when coordinating the treatment plan for the patient.
Moreover, these may lead to an increase in negative patient outcomes and customer dissatisfaction. Thus, there is a need to practice effective communication to significantly avert medical errors and malpractice, and thereby contribute to the best health outcomes. One common practice in the healthcare setting is the "hand-off" procedure. During "hand-offs" or "handovers" (i.e., change of shift report), critical information about patient care is transferred between the outgoing and the incoming staff. Usually, this process takes place in limited time. Thus, communication gaps are very likely. One of the strategies to eliminate these gaps is the use of the bottom line up front (BLUF) approach to communication. The BLUF approach is used to customize the information to be transferred as well as the style of "handoff" to match the specific needs of patients. Additionally, BLUF communication may also be used during "code blue" (a medical emergency such as cardiac or respiratory arrest). Welu (2020) specified that BLUF fosters clarity of communication among members of a group during crisis or emergency situations. Furthermore, the BLUF approach may also be used during referrals of a patient's condition to another healthcare worker or health service provider, as in the case of nurses to nurses, nurses to physicians, or junior physicians to attending physicians. When documenting the treatment plan and the actual care interventions that were done for the patients, the BLUF approach may also be useful. Providing information about the patient's condition together with the appropriate plans, succinct and straight to the point, is vital to ensure that patients experience the best quality of healthcare and the best medical outcome possible. See also Abstract (summary) Inverted pyramid (journalism) Thesis statement TL;DR References Planning Human communication Newswriting
BLUF (communication)
[ "Biology" ]
5,310
[ "Human communication", "Behavior", "Human behavior" ]
14,343,887
https://en.wikipedia.org/wiki/Precision%20and%20recall
In pattern recognition, information retrieval, object detection and classification (machine learning), precision and recall are performance metrics that apply to data retrieved from a collection, corpus or sample space. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Written as a formula:
Precision = |{relevant instances} ∩ {retrieved instances}| / |{retrieved instances}|
Recall (also known as sensitivity) is the fraction of relevant instances that were retrieved. Written as a formula:
Recall = |{relevant instances} ∩ {retrieved instances}| / |{relevant instances}|
Both precision and recall are therefore based on relevance. Consider a computer program for recognizing dogs (the relevant element) in a digital photograph. Upon processing a picture which contains ten cats and twelve dogs, the program identifies eight dogs. Of the eight elements identified as dogs, only five actually are dogs (true positives), while the other three are cats (false positives). Seven dogs were missed (false negatives), and seven cats were correctly excluded (true negatives). The program's precision is then 5/8 (true positives / selected elements) while its recall is 5/12 (true positives / relevant elements). Adopting a hypothesis-testing approach, where in this case, the null hypothesis is that a given item is irrelevant (not a dog), absence of type I and type II errors (perfect specificity and sensitivity) corresponds respectively to perfect precision (no false positives) and perfect recall (no false negatives). More generally, recall is simply the complement of the type II error rate (i.e., one minus the type II error rate). Precision is related to the type I error rate, but in a slightly more complicated way, as it also depends upon the prior distribution of seeing a relevant vs. an irrelevant item. The above cat and dog example contained 8 − 5 = 3 type I errors (false positives) out of 10 total cats (true negatives), for a type I error rate of 3/10, and 12 − 5 = 7 type II errors (false negatives), for a type II error rate of 7/12. Precision can be seen as a measure of quality, and recall as a measure of quantity. Higher precision means that an algorithm returns more relevant results than irrelevant ones, and high recall means that an algorithm returns most of the relevant results (whether or not irrelevant ones are also returned). Introduction In a classification task, the precision for a class is the number of true positives (i.e. the number of items correctly labelled as belonging to the positive class) divided by the total number of elements labelled as belonging to the positive class (i.e. the sum of true positives and false positives, which are items incorrectly labelled as belonging to the class). Recall in this context is defined as the number of true positives divided by the total number of elements that actually belong to the positive class (i.e. the sum of true positives and false negatives, which are items which were not labelled as belonging to the positive class but should have been). Precision and recall are not particularly useful metrics when used in isolation. For instance, it is possible to have perfect recall by simply retrieving every single item. Likewise, it is possible to achieve perfect precision by selecting only a very small number of extremely likely items.
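A minimal sketch that reproduces the dog-recognition example above, with the counts taken directly from it:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Dog recognizer: 5 true positives, 3 false positives (cats called dogs),
# 7 false negatives (dogs missed) out of 12 dogs in the photo.
p, r = precision_recall(tp=5, fp=3, fn=7)
print(f"precision = {p:.3f}")  # 5/8  = 0.625
print(f"recall    = {r:.3f}")  # 5/12 = 0.417
```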
In a classification task, a precision score of 1.0 for a class C means that every item labelled as belonging to class C does indeed belong to class C (but says nothing about the number of items from class C that were not labelled correctly) whereas a recall of 1.0 means that every item from class C was labelled as belonging to class C (but says nothing about how many items from other classes were incorrectly also labelled as belonging to class C). Often, there is an inverse relationship between precision and recall, where it is possible to increase one at the cost of reducing the other, but context may dictate whether one is more valued in a given situation: A smoke detector is generally designed to commit many Type I errors (to alert in many situations when there is no danger), because the cost of a Type II error (failing to sound an alarm during a major fire) is prohibitively high. As such, smoke detectors are designed with recall in mind (to catch all real danger), even while giving little weight to the losses in precision (and making many false alarms). In the other direction, Blackstone's ratio, "It is better that ten guilty persons escape than that one innocent suffer," emphasizes the costs of a Type I error (convicting an innocent person). As such, the criminal justice system is geared toward precision (not convicting innocents), even at the cost of losses in recall (letting more guilty people go free). A brain surgeon removing a cancerous tumor from a patient's brain illustrates the tradeoffs as well: The surgeon needs to remove all of the tumor cells since any remaining cancer cells will regenerate the tumor. Conversely, the surgeon must not remove healthy brain cells since that would leave the patient with impaired brain function. The surgeon may be more liberal in the area of the brain they remove to ensure they have extracted all the cancer cells. This decision increases recall but reduces precision. On the other hand, the surgeon may be more conservative in the brain cells they remove to ensure they extract only cancer cells. This decision increases precision but reduces recall. That is to say, greater recall increases the chances of removing healthy cells (negative outcome) and increases the chances of removing all cancer cells (positive outcome). Greater precision decreases the chances of removing healthy cells (positive outcome) but also decreases the chances of removing all cancer cells (negative outcome). Usually, precision and recall scores are not discussed in isolation. A precision-recall curve plots precision as a function of recall; typically precision will decrease as recall increases. Alternatively, values for one measure can be compared for a fixed level of the other measure (e.g. precision at a recall level of 0.75), or both are combined into a single measure. Examples of measures that are a combination of precision and recall are the F-measure (the weighted harmonic mean of precision and recall), or the Matthews correlation coefficient, which is a geometric mean of the chance-corrected variants: the regression coefficients Informedness (DeltaP') and Markedness (DeltaP). Accuracy is a weighted arithmetic mean of Precision and Inverse Precision (weighted by Bias) as well as a weighted arithmetic mean of Recall and Inverse Recall (weighted by Prevalence). Inverse Precision and Inverse Recall are simply the Precision and Recall of the inverse problem where positive and negative labels are exchanged (for both real classes and prediction labels).
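To make the inverse relationship concrete, the following sketch sweeps a decision threshold over a small set of invented classifier scores; the scores and labels below are purely illustrative, not drawn from any real data set:

```python
# Sweep a decision threshold over hypothetical classifier scores to show the
# usual precision/recall tradeoff; 1 = relevant (positive), 0 = irrelevant.
scores = [0.95, 0.90, 0.85, 0.80, 0.70, 0.60, 0.50, 0.40, 0.30, 0.20]
labels = [1,    1,    0,    1,    1,    0,    1,    0,    0,    0]

total_pos = sum(labels)
for threshold in (0.9, 0.7, 0.5, 0.3):
    predicted = [s >= threshold for s in scores]
    tp = sum(p and y == 1 for p, y in zip(predicted, labels))
    fp = sum(p and y == 0 for p, y in zip(predicted, labels))
    precision = tp / (tp + fp) if tp + fp else float("nan")
    recall = tp / total_pos
    print(f"threshold={threshold:.1f} precision={precision:.2f} recall={recall:.2f}")
```

Lowering the threshold retrieves more items, so recall rises while precision tends to fall, tracing out one path along a precision-recall curve.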
True Positive Rate and False Positive Rate, or equivalently Recall and 1 − Inverse Recall, are frequently plotted against each other as ROC curves and provide a principled mechanism to explore operating point tradeoffs. Outside of Information Retrieval, the application of Recall, Precision and F-measure is argued to be flawed, as these measures ignore the true negative cell of the contingency table and are easily manipulated by biasing the predictions. The first problem is 'solved' by using Accuracy and the second problem is 'solved' by discounting the chance component and renormalizing to Cohen's kappa, but this no longer affords the opportunity to explore tradeoffs graphically. However, Informedness and Markedness are Kappa-like renormalizations of Recall and Precision, and their geometric mean, the Matthews correlation coefficient, thus acts like a debiased F-measure. Definition For classification tasks, the terms true positives, true negatives, false positives, and false negatives compare the results of the classifier under test with trusted external judgments. The terms positive and negative refer to the classifier's prediction (sometimes known as the expectation), and the terms true and false refer to whether that prediction corresponds to the external judgment (sometimes known as the observation). Let us define an experiment from P positive instances and N negative instances for some condition. The four outcomes can be formulated in a 2×2 contingency table or confusion matrix: an instance predicted positive and actually positive is a true positive (TP); predicted positive but actually negative, a false positive (FP); predicted negative but actually positive, a false negative (FN); and predicted negative and actually negative, a true negative (TN). Precision and recall are then defined as: $\text{Precision} = \frac{TP}{TP + FP}$ and $\text{Recall} = \frac{TP}{TP + FN}$. Recall in this context is also referred to as the true positive rate or sensitivity, and precision is also referred to as positive predictive value (PPV); other related measures used in classification include true negative rate and accuracy. True negative rate is also called specificity. Precision vs. Recall Both precision and recall may be useful in cases where there is imbalanced data. However, it may be valuable to prioritize one metric over the other in cases where the outcome of a false positive or false negative is costly. For example, in medical diagnosis, a false positive test can lead to unnecessary treatment and expenses. In this situation, it is useful to value precision over recall. In other cases, the cost of a false negative is high, and recall may be a more valuable metric. For instance, the cost of a false negative in fraud detection is high, as failing to detect a fraudulent transaction can result in significant financial loss. Probabilistic Definition Precision and recall can be interpreted as (estimated) conditional probabilities: precision is given by $P(C = P \mid \hat{C} = P)$, while recall is given by $P(\hat{C} = P \mid C = P)$, where $\hat{C}$ is the predicted class and $C$ is the actual class (i.e. $C = P$ means the actual class is positive). Both quantities are, therefore, connected by Bayes' theorem. No-Skill Classifiers The probabilistic interpretation makes it easy to derive how a no-skill classifier would perform. A no-skill classifier is defined by the property that the joint probability $P(\hat{C} = P, C = P)$ is just the product of the unconditional probabilities $P(\hat{C} = P)$ and $P(C = P)$, since the classification and the presence of the class are independent. For example, the precision of a no-skill classifier is simply a constant, $\text{Precision}_{\text{no-skill}} = \frac{P(\hat{C} = P)\,P(C = P)}{P(\hat{C} = P)} = P(C = P)$, i.e. it is determined by the probability/frequency with which the class P occurs. A similar argument can be made for the recall: $\text{Recall}_{\text{no-skill}} = \frac{P(\hat{C} = P)\,P(C = P)}{P(C = P)} = P(\hat{C} = P)$, which is the probability of a positive classification. Imbalanced data Accuracy can be a misleading metric for imbalanced data sets. Consider a sample with 95 negative and 5 positive values.
Classifying all values as negative in this case gives a 0.95 accuracy score. There are many metrics that don't suffer from this problem. For example, balanced accuracy (bACC) normalizes true positive and true negative predictions by the number of positive and negative samples, respectively, and divides their sum by two: $\text{bACC} = \frac{TPR + TNR}{2}$. For the previous example (95 negative and 5 positive samples), classifying all as negative gives a 0.5 balanced accuracy score (the maximum bACC score is one), which is equivalent to the expected value of a random guess in a balanced data set. Balanced accuracy can serve as an overall performance metric for a model, whether or not the true labels are imbalanced in the data, assuming the cost of FN is the same as FP. The TPR and FPR are a property of a given classifier operating at a specific threshold. However, the overall numbers of TPs, FPs etc. depend on the class imbalance in the data via the class ratio $r = P/N$. As the recall (or TPR) depends only on positive cases, it is not affected by $r$, but the precision is. We have that $\text{Precision} = \frac{TPR \cdot P}{TPR \cdot P + FPR \cdot N} = \frac{TPR}{TPR + FPR / r}$. Thus the precision has an explicit dependence on $r$. Starting with balanced classes at $r = 1$ and gradually decreasing $r$, the corresponding precision will decrease, because the denominator increases. Another metric is the predicted positive condition rate (PPCR), which identifies the percentage of the total population that is flagged. For example, for a search engine that returns 30 results (retrieved documents) out of 1,000,000 documents, the PPCR is 0.003%. According to Saito and Rehmsmeier, precision-recall plots are more informative than ROC plots when evaluating binary classifiers on imbalanced data. In such scenarios, ROC plots may be visually deceptive with respect to conclusions about the reliability of classification performance. Differently from the above approaches, if an imbalance scaling is applied directly by weighting the confusion matrix elements, the standard metric definitions still apply even in the case of imbalanced datasets. The weighting procedure relates the confusion matrix elements to the support set of each considered class. F-measure A measure that combines precision and recall is the harmonic mean of precision and recall, the traditional F-measure or balanced F-score: $F = 2 \cdot \frac{\text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}$. This measure is approximately the average of the two when they are close, and is more generally the harmonic mean, which, for the case of two numbers, coincides with the square of the geometric mean divided by the arithmetic mean. There are several reasons that the F-score can be criticized, in particular circumstances, due to its bias as an evaluation metric. This is also known as the $F_1$ measure, because recall and precision are evenly weighted. It is a special case of the general $F_\beta$ measure (for non-negative real values of $\beta$): $F_\beta = (1 + \beta^2) \cdot \frac{\text{precision} \cdot \text{recall}}{\beta^2 \cdot \text{precision} + \text{recall}}$. Two other commonly used measures are the $F_2$ measure, which weights recall higher than precision, and the $F_{0.5}$ measure, which puts more emphasis on precision than recall. The F-measure was derived by van Rijsbergen (1979) so that $F_\beta$ "measures the effectiveness of retrieval with respect to a user who attaches $\beta$ times as much importance to recall as precision". It is based on van Rijsbergen's effectiveness measure $E = 1 - \left(\frac{\alpha}{P} + \frac{1 - \alpha}{R}\right)^{-1}$, the second term being the weighted harmonic mean of precision and recall with weights $(\alpha, 1 - \alpha)$. Their relationship is $F_\beta = 1 - E$ where $\alpha = \frac{1}{1 + \beta^2}$. Limitations as goals There are other parameters and strategies for measuring the performance of an information retrieval system, such as the area under the ROC curve (AUC) or pseudo-R-squared.
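A short sketch of the two formulas above, applied to the 95-negative/5-positive example from the text; the precision/recall pair passed to f_beta at the end is made up purely for illustration:

```python
# Balanced accuracy and F-beta on the imbalanced example from the text:
# 95 negatives, 5 positives, and a classifier that labels everything negative.
def f_beta(precision, recall, beta):
    """Weighted harmonic mean; beta > 1 favours recall, beta < 1 precision."""
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

tp, fp, tn, fn = 0, 0, 95, 5                 # "predict all negative"
accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.95, deceptively good
tpr = tp / (tp + fn)                         # recall (sensitivity) = 0.0
tnr = tn / (tn + fp)                         # specificity = 1.0
balanced_accuracy = (tpr + tnr) / 2          # 0.5, the honest number

print(accuracy, balanced_accuracy)           # 0.95 0.5

# F-beta for an illustrative classifier with precision 0.6 and recall 0.9:
print(f_beta(0.6, 0.9, beta=1))              # ~0.72 (F1)
print(f_beta(0.6, 0.9, beta=2))              # ~0.82 (F2, recall-weighted)
```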
Multi-class evaluation Precision and recall values can also be calculated for classification problems with more than two classes. To obtain the precision for a given class, we divide the number of true positives by the classifier bias towards this class (the number of times that the classifier has predicted the class). To calculate the recall for a given class, we divide the number of true positives by the prevalence of this class (the number of times that the class occurs in the data sample). The class-wise precision and recall values can then be combined into an overall multi-class evaluation score, e.g., using the macro F1 metric. See also Uncertainty coefficient, also called proficiency Sensitivity and specificity Confusion matrix Scoring rule Base rate fallacy References Baeza-Yates, Ricardo; Ribeiro-Neto, Berthier (1999). Modern Information Retrieval. New York, NY: ACM Press, Addison-Wesley, pp. 75 ff. Hjørland, Birger (2010); The foundation of the concept of relevance, Journal of the American Society for Information Science and Technology, 61(2), 217-237 Makhoul, John; Kubala, Francis; Schwartz, Richard; and Weischedel, Ralph (1999); Performance measures for information extraction, in Proceedings of DARPA Broadcast News Workshop, Herndon, VA, February 1999 van Rijsbergen, Cornelis Joost "Keith" (1979); Information Retrieval, London, GB; Boston, MA: Butterworth, 2nd Edition. External links Information Retrieval – C. J. van Rijsbergen 1979 Computing Precision and Recall for a Multi-class Classification Problem Information retrieval evaluation Information science Bioinformatics
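The multi-class recipe described above (per-class true positives divided by the predicted count for precision, and by the prevalence for recall, then macro-averaged) can be written in a few lines of Python; the three-class label sequences below are invented for illustration:

```python
# Macro-averaged precision, recall and F1 for a toy 3-class problem.
from collections import Counter

y_true = ["cat", "dog", "dog", "bird", "cat", "dog", "bird", "cat"]
y_pred = ["cat", "dog", "cat", "bird", "cat", "dog", "dog",  "bird"]

pred_counts = Counter(y_pred)   # classifier "bias" towards each class
true_counts = Counter(y_true)   # prevalence of each class

f1_scores = []
for c in sorted(set(y_true)):
    tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
    precision = tp / pred_counts[c] if pred_counts[c] else 0.0
    recall = tp / true_counts[c] if true_counts[c] else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    f1_scores.append(f1)
    print(f"{c}: precision={precision:.2f} recall={recall:.2f} f1={f1:.2f}")

print("macro F1 =", sum(f1_scores) / len(f1_scores))
```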
Precision and recall
[ "Engineering", "Biology" ]
3,119
[ "Bioinformatics", "Biological engineering" ]
14,344,447
https://en.wikipedia.org/wiki/Tybamate
Tybamate (INN; Solacen, Tybatran, Effisax) is an anxiolytic of the carbamate family. It is a prodrug for meprobamate in the same way as the better known drug carisoprodol. It has liver enzyme inducing effects similar to those of phenobarbital but much weaker. As the trade name Tybatran (Robins), it was formerly available in capsules of 125, 250, and 350 mg, taken 3 or 4 times a day for a total daily dosage of 750 mg to 2 g. The plasma half-life of the drug is three hours. At high doses in combination with phenothiazines, it could produce convulsions. Synthesis Catalytic hydrogenation of 2-methyl-2-pentenal (1) gives the aldehyde 2-methylpentanal (2). Treatment with formaldehyde gives a crossed Cannizzaro reaction yielding 2,2-bis(hydroxymethyl)pentane (3). Cyclisation of this diol with diethyl carbonate gives (4), which reacts with ammonia to provide the carbamate (5). Lastly, treatment with butyl isocyanate (6) produces tybamate. References Anxiolytics Carbamates Prodrugs GABAA receptor positive allosteric modulators
Tybamate
[ "Chemistry" ]
295
[ "Chemicals in medicine", "Prodrugs" ]
14,344,741
https://en.wikipedia.org/wiki/Mating%20disruption
Mating disruption (MD) is a pest management technique designed to control certain insect pests by introducing artificial stimuli that confuse the individuals and disrupt mate localization and/or courtship, thus preventing mating and blocking the reproductive cycle. It usually involves the use of synthetic sex pheromones, although other approaches, such as interfering with vibrational communication, are also being developed. History La confusion sexuelle, or mating disruption, was first discussed by the Institut national de la recherche agronomique in 1974 in Bordeaux, France. Winemakers in France, Switzerland, Spain, Germany, and Italy were the first to use the method to treat vines against the larvae of the moth genus Cochylis. Mechanism In many insect species of interest to agriculture, such as those in the order Lepidoptera, females emit an airborne trail of a specific chemical blend constituting that species' sex pheromone. This aerial trail is referred to as a pheromone plume. Males of that species use the information contained in the pheromone plume to locate the emitting female (known as a "calling" female). Mating disruption exploits the male insects' natural response to follow the plume by introducing a synthetic pheromone into the insects' habitat. The synthetic pheromone is a volatile organic chemical designed to mimic the species-specific sex pheromone produced by the female insect. The general effect of mating disruption is to confuse the male insects by masking the natural pheromone plumes, causing the males to follow "false pheromone trails" at the expense of finding mates, and impairing the males' ability to respond to "calling" females. Consequently, the male population experiences a reduced probability of successfully locating and mating with females, which leads to the eventual cessation of breeding and collapse of the insect infestation. The California Department of Pesticide Regulation, the California Department of Food and Agriculture, and the United States Environmental Protection Agency consider mating disruption to be among the most environmentally friendly treatments used to eradicate pest infestations. Mating disruption works best if large areas are treated with pheromones. Ten acres is a good minimum size for a successful MD program, but larger areas are preferable[2]. Advantages of mating disruption Pheromone programs are most effective when controlling low to moderate pest population densities. MD has also been identified as a pest control method to which the insect does not become resistant[1]. The scientific community, together with governmental agencies throughout the world, understands the benefits of mating disruption using species-specific sex pheromones, and considers sex-pheromone-based insect control programs among the most environmentally friendly treatments for managing and controlling insect pest populations. Insect pheromones have been successfully used as an effective tool to slow the spread of pests and to eradicate them from very large areas in the US; for example, to control the spongy moth (Lymantria dispar), a devastating forestry pest, and to eradicate the boll weevil and pink bollworm, two of the most damaging pests of cotton. Conventional pesticide-based control methods kill insects directly, whereas mating disruption prevents male insects from accurately locating a mating partner, leading to the eventual collapse of the mating cycle[3].
Mating disruption, due to the specificity of the sex pheromone of the insect species, has the benefit of only affecting the males of that species, while leaving other non-target species unaffected[3]. This allows for very targeted pest management, promoting the suppression of a single pest species while leaving the populations of beneficial insects (pollinators and natural enemies) intact. Mating disruption, like most pest management strategies, is a useful technique, but should not be considered a stand-alone treatment program[1], for it targets only a single species in plant production systems that usually have several pests of concern. Mating disruption is a valuable tool that should be used in Integrated Pest Management (IPM) programs. Pheromone programs have been used for several decades around the globe, and to date (2009) there is no documented public health evidence to suggest that agricultural use of synthetic pheromones is harmful to humans or to any other non-target species. However, continuing research is being conducted. Disadvantages of mating disruption Over the decades that pheromone pest programs have been used, several disadvantages have been argued when compared to the use of conventional pesticides. Most pheromones target a single species, so a specific mating disruption formulation controls only the species that uses that pheromone blend, whereas pesticides usually kill a plethora of species indiscriminately, controlling multiple species with a single application. Some synthetic pheromones have high developmental and production costs, making the mating disruption technique too costly to be adopted by conventional commercial growers. Furthermore, most commercial pheromone mating disruption formulations must be applied by hand, which can be expensive and time-consuming. Novel pheromone formulations recently developed to be mechanically applied provide long-lasting mating disruption effects (e.g., depending on the target pest, a single application of SPLAT controls the target pest for a complete reproductive cycle, or for the entire season). Methods of dispersal Microencapsulated pheromones Microencapsulated pheromones (MECs) are small droplets of pheromone enclosed within polymer capsules. The capsules control the release rate of the pheromone into the surrounding environment. The capsules are small enough to be applied by the same methods used to spray insecticides. The effective field longevity of the microencapsulated pheromone formulations ranges from a few days to slightly more than a week, depending on climatic conditions, capsule size and chemical properties[1]. Microcapsules in the pheromone formulations are usually kept above a prescribed diameter to avoid the risk of inhalation by humans. Hand applied dispensers Hollow tube dispensers are plastic twist-tie type dispensers, or plastic hollow fibers or microfibers, filled with synthetic sex pheromone and placed throughout the area to be protected. Pheromone baits and stations are stationary, attract-and-kill dispensers. Some are relatively large platforms containing a pheromone lure set in a glue board that ensnares the attracted insect. Other pheromone bait stations contain a pheromone lure in conjunction with a surface containing a dose of insecticide that reduces the attracted insect's fitness, thus reducing its ability to mate and reproduce.
High-emission dispensers There are several very-high-dose pheromone dispensers; some release pheromone passively, like pheromone sachets and large dollops of SPLAT pheromone formulations, while others actively release bursts of sex pheromone at timed intervals. Monolithic flowable dispensers A new, effective and economical concept in pheromone delivery, using a flowable formulation to create long-lasting monolithic pheromone dispensers, has been brought to the market in the past decade. These novel SPLAT pheromone mating disruption formulations can provide an effective season-long suppression effect (e.g., depending on the target pest, a single application of SPLAT controls the target pest for a complete reproductive cycle, or for the entire season) and can be manually or mechanically applied. Although mechanical dispersal techniques require specialized off-the-shelf application technology and/or equipment, once the application system is made to work it allows protection of extensive areas using pheromones, one of the most benign and effective pest management techniques available today. A benefit of SPLAT is that the dollop anchors where it lands, avoiding unwanted drift of the formulation once applied in the field, and, depending on the mode of application, the cured dollops are retrievable. Aerial dispersal In November 2007, a controversial aerial approach was used to spray microencapsulated LBAM pheromone in urban and rural areas of the counties of Santa Cruz and Monterey, California to combat the invasive light brown apple moth. Usually, disruption of the orientation of male moths to females can be detected by a reduction in moth capture in monitoring pheromone traps. The government campaign using areawide aerial microencapsulated pheromone applications failed to show any sign of mating disruption in the light brown apple moth populations in the treated area. It was found that the first aerial campaign was performed using an incomplete (i.e., the wrong) pheromone blend of the light brown apple moth, which greatly decreased the likelihood of success of the mating disruption program; that the LBAM microencapsulated formulation was untested; and, finally, that microencapsulated formulations are notorious for their short field life and weak, erratic performance. Furthermore, it is possible that the LBAM microencapsulated formulation used in the government campaign was unfit for aerial delivery in urban areas; although the pheromone itself is safe, the formulation used had microcapsules of very small diameter, making it a possible inhalation hazard that appears to be linked to an increase in allergic reactions among the population in the target area. This set of LBAM mating disruption aerial applications by the government created tremendous dissent among the public in general as well as in several sectors of the scientific community. Now, several years later, the affected communities as well as the nascent US pheromone industry (which provides safer, yet very effective, alternatives to the use of conventional pesticides) are still suffering the ripple effects of these disastrous Bay Area LBAM eradication campaigns. But there are numerous successful pest suppression programs that rely on aerial dispersal of pheromone mating disruptants. One of the largest pheromone mating disruption programs in the globe is Slow the Spread. Slow the Spread has been implemented across the spongy moth frontier from Wisconsin to North Carolina.
The program area is located ahead of the advancing front of the spongy moth population. The STS program focuses on early detection and suppression of the low-level populations along this advancing front, disrupting the natural progress of population buildup and spread. Every year hundreds of thousands of acres are aerially sprayed with two spongy moth pheromone mating disruption formulations, Flakes and SPLAT. A single mating disruption formulation application promotes season-long suppression of spongy moth in the treated areas. With a crew of 8 people it was possible to aerially treat a large expanse of forest with SPLAT GM in a single day. The consortium of Federal and State participants has been able to do the following: • decrease the new territory invaded by the spongy moth each year; • protect forests, forest-based industries, urban and rural parks, and private property; and • avoid at least $22 million per year in damage and management costs. It seems that the tremendous success of the Slow the Spread program is related to extremely well planned campaigns, which involve communication, transparency and clarity of objectives: in advance of an application, STS holds meetings that include the area population in general, concerned citizens, public officials, scientists and technical personnel to discuss strategies for the management of spongy moths in the areas of concern. There is a movement requesting that new government invasive species eradication campaigns model their pest suppression actions on existing successful suppression programs like STS, and embrace a more effective policy of communication, transparency and clarity of objectives. With the involvement and education of the public, areawide eradication campaigns will be better planned and more able to deliver decisive and effective pest eradication actions. See also Integrated pest management Pest (organism) Pesticide Pheromones Codling moth References Agronomy Pest control techniques Biological pest control Chemical ecology
Mating disruption
[ "Chemistry", "Biology" ]
2,457
[ "Biochemistry", "Chemical ecology" ]
14,345,140
https://en.wikipedia.org/wiki/European%20Society%20for%20Engineering%20Education
The European Society for Engineering Education is an organisation for engineering education in Europe. Commonly known as SEFI, an acronym for its French name, Société Européenne pour la Formation des Ingénieurs, it is also known in German as the Europäische Gesellschaft für Ingenieur-Ausbildung. SEFI was founded in Brussels in 1973 and has more than 300 members in 40 countries. It promotes the exchange of information about current developments in the field of engineering education between teachers, researchers and students in the various European countries. Additionally, it develops cooperation between higher engineering education institutions and promotes cooperation with industry, acting as a link between its members and other scientific and international bodies, in collaboration with other international organisations like its European sister organisation IGIP, the American Society for Engineering Education, and the Board of European Students of Technology. Members SEFI is primarily a network of universities; however, it offers four types of membership: individual, institutional (list), associate (list), and industrial (list). Institutional - Educational institutions and other teaching establishments involved in the education and training of engineers. Industrial - Enterprises, companies and administrations employing engineers or interested in the education and training of engineers. Associate - Professional organizations involved in engineering education or the improvement of the engineering profession, or institutions not fulfilling the criteria of the institutional membership. Individual - Persons involved in engineering education and the improvement of the engineering profession, and individuals interested in joining its Working Groups or EEDC. Organisation SEFI is governed by a board of directors composed of 21 elected members and members' representatives and two Vice-Presidents, and is presently chaired by President Hannu-Matti Järvinen of Tampere University. SEFI Special Interest Groups SEFI Special Interest Groups connect the educators, students and industrial stakeholders with interests in similar aspects of engineering education, and they are open to SEFI members. These groups organize meetings and workshops, write position papers and take part in EU projects. Events Workshops and seminars are regularly organized by the working groups and committees on specific themes of engineering education and in the context of SEFI's priorities. Participation in the working groups is reserved for the members of SEFI. Annual Conferences The SEFI Annual Conferences represent an opportunity for the members and all those involved in Engineering Education to meet colleagues, exchange views and opinions and establish new contacts. The themes of the Conferences reflect the interests of SEFI members. Previous conferences were organised by Twente University in the Netherlands, and Budapest University of Technology and Economics in Hungary. European Convention for Engineering Deans (ECED) The general objective of the Conventions is to bring together Deans from all over Europe to meet and discuss in depth common topics, share experiences, identify solutions for problems and build up a network with peers in different European countries. SEFI launched the conventions in 2005, under the Presidency of Prof. Borri, University of Florence. Since 2011, ECEDs have been organised annually by SEFI, occasionally in partnership with other organisations.
SEFI @ Work - online webinars In 2021, SEFI started to offer regular online seminars for the engineering education community. These are dedicated to specific topics in engineering education. Publications The European Journal of Engineering Education, published by Taylor and Francis, is the official scientific journal of SEFI. SEFI also publishes a monthly electronic Newsletter and a weekly Press review as a benefit of SEFI membership. Among its regular publications are also the SEFI Annual Report and the Proceedings of the SEFI Annual Conferences, indexed in Scopus. Other publications consist of ad hoc documents presenting the outcomes of seminars organised by the Working Groups/Committees/Projects as well as reference documents. Ongoing EU Projects ENHANCE – European universities alliance Erasmus+ Coordination: Universitat Politècnica de València Partners: SEFI, Universitat Politècnica de València, Technische Universität Berlin, Warsaw University of Technology, Politecnico di Milano 1863, RWTH Aachen University, Norwegian University of Science and Technology, Chalmers CiSTEM² Cooperative InterdiSciplinary Teacher Education Model for Coaching Integrated STEM - Erasmus+ 01/05/2021 – 30/04/2023 Coordination: KU Leuven Partners: University College Nordjylland (DK), Obuda University (HU), University of Cyprus, SEFI EuroTeQ Engineering Campus - European Universities Initiative Coordination: TU Munich Partners: Technical University of Denmark (DTU), École Polytechnique (L'X), Eindhoven University of Technology (TU/e), Technical University of Munich (TUM), Ecole polytechnique fédérale de Lausanne (EPFL) and the Technion – Israel Institute of Technology, Tallinn University of Technology (TalTech) and Czech Technical University in Prague (CTU). Awards Leonardo da Vinci Medal - The Leonardo da Vinci Medal is the highest distinction SEFI can bestow. The Medal is awarded by the Administrative Council to living persons who have made an outstanding contribution of international significance to engineering education. Since its institution in 1983, the Medal has been awarded to: Mr. Jacques Delors (France); Prof. Heinz Zemanek (Austria); Sir Monty Finniston † (United Kingdom); Prof. John P. Klus † (USA); Prof. Antonio Ruberti † (Italy); Prof. James C. I. Dooge † (Ireland); Prof. Hubert Curien † (France); Sir Robert Telford † (United Kingdom); Mr Jean Gandois (France); Prof. Fritz Paschke (Austria); Prof. Olgierd C. Zienkiewicz † (United Kingdom, Poland); Prof. Teuvo Kalevi Kohonen (Finland); Prof. Niklaus Wirth (Switzerland); Senator Pierre Laffitte (France); Prof. Niels I. Meyer (Denmark); Mr. Santiago Calatrava (Spain, Switzerland); Prof. Joaquim A. Ribeiro Sarmento (Portugal); Prof. Giuliano Augusti (Italy); Prof. Gülsün Sağlamer (Turkey); Prof. Ingemar Ingemarsson (Sweden); Mr Paul Soros (Hungary); Prof. Ole Vinther (Denmark); Prof. Jean Michel (France); H.R.H. Prince Friso van Oranje-Nassau † (The Netherlands); in 2010, Prof. Dr. Konrad Osterwalder, Rector of the United Nations University (Switzerland); in 2011, Mr. Luiz Inácio Lula da Silva (former President of Brazil); in 2012, Prof. Joseph Sifakis (Greece/France); in 2013, Dr. Franck De Winne (Belgium); in 2014, Dame Julia King, Baroness Brown of Cambridge (United Kingdom); in 2015 to Mr. Charles Champion (France); in 2016 to Mr. Markku Markkula (Finland); in 2017 to Prof. José Carlos Diogo Marques dos Santos (Portugal); in 2018 to Prof. Johan Malmqvist (Sweden); in 2019 to Commissioner Tibor Navracsics (Hungary); in 2020 to Dr.
Ruth Graham (United Kingdom); and, in 2021, to Günter Heitmann (Germany). The SEFI Fellowship Award - The SEFI Fellowship Award recognises meritorious service to engineering education in Europe. Award recipients may use the expression "Fellow of SEFI" (F.SEFI) as a postscript to their name. The nominees are in principle SEFI individual members who have worked in the field or in the promotion of engineering education for at least the previous five years. Best Papers Award - The Best Papers Award is given at the end of each Annual Conference in recognition of the quality and originality of the recipient's work. SEFI Francesco Maffioli Award - The Award has been created to commemorate the 37 years of outstanding support and major contributions to the Society of the late Francesco Maffioli. The Award is given to individual teachers, or a team of teachers, of higher engineering education institutions that are members of SEFI, in recognition of open-minded development of curriculum, learning environments or tools, novel didactics, methods or systems in engineering studies. This is to reflect Professor Maffioli's passion for cooperation with engineering students and ensuring they had a voice in the development of engineering education in the future. So far it has been awarded in 2014 to the Board of European Students of Technology; in 2018 to Prof. Ingvar Gustavsson from Blekinge Institute of Technology; in 2019 to Andre Baier for the Blue Engineering Initiative at TU Berlin; in 2020 to Ms Una Beagon and team – Creative Design Studio Framework – TU Dublin; and in 2021 to Gunter Bombaerts from TU Eindhoven. Cooperation SEFI cooperates with other major European and international associations (ASEE, GEDC, IFEES, WFEO, IGIP, BEST, LACCEI, EDEN and JSEE) and international bodies (European Commission, UNESCO, Council of Europe, OECD). SEFI also participated in the creation of numerous international organisations such as ENAEE, IFEES, EuroPace, IACEE, IIDEA, or EEDC. References External links Official SEFI homepage Annual conference 2021 Engineering societies Engineering education Engineering university associations and consortia European student organizations Higher education organisations based in Europe Organisations based in Brussels Educational organizations established in 1973 Scientific organisations based in Belgium Technical universities and colleges
European Society for Engineering Education
[ "Engineering" ]
1,915
[ "Engineering societies" ]
14,345,475
https://en.wikipedia.org/wiki/Panasas
Panasas is a data storage company that creates network-attached storage for technical computing environments. After operating for 25 years as a parallel file system and HPC company, Panasas changed its name to VDURA, reflecting its new focus as a software provider with a subscription-based revenue model. History Panasas is a computer data storage product company headquartered in San Jose, California. Panasas received seed funding from Mohr Davidow Ventures (MDV) and others. The first Panasas products were shipped in 2004, the same year that Victor M. Perez became CEO. Faye Pairman became CEO in 2011. Tom Shea, formerly Panasas COO, was appointed as CEO in 2020. Technology Panasas developed an extension for managing parallel file access in the Network File System, which was later integrated in Parallel NFS (pNFS), part of the NFS version 4.1 specification, published by the Internet Engineering Task Force as RFC 5661 in January 2010. pNFS described a way for the NFS protocol to process file requests to multiple servers or storage devices at once, instead of handling the requests serially. Panasas supports the DirectFlow, NFS, Parallel NFS and Server Message Block (also known as CIFS) data access protocols to integrate into existing local area networks. Panasas blade servers manage metadata, serving data for DirectFlow, NFS and CIFS clients using 10 Gigabit Ethernet. Panasas systems provide data storage and management for high-performance applications in the biosciences, energy, media and entertainment, manufacturing, government and research sectors. ActiveStor The ActiveStor product line is a parallel file system appliance that integrates hybrid storage hardware (hard drives and solid state drives), the PanFS parallel file system, its proprietary DirectFlow data access protocol, and the industry standard NFS and CIFS network protocols. ActiveStor Ultra ActiveStor Ultra (introduced in November 2018) is the newest generation of the Panasas ActiveStor storage system and features a re-engineered, portable file system that delivers performance and reliability on suitably qualified, industry standard storage hardware platforms. ActiveStor 20 (now ActiveStor Classic) was announced in August 2016 with increased capacity, using larger and faster disks. In November 2017, Panasas released the ActiveStor Director 100 and the ActiveStor Hybrid 100 (now ActiveStor Prime), which disaggregated the Director Blade, the controller node of the Panasas storage system, from the storage nodes. In November 2018, Panasas introduced ActiveStor Ultra, which featured a completely re-engineered portable file system (PanFS® 8) running on industry standard hardware. DirectFlow DirectFlow is a parallel data access protocol designed by Panasas for ActiveStor. DirectFlow avoids protocol I/O bottlenecks by accessing Panasas storage directly and in parallel. DirectFlow was originally supported on Linux, and expanded in April 2016 to support Apple's MacOS. PanFS Panasas created the PanFS clustered file system as a single pool of storage under a global filename space to support multiple applications and workflows in a single storage system. PanFS supports the DirectFlow (pNFS), NFS and CIFS data access protocols simultaneously. PanFS 7.0 added a FreeBSD operating foundation and a GUI that supports asynchronous push notification of system changes without user interaction.
In August 2020, Panasas announced a new version of PanFS that features Dynamic Data Acceleration technology, which automatically tunes storage for small files and mixed workloads. While other storage systems assign data to media "tiers" based on how recently files were accessed, Dynamic Data Acceleration assigns data based on file size to most efficiently use the underlying media. The "novel" method is designed to improve performance, eliminate manual tuning and control storage costs. References External links Panasas Company web site Parallel NFS Computer storage companies Computer hardware companies Computer companies of the United States Computer companies established in 1999 Technology companies based in the San Francisco Bay Area Privately held companies based in California Companies based in Sunnyvale, California Network file systems American companies established in 1999
Panasas
[ "Technology" ]
861
[ "Computer hardware companies", "Computers" ]
14,345,569
https://en.wikipedia.org/wiki/Calcium%20iodate
Calcium iodate is any of two inorganic compounds with the formula Ca(IO3)2(H2O)x, where x = 0 or 1. Both are colourless salts that occur as the minerals lautarite and bruggenite, respectively. A third mineral form of calcium iodate is dietzeite, a salt containing chromate with the formula Ca2(IO3)2CrO4. These minerals are the most common compounds containing iodate. Production and uses Lautarite, described as the most important mineral source of iodine, is mined in the Atacama Desert. Processing of the ore entails reduction of its aqueous extracts with sodium bisulfite to give sodium iodide. This comproportionation reaction is a major source of the sodium iodide. Calcium iodate can be produced by the anodic oxidation of calcium iodide or by passing chlorine into a hot solution of lime in which iodine has been dissolved. Calcium iodate is used as an iodine supplement in chicken feed. Ethylenediamine dihydroiodide (EDDI) is a more typical source of nutritional iodine. References Antiseptics Calcium compounds Iodates Oxidizing agents
Calcium iodate
[ "Chemistry" ]
258
[ "Iodates", "Redox", "Oxidizing agents" ]
14,346,033
https://en.wikipedia.org/wiki/TOXMAP
TOXMAP was a geographic information system (GIS) from the United States National Library of Medicine (NLM) that was deprecated on December 16, 2019. The application used maps of the United States to help users explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory (TRI) and Superfund programs with visual projections and maps. Description TOXMAP helped users create nationwide, regional, or local area maps showing where TRI chemicals were released on-site into the air, water, ground, and by underground injection, as reported by industrial facilities in the United States. It also identified the releasing facilities, color-coded release amounts for a single year or year range, and provided multi-year aggregate chemical release data and trends over time, starting with 1988. Maps could also show locations of Superfund sites on the Agency for Toxic Substances and Disease Registry National Priorities List (NPL), listing all chemical contaminants present at these sites. TOXMAP was a useful environmental health tool that made epidemiological and environmental information available to the public. There were two versions of TOXMAP available from its home page: the classic version released in 2004 and a newer version released in 2014 that was based on Adobe Flash/Apache Flex technology. In addition to many of the features of TOXMAP classic, the newer version provided an improved map appearance and interactive capabilities as well as a more current GIS look-and-feel. This included seamless panning, immediate update of search results when zooming to a location, two collapsible side panels to maximize map size, and automatic size adjustment after a window resize. The newer TOXMAP also offered improved U.S. Census layers with availability by Census Tract (2000 and 2010), Canadian National Pollutant Release Inventory (NPRI) data, U.S. commercial nuclear power plants, and improved and updated congressional district boundaries. TOXMAP classic users could search the system by location (such as city, state, or ZIP code), chemical name, chemical name fragment, release medium, release amount, facility name and ID, and could filter results to those residing within a pre-defined or custom geographic region. Search results could be brought up in Google Maps or Google Earth, or saved for use in other tools. TOXMAP also overlaid map data such as U.S. Census population information, income figures from the Bureau of Economic Analysis, and health data from the National Cancer Institute and the National Center for Health Statistics. The data shown in TOXMAP came from the following sources: EPA Toxics Release Inventory (TRI) EPA Superfund Program (National Priorities List/NPL) Environment Canada National Institute of Environmental Health Sciences (NIEHS) Superfund Research Program Hazardous Substances Data Bank NLM TOXLINE (Toxicology Bibliographic Information) Agency for Toxic Substances and Disease Registry National Atlas of the United States Surveillance, Epidemiology, and End Results database National Center for Health Statistics Nuclear Regulatory Commission Esri Shutdown The database was pulled from the internet by the Trump administration in December 2019.
The NLM said in a statement that much of the information remained available from the original sources, and that the database could thus be removed; critics, such as the Environmental Data & Governance Initiative, suggested it was part of a larger effort on the part of the administration to obfuscate the detrimental results of the rollback of Obama-era environmental regulations. The data underlying TOXMAP remains accessible through the original resources: Government of Canada National Pollutant Release Inventory (NPRI), U.S. Census Bureau, U.S. EPA Clean Air Markets Program, U.S. EPA Geospatial Applications, U.S. EPA Facilities Registry System (FRS), U.S. EPA Superfund Program, U.S. EPA Toxics Release Program (TRI), U.S. NIH NCI Surveillance, Epidemiology, and End Results Program (SEER), U.S. Nuclear Regulatory Commission (NRC). See also Hazardous Substances Data Bank Environmental health References Further reading Hochstein, Colette; Gemoets, Darren: Ten Years of Change: National Library of Medicine TOXMAP Gets a New Look. At: 2014. Hochstein, Colette; Gemoets, Darren: TOXMAP: Environmental Health Maps Now Powered by ArcGIS Server. At: 2011 Esri Federal User Conference. Hochstein, Colette; Gemoets, Darren: Recent Enhancements to TOXMAP, an Environmental Health GIS. At: 2008 Esri Federal User Conference. Hochstein C, Szczur M. TOXMAP: A GIS-based gateway to environmental health resources. In: Thomas C; Standards for Success: GIS for Federal Progress and Accountability. Redlands, CA: ESRI Press; 2006; pages 50–56. Szczur, Marti; Krahe, Chris; Hochstein, Colette: TOXMAP: A GIS Information Portal to Environmental Health Databases. At: 2004 Esri International User Conference. External links NLM TOXMAP NLM TOXNET NLM Environmental Health and Toxicology Pollution in the United States Toxicology United States Environmental Protection Agency Biochemistry databases Medical search engines American environmental websites Government-owned websites of the United States Hazardous waste Geographical databases in the United States
TOXMAP
[ "Chemistry", "Technology", "Biology", "Environmental_science" ]
1,138
[ "Biochemistry", "Biochemistry databases", "Toxicology", "Hazardous waste" ]
14,346,042
https://en.wikipedia.org/wiki/Double-stranded%20RNA%20viruses
Double-stranded RNA viruses (dsRNA viruses) are a polyphyletic group of viruses that have double-stranded genomes made of ribonucleic acid. The double-stranded genome is used as a template by the viral RNA-dependent RNA polymerase (RdRp) to transcribe a positive-strand RNA functioning as messenger RNA (mRNA) for the host cell's ribosomes, which translate it into viral proteins. The positive-strand RNA can also be replicated by the RdRp to create a new double-stranded viral genome. A distinguishing feature of the dsRNA viruses is their ability to carry out transcription of the dsRNA segments within the capsid, and the required enzymes are part of the virion structure. Double-stranded RNA viruses are classified into two phyla, Duplornaviricota and Pisuviricota (specifically class Duplopiviricetes), in the kingdom Orthornavirae and realm Riboviria. The two phyla do not share a common dsRNA virus ancestor, but evolved their double strands two separate times from positive-strand RNA viruses. In the Baltimore classification system, dsRNA viruses belong to Group III. Virus group members vary widely in host range (animals, plants, fungi, and bacteria), genome segment number (one to twelve), and virion organization (T-number, capsid layers, or turrets). Double-stranded RNA viruses include the rotaviruses, known globally as a common cause of gastroenteritis in young children, and bluetongue virus, an economically significant pathogen of cattle and sheep. The family Reoviridae is the largest and most diverse dsRNA virus family in terms of host range. Classification Two clades of dsRNA viruses exist: the phylum Duplornaviricota and the class Duplopiviricetes, which is in the phylum Pisuviricota. Both are included in the kingdom Orthornavirae in the realm Riboviria. Based on phylogenetic analysis of RdRp, the two clades do not share a common dsRNA ancestor but are instead separately descended from different positive-sense, single-stranded RNA viruses. In the Baltimore classification system, which groups viruses together based on their manner of mRNA synthesis, dsRNA viruses are group III. Duplornaviricota Duplornaviricota contains most dsRNA viruses, including reoviruses, which infect a diverse range of eukaryotes, and cystoviruses, which are the only dsRNA viruses known to infect prokaryotes. Apart from RdRp, viruses in Duplornaviricota also share icosahedral capsids that contain 60 homo- or heterodimers of the capsid protein organized on a pseudo T=2 lattice. The phylum is divided into three classes: Chrymotiviricetes, which primarily contains fungal and protozoan viruses, Resentoviricetes, which contains reoviruses, and Vidaverviricetes, which contains cystoviruses. Duplopiviricetes The class Duplopiviricetes is the second clade of dsRNA viruses and is in the phylum Pisuviricota, which also contains positive-sense single-stranded RNA viruses. Duplopiviricetes mostly contains plant and fungal viruses and includes the following four families: Amalgaviridae, Hypoviridae, Partitiviridae, and Picobirnaviridae. Notes on selected species Reoviridae Reoviridae are currently classified into nine genera. The genomes of these viruses consist of 10 to 12 segments of dsRNA, each generally encoding one protein. The mature virions are non-enveloped. Their capsids, formed by multiple proteins, have icosahedral symmetry and are arranged generally in concentric layers. 
Orthoreoviruses The orthoreoviruses (reoviruses) are the prototypic members of the virus family Reoviridae and representative of the turreted members, which comprise about half the genera. Like other members of the family, the reoviruses are non-enveloped and characterized by concentric capsid shells that encapsidate a segmented dsRNA genome. In particular, reovirus has eight structural proteins and ten segments of dsRNA. A series of uncoating steps and conformational changes accompany cell entry and replication. High-resolution structures are known for almost all of the proteins of mammalian reovirus (MRV), which is the best-studied genotype. Electron cryo-microscopy (cryoEM) and X-ray crystallography have provided a wealth of structural information about two specific MRV strains, type 1 Lang (T1L) and type 3 Dearing (T3D). Cypovirus The cytoplasmic polyhedrosis viruses (CPVs) form the genus Cypovirus of the family Reoviridae. CPVs are classified into 14 species based on the electrophoretic migration profiles of their genome segments. Cypovirus has only a single capsid shell, which is similar to the orthoreovirus inner core. CPV exhibits striking capsid stability and is fully capable of endogenous RNA transcription and processing. The overall folds of CPV proteins are similar to those of other reoviruses. However, CPV proteins have insertional domains and unique structures that contribute to their extensive intermolecular interactions. The CPV turret protein contains two methylase domains with a highly conserved helix-pair/β-sheet/helix-pair sandwich fold but lacks the β-barrel flap present in orthoreovirus λ2. The stacking of turret protein functional domains and the presence of constrictions and A spikes along the mRNA release pathway indicate a mechanism that uses pores and channels to regulate the highly coordinated steps of RNA transcription, processing, and release. Rotavirus Rotavirus is the most common cause of acute gastroenteritis in infants and young children worldwide. This virus contains a dsRNA genome and is a member of the Reoviridae family. The genome of rotavirus consists of eleven segments of dsRNA. Each genome segment codes for one protein with the exception of segment 11, which codes for two proteins. Among the twelve proteins, six are structural and six are non-structural proteins. It is a double-stranded RNA non-enveloped virus. When at least two rotavirus genomes are present in a host cell, the genome segments may undergo reassortment to form progeny viruses with new gene combinations, or they may undergo intragenic homologous recombination. Some pathogenic rotavirus lineages that infect humans appear to have evolved through multiple interspecies reassortment events. Intragenic homologous recombination also appears to be a significant driver of rotavirus diversity and evolution. Intragenic recombination may occur when the VP1 RNA-dependent RNA polymerase replicates part of one template strand before switching to another. Bluetongue virus The members of genus Orbivirus within the Reoviridae family are arthropod-borne viruses and are responsible for high morbidity and mortality in ruminants. Bluetongue virus (BTV), which causes disease in livestock (sheep, goat, cattle), has been at the forefront of molecular studies for the last three decades and now represents the best understood orbivirus at the molecular and structural levels.
BTV, like other members of the family, is a complex non-enveloped virus with seven structural proteins and an RNA genome consisting of 10 variously sized dsRNA segments. Phytoreoviruses Phytoreoviruses are non-turreted reoviruses that are major agricultural pathogens, particularly in Asia. One member of this family, Rice Dwarf Virus (RDV), has been extensively studied by electron cryomicroscopy and X-ray crystallography. From these analyses, atomic models of the capsid proteins and a plausible model for capsid assembly have been derived. While the structural proteins of RDV share no sequence similarity to other proteins, their folds and the overall capsid structure are similar to those of other Reoviridae. Saccharomyces cerevisiae virus L-A The L-A dsRNA virus of the yeast Saccharomyces cerevisiae has a single 4.6 kb genomic segment that encodes its major coat protein, Gag (76 kDa), and a Gag-Pol fusion protein (180 kDa) formed by a -1 ribosomal frameshift. L-A can support the replication and encapsidation in separate viral particles of any of several satellite dsRNAs, called M dsRNAs, each of which encodes a secreted protein toxin (the killer toxin) and immunity to that toxin. L-A and M are transmitted from cell to cell by the cytoplasmic mixing that occurs in the process of mating. Neither is naturally released from the cell or enters cells by other mechanisms, but the high frequency of yeast mating in nature results in the wide distribution of these viruses in natural isolates. Moreover, the structural and functional similarities with dsRNA viruses of mammals have made it useful to consider these entities as viruses. Infectious bursal disease virus Infectious bursal disease virus (IBDV) is the best-characterized member of the family Birnaviridae. These viruses have bipartite dsRNA genomes enclosed in single-layered icosahedral capsids with T = 13l geometry. IBDV shares functional strategies and structural features with many other icosahedral dsRNA viruses, except that it lacks the T = 1 (or pseudo T = 2) core common to the Reoviridae, Cystoviridae, and Totiviridae. The IBDV capsid protein exhibits structural domains that show homology to those of the capsid proteins of some positive-sense single-stranded RNA viruses, such as the nodaviruses and tetraviruses, as well as the T = 13 capsid shell protein of the Reoviridae. The T = 13 shell of the IBDV capsid is formed by trimers of VP2, a protein generated by removal of the C-terminal domain from its precursor, pVP2. The trimming of pVP2 is performed on immature particles as part of the maturation process. The other major structural protein, VP3, is a multifunctional component lying under the T = 13 shell that influences the inherent structural polymorphism of pVP2. The virus-encoded RNA-dependent RNA polymerase, VP1, is incorporated into the capsid through its association with VP3. VP3 also interacts extensively with the viral dsRNA genome. Bacteriophage Φ6 Bacteriophage Φ6 is a member of the Cystoviridae family. It infects Pseudomonas bacteria (typically plant-pathogenic P. syringae). It has a three-part, segmented, double-stranded RNA genome, totalling ~13.5 kb in length. Φ6 and its relatives have a lipid membrane around their nucleocapsid, a rare trait among bacteriophages. It is a lytic phage, though under certain circumstances it has been observed to display a delay in lysis, which may be described as a "carrier state".
Anti-virals Since cells do not produce double-stranded RNA during normal nucleic acid metabolism, natural selection has favored the evolution of enzymes that destroy dsRNA on contact. The best-known enzyme of this type is Dicer. It is hoped that broad-spectrum anti-virals can be developed that take advantage of this vulnerability of double-stranded RNA viruses. See also Animal virology List of viruses RNA virus TLR3 Virology Virus classification References Bibliography Animal virology Molecular biology RNA viruses
Double-stranded RNA viruses
[ "Chemistry", "Biology" ]
2,475
[ "Biochemistry", "Molecular biology" ]
14,346,064
https://en.wikipedia.org/wiki/Autoregressive%20conditional%20duration
In financial econometrics, an autoregressive conditional duration (ACD, Engle and Russell (1998)) model considers irregularly spaced and autocorrelated intertrade durations. ACD is analogous to GARCH. In a continuous double auction (a common trading mechanism in many financial markets) waiting times between two consecutive trades vary at random. Definition Let $\tau_i$ denote the duration (the waiting time between consecutive trades) and assume that $\tau_i = \psi_i \epsilon_i$, where the $\epsilon_i$ are independent and identically distributed random variables, positive and with $\operatorname{E}[\epsilon_i] = 1$, and where the series $\psi_i$ is given by $$\psi_i = \omega + \sum_{j=1}^{q} \alpha_j \tau_{i-j} + \sum_{j=1}^{p} \beta_j \psi_{i-j}$$ and where $\omega > 0$, $\alpha_j \geq 0$, $\beta_j \geq 0$, $\sum_{j=1}^{q} \alpha_j + \sum_{j=1}^{p} \beta_j < 1$. References Robert F. Engle and J.R. Russell. "Autoregressive Conditional Duration: A New Model for Irregularly Spaced Transaction Data", Econometrica, 66:1127-1162, 1998. N. Hautsch. "Modelling Irregularly Spaced Financial Data", Springer, 2004. Time series Mathematical finance
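A minimal numerical sketch of the ACD(1,1) special case of the recursion above (q = p = 1), assuming unit-mean exponential innovations, a common choice in the literature; the parameter values and function name are illustrative, not taken from the references:

```python
import numpy as np

def simulate_acd11(n, omega=0.1, alpha=0.2, beta=0.7, seed=0):
    """Simulate n durations from an ACD(1,1) model.

    tau_i = psi_i * eps_i, with psi_i = omega + alpha * tau_{i-1} + beta * psi_{i-1},
    where the eps_i are i.i.d. Exponential(1), so E[eps_i] = 1.
    """
    rng = np.random.default_rng(seed)
    eps = rng.exponential(1.0, size=n)        # positive, unit mean
    psi = np.empty(n)
    tau = np.empty(n)
    psi[0] = omega / (1.0 - alpha - beta)     # unconditional mean duration
    tau[0] = psi[0] * eps[0]
    for i in range(1, n):
        psi[i] = omega + alpha * tau[i - 1] + beta * psi[i - 1]
        tau[i] = psi[i] * eps[i]
    return tau, psi

tau, psi = simulate_acd11(10_000)
# Sample mean should approximate omega / (1 - alpha - beta) = 1.0
print(tau.mean())
```

With these parameters the stationarity condition holds (α + β = 0.9 < 1), so the simulated durations fluctuate around the unconditional mean ω/(1 − α − β).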
Autoregressive conditional duration
[ "Mathematics" ]
193
[ "Applied mathematics", "Mathematical finance" ]
14,346,088
https://en.wikipedia.org/wiki/Adduct%20purification
Adduct purification is a technique for preparing extremely pure simple organometallic compounds, which are generally unstable and hard to handle, by purifying a stable adduct formed with a Lewis base and then obtaining the desired product from the pure adduct by thermal decomposition. Epichem Limited is the licensee of the major patents in this field, and uses the trademark EpiPure to refer to adduct-purified materials; Professor Anthony Jones at Liverpool University is the initiator of the field and author of many of the important papers. The choice of Lewis base and of reaction medium is important; the desired organometallics are almost always air- and water-sensitive. Initial work was done in ether, but this led to oxygen impurities, and so more recent work involves tertiary amines or nitrogen-substituted crown ethers. References Professor Anthony C. Jones Purification of dialkylzinc precursors using tertiary amine ligands Chemical reactions Separation processes
Adduct purification
[ "Chemistry" ]
200
[ "Chemical reaction stubs", "nan", "Separation processes" ]
14,346,153
https://en.wikipedia.org/wiki/Steuart%20Pringle
Lieutenant General Sir Steuart Robert Pringle (21 July 1928 – 18 April 2013) was a Scottish Royal Marines officer who served as Commandant General Royal Marines from 1981 to 1985. He was seriously injured by an IRA car bomb attack in 1981, in which he lost his right leg. He was styled as the 10th Baronet of Stichill from 1961 to 2016, when a court accepted DNA evidence that established he was not the biological grandson of the 8th baronet. His cousin Murray Pringle inherited the baronetcy instead of Sir Steuart's eldest son and expected heir. Early life and education Pringle was born in Dover on 21 July 1928, the only child of Sir Norman Hamilton Pringle of Stichill, 9th Baronet (1903–1961), and his first wife, Winifred Olive Curran (died 1975). He was educated at Sherborne School. Military career Pringle joined the Royal Marines in 1946. He was appointed commanding officer of 45 Commando in 1971 and served a tour at Headquarters Commando Forces from 1974, during which he was promoted from lieutenant colonel to colonel. Promoted to major-general on 1 February 1978 (local major-general from 20 February 1978), he then became Major General Commando Forces. Pringle went on to be chief of staff to the Commandant General Royal Marines in 1979 and Commandant General Royal Marines in 1981. On 17 October 1981, he was injured by an IRA car bomb attached to his red Volkswagen outside his home in Dulwich, South London, as he went to take his pet black Labrador, Bella, to the park for a run. One of the first questions he asked was, "How's my dog?". Bella was unscathed, but Pringle lost his right leg in the incident and badly injured his left. As Commandant General of the Royal Marines, he was seen welcoming the Commandos home following the Falklands War. He was named BBC Pebble Mill Man of the Year for his "outstanding achievement and bravery". He later returned to duties, and retired in June 1984. Later life In retirement he became chairman and chief executive of the Chatham Historic Dockyard Trust. He died in London on 18 April 2013. Honours Pringle was appointed a Knight Commander of the Order of the Bath (KCB) in the 1982 Birthday Honours. He was awarded an Honorary DSc of City University London in 1982 and an Honorary LLD of Exeter University in 1994. He was also an Honorary Admiral of the Texas Navy. Personal life In 1953, Sir Steuart married Jacqueline Marie Gladwell, only daughter of Wilfrid Hubert Gladwell. They had two sons and two daughters. His eldest son, Simon, had been the heir apparent to the baronetcy. DNA case Norman Hamilton Pringle and his son Sir Steuart were recognised as the 9th and 10th Pringle Baronets of Nova Scotia, respectively, during their lifetimes; however, questions had been raised in the family as to whether Norman was the biological child of Sir Norman Robert Pringle, 8th Baronet (1871–1919). The 8th Baronet had married Florence Madge Vaughan on 16 October 1902, but she gave birth to Norman only seven months later, on 13 May 1903, leading to questions of legitimacy that were not resolved until more than a century later. In 2009, Sir Steuart agreed to DNA testing for a project launched by his first cousin Murray Pringle (born 1941), an accountant who was attempting to restore a clan chief to Clan Pringle, which has been an armigerous clan since 1737. 
The results indicated that Sir Steuart's paternal DNA was not consistent with that of other Pringles, but Murray heeded advice that the question of the legitimate claimant to the baronetcy should not be contested during Sir Steuart's lifetime. After he died in 2013, both Simon (Sir Steuart's eldest son) and Murray attempted to claim the baronetcy. In 2016, a court agreed that Murray Pringle was the rightful heir to the baronetcy instead of his first cousin once removed Simon, as DNA evidence demonstrated that Sir Steuart's father was not the biological son of Sir Norman Pringle, 8th Baronet. The 8th Baronet had two younger sons – Ronald Steuart (1905–1968), Murray Pringle's father, and James Drummond (1906–1960). Norman Hamilton was shown, with a "high degree of probability", to have been fathered by someone outside the Pringle clan, and Sir Steuart and his father were removed posthumously from the Official Roll of the Baronetage. Murray Pringle was declared the 10th Baronet and his father the de jure 9th Baronet. However, as a Knight Commander of the Order of the Bath, Sir Steuart remained entitled to the style "Sir". References 1928 births 2013 deaths Baronets in the Baronetage of Nova Scotia British amputees British military personnel of the Cyprus Emergency British military personnel of the Indonesia–Malaysia confrontation British military personnel of the Malayan Emergency British military personnel of the Suez Crisis Car bomb victims Explosion survivors Knights Commander of the Order of the Bath People educated at Sherborne School People of The Troubles (Northern Ireland) Royal Marines lieutenant generals Military personnel from Kent 20th-century Royal Marines personnel
Steuart Pringle
[ "Chemistry" ]
1,064
[ "Explosion survivors", "Explosions" ]
14,346,471
https://en.wikipedia.org/wiki/Th%C3%A9%C3%A2trophone
Théâtrophone ("the theatre phone") was a telephonic distribution system available in portions of Europe that allowed subscribers to listen to opera and theatre performances over the telephone lines. The théâtrophone evolved from a Clément Ader invention, which was first demonstrated in 1881 in Paris. Subsequently, in 1890, the invention was commercialized by the Compagnie du Théâtrophone, which continued to operate until 1932. Origin The origin of the théâtrophone can be traced to a telephonic transmission system demonstrated by Clément Ader at the 1881 International Exposition of Electricity in Paris. The system was inaugurated by the French President Jules Grévy, and allowed broadcasting of concerts or plays. Ader had arranged 80 telephone transmitters across the front of a stage to create a form of binaural stereophonic sound. It was the first two-channel audio system, and consisted of a series of telephone transmitters connected from the stage of the Paris Opera to a suite of rooms at the Paris Electrical Exhibition, where visitors could hear Comédie-Française and opera performances in stereo using two headphones; the Opera was located more than two kilometers away from the venue. In a note dated 11 November 1881, Victor Hugo describes his first experience of the théâtrophone as pleasant. In 1884, King Luís I of Portugal decided to use the system when he could not attend an opera in person. The director of the Edison Gower Bell Company, who was responsible for this théâtrophone installation, was later awarded the Military Order of Christ. The théâtrophone technology was made available in Belgium in 1884, and in Lisbon in 1885. In Sweden, the first telephone transmission of an opera performance took place in Stockholm in May 1887. The British writer Ouida describes a female character in the novel Massarenes (1897) as "A modern woman of the world. As costly as an ironclad and as complicated as theatrophone." The Théâtrophone service In 1890, the system became operational as a service under the name "théâtrophone" in Paris. The service was offered by the Compagnie du Théâtrophone (The Théâtrophone Company), which was founded by MM. Marinovitch and Szarvady. The théâtrophone offered theatre and opera performances to subscribers. The service can be called a prototype of the telephone newspaper, as it included five-minute news programs at regular intervals. The Théâtrophone Company set up coin-operated telephone receivers in hotels, cafés, clubs, and other locations, costing 50 centimes for five minutes of listening. Subscription tickets were also issued at a reduced rate, in order to attract regular patrons. The service was also available to home subscribers. The French writer Marcel Proust was a keen follower of the théâtrophone, as is evident from his correspondence. He subscribed to the service in 1911. Many technological improvements were gradually made to the original théâtrophone system. The Brown telephone relay, invented in 1913, yielded good results in amplifying the current. The théâtrophone finally succumbed to the rising popularity of radio broadcasting and the phonograph, and the Compagnie du Théâtrophone ceased its operations in 1932. Similar systems Similar systems elsewhere in Europe included Telefon Hírmondó (est. 1893) of Budapest and the Electrophone of London (est. 1895). In the United States, systems similar to the théâtrophone were limited to one-off experiments. 
Erik Barnouw reported a concert by telephone that was organized in the summer of 1890; around 800 people at the Grand Union Hotel in Saratoga listened to a telephonic transmission of The Charge of the Light Brigade conducted at Madison Square Garden. In fiction The Andrew Crumey novel Mr Mee (2000) has a chapter depicting the installation of a théâtrophone in the home of Marcel Proust. The Eça de Queiroz novel A Cidade e as Serras (1901) mentions the device as one of the many technological commodities available for the distraction of the upper classes. In his utopian science fiction novel Looking Backward (1888), Edward Bellamy predicted sermons and music being available in the home through a system like the théâtrophone. See also Cable radio Linjesender References External links Le Premier Medium Electrique De Diffusion Culturelle: Le Theatrophone De Clement Ader, "The First Electric Medium of Cultural Diffusion: The Théâtrophone of Clément Ader (1881)", in French A 1271x1551 image of a théâtrophone instrument from La collection de Jean-Louis Danièle Laster. Splendeurs et misères du théâtrophone (in French). 1881 establishments in France 1890 establishments in France 1932 disestablishments in France Products introduced in 1890 Products and services discontinued in 1932 French inventions Information by telephone Culture of France Telecommunications systems Telephony Telephone newspapers Music mass media Drama by medium
Théâtrophone
[ "Technology" ]
1,028
[ "Telecommunications systems" ]
14,346,663
https://en.wikipedia.org/wiki/Causality%20conditions
In the study of Lorentzian manifold spacetimes there exists a hierarchy of causality conditions which are important in proving mathematical theorems about the global structure of such manifolds. These conditions were collected during the late 1970s. The weaker the causality condition on a spacetime, the more unphysical the spacetime is. Spacetimes with closed timelike curves, for example, present severe interpretational difficulties. See the grandfather paradox. It is reasonable to believe that any physical spacetime will satisfy the strongest causality condition: global hyperbolicity. For such spacetimes the equations in general relativity can be posed as an initial value problem on a Cauchy surface. The hierarchy There is a hierarchy of causality conditions, each one of which is strictly stronger than the previous. This is sometimes called the causal ladder. The conditions, from weakest to strongest, are: Non-totally vicious Chronological Causal Distinguishing Strongly causal Stably causal Causally continuous Causally simple Globally hyperbolic Given below are the definitions of these causality conditions for a Lorentzian manifold $(M, g)$. Where two or more are given they are equivalent. Notation: $p \ll q$ denotes the chronological relation; $p \prec q$ denotes the causal relation. (See causal structure for definitions of $I^+(x)$, $I^-(x)$ and $J^+(x)$, $J^-(x)$.) Non-totally vicious For some points $p \in M$ we have $p \not\ll p$. Chronological There are no closed chronological (timelike) curves. Equivalently, the chronological relation is irreflexive: $p \not\ll p$ for all $p \in M$. Causal There are no closed causal (non-spacelike) curves. Equivalently, if both $p \prec q$ and $q \prec p$ then $p = q$. Distinguishing Past-distinguishing Two points which share the same chronological past are the same point: $I^-(p) = I^-(q) \implies p = q$. Equivalently, for any neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$ of $p$ such that no past-directed non-spacelike curve from $p$ intersects $V$ more than once. Future-distinguishing Two points which share the same chronological future are the same point: $I^+(p) = I^+(q) \implies p = q$. Equivalently, for any neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$ of $p$ such that no future-directed non-spacelike curve from $p$ intersects $V$ more than once. Strongly causal For every neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$ of $p$ through which no timelike curve passes more than once. Equivalently, for every neighborhood $U$ of $p \in M$ there exists a neighborhood $V \subset U$ of $p$ that is causally convex in $U$ (and thus in $M$). Equivalently, the Alexandrov topology agrees with the manifold topology. Stably causal For each of the weaker causality conditions defined above, there are some manifolds satisfying the condition which can be made to violate it by arbitrarily small perturbations of the metric. A spacetime is stably causal if it cannot be made to contain closed causal curves by any perturbation smaller than some arbitrary finite magnitude. Stephen Hawking showed that this is equivalent to: There exists a global time function on $M$. This is a scalar field $t$ on $M$ whose gradient $\nabla^a t$ is everywhere timelike and future-directed. This global time function gives us a stable way to distinguish between future and past for each point of the spacetime (and so we have no causal violations). Globally hyperbolic $M$ is strongly causal and every set $J^+(p) \cap J^-(q)$ (for points $p, q \in M$) is compact. Robert Geroch showed that a spacetime is globally hyperbolic if and only if there exists a Cauchy surface $S$ for $M$. This means that $M$ is topologically equivalent to $\mathbb{R} \times S$ for some Cauchy surface $S$ (here $\mathbb{R}$ denotes the real line). See also Spacetime Lorentzian manifold Causal structure Globally hyperbolic manifold Closed timelike curve References Lorentzian manifolds Theory of relativity General relativity Theoretical physics
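As a concrete worked example (standard material, not part of the article's own text), the relations above take an explicit form in Minkowski space, which satisfies every rung of the causal ladder:

```latex
% In Minkowski space (\mathbb{R}^{1,3}, \eta) with \eta = \mathrm{diag}(-1,+1,+1,+1),
% the chronological and causal relations reduce to inequalities on coordinates:
\begin{align*}
  p \ll q   &\iff q^0 - p^0 > \lVert \vec{q} - \vec{p} \rVert
             && \text{($q - p$ is future-directed timelike)},\\
  p \prec q &\iff q^0 - p^0 \geq \lVert \vec{q} - \vec{p} \rVert
             && \text{($q - p$ is future-directed causal)}.
\end{align*}
% Minkowski space is globally hyperbolic: t = x^0 is a global time function
% (its gradient is everywhere timelike and future-directed), and each hypersurface
% \{x^0 = \text{const}\} is a Cauchy surface S, exhibiting the topological
% splitting M \cong \mathbb{R} \times S guaranteed by Geroch's theorem.
```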
Causality conditions
[ "Physics" ]
680
[ "General relativity", "Theoretical physics", "Theory of relativity" ]
14,346,669
https://en.wikipedia.org/wiki/Relativistic%20speed
Relativistic speed refers to a speed at which relativistic effects become significant to the desired accuracy of measurement of the phenomenon being observed. Relativistic effects are the discrepancies between values calculated by models that consider relativity and those that do not. Related terms are velocity, rapidity, and celerity (proper velocity). Speed is a scalar, being the magnitude of the velocity vector, which in relativity is the four-velocity and in three-dimensional Euclidean space a three-velocity. Speed is empirically measured as average speed, although current devices in common use can estimate speed over very small intervals and closely approximate instantaneous speed. Non-relativistic discrepancies include cosine error, which occurs in speed-detection devices when only one scalar component of the three-velocity is measured, and the Doppler effect, which may affect observations of wavelength and frequency. Relativistic effects are highly non-linear, and for everyday purposes they are insignificant because the Newtonian model closely approximates the relativity model. In special relativity the Lorentz factor is a measure of time dilation, length contraction and the relativistic mass increase of a moving object. See also Lorentz factor Relative velocity Relativistic beaming Relativistic jet Relativistic mass Relativistic particle Relativistic plasma Relativistic wave equations Special relativity Ultrarelativistic limit References Speed Velocity
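A small numerical sketch of the point about measurement accuracy: the Lorentz factor γ = 1/√(1 − v²/c²) grows non-linearly with speed, so the speed that counts as "relativistic" depends on how much deviation from the Newtonian value (γ = 1) an experiment can resolve. The speeds and labels below are illustrative only:

```python
import math

C = 299_792_458.0  # speed of light in m/s (exact, by definition of the metre)

def lorentz_factor(v):
    """Compute gamma = 1 / sqrt(1 - v^2/c^2) for a speed v in m/s."""
    beta = v / C
    return 1.0 / math.sqrt(1.0 - beta * beta)

# Deviation of gamma from 1 at a few speeds: relativistic effects are
# negligible at everyday speeds and become dominant near c.
for label, v in [("car, 30 m/s", 30.0),
                 ("airliner, 250 m/s", 250.0),
                 ("orbital, 7.8 km/s", 7.8e3),
                 ("10% of c", 0.1 * C),
                 ("90% of c", 0.9 * C)]:
    g = lorentz_factor(v)
    print(f"{label:>20}: gamma - 1 = {g - 1.0:.3e}")
```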
Relativistic speed
[ "Physics" ]
288
[ "Physical phenomena", "Physical quantities", "Special relativity", "Motion (physics)", "Relativity stubs", "Vector physical quantities", "Theory of relativity", "Velocity", "Wikipedia categories named after physical quantities" ]
14,347,649
https://en.wikipedia.org/wiki/Fistmele
Fistmele, also known as the "brace height", is a term used in archery to describe the distance between a bow and its string. The term itself is a Saxon word (the suffix -mele referring to the old form of the archaic sense of meal as "measure") indicating the measure of a clenched hand with the thumb extended. Different brace heights may be obtained from the same length of string by twisting it around before affixing it to the bow. A proper height helps to reduce noise upon the release of an arrow and vibrations in the bow itself. Consequently, if the distance is too small, excess noise and poor arrow flight are the results. A bow is said to be "overstrung" when this distance is exceeded. See also Archery Bow string Archery Trade Association standards References Archery Units of length Human-based units of measurement
Fistmele
[ "Mathematics" ]
171
[ "Quantity", "Units of measurement", "Units of length" ]
14,347,709
https://en.wikipedia.org/wiki/USP26
USP26 is a peptidase enzyme. The USP26 gene is an X-linked gene exclusively expressed in the testis, and it codes for the ubiquitin-specific protease 26. The USP26 gene is found at Xq26.2 on the X chromosome as a single exon. The enzyme that this gene encodes comprises 913 amino acid residues and is 104 kilodaltons in size; it is transcribed from a sequence of 2794 nucleotide base pairs on the X chromosome. The USP26 enzyme is a deubiquitinating enzyme that plays a very significant role in the regulation of protein turnover during spermatogenesis. It is a testis-specific enzyme that is solely expressed in spermatogonia and can prevent the degradation of ubiquitinated USP26 substrates. Recent research has suggested that defects in USP26 may be involved in some cases of male infertility, specifically Sertoli cell-only syndrome, and an absence of sperm in the ejaculate (azoospermia). See also Male infertility References External links
USP26
[ "Chemistry", "Biology" ]
236
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
14,348,526
https://en.wikipedia.org/wiki/Hydraulic%20tomography
Hydraulic tomography (HT) is a sequential cross-hole hydraulic test followed by inversion of all the data to map the spatial distribution of aquifer hydraulic properties. Specifically, HT involves installation of multiple wells in an aquifer, which are partitioned into several intervals along their depth using packers. A sequential aquifer test at selected intervals is then conducted. During the test, water is injected or withdrawn (i.e. a pressure excitation) at a selected interval in a given well. Pressure responses of the subsurface are then monitored at other intervals in this well and also in other wells. This test produces a set of pressure excitation/response data for the subsurface. Once a given test has been completed, the pump is moved to another interval and the test is repeated to collect another set of data. The same procedure is then applied to the intervals at other wells. Afterward, the data sets from all tests are processed by a mathematical model to estimate the spatial distribution of hydraulic properties of the aquifer. These pairs of pumping and drawdown data sets at different locations make the inverse problem better posed, because each pair cross-validates the others, reducing the non-uniqueness of the estimates. In other words, predictions of groundwater flow based on the HT estimates will be more accurate and less uncertain than those based on estimates from traditional site-characterization approaches and model calibrations. References https://web.archive.org/web/20071201142040/http://tian.hwr.arizona.edu/yeh/index.html http://tian.hwr.arizona.edu/research/HT/examples Hydrology
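A deliberately simplified sketch of the final inversion step described above, assuming the forward model linking hydraulic properties to drawdown responses has already been linearized into a sensitivity matrix G (in real HT the relation is nonlinear and is solved iteratively with geostatistical regularization; every name and value here is a hypothetical toy):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: m_true holds log-hydraulic-conductivity in 50 grid cells; each of
# the 200 rows of G maps the property field to one observed drawdown from one
# excitation/response pair collected during the sequential tests.
n_cells, n_obs = 50, 200
m_true = rng.normal(0.0, 1.0, n_cells)
G = rng.normal(0.0, 1.0, (n_obs, n_cells))
d = G @ m_true + rng.normal(0.0, 0.05, n_obs)   # noisy drawdown data

# Tikhonov-regularized least squares: jointly inverting all test data sets is
# what makes the estimate less non-unique than single-test calibration.
lam = 0.1
m_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ d)

print("RMS error of the estimated property field:",
      float(np.sqrt(np.mean((m_hat - m_true) ** 2))))
```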
Hydraulic tomography
[ "Chemistry", "Engineering", "Environmental_science" ]
354
[ "Hydrology", "Hydrology stubs", "Environmental engineering" ]
14,348,562
https://en.wikipedia.org/wiki/Olivetti%20typewriters
Olivetti is an Italian manufacturer of computers, tablets, smartphones, printers, calculators, and fax machines. It was founded as a typewriter manufacturer by Camillo Olivetti in 1908 in the Turin commune of Ivrea, Italy. By 1994, Olivetti stopped production of typewriters, as more and more users were transitioning to personal computers. Mechanical models M1 (1911) Until the mid-1960s, the Olivetti typewriters were fully mechanical. Introduced at the World Fair in Turin in 1911, the first Olivetti typewriter, the M1, was made of about 3000 hand-made parts and weighed 17 kg. It was the first Italian typewriter; it had a keyboard of 42 keys corresponding to 84 signs and a 33-cm paper roll allowing for 110 characters, and featured a two-colored ribbon, automatic reverse direction, and a return key. Heavy and massive, it was intended for professional use in offices. M20 (1920) In 1920 the M1 was replaced by a new model, the M20. It featured several innovations, including a carriage running on a fixed guideway. Unlike the M1, which was essentially sold in Italy, it was exported to many European and non-European markets. M40 (1930) To update the M20, Olivetti developed a new model, the M40, which came out in 1930 and remained in production until 1948. A second version came out in 1937 and another one in the 1940s. Customers particularly appreciated the fixed-guide carriage, the lightness of touch of the keyboard and the speed of writing. MP1 (1932) In 1932, Olivetti presented a portable typewriter shortly after the launch of the M40: the MP1 (Modello Portatile in Italian). Conceived by Gino Martinoli and Adriano Olivetti, engineered by Riccardo Levi, and designed by Aldo and Adriano Magnelli, it was intended for both office and domestic use. It weighed only 5.2 kilos, compared to the 17 kilos of the M1, and measured 11.7 centimetres high, half the height of the M1. The mechanism was partly concealed by the body, and the monumental vertical structure of the M1 had been flattened and lightened. In addition to the black colour of the M1 and M20, the MP1 was offered in red, blue, light blue, brown, green, grey, and ivory. Studio 42 (1935) Also known as the M2, it was designed in 1935 by Luigi Figini and Gino Pollini, Ottavio Luzzati and Xanti Schawinsky. It is characterized by the various colors available: in addition to the classic black, it was also available in red, gray, brown and light blue. The keyboard is the QZERTY type, as was usual for Italian machines (apart from modern computer keyboards). In addition to the writing keys, the keyboard includes a space bar, two shift keys, a shift lock, a return key, and a tab key. The set of writing keys has an obvious omission: there is no key for the number 1, which is obtained by using the lowercase letter l (L) or the capital I (i); likewise, there is no zero, which is obtained by typing the capital O (o). Although this may seem strange today, it was quite common in old typewriters. There is also a portable version, with the machine fixed on a wooden base covered with black imitation leather and a removable protective cover also made of wood, with a black leather carrying handle and a chrome lock. Lexikon 80 (1948) Also known as the M80, the Lexikon 80 was designed by Marcello Nizzoli in 1948, and was a huge success for Olivetti and Hispano-Olivetti. It was the most sold typewriter in the world, and has a distinctive style in color, shape, and structural strength. 
This machine was first released as the M80, featuring a keyboard with round keys that had a thin metal rim around the circumference, recalling many older models, like the M40. Despite the M80 series being a standard machine, it came only with the short 10" carriage, which fulfilled common purposes, much like a portable machine. The M80 was shortly afterwards renamed the Lexikon 80. The Olivetti Lexikon 80 was indeed the most manufactured typewriter in the world (the M80 itself is very rare). The first series came in a subtle olive green colour, the second series in grey, and the third series in blue. The Lexikon 80 had a few changes in contrast to its previous version, the M80: it had keys fully made out of plastic, it was slightly bigger in size, it began featuring decimal tabulators, and it had a wide variety of dismountable carriages. While dismountable carriages are featured in most standard typewriters – usually removable with a pair of screws, or with the pull of two latches for very easy access, as on the Olympia-Werke SG1 – the M80 had its carriage completely welded to the chassis, and it could only be removed with a special key that was given only to authorised servicemen. Dismountable carriages vary in size, anywhere from 9 or 10 inches up to 30 inches. While they are not actually meant to be dismounted and replaced in the first place, dismountable carriages are just an easy shortcut for servicemen to access the insides of the machine and, if necessary, replace the carriage. A machine's carriage must not be replaced with one of a different length. Carriages that went over 25 inches required the typewriter body to have supports on each side to prevent the machine from tipping over when the carriage was moved to one extreme or the other. Studio 42 (1950) Lettera 22 (1950) The Olivetti Lettera 22 is a portable mechanical typewriter designed by Marcello Nizzoli in 1949 or, according to the company's current owner Telecom Italia, 1950. This typewriter was very popular in Italy, and it still has many fans. It was awarded the Compasso d'Oro prize in 1954. In 1959 the Illinois Institute of Technology chose the Lettera 22 as the best design product of the previous 100 years. The typewriter is sized about 27x37x8 cm (with the carriage return lever adding about 1–2 more centimeters in height), making it quite portable at least by the standards of the time, even though its weight may limit portability somewhat. The model was eventually succeeded by the Olivetti Lettera 32. Studio 44 (1952) Diaspron 82 (1959) Lettera 10 (1979) Lettera 32 (1963) The Olivetti Lettera 32 is a portable mechanical typewriter designed by Marcello Nizzoli (with Adriano Menicanti and Natale Capellaro) in 1963 as the successor of the popular Olivetti Lettera 22. The Lettera 32 was also very popular amongst writers, journalists and students. The typewriter is sized about 34x35x10 cm (with the carriage return lever adding about 1–2 centimeters in height), making it portable at least by the standards of the time, even though its 5.9 kg weight may limit portability somewhat. The Lettera 32 did not come with a manual but with an instruction card. Mechanics The Lettera 32 is a downstrike typebar typewriter. 
The typebars strike a red/black inked ribbon, which is positioned between the typebar and the paper by a lever whenever a key is pressed; a small switch located near the upper right side of the keyboard can be used to control the strike position of the ribbon, in order to print with black, red, or no ink (for mimeograph stencils). Ribbon movement, which also occurs at every keypress, automatically reverses direction when there is no ribbon left on the feed reel; two mechanical sensors, situated next to each wheel, move when the ribbon is put under tension (indicating ribbon end), attaching the appropriate wheel to the ribbon transport mechanism and detaching the other. Its mechanical components were used as the basis for the Valentine model. Keyboard The keyboard uses QWERTY, AZERTY and various other layouts. Apart from the typing keys, the keyboard includes a space bar, two shift keys, a caps lock, a backspace key, a margin release key, a paragraph indentation key and a tab-stop set/unset key. As was common in older typewriters, it lacks the number 1 (though the Olivetti Lettera 10 has it), which is supposed to be substituted by the lowercase l. Popular culture Cormac McCarthy used an Olivetti Lettera 32 to write nearly all of his fiction, screenplays, and correspondence, totalling by his estimate more than 5 million words. The Lettera 32 that he purchased in 1963 was auctioned at Christie's on December 4, 2009, to an unidentified American collector for $254,500, more than 10 times its high estimate of $20,000. McCarthy paid $11 for a replacement typewriter of the same model, but in newer condition. Francis Ford Coppola used an Olivetti Lettera 32 to write the screenplay for the 1972 motion picture The Godfather, which he also directed. Subsequent models From then on, the technology of the hand-held portables tended to stabilize. The mechanics of the Lettera 32 were therefore retained as the basis of the subsequent models: the Olivetti Dora and Lettera De Luxe (1965), Lettera 25 and 35 (1974), Lettera 10 and 12 (1979) and 40/41/42 and 50/51/52 (1980) differ mainly in design. Dora (1965) Lettera DL (1965) Studio 45 (1967) Lettera 25 (1972) Lettera 35 (1972) The Olivetti Lettera 35 is a portable mechanical typewriter created in 1972 by Mario Bellini and released to the public in 1974. More than 10 years after the Lettera 32, Olivetti felt the need to renew the design of its portable typewriters. Thus, the Lettera 35 was launched. Unlike the Lettera 22 and 32, which maintain a simple and essential style, the Lettera 35 features a robust design to create the image of a professional machine, recalling the Lettera 36, an electric typewriter released in 1970. Mechanics With the same mechanics as the Lettera 32, the Olivetti Lettera 35 is a typewriter with pressure writing levers. Each time a key is pressed, the corresponding typebar is driven through the linkage against the ribbon with red or black ink, behind which sits the sheet of paper, on which the corresponding symbol is thus imprinted. A lever located at the top right of the keyboard can be used to control the position of the ribbon and select printing in black, red or without ink (for copies with carbon paper or for the preparation of ink stencils for the mimeograph). The ribbon winds with each key press and automatically changes winding direction when one of the two spools on which it is wound is exhausted. 
Two mechanical sensors located near each spool move when the ribbon stretches (this indicates that it is finishing) and reverse its winding direction. Keyboard The original Italian version used the QZERTY keyboard, although versions with different key arrangements were produced to correspond with other languages. There were 43 alphanumeric keys, giving 86 characters in total. Other than these, the keyboard had a space bar, two shift keys for uppercase letters, a caps lock key, a margin release lever for going beyond the set margins, a key for backspacing, a tab-set lever, and a (red) tab key. The set of characters available has obvious shortcomings: there is no key for the number 1, which is obtained by using the lowercase letter l (L) or the capital I (i); there are no keys for the accented uppercase vowels used in the Italian language, which were replaced by normal letters followed by the apostrophe. This type of solution was quite common in the typewriters of the time. Valentine (1969) The Olivetti Valentine is a portable, manual typewriter noted for its typically red ABS plastic bodywork and matching red case. Its mechanical components are derived from the Lettera 32. Despite being an expensive, functionally limited and somewhat technically mediocre product which failed to find success in the marketplace, the Valentine ultimately became a celebrated icon, largely on account of its expressive design and practicality. It was awarded the Compasso d'Oro prize in 1970. The fame of the design was such that late in his career, the designer Ettore Sottsass would lament "I worked sixty years of my life, and it seems the only thing I did is this fucking red machine." The Valentine is featured in the Metropolitan Museum, the Museum of Modern Art, the Cooper Hewitt, Smithsonian Design Museum, London's Design Museum, and the Victoria and Albert Museum. In 2016, David Bowie's Valentine typewriter was sold at auction by Sotheby's in London for £45,000 (US $57,000). Studio 46 (1973) Electromechanical models The Editor series was used for speed-typing championship competition. The Editor 5 from 1969 was the top model of that series, with proportional spacing and the ability to support justified text borders. In 1972 the electromechanical typeball machines of the Lexikon 90 to 94C series were introduced as competitors to the IBM Selectric typewriters; the top model, the 94C, supported proportional spacing and justified text borders like the Editor 5, as well as lift-off correction. Lexikon 80e Praxis 48 (1964) Based on the Tekne 1 but with a radically changed design by Ettore Sottsass. Tekne 1 (1962) The Tekne and later Editor series introduced Olivetti's own solution to drive the type lever. While other manufacturers based their drive on a rotating roller, rubber-coated or with teeth, Olivetti designed an up-and-down swinging flag drive which pulls the type lever towards the paper roller. The movement of the type lever was thus under full control, which made it possible to implement double-strike protection, based on the inertia of the type lever and the force applied by the swinging flag, to protect the types from damage when the typist unintentionally hits two or more keys at once. The Tekne 1 was the base model of the series. Tekne 2 Tekne 3 Editor 2 Renamed model from the Tekne 2. Editor 3 Initially a renamed version of the Tekne 3; later production derived from the simplified Editor 4. Both versions of the Editor 3, the old flat one and the later taller one, were used as the alphanumeric printer in Olivetti's P203 computer. 
Editor 4 The Editor 4 was alternatively available with textile or carbon ribbon. A special version of it was the Editor 4ST, which was used by Olivetti as an alphanumeric printer for some of their first personal computers, like the P506 and P652. The typewriter was controlled by the computer through additional electromechanical components at the keyboard levers at the bottom of the machine. Editor 5 (1969) The E5 was the top model of the Editor series. Based on the E4, it has a carbon ribbon, proportional spacing and block-aligned text support. The combination of proportional spacing and block-aligned text is a challenge for the typist, as every line has to be written twice: the first time without printing, while a mechanical display counts the words in the line, and in the second pass the display shows how many spaces of 2 or 3 elementary steps still have to be filled into the line to get a perfect result. To type spaces of 2 or 3 elementary steps, the space bar is divided into two halves. Lexikon 90 (1972) The Lexikon 9x series was Olivetti's quite successful attempt to follow IBM's Selectric series in offering exchangeable fonts on a typeball. To avoid IBM's patents Olivetti made everything different: the typeball rotates vertically rather than horizontally, and there is no moving print head; instead the paper carriage moves, which also made it possible to take over some mechanics of the Editor series. The typeball was controlled not by wire ropes, which made the Selectric hard to adjust for reliable operation, but by rotatable axles, which made the Lexikon series very reliable and robust. The national keyboard layouts and typeballs were customizable via a mechanical 'ROM' based on metal flags inside the keyboards, where in production or maintenance single teeth could be broken out to change the layout according to tables in the service manual. The angle through which the typeball rotated depended on this coding, using levers whose pivot point moved when a key was pressed. The Lexikon 90 was the base model of the series. Lexikon 92 (1972) This is an upgraded version of the Lexikon 90, offering a choice of different pitches. Lexikon 93c (1972) An upgraded version of the Lexikon 92 with an additional correction ribbon. With this model, already-written text could be deleted semi-manually: pressing a delete key backspaces the carriage, and the typist then presses the character to be deleted; instead of lifting the black ribbon, the machine lifts the correction tape to cover up or lift off the character on the paper. Lexikon 94c (1972) This was the top model of the series, providing proportional spacing and block-aligned text similar to the Editor 5. Lettera 36 (1974) Portable compact electric type-lever machine, produced in the GDR for Olivetti. The Lettera 36 had a bad reputation, as its mechanics were quite unreliable, but because of its affordable price it was quite successful. There were two different keyboard designs. Lexikon 82 (1976) Portable compact electric typeball machine, successor of the Lettera 36, implementing the IBM Selectric 'golfball' mechanism after its patent ended. Electronic and office models In 1978 Olivetti was one of the first manufacturers to introduce electronic daisywheel printer-based word processing machines, called TES 401 and TES 501. Later the ET series typewriters, with or without an LCD and with different levels of text editing capabilities, were popular in offices. 
Models in that line were the ET 121, ET 201, ET 221, ET 225, ET 231, ET 351, ET 109, ET 110, ET 111, ET 112, ET 115, ET 116, ET 2000, ET 2100, ET 2200, ET 2250, ET 2300, ET 2400 and ET 2500. Electronic and portable models For home users, in 1982 the Praxis 35, Praxis 40 and Praxis 45D were some of the first portable electronic typewriters. Later, Olivetti added the Praxis 20, ET Compact 50, ET Compact 60, ET Compact 70, ET Compact 65/66, the ET Personal series and the Linea 101. The top models were portables with an 8-line LCD, like the Top 100 and Studio 801, with the ability to save the text to a 3.5-inch floppy disk. Video typewriters The professional line was upgraded with the ETV series video typewriters: the ETV 240, ETV 250, ETV 300 and ETV 350, based on the CP/M operating system, and the later MS-DOS-based ETV 260, ETV 500, ETV 2700, ETV 2900 and ETV 4000s word processing systems, having floppy drives or hard disks. Some of them (ETV 300, 350, 500, 2900) were external boxes that could be connected through an optional serial interface to many of the ET series office typewriters; the others were fully integrated, with an external monitor which could be installed on a holder over the desk. Most of the ET/ETV/Praxis series electronic typewriters were designed by Mario Bellini. ETV 300 (1982) The ETV 300 word processor is a Zilog Z80-based box running the CP/M operating system. It was connected over a serial interface to an ET 121 or ET 225 office typewriter, which was used as the keyboard and daisywheel printer. The 5.25-inch single-sided boot diskette starts directly into the word-processing software, called MWP. Optionally it was available with a second floppy drive to store documents on. Both floppy drives are 40-track, with 160 kB capacity. A rare version is the diskless ETV 300, which had the operating system and MWP in ROM and stored documents in battery-buffered SRAM. The 80x25-character green monochrome monitor, with a 12-inch diagonal, can stand on the ETV 300 box. ETS 1010 (1982) ETV 350 (1984) The ETV 350 is the successor of the ETV 300. The box was a bit more compact and, instead of one or two 5.25-inch drives, it used the more modern 3.5-inch single-sided diskettes with 320 kB capacity. Usually the ETV 350 was delivered with an ET 111 or ET 115 as keyboard and printer. ETS 2010 (1984) ETV 240 (1984) The ETV 240 is a typewriter-integrated design. On the base board it has two computers: a simpler one, based on a NEC 7801 CPU, to control the keyboard and the ET 115-based daisywheel printer, and a Z80-based one running CP/M 2.2 and the MWP word processor software from ROM. The base configuration is diskless, storing all saved documents in battery-buffered 32 kB RAM. This configuration can be enhanced by adding one 360 kB single-sided 3.5-inch floppy drive supporting the same disk and document format as the ETV 350. The 80x25-character, 12-inch green CRT monitor can optionally be mounted on a pivotable arm. Like the ET 111 to 116, the print head also has an optical sensor to detect the type/pitch and nationality of the daisywheel. ETV 250 (1984) This is an enhanced version of the ETV 240. Instead of a ROM-based design, the ETV 250 boots CP/M 2.2 and the MWP word processor from a 3.5-inch floppy. Optionally the ETV 250 can have two floppy drives. For the ETV 240 and 250, sprockets and an automatic sheet feeder could be added. The ETV 250 could be enhanced with serial interface cartridges to support a serial interface or an external 5.25-inch floppy drive for data exchange with some ET series typewriters and the ETV 300. 
There was a special serial interface to run the ETV 250 on the Teletex network as a sender and receiver of messages, similar to a teletype; this was the so-called ETV 250TTX. As on the ETV 300, the word processor could be exited to run standard CP/M software on this video typewriter. ETV 260 (1986) This video typewriter is based on the Olivetti M19 mainboard and the ET 116 office typewriter's daisywheel printer. As it has an Intel 8088 CPU at 8 MHz (the M19 runs at only 4.77 MHz), 640 kB RAM and CGA graphics, it runs MS-DOS and its applications. The word processing software SWP is based on Olivetti-branded MS-DOS 3.20. SWP supports the inclusion of tables, which could also be used to print simple line-based graphics in a document. Computer and printer are integrated in one box; like on the ETV 240/250, the monitor can be installed on an arm, and the keyboard is similar to IBM's XT layout. As mass storage it uses two 3.5-inch 720 kB floppy drives, or one floppy drive plus a 20 MB hard disk. The daisywheel printer is fully software-driven by a config.sys driver running on the 8088 CPU and behaves like an external printer on LPT1:. The printer driver also contains a hotkey function which runs under any MS-DOS software to write directly on the printer like a normal typewriter, including a correction function. The print head has an optical paper sensor to automatically detect the left margin of the paper and align printing on the paper; it prints bidirectionally at 35 cps, which is quite fast for a daisywheel printer. Like the ET 111 to 116, the print head also has an optical sensor to detect the type/pitch and nationality of the daisywheel. Optionally, there were a sprocket and an automatic sheet feeder available. The SWP word processor software is able to read MWP documents of the ETV 350, 240, 250 and 210s from their proprietary CP/M-based disk format. Optionally, there was an external 360 kB 5.25-inch floppy drive called the DU 260 to support diskettes of the ETV 300 for document import. ETV 500 (1986) This was basically a rebadged Olivetti M19 personal computer. The difference is that the 8088 CPU runs at 8 instead of 4.77 MHz, and instead of 5.25-inch floppy drives it uses 720 kB 3.5-inch ones. The client had the choice of using the same XT-layout keyboard as on the ETV 260 or the keyboard of a serially connected ET 112 or ET 116 typewriter. Typically the connected typewriter was also used as the printer, or some of Olivetti's own dot-matrix or daisywheel printers could be used. ETV 210s (1987) With its thermal transfer print head, based on IBM's Quietwriter patents, combined with the printer chassis of the ET 116, the ETV 210s was announced as the future of the typewriter. The ETV 210s runs on a Zilog Z80-compatible CPU from Hitachi, running CP/M and the word processor from ROM. The PC-style external keyboard is quite special, as it contains an 80x5-character alphanumeric LC display for writing the documents. Otherwise its functionality is similar to the ETV 240, supporting the same file format in battery-buffered SRAM or optionally on one 3.5-inch 720 kB floppy drive, which is also able to read disks from the ETV 240, 250 and 350. ETV 2700 (1988) The ETV 2700 is the low-cost successor of the ETV 260, based on the ET 2000 series printer mechanics. The machine is a fully integrated, NEC V40-based PC, with the electronics to control the printer on one small-footprint mainboard. Even the keyboard is directly attached to the machine. The system runs MS-DOS and the SWP word processor software, which is document-compatible with the ETV 260 / 3000 SWP. 
The external black-and-white CGA-compatible monitor can be installed on a pivotable arm. As mass storage there could be one 720 kB MS-DOS-compatible floppy drive, or two of them, or one of them plus a 20 MB XTA-interface hard disk. ETV 2900 (1988) VM 2000 (1989) This is the same as the ETV 2900; the only difference is a modified BIOS to prevent the user from booting from standard MS-DOS diskettes, so the VM 2000's own diskettes always have to be used. This prevented the user from running other MS-DOS-based applications on the VM 2000. This is explained by employment laws that distinguished between typewriter and PC workers. ETV 3000 (1988) This is mostly a renamed ETV 260; the only difference is that it has a different video connector to support the black-and-white CRT monitor of the ETV 2700. ETV 4000s (1989) The ETV 4000s is similar to the ETV 500, but its base is not the M19 PC but the M290. With an Intel 80286 CPU, this word processing system ran Windows 2.0 with the ETV 4000s word processing software. The standard printer was the TH 700, which is more or less a keyboard- and floppy-drive-less version of the ETV 210s. ETV 5000 (1988) CWP 1 (1988) Editor 100 (1990) Jetwriter 910 (1992) By 1994, Olivetti stopped production of typewriters, as more and more users were transitioning to personal computers. See also Olivetti computers References External links Olivetti, storia di un'impresa, official site. Il design dei prodotti Olivetti, the history of Olivetti design. Olivetti typewriters Articles containing video clips Typewriters Italian design Industrial design
Olivetti typewriters
[ "Engineering" ]
5,859
[ "Industrial design", "Design engineering", "Design" ]
15,849,543
https://en.wikipedia.org/wiki/Donkey%20stone
A donkey stone was a type of scouring block, used mostly in the mill towns of the North of England to highlight the leading edge of stone steps. Etymology The 'donkey brand' was originally the trade mark of a Manchester company called Edward Read & Son, who were one of several makers of the stones. Other companies used other animal designs or simple lettering, but the name 'donkey stone' stuck. In parts of Greater Manchester, the practice of using donkey stone was described as 'brownstoning the step'. Usage Donkey stones were first used in textile mills to clean greasy steps and give them a non-slip finish; however, the stones also became popular with housewives, who would use them to give doorsteps and flagged floors a decorative finish. After mopping, a damp donkey stone would be rubbed around the outside edge of a flagged stone floor or along the leading edge and sides of a stone door-step. When skilfully applied, the dried residue would give a neat contrasting border or line. It was not very durable and would have to be refreshed on a regular basis. Quite often the stones would be given out in exchange for old clothes or scrap metal by rag totters, or rag-and-bone men as they were sometimes called. Manufacture Donkey stones were made from a mixture of pulverised stone, cement, bleach powder and water. The mixture was ground up into a thick paste and then formed into a rectangular slab on a bench. The slab was then cut up to form the individual stones. The finished stones were then placed on racks to dry, usually for several days, although sometimes the drying process would take longer if the weather was cold and damp. Donkey stones were made in three different colours: brown, using a type of sandstone called cotta stone from Northampton; white, using a type of stone from Appley Bridge quarry near Wigan; and cream, using a blend of the two. Decline The use of donkey stones gradually died out during the 1950s and 60s. The last big manufacturer of the stones was a company called Eli Whalley, founded in the 1890s in Ashton-under-Lyne, which ceased trading in 1979. Some of that company's old machinery is preserved at the town's Portland Basin Industrial Museum, and a blue plaque commemorates the site of the old works at Donkey Stone Wharf on the canal. Eli Whalley's stones were sold under the Lion Brand trade mark, the design of which was based on a photograph of a live specimen at Belle Vue Zoo. Another manufacturer was also based on Donkey Stone Wharf; they were called J. Meakin and Sons and made the "Pony Brand" donkey stone. They were in operation until the late 1960s. Donkey stones are still sold in certain northern markets and towns, their production being continued in Colne, Lancashire by Chris Fawcett. References External links Donkey Stone history A Tribute to Eli Whalley Eli Whalley history Cleaning products Companies based in Tameside
Donkey stone
[ "Chemistry" ]
608
[ "Cleaning products", "Products of chemical industry" ]
15,850,729
https://en.wikipedia.org/wiki/Translinear%20circuit
A translinear circuit is a circuit that carries out its function using the translinear principle. These are current-mode circuits that can be made using transistors that obey an exponential current-voltage characteristic—this includes bipolar junction transistors (BJTs) and CMOS transistors in weak inversion. Translinearity, in a broad sense, is linear dependence of transconductance on current, which occurs in components with an exponential current-voltage relationship. History and etymology The word translinear (TL) was invented by Barrie Gilbert in 1975 to describe circuits that used the exponential current-voltage relation of BJTs. By using this exponential relationship, this class of circuits can implement multiplication, amplification and power-law relationships. When Barrie Gilbert described this class of circuits he also described the translinear principle (TLP), which made the analysis of these circuits possible in a way that the simplified view of BJTs as linear current amplifiers did not allow. The TLP was later extended to include other elements that obey an exponential current-voltage relationship (such as CMOS transistors in weak inversion). The Translinear Principle The translinear principle (TLP) is that in a closed loop containing an even number of translinear elements (TEs) with an equal number of them arranged clockwise and counter-clockwise, the product of the currents through the clockwise TEs equals the product of the currents through the counter-clockwise TEs, or $$\prod_{n \in \text{CW}} I_n = \prod_{n \in \text{CCW}} I_n.$$ The TLP is dependent on the exponential current-voltage relationship of a circuit element. Thus, an ideal TE follows the relationship $$I = \lambda I_s e^{\kappa V / U_T},$$ where $I_s$ is a pre-exponential scaling current, $\lambda$ is a dimensionless multiplier to $I_s$, $\kappa$ is a dimensionless multiplier to the gate-emitter voltage $V$, and $U_T$ is the thermal voltage $kT/q$. In a circuit, TEs are described as either clockwise (CW) or counterclockwise (CCW). If the arrow on the emitter points clockwise, it is considered a CW TE; if it points counterclockwise, it is considered a CCW TE. Consider an example: By Kirchhoff's voltage law, the net voltage around a closed loop must be 0. In other words, the voltage drops must equal the voltage increases. When a loop that only goes through the emitter-gate connections of TEs exists, we call it a translinear loop. Mathematically, this becomes $$\sum_{n \in \text{CW}} V_n = \sum_{n \in \text{CCW}} V_n.$$ Because of the exponential current-voltage relationship, this implies the TLP: this is effectively because current is used as the signal. Because of this, voltage is the log of the signal, and addition in the log domain is like multiplication of the original signal (i.e. $\log a + \log b = \log(ab)$). The translinear principle is the rule that, in a translinear loop, the product of the currents through the CW TEs is equal to the product of the currents through the CCW TEs. For a detailed derivation of the TLP, and physical interpretations of the parameters in the ideal TE law, please refer to the references. Example Translinear Circuits Squaring Circuit According to the TLP, $I_{in} \cdot I_{in} = I_u \cdot I_{out}$. This means that $I_{out} = I_{in}^2 / I_u$, where $I_u$ is the unit scaling current (i.e. the definition of unity for the circuit). This is effectively a squaring circuit, with $I_{out} = I_{in}^2$ measured in units of $I_u$. This particular circuit is designed in what is known as an alternating topology, which means that CW TEs alternate with CCW TEs. Here's the same circuit in a stacked topology. The same equation applies to this circuit as to the alternating topology, according to the TLP. The alternating translinear loop is also called type A and the stacked loop called type B. In realizing the principle, the difficulty is that it is current-based. 
The only voltage that is relevant to the principle is the voltage between the nodes N1 and N2. These and all other potentials must allow the transistors to carry the currents while forward-biased, in such a way that the transistors can follow the principle. What the nodes N1 and N2 are depends on the type: For type A the nodes are the emitter connections. For type B the nodes are the connected bases of the upper BJT and the emitters of the lower. For every pair of base-connected transistors, only one may have its collector connected to its base in diode connection, with the input current set by the collector potential; the other may not be biased in the same way. Both type A and B realize the same mathematical function, the difference being the voltage between the two nodes, of which at least one is an emitter-to-emitter connection. For type A (alternating), with two emitter-connected pairs, the voltage relates to the ratio between the currents within each base-coupled pair. For type B (stacked/balanced), the node voltage is the sum of the two base-emitter voltages in each pair and thus relates to the product of currents in each stacked base-to-emitter coupled pair. Thus, if the voltage is forced in either case, two currents, one in each pair, must be variable. In the type A (alternating loop) example below, an NMOSFET allows the correct tiny voltage between the emitter nodes of the emitter-coupled pairs due to negative feedback, because a higher collector/gate voltage lowers its resistance such that the base-emitter voltage of the output BJT is small enough to keep it out of saturation. The collector potential of one of the inner BJTs controls both inner BJTs' currents by allowing the two inner BJTs to drop their emitter currents through the low residual voltage of the NMOSFET. As the MOSFET should not operate with reversed drain-source polarity, this restricts the current relations or emitter potentials at which the circuit can operate. Here are some example biasing schemes: 2-Quadrant Multiplier The design of a 2-quadrant multiplier can be easily done using the TLP. The first issue with this circuit is that negative values of currents need to be represented. Since all currents must be positive for the exponential relationship to hold (the log operation is not defined for negative numbers), positive currents must represent negative currents. The way this is done is by defining two positive currents whose difference is the current of interest. A two-quadrant multiplier has the relationship $I_{out} = I_x I_y / I_u$ hold while allowing $I_x$ to be either positive or negative. We'll let $I_x = I_x^+ - I_x^-$ and $I_{out} = I_{out}^+ - I_{out}^-$. Also note that $I_x^+, I_x^- \geq 0$ and $I_{out}^+, I_{out}^- \geq 0$. Plugging these values into the original equation yields $I_{out}^+ - I_{out}^- = (I_x^+ - I_x^-) I_y / I_u$. This can be rephrased as $I_{out}^+ - I_{out}^- = I_x^+ I_y / I_u - I_x^- I_y / I_u$. By equating the positive and negative portions of the equation, two equations that can be directly built as translinear loops arise: $$I_u I_{out}^+ = I_x^+ I_y \qquad I_u I_{out}^- = I_x^- I_y.$$ The following are the alternating loops that implement the desired equations and some biasing schemes for the circuit. Usage in electronic circuits The TLP has been used in a variety of circuits including vector arithmetic circuits, current conveyors, current-mode operational amplifiers, and RMS-DC converters. It has been in use since the 1960s (by Gilbert), but was not formalized until 1975. In the 1980s, Evert Seevinck's work helped to create a systematic process for translinear circuit design. In 1990 Seevinck invented a circuit he called a companding current-mode integrator that was effectively a first-order log-domain filter. 
Usage in electronic circuits The TLP has been used in a variety of circuits including vector arithmetic circuits, current conveyors, current-mode operational amplifiers, and RMS-DC converters. It has been in use since the 1960s (by Gilbert), but was not formalized until 1975. In the 1980s, Evert Seevinck's work helped to create a systematic process for translinear circuit design. In 1990, Seevinck invented a circuit he called a companding current-mode integrator, effectively a first-order log-domain filter. A version of this was generalized in 1993 by Douglas Frey, and the connection between this class of filters and TL circuits was made most explicit in the late-1990s work of Jan Mulder et al., in which they describe the dynamic translinear principle. More work by Seevinck led to synthesis techniques for extremely low-power TL circuits. More recent work in the field has led to the voltage-translinear principle, multiple-input translinear element networks, and field-programmable analog arrays (FPAAs). References Electronic circuits
Translinear circuit
[ "Engineering" ]
1,601
[ "Electronic engineering", "Electronic circuits" ]
15,850,906
https://en.wikipedia.org/wiki/Nosism
Nosism, from Latin nos ('we'), is the practice of using the pronoun we to refer to oneself when expressing a personal opinion. Depending on who is speaking, several distinct uses can be distinguished: The royal we or pluralis majestatis The royal we refers to a single person holding a high office, such as a monarch, bishop, or pope. It can also be used to refer to God, as in Genesis 1:1, “In the beginning [Elohim] created the heavens and the earth,” Elohim being the plural form of El (God). The editorial we The editorial we is a similar phenomenon, in which an editorial columnist in a newspaper or a similar commentator in another medium uses we when giving their opinion. Here, the writer is self-cast in the role of a spokesperson: either for the media institution that employs them, or more generally on behalf of the party or body of citizens who agree with the commentary. The author's we or pluralis modestiae Similar to the editorial we is the practice, common in mathematical and scientific literature, of referring to a generic third person by we (instead of the more common one or the informal you): "By adding four and five, we obtain nine." "We are thus led also to a definition of time in physics."—Albert Einstein We in this sense often refers to "the reader and the author", since the author often assumes that the reader knows and agrees with certain principles or previous theorems for the sake of brevity (or, if not, the reader is prompted to look them up). This practice is discouraged in the natural and formal sciences, social sciences, humanities, and technical writing because it fails to distinguish between sole authorship and co-authorship. The patronizing we The patronizing we (also known as the kindergarten or preschool we) is sometimes used in addressing someone instead of you, suggesting that the addressee is not alone in their situation, as in "We won't lose our mittens today." This usage can carry condescending, ironic, praising, or other connotations, depending on intonation. The hospital we This is sometimes employed by healthcare workers when addressing their patients; for example, "How are we feeling today?" The non-confrontative we The non-confrontative we is used in T–V languages such as Spanish, where a phrase like ¿Cómo estamos? (literally, 'How are we?') is sometimes used to avoid both over-familiarity and under-formality among near-peer acquaintances. In Spanish, the indicative we form is also often used instead of the imperative for giving instructions, such as in recipes: batimos las claras a punto de nieve ('we beat the egg whites until stiff'). References Personal pronouns Sociolinguistics Grammatical number Etiquette
Nosism
[ "Biology" ]
570
[ "Etiquette", "Behavior", "Human behavior" ]
15,851,261
https://en.wikipedia.org/wiki/Masdar%20Institute
The Masdar Institute of Science and Technology in Masdar City, Abu Dhabi was a private higher-education and research institute active from 2007 to 2017. In 2017, it merged with two other institutions in Abu Dhabi, the Petroleum Institute and Khalifa University, to become the multi-campus, sole-branded Khalifa University. Its previous structure, now part of Khalifa University, is known as the "Masdar City campus". Masdar Institute was an integral part of the non-profit side of the Masdar Initiative and was the first institution to occupy Masdar City. The Technology and Development Program at the Massachusetts Institute of Technology provided scholarly assessment and advice to Masdar Institute. The collaborative agreement between the two institutions is still in place and currently hosts several exchange students from the legacy cohorts. History Masdar Institute was established on February 25, 2007. The project developer was Hip Hing Construction and the architects were Foster and Partners. The institute employed 85 faculty members and had an enrollment of 456 students. The establishment of Masdar Institute was part of a resource diversification policy for the Emirate of Abu Dhabi. Abu Dhabi's leadership views research and education in alternative energy as a keystone for the future development of the emirate and expressed its commitment through the establishment of the Masdar Initiative, Masdar City and the Zayed Future Energy Prize. The institute's interim provost, Behjat Al Yousuf, was appointed in May 2015. She previously served as the dean of students at Masdar. In the same year, the institute completed a thermal energy storage project with the Norwegian company EnergyNest AS in Abu Dhabi, which continued to be improved until 2017. In 2017, it merged with two other institutions in Abu Dhabi, the Petroleum Institute and Khalifa University, to become the multi-campus and sole-branded Khalifa University. Campus The campus, like Masdar City, was designed by architectural firm Foster + Partners and the first phase of the project was managed by CH2M Hill. Research centers TwinLab 3-Dimensional Stacked Chips Research Center Sustainable Bio-energy Research Center (SBRC) Smart Grid and Smart Building Center of Excellence Renewable Energy Resource Mapping and Assessment Center Students By 2017, 456 students were enrolled and the institute had more than 550 alumni. Faculty and research Masdar Institute commenced teaching in September 2009. Its academics conducted research individually and in collaboration with several top-ranked universities, notably MIT, on topics including water environment and health, advanced energy systems and microsystems, and advanced materials. By 2018, through the MI-MIT collaboration, 8 projects had been completed, and 11 one-to-one research projects and 3 flagship projects (larger research teams) were being executed. The collaboration had a scientific outreach that included 201 peer-reviewed journal and book publications and 217 conference papers and presentations by April 2018. References External links Masdar Initiative website Educational institutions established in 2007 Technical universities and colleges in the United Arab Emirates Engineering universities and colleges Khalifa University Massachusetts Institute of Technology research institutes 2007 establishments in the United Arab Emirates
Masdar Institute
[ "Engineering" ]
608
[ "Engineering universities and colleges" ]
15,852,294
https://en.wikipedia.org/wiki/Bamboo%20forest
The term bamboo forest is commonly used for bamboo plant communities even though bamboo is a grass, not a tree. Definitions of bamboo forests vary by country and may be contradictory. Overview Unlike other forests, bamboos often create communities that are almost entirely composed of a single species. Bamboos also differ from ordinary trees in both appearance and characteristics, such as having stems that are sturdy but do not grow thick. Bamboos grow quickly and abundantly, often preventing sunlight from reaching the ground. This makes it difficult for other plants to grow, so bamboo forests rarely contain any other type of vegetation, creating a unique landscape of dense bamboo. Human Uses Bamboo is used in various ways as a valuable natural material in many Asian countries. As such, bamboo forests are seen as a vital source of tools and resources important to the livelihood of these communities. When bamboo forests are managed with moderate extraction, they can help deter landslides and erosion in the event of an earthquake or other natural disaster. Preventing deforestation Bamboos have a strong reproductive capacity, which can be seen in how quickly they regrow after being cut down. Within 2 to 3 months of being cut, a bamboo shoot can grow into a full-sized plant and quickly cover the land again. This is why some say that cutting down a bamboo plant effectively plants another in its place. The underground stem of bamboo is shallow and spreads near the surface of the ground, and it is covered with “whisker roots,” which hold the ground and the surrounding area firmly in place. The spread of alternative materials for making tools and objects has led some regional bamboo forests to be mismanaged and left in a poor state. The expansion of these neglected bamboo forests puts pressure on other plants by reducing biodiversity in many of these regions and can also amplify the impact of natural disasters on communities. For this reason, some places are taking steps to cut down bamboo forests altogether and replace them with forests that are more permanent and do not need as much maintenance or protection as bamboo. References Sources Bamboo Biodiversity Forest ecology
Bamboo forest
[ "Biology" ]
437
[ "Biodiversity" ]
15,853,493
https://en.wikipedia.org/wiki/Sectional%20density
Sectional density (often abbreviated SD) is the ratio of an object's mass to its cross-sectional area with respect to a given axis. It conveys how well an object's mass is distributed (by its shape) to overcome resistance along that axis. Sectional density is used in gun ballistics. In this context, it is the ratio of a projectile's weight (often in either kilograms, grams, pounds or grains) to its transverse section (often in either square centimeters, square millimeters or square inches), with respect to the axis of motion. For illustration, a nail can penetrate a target medium with its pointed end first with less force than a coin of the same mass lying flat on the target medium. During World War II, bunker-busting Röchling shells were developed by German engineer August Coenders, based on the theory of increasing sectional density to improve penetration. Röchling shells were tested in 1942 and 1943 against the Belgian Fort d'Aubin-Neufchâteau and saw very limited use during World War II. Formula In a general physics context, sectional density is defined as SD = M/A, where: SD is the sectional density M is the mass of the projectile A is the cross-sectional area The SI derived unit for sectional density is kilograms per square meter (kg/m2). The general formula with units then becomes SDkg/m2 = mkg/Am2, where: SDkg/m2 is the sectional density in kilograms per square meter mkg is the mass of the object in kilograms Am2 is the cross-sectional area of the object in square meters Units conversion table 1 g/mm2 equals exactly 1,000 kg/m2. 1 kg/cm2 equals exactly 10,000 kg/m2. With the pound and inch legally defined as 0.45359237 kg and 0.0254 m respectively, it follows that (mass) pounds per square inch is approximately: 1 lb/in2 = 0.45359237 kg/(0.0254 m × 0.0254 m) ≈ 703.07 kg/m2 Use in ballistics The sectional density of a projectile can be employed in two areas of ballistics. Within external ballistics, when the sectional density of a projectile is divided by its coefficient of form (form factor in commercial small arms jargon), it yields the projectile's ballistic coefficient. Sectional density has the same (implied) units as the ballistic coefficient. Within terminal ballistics, the sectional density of a projectile is one of the determining factors for projectile penetration. The interaction between projectile (fragments) and target media is however a complex subject. A study regarding hunting bullets shows that besides sectional density several other parameters determine bullet penetration. If all other factors are equal, the projectile with the greatest sectional density will penetrate the deepest. Metric units When working with ballistics using SI units, it is common to use either grams per square millimeter or kilograms per square centimeter. Their relationship to the base unit kilograms per square meter is shown in the conversion table above.
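The definition and the conversions above can be summarized in a short Python sketch (added here for illustration, not part of the original article); the sample masses and diameters anticipate the worked examples in the subsections that follow.

```python
import math

def sectional_density(mass_kg, diameter_m):
    """Sectional density SD = m / A in kg/m^2 for a circular cross-section."""
    area = math.pi * diameter_m ** 2 / 4.0
    return mass_kg / area

# Conversion factors from the table above (the first two are exact).
KG_M2_PER_G_MM2 = 1000.0                     # 1 g/mm^2  = 1,000 kg/m^2
KG_M2_PER_KG_CM2 = 10000.0                   # 1 kg/cm^2 = 10,000 kg/m^2
KG_M2_PER_LB_IN2 = 0.45359237 / 0.0254 ** 2  # ~703.07 kg/m^2

# 10.4 g small-arms bullet, 6.7 mm diameter (see the metric example below)
sd = sectional_density(10.4e-3, 6.7e-3)
print(round(sd / KG_M2_PER_G_MM2, 3))    # 0.295 g/mm^2

# 43.2 kg M107 projectile, 15.471 cm body diameter (see the examples below)
sd = sectional_density(43.2, 0.15471)
print(round(sd / KG_M2_PER_KG_CM2, 3))   # 0.23 kg/cm^2
print(round(sd / KG_M2_PER_LB_IN2, 3))   # 3.269 lb/in^2
```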
Grams per square millimeter Using grams per square millimeter (g/mm2), the formula then becomes SDg/mm2 = 4·mg/(π·dmm²), where: SDg/mm2 is the sectional density in grams per square millimeter mg is the mass of the projectile in grams dmm is the diameter of the projectile in millimeters For example, a small arms bullet with a mass of 10.4 grams and a diameter of 6.7 mm has a sectional density of: 4 · 10.4/(π · 6.7²) = 0.295 g/mm2 Kilograms per square centimeter Using kilograms per square centimeter (kg/cm2), the formula then becomes SDkg/cm2 = 4·mkg/(π·dcm²), where: SDkg/cm2 is the sectional density in kilograms per square centimeter mkg is the mass of the projectile in kilograms dcm is the diameter of the projectile in centimeters For example, an M107 projectile with a mass of 43.2 kg and a body diameter of 15.471 cm has a sectional density of: 4 · 43.2/(π · 15.471²) = 0.230 kg/cm2 English units In older ballistics literature from English-speaking countries, and still to this day, the most commonly used unit for sectional density of circular cross-sections is (mass) pounds per square inch (lbm/in2). The formula then becomes SDlbm/in2 = 4·mlb/(π·din²) = 4·(mgr/7000)/(π·din²), where: SDlbm/in2 is the sectional density in (mass) pounds per square inch the mass of the projectile is: mlb in pounds mgr in grains din is the diameter of the projectile in inches The sectional density defined this way is usually presented without units. In Europe the derived unit g/cm2 is also used in literature regarding small arms projectiles to get a number in front of the decimal separator. As an example, a bullet with a mass of 160 grains and a diameter of 0.264 in has a sectional density (SD) of: 4 · (160/7000)/(π · 0.264²) = 0.418 lbm/in2 As another example, the M107 projectile mentioned above with a mass of 95.24 lb and a body diameter of 6.0909 in has a sectional density of: 4 · 95.24/(π · 6.0909²) = 3.268 lbm/in2 See also Ballistic coefficient References Projectiles Aerodynamics Ballistics
Sectional density
[ "Physics", "Chemistry", "Engineering" ]
1,087
[ "Applied and interdisciplinary physics", "Aerodynamics", "Aerospace engineering", "Ballistics", "Fluid dynamics" ]
15,855,253
https://en.wikipedia.org/wiki/Quantification%20of%20margins%20and%20uncertainties
Quantification of Margins and Uncertainty (QMU) is a decision support methodology for complex technical decisions. QMU focuses on the identification, characterization, and analysis of performance thresholds and their associated margins for engineering systems that are evaluated under conditions of uncertainty, particularly when portions of those evaluations are generated using computational modeling and simulation. QMU has traditionally been applied to complex systems where comprehensive experimental test data is not readily available and cannot be easily generated for either end-to-end system execution or for specific subsystems of interest. Examples of systems where QMU has been applied include nuclear weapons performance, qualification, and stockpile assessment. QMU focuses on characterizing in detail the various sources of uncertainty that exist in a model, thus allowing the uncertainty in the system response output variables to be well quantified. These sources are frequently described in terms of probability distributions to account for the stochastic nature of complex engineering systems. The characterization of uncertainty supports comparisons of design margins for key system performance metrics to the uncertainty associated with their calculation by the model. QMU supports risk-informed decision-making processes where computational simulation results provide one of several inputs to the decision-making authority. There is currently no standardized methodology across the simulation community for conducting QMU; the term is applied to a variety of different modeling and simulation techniques that focus on rigorously quantifying model uncertainty in order to support comparison to design margins. History The fundamental concepts of QMU were originally developed concurrently at several national laboratories supporting nuclear weapons programs in the late 1990s, including Lawrence Livermore National Laboratory, Sandia National Laboratory, and Los Alamos National Laboratory. The original focus of the methodology was to support nuclear stockpile decision-making, an area where full experimental test data could no longer be generated for validation due to bans on nuclear weapons testing. The methodology has since been applied in other applications where safety or mission critical decisions for complex projects must be made using results based on modeling and simulation. Examples outside of the nuclear weapons field include applications at NASA for interplanetary spacecraft and rover development, missile six-degree-of-freedom (6DOF) simulation results, and characterization of material properties in terminal ballistic encounters. Overview QMU focuses on quantification of the ratio of design margin to model output uncertainty. The process begins with the identification of the key performance thresholds for the system, which can frequently be found in the systems requirements documents. These thresholds (also referred to as performance gates) can specify an upper bound of performance, a lower bound of performance, or both in the case where the metric must remain within the specified range. For each of these performance thresholds, the associated performance margin must be identified. The margin represents the targeted range the system is being designed to operate in to safely avoid the upper and lower performance bounds. These margins account for aspects such as the design safety factor to which the system is being developed, as well as the confidence level in that safety factor.
QMU focuses on determining the quantified uncertainty of the simulation results as they relate to the performance threshold margins. This total uncertainty includes all forms of uncertainty related to the computational model as well as the uncertainty in the threshold and margin values. The identification and characterization of these values allows the ratios of margin-to-uncertainty (M/U) to be calculated for the system. These M/U values can serve as quantified inputs that can help authorities make risk-informed decisions regarding how to interpret and act upon results based on simulations. QMU recognizes that there are multiple types of uncertainty that propagate through a model of a complex system. The simulation in the QMU process produces output results for the key performance thresholds of interest, known as the Best Estimate Plus Uncertainty (BE+U). The best estimate component of BE+U represents the core information that is known and understood about the model response variables. The basis that allows high confidence in these estimates is usually ample experimental test data regarding the process of interest, which allows the simulation model to be thoroughly validated. The types of uncertainty that contribute to the value of the BE+U can be broken down into several categories: Aleatory uncertainty: This type of uncertainty is naturally present in the system being modeled and is sometimes known as “irreducible uncertainty” and “stochastic variability.” Examples include processes that are naturally stochastic such as wind gust parameters and manufacturing tolerances. Epistemic uncertainty: This type of uncertainty is due to a lack of knowledge about the system being modeled and is also known as “reducible uncertainty.” Epistemic uncertainty can result from uncertainty about the correct underlying equations of the model, incomplete knowledge of the full set of scenarios to be encountered, and lack of experimental test data defining the key model input parameters. The system may also suffer from requirements uncertainty, related to the thresholds and margins specified in the system requirements. QMU acknowledges that in some situations, the system designer may have high confidence in what the correct value for a specific metric may be, while at other times, the selected value may itself suffer from uncertainty due to lack of experience operating in this particular regime. QMU attempts to separate these uncertainty values and quantify each of them as part of the overall inputs to the process. QMU can also attempt to account for human error in identifying the unknown unknowns that can affect a system. These errors can be quantified to some degree by looking at the limited experimental data that may be available for previous system tests and identifying what percentage of tests resulted in system thresholds being exceeded in an unexpected manner. This approach attempts to predict future events based on the past occurrences of unexpected outcomes. The underlying parameters that serve as inputs to the models are frequently modeled as samples from a probability distribution. The input parameter model distributions as well as the model propagation equations determine the distribution of the output parameter values. The distribution of a specific output value must be considered when determining what is an acceptable M/U ratio for that performance variable; a minimal sketch of one such computation is given below.
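As an illustration of how an M/U ratio might be obtained from a propagated output distribution, the following Python sketch samples a stand-in model and reports M/U. It is illustrative only: the threshold, margin, noise models, and the choice of a 95% interval half-width as the uncertainty measure are all assumptions, not a standardized QMU recipe.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

# Invented values for illustration only:
threshold = 100.0     # upper performance bound the system must stay below
design_point = 80.0   # nominal best-estimate operating point
margin = threshold - design_point   # M = 20.0

def run_model():
    """Stand-in for a computational model: nominal response plus an
    aleatory (irreducible) term and an epistemic (reducible) term."""
    aleatory = random.gauss(0.0, 3.0)
    epistemic = random.uniform(-2.0, 2.0)
    return design_point + aleatory + epistemic

samples = sorted(run_model() for _ in range(100_000))

# One possible uncertainty measure: half-width of the central 95% interval.
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
U = (hi - lo) / 2.0

print(f"best estimate = {statistics.mean(samples):.1f}")
print(f"M = {margin:.1f}, U = {U:.1f}, M/U = {margin / U:.1f}")
```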
If the uncertainty limit for U includes a finite upper bound due to the particular distribution of that variable, a lower M/U ratio may be acceptable. However, if U is modeled as a normal or exponential distribution, which can potentially include outliers from the far tails of the distribution, a larger value may be required in order to reduce system risk to an acceptable level. Ratios of acceptable M/U for safety-critical systems can vary from application to application. Studies have cited acceptable M/U ratios as being in the 2:1 to 10:1 range for nuclear weapons stockpile decision-making. Intuitively, the larger the value of M/U, the less of the available performance margin is being consumed by uncertainty in the simulation outputs. A ratio of 1:1 could result in a simulation run where the simulated performance threshold is not exceeded when in actuality the entire design margin may have been consumed. It is important to note that rigorous QMU does not ensure that the system itself is capable of meeting its performance margin; rather, it serves to ensure that the decision-making authority can make judgments based on accurately characterized results. The underlying objective of QMU is to present information to decision-makers that fully characterizes the results in light of the uncertainty as understood by the model developers. This presentation of results allows decision-makers an opportunity to make informed decisions while understanding what sensitivities exist in the results due to the current understanding of uncertainty. Advocates of QMU recognize that decisions for complex systems cannot be made strictly based on the quantified M/U metrics. Subject matter expert (SME) judgment and other external factors such as stakeholder opinions and regulatory issues must also be considered by the decision-making authority before a final outcome is decided. Verification and validation Verification and validation (V&V) of a model is closely interrelated with QMU. Verification is broadly acknowledged as the process of determining if a model was built correctly; validation activities focus on determining if the correct model was built. V&V against available experimental test data is an important aspect of accurately characterizing the overall uncertainty of the system response variables. V&V seeks to make maximum use of component and subsystem-level experimental test data to accurately characterize model input parameters and the physics-based models associated with particular sub-elements of the system. The use of QMU in the simulation process helps to ensure that the stochastic nature of the input variables (due to both aleatory and epistemic uncertainties) as well as the underlying uncertainty in the model are properly accounted for when determining the simulation runs required to establish model credibility prior to accreditation. Advantages and disadvantages QMU has the potential to support improved decision-making for programs that must rely heavily on modeling and simulation. Modeling and simulation results are being used more often during the acquisition, development, design, and testing of complex engineering systems. One of the major challenges of developing simulations is to know how much fidelity should be built into each element of the model. The pursuit of higher fidelity can significantly increase development time and total cost of the simulation development effort.
QMU provides a formal method for describing the required fidelity relative to the design threshold margins for key performance variables. This information can also be used to prioritize areas of future investment for the simulation. Analysis of the various M/U ratios for the key performance variables can help identify model components that are in need of fidelity upgrades in order to increase simulation effectiveness. A variety of potential issues related to the use of QMU have also been identified. QMU can lead to longer development schedules and increased development costs relative to traditional simulation projects due to the additional rigor being applied. Proponents of QMU state that the level of uncertainty quantification required is driven by certification requirements for the intended application of the simulation. Simulations used for capability planning or system trade analyses must generally model the overall performance trends of the systems and components being analyzed. However, for safety-critical systems where experimental test data is lacking, simulation results provide a critical input to the decision-making process. Another potential risk related to the use of QMU is a false sense of confidence regarding protection from unknown risks. The use of quantified results for key simulation parameters can lead decision-makers to believe all possible risks have been fully accounted for, which is particularly challenging for complex systems. Proponents of QMU advocate for a risk-informed decision-making process to counter this risk; in this paradigm, M/U results as well as SME judgment and other external factors are always factored into the final decision. See also Uncertainty quantification Sandia National Laboratory Los Alamos National Laboratory Lawrence Livermore National Laboratory Verification and Validation References Nuclear stockpile stewardship Numerical analysis Decision-making
Quantification of margins and uncertainties
[ "Mathematics" ]
2,166
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
15,855,590
https://en.wikipedia.org/wiki/Vector%20directory%20number
A vector directory number (VDN) is an extension on an automatic call distributor that directs an incoming call to a "vector" — a user-defined sequence of functions that may be performed, such as routing the call to a destination, giving a busy signal, or playing a recorded message. This number is a "soft" extension number not assigned to an equipment location. VDNs must be set up according to the customer's dial plan and the optional vectoring software must be enabled. VDN is used in different call center environments. See also Virtual number References External links Virtual Phone Number Telephone numbers Computer telephony integration
Vector directory number
[ "Mathematics", "Technology" ]
131
[ "Telephone numbers", "Mathematical objects", "Information technology", "Numbers", "Computer telephony integration" ]
15,856,523
https://en.wikipedia.org/wiki/PTT%20ID
PTT ID, or Push-To-Talk ID, is a generic term for an automatic number identification (ANI)-like system used in two-way radio systems. It provides identification of the transmitting radio over the air, and is commonly used in selective calling/signaling systems, usually in commercial and public safety radio systems. PTT ID features are included in MDC-1200 and other signaling systems. Radio technology
PTT ID
[ "Technology", "Engineering" ]
87
[ "Information and communications technology", "Telecommunications engineering", "Radio technology" ]
15,857,311
https://en.wikipedia.org/wiki/Mining%20feasibility%20study
A mining feasibility study is an evaluation of a proposed mining project to determine whether the mineral resource can be mined economically. There are three types of feasibility study used in mining: order of magnitude, preliminary feasibility and detailed feasibility. Order of magnitude Order of magnitude feasibility studies (sometimes referred to as "scoping studies") are an initial financial appraisal of an inferred mineral resource. Depending on the size of the project, an order of magnitude study may be carried out by a single individual. It will involve a preliminary mine plan, and is the basis for determining whether to proceed with an exploration program and more detailed engineering work. Order-of-magnitude studies are developed by copying plans and factoring known costs from existing projects completed elsewhere, and are accurate to within 40–50%. Preliminary feasibility Preliminary feasibility studies or "pre-feasibility studies" are more detailed than order of magnitude studies. A preliminary feasibility study is used in due diligence work, in determining whether to proceed with a detailed feasibility study, and as a "reality check" to determine areas within the project that require more attention. Preliminary feasibility studies are done by factoring known unit costs and by estimating gross dimensions or quantities once conceptual or preliminary engineering and mine design has been completed. Preliminary feasibility studies are completed by a small group of multi-disciplined technical individuals and have an accuracy within 20–30%. Detailed feasibility Detailed feasibility studies are the most detailed and will determine definitively whether to proceed with the project. A detailed feasibility study will be the basis for capital appropriation, and will provide the budget figures for the project. Detailed feasibility studies require a significant amount of formal engineering work, are accurate to within 10–15%, and can cost between ½% and 1½% of the total estimated project cost. Footnotes Mining engineering Feasibility study
Mining feasibility study
[ "Engineering" ]
364
[ "Mining engineering" ]
15,858,460
https://en.wikipedia.org/wiki/Ei-ichi%20Negishi
Ei-ichi Negishi (July 1935 – June 6, 2021) was a Japanese chemist who was best known for his discovery of the Negishi coupling. He spent most of his career at Purdue University in the United States, where he was the Herbert C. Brown Distinguished Professor and the director of the Negishi-Brown Institute. He was awarded the 2010 Nobel Prize in Chemistry "for palladium catalyzed cross couplings in organic synthesis" jointly with Richard F. Heck and Akira Suzuki. Early life and education Negishi was born in Xinjing (today known as Changchun), the capital of Manchukuo, in July 1935. Following the transfer of his father, who worked at the South Manchuria Railway, in 1936 he moved to Harbin, where he lived for eight years. In 1943, when he was nine, the Negishi family moved to Incheon, and a year later to Kyongsong Prefecture (now Seoul), both in Japanese-occupied Korea. In November 1945, three months after World War II ended, they moved to Japan. Since he excelled as a student, a year ahead of what would have been his graduation from grammar school, he was admitted to an elite secondary school, Shonan High School. At the age of 17, he gained admission to the University of Tokyo. After graduation from the University of Tokyo in 1958, Negishi did his internship at Teijin, where he conducted research on polymer chemistry. Later, he continued his studies in the United States after having won a Fulbright Scholarship and obtained his Ph.D. from the University of Pennsylvania in 1963, under the supervision of professor Allan R. Day. Career After obtaining his Ph.D., Negishi decided to become an academic researcher. Although he was hoping to work at a Japanese university, he could not find a position. In 1966 he resigned from Teijin, and became a postdoctoral associate at Purdue University, working under future Nobel laureate Herbert C. Brown. From 1968 to 1972 he was an instructor at Purdue. In 1972, he became an assistant professor at Syracuse University, where he began his lifelong study of transition metal–catalyzed reactions, and was promoted to associate professor in 1979. He returned to Purdue University as a full professor in the same year. He discovered the Negishi coupling, a process which couples organozinc compounds with organic halides under a palladium or nickel catalyst to obtain a C-C bonded product. For this achievement, he was awarded the Nobel Prize in Chemistry in 2010. Negishi also reported that organoaluminum compounds and organic zirconium compounds can be used for cross-coupling. He did not seek a patent for this coupling technology and explained his reasoning as follows: "If we did not obtain a patent, we thought that everyone could use our results easily." In addition, the low-valent zirconocene–butene complex obtained by reducing zirconocene dichloride is also called the Negishi reagent, which can be used in oxidative cyclisation reactions. The technique he developed is estimated to be used in a quarter of all reactions in the pharmaceutical industry. By the time Negishi retired in 2019, he had published more than 400 academic papers. He was committed to instilling rigorous practices in his lab, emphasizing the need to keep organized and comprehensive records. Before any separations, he asked his students to evaluate crude reaction mixtures in order to minimize loss of any useful scientific information. Recognition Awards 1996 – A. R. Day Award (ACS Philadelphia Section award) 1997 – Chemical Society of Japan Award 1998 – Herbert N.
McCoy Award 1998 – American Chemical Society Award for Organometallic Chemistry 1998–2000 – Alexander von Humboldt Senior Researcher Award 2003 – Sigma Xi Award, Purdue University 2007 – Yamada–Koga Prize 2007 – Gold Medal of Charles University, Prague, Czech Republic 2010 – Nobel Prize in Chemistry 2010 – ACS Award for Creative Work in Synthetic Organic Chemistry 2015 – Fray International Sustainability Award, SIPS 2015 Honors 1960–61 – Fulbright–Smith–Mundt Fellowship 1962–63 – Harrison Fellowship at University of Pennsylvania 1986 – Guggenheim Fellowship 2000 – Sir Edward Frankland Prize Lectureship 2009 – Invited Lectureship, 4th Mitsui International Catalysis Symposium (MICS-4), Kisarazu, Japan 2010 – Order of Culture 2010 – Person of Cultural Merit 2011 – Sagamore of the Wabash 2011 – Order of the Griffin, Purdue University 2011 – Fellow, American Academy of Arts & Sciences 2011 – Honorary doctor of science, University of Pennsylvania. 2012 – Honorary Fellow of Royal Society of Chemistry (RSC) 2014 – Foreign Associate of the National Academy of Sciences Personal life and death Negishi began dating Sumire Suzuki in his freshman year and they announced their engagement to their parents in March 1958. They had met in a choir of which they were both members at university. They married the next year and together they had two daughters. Negishi loved playing the piano and conducting. During the "Pacifichem" 2015 conference's closing ceremony, he conducted an orchestra. Disappearance On the evening of March 12, 2018, both Negishi and his wife were reported missing by family members. Police determined that, based on a purchase made earlier in the day, the couple had left their home in West Lafayette, Indiana, and headed north. At about 5 a.m. the next day, officers in Ogle County, Illinois, received a call to check on the welfare of an elderly man who was walking on a rural road south of Rockford. When he was taken to hospital, officers identified him as Negishi and found that police in Indiana were looking for him and his wife. A short time later, Suzuki's body was found at the Orchard Hills Landfill in Davis Junction, along with the couple's car. According to a statement from the family, the couple was driving to Rockford International Airport for a trip when their car became stuck in a ditch on a road near the landfill. Negishi went looking for help and was said to be suffering from an "acute state of confusion and shock". The Ogle County Sheriff Department said there was no suspicion of foul play in Suzuki's death, although the cause of her death was not immediately released. The family said Suzuki was near the end of her battle with Parkinson's disease. In May 2018, an autopsy concluded that Suzuki died from hypothermia, but Parkinson's disease and hypertension were contributing factors. Death Negishi died in Indianapolis, Indiana, on June 6, 2021. He was 85 years old. No funeral services took place in the United States, but his family planned to lay him to rest in Japan in 2022. See also List of Japanese Nobel laureates Richard F.
Heck Makoto Kumada Akira Suzuki Kenkichi Sonogashira References External links Ei-ichi Negishi – – Purdue University 1935 births 2021 deaths Japanese organic chemists Japanese Nobel laureates Japanese people from Manchukuo Nobel laureates in Chemistry Syracuse University faculty Purdue University faculty Academic staff of Hokkaido University University of Tokyo alumni University of Pennsylvania alumni Recipients of the Order of Culture People from Changchun 20th-century Japanese chemists Foreign associates of the National Academy of Sciences 21st-century Japanese chemists Chemists from Jilin Educators from Jilin
Ei-ichi Negishi
[ "Chemistry" ]
1,487
[ "Organic chemists", "Japanese organic chemists" ]
15,859,075
https://en.wikipedia.org/wiki/Dendera%20zodiac
The sculptured Dendera zodiac (or Denderah zodiac) is a widely known Egyptian bas-relief from the ceiling of the pronaos (or portico) of a chapel dedicated to Osiris in the Hathor temple at Dendera, containing images of Taurus (the bull) and Libra (the scales). This chapel was begun in the late Ptolemaic period; its pronaos was added by the emperor Tiberius. This led Jean-François Champollion to date the relief to the Greco-Roman period, but most of his contemporaries believed it to be of the New Kingdom. The relief, which John H. Rogers characterised as "the only complete map that we have of an ancient sky", has been conjectured in the past to represent the basis on which later astronomy systems were based. It is now on display at the Musée du Louvre, Paris. Description The sky disc is centered on the north pole star, with Ursa Minor depicted as a jackal. An inner disc is composed of constellations showing the signs of the zodiac. Some of these are represented in the same Greco-Roman iconographic forms as their familiar counterparts (e.g. the Ram, Taurus, Scorpio, and Capricorn), whilst others are shown in a more Egyptian form: Aquarius is represented as the flood god Hapy, holding two vases which gush water. Rogers noted the similarities of the unfamiliar iconology with the three surviving tablets of a Seleucid zodiac, both relating to kudurru ('boundary stone') representations: in short, Rogers sees the Dendera zodiac as "a complete copy of the Mesopotamian zodiac". A comparison with other Mesopotamian pre-zodiac astronomical material led Hoffmann to the suggestion that the depiction shows a Babylonian star chart (and not only the Babylonian zodiac) with some Greco-Egyptian additions and variants. Four women and four pairs of falcon-headed figures, arranged 45° from one another, hold up the sky disc, the outermost ring of which features 36 figures representing the 36 asterisms used to track both the 36 forty-minute "hours" that divided the Egyptian night and the 36 ten-day "weeks" (decans) of the Egyptian year (with 5 days excluded). The square of the overall sculpture is oriented to the walls of the temple. This sculptural representation of the zodiac in circular form is unique in ancient Egyptian art. More typical are the rectangular zodiacs which decorate the same temple's pronaos. History During the Napoleonic campaign in Egypt, Vivant Denon drew the circular zodiac, the more widely known one, and the rectangular zodiacs. In 1802, after the Napoleonic expedition, Denon published engravings of the temple ceiling in his Voyage dans la Basse et la Haute Egypte. These elicited a controversy as to the age of the zodiac representation, with estimates ranging from tens of thousands of years down to a few hundred, and as to whether the zodiac was a planisphere or an astrological chart. Sébastien Louis Saulnier, an antique dealer, commissioned Claude Lelorrain to remove the circular zodiac with saws, jacks, scissors and gunpowder. The zodiac ceiling was moved in 1821 to Restoration Paris and, by 1822, was installed by Louis XVIII in the Royal Library (later called the National Library of France). In 1922 the zodiac was moved from there to the Louvre. In 2022 Egyptologist Zahi Hawass started a petition to bring the ancient work back to Egypt, along with the Rosetta Stone and other artifacts. Dating The controversy around the zodiac's dating, known as the "Dendera Affair", involved such figures as Joseph Fourier (who estimated the age to be 2500 BC).
Champollion, among others, believed that it was a religious zodiac. Champollion placed the zodiac in the fourth century AD. Georges Cuvier placed the date between 123 AD and 147 AD. His discussion of the dating summarizes the reasoning as he understood it in the 1820s. Sylvie Cauville and Éric Aubourg dated it to 50 BC through an examination of the planetary configuration. The relief depicts the five planets known to the Egyptians in a configuration that occurs once every thousand years, as well as two eclipses. The solar eclipse indicates the date of March 7, 51 BC: it is represented by a circle containing the goddess Isis holding a baboon (the god Thoth) by the tail. The lunar eclipse indicates the date of September 25, 52 BC: it is represented by an Eye of Horus locked into a circle. Notes See also Athribis (Upper Egypt) Astronomical ceiling of Senenmut's Tomb Farnese Atlas - a 2nd-century AD Roman marble sculpture of Atlas holding up a celestial globe Zodiac synagogue mosaic References Further reading Sébastien Louis Saulnier, Claude Lelorrain, , Éditions Sétier, 1822. Nicolas B. Halma, Examen et explication du zodiaque de Denderah comparé au globe céleste antique d'Alexandrie, Éditions Merlin, 1822. J. Chabert, L. D. Ferlus, Mahmoud Saba, Explication du zodiaque de Denderah (Tentyris), Éditions Guiraudet, 1822. Jean Saint-Martin, Notice sur le zodiaque de Denderah, Éditions C.J. Trouvé, 1822. Jean-Baptiste Biot, , Firmin Didot, 1823. Charles de Hesse, La pierre zodiacale du Temple de Dendérah, Éditions André Seidelin, 1824. Jacques-Joseph Champollion-Figeac, , Firmin Didot, 1832. Jean-Baptiste Prosper Jollois; René Édouard de Villiers du Terrage, Recherches sur les bas-reliefs astronomiques des Égyptiens, Carilian-Goeury, 1834. Letronne Antoine-Jean, Analyse critique des représentations zodiacales de Dendéra et d'Esné, Imprimerie Royale, 1855. Franz Joseph Lauth, Les zodiaques de Denderah, Éditions C. Wolf et Fils, 1865. Éric Aubourg, "La date de conception du zodiaque du temple d'Hathor à Dendérah", Bulletin de l'Institut Français d'Archéologie Orientale, 95 (1995), 1–10. Sylvie Cauville : Le temple d'Isis à Dendéra, BSFE 123, 1992. Le temple de Dendérah, IFAO, 1995. Le zodiaque d'Osiris, Peeters, 1997 (corr. 2nd ed. 2015). L'Œil de Ré, Pygmalion, 1999. Jed Z. Buchwald, "Egyptian Stars under Paris Skies", Engineering & Science, 66 (2003), nr. 4, 20–31. Jed Z. Buchwald & Diane Greco Josefowicz, The Zodiac of Paris: How an Improbable Controversy over an Ancient Egyptian Artifact provoked a Modern Debate between Religion and Science, Princeton University Press, 2010. External links The Zodiac in the Louvre collections database Gyula Priskin, The Dendera zodiacs as narratives of the myth of Osiris, Isis, and the child Horus ENiM 8 (2015), 133–185. 50 BC 1st-century BC sculptures 1802 archaeological discoveries Egyptian antiquities in the Louvre Sculptures of ancient Egypt Ancient astronomy Egyptian calendar Reliefs in France French invasion of Egypt and Syria Isis Ptolemaic Kingdom Louis XVIII
Dendera zodiac
[ "Astronomy" ]
1,557
[ "Ancient astronomy", "History of astronomy" ]
15,859,209
https://en.wikipedia.org/wiki/Molemax
MoleMax was the first digital epiluminescence microscopy (dermatoscopy) system, developed in cooperation with the Department of Dermatology of the Medical University of Vienna. It is currently owned and distributed by DermaMedicalSystems. History In 1997, MoleMax was presented to international experts at the Melanoma World Congress and the following Dermatology World Congress in Sydney, where it generated great public interest. Since then, over 2000 MoleMax systems have been put into use in over 50 countries. Today, MoleMax is a worldwide-accepted clinical standard in digital epiluminescence microscopy. Methodology Thanks to a worldwide-patented light polarisation technique for cameras in contact with the skin, these camera systems do not require any immersion fluid for epiluminescence microscopic analysis. Scientific use The MoleMax system has been part of multiple scientific works, such as measurements of the growth rate of pigmented skin lesions and verification of follow-up imaging. Images made by this system have also ended up in large public image databases such as HAM10000. References Microscopy Dermatology
Molemax
[ "Chemistry" ]
212
[ "Microscopy" ]
15,859,817
https://en.wikipedia.org/wiki/Cyttaria%20espinosae
Cyttaria espinosae (Lloyd), also known by its local names digüeñe, dihueñe, lihueñe, quireñe, pinatra, or quideñe, is an orange-white, edible ascomycete fungus native to south-central Chile and Argentinean Patagonia. The digüeñe is a strict and specific parasite of Nothofagus, mainly Nothofagus obliqua trees, and causes canker-like galls on branches from which the fruiting bodies emerge between spring and early summer. The pitted surface generates air turbulence, preventing a build-up of static air around the fruitbodies, thus facilitating wind-borne spore dispersal. Culinary use C. espinosae's flavor is described as between sweet and bland. In Patagonian cuisine, the digüeñe is usually consumed fresh in salads or fried with scrambled eggs as an empanada stuffing. It is traditionally consumed by the Mapuche people. References External links Chileflora.com Fungi of Chile Edible fungi Parasitic fungi Leotiomycetes Fungus species
Cyttaria espinosae
[ "Biology" ]
227
[ "Fungi", "Fungus species" ]