id: int64 (values range from 580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (values range from 3 to 51.8k)
11,436,484
https://en.wikipedia.org/wiki/Cercospora%20longissima
Cercospora longissima is a fungal plant pathogen. References longissima Fungal plant pathogens and diseases Fungus species
Cercospora longissima
Biology
27
2,338,535
https://en.wikipedia.org/wiki/Information%20operations%20condition
INFOCON (short for information operations condition) is a threat level system in the United States similar to that of FPCON. It is a defense system based primarily on the status of information systems and is a method used by the military to defend against a computer network attack. Description There are five levels of INFOCON, which were recently changed to correlate more closely with DEFCON levels. They are: INFOCON 5 describes a situation where there is no apparent hostile activity against computer networks. Operational performance of all information systems is monitored, and password systems are used as a layer of protection. INFOCON 4 describes an increased risk of attack. Increased monitoring of all network activities is mandated, and all Department of Defense end users must make sure their systems are secure. Internet usage may be restricted to government sites only, and backing up files to removable media is ideal. INFOCON 3 describes when a risk has been identified. Security review on important systems is a priority, and the Computer Network Defense system's alertness is increased. All unclassified dial-up connections are disconnected. INFOCON 2 describes when an attack has taken place but the Computer Network Defense system is not at its highest alertness. Non-essential networks may be taken offline, and alternate methods of communication may be implemented. INFOCON 1 describes when attacks are taking place and the Computer Network Defense system is at maximum alertness. Any compromised systems are isolated from the rest of the network. Similar concepts in private-sector computing ThreatCon (Symantec) Symantec's ThreatCon service no longer exists; Broadcom has acquired Symantec. In popular culture In the TV series Crisis, the US government goes to INFOCON 2 when Francis Gibson initiates a massive cyber attack on the United States, nearly bringing it to war with China. See also Alert state Attack (computing) LERTCON DEFCON EMERGCON FPCON (THREATCON) Threat (computer) WATCHCON References Alert measurement systems
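The five-level scheme maps naturally onto a small enumeration. The following is a minimal illustrative sketch (the level names and one-line summaries are paraphrased from the description above; this is not an official DoD data structure):

```python
from enum import IntEnum

class Infocon(IntEnum):
    """INFOCON levels; higher number means lower threat (paraphrased summaries)."""
    MAXIMUM_ALERT = 1        # attacks under way; compromised systems isolated
    ATTACK_OCCURRED = 2      # attack has taken place; non-essential networks may go offline
    RISK_IDENTIFIED = 3      # identified risk; security reviews prioritised
    INCREASED_RISK = 4       # increased risk; extra monitoring mandated
    NO_HOSTILE_ACTIVITY = 5  # routine monitoring and password protection only

# Example: tightening posture one step in response to an incident report.
current = Infocon.NO_HOSTILE_ACTIVITY
raised = Infocon(max(current - 1, Infocon.MAXIMUM_ALERT))
print(raised.name)  # INCREASED_RISK
```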
Information operations condition
Technology
403
43,758,806
https://en.wikipedia.org/wiki/Gus%20Crystal
Gus Crystal is a Russian manufacturer of glass (lead glass or so-called "crystal"). The company is the oldest surviving manufacturer of Russian crystal and was founded in 1756 on the Gus River. The company gave its name to the town of Gus-Khrustalny and its district. It was founded by Akim Maltsov, a merchant from the Oryol region. Since 2013 the plant has been known as the Gusevskaya Crystal Plant named after Akim Maltsov. History In the summer of 1756, a merchant from the Oryol region, Akim Maltsev, founded a glass factory in the Vladimir Province of the Moscow Governorate, near the Gus River. Initially, the factory produced only simple glasses and tumblers, but in 1830, the founder's heir, Ivan Maltsev, established crystal production, making it as high-quality as Bohemian crystal but more affordable. For a century and a half after its founding, the factory operated successfully and expanded. In the final years of the Russian Empire, the Maltsev heirs not only renovated the factory but also reconstructed much of the city, building red brick houses for workers that still stand today, as well as individual cottages for management and the St. George Cathedral. The construction of the cathedral involved the participation of Leonty Benois and Viktor Vasnetsov. Today, the cathedral houses a crystal museum, which displays thousands of unique pieces produced by the Gus Crystal Factory. After the October Revolution of 1917 and the resulting devastation, the factory ceased operations. Production was only resumed in 1923 after a visit to Gus-Khrustalny by Mikhail Kalinin and the allocation of special funding. During the Soviet era, the factory became known for producing faceted glasses, which were presumably designed by Vera Mukhina. The factory produced them in quantities of tens of millions. At the same time, the factory also produced artistic glass, including multicolored glass, and glassblowers continued their work there. Additionally, the factory produced products incorporating colored Venetian threads. Recent history In the 1990s, the factory was privatized, with each workshop becoming its own legal entity. Each of these entities not only supplied products to the neighboring workshop but sold them to it, leading to a markup at each stage and ultimately making the final product's price uncompetitive. At the same time, criminal interests became involved with the factory, focusing on immediate profit rather than the development of the enterprise. As a result, one by one, the workshops declared themselves financially insolvent, and in 2000, the main factory declared bankruptcy as well. On January 19, 2012, the factory in its previous form ceased to exist, and the last hundred employees were laid off. On December 26, 2013, crystal production was resumed at the factory known as "Gusevsky Crystal Factory named after Akim Maltsev". New equipment was installed in the old workshop, and instead of traditional vases and glasses, the production of handcrafted crystal art pieces to individual orders was established. References External links Official website Official website of the Gus Crystal plant Official website of the Administration of the Gus-Khrustalny city Glassmaking companies Companies established in 1756 Manufacturing companies of Russia Russian brands Companies nationalised by the Soviet Union Companies based in Vladimir Oblast Manufacturing companies established in 1856
Gus Crystal
Materials_science,Engineering
673
51,717,116
https://en.wikipedia.org/wiki/Bass%E2%80%93Quillen%20conjecture
In mathematics, the Bass–Quillen conjecture relates vector bundles over a regular Noetherian ring A and over the polynomial ring A[t1, ..., tn]. The conjecture is named for Hyman Bass and Daniel Quillen, who formulated it. Statement of the conjecture The conjecture is a statement about finitely generated projective modules. Such modules are also referred to as vector bundles. For a ring A, the set of isomorphism classes of vector bundles over A of rank r is denoted by Vect_r(A). The conjecture asserts that for a regular Noetherian ring A the assignment Vect_r(A) → Vect_r(A[t1, ..., tn]) yields a bijection. Known cases If A = k is a field, the Bass–Quillen conjecture asserts that any projective module over k[t1, ..., tn] is free. This question was raised by Jean-Pierre Serre and was later proved by Quillen and Suslin; see Quillen–Suslin theorem. More generally, the conjecture has been shown in the case that A is a smooth algebra over a field k. Further known cases are reviewed in the literature. Extensions The set of isomorphism classes of vector bundles of rank r over A can also be identified with the nonabelian cohomology group H^1(Spec A, GL_r). Positive results about the homotopy invariance of the corresponding cohomology sets H^1(Spec A, G) for isotropic reductive groups G have been obtained by means of A1 homotopy theory. References Commutative algebra Algebraic K-theory Algebraic geometry
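For readability, the statement reconstructed above can also be written compactly in LaTeX. This is a standard formulation of the conjecture, with Vect_r denoting isomorphism classes of rank-r vector bundles (equivalently, rank-r finitely generated projective modules):

```latex
% Bass–Quillen conjecture (standard formulation, rank r, n variables):
% for every regular Noetherian ring A, extension of scalars
\[
  \operatorname{Vect}_r(A) \longrightarrow \operatorname{Vect}_r(A[t_1,\dots,t_n]),
  \qquad [P] \longmapsto [P \otimes_A A[t_1,\dots,t_n]],
\]
% is conjectured to be a bijection; equivalently, every finitely generated
% projective A[t_1,\dots,t_n]-module is extended from A.
```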
Bass–Quillen conjecture
Mathematics
269
44,030,009
https://en.wikipedia.org/wiki/Generalized%20Wiener%20filter
The Wiener filter as originally proposed by Norbert Wiener is a signal processing filter which uses knowledge of the statistical properties of both the signal and the noise to reconstruct an optimal estimate of the signal from a noisy one-dimensional time-ordered data stream. The generalized Wiener filter generalizes the same idea beyond the domain of one-dimensional time-ordered signal processing, with two-dimensional image processing being the most common application. Description Consider a data vector d = s + n which is the sum of independent signal and noise vectors s and n with zero mean and covariances S and N. The generalized Wiener filter is the linear operator W which minimizes the expected residual between the estimated signal W d and the true signal s, i.e. the expectation value of |W d - s|^2. The W that minimizes this is W = S (S + N)^-1, resulting in the Wiener estimator s_hat = S (S + N)^-1 d. In the case of Gaussian distributed signal and noise, this estimator is also the maximum a posteriori estimator. The generalized Wiener filter approaches 1 for signal-dominated parts of the data, and S/N for noise-dominated parts. An often-seen variant expresses the filter in terms of inverse covariances. This is mathematically equivalent, but avoids excessive loss of numerical precision in the presence of high-variance modes. In this formulation, the generalized Wiener filter becomes s_hat = (S^-1 + N^-1)^-1 N^-1 d, using the identity S (S + N)^-1 = (S^-1 + N^-1)^-1 N^-1. An example The cosmic microwave background (CMB) is a homogeneous and isotropic random field, and its covariance is therefore diagonal in a spherical harmonics basis. Any given observation of the CMB will be noisy, with the noise typically having different statistical properties than the CMB. It could for example be uncorrelated in pixel space. The generalized Wiener filter exploits this difference in behavior to isolate as much as possible of the signal from the noise. The Wiener-filtered estimate of the signal (the CMB in this case) requires the inversion of the usually huge matrix (S + N). If S and N were diagonal in the same basis this would be trivial, but often, as here, that isn't the case. The solution must in these cases be found by solving the equivalent equation (S^-1 + N^-1) s_hat = N^-1 d, for example via conjugate gradient iteration. In this case all the multiplications can be performed in the appropriate basis for each matrix, avoiding the need to store or invert more than their diagonal. See also Wiener filter Norbert Wiener Wiener deconvolution Maximum a posteriori estimation References Signal processing filter
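A minimal numerical sketch of the two algebraically equivalent formulations above, using NumPy on small dense matrices (real applications use structured covariances and iterative solvers rather than explicit inverses; the covariance values and variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy covariances for a 4-element data vector (assumed known).
S = np.diag([4.0, 2.0, 1.0, 0.25])   # signal covariance
N = np.diag([0.5, 0.5, 0.5, 0.5])    # noise covariance

# Simulated data: d = s + n with independent zero-mean signal and noise.
s = rng.multivariate_normal(np.zeros(4), S)
n = rng.multivariate_normal(np.zeros(4), N)
d = s + n

# Direct form: s_hat = S (S + N)^-1 d
s_hat_direct = S @ np.linalg.solve(S + N, d)

# Inverse-covariance form: (S^-1 + N^-1) s_hat = N^-1 d
A = np.linalg.inv(S) + np.linalg.inv(N)
b = np.linalg.solve(N, d)
s_hat_inverse = np.linalg.solve(A, b)

print(np.allclose(s_hat_direct, s_hat_inverse))  # True: the two forms agree
```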
Generalized Wiener filter
Chemistry
488
25,665,856
https://en.wikipedia.org/wiki/C5H11NO
The molecular formula C5H11NO (molar mass: 101.15 g/mol) may refer to: Diethylformamide N-Hydroxypiperidine Isovaleramide N-Methylmorpholine Pivalamide Prolinol
C5H11NO
Chemistry
70
30,493,979
https://en.wikipedia.org/wiki/TassDB
TassDB (TAndem Splice Site DataBase) is a database of tandem splice sites of eight species. See also Alternative splicing References External links https://archive.today/20070106023527/http://helios.informatik.uni-freiburg.de/TassDB/. Genetics databases Gene expression Spliceosome RNA splicing
TassDB
Chemistry,Biology
84
7,907,848
https://en.wikipedia.org/wiki/Surface%20lift
A surface lift is a type of cable transport for snow sports in which skiers and snowboarders remain on the ground as they are pulled uphill. While they were once prevalent, they have been overtaken in popularity by higher-capacity and higher-comfort aerial lifts, such as chairlifts and gondola lifts. Today, surface lifts are most often found on beginner slopes, small ski areas, and peripheral slopes. They are also often used to access glacier ski slopes because their supports, which carry lower forces, can be anchored in glacier ice and realigned as the glacier moves. Surface lifts have some disadvantages compared to aerial lifts: they require more passenger skill and may be difficult for some beginners (especially snowboarders, whose boards point at an angle to the direction of travel) and children; sometimes they lack a suitable route back to the piste; the snow surface must be continuous; they can get in the way of skiable terrain; they are relatively slow and have lower capacity. Surface lifts have some advantages over aerial lifts: they can be exited before the lift reaches the top, they can often continue operating in wind conditions too strong for a chairlift; they require less maintenance and are much less expensive to install and operate. History The first surface lift was built in 1908 by German Robert Winterhalder in Schollach/Eisenbach, Hochschwarzwald, Germany, and started operations on February 14, 1908. A steam-powered toboggan tow was built in Truckee, California, in 1910. The first skier-specific tow in North America was apparently installed in 1933 by Alec Foster at Shawbridge in the Laurentians outside Montreal, Quebec. The Shawbridge tow was quickly copied at Woodstock, Vermont, in New England, in 1934 by Bob and Betty Royce, proprietors of the White Cupboard Inn. Their tow was driven by the rear wheel of a Ford Model A. Wallace "Bunny" Bertram took it over for the second season, improved the operation, renamed it from Ski-Way to Ski Tow, and eventually moved it to what became the eastern fringe of Vermont's major southern ski areas, a regional resort still operating as Saskadena Six. Their relative simplicity made tows widespread and contributed to an explosion of the sport in the United States and Europe. Before tows, only people willing to walk uphill could ski. Suddenly relatively nonathletic people could participate, greatly increasing the appeal of the sport. Within five years, more than 100 rope tows were operating in North America. Rope tow A rope tow consists of a cable or rope running through a bullwheel (large horizontal pulley) at the bottom and one at the top, powered by an engine at one end. In the simplest case, passengers grab hold of the rope and are pulled up the hill while standing on their skis or snowboards. The grade of this style of tow is limited by passenger grip strength and the fact that sheaves (pulleys that support the rope above the ground) cannot be used. Handle tow A development of the simple rope tow is the handle tow (or pony lift), where plastic or metal handles are permanently attached to the rope. These handles are easier to grip than a rope, making the ski lift easier to ride. Nutcracker tow Steeper, faster and longer tows require a series of pulleys to support the rope at waist height and hence require the use of some sort of "tow gripper". 
Several were designed and used in the 1930s and 40s, but the most successful was the "nutcracker" attached to a harness around the hips. To this is attached a clamp, much like the nutcracker from which it derives its name, which the rider attaches to the rope. This eliminates the need to hold on to the rope directly. This system was used on many fields worldwide from the 1940s, and remains popular at 'club fields', especially in New Zealand. This type of ski lift is often referred to as a nutcracker tow. J-bar, T-bar, and platter lift J-bar, T-bar, and platter lifts are employed for low-capacity slopes in large resorts and small local areas. These consist of an aerial cable loop running over a series of wheels, powered by an engine at one end. Hanging from the cable is a series of vertical recoiling cables, each attached to a horizontal J- or T-shaped bar – which is placed behind the skier's buttocks or between the snowboarder's legs – or a plastic button or platter that is placed between the skier's legs. Snowboarders place the platter behind the top of their front leg or in front of their chest under their rear arm and hold it in position with their hands. These pull the passengers uphill while they ski or snowboard across the ground. Platter lifts are often referred to as button lifts, and may occasionally feature rigid poles instead of recoiling cables. The modern J-bar and T-bar mechanism was invented in 1934 by the Swiss engineer Ernst Constam, with the first lift installed in Davos, Switzerland. J-bars were installed in the 1930s in North America and Australia, with the Ski Hoist at Charlotte Pass in Australia dating from 1938. The first T-bar lift in the United States was installed in 1940 at the Pico Mountain ski area. It was considered a great improvement over the rope tow. An earlier T-bar was installed at Rib Mountain (now Granite Peak Ski Area), Wisconsin, in 1937. In recent years, J-bars have fallen out of use at most ski areas. Some operators have combined T-bar and platter lifts, attaching both types of hanger to the cable, giving skiers and snowboarders a choice. Hangers designed to tow sledges uphill are installed on some slopes by operators, and some operators convert hangers in the summer to tow cyclists uphill. Detachable platter or Poma lift A variant of the platter lift is the detachable surface lift, commonly known as a “Poma lift”, after the company which introduced them. Unlike most other platter lifts, which are similar to T-bars with the stick attached to a spring box by a retractable cord, Poma lifts have a detachable grip on the tow cable with the button connected to the grip by a semi-rigid pole. Platters return to the bottom station, detach from the cable, and are stored on a rail until a skier slides the platter forwards to use it. Most detachable surface lifts operate at speeds of around , while platters and T-bars can operate up to , although they are generally slower. When the grip attaches to the cable, the passenger's acceleration is lessened by the spring-loaded pole. Magic carpet A magic carpet is a conveyor belt installed at the level of the snow. Some include a canopy or tunnel. Passengers slide onto the belt at the base of the hill and stand with skis or snowboard facing forward. The moving belt pulls the passengers uphill. At the top, the belt pushes the passengers onto the snow and they slide away. They are easier to use than T-bar lifts and Poma lifts. 
Magic carpets are limited to shallow grades due to their dependence on friction between the carpet and the bottom of the ski or board. Their slow speed, limited distance, and limited capacity confine them to beginner and novice areas. The longest carpet lift is a Sunkid carpet lift installed at the Alpine Centre in Bottrop, Germany. Some other notable examples are the carpet in Burnsville, USA, which has an overpass over a ski run, the tubing hill magic carpet at Soldier Hollow in Utah, USA, the tunnel at Snowbird, Utah, USA, and the installation at Stratton Mountain Resort, USA. References Vertical transport devices Cable transport Ski lift types 1908 introductions
Surface lift
Technology
1,645
75,618,329
https://en.wikipedia.org/wiki/List%20of%20Germans%20relocated%20to%20the%20US%20via%20the%20Operation%20Paperclip
Operation Paperclip was a secret United States intelligence program in which more than 1,600 German scientists, engineers, and technicians were taken from the former Nazi Germany to the U.S. for government employment after the end of World War II in Europe, between 1945 and 1959. Conducted by the Joint Intelligence Objectives Agency (JIOA), it was largely carried out by special agents of the U.S. Army's Counterintelligence Corps (CIC). Many of these Germans were former Nazi members and some worked with the leaders of the Nazi Party. Key recruits Aeronautics and rocketry Many engineers had been involved with the V-2 in Peenemünde, and 127 of them eventually entered the U.S. through Operation Paperclip. They were also known as the Von Braun Group. Hans Amtmann Herbert Axster Erich Ball Oscar Bauschinger Hermann Beduerftig Rudi Beichel Anton Beier Herbert Bergeler Rudi Berndt, expert in parachute development Magnus von Braun Wernher von Braun Ernst Czerlinsky Walter Burose Adolf Busemann GN Constan Werner Dahm Konrad Dannenberg Kurt H. Debus Gerd De Beek Walter Dornberger – head of rocket programme Gerhard Drawe Friedrich Duerr Ernst R. G. Eckert Otto Eisenhardt Krafft Arnold Ehricke Alfred Finzel Edward Fischel Karl Fleischer Anton Flettner Anselm Franz Herbert Fuhrmann Ernst Geissler Werner Gengelbach Dieter Grau Hans Gruene Herbert Guendel Fritz Haber Heinz Haber Karl Hager Guenther Haukohl Walter Häussermann Karl Heimburg Emil Hellebrand Gerhard B. Heller Bruno Helm Rudolf Hermann Bruno Heusinger Hans Hueter Guenther Hintze Sighard F. Hoerner Kurt Hohenemser Oscar Holderer Helmut Horn , Director of Flight Dynamics, Marshall Space Flight Center Dieter Huzel Walter Jacobi Erich Kaschig Ernst Klauss Theodore Knacke Siegfried Knemeyer Heinz-Hermann Koelle Gustav Kroll Willi Kuberg Werner Kuers Hermann Kurzweg Hermann Lange Hans Lindenberg Hans Lindenmayer Alexander Martin Lippisch – aeronautical engineer Robert Lusser Hans Maus Helmut Merk Joseph Michel Hans Milde Heinz Millinger Rudolf Minning William Mrazek Erich W. Neubert Hans von Ohain (designer of German jet engines) Robert Paetz Hans Palaoro Kurt Patt Hans Paul Fritz Pauli Arnold Peter Helmuth Pfaff Theodor Poppel Werner Rosinski Ludwig Roth Heinrich Rothe Martin Schilling Helmut Schlitt Albert Schuler Walter Schwidetzky Ernst Steinhoff Wolfgang Steurer Heinrich Struck Ernst Stuhlinger Bernhard Tessmann Adolf Thiel Georg von Tiesenhausen Werner Tiller JG Tschinkel Arthur Urbanski Fritz Vandersee Richard Vogt Woldemar Voigt, designer of Messerschmitt P.1101 Werner Voss Theodor Vowe Herbert A. Wagner Hermann Rudolf Wagner Hermann Weidner Walter Fritz Wiesemann Philipp Wolfgang Zettler-Seidel Architecture Heinz Hilten Hannes Luehrsen Electronics – including guidance systems, radar and satellites Josef Boehm Hans Fichtner Hans Friedrich Eduard Gerber Walter Haeussermann Otto Hoberg Rudolf Hoelker Hans Hollmann Helmut Hölzer Helmut Horn Wilhelm Jungert Horst Kedesdy Georg ("George") Emil Knausenberger Heinz-Hermann Koelle Max Kramer Hubert E. Kroh Hermann H. Kurzweg Kurt Lehovec Kurt Lindner Alexander Martin Lippisch JW Muehlner Fritz Mueller William Mrazek Hans R. Palaoro Johannes Plendl Fritz Karl Preikschat Eberhard Rees Gerhard Reisig Georg Rickhey Werner Rosinski Ludwig Roth Arthur Rudolph Walter Schwidetzky Harry Ruppe Friedrich von Saurma William August Schulze Heinz Schlicke Werner Sieber Othmar Stuetzer Albin Wittmann Hugo Woerdemann Albert Zeiler Hans K. 
Ziegler Helmut Zoike Material Science (high temperature) Werner Osenberg Klaus Scheufelen Rudolf Schlidt Medicine – including biological weapons, chemical weapons, and space medicine Kurt Blome Rudolf Brill Konrad Johannes Karl Büttner Paul Anton Cibis Fritz Laves Richard Lindenberg Walter Schreiber Hubertus Strughold Hans Georg Clamann Erich Traub Physics Gunter Guttein Willibald Jentschke Gerhard Schwesinger Gottfried Wehner Helmut Weickmann Friedwardt Winterberg Chemistry and Chemical engineering Helmut Pichler Leonard Alberts Ernst Donath Josef Guymer Hans Schappert Max Josenhaus Kurt Bretschneider Erich Frese See also Allied plans for German industry after World War II German influence on the Soviet space program Operation Osoaviakhim, USSR operation on German specialists List of Germans transported to the USSR via the Operation Osoaviakhim Further reading References Operation Paperclip Brain drain Aftermath of World War II in the United States Allied occupation of Germany Cold War history of the United States German-American history German technology-related lists Office of Strategic Services Science and technology during World War II Science in Nazi Germany United States intelligence operations World War II operations and battles of Europe Wernher von Braun American secret government programs German rocket scientists in the United States ! Marshall Space Flight Center NASA people
List of Germans relocated to the US via the Operation Paperclip
Technology
1,107
173,407
https://en.wikipedia.org/wiki/Analog%20sampled%20filter
An analog sampled filter is an electronic filter that is a hybrid between an analog and a digital filter. The input is an analog signal, and the samples are usually stored in capacitors. The time domain is discrete, however. Distinct analog samples are shifted through an array of holding capacitors as in a bucket brigade. Analog adders and amplifiers do the arithmetic in the signal domain, just as in an analog computer. Note that these filters are subject to aliasing phenomena just like a digital filter, and anti-aliasing filters will usually be required. Companies such as Linear Technology and Maxim produce integrated circuits that implement this functionality. Filters up to the 8th order may be implemented using a single chip. Some are fully configurable; some are pre-configured, usually as low-pass filters. Due to the high filter order that can be achieved in an easy and stable manner, single chip analog sampled filters are often used for implementing anti-aliasing filters for digital filters. The analog sampled filter will in its turn need yet another anti-aliasing filter, but this can often be implemented as a simple 1st order low-pass analog filter consisting of one series resistor and one capacitor to ground. Linear filters Electronic circuits
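Because the samples are shifted through holding capacitors in discrete time steps, the behaviour can be modelled as an ordinary discrete-time (FIR-style) filter. A minimal sketch, with illustrative tap weights standing in for the analog adder and amplifier gains:

```python
from collections import deque

def analog_sampled_filter(samples, taps):
    """Model a bucket-brigade style sampled filter: each input sample is
    shifted through a chain of 'holding capacitors' (the deque) and the
    stored values are combined by weighted summation (the analog adders)."""
    stages = deque([0.0] * len(taps), maxlen=len(taps))
    out = []
    for x in samples:
        stages.appendleft(x)                       # shift the new sample in
        out.append(sum(w * v for w, v in zip(taps, stages)))
    return out

# Example: a simple 4-tap averaging response (illustrative weights).
signal = [0, 1, 2, 3, 2, 1, 0, 0]
print(analog_sampled_filter(signal, taps=[0.25, 0.25, 0.25, 0.25]))
```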
Analog sampled filter
Engineering
249
61,003
https://en.wikipedia.org/wiki/Prototype-based%20programming
Prototype-based programming is a style of object-oriented programming in which behavior reuse (known as inheritance) is performed via a process of reusing existing objects that serve as prototypes. This model can also be known as prototypal, prototype-oriented, classless, or instance-based programming. Prototype-based programming uses generalized objects, which can then be cloned and extended. Using fruit as an example, a "fruit" object would represent the properties and functionality of fruit in general. A "banana" object would be cloned from the "fruit" object and properties specific to bananas would be appended. Each individual "banana" object would be cloned from the generic "banana" object. Compare to the class-based paradigm, where a "fruit" class would be extended by a "banana" class. The first prototype-based programming languages were Director a.k.a. Ani (on top of MacLisp) (1976-1979), and contemporaneously and not independently, ThingLab (on top of Smalltalk) (1977-1981), respective PhD projects by Kenneth Michael Kahn at MIT and Alan Hamilton Borning at Stanford (but working with Alan Kay at Xerox PARC). Borning introduced the word "prototype" in his TOPLAS 1981 paper. The first prototype-based programming language with more than one implementer or user was probably Yale T Scheme (1981-1984), though like Director and ThingLab initially, it just speaks of objects without classes. The language that made the name and notion of prototypes popular was Self (1985-1995), developed by David Ungar and Randall Smith to research topics in object-oriented language design. Since the late 1990s, the classless paradigm has grown increasingly popular. Some current prototype-oriented languages are JavaScript (and other ECMAScript implementations such as JScript and Flash's ActionScript 1.0), Lua, Cecil, NewtonScript, Io, Ioke, MOO, REBOL and AHK. Since the 2010s, a new generation of languages with pure functional prototypes has appeared that reduces OOP to its very core: Jsonnet is a dynamic lazy pure functional language with a builtin prototype object system using mixin inheritance; Nix is a dynamic lazy pure functional language that builds an equivalent object system (Nix "extensions") in just two short function definitions (plus many other convenience functions). Both languages are used to define large distributed software configurations (Jsonnet being directly inspired by GCL, the Google Configuration Language, with which Google defines all its deployments, and has similar semantics though with dynamic binding of variables). Since then, other languages like Gerbil Scheme have implemented pure functional lazy prototype systems based on similar principles. Design and implementation Etymologically, a "prototype" means "first cast" ("cast" in the sense of being manufactured). A prototype is a concrete thing, from which other objects can be created by copying and modifying. For example, the International Prototype of the Kilogram is an actual object that really exists, from which new kilogram-objects can be created by copying. In comparison, a "class" is an abstract thing, to which objects can belong. For example, all kilogram-objects are in the class of KilogramObject, which might be a subclass of MetricObject, and so on. 
Prototypal inheritance in JavaScript has been described by Douglas Crockford. Advocates of prototype-based programming argue that it encourages the programmer to focus on the behavior of some set of examples and only later worry about classifying these objects into archetypal objects that are later used in a fashion similar to classes. Many prototype-based systems encourage the alteration of prototypes during run-time, whereas only very few class-based object-oriented systems (such as the dynamic object-oriented systems Common Lisp, Dylan, Objective-C, Perl, Python, Ruby, and Smalltalk) allow classes to be altered during the execution of a program. Almost all prototype-based systems are based on interpreted and dynamically typed languages. Systems based on statically typed languages are technically feasible, however. The Omega language discussed in Prototype-Based Programming is an example of such a system, though according to Omega's website even Omega is not exclusively static, but rather its "compiler may choose to use static binding where this is possible and may improve the efficiency of a program." Object construction In prototype-based languages there are no explicit classes. Objects inherit directly from other objects through a prototype property. The prototype property is called prototype in Self and JavaScript, or proto in Io. There are two methods of constructing new objects: ex nihilo ("from nothing") object creation or through cloning an existing object. The former is supported through some form of object literal declaration, where objects can be defined at runtime through special syntax such as {...} and passed directly to a variable. While most systems support a variety of cloning, ex nihilo object creation is not as prominent. In class-based languages, a new instance is constructed through a class's constructor function, a special function that reserves a block of memory for the object's members (properties and methods) and returns a reference to that block. An optional set of constructor arguments can be passed to the function and are usually held in properties. The resulting instance will inherit all the methods and properties that were defined in the class, which acts as a kind of template from which similarly typed objects can be constructed. Systems that support ex nihilo object creation allow new objects to be created from scratch without cloning from an existing prototype. Such systems provide a special syntax for specifying the properties and behaviors of new objects without referencing existing objects. In many prototype languages there exists a root object, often called Object, which is set as the default prototype for all other objects created in run-time and which carries commonly needed methods such as a toString() function to return a description of the object as a string. One useful aspect of ex nihilo object creation is to ensure that a new object's slot (properties and methods) names do not have namespace conflicts with the top-level Object object. (In the JavaScript language, one can do this by using a null prototype, i.e. Object.create(null).) Cloning refers to a process whereby a new object is constructed by copying the behavior of an existing object (its prototype). The new object then carries all the qualities of the original. From this point on, the new object can be modified. 
In some systems the resulting child object maintains an explicit link (via delegation or resemblance) to its prototype, and changes in the prototype cause corresponding changes to be apparent in its clone. Other systems, such as the Forth-like programming language Kevo, do not propagate change from the prototype in this fashion and instead follow a more concatenative model where changes in cloned objects do not automatically propagate across descendants. // Example of true prototypal inheritance style in JavaScript. // Object creation using the literal object notation {}. const foo = { name: "foo", one: 1, two: 2 }; // Another object. const bar = { two: "two", three: 3 }; // Object.setPrototypeOf() is a method introduced in ECMAScript 2015. // For the sake of simplicity, let us pretend that the following // line works regardless of the engine used: Object.setPrototypeOf(bar, foo); // foo is now the prototype of bar. // If we try to access foo's properties from bar from now on, // we'll succeed. bar.one; // Resolves to 1. // The child object's properties are also accessible. bar.three; // Resolves to 3. // Own properties shadow prototype properties. bar.two; // Resolves to "two". bar.name; // Unaffected, resolves to "foo". foo.name; // Resolves to "foo". For another example: const foo = { one: 1, two: 2 }; // bar.[[prototype]] = foo const bar = Object.create(foo); bar.three = 3; bar.one; // 1 bar.two; // 2 bar.three; // 3 Delegation In prototype-based languages that use delegation, the language runtime is capable of dispatching the correct method or finding the right piece of data simply by following a series of delegation pointers (from object to its prototype) until a match is found. All that is required to establish this behavior-sharing between objects is the delegation pointer. Unlike the relationship between class and instance in class-based object-oriented languages, the relationship between the prototype and its offshoots does not require that the child object have a memory or structural similarity to the prototype beyond this link. As such, the child object can continue to be modified and amended over time without rearranging the structure of its associated prototype as in class-based systems. It is also important to note that not only data, but also methods can be added or changed. For this reason, some prototype-based languages refer to both data and methods as "slots" or "members". Concatenation In concatenative prototyping - the approach implemented by the Kevo programming language - there are no visible pointers or links to the original prototype from which an object is cloned. The prototype (parent) object is copied rather than linked to and there is no delegation. As a result, changes to the prototype will not be reflected in cloned objects. Incidentally, the Cosmos programming language achieves the same through the use of persistent data structures. The main conceptual difference under this arrangement is that changes made to a prototype object are not automatically propagated to clones. This may be seen as an advantage or disadvantage. (However, Kevo does provide additional primitives for publishing changes across sets of objects based on their similarity — so-called family resemblances or clone family mechanism — rather than through taxonomic origin, as is typical in the delegation model.) It is also sometimes claimed that delegation-based prototyping has an additional disadvantage in that changes to a child object may affect the later operation of the parent. 
However, this problem is not inherent to the delegation-based model and does not exist in delegation-based languages such as JavaScript, which ensure that changes to a child object are always recorded in the child object itself and never in parents (i.e. the child's value shadows the parent's value rather than changing the parent's value). In simplistic implementations, concatenative prototyping will have faster member lookup than delegation-based prototyping (because there is no need to follow the chain of parent objects), but will conversely use more memory (because all slots are copied, rather than there being a single slot pointing to the parent object). More sophisticated implementations can avoid this problem, however, although trade-offs between speed and memory are required. For example, systems with concatenative prototyping can use a copy-on-write implementation to allow for behind-the-scenes data sharing — and such an approach is indeed followed by Kevo. Conversely, systems with delegation-based prototyping can use caching to speed up data lookup. Criticism Advocates of class-based object models who criticize prototype-based systems often have concerns similar to the concerns that proponents of static type systems for programming languages have of dynamic type systems (see datatype). Usually, such concerns involve correctness, safety, predictability, efficiency and programmer unfamiliarity. On the first three points, classes are often seen as analogous to types (in most statically typed object-oriented languages they serve that role) and are proposed to provide contractual guarantees to their instances, and to users of their instances, that they will behave in some given fashion. Regarding efficiency, declaring classes simplifies many compiler optimizations that allow developing efficient method and instance-variable lookup. For the Self language, much development time was spent on developing, compiling, and interpreting techniques to improve the performance of prototype-based systems versus class-based systems. A common criticism made against prototype-based languages is that the community of software developers is unfamiliar with them, despite the popularity and market permeation of JavaScript. However, knowledge about prototype-based systems is increasing with the proliferation of JavaScript frameworks and the complex use of JavaScript as the World Wide Web (Web) matures. ECMAScript 6 introduced classes as syntactic sugar over JavaScript's existing prototype-based inheritance, providing an alternative way to create objects and manage inheritance. Languages supporting prototype-based programming Actor-Based Concurrent Language (ABCL): ABCL/1, ABCL/R, ABCL/R2, ABCL/c+ Agora AutoHotkey Cecil and Diesel of Craig Chambers ColdC COLA Common Lisp Cyan ECMAScript ActionScript 1.0, used by Adobe Flash and Adobe Flex ECMAScript for XML (E4X) JavaScript JScript TypeScript Io Ioke Jsonnet Logtalk LPC Lua M2000 Maple MOO Neko NewtonScript Nim Nix Object Lisp Obliq Omega OpenLaszlo Perl, with the Class::Prototyped module Python with prototype.py. R, with the proto package REBOL Red (programming language) Ruby (programming language) Self Seph Slate (programming language) SmartFrog Snap! Etoys TADS Tcl with snit extension Umajin See also Class-based programming (contrast) Differential inheritance Programming paradigm References Further reading Class Warfare: Classes vs. Prototypes, by Brian Foote. 
Using Prototypical Objects to Implement Shared Behavior in Object Oriented Systems, by Henry Lieberman, 1986. Object-oriented programming Programming paradigms Type theory
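The delegation and concatenation models contrasted above can be summarized in a short sketch. The article's own examples are in JavaScript; the following is an illustrative Python analogue (the class, function, and slot names are invented for this sketch), showing that changes to a prototype remain visible under delegation but not under concatenative copying:

```python
import copy

class DelegatingObject:
    """Delegation-style lookup: a child keeps a reference to its prototype
    and falls back to it when a slot is missing."""
    def __init__(self, proto=None, **slots):
        self.proto = proto
        self.slots = dict(slots)
    def get(self, name):
        if name in self.slots:
            return self.slots[name]
        if self.proto is not None:
            return self.proto.get(name)
        raise AttributeError(name)

def clone_concatenative(proto_slots, **overrides):
    """Concatenation-style cloning: the prototype's slots are copied,
    so later changes to the prototype do not propagate."""
    new_slots = copy.deepcopy(proto_slots)
    new_slots.update(overrides)
    return new_slots

fruit = DelegatingObject(colour="unknown", edible=True)
banana = DelegatingObject(proto=fruit, colour="yellow")
fruit.slots["edible"] = False          # visible through banana (delegation)
print(banana.get("edible"))            # False

fruit_slots = {"colour": "unknown", "edible": True}
banana_slots = clone_concatenative(fruit_slots, colour="yellow")
fruit_slots["edible"] = False          # not visible in the clone (concatenation)
print(banana_slots["edible"])          # True
```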
Prototype-based programming
Mathematics
2,907
4,207,958
https://en.wikipedia.org/wiki/Point%20of%20interest
A point of interest (POI) is a specific point location that someone may find useful or interesting. An example is a point on the Earth representing the location of the Eiffel Tower, or a point on Mars representing the location of its highest mountain, Olympus Mons. Most consumers use the term when referring to hotels, campsites, fuel stations or any other categories used in modern automotive navigation systems. Users of a mobile device can be provided with a geolocation- and time-aware POI service that recommends geolocations nearby and with a temporal relevance (e.g. POIs for special services in a ski resort are available only in winter). The term is widely used in cartography, especially in electronic variants including GIS, and GPS navigation software. In this context the synonym waypoint is common. A GPS point of interest specifies, at minimum, the latitude and longitude of the POI, assuming a certain map datum. A name or description for the POI is usually included, and other information such as altitude or a telephone number may also be attached. GPS applications typically use icons to graphically represent different categories of POI on a map. A region of interest (ROI) and a volume of interest (VOI) are similar in concept, denoting a region or a volume (which may contain various individual POIs). In medical fields such as histology, pathology, and histopathology, points of interest are selected from the general background in a field of view; for example, among hundreds of normal cells, the pathologist may find 3 or 4 neoplastic cells that stand out from the others upon staining. POI collections Digital maps for modern GPS devices typically include a basic selection of POI for the map area. However, websites exist that specialize in the collection, verification, management and distribution of POI which end-users can load onto their devices to replace or supplement the existing POI. While some of these websites are generic, and will collect and categorize POI for any interest, others are more specialized in a particular category (such as speed cameras) or GPS device (e.g. TomTom/Garmin). End-users also have the ability to create their own custom collections. Commercial POI collections, especially those that ship with digital maps, or that are sold on a subscription basis are usually protected by copyright. However, there are also many websites from which royalty-free POI collections can be obtained, e.g. SPOI - Smart Points of Interest, which is distributed under the ODbL license. Applications The applications for POI are extensive. As GPS-enabled devices and software applications that use digital maps become more widely available, the applications for POI are also expanding. Newer digital cameras for example can automatically tag a photograph using Exif with the GPS location where a picture was taken; these pictures can then be overlaid as POI on a digital map or satellite image such as Google Earth. Geocaching applications are built around POI collections. In vehicle tracking systems, POIs are used to mark destination points and/or offices so that users of GPS tracking software can easily monitor the position of vehicles relative to these POIs. File formats Many different file formats, including proprietary formats, are used to store point of interest data, even where the same underlying WGS84 system is used. Reasons for variations to store the same data include: A lack of standards in this area (GPX is a notable attempt to address this). 
Attempts by some software vendors to protect their data through obfuscation. Licensing issues that prevent companies from using competitor's file specifications. Memory saving, for example, by converting floating point latitude and longitude co-ordinates into smaller integer values. Speed and battery life (operations using integer latitude and longitude values are less CPU-intensive than those that use floating point values). Requirements to add custom fields to the data. Use of older reference systems that predate GPS (for example UTM or the British national grid reference system) Readability/possibility to edit (plain text files are human-readable and may be edited) The following are some of the file formats used by different vendors and devices to exchange POI (and in some cases, also navigation tracks): ASCII Text (.asc .txt .csv .plt) Topografix GPX (.gpx) Garmin Mapsource (.gdb) Google Earth Keyhole Markup Language (.kml .kmz) Pocket Street Pushpins (.psp) Maptech Marks (.msf) Maptech Waypoint (.mxf) Microsoft MapPoint Pushpin (.csv) OziExplorer (.wpt) TomTom Overlay (.ov2) and TomTom plain text format (.asc) OpenStreetMap data (.osm) Third party and vendor-supplied utilities are available to convert point of interest data between different formats to allow them to be exchanged between otherwise incompatible GPS devices or systems. Furthermore, many applications will support the generic ASCII text file format, although this format is more prone to error due to its loose structure as well as the many ways in which GPS co-ordinates can be represented (e.g. decimal vs degree/minute/second). POI format converters are often named after the POI file format they convert and convert to, such as KML2GPX (converts KML to GPX) and KML2OV2 (converts KML to OV2). See also Automotive navigation system Geocoded photograph Map database management OpenLR Tourist attraction World Geodetic System (Used to represent GPS co-ordinates) Photogrammetry References Global Positioning System Geographical technology Navigation
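As a concrete illustration of one of the interchange formats listed above, the following sketch writes a minimal GPX 1.1 waypoint file for a small POI list (the sample coordinates and names are chosen purely for illustration):

```python
import xml.etree.ElementTree as ET

def pois_to_gpx(pois, path):
    """Write a list of (lat, lon, name) POIs as GPX 1.1 waypoints."""
    gpx = ET.Element("gpx", version="1.1", creator="poi-example",
                     xmlns="http://www.topografix.com/GPX/1/1")
    for lat, lon, name in pois:
        wpt = ET.SubElement(gpx, "wpt", lat=f"{lat:.6f}", lon=f"{lon:.6f}")
        ET.SubElement(wpt, "name").text = name
    ET.ElementTree(gpx).write(path, encoding="utf-8", xml_declaration=True)

# Sample data: latitude/longitude in WGS84 decimal degrees.
sample = [(48.858370, 2.294481, "Eiffel Tower"),
          (51.500729, -0.124625, "Big Ben")]
pois_to_gpx(sample, "pois.gpx")
```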
Point of interest
Technology,Engineering
1,191
15,519,155
https://en.wikipedia.org/wiki/Shreveport%20Waterworks%20Pumping%20Station
The Shreveport Waterworks Pumping Station, also known as the McNeil Street Pump Station, is a historic water pumping station at 142 North Common Street in Shreveport, Louisiana. Now hosting the Shreveport Water Works Museum, it exhibits in situ a century's worth of water pumping equipment, and was the nation's last steam-powered waterworks facility when it was shut down in 1980. It was added to the National Register of Historic Places in 1980, declared a National Historic Landmark in 1982, and designated as a National Historic Civil Engineering Landmark in 1999. Description and history The Shreveport Water Works Museum is located west of Shreveport's downtown, between North Common Avenue and Twelve Mile Bayou, which feeds into the Red River just north of downtown. The complex consists of a group of predominantly brick buildings, which house a variety of pumping equipment dating from 1892 to about 1921. The oldest buildings date to 1887, when the city contracted for the construction of a waterworks facility to replace a combination of cisterns and wells that had become inadequate to meet the city's needs. As the technology for pumping and filtering water changed, either the existing buildings were altered, or new ones built, in many cases leaving some of the older equipment in place. The plant saw significant changes in the first decade of the 20th century, and again after the city purchased it from its private operator in 1917. The city continued to operate the steam pumps through the 1970s, even as they were becoming obsolete due to advances in electric pumping engines. The station was closed in 1980. The property was afterward converted to a museum, featuring displays of the restored steam machinery, including pumps, filters and other equipment. The Shreveport Railroad Museum is located on the grounds of the Shreveport Water Works Museum. Both museums are open to the public. See also List of National Historic Landmarks in Louisiana National Register of Historic Places listings in Caddo Parish, Louisiana References External links Shreveport Water Works Museum - Official site McNeill Street Pumping Station Preservation Society Buildings and structures in Shreveport, Louisiana Museums in Shreveport, Louisiana Water supply pumping stations on the National Register of Historic Places Historic American Engineering Record in Louisiana National Historic Landmarks in Louisiana Industrial buildings and structures on the National Register of Historic Places in Louisiana Buildings and structures completed in 1887 Steam museums in the United States Former pumping stations National Register of Historic Places in Caddo Parish, Louisiana Historic Civil Engineering Landmarks
Shreveport Waterworks Pumping Station
Engineering
480
409,120
https://en.wikipedia.org/wiki/Subscriber%20trunk%20dialling
Subscriber trunk dialling (STD), also known as subscriber toll dialing, is a telephone numbering plan feature and telecommunications technology in the United Kingdom and various Commonwealth countries for the dialling of trunk calls by telephone subscribers without the assistance of switchboard operators. Switching systems to enable automatic dialling of long distance calls by subscribers were introduced in the United Kingdom on 5 December 1958. The system used area codes that were based on the letters in a town's name. A ceremonial first call was made by Queen Elizabeth II from Bristol to Edinburgh. A similar service, built on crossbar equipment and using regionally structured numbering rather than alphanumeric codes, was experimentally introduced by P&T in Ireland in 1957, with the first services being in Athlone. A full service was rolled out in 1958, initially to exchanges in Cork and then Dublin and its hinterland, and gradually to all areas with automatic exchanges. The term 'STD call' was once commonly used in the UK, Ireland, Australia, India, and parts of Southeast Asia, but it may be considered archaic today, or possibly even no longer be understood. Other less technical terms like 'national calling,' 'long distance calling' and so on are now more commonly used. The distinction between local and long distance / STD calls is also no longer relevant to many users, as calls are charged at flat or bundled rates. It is also necessary to dial area codes on some calls, especially from mobile phones, so they are considered part of the number. Terms such as 'area code', 'prefix' or 'national dialling code' tend to be used in place of 'STD code' in the UK and in Ireland. History In the first half of the 20th century, telecommunication services developed progressively from completely manual setup of calls by operators called by subscribers, to automatic systems that could connect subscribers of the same local exchange through the use of telephone dials installed in each telephone. In the 1940s, the Bell System in the United States and Canada developed methods and technologies, called direct distance dialing and first implemented in 1951, that enabled telephone subscribers to dial long-distance telephone calls themselves without calling an operator. In the United Kingdom, a similar technology called subscriber trunk dialling (STD) was ready by 1958, when Queen Elizabeth II, who was in Bristol, publicised STD by dialling Edinburgh, the farthest distance a call could be directly dialled in the UK, on 5 December 1958. The STD system was completed in 1979. The technology was extended when, from 8 March 1963, subscribers in London were able to directly dial Paris using international direct dialling. The term subscriber trunk dialling is used in the United Kingdom, the Republic of Ireland, Australia, India and South East Asia. In the UK the term is obsolescent; the codes themselves are now better known simply as UK area codes. The introduction in the UK of subscriber dialling of long-distance calls removed the distinction that had existed between trunk and toll calls. This term, however, is still widely used in India to describe any national call made outside one's local unit. A "subscriber" is someone who subscribes to, i.e. rents, a telephone line, and a "trunk call" is one made over a trunk line, i.e. a telephone line connecting two exchanges a long distance apart. Since all calls may be dialled direct today, the term has fallen into disuse. 
Numbering plan In subscriber trunk dialling, each designated region of a country is identified by a unique numerical code (the STD code) that must be dialled as a prefix to each telephone number when placing calls. Each city with a director system was assigned a three-digit code, in which the second digit corresponded to the first letter of the city name on the telephone dial, except London which had the two-digit code 01. Codes were later changed (e.g., London became 020, and Manchester 0161). 01 London 021 Birmingham 031 Edinburgh 041 Glasgow 051 Liverpool 061 Manchester Calls between the UK and Ireland Because of the high volume of calls between the Republic of Ireland and the UK, international direct dialling was implemented before the formal introduction of International Subscriber Dialling. Calls were processed through the domestic STD networks and passed between the two networks as trunk traffic, without the need for international gateway exchanges. From the Republic of Ireland Calls to Northern Ireland were made by dialling 08 and the Northern Irish STD code, for example Belfast was reached by dialling 08 0232. Calls to Britain were made by dialling 03 and the British STD code, e.g. 03 0222 XXX XXX or 03 061 XXX YYYY. Calls to cities with director area codes could also still be made with the following codes; this was an older arrangement but the numbering remained in service until the 03 code was closed: 031 London 032 Birmingham 033 Edinburgh 034 Glasgow 035 Liverpool 036 Manchester Calls to Belfast could also be dialled with 084 and the local six-digit number. Belfast was not a director area. In 1992, Ireland adopted the harmonised European international access code 00, replacing the 16 prefix for international calls and the legacy arrangements for calling Britain. From that year, calls were made in the standard international format, and the 03 range was withdrawn from use. Calls to Northern Ireland are now made by dialling 048 and the eight-digit local number, omitting the 028 STD code. This ensures calls are charged at lower rates. Alternatively, the full international code +44 28 can be used. Calls to Ireland from the UK These were dialled using the full international code 010 353, or using legacy short codes. Examples were: Dublin 0001 Cork 0002 Limerick 0006 Galway 0009 These legacy codes dialled directly into Irish cities that had crossbar switching in the 1950s and 1960s, and predated the introduction of ISD in the UK. The Irish STD system evolved around the introduction of LM Ericsson ARM and ITT Pentaconta crossbar trunk/tandem switches, and did not use the UK's director approach. While these calls were international, they were processed within the UK STD infrastructure, without passing through an international gateway exchange. Calls to Ireland are now made in the standard international format +353 (or 00 353) and special codes are no longer used. See also Trunk prefix Telephone numbers in the United Kingdom Telephone numbers in the Republic of Ireland Telephone numbers in India Telephone numbers in Australia Telephone numbers in New Zealand List of country calling codes References External links The archives of BT including archives of its predecessor organizations: information relating to the history of the telephone system in the UK. 1958: Trunk dialling heralds cheaper calls BBC video of first call taking place Telephone numbers
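The director-area codes illustrate the letter-to-digit mapping described above: the second digit of the code is the dial digit carrying the first letter of the city name. A small sketch of that correspondence, using the standard UK dial letter layout (purely illustrative, not part of any exchange software):

```python
# Letters as laid out on the UK rotary dial (no letters assigned to 1; O and Q on 0).
DIAL_LETTERS = {
    2: "ABC", 3: "DEF", 4: "GHI", 5: "JKL",
    6: "MN",  7: "PRS", 8: "TUV", 9: "WXY",
}

def director_code(city):
    """Derive the old director-style STD code: 0 + dial digit of the city's
    first letter + 1 (London kept the short code 01)."""
    if city == "London":
        return "01"
    first = city[0].upper()
    digit = next(d for d, letters in DIAL_LETTERS.items() if first in letters)
    return f"0{digit}1"

for city in ["London", "Birmingham", "Edinburgh", "Glasgow", "Liverpool", "Manchester"]:
    print(city, director_code(city))
# London 01, Birmingham 021, Edinburgh 031, Glasgow 041, Liverpool 051, Manchester 061
```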
Subscriber trunk dialling
Mathematics
1,430
21,901,527
https://en.wikipedia.org/wiki/Reflectometric%20interference%20spectroscopy
Reflectometric interference spectroscopy (RIfS) is a physical method based on the interference of white light at thin films, which is used to investigate molecular interaction. Principle The underlying measuring principle corresponds to that of the Michelson interferometer. Realization White light is directed vertically onto a multiple-layer system consisting of a SiO2 layer, a high-refractive-index Ta2O5 layer and an additional SiO2 layer (this additional layer can be chemically modified). The white light is partially reflected at each phase boundary and partially refracted (transmitted). These reflected partial beams superimpose, which results in an interference spectrum that is detected using a diode array spectrometer. Through chemical modification, the upper SiO2 layer is changed in a way that allows interaction with target molecules. This interaction causes a change in the physical thickness d of the layer and in the refractive index n within this layer. The product of both defines the optical thickness of the layer: n • d. A change in the optical thickness results in a modulation of the interference spectrum. Monitoring this change over time makes it possible to observe the binding behaviour of the target molecules. Application RIfS is used especially as a detection method in chemo- and biosensors. Chemosensors are particularly suitable for measurements under difficult conditions and in the gaseous phase. As sensitive layers, mostly non-selective polymers are used, which sort the analytes according to size (the so-called molecular sieve effect when microporous polymers are used) or according to polarity (e.g. functionalized polydimethylsiloxanes). When performing non-selective measurements, a sum signal from several analytes is measured, which means that multivariate data analyses such as neural networks have to be used for quantification. However, it is also possible to use selectively measuring polymers, so-called molecularly imprinted polymers (MIPs), which provide artificial recognition elements. In biosensors, polymers such as polyethylene glycols or dextrans are applied onto the layer system, and recognition elements for biomolecules are immobilized on them. Basically, any molecule can be used as a recognition element (proteins such as antibodies, DNA/RNA such as aptamers, small organic molecules such as estrone, but also lipids such as phospholipid membranes). RIfS, like SPR, is a label-free technique, which allows the time-resolved observation of interaction between the binding partners without the use of fluorescence or radioactive labels. Literature G. Gauglitz, A. Brecht, G. Kraus and W. Nahm. Sensor. Actuat. B-Chem. 11, 1993 A. Jung. Anal. Bioanal. Chem. 372 1, 2002 F. Gesellchen, B. Zimmermann, F. W. Herberg. Methods in Molecular Biology, 2005 T. Nagel, E. Ehrentreich-Forster, M. Singh, et al. Sensors and Actuators B-Chemical 129 2, 2008 P. Fechner, F. Pröll, M. Carlquist and G. Proll. Anal. Bioanal. Chem. Nov 1, 2008 External links Barolo.ipc.uni-tuebingen.de Spectroscopy
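A minimal sketch of the underlying two-beam interference picture: for a single thin layer, the reflected intensity is modulated roughly as cos(4*pi*n*d/lambda), so a change in the optical thickness n*d shifts the interference spectrum. The numbers below (refractive index, thickness, layer growth) are purely illustrative and not taken from the article:

```python
import numpy as np

def interference_spectrum(wavelengths_nm, n, d_nm, r=0.2):
    """Idealised two-beam reflectance for a single thin film of optical
    thickness n*d: I(lambda) ~ 1 + r*cos(4*pi*n*d/lambda)."""
    return 1.0 + r * np.cos(4.0 * np.pi * n * d_nm / wavelengths_nm)

wl = np.linspace(400.0, 800.0, 5)                         # visible-range sampling (nm)
before = interference_spectrum(wl, n=1.46, d_nm=500.0)    # bare SiO2-like layer
after = interference_spectrum(wl, n=1.46, d_nm=505.0)     # after ~5 nm of binding

# The spectral shift tracks the change in optical thickness n*d,
# which is what RIfS monitors over time to follow binding.
print(np.round(after - before, 3))
```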
Reflectometric interference spectroscopy
Physics,Chemistry
688
13,271,090
https://en.wikipedia.org/wiki/Ireland%E2%80%93Claisen%20rearrangement
The Ireland–Claisen rearrangement is a chemical reaction of an allylic ester with a strong base to give a γ,δ-unsaturated carboxylic acid. Several reviews have been published. Mechanism The Ireland–Claisen rearrangement is a type of Claisen rearrangement. The strong base (typically lithium diisopropylamide) first generates the ester enolate, which is commonly trapped as its silyl ketene acetal; this intermediate then undergoes a [3,3]-sigmatropic rearrangement which, according to the Woodward–Hoffmann rules, proceeds by a concerted, suprafacial, pericyclic pathway. See also Cope rearrangement Overman rearrangement References Rearrangement reactions Name reactions

Ireland–Claisen rearrangement
Chemistry
132
10,643,339
https://en.wikipedia.org/wiki/Mill%20pond
A mill pond (or millpond) is a body of water used as a reservoir for a water-powered mill. Description Mill ponds were often created through the construction of a mill dam or weir (and mill stream) across a waterway. In many places, the common proper name Mill Pond has remained even though the mill has long since gone. It may be fed by a man-made stream, known by several terms including leat and mill stream. The channel or stream leading from the mill pond is the mill race, which together with weirs, dams, channels and the terrain establishing the mill pond, delivers water to the mill wheel to convert potential and/or kinetic energy of the water to mechanical energy by rotating the mill wheel. The production of mechanical power is the purpose of this civil engineering hydraulic system. The term mill pond is often used colloquially and in literature to refer to a very flat body of water. Witnesses of the loss of RMS Titanic reported that the sea was "like a mill pond". Footnotes and references Footnotes References mill pond. Dictionary.com. Dictionary.com Unabridged. Random House, Inc. http://dictionary.reference.com/browse/mill pond (accessed: September 7, 2013). mill pond. Dictionary.com. Collins English Dictionary - Complete & Unabridged 10th Edition. HarperCollins Publishers. http://dictionary.reference.com/browse/mill pond (accessed: September 7, 2013). leat. Dictionary.com. Collins English Dictionary - Complete & Unabridged 10th Edition. HarperCollins Publishers. http://dictionary.reference.com/browse/leat (accessed: September 7, 2013). External links Hydraulic engineering Ponds Hydrology Topography
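As a back-of-the-envelope illustration of the energy conversion mentioned above, the hydraulic power available at the wheel can be estimated from the flow through the mill race and the head established by the mill pond. All numbers in the sketch below are assumed for illustration and do not describe any particular mill.

```python
# Rough estimate of hydraulic power delivered to a mill wheel.
# All values below are assumed, purely for illustration.
rho = 1000.0      # water density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
flow = 0.25       # volumetric flow through the mill race, m^3/s (assumed)
head = 3.0        # height drop from mill pond to wheel, m (assumed)
efficiency = 0.6  # fraction converted to shaft power by the wheel (assumed)

hydraulic_power = rho * g * flow * head          # watts
mechanical_power = efficiency * hydraulic_power

print(f"hydraulic power:  {hydraulic_power/1000:.1f} kW")   # ~7.4 kW
print(f"mechanical power: {mechanical_power/1000:.1f} kW")  # ~4.4 kW at the shaft
```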
Mill pond
Physics,Chemistry,Engineering,Environmental_science
360
30,156,850
https://en.wikipedia.org/wiki/Stereotaxis%20%28company%29
Stereotaxis Inc. is an American publicly traded corporation based in St. Louis, Missouri, that makes robotic products to try to improve the clinical outcomes of electrophysiology studies. Products The Niobe Es Magnetic Navigation System includes two pods that use permanent magnets mounted on pivoting arms and positioned on opposing sides of the operating table. The magnets are controlled by physicians from the outside using a mouse, keyboard, joystick, and a viewing screen. The rotation of the magnets within the Niobe pods influences the magnetic catheters in the heart to make micro movements of the catheter tip (in increments of 1 mm to 9 mm) to navigate throughout the four chambers of the heart. The RMN system was originally designed for applications within the brain; its current usage is guiding magnetic catheters during electrophysiology studies and catheter ablation procedures to treat arrhythmias in the heart. It has been used in over 100,000 procedures worldwide as of 2017. In January 2022, Stereotaxis announced that the Fuwai Central China Cardiovascular Hospital became the first in central China to establish a robotic electrophysiology program with the product. References Medical devices Surgical robots Computer-assisted surgery Companies listed on NYSE American
Stereotaxis (company)
Biology
261
436,287
https://en.wikipedia.org/wiki/Heaps%27%20law
In linguistics, Heaps' law (also called Herdan's law) is an empirical law which describes the number of distinct words in a document (or set of documents) as a function of the document length (so called type-token relation). It can be formulated as VR(n) = K·n^β, where VR is the number of distinct words in an instance text of size n. K and β are free parameters determined empirically. With English text corpora, typically K is between 10 and 100, and β is between 0.4 and 0.6. The law is frequently attributed to Harold Stanley Heaps, but was originally discovered by Gustav Herdan. Under mild assumptions, the Herdan–Heaps law is asymptotically equivalent to Zipf's law concerning the frequencies of individual words within a text. This is a consequence of the fact that the type-token relation (in general) of a homogeneous text can be derived from the distribution of its types. Empirically, Heaps' law is preserved even when the document is randomly shuffled, meaning that it does not depend on the ordering of words, but only on the frequency of words. This is used as evidence for deriving Heaps' law from Zipf's law. Heaps' law means that as more instance text is gathered, there will be diminishing returns in terms of discovery of the full vocabulary from which the distinct terms are drawn. Deviations from Heaps' law, as typically observed in English text corpora, have been identified in corpora generated with large language models. Heaps' law also applies to situations in which the "vocabulary" is just some set of distinct types which are attributes of some collection of objects. For example, the objects could be people, and the types could be country of origin of the person. If persons are selected randomly (that is, we are not selecting based on country of origin), then Heaps' law says we will quickly have representatives from most countries (in proportion to their population) but it will become increasingly difficult to cover the entire set of countries by continuing this method of sampling. Heaps' law has been observed also in single-cell transcriptomes, considering genes as the distinct objects in the "vocabulary". See also References Citations Sources Heaps' law is proposed in Section 7.5 (pp. 206–208). External links Computational linguistics Statistical laws Empirical laws Eponyms
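The type-token relation can be demonstrated numerically. The following Python sketch draws tokens from an assumed Zipf-like vocabulary (synthetic data, not a real corpus; vocabulary size and token count are arbitrary choices) and fits the exponent β on a log-log scale. The fitted value depends on the assumed frequency distribution and corpus size, but it comes out well below 1, illustrating the sub-linear vocabulary growth the law describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed Zipf-like word frequencies over a 50,000-word vocabulary.
vocab_size = 50_000
ranks = np.arange(1, vocab_size + 1)
probs = 1.0 / ranks
probs /= probs.sum()

# Draw a synthetic "document" of n tokens and record vocabulary growth.
n_tokens = 200_000
tokens = rng.choice(vocab_size, size=n_tokens, p=probs)

seen = set()
sizes, distinct = [], []
for i, tok in enumerate(tokens, start=1):
    seen.add(tok)
    if i % 1000 == 0:
        sizes.append(i)
        distinct.append(len(seen))

# Fit V(n) ~ K * n**beta by linear regression in log-log space.
beta, logK = np.polyfit(np.log(sizes), np.log(distinct), 1)
print(f"fitted beta = {beta:.2f}, K = {np.exp(logK):.1f}")
# beta well below 1: vocabulary grows, but with diminishing returns.
```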
Heaps' law
Technology
496
41,226
https://en.wikipedia.org/wiki/Hamming%20code
In computer science and telecommunications, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. Richard W. Hamming invented Hamming codes in 1950 as a way of automatically correcting errors introduced by punched card readers. In his original paper, Hamming elaborated his general idea, but specifically focused on the Hamming(7,4) code which adds three parity bits to four bits of data. In mathematical terms, Hamming codes are a class of binary linear code. For each integer r ≥ 2 there is a code with block length n = 2^r − 1 and message length k = 2^r − r − 1. Hence the rate of Hamming codes is R = k/n = 1 − r/(2^r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2^r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the shortened Hadamard code, also known as a Simplex code. The parity-check matrix has the property that any two columns are pairwise linearly independent. Due to the limited redundancy that Hamming codes add to the data, they can only detect and correct errors when the error rate is low. This is the case in computer memory (usually RAM), where bit errors are extremely rare and Hamming codes are widely used, and a RAM with this correction system is an ECC RAM (ECC memory). In this context, an extended Hamming code having one extra parity bit is often used. Extended Hamming codes achieve a Hamming distance of four, which allows the decoder to distinguish between when at most one one-bit error occurs and when any two-bit errors occur. In this sense, extended Hamming codes are single-error correcting and double-error detecting, abbreviated as SECDED. History Richard Hamming, the inventor of Hamming codes, worked at Bell Labs in the late 1940s on the Bell Model V computer, an electromechanical relay-based machine with cycle times in seconds. Input was fed in on punched paper tape, seven-eighths of an inch wide, which had up to six holes per row. During weekdays, when errors in the relays were detected, the machine would stop and flash lights so that the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job. Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to detected errors. In a taped interview, Hamming said, "And so I said, 'Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?'". Over the next few years, he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950, he published what is now known as Hamming code, which remains in use today in applications such as ECC memory. Codes predating Hamming A number of simple error-detecting codes were used before Hamming codes, but none were as effective as Hamming codes in the same overhead of space.
Parity Parity adds a single bit that indicates whether the number of ones (bit-positions with values of one) in the preceding data was even or odd. If an odd number of bits is changed in transmission, the message will change parity and the error can be detected at this point; however, the bit that changed may have been the parity bit itself. The most common convention is that a parity value of one indicates that there is an odd number of ones in the data, and a parity value of zero indicates that there is an even number of ones. If the number of bits changed is even, the check bit will be valid and the error will not be detected. Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, a successful transmission could take a long time or may never occur. However, while the quality of parity checking is poor, since it uses only a single bit, this method results in the least overhead. Two-out-of-five code A two-out-of-five code is an encoding scheme which uses five bits consisting of exactly three 0s and two 1s. This provides possible combinations, enough to represent the digits 0–9. This scheme can detect all single bit-errors, all odd numbered bit-errors and some even numbered bit-errors (for example the flipping of both 1-bits). However it still cannot correct any of these errors. Repetition Another code in use at the time repeated every data bit multiple times in order to ensure that it was sent correctly. For instance, if the data bit to be sent is a 1, an repetition code will send 111. If the three bits received are not identical, an error occurred during transmission. If the channel is clean enough, most of the time only one bit will change in each triple. Therefore, 001, 010, and 100 each correspond to a 0 bit, while 110, 101, and 011 correspond to a 1 bit, with the greater quantity of digits that are the same ('0' or a '1') indicating what the data bit should be. A code with this ability to reconstruct the original message in the presence of errors is known as an error-correcting code. This triple repetition code is a Hamming code with since there are two parity bits, and data bit. Such codes cannot correctly repair all errors, however. In our example, if the channel flips two bits and the receiver gets 001, the system will detect the error, but conclude that the original bit is 0, which is incorrect. If we increase the size of the bit string to four, we can detect all two-bit errors but cannot correct them (the quantity of parity bits is even); at five bits, we can both detect and correct all two-bit errors, but not all three-bit errors. Moreover, increasing the size of the parity bit string is inefficient, reducing throughput by three times in our original case, and the efficiency drops drastically as we increase the number of times each bit is duplicated in order to detect and correct more errors. Description If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a seven-bit message, there are seven possible single bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error. Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts. 
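The two schemes just described are short enough to state directly in code. The sketch below (illustrative Python, not tied to any historical implementation) shows that a single even-parity bit detects one flipped bit without locating it, while the three-fold repetition code corrects a single flip by majority vote but mis-decodes a double flip.

```python
def parity_bit(bits):
    """Even parity: the check bit makes the total number of 1s even."""
    return sum(bits) % 2

def repetition_encode(bit, n=3):
    return [bit] * n

def repetition_decode(received):
    """Majority vote over one repeated bit."""
    return int(sum(received) > len(received) / 2)

data = [1, 0, 1, 1, 0, 1, 0]
p = parity_bit(data)                    # check bit computed before sending

corrupted = data[:]
corrupted[2] ^= 1                       # flip one bit in transit
print(parity_bit(corrupted) != p)       # True: single-bit error detected,
                                        # but its position is unknown

print(repetition_decode([1, 0, 1]))     # 1: a single flip is corrected
print(repetition_decode([0, 0, 1]))     # 0: two flips are mis-decoded
```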
To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words with seven bits, Hamming described this as an (8,7) code, with eight bits in total, of which seven are data. The repetition example would be (3,1), following the same logic. The code rate is the second number divided by the first, for our repetition example, 1/3. Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, so one bit flip can be detected but not corrected, and any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. It can correct one-bit errors or it can detect - but not correct - two-bit errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping three bits can be detected, but not corrected. When three bits flip in the same group there can be situations where attempting to correct will produce the wrong code word. In general, a code with distance k can detect but not correct errors. Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the code rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data. General algorithm The following general algorithm generates a single-error correcting (SEC) code for any number of bits. The main idea is to choose the error-correcting bits such that the index-XOR (the XOR of all the bit positions containing a 1) is 0. We use positions 1, 10, 100, etc. (in binary) as the error-correcting bits, which guarantees it is possible to set the error-correcting bits so that the index-XOR of the whole message is 0. If the receiver receives a string with index-XOR 0, they can conclude there were no corruptions, and otherwise, the index-XOR indicates the index of the corrupted bit. An algorithm can be deduced from the following description: Number the bits starting from 1: bit 1, 2, 3, 4, 5, 6, 7, etc. Write the bit numbers in binary: 1, 10, 11, 100, 101, 110, 111, etc. All bit positions that are powers of two (have a single 1 bit in the binary form of their position) are parity bits: 1, 2, 4, 8, etc. (1, 10, 100, 1000) All other bit positions, with two or more 1 bits in the binary form of their position, are data bits. Each data bit is included in a unique set of 2 or more parity bits, as determined by the binary form of its bit position. Parity bit 1 covers all bit positions which have the least significant bit set: bit 1 (the parity bit itself), 3, 5, 7, 9, etc. Parity bit 2 covers all bit positions which have the second least significant bit set: bits 2–3, 6–7, 10–11, etc. Parity bit 4 covers all bit positions which have the third least significant bit set: bits 4–7, 12–15, 20–23, etc. Parity bit 8 covers all bit positions which have the fourth least significant bit set: bits 8–15, 24–31, 40–47, etc. In general each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero. 
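The position rule above translates almost directly into code. The following Python sketch (a plain reading of the algorithm as described, not a production encoder; the function name is invented) places parity bits at the power-of-two positions and chooses them so that the XOR of the positions of all 1-bits in the codeword is zero. It reproduces the 10011010 worked example given in the next paragraph.

```python
def hamming_encode(data_bits):
    """Single-error-correcting Hamming encoder using the position rule:
    1-indexed positions 1, 2, 4, 8, ... hold parity bits, chosen so that
    the XOR of the positions of all 1-bits in the codeword is zero."""
    code = [0]                          # dummy entry so the list is 1-indexed
    it = iter(data_bits)
    remaining = len(data_bits)
    pos = 1
    while remaining > 0:
        if pos & (pos - 1) == 0:        # power of two -> parity placeholder
            code.append(0)
        else:
            code.append(next(it))
            remaining -= 1
        pos += 1

    # Index-XOR over the data bits alone; each parity bit at position 2^j
    # must equal bit j of this value to drive the overall index-XOR to zero.
    syndrome = 0
    for i in range(1, len(code)):
        if code[i]:
            syndrome ^= i
    for j in range(len(code).bit_length()):
        p = 1 << j
        if p < len(code):
            code[p] = (syndrome >> j) & 1
    return code[1:]                     # drop the dummy 0th entry


print(hamming_encode([1, 0, 0, 1, 1, 0, 1, 0]))
# -> [0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 1, 0], i.e. the 011100101010 example
```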
If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010. The choice of the parity, even or odd, is irrelevant but the same choice must be used for both encoding and decoding. This general rule can be shown visually: {| class="wikitable" style="text-align:center;" |- !colspan="2"| Bit position ! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7 !! 8 !! 9 !! 10 !! 11 !! 12 !! 13 !! 14 !! 15 !! 16 !! 17 !! 18 !! 19 !! 20 |rowspan="7"| ... |- !colspan="2"| Encoded data bits !style="background-color: #90FF90;"| p1 !style="background-color: #90FF90;"| p2 !! d1 !style="background-color: #90FF90;"| p4 !! d2 !! d3 !! d4 !style="background-color: #90FF90;"| p8 !! d5 !! d6 !! d7 !! d8 !! d9 !! d10 !! d11 !style="background-color: #90FF90;"| p16 !! d12 !! d13 !! d14 !! d15 |- !rowspan="5"|Paritybitcoverage !style="background-color: #90FF90;"| p1 | || || || || || || || || || || || || || || || || || || || |- !style="background-color: #90FF90;"| p2 | || || || || || || || || || || || || || || || || || || || |- !style="background-color: #90FF90;"| p4 | || || || || || || || || || || || || || || || || || || || |- !style="background-color: #90FF90;"| p8 | || || || || || || || || || || || || || || || || || || || |- !style="background-color: #90FF90;"| p16 | || || || || || || || || || || || || || || || || || || || |} Shown are only 20 encoded bits (5 parity, 15 data) but the pattern continues indefinitely. The key thing about Hamming codes that can be seen from visual inspection is that any given bit is included in a unique set of parity bits. To check for errors, check all of the parity bits. The pattern of errors, called the error syndrome, identifies the bit in error. If all parity bits are correct, there is no error. Otherwise, the sum of the positions of the erroneous parity bits identifies the erroneous bit. For example, if the parity bits in positions 1, 2 and 8 indicate an error, then bit 1+2+8=11 is in error. If only one parity bit indicates an error, the parity bit itself is in error. With parity bits, bits from 1 up to can be covered. After discounting the parity bits, bits remain for use as data. As varies, we get all the possible Hamming codes: Hamming codes with additional parity (SECDED) Hamming codes have a minimum distance of 3, which means that the decoder can detect and correct a single error, but it cannot distinguish a double bit error of some codeword from a single bit error of a different codeword. Thus, some double-bit errors will be incorrectly decoded as if they were single bit errors and therefore go undetected, unless no correction is attempted. To remedy this shortcoming, Hamming codes can be extended by an extra parity bit. This way, it is possible to increase the minimum distance of the Hamming code to 4, which allows the decoder to distinguish between single bit errors and two-bit errors. Thus the decoder can detect and correct a single error and at the same time detect (but not correct) a double error. If the decoder does not attempt to correct errors, it can reliably detect triple bit errors. If the decoder does correct errors, some triple errors will be mistaken for single errors and "corrected" to the wrong value. Error correction is therefore a trade-off between certainty (the ability to reliably detect triple bit errors) and resiliency (the ability to keep functioning in the face of single bit errors). 
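A companion sketch for the receiving side, using the same position conventions as the encoder above (again illustrative only; names are invented): the syndrome locates a single flipped bit, and an extra overall parity bit supports the SECDED distinction between single and double errors just discussed.

```python
def hamming_correct(received):
    """Locate and fix a single-bit error in a codeword built with the
    position rule above.  Returns (corrected_bits, error_position),
    where error_position is 0 if no error was detected."""
    bits = list(received)
    syndrome = 0
    for pos, bit in enumerate(bits, start=1):
        if bit:
            syndrome ^= pos
    if 0 < syndrome <= len(bits):       # non-zero syndrome names the bad bit
        bits[syndrome - 1] ^= 1
    return bits, syndrome

def secded_check(received_with_extra_parity):
    """Extended (SECDED) check: the last bit is an overall parity bit,
    used to tell single (correctable) from double (detect-only) errors."""
    *code, overall = received_with_extra_parity
    _, syndrome = hamming_correct(code)
    overall_ok = (sum(code) + overall) % 2 == 0
    if syndrome == 0 and overall_ok:
        return "no error"
    if not overall_ok:
        return f"single-bit error (position {syndrome or 'overall parity'}), correctable"
    return "double-bit error detected (not correctable)"


code = hamming_encode([1, 0, 0, 1, 1, 0, 1, 0])
code[4] ^= 1                            # flip the bit at position 5 in transit
fixed, error_pos = hamming_correct(code)
print(error_pos)                        # -> 5
```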
This extended Hamming code was popular in computer memory systems, starting with IBM 7030 Stretch in 1961, where it is known as SECDED (or SEC-DED, abbreviated from single error correction, double error detection). Server computers in 21st century, while typically keeping the SECDED level of protection, no longer use Hamming's method, relying instead on the designs with longer codewords (128 to 256 bits of data) and modified balanced parity-check trees. The (72,64) Hamming code is still popular in some hardware designs, including Xilinx FPGA families. [7,4] Hamming code In 1950, Hamming introduced the [7,4] Hamming code. It encodes four data bits into seven bits by adding three parity bits. As explained earlier, it can either detect and correct single-bit errors or it can detect (but not correct) both single and double-bit errors. With the addition of an overall parity bit, it becomes the [8,4] extended Hamming code and can both detect and correct single-bit errors and detect (but not correct) double-bit errors. Construction of G and H The matrix is called a (canonical) generator matrix of a linear (n,k) code, and is called a parity-check matrix. This is the construction of G and H in standard (or systematic) form. Regardless of form, G and H for linear block codes must satisfy , an all-zeros matrix. Since [7, 4, 3] = [n, k, d] = [2m − 1, 2m − 1 − m, 3]. The parity-check matrix H of a Hamming code is constructed by listing all columns of length m that are pair-wise independent. Thus H is a matrix whose left side is all of the nonzero n-tuples where order of the n-tuples in the columns of matrix does not matter. The right hand side is just the (n − k)-identity matrix. So G can be obtained from H by taking the transpose of the left hand side of H with the identity k-identity matrix on the left hand side of G. The code generator matrix and the parity-check matrix are: and Finally, these matrices can be mutated into equivalent non-systematic codes by the following operations: Column permutations (swapping columns) Elementary row operations (replacing a row with a linear combination of rows) Encoding Example From the above matrix we have 2k = 24 = 16 codewords. Let be a row vector of binary data bits, . The codeword for any of the 16 possible data vectors is given by the standard matrix product where the summing operation is done modulo-2. For example, let . Using the generator matrix from above, we have (after applying modulo 2, to the sum), [8,4] Hamming code with an additional parity bit The [7,4] Hamming code can easily be extended to an [8,4] code by adding an extra parity bit on top of the (7,4) encoded word (see Hamming(7,4)). This can be summed up with the revised matrices: and Note that H is not in standard form. To obtain G, elementary row operations can be used to obtain an equivalent matrix to H in systematic form: For example, the first row in this matrix is the sum of the second and third rows of H in non-systematic form. Using the systematic construction for Hamming codes from above, the matrix A is apparent and the systematic form of G is written as The non-systematic form of G can be row reduced (using elementary row operations) to match this matrix. The addition of the fourth row effectively computes the sum of all the codeword bits (data and parity) as the fourth parity bit. 
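Because the generator and parity-check matrices referred to in this section did not survive extraction, the following sketch substitutes one common systematic choice of G and H for the [7,4] code. This is an assumption for illustration: it need not match the exact matrices the original article displayed (in particular it will not reproduce the non-systematic encoding example in the next paragraph), but it shows how matrix encoding, the relation G·H^T = 0 and the syndrome check work in practice.

```python
import numpy as np

# One common *systematic* [7,4] Hamming code (assumed for illustration;
# the article's own matrices may use a different but equivalent layout).
A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), A])        # G = [I_4 | A]
H = np.hstack([A.T, np.eye(3, dtype=int)])      # H = [A^T | I_3]

assert not (G @ H.T % 2).any()                  # G * H^T = 0 (mod 2)

x = np.array([1, 0, 1, 1])                      # four data bits
codeword = x @ G % 2                            # systematic: data bits first
syndrome = H @ codeword % 2
print(codeword, syndrome)                       # syndrome 0 for a valid codeword
# Note: a systematic G gives a different bit layout than the non-systematic
# example quoted in the text below.
```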
For example, 1011 is encoded (using the non-systematic form of G at the start of this section) into 01100110 where blue digits are data; red digits are parity bits from the [7,4] Hamming code; and the green digit is the parity bit added by the [8,4] code. The green digit makes the parity of the [7,4] codewords even. Finally, it can be shown that the minimum distance has increased from 3, in the [7,4] code, to 4 in the [8,4] code. Therefore, the code can be defined as [8,4] Hamming code. To decode the [8,4] Hamming code, first check the parity bit. If the parity bit indicates an error, single error correction (the [7,4] Hamming code) will indicate the error location, with "no error" indicating the parity bit. If the parity bit is correct, then single error correction will indicate the (bitwise) exclusive-or of two error locations. If the locations are equal ("no error") then a double bit error either has not occurred, or has cancelled itself out. Otherwise, a double bit error has occurred. See also Coding theory Golay code Hamming bound Hamming distance Low-density parity-check code Reed–Muller code Reed–Solomon error correction Turbo code Notes References External links Visual Explanation of Hamming Codes CGI script for calculating Hamming distances (from R. Tervo, UNB, Canada) Tool for calculating Hamming code American inventions Coding theory Error detection and correction Computer arithmetic 1951 in computing
Hamming code
Mathematics,Engineering
4,622
74,536,101
https://en.wikipedia.org/wiki/Antimony%20nitride
Antimony nitride, also called antimony mononitride, is an inorganic compound with the chemical formula SbN. Containing only antimony and nitrogen, this binary nitride material is an interpnictogen. It is the antimony analog of phosphorus mononitride. Antimony nitride forms when antimony trichloride dissolves in liquid ammonia. It has been investigated as a transparent film that conducts electricity. See also Phosphorus mononitride References Nitrides Antimony compounds
Antimony nitride
Chemistry
107
21,368,349
https://en.wikipedia.org/wiki/Wiley%20Prize
The Wiley Prize in Biomedical Sciences is intended to recognize breakthrough research in pure or applied life science research that is distinguished by its excellence, originality and impact on our understanding of biological systems and processes. The award may recognize a specific contribution or series of contributions that demonstrate the nominee's significant leadership in the development of research concepts or their clinical application. Particular emphasis will be placed on research that champions novel approaches and challenges accepted thinking in the biomedical sciences. The Wiley Foundation, established in 2001, is the endowing body that supports the Wiley Prize in Biomedical Sciences. This international award is presented annually and consists of a $35,000 prize and a luncheon in honor of the recipient. The award is presented at a ceremony at The Rockefeller University, where the recipient delivers an honorary lecture as part of the Rockefeller University Lecture Series. As of 2016, six recipients have gone on to be awarded the Nobel Prize in Physiology or Medicine. Award recipients Source: Wiley Foundation 2002 H. Robert Horvitz of the Massachusetts Institute of Technology and Stanley J. Korsmeyer of the Dana Farber Cancer Institute – For his seminal research on programmed cell death and the discovery that a genetic pathway accounts for the programmed cell death within an organism, and Korsmeyer was chosen for his discovery of the relationship between human lymphomas and the fundamental biological process of apoptosis. Korsmeyer's experiments established that blocking cell death plays a primary role in cancer. 2003 Andrew Z. Fire, of both the Carnegie Institution of Washington and the Johns Hopkins University; Craig C. Mello, of the University of Massachusetts Medical School; Thomas Tuschl, formerly of the Max-Planck Institute for Biophysical Chemistry in Goettingen, Germany, and most recently of The Rockefeller University; and David Baulcombe, of the Sainsbury Laboratory at the John Innes Centre in Norwich, England – For contributions to discoveries of novel mechanisms for regulating gene expression by small interfering RNAs (siRNA). 2004 C. David Allis, Ph.D., Joy and Jack Fishman, Professor, Laboratory of Chromatin Biology and Epigenetics at the Rockefeller University in New York – For the significant discovery that transcription factors can enzymatically modify histones to regulate gene activity. 2005 Peter Walter, a Howard Hughes Medical Institute investigator, and Professor and Chairman of the Department of Biochemistry & Biophysics at the University of California San Francisco, and Kazutoshi Mori, a professor of biophysics, in the Graduate School of Science at Kyoto University, in Japan – For the discovery of the novel pathway by which cells regulate the capacity of their intracellular compartments to produce correctly folded proteins for export. 2006 Elizabeth H. Blackburn, Morris Herztein Professor of Biology and Physiology in the Department of Biochemistry and Biophysics at the University of California, San Francisco, and Carol Greider, Daniel Nathans Professor and Director of Molecular Biology & Genetics at Johns Hopkins University – For the discovery of telomerase, the enzyme that maintains chromosomal integrity and the recognition of its importance in aging, cancer and stem cell biology. 2007 F. Ulrich Hartl, Director at the Max Planck Institute of Biochemistry, in Munich, Germany, and Arthur L. 
Horwich, Eugene Higgins Professor of Genetics and Pediatrics at the Yale University School of Medicine, and Investigator, Howard Hughes Medical Institute. – For elucidation of the molecular machinery that guides proteins into their proper functional shape, thereby preventing the accumulation of protein aggregates that underlie many diseases, such as Alzheimer's and Parkinson's. 2008 Richard P. Lifton of the Yale University School of Medicine. – For the discovery of the genes that cause many forms of high and low blood pressure in humans. 2009 Bonnie Bassler of the Department of Molecular Biology at Princeton University and the Howard Hughes Medical Institute. – For pioneering investigations of quorum sensing, a mechanism that allows bacteria to "talk" to each other to coordinate their behavior, even between species. 2010 Peter Hegemann, Professor of Molecular Biophysics, Humboldt University, Berlin; Georg Nagel, Professor of Molecular Plant Physiology, Department of Botany, University of Würzburg; and Ernst Bamberg, Professor and Director of the Dept of Biophysical Chemistry, Max Planck Institute for Biophysics, Frankfurt, Germany for their discovery of channelrhodopsins, a family of light-activated ion channels. The discovery has greatly enlarged and strengthened the new field of optogenetics. Channelrhodopsins also provide a high potential for biomedical applications such as the recovery of vision and optical deep brain stimulation for treatment of Parkinson's and other diseases, instead of the more invasive electrode-based treatments. 2011 Lily Jan and Yuh Nung Jan of Howard Hughes Medical Institute at the University of California, San Francisco for their molecular identification of a founding member of a family of potassium ion channels that control nerve cell activity throughout the animal kingdom. 2012 Michael Sheetz, Columbia University; James Spudich, Stanford University, and Ronald Vale, University of California, San Francisco for explaining how cargo is moved by molecular motors along two different systems of tracks within cells. 2013 Michael Young, Rockefeller University; Jeffrey Hall, Brandeis University (Emeritus), and Michael Rosbash, Brandeis University for the discovery of the molecular mechanisms governing circadian rhythms. 2014 William Kaelin, Jr.; Steven McKnight; Peter J. Ratcliffe; Gregg L. Semenza for their work in oxygen sensing systems. 2015 Evelyn M. Witkin and Stephen Elledge for their studies of the DNA damage response. 2016 Yoshinori Ohsumi for the discovery of how cells recycle their components in an orderly manner. This process, autophagy (self-eating), is critical for the maintenance and repair of cells and tissues. 2017 Joachim Frank, Richard Henderson, and Marin van Heel for pioneering developments in electron microscopy. 2018 Lynne E. Maquat for elucidating the mechanism of nonsense-mediated messenger RNA decay. 2019 Svante Pääbo and David Reich for sequencing the genomes of ancient humans and extinct relatives. 2020 No award due to the COVID-19 pandemic. 2021 Clifford Brangwynne, Anthony Hyman, and Michael Rosen for a new principle of subcellular compartmentalization based on formation of phase-separated biomolecular condensates. 2022 David Baker, Demis Hassabis, and John Jumper for pioneering studies in protein structure predictions. 2023 Michael J. 
Welsh, Paul Negulescu, Fredrick Van Goor, and Sabine Hadida for research and development leading to medicines that effectively treat cystic fibrosis by correcting the folding, trafficking, and functioning of the mutated cystic fibrosis transmembrane regulator (CFTR). 2024 Judith Kimble, Allan Spradling, and Raymond Schofield for their discovery of the stem cell niche, a localized environment that controls stem-cell identity. See also List of biology awards List of medicine awards References External links The Wiley Foundation Laureates 2021 - Protein structure prediction Biology awards Medicine awards American awards Awards established in 2001 Medical lecture series Rockefeller University 2001 establishments in New York City Recurring events established in 2001 University and college lecture series
Wiley Prize
Technology
1,468
76,975,616
https://en.wikipedia.org/wiki/Key%20Transparency
Key Transparency allows communicating parties to verify public keys used in end-to-end encryption. In many end-to-end encryption services, to initiate communication a user will reach out to a central server and request the public keys of the user with which they wish to communicate. If the central server is malicious or becomes compromised, a man-in-the-middle attack can be launched through the issuance of incorrect public keys. The communications can then be intercepted and manipulated. Additionally, legal pressure could be applied by surveillance agencies to manipulate public keys and read messages. With Key Transparency, public keys are posted to a public log that can be universally audited. Communicating parties can verify public keys used are accurate. See also Certificate Transparency References Cryptography End-to-end encryption Public-key cryptography
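The core idea, that a client only accepts a public key after verifying its presence in a public, append-only log, can be sketched with a toy Merkle-tree log. This is a simplified illustration of the general approach; the function names and data layout below are invented for the example and do not correspond to any specific Key Transparency deployment.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf(user: str, pubkey: bytes) -> bytes:
    return h(b"leaf|" + user.encode() + b"|" + pubkey)

def merkle_root(leaves):
    """Root hash of a toy Merkle tree over the log entries."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])             # duplicate last node if odd
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def inclusion_proof(leaves, index):
    """Sibling hashes a client needs to recompute the root for one leaf."""
    proof, level, i = [], list(leaves), index
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = i ^ 1
        proof.append((level[sibling], sibling < i))   # (hash, sibling-is-left)
        level = [h(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i //= 2
    return proof

def verify_inclusion(leaf_hash, proof, root):
    node = leaf_hash
    for sibling, sibling_is_left in proof:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

# The provider publishes the root; auditors check it only ever grows
# consistently, and clients check their contact's key really is in it.
log = [leaf("alice", b"alice-key-v1"), leaf("bob", b"bob-key-v1")]
root = merkle_root(log)
proof = inclusion_proof(log, 1)
print(verify_inclusion(leaf("bob", b"bob-key-v1"), proof, root))    # True
print(verify_inclusion(leaf("bob", b"attacker-key"), proof, root))  # False
```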
Key Transparency
Mathematics,Engineering
162
12,990,706
https://en.wikipedia.org/wiki/Natrinema
Natrinema (common abbreviation Nnm.) is a genus of the Natrialbaceae. Taxonomy As of 2022, there are 18 species validly published in the genus Natrinema. Natrinema is related to the genus Haloterrigena, established in 1999, resulting in confusion about taxon limits and several species apparently being assigned to the wrong genus. Based on phylogenomic analysis, eight species from Haloterrigena as well as Halopiger salifodinae were transferred to Natrinema in 2022. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Note: Unassigned Natrinema "N. aidingensis" Liu et al. 2003a "N. ajinwuensis" Mahansaria et al. 2018 "N. xinjiang" Habdin & Tohty 2005 "N. zhouii" Hu et al. 2023 See also List of Archaea genera References Further reading Scientific journals Scientific books Archaea genera Taxa described in 1998
Natrinema
Biology
234
36,794,333
https://en.wikipedia.org/wiki/Birch%20Creek%20Charcoal%20Kilns
The Birch Creek Charcoal Kilns are a group of beehive-shaped clay charcoal kilns near Leadore, Idaho, built in 1886. They were listed on the National Register of Historic Places in 1972. The kilns were built in 1886 to produce charcoal to fuel the smelter at Nicholia, which smelted lead and silver ore from the Viola Mine about 10 miles east of the kilns. The Viola ore deposit was discovered in 1881 and was mined until 1888 when the ore was depleted and the price of lead had fallen. The Nicholia smelter, located about 3 miles west of the mines, had two blast furnaces, each with a daily capacity of of ore. A Butte, Montana, man named Warren King built 16 kilns from brick made from local clay, possibly obtained from Jump Creek on the east side of the Birch Creek valley. The beehive-shaped kilns are each about tall and in diameter. When operating, each kiln used 30 to 40 cords of Douglas fir wood per load, producing about 1,500 to 2,000 bushels (70 cubic meters) of charcoal over a two-day burn. The kiln operation lasted for less than three years, employing 150 to 200 people at its peak, and had a monthly output estimated at 44,000 to 50,000 bushels (1550 to 1762 cubic meters) of charcoal. The ruins of four kilns survive. They are located in the Caribou-Targhee National Forest, which operates the location as a public interpretive site. Nicholia is now a ghost town, with only a few ruins remaining. There are no surviving remains of the town of Woodland, a short distance south of the kilns, where the kiln workers lived. The kilns were listed on the National Register of Historic Places in 1972. The listing included four contributing structures on . In 1987, a volunteer-assisted stabilization effort prevented one of the kilns from collapsing. The Forest Service undertook a restoration of the kilns in 2000. References External links Buildings and structures in Lemhi County, Idaho Historic American Engineering Record in Idaho Kilns Charcoal ovens Tourist attractions in Lemhi County, Idaho Industrial buildings and structures on the National Register of Historic Places in Idaho Charcoal National Register of Historic Places in Lemhi County, Idaho
Birch Creek Charcoal Kilns
Chemistry,Engineering
486
26,762,572
https://en.wikipedia.org/wiki/Fusidic%20acid/betamethasone%20valerate
Fusidic acid/betamethasone valerate is a combination drug with the active ingredients being fusidic acid (an antibiotic) and betamethasone valerate (a corticosteroid). It is a medical cream used for treatment of skin inflammation, eczema, or dermatitis that is also infected with bacteria sensitive to fusidic acid. References Ointments Antibiotics Corticosteroids
Fusidic acid/betamethasone valerate
Biology
90
24,306,623
https://en.wikipedia.org/wiki/Pixel%20shifting
Pixel shifting refers to various technical methods, either to diminish damage to displays by preventing "burn in" of static images or to enhance resolution of displays, projectors, and digital imaging devices. The term is often used synonymously with the more specific term pixel shift. Purposes Avoid burn-in See Pixel shifting avoids burn-in explained in detail for both analogue and digital screens. Enhance character display resolution on terminals Computer terminals such as the HP 2645A used a half-shift algorithm to move pixel positions by half a screen pixel in order to support the generation of multiple complex character sets. Increase projection resolution Pixel shifting has been implemented in video projectors to expand the native 1080p resolution to produce an effectively 4K image on the screen. An exemplary implementation by the electronics corporation JVC is referred to as "e-shift". Increase capture and/or tonal resolution Pixel shifting by movement of one or more sensors is a technique to increase resolution and/or colour rendering of image capturing devices. The image at right displays the visible gain both in detail and in colour resolution produced by the Sony α7R IV 16-shot pixel shift mode, which results in a 240 Mpixel image, as compared to a single shot with the standard sensor resolution of 61 Mpixel. The crops taken from each image display the coat of arms at exactly the same size, albeit with different pixel counts. One or more separate color channel sensors Some camcorders and digital microscopes employ separate color channel sensors (usually RGB = red, green, blue) sensors. Pixel shifting may be implemented for one or more of these sensors by moving such a sensor by a fraction of a pixel (or even a whole pixel value) in both x- and y-direction. For example, early high-definition camcorders used a 3CCD sensor block of 960 × 540 pixels each. Shifting the red and blue sensors (but not the green sensor) by 0.5 pixel in both vertical and horizontal directions permitted the recovery of a 1920 × 1080 luminance signal. One multi colour channel sensor Currently most consumer imaging devices (cameras, camcorders, smartphones) employ a single multi colour channel sensor, on which the RGB (red, green, blue) pixels are usually arranged in a Bayer pattern. Thus any mode of pixel shifting movement either by fractional or by whole pixel values, whether to obtain a more detailed image or to improve tonal resolution, must necessarily engage the whole sensor. More detailed information is to be obtained on page Pixel shift. Other implementations Related features The first stabilization mechanism for a still-camera sensor was implemented by Minolta in 2003 as a new feature of the DiMAGE A1. The purpose of this implementation was only to counteract camera shake. The first consumer still-camera that utilized sensor movement to enhance detail and/or tonal resolution was the K-3 II, released by Pentax in 2015. Also see Image stabilization, section 'Sensor-shift'. References Display technology Television technology Imaging
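The resolution-enhancement variant, capturing several frames offset by a fraction of a pixel and interleaving them onto a denser grid, can be illustrated with a small monochrome simulation. The sketch below uses assumed toy values and a simplified box-average sensor model; it is not the processing pipeline of any camera or projector named above.

```python
import numpy as np

rng = np.random.default_rng(1)
scene = rng.random((8, 8))          # assumed "true" fine-grained scene

def capture(scene, dy, dx):
    """Simulate a coarse sensor: offset the scene by (dy, dx) fine-grid steps
    (half a sensor pixel) and average each 2x2 block into one sensor pixel."""
    shifted = np.roll(scene, (-dy, -dx), axis=(0, 1))
    return shifted.reshape(4, 2, 4, 2).mean(axis=(1, 3))

# Four exposures, each displaced by half a sensor pixel in x and/or y.
frames = {(dy, dx): capture(scene, dy, dx) for dy in (0, 1) for dx in (0, 1)}

# Interleave the four low-resolution frames onto a grid with twice the
# sampling density in each direction (four times the total samples).
recon = np.zeros((8, 8))
for (dy, dx), frame in frames.items():
    recon[dy::2, dx::2] = frame

print("single frame shape:", frames[(0, 0)].shape)   # (4, 4)
print("pixel-shift result:", recon.shape)            # (8, 8)
```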
Pixel shifting
Technology,Engineering
615
66,491,075
https://en.wikipedia.org/wiki/WASP-84
WASP-84, also known as BD+02 2056, is a G-type main-sequence star away in the constellation Hydra. Its surface temperature is 5350 K and is slightly enriched in heavy elements compared to the Sun, with a metallicity Fe/H index of 0.05. It is rich in carbon and depleted of oxygen. WASP-84's age is probably older than the Sun at 8.5 billion years. The star appears to have an anomalously small radius, which can be explained by the unusually high helium fraction or by it being very young. A multiplicity survey did not detect any stellar companions to WASP-84 as of 2015. Planetary system In 2013, one exoplanet, named WASP-84b, was discovered on a tight, circular orbit. The planet is a hot Jupiter that cannot have formed in its current location and likely migrated from elsewhere. The planetary orbit is well aligned with the equatorial plane of the star, misalignment being equal to 0.3°. Planetary equilibrium temperature is 832 K. In 2023, a second planet was discovered around WASP-84. This appears to be a dense rocky planet despite its high mass, comparable to Uranus. References Hydra (constellation) Planetary transit variables G-type main-sequence stars Planetary systems with two confirmed planets J08442570+0151361 BD+02 2056
WASP-84
Astronomy
290
208,305
https://en.wikipedia.org/wiki/Shyness
Shyness (also called diffidence) is the feeling of apprehension, lack of comfort, or awkwardness especially when a person is around other people. This commonly occurs in new situations or with unfamiliar people; a shy person may simply opt to avoid these situations. Although shyness can be a characteristic of people who have low self-esteem, the primary defining characteristic of shyness is a fear of what other people will think of a person's behavior. This fear of negative reactions such as being mocked, humiliated or patronized, criticized or rejected can cause a shy person to retreat. Stronger forms of shyness can be referred to as social anxiety or social phobia. Origins The initial cause of shyness varies. Scientists believe that they have located genetic data supporting the hypothesis that shyness is, at least, partially genetic. However, there is also evidence that suggests the environment in which a person is raised can also be responsible for their shyness. This includes child abuse, particularly emotional abuse such as ridicule. Shyness can originate after a person has experienced a physical anxiety reaction; at other times, shyness seems to develop first and then later causes physical symptoms of anxiety. Shyness differs from social anxiety, which is a narrower, often depression-related psychological condition including the experience of fear, apprehension or worrying about being evaluated by others in social situations to the extent of inducing panic. Shyness may come from genetic traits, the environment in which a person is raised and personal experiences. Shyness may be a personality trait or can occur at certain stages of development in children. Genetics and heredity Shyness is often seen as a hindrance to people and their development. The cause of shyness is often disputed but it is found that fear is positively related to shyness, suggesting that fearful children are much more likely to develop being shy as opposed to children less fearful. Shyness can also be seen on a biological level as a result of an excess of cortisol. When cortisol is present in greater quantities, it is known to suppress an individual's immune system, making them more susceptible to illness and disease. The genetics of shyness is a relatively small area of research that has been receiving an even smaller amount of attention, although papers on the biological bases of shyness date back to 1988. Some research has indicated that shyness and aggression are related—through long and short forms of the gene DRD4, though considerably more research on this is needed. Further, it has been suggested that shyness and social phobia (the distinction between the two is becoming ever more blurred) are related to obsessive-compulsive disorder. As with other studies of behavioral genetics, the study of shyness is complicated by the number of genes involved in, and the confusion in defining, the phenotype. Naming the phenotype – and translation of terms between genetics and psychology — also causes problems. Several genetic links to shyness are current areas of research. One is the serotonin transporter promoter region polymorphism (5-HTTLPR), the long form of which has been shown to be modestly correlated with shyness in grade school children. Previous studies had shown a connection between this form of the gene and both obsessive-compulsive disorder and autism. 
Mouse models have also been used, to derive genes suitable for further study in humans; one such gene, the glutamic acid decarboxylase gene (which encodes an enzyme that functions in GABA synthesis), has so far been shown to have some association with behavioral inhibition. Another gene, the dopamine D4 receptor gene (DRD4) exon III polymorphism, had been the subject of studies in both shyness and aggression and is currently the subject of studies on the "novelty seeking" trait. A 1996 study of anxiety-related traits (shyness being one of these) remarked that, "Although twin studies have indicated that individual variation in measures of anxiety-related personality traits is 40-60% heritable, none of the relevant genes has yet been identified", and that "10 to 15 genes might be predicted to be involved" in the anxiety trait. Progress has been made since then, especially in identifying other potential genes involved in personality traits, but there has been little progress made towards confirming these relationships. The long version of the 5-HTT gene-linked polymorphic region (5-HTTLPR) is now postulated to be correlated with shyness, but in the 1996 study, the short version was shown to be related to anxiety-based traits. Thalia Eley, professor of developmental behavioural genetics at King's College London, argues that only about 30% of shyness as a trait is genetically inherited, while the rest emerges as a response to the environment. As a symptom of mercury poisoning Excessive shyness, embarrassment, self-consciousness and timidity, social-phobia and lack of self-confidence are also components of erethism, which is a symptom complex that appears in cases of mercury poisoning. Prenatal development The prevalence of shyness in some children can be linked to day length during pregnancy, particularly during the midpoint of prenatal development. An analysis of longitudinal data from children living at specific latitudes in the United States and New Zealand revealed a significant relationship between hours of day length during the midpoint of pregnancy and the prevalence of shyness in children. "The odds of being classified as shy were 1.52 times greater for children exposed to shorter compared to longer daylengths during gestation." In their analysis, scientists assigned conception dates to the children relative to their known birth dates, which allowed them to obtain random samples from children who had a mid-gestation point during the longest hours of the year and the shortest hours of the year (June and December, depending on whether the cohorts were in the United States or New Zealand). The longitudinal survey data included measurements of shyness on a five-point scale based on interviews with the families being surveyed, and children in the top 25th percentile of shyness scores were identified. The data revealed a significant co-variance between the children who presented as being consistently shy over a two-year period, and shorter day length during their mid-prenatal development period. "Taken together, these estimates indicate that about one out of five cases of extreme shyness in children can be associated with gestation during months of limited daylength." Low birth weights In recent years correlations between birth weight and shyness have been studied. Findings suggest that those born at low birth weights are more likely to be shy, risk-aversive and cautious compared to those born at normal birth weights. These results do not however imply a cause-and-effect relationship. 
Personality trait Shyness is most likely to occur during unfamiliar situations, though in severe cases it may hinder an individual in their most familiar situations and relationships as well. Shy people avoid the objects of their apprehension in order to keep from feeling uncomfortable and inept; thus, the situations remain unfamiliar and the shyness perpetuates itself. Shyness may fade with time; e.g., a child who is shy towards strangers may eventually lose this trait when older and become more socially adept. This often occurs by adolescence or young adulthood (generally around the age of 13). In some cases, though, it may become an integrated, lifelong character trait. Longitudinal data suggests that the three different personality types evident in infancy – easy, slow-to-warm-up, and difficult – tend to change as children mature. Extreme traits become less pronounced, and personalities evolve in predictable patterns over time. What has been proven to remain constant is the tendency to internalize or externalize problems. This relates to individuals with shy personalities because they tend to internalize their problems, or dwell on their problems internally instead of expressing their concerns, which leads to disorders like depression and anxiety. Humans experience shyness to different degrees and in different areas. Shyness can also be seen as an academic determinant. It has been determined that there is a negative relationship between shyness and classroom performance. As the shyness of an individual increased, classroom performance was seen to decrease. Shyness may involve the discomfort of difficulty in knowing what to say in social situations, or may include crippling physical manifestations of uneasiness. Shyness usually involves a combination of both symptoms, and may be quite devastating for the sufferer, in many cases leading them to feel that they are boring, or exhibit bizarre behavior in an attempt to create interest, alienating them further. Behavioral traits in social situations such as smiling, easily producing suitable conversational topics, assuming a relaxed posture and making good eye contact, may not be second nature for a shy person. Such people might only affect such traits by great difficulty, or they may even be impossible to display. Those who are shy are perceived more negatively, in cultures that value sociability, because of the way they act towards others. Shy individuals are often distant during conversations, which can result in others forming poor impressions of them and considering them stand-offish, egoist or snobbish. People who are not shy may be up-front, aggressive, or critical towards shy people in an attempt "to get them out of their shell". Even when an attempt to draw out a shy person is conducted in a kindly and well-intentioned manner the exercise may still backfire, as by focusing attention on the individual it increases their self-consciousness and sense of awkwardness. Concepts Versus introversion The term shyness may be implemented as a lay blanket-term for a family of related and partially overlapping afflictions, including timidity (apprehension in meeting new people), bashfulness and diffidence (reluctance in asserting oneself), apprehension and anticipation (general fear of potential interaction), or intimidation (relating to the object of fear rather than one's low confidence). 
Apparent shyness, as perceived by others, may simply be the manifestation of reservation or introversion, a character trait which causes an individual to voluntarily avoid excessive social contact or be terse in communication, but are not motivated or accompanied by discomfort, apprehension, or lack of confidence. Introversion is commonly mistaken for shyness. However, introversion is a personal preference, while shyness stems from distress. Rather, according to professor of psychology Bernardo J. Carducci, introverts choose to avoid social situations because they derive no reward from them or may find surplus sensory input overwhelming, whereas shy people may fear such situations. Research using the statistical techniques of factor analysis and correlation have found shyness overlaps mildly with both introversion and neuroticism (i.e., negative emotionality). Low societal acceptance of shyness or introversion may reinforce a shy or introverted individual's low self-confidence. Both shyness and introversion can outwardly manifest with socially withdrawn behaviors, such as tendencies to avoid social situations, especially when they are unfamiliar. A variety of research suggests that shyness and introversion possess clearly distinct motivational forces and lead to uniquely different personal and peer reactions and therefore cannot be described as theoretically the same, with Susan Cain's Quiet (2012) further discerning introversion as involving being differently social (preferring one-on-one or small group interactions) rather than being anti-social altogether. Research suggests that no unique physiological response, such as an increased heart beat, accompanies socially withdrawn behavior in familiar compared with unfamiliar social situations. But unsociability leads to decreased exposure to unfamiliar social situations and shyness causes a lack of response in such situations, suggesting that shyness and unsociability affect two different aspects of sociability and are distinct personality traits. In addition, different cultures perceive unsociability and shyness in different ways, leading to either positive or negative individual feelings of self-esteem. Collectivist cultures view shyness as a more positive trait related to compliance with group ideals and self-control, while perceiving chosen isolation (introverted behavior) negatively as a threat to group harmony; and because collectivist society accepts shyness and rejects unsociability, shy individuals develop higher self-esteem than introverted individuals. On the other hand, individualistic cultures perceive shyness as a weakness and a character flaw, while unsociable personality traits (preference to spend time alone) are accepted because they uphold the value of autonomy; accordingly, shy individuals tend to develop low self-esteem in Western cultures while unsociable individuals develop high self-esteem. Versus social phobia (social anxiety disorder) An extreme case of shyness is identified as a psychiatric illness, which made its debut as social phobia in DSM-III in 1980, but was then described as rare. By 1994, however, when DSM-IV was published, it was given a second, alternative name in parentheses (social anxiety disorder) and was now said to be relatively common, affecting between 3 and 13% of the population at some point during their lifetime. Studies examining shy adolescents and university students found that between 12 and 18% of shy individuals meet criteria for social anxiety disorder. 
Shyness affects people mildly in unfamiliar social situations where one feels anxiety about interacting with new people. Social anxiety disorder, on the other hand, is a strong irrational fear of interacting with people, or being in situations which may involve public scrutiny, because one feels overly concerned about being criticized if one embarrasses oneself. Physical symptoms of social phobia can include blushing, shortness of breath, trembling, increased heart rate, and sweating; in some cases, these symptoms are intense enough and numerous enough to constitute a panic attack. Shyness, on the other hand, may incorporate many of these symptoms, but at a lower intensity, infrequently, and does not interfere tremendously with normal living. Social versus behavioral inhibition Those considered shy are also said to be socially inhibited. Social inhibition is the conscious or unconscious constraint by a person of behavior of a social nature. In other words, social inhibition is holding back for social reasons. There are different levels of social inhibition, from mild to severe. Being socially inhibited is good when preventing one from harming another and bad when causing one to refrain from participating in class discussions. Behavioral inhibition is a temperament or personality style that predisposes a person to become fearful, distressed and withdrawn in novel situations. This personality style is associated with the development of anxiety disorders in adulthood, particularly social anxiety disorder. Misconceptions and negative aspects Many misconceptions/stereotypes about shy individuals exist in Western culture and negative peer reactions to "shy" behavior abound. This takes place because individualistic cultures place less value on quietness and meekness in social situations, and more often reward outgoing behaviors. Some misconceptions include viewing introversion and social phobia synonymous with shyness, and believing that shy people are less intelligent. Intelligence No correlation (positive or negative) exists between intelligence and shyness. Research indicates that shy children have a harder time expressing their knowledge in social situations (which most modern curricula utilize), and because they do not engage actively in discussions teachers view them as less intelligent. In line with social learning theory, an unwillingness to engage with classmates and teachers makes it more difficult for shy students to learn. Test scores, however, indicate that whereas shyness may limit academic engagement, it is unrelated to actual academic knowledge. Depending on the level of a teacher's own shyness, more indirect (vs. socially oriented) strategies may be used with shy individuals to assess knowledge in the classroom, and accommodations made. Observed peer evaluations of shy people during initial meeting and social interactions thereafter found that peers evaluate shy individuals as less intelligent during the first encounter. During subsequent interactions, however, peers perceived shy individuals' intelligence more positively. Benefits Thomas Benton claims that because shy people "have a tendency toward self-criticism, they are often high achievers, and not just in solitary activities like research and writing. Perhaps even more than the drive toward independent achievement, shy people long to make connections to others often through altruistic behavior." Susan Cain describes the benefits that shy people bring to society that US cultural norms devalue. 
Without characteristics that shy people bring to social interactions, such as sensitivity to the emotions of others, contemplation of ideas, and valuable listening skills, there would be no balance to society. In earlier generations, such as the 1950s, society perceived shyness as a more socially attractive trait, especially in women, indicating that views on shyness vary by culture. Sociologist Susie Scott challenged the interpretation and treatment of shyness as being pathological. "By treating shyness as an individual pathology, ... we forget that this is also a socially oriented state of mind that is socially produced and managed." She explores the idea that "shyness is a form of deviance: a problem for society as much as for the individual", and concludes that, to some extent, "we are all impostors, faking our way through social life". One of her interview subjects (self-defined as shy) puts this point of view even more strongly: "Sometimes I want to take my cue from the militant disabled lobbyists and say, 'hey, it's not MY problem, it's society's'. I want to be proud to be shy: on the whole, shys are probably more sensitive, and nicer people, than 'normals'. I shouldn't have to change: society should adapt to meet my needs." Different cultural views In cultures that value outspokenness and overt confidence, shyness can be perceived as weakness. To an unsympathetic observer, a shy individual may be mistaken as cold, distant, arrogant or aloof, which can be frustrating for the shy individual. However, in other cultures, shy people may be perceived as being thoughtful, intelligent, as being good listeners, and as being more likely to think before they speak. In cultures that value autonomy, shyness is often analyzed in the context of being a social dysfunction, and is frequently contemplated as a personality disorder or mental health issue. Some researchers are beginning to study comparisons between individualistic and collectivistic cultures, to examine the role that shyness might play in matters of social etiquette and achieving group-oriented goals. "Shyness is one of the emotions that may serve as behavioral regulators of social relationships in collectivistic cultures. For example, social shyness is evaluated more positively in a collectivistic society, but negatively evaluated in an individualistic society." In a cross-cultural study of Chinese and Canadian school children, researchers sought to measure several variables related to social reputation and peer relationships, including "shyness-sensitivity." Using peer nomination questionnaire, students evaluated their fellow students using positive and negative playmate nominations. "Shyness-sensitivity was significantly and negatively correlated with measures of peer acceptance in the Canadian sample. Inconsistent with Western results, it was found that items describing shyness-sensitivity were separated from items assessing isolation in the factor structure for the Chinese sample. Shyness-sensitivity was positively associated with sociability-leadership and with peer acceptance in the Chinese sample." Western perceptions In some Western cultures shyness-inhibition plays an important role in psychological and social adjustment. It has been found that shyness-inhibition is associated with a variety of maladaptive behaviors. Being shy or inhibited in Western cultures can result in rejection by peers, isolation and being viewed as socially incompetent by adults. 
However, research suggests that if social withdrawal is seen as a personal choice rather than the result of shyness, there are fewer negative connotations. British writer Arthur C. Benson felt shyness is not mere self-consciousness, but a primitive suspicion of strangers, the primeval belief that their motives are predatory, making shyness a sinister quality which needs to be uprooted. He believed the remedy was for the shy to frequent society so that familiarity would give them courage. He also claimed that too many shy adults take refuge in a critical attitude, engaging in brutal onslaughts on inoffensive persons. He felt that a better way was for the shy to be kind: to wonder what others need and like, to take an interest in what others do or talk about, to ask friendly questions, and to show sympathy. For Charles Darwin, shyness was an "odd state of mind", appearing to offer no benefit to our species, and since the 1970s the modern tendency in psychology has been to see shyness as pathology. However, evolutionary survival advantages of careful temperaments over adventurous temperaments in dangerous environments have also been recognized. Eastern perceptions In Eastern cultures, shyness-inhibition in school-aged children is seen as positive, and those who exhibit these traits are viewed well by peers and are accepted. They tend to be seen as competent by their teachers, to perform well in school and to show well-being. Shy individuals are also more likely to attain leadership status in school. Being shy or inhibited does not correlate with loneliness or depression as it does in the West. In Eastern cultures, being shy and inhibited is perceived as a sign of politeness, respectfulness, and thoughtfulness. Examples of shyness and inhibition In Hispanic cultures, shyness and inhibition with authority figures are common. For instance, Hispanic students may feel shy about being praised by teachers in front of others, because in these cultures students are rewarded in private with a touch, a smile, or a spoken word of praise. Hispanic students may seem shy when they are not. It is considered rude to excel over peers and siblings; therefore it is common for Hispanic students to be reserved in classroom settings. Adults also show reluctance to share personal matters about themselves with authority figures such as nurses and doctors. Cultures in which the community is closed and based on agriculture (Kenya, India, etc.) show lower social engagement than more open communities (United States, Okinawa, etc.), where interactions with peers are encouraged. Children in Mayan, Indian, Mexican, and Kenyan cultures are less expressive in their social styles during interactions, and they spend little time engaged in socio-dramatic activities. They are also less assertive in social situations. Self-expression and assertiveness in social interactions are related to shyness and inhibition in that when one is shy or inhibited, one exhibits little or no expressive tendency. Assertiveness works in the same way: being shy or inhibited lessens one's chances of being assertive because of a lack of confidence. In Italian culture, emotional expressiveness during interpersonal interaction is encouraged. From a young age, children engage in debates or discussions that encourage and strengthen social assertiveness. Independence and social competence during childhood are also promoted. Being inhibited is looked down upon, and those who show this characteristic are viewed negatively by their parents and peers. 
As in other cultures where shyness and inhibition are viewed negatively, the peers of shy and inhibited Italian children reject those who are socially fearful, cautious and withdrawn. These withdrawn and socially fearful children express loneliness and believe themselves to be lacking the social skills needed in social interactions. Intervention and treatment Psychological methods and pharmaceutical drugs are commonly used to treat shyness in individuals who feel crippled because of low self-esteem and psychological symptoms, such as depression or loneliness. According to research, early intervention methods that expose shy children to social interactions involving teamwork, especially team sports, decrease their anxiety in social interactions and increase their all-around self-confidence later on. Implementing such tactics could prove to be an important step in combating the psychological effects of shyness that make living a normal life difficult for anxious individuals. One important aspect of shyness is social skills development. If schools and parents implicitly assume children are fully capable of effective social interaction, social skills training is not given any priority (unlike reading and writing). As a result, shy students are not given an opportunity to develop their ability to participate in class and interact with peers. Teachers can model social skills and ask questions in a less direct and intimidating manner in order to gently encourage shy students to speak up in class and make friends with other children. See also Boldness Camera shyness Haya (Islam) People skills Selective mutism Avoidant personality disorder Highly sensitive person Medicalization of behaviors as illness References Further reading External links Lynn Henderson and Philip Zimbardo: "Shyness". Entry in Encyclopedia of Mental Health, Academic Press, San Diego, CA (in press) Liebowitz Social Anxiety Scale (LSAS-SR) Emotions Interpersonal relationships
Shyness
Biology
5,025
9,877,825
https://en.wikipedia.org/wiki/Leonard%20Ornstein
Leonard Salomon Ornstein (12 November 1880 in Nijmegen, the Netherlands – 20 May 1941 in Utrecht, the Netherlands) was a Dutch physicist. Biography Ornstein studied theoretical physics with Hendrik Antoon Lorentz at the University of Leiden. He subsequently carried out Ph.D. research under the supervision of Lorentz, concerning an application of the statistical mechanics of Gibbs to molecular problems. In 1914, Ornstein was appointed professor of physics, as successor of Peter Debye, at the University of Utrecht. Among his doctoral students was Jan Frederik Schouten. In 1922, Ornstein became director of the Physical Laboratory (Fysisch Laboratorium) and extended his research interests to experimental subjects. His precision measurements concerning intensities of spectral lines brought the Physical Laboratory in the international limelight. Ornstein is also remembered for the Ornstein-Zernike theory (named after himself and Frederik Zernike) concerning correlation functions, and the Ornstein-Uhlenbeck process (named after Ornstein and George Uhlenbeck), a stochastic process. Together with Gilles Holst, director of the Philips Physics Laboratory (Philips Natuurkundig Laboratorium), Ornstein was the driving force behind establishing the Netherlands Physical Society (Nederlandse Natuurkundige Vereniging, NNV) in 1921. From 1939 until November 1940, he was chairman of this society. From 1918 until 1922, he was chairman of the Dutch Zionist Society (Nederlandse Zionistische Vereniging). In 1929, he became a member of the Royal Netherlands Academy of Arts and Sciences. Immediately after the May 1940 German conquest and occupation of the Netherlands in World War II (see Battle of the Netherlands), a friend from the United States of America, the astronomer Peter van de Kamp, offered to bring Ornstein and his family to America. However, Ornstein did not accept this offer, since, as he put it, he would not leave his laboratory in Utrecht. The Nazis targeted Ornstein for his Jewish heritage and the university dismissed him in September 1940, barring him from entering his laboratory. In November 1940, the university's dismissal became official. On 29 November 1940, Ornstein withdrew his membership of the Netherlands Physical Society. During this period he increasingly distanced himself from public life, to the degree that he no longer wished to receive guests at home. Ornstein died on 20 May 1941, a year after German occupation, and six months after being barred from University. One of the five buildings of the Department of Physics at the University of Utrecht is named the Leonard S. Ornstein Laboratory in his honor. Publications Toepassing der statistische mechanica van Gibbs op moleculair-theoretische vraagstukken, Ph.D. thesis, 26 March 1908 Problemen der kinetische theorie van de stof, 1915 Strahlungsgesetz und Intensität von Mehrfachlinien, 1924 Intensität der Komponenten im Zeemaneffekt, 1924 On the theory of the Brownian motion, 1930 De beteekenis der natuurkunde voor cultuur en maatschappij, 1932 See also Uithof Virial coefficient References External links Snelders H.A.M. (2007-02-01). "Ornstein, Leonard Salomon (1880-1941)", in: Biografisch Woordenboek van Nederland. Retrieved on 2007-03-30. (in Dutch). 
The Ornstein-Zernike equation in the canonical ensemble 1880 births 1941 deaths 20th-century Dutch physicists Probability theorists Leiden University alumni Academic staff of Utrecht University Jewish Dutch scientists Members of the Royal Netherlands Academy of Arts and Sciences People from Nijmegen Statistical physicists
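The article above mentions the Ornstein-Uhlenbeck process only in passing, as "a stochastic process". For readers unfamiliar with it, here is a one-line sketch in the usual modern notation; the notation is a standard textbook choice, not taken from the article:

```latex
\mathrm{d}X_t = \theta\,(\mu - X_t)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t ,
\qquad \theta > 0,\ \sigma > 0 ,
```

where W_t is a Wiener process (mathematical Brownian motion). The drift term pulls X_t back toward the long-run mean mu, which is why the process is described as mean-reverting; it goes back to the 1930 paper "On the theory of the Brownian motion" listed among Ornstein's publications above.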
Leonard Ornstein
Physics
794
35,538,957
https://en.wikipedia.org/wiki/Mixture%20theory
Mixture theory is used to model multiphase systems using the principles of continuum mechanics generalised to several interpenetrable continua. The basic assumption is that, at any instant of time, all phases are present at every material point, and momentum and mass balance equations are postulated. Like other models, mixture theory requires constitutive relations to close the system of equations. Krzysztof Wilmanski extended the model by introducing a balance equation of porosity. References Rheology Scientific modelling
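As a concrete illustration of the balance equations that the description above says are postulated for each phase, here is a minimal sketch in generic mixture-theory notation (volume fraction, intrinsic density, velocity, partial stress, and exchange terms for each constituent); the symbols are common textbook choices assumed for this sketch, not notation taken from the article:

```latex
% Mass balance for constituent \alpha, with mass exchange \hat{c}_\alpha between phases:
\frac{\partial (\phi_\alpha \rho_\alpha)}{\partial t}
  + \nabla \cdot (\phi_\alpha \rho_\alpha \mathbf{v}_\alpha) = \hat{c}_\alpha ,
\qquad \sum_\alpha \hat{c}_\alpha = 0 .

% Momentum balance for constituent \alpha, with interaction force \hat{\mathbf{p}}_\alpha
% (momentum transfer associated with mass exchange is neglected in this sketch):
\phi_\alpha \rho_\alpha \frac{\mathrm{D}_\alpha \mathbf{v}_\alpha}{\mathrm{D} t}
  = \nabla \cdot \mathbf{T}_\alpha + \phi_\alpha \rho_\alpha \mathbf{b} + \hat{\mathbf{p}}_\alpha ,
\qquad \sum_\alpha \hat{\mathbf{p}}_\alpha = \mathbf{0} .

% The phases jointly fill every material point (saturation constraint):
\sum_\alpha \phi_\alpha = 1 .
```

Constitutive relations, for example for the partial stresses and the interaction forces (Darcy-type drag is a common choice for fluid-solid mixtures), are what close this system, as noted above.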
Mixture theory
Chemistry
106
28,811,075
https://en.wikipedia.org/wiki/LBG-2377
LBG-2377 is, as of 2008, the most distant galaxy merger discovered, at a distance of 11.4 billion light years. This galaxy merger is so distant that the universe was in its infancy when its light was emitted. It is expected that this galaxy proto-cluster will merge to form a brightest cluster galaxy and become the core of a larger galaxy cluster. Discovery Observations were conducted with the Keck Telescope in Hawaii by Jeff Cooke, a McCue Postdoctoral Fellow in physics and astronomy at UCI. While looking for single galaxies, Cooke found something that at first appeared to be a single bright object. However, further analysis of the wavelengths of the emitted light showed that it was in fact three merging galaxies, probably accompanied by two smaller ones. See also List of galaxies Antennae Galaxies Galaxy cluster References External links More about LBG-2377 Interacting galaxies Hercules (constellation)
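The statement that the universe was "in its infancy" when this light was emitted follows from simple arithmetic on the quoted figure; a rough sketch, which treats the 11.4 billion light years as a light-travel (lookback) distance and assumes a present age of the universe of roughly 13.8 billion years (an assumed standard value, not stated in the article):

```latex
t_{\text{emission}} \approx t_{\text{now}} - t_{\text{lookback}}
                    \approx 13.8\ \text{Gyr} - 11.4\ \text{Gyr}
                    \approx 2.4\ \text{Gyr}
```

That is, the light left LBG-2377 when the universe was only a couple of billion years old, well under a fifth of its current age.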
LBG-2377
Astronomy
173
57,954,327
https://en.wikipedia.org/wiki/Data%20Transfer%20Project
The Data Transfer Project (DTP) is an open-source initiative that enables data portability between multiple online platforms. The project was launched and introduced by Google on July 20, 2018, and its current partners include Facebook, Microsoft, Twitter, and Apple. Background The project was formed by the Google Data Liberation Front in 2017, with the aim of providing a platform that would allow individuals to move their online data between different platforms without needing to download and re-upload it. This works by extracting data through the APIs that the online platforms already provide and translating it into a common format so that it is compatible with other platforms (see the illustrative sketch below). The Data Transfer Project is currently used as part of Google Takeout and of a similar Facebook feature (called "Access your information"), allowing the two personal data downloading services to be compatible with each other. This allows data to be easily transferred between the two platforms. On July 20, 2018, the joint project was announced. The source code, which has been uploaded to GitHub, was written mainly by Google and Microsoft engineers. On July 30, 2019, Apple announced that it would be joining the project, allowing data portability in iCloud. Implementations On December 2, 2019, Facebook announced the ability for users to transfer photos and videos to Google Photos, originally available only in a select few countries. This expanded over the following months, and on June 4, 2020, Facebook announced full global availability of this feature. See also Data portability Google Takeout References External links Google Internet properties established in 2018 Free network-related software Interoperability
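To make the export, translate-to-a-common-format, import pattern described above concrete, here is a minimal, hypothetical sketch in Java (the language of the project's published source code). The interface, record, and class names are illustrative assumptions for this sketch only, not the Data Transfer Project's actual APIs.

```java
import java.util.List;

// A hypothetical common data model: every provider's photo format is mapped into this one type.
record CommonPhoto(String title, String mimeType, byte[] bytes) {}

// Hypothetical adapter interfaces; the real project defines its own exporter/importer abstractions.
interface Exporter {
    List<CommonPhoto> exportPhotos(String userToken); // pull data via the source platform's API
}

interface Importer {
    void importPhotos(String userToken, List<CommonPhoto> photos); // push data via the destination's API
}

// A toy in-memory "platform" standing in for a real service, so the example runs without network calls.
class FakePlatform implements Exporter, Importer {
    private final java.util.ArrayList<CommonPhoto> store = new java.util.ArrayList<>();

    FakePlatform(List<CommonPhoto> seed) { store.addAll(seed); }

    @Override public List<CommonPhoto> exportPhotos(String userToken) { return List.copyOf(store); }

    @Override public void importPhotos(String userToken, List<CommonPhoto> photos) { store.addAll(photos); }

    int photoCount() { return store.size(); }
}

public class TransferDemo {
    // The core of the portability idea: export from one service, import into another,
    // with the shared data model doing the "translation" between otherwise incompatible platforms.
    static void transfer(Exporter source, Importer destination, String token) {
        List<CommonPhoto> photos = source.exportPhotos(token);
        destination.importPhotos(token, photos);
    }

    public static void main(String[] args) {
        FakePlatform serviceA = new FakePlatform(List.of(
                new CommonPhoto("holiday", "image/jpeg", new byte[]{1, 2, 3})));
        FakePlatform serviceB = new FakePlatform(List.of());

        transfer(serviceA, serviceB, "user-token");
        System.out.println("Photos now on service B: " + serviceB.photoCount()); // prints 1
    }
}
```

The design point this sketch illustrates is why a common data model matters: each platform only needs one adapter pair (export and import) against the shared model, rather than a separate integration with every other platform.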
Data Transfer Project
Engineering
330
26,196,010
https://en.wikipedia.org/wiki/Jackal%27s%20horn
The Jackal's horn is a mythical bony cone-shaped excrescence which is said to occasionally grow on the skulls of golden jackals. It is associated with magical powers in South Asia. Despite the lack of proof for its existence, it is still widely believed to be real. The horn supposedly measures about half an inch in length and is concealed by fur. In the 1800s, the natives of Sri Lanka called this growth narric-comboo, and both Tamil and Sinhalese people traditionally believe it to be a potent amulet which can grant wishes and reappear to its owner of its own accord when lost. Some Sinhalese believe that the horn can grant the holder invulnerability in any lawsuit. According to healers and witch doctors in Nepal, a jackal horn can be used to win in gambling bouts and to ward off evil spirits. The Tharu people of Bardia (Nepal) believe that jackal horns are retractable, and only protrude when jackals howl in chorus. A hunter who manages to extract the horn will place it in a silver casket of vermilion powder, which is thought to give the object sustenance. The Tharu believe that the horn can grant the owner the ability to see in the dark. In some areas, the horn is called Seear Singhi or "Geedar Singhi" ("Geedar" being the Urdu word for jackal, with the root words being the Persian "Seaah", meaning black, and "Singh", which means horn in Hindi and Urdu), and it is tied to the necks of children. The horn is sometimes traded by low-caste people, though it is thought that the items sold are in fact pieces of deer antler sold to the credulous. In Bengal, it is believed that when placed within a safe, jackal horns can increase the amount of money within it three-fold. Some criminal elements of the Bengal Sansi will use fake jackal horns to lull unwitting people into trusting them, and will offer to place these horns into their victim's safe in order to discover its location. References Golden jackal Animal products Amulets Superstitions of India Superstitions of Nepal Culture of Sri Lanka
Jackal's horn
Chemistry
460
38,115
https://en.wikipedia.org/wiki/Crucifixion
Crucifixion is a method of capital punishment in which the condemned is tied or nailed to a large wooden cross, beam or stake and left to hang until eventual death. It was used as a punishment by the Persians, Carthaginians, and Romans, among others. Crucifixion has been used in some countries as recently as the 21st century. The crucifixion of Jesus is central to Christianity and the cross (in Roman Catholicism usually depicted with Jesus nailed to it) is Christianity's preeminent religious symbol. His death is the most prominent example of crucifixion in history, which in turn has led many cultures in the modern world to associate the execution method closely with Jesus and with Christian spirituality. Other figures in Christianity are traditionally believed to have undergone crucifixion as well, including the apostle Saint Peter, who was crucified upside-down. Today, limited numbers of Christians voluntarily undergo non-lethal crucifixions as a devotional practice. Terminology Ancient Greek has two verbs for crucify: (), from (which in modern Greek only means "cross" but which in antiquity was used for any kind of wooden pole, pointed or blunt, bare or with attachments) and () "crucify on a plank", together with ( "impale"). In earlier pre-Roman Greek texts usually means "impale". The Greek used in the Christian New Testament uses four verbs, three of them based upon (), usually translated "cross". The most common term is (), "to crucify", occurring 46 times; (), "to crucify with" or "alongside" occurs five times, while (), "to crucify again" occurs only once at the Epistle to the Hebrews 6:6. (), "to fix or fasten to, impale, crucify" occurs only once, at the Acts of the Apostles 2:23. The English term cross derives from the Latin word , which classically referred to a tree or any construction of wood used to hang criminals as a form of execution. The term later came to refer specifically to a cross. The related term crucifix derives from the Latin or , past participle passive of or , meaning "to crucify" or "to fasten to a cross". Detail Cross shape In the Roman Empire, the gibbet (instrument of execution) for crucifixions took on many shapes. Seneca the Younger () states: "I see crosses there, not just of one kind but made in many different ways: some have their victims with head down to the ground; some impale their private parts; others stretch out their arms on the gibbet." According to Josephus, during Emperor Titus's Siege of Jerusalem (70 CE), Roman soldiers nailed innumerable Jewish captives to crosses in various ways. At times the gibbet was only one vertical stake, called in Latin crux simplex. This was the simplest available construction for torturing and killing the condemned. Frequently, however, there was a cross-piece attached either at the top to give the shape of a T (crux commissa) or just below the top, as in the form most familiar in Christian symbolism (crux immissa). The most ancient image of a Roman crucifixion depicts an individual on a cross. It is a graffito found in a taberna (hostel for wayfarers) in Puteoli, dating to the time of Trajan or Hadrian (late 1st century to early 2nd century CE). Writers in the 2nd century who speak of the execution cross describe the crucified person's arms as outstretched, not attached to a single stake: Lucian speaks of Prometheus as crucified "above the ravine with his hands outstretched". He also says that the shape of the letter Τ (the Greek letter tau) was that of the wooden instrument used for crucifying. 
Artemidorus, another writer of the same period, says that a cross is made of posts (plural) and nails and that the arms of the crucified are outstretched. Speaking of the generic execution cross, Irenaeus (), a Christian writer, describes it as composed of an upright and a transverse beam, sometimes with a small projection in the upright. New Testament writings about the crucifixion of Jesus do not specify the shape of that cross, but subsequent early writings liken it to the letter T. According to William Barclay, because tau is shaped exactly like the crux commissa and represented the number 300, "wherever the fathers came across the number 300 in the Old Testament they took it to be a mystical prefiguring of the cross of Christ". The earliest example, written around the late 1st century, is the Epistle of Barnabas, with another example being Clement of Alexandria (c. 215). Justin Martyr () sees the cross of Christ represented in the crossed spits used to roast the Passover lamb. Nail placement In popular depictions of the crucifixion of Jesus (possibly because in translations of the wounds are described as being "in his hands"), Jesus is shown with nails in his hands. But in Greek the word "χείρ", usually translated as "hand", could refer to the entire portion of the arm below the elbow, and to denote the hand as distinct from the arm some other word could be added, as "ἄκρην οὔτασε χεῖρα" (he wounded the end of the χείρ, i.e., "he wounded him in the hand". A possibility that does not require tying is that the nails were inserted just above the wrist, through the soft tissue, between the two bones of the forearm (the radius and the ulna). A foot-rest (suppedaneum) attached to the cross, perhaps for the purpose of taking the person's weight off the wrists, is sometimes included in representations of the crucifixion of Jesus but is not discussed in ancient sources. Some scholars interpret the Alexamenos graffito (), the earliest surviving depiction of the crucifixion, as including such a foot-rest. Ancient sources also mention the sedile, a small seat attached to the front of the cross, about halfway down, which could have served a similar purpose. In 1968, archaeologists discovered at Giv'at ha-Mivtar in northeast Jerusalem the remains of one Jehohanan, who was crucified in the 1st century CE. The remains included a heel bone with a nail driven through it from the side. The tip of the nail was bent, perhaps because of striking a knot in the upright beam, which prevented it being extracted from the foot. A first inaccurate account of the length of the nail led some to believe that it had been driven through both heels, suggesting that the man had been placed in a sort of sidesaddle position, but the true length of the nail, , suggests instead that in this case of crucifixion the heels were nailed to opposite sides of the upright. As of 2011, the skeleton from Giv'at ha-Mivtar was the only confirmed example of ancient crucifixion in the archaeological record. A second set of skeletal remains with holes transverse through the calcaneum heel bones, found in 2007, could be a second archaeological record of crucifixion. The find in Cambridgeshire (United Kingdom) in November 2017 of the remains of the heel bone of a (probably enslaved) man with an iron nail through it, is believed by the archeologists to confirm the use of this method in ancient Rome. 
Cause of death The length of time required to reach death could range from hours to days depending on method, the victim's health, and the environment. A theory attributed to Pierre Barbet held that, when the whole body weight was supported by the stretched arms, the typical cause of death was asphyxiation. He wrote that the condemned would have severe difficulty inhaling, due to hyper-expansion of the chest muscles and lungs. The condemned would therefore have to draw himself up by the arms, leading to exhaustion, or have his feet supported by tying or by a wood block. When no longer able to lift himself, the condemned would die within a few minutes. This theory has been supported by multiple scholars. Other scholars, including Frederick Zugibe, posit other causes of death. Zugibe suspended test subjects with their arms at 60° to 70° from the vertical. The test subjects had no difficulty breathing during experiments, but did suffer rapidly increasing pain, which is consistent with the Roman use of crucifixion to achieve a prolonged, agonizing death. However, Zugibe's positioning of the test subjects necessarily did not precisely replicate the conditions of historical crucifixion. In 2023, an analysis of medical literature concluded that asphyxiation is discredited as the primary cause of death from crucifixion. There is scholarly support for several possible non-asphyxiation causes of death: heart failure or arrhythmia, hypovolemic shock, acidosis, dehydration, and pulmonary embolism. Death could result from any combination of those factors, or from other causes, including sepsis following infection due to the wounds caused by the nails or by the scourging that often preceded crucifixion, or from stabbing by the guards. Survival Since death does not follow immediately on crucifixion, survival after a short period of crucifixion is possible, as in the case of those who choose each year as a devotional practice to be non-lethally crucified. There is an ancient record of one person who survived a crucifixion that was intended to be lethal, but was interrupted. Josephus recounts:I saw many captives crucified, and remembered three of them as my former acquaintances. I was very sorry at this in my mind, and went with tears in my eyes to Titus, and told him of them; so he immediately commanded them to be taken down, and to have the greatest care taken of them, in order to their recovery; yet two of them died under the physician's hands, while the third recovered.Josephus gives no details of the method or duration of the crucifixion of his three friends. History and religious texts Pre-Roman states Crucifixion (or impalement), in one form or another, was used by Persians, Carthaginians, and among the Greeks, the Macedonians. The Greeks were generally opposed to performing crucifixions. However, in his Histories, ix.120–122, Greek writer Herodotus describes the execution of a Persian general at the hands of Athenians in about 479 BC: "They nailed him to a plank and hung him up ... this Artayctes who suffered death by crucifixion." The Commentary on Herodotus by How and Wells remarks: "They crucified him with hands and feet stretched out and nailed to cross-pieces; cf. vii.33. This act, supposedly unusual on the part of Greeks, may be explained by the enormity of the outrage or by Athenian deference to local feeling." Some Christian theologians, beginning with Paul of Tarsus writing in Galatians 3:13, have interpreted an allusion to crucifixion in Deuteronomy . 
This reference is to being hanged from a tree, and may be associated with lynching or traditional hanging. However, Rabbinic law limited capital punishment to just 4 methods of execution: stoning, burning, strangulation, and decapitation, while the passage in Deuteronomy was interpreted as an obligation to hang the corpse on a tree as a form of deterrence. The fragmentary Aramaic Testament of Levi (DSS 4Q541) interprets in column 6: "God ... (partially legible)-will set ... right errors. ... (partially legible)-He will judge ... revealed sins. Investigate and seek and know how Jonah wept. Thus, you shall not destroy the weak by wasting away or by ... (partially legible)-crucifixion ... Let not the nail touch him." The Jewish king Alexander Jannaeus, king of Judea from 103 to 76 BCE, crucified 800 rebels, said to be Pharisees, in the middle of Jerusalem. Alexander the Great is reputed to have crucified 2,000 survivors from his siege of the Phoenician city of Tyre, as well as the doctor who unsuccessfully treated Alexander's lifelong friend Hephaestion. Some historians have also conjectured that Alexander crucified Callisthenes, his official historian and biographer, for objecting to Alexander's adoption of the Persian ceremony of royal adoration. In Carthage, crucifixion was an established mode of execution, which could even be imposed on generals for suffering a major defeat. The oldest crucifixion may be a post-mortem one mentioned by Herodotus. Polycrates, the tyrant of Samos, was put to death in 522 BCE by the Persians, and his dead body was then crucified. Ancient Rome History The Greek and Latin words corresponding to "crucifixion" applied to many different forms of painful execution, including being impaled on a stake, or affixed to a tree, upright pole (a crux simplex), or to a combination of an upright (in Latin, stipes) and a crossbeam (in Latin, patibulum). Seneca the Younger wrote: "I see crosses there, not just of one kind but made in many different ways: some have their victims with head down to the ground; some impale their private parts; others stretch out their arms on the gibbet". Crucifixion was generally performed within Ancient Rome as a means to dissuade others from perpetrating similar crimes, with victims sometimes left on display after death as a warning. Crucifixion was intended to provide a death that was particularly slow, painful (hence the term excruciating, literally "out of crucifying"), gruesome, humiliating, and public, using whatever means were most expedient for that goal. Crucifixion methods varied considerably with location and period. One hypothesis suggested that the Ancient Roman custom of crucifixion may have developed out of a primitive custom of arbori suspendere—hanging on an arbor infelix ("inauspicious tree") dedicated to the gods of the nether world. This hypothesis is rejected by William A. Oldfather, who shows that this form of execution (the supplicium more maiorum, punishment in accordance with the custom of our ancestors) consisted of suspending someone from a tree, not dedicated to any particular gods, and flogging him to death. Tertullian mentions a 1st-century AD case in which trees were used for crucifixion, but Seneca the Younger earlier used the phrase infelix lignum (unfortunate wood) for the transom ("patibulum") or the whole cross. Plautus and Plutarch are the two main sources for accounts of criminals carrying their own patibula to the upright stipes. 
Notorious mass crucifixions followed the Third Servile War in 73–71 BCE (the slave rebellion led by Spartacus), and other Roman civil wars in the 2nd and 1st centuries BCE. Crassus ordered the crucifixion of 6,000 of Spartacus' followers who had been hunted down and captured after the slave defeat in battle. Josephus says that in the siege that led to the destruction of Jerusalem in AD 70, the Roman soldiers crucified Jewish captives before the walls of Jerusalem and out of anger and hatred amused themselves by nailing them in different positions. In some cases, the condemned was forced to carry the crossbeam to the place of execution. A whole cross would weigh well over 135 kg (300 lb), but the crossbeam would not be as burdensome, weighing around 45 kg (100 lb). The Roman historian Tacitus records that the city of Rome had a specific place for carrying out executions, situated outside the Esquiline Gate, and had a specific area reserved for the execution of slaves by crucifixion. Upright posts would presumably be fixed permanently in that place, and the crossbeam, with the condemned person perhaps already nailed to it, would then be attached to the post. The person executed may have been attached to the cross by rope, though nails and other sharp materials are mentioned in a passage by Josephus, where he states that at the Siege of Jerusalem (70 CE), "the soldiers out of rage and hatred, nailed those they caught, one after one way, and another after another, to the crosses, by way of jest". Objects used in the crucifixion of criminals, such as nails, were sought as amulets with perceived medicinal qualities. While a crucifixion was an execution, it was also a humiliation, by making the condemned as vulnerable as possible. Although artists have traditionally depicted the figure on a cross with a loin cloth or a covering of the genitals, the person being crucified was usually stripped naked. Writings by Seneca the Younger state some victims suffered a stick forced upwards through their groin. Despite its frequent use by the Romans, the horrors of crucifixion did not escape criticism by some eminent Roman orators. Cicero, for example, described crucifixion as "a most cruel and disgusting punishment", and suggested that "the very mention of the cross should be far removed not only from a Roman citizen's body, but from his mind, his eyes, his ears". Elsewhere he says, "It is a crime to bind a Roman citizen; to scourge him is a wickedness; to put him to death is almost parricide. What shall I say of crucifying him? So guilty an action cannot by any possibility be adequately expressed by any name bad enough for it." Frequently, the legs of the person executed were broken or shattered with an iron club, an act called crurifragium, which was also frequently applied without crucifixion to slaves. This act hastened the death of the person but was also meant to deter those who observed the crucifixion from committing offenses. Constantine the Great, the first Christian emperor, abolished crucifixion in the Roman Empire in 337 out of veneration for Jesus Christ, its most famous victim. Society and law Crucifixion was intended to be a gruesome spectacle: the most painful and humiliating death imaginable. It was used to punish slaves, pirates, and enemies of the state. It was originally reserved for slaves (hence still called "supplicium servile" by Seneca), and later extended to citizens of the lower classes (humiliores). 
The victims of crucifixion were stripped naked and put on public display while they were slowly tortured to death so that they would serve as a spectacle and an example. According to Roman law, if a slave killed his or her owner, all of the owner's slaves would be crucified as punishment. Both men and women were crucified. Tacitus writes in his Annals that when Lucius Pedanius Secundus was murdered by a slave, some in the Senate tried to prevent the mass crucifixion of four hundred of his slaves because there were so many women and children, but in the end tradition prevailed and they were all executed. Although not conclusive evidence for female crucifixion by itself, the most ancient image of a Roman crucifixion may depict a crucified woman, whether real or imaginary. Crucifixion was such a gruesome and humiliating way to die that the subject was somewhat of a taboo in Roman culture, and few crucifixions were specifically documented. One of the only specific female crucifixions that are documented is that of Ida, a freedwoman (former slave) who was crucified by order of Tiberius. Process Crucifixion was typically carried out by specialized teams, consisting of a commanding centurion and his soldiers. First, the condemned would be stripped naked and scourged. This would cause the person to lose a large amount of blood, and approach a state of shock. The convict then usually had to carry the horizontal beam (patibulum in Latin) to the place of execution, but not necessarily the whole cross. During the death march, the prisoner, probably still nude after the scourging, would be led through the most crowded streets bearing a titulus – a sign board proclaiming the prisoner's name and crime. Upon arrival at the place of execution, selected to be especially public, the convict would be stripped of any remaining clothing, then nailed to the cross naked. If the crucifixion took place in an established place of execution, the vertical beam (stipes) might be permanently embedded in the ground. In this case, the condemned person's wrists would first be nailed to the patibulum, and then he or she would be hoisted off the ground with ropes to hang from the elevated patibulum while it was fastened to the stipes. Next the feet or ankles would be nailed to the upright stake. The 'nails' were tapered iron spikes approximately long, with a square shaft across. The titulus would also be fastened to the cross to notify onlookers of the person's name and crime as they hung on the cross, further maximizing the public impact. There may have been considerable variation in the position in which prisoners were nailed to their crosses and how their bodies were supported while they died. Seneca the Younger recounts: "I see crosses there, not just of one kind but made in many different ways: some have their victims with head down to the ground; some impale their private parts; others stretch out their arms on the gibbet." One source claims that for Jews (apparently not for others), a man would be crucified with his back to the cross as is traditionally depicted, while a woman would be nailed facing her cross, probably with her back to onlookers, or at least with the stipes providing some semblance of modesty if viewed from the front. Such concessions were "unique" and not made outside a Jewish context. 
Several sources mention some sort of seat fastened to the stipes to help support the person's body, thereby prolonging the person's suffering and humiliation by preventing the asphyxiation caused by hanging without support. Justin Martyr calls the seat a cornu, or "horn," leading some scholars to believe it may have had a pointed shape designed to torment the crucified person. This would be consistent with Seneca's observation of victims with their private parts impaled. In Roman-style crucifixion, the condemned could take up to a few days to die, but death was sometimes hastened by human action. "The attending Roman guards could leave the site only after the victim had died, and were known to precipitate death by means of deliberate fracturing of the tibia and/or fibula, spear stab wounds into the heart, sharp blows to the front of the chest, or a smoking fire built at the foot of the cross to asphyxiate the victim." The Romans sometimes broke the prisoner's legs to hasten death and usually forbade burial. On the other hand, the person was often deliberately kept alive as long as possible to prolong their suffering and humiliation, so as to provide the maximum deterrent effect. Corpses of the crucified were typically left on the crosses to decompose and be eaten by animals. In Islam Islam spread in a region where many societies, including the Persian and Roman empires, had used crucifixion to punish traitors, rebels, robbers and criminal slaves. The Qur'an refers to crucifixion in six passages, of which the most significant for later legal developments is verse 5:33: The corpus of hadith provides contradictory statements about the first use of crucifixion under Islamic rule, attributing it variously to Muhammad himself (for murder and robbery of a shepherd) or to the second caliph Umar (applied to two slaves who murdered their mistress). Classical Islamic jurisprudence applies the verse 5:33 chiefly to highway robbers, as a hadd (scripturally prescribed) punishment. The preference for crucifixion over the other punishments mentioned in the verse or for their combination (which Sadakat Kadri has called "Islam's equivalent of the hanging, drawing and quartering that medieval Europeans inflicted on traitors") is subject to "complex and contested rules" in classical jurisprudence. Most scholars required crucifixion for highway robbery combined with murder, while others allowed execution by other methods for this scenario. The main methods of crucifixion are: Exposure of the culprit's body after execution by another method, ascribed to "most scholars" and in particular to Ibn Hanbal and Al-Shafi'i; or Hanbalis and Shafi'is. Crucifying the culprit alive, then executing him with a lance thrust or another method, ascribed to Malikis, most Hanafis and most Twelver Shi'is; the majority of the Malikis; Malik, Abu Hanifa, and al-Awza'i; or Malikis, Hanafis, and Shafi'is. Crucifying the culprit alive and sparing his life if he survives for three days, ascribed to Shiites. Most classical jurists limit the period of crucifixion to three days. Crucifixion involves affixing or impaling the body to a beam or a tree trunk. Various minority opinions also prescribed crucifixion as punishment for a number of other crimes. Cases of crucifixion under most of the legally prescribed categories have been recorded in the history of Islam, and prolonged exposure of crucified bodies was especially common for political and religious opponents. 
Japan Crucifixion was introduced into Japan during the Sengoku period (1467–1573), after a 350-year period with no capital punishment. It is believed to have been suggested to the Japanese by the introduction of Christianity into the region, although similar types of punishment had been used as early as the Kamakura period. Known in Japanese as , crucifixion was used in Japan before and during the Tokugawa Shogunate. Several related crucifixion techniques were used. Petra Schmidt, in "Capital Punishment in Japan", writes: In 1597, 26 Christian Martyrs were nailed to crosses at Nagasaki, Japan. Among those executed were Saints Paulo Miki, Philip of Jesus and Pedro Bautista, a Spanish Franciscan. The executions marked the beginning of a long history of persecution of Christianity in Japan, which continued until its decriminalization in 1871. Crucifixion was used as a punishment for prisoners of war during World War II. Ringer Edwards, an Australian prisoner of war, was crucified for killing cattle, along with two others. He survived 63 hours before being let down. Burma In Burma, crucifixion was a central element in several execution rituals. Felix Carey, a missionary in Burma from 1806 to 1812, wrote the following: Europe During World War I, there were persistent rumors that German soldiers had crucified a Canadian soldier on a tree or barn door with bayonets or combat knives. The event was initially reported in 1915 by Private George Barrie of the 1st Canadian Division. Two investigations, one a post-war official investigation, and the other an independent investigation by the Canadian Broadcasting Corporation, concluded that there was no evidence to support the story. However, British documentary maker Iain Overton in 2001 published an article claiming that the story was true, identifying the soldier as Harry Band. Overton's article was the basis for a 2002 episode of the Channel 4 documentary show Secret History. It has been reported that crucifixion was used in several cases against the German civil population of East Prussia when it was occupied by Soviet forces at the end of World War II. Archaeological evidence Although the Roman historians Josephus and Appian refer to the crucifixion of thousands in during the Roman-Jewish wars in Judaea by the Romans, there are few actual archaeological remains. A prominent example is the crucified body found in a Jewish tomb dating back to the 1st century which was discovered at Givat HaMivtar, Jerusalem in 1968. The remains were found accidentally in an ossuary with the crucified man's name on it, "Jehohanan, the son of Hagakol." Nicu Haas, from the Hebrew University Medical School, examined the ossuary and discovered that it contained a heel bone with a nail driven through its side, indicating that the man had been crucified. The position of the nail relative to the bone suggests the feet had been nailed to the cross from their side, not from their front; various opinions have been proposed as to whether they were both nailed together to the front of the cross or one on the left side, one on the right side. The point of the nail had olive wood fragments on it indicating that he was crucified on a cross made of olive wood or on an olive tree. Additionally, a piece of acacia wood was located between the bones and the head of the nail, presumably to keep the condemned from freeing his foot by sliding it over the nail. His legs were found broken, possibly to hasten his death. 
It is thought that because in earlier Roman times iron was valuable, the nails were removed from the dead body to conserve costs. According to Haas, this could help to explain why only one nail has been found, as the tip of the nail in question was bent in such a way that it could not be removed. Haas had also identified a scratch on the inner surface of the right radius bone of the forearm, close to the wrist. He deduced from the form of the scratch, as well as from the intact wrist bones, that a nail had been driven into the forearm at that position. Many of Haas' findings have, however, been challenged. For instance, it was subsequently determined that the scratches in the wrist area were non-traumatic – and, therefore, not evidence of crucifixion – while reexamination of the heel bone revealed that the two heels were not nailed together, but rather separately to either side of the upright post of the cross. In 2007, a possible case of a crucified body, with a round hole in a heel bone, possibly caused by a nail, was discovered in the Po Valley near Rovigo, in northern Italy. In 2017 part of a crucified body, with a nail in the heel, was additionally discovered at Fenstanton in the United Kingdom. Further studies suggested that the remains may be those of a slave, because at that time crucifixion was banned in Roman law for citizens, although not necessarily for slaves. Modern use Crucifixion in Europe In 2005, a priest and four nuns in Romania were convicted of crucifying Maricica Irina Cornici, a 25 year old nun with schizophrenia, who they believed was possessed by the devil. Crucifixion in the South Sudan Civil War In 2017, The Standard News Channel reported on a series of crimes against civilians, including women being hung up trees Legal execution in Islamic states Crucifixion is still used as a rare method of execution in Saudi Arabia. The punishment of crucifixion (șalb) imposed in Islamic law is variously interpreted as exposure of the body after execution, crucifixion followed by stabbing in the chest, or crucifixion for three days, survivors of which are allowed to live. Several people have been subjected to crucifixion in Saudi Arabia in the 2000s, although on occasion they were first beheaded and then crucified. In March 2013, a robber was set to be executed by being crucified for three days. However, the method was changed to death by firing squad. The Saudi Press Agency reported that the body of another individual was crucified after his execution in April 2019 as part of a crackdown on charges of terrorism. Ali Mohammed Baqir al-Nimr was arrested in 2012 when he was 17 years old for taking part in an anti-government protest in Saudi Arabia during the Arab Spring. In May 2014, Ali al-Nimr was sentenced to be publicly beheaded and crucified. Theoretically, crucifixion is still one of the Hadd punishments in Iran. If a crucified person were to survive three days of crucifixion, that person would be allowed to live. Execution by hanging is described as follows: "In execution by hanging, the prisoner will be hung on a hanging truss which should look like a cross, while his (her) back is toward the cross, and (s)he faces the direction of Mecca [in Saudi Arabia], and his (her) legs are vertical and distant from the ground." Sudan's penal code, based upon the government's interpretation of shari'a, includes execution followed by crucifixion as a penalty. 
When, in 2002, 88 people were sentenced to death for crimes relating to murder, armed robbery, and participating in ethnic clashes, Amnesty International wrote that they could be executed by either hanging or crucifixion. Jihadism On 5 February 2015, the United Nations Committee on the Rights of the Child (CRC) reported that the Islamic State of Iraq and the Levant (ISIL) had committed "several cases of mass executions of boys, as well as reports of beheadings, crucifixions of children and burying children alive". On 30 April 2014, a total of seven public executions were carried out in Raqqa, northern Syria. The pictures, originally posted to Twitter by a student at Oxford University, were retweeted by a Twitter account owned by a known member of ISIL causing major media outlets to incorrectly attribute the origin of the post to the militant group. In most of these cases of crucifixion the victims are shot first then their bodies are displayed but there have also been reports of crucifixion preceding shootings or decapitations as well as a case where a man was said to have been "crucified alive for eight hours" with no indication of whether he died. Other incidents The human rights group Karen Women Organization documented a case of Tatmadaw forces crucifying several Karen villagers in 2000 in the Dooplaya District in Burma's Kayin State. On 22 January 2014, Dmytro Bulatov, a Ukrainian anti-government activist and member of AutoMaidan, claimed to have been kidnapped by unknown persons "speaking in Russian accents" and tortured for a week. His captors kept him in the dark, beat him, cut off a piece of his ear, and nailed him to a cross. His captors ultimately left him in a forest outside Kyiv after forcing him to confess to being an American spy and accepting money from the US Embassy in Ukraine to organize protests against then-President Viktor Yanukovych. Bulatov said he believed Russian secret services were responsible. In 1997, the Ministry of Justice in the United Arab Emirates issued a statement that a court had sentenced two murderers to be crucified, to be followed by their executions the next day. A Ministry of Justice official later stated that the crucifixion sentence should be considered cancelled. The crucifixions were not carried out, and the convicts were instead executed by firing squad. During the Russian Invasion of Ukraine, Captain Vladyslav Pastukh of the 211th Pontoon Bridge Brigade crucified another member of the brigade by tying the soldier's hands to a wooden cross and the soldier's helmet to his left arm. He then took a picture of himself squatting in front of the cross with the soldier's body hanging from it. On 16 December 2024, Ukrainian defense minister Rustem Umerov ordered an immediate investigation into the incident as well as an investigation into other alleged abuse, extortion, and humiliation of soldiers of the 211th Pontoon Bridge Brigade by their commanding officers. In culture and arts As a devotional practice In July 1805, a man named Mattio Lovat attempted to crucify himself at a public street in Venice, Italy. The attempt was unsuccessful, and he was sent to an asylum, where he died a year later. In some cases, a crucifixion is simulated within a passion play, as in the ceremonial re-enactment that has been performed yearly in the town of Iztapalapa, on the outskirts of Mexico City, since 1833, and in the famous Oberammergau Passion Play. 
Also, since at least the mid-19th century, a group of flagellants in New Mexico, called Hermanos de Luz ("Brothers of Light"), have annually conducted reenactments of Christ's crucifixion during Holy Week, in which a penitent is tied—but not nailed—to a cross. This tradition is sometimes practiced in other regions of the United States, such as in Appalachia, where members of Protestant churches stage mock crucifixions wherein worshippers hang from straps on the crosses during Good Friday re-enactments. The Catholic Church frowns upon self-crucifixion as a form of devotion: "Penitential practices leading to self-crucifixion with nails are not to be encouraged." Despite this, the practice persists in the Philippines, where some Catholics are voluntarily, non-lethally crucified for a limited time on Good Friday to imitate the sufferings of Christ. Pre-sterilised nails are driven through the palm of the hand between the bones, while there is a footrest to which the feet are nailed. Rolando del Campo, a carpenter in Pampanga, vowed to be crucified every Good Friday for 15 years if God would carry his wife through a difficult childbirth, while in San Pedro Cutud, Ruben Enaje has been crucified 34 times. The Filipino Catholic Church has repeatedly voiced disapproval of crucifixions and self-flagellation, while the government has noted that it cannot deter devotees. The Department of Health recommends that participants in the rites should have tetanus shots and that the nails used should be sterilized. Notable crucifixions The rebel slaves of the Third Servile War: Between 73 and 71 BCE, a band of slaves, eventually numbering about 120,000, under the (at least partial) leadership of Spartacus were in open revolt against the Roman republic. The rebellion was eventually crushed and, while Spartacus himself most likely died in the final battle of the revolt, approximately 6,000 of his followers were crucified along the 200-km Appian Way between Capua and Rome as a warning to any other would-be rebels. Jehohanan: Jewish man who was crucified around the same time as Jesus; it is widely accepted that his ankles were nailed to the side of the stipes of the cross. Jesus: His death by crucifixion under Pontius Pilate (c. 30 or 33 AD), recounted in the four 1st-century canonical Gospels, is referred to repeatedly as something well known in the earlier letters of Saint Paul, for instance, five times in his First Letter to the Corinthians, written in 57 CE (1:13, 1:18, 1:23, 2:2, 2:8). Pilate, the Roman governor of Judaea province at the time, is explicitly linked with the condemnation of Jesus by the Gospels, and subsequently by Tacitus. The civil charge was a claim to be King of the Jews. Saint Peter: Christian apostle, who according to tradition was crucified upside-down at his own request (hence the Cross of Saint Peter), because he did not feel worthy enough to die the same way as Jesus. Saint Andrew: Christian apostle and Saint Peter's brother, who is traditionally said to have been crucified on an X-shaped cross (hence the Saint Andrew's Cross). Simeon of Jerusalem: second Bishop of Jerusalem, crucified in either 106 or 107 CE. Mani: the founder of Manicheanism, he was depicted by followers as having died by crucifixion in 274 CE. Eulalia of Barcelona: venerated as a saint; according to her hagiography, she was stripped naked, tortured, and ultimately crucified on an X-shaped cross. 
Wilgefortis: venerated as a saint and represented as a crucified woman; however, her legend comes from a misinterpretation of a fully clothed crucifix known as the Volto Santo of Lucca. The 26 Martyrs of Japan: Japanese martyrs who were crucified and impaled with spears. See also Breaking wheel Crucifixion darkness List of methods of capital punishment Positional asphyxia Shroud of Turin Tropaion References Informational notes Citations External links "Forensic and Clinical Knowledge of the Practice of Crucifixion" by Frederick Zugibe Jesus's death on the cross, from a medical perspective "Dishonour, Degradation and Display: Crucifixion in the Roman World" by Philip Hughes Jewish Encyclopedia: Crucifixion Crucifixion of Joachim of Nizhny-Novgorod Capital punishment Execution methods Torture Human positions Public executions Crosses by function
Crucifixion
Biology
8,749
6,419,661
https://en.wikipedia.org/wiki/Contemporary%20Physics%20Education%20Project
The Contemporary Physics Education Project (CPEP) is an "organization of teachers, educators, and physicists" formed in 1987. The group grew out of the Conference on the Teaching of Modern Physics held at Fermilab in 1986, organized by the American Association of Physics Teachers. The group's first effort aimed to supply a chart for particle physics teaching that would rival the Periodic Table of the elements. The first version of this chart was published in 1989. CPEP has created five charts emphasizing contemporary aspects of physics research: particles and interactions; fusion and plasma physics; nuclear science; cosmology; and gravity. Almost half a million of these charts and similar products have been distributed. The group has created teaching-support websites for each of the charts. CPEP received the 2017 "Excellence in Physics Education Award" from the American Physical Society, "for leadership in providing educational materials on contemporary physics topics to students for over 25 years." Offshoots of CPEP include the book "The Charm of Strange Quarks: Mysteries and Revolutions of Particle Physics" (2000), by R. Michael Barnett, Henry Muehry, and Helen R. Quinn, three of the founders of CPEP. See also the web site "The Particle Adventure: The Fundamentals of Matter and Force". R. Michael Barnett described the formation and early days of CPEP in a Nobel Symposium Lecture in 2002. References Physics education
Contemporary Physics Education Project
Physics
291
11,027,988
https://en.wikipedia.org/wiki/Slide%20chart
A slide chart is a hand-held device, usually of paper, cardboard, or plastic, for conducting simple calculations or looking up information. A circular slide chart is sometimes referred to as a wheel chart or volvelle. Unlike other hand-held mechanical calculating devices such as slide rules and addiators, which have been replaced by electronic calculators and computer software, wheel charts and slide charts have survived to the present time. There are a number of companies that design and manufacture these devices. Unlike the general-purpose mechanical calculators, slide charts are typically devoted to carrying out a particular specialized calculation, or displaying information on a single product or a particular process. For example, the "CurveEasy" wheel chart displays information related to spherical geometry calculations, and the Prestolog calculator is used for cost/profit calculations. Another example of a wheel chart is the planisphere, which shows the location of stars in the sky for a given location, date, and time. Slide charts are often associated with particular sports, political campaigns or commercial companies. For example, a pharmaceutical company may create wheel charts printed with its company name and product information for distribution to medical practitioners. Slide charts are common collectables. See also The E6B aviation flight computing device, still in regular use References Reinventing the Wheel, Jessica Helfand, Princeton Architectural Press, 2002. External links Slide Chart Examples Communication design
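As a rough illustration of the mechanism shared by general-purpose calculating wheel charts and circular slide rules (special-purpose charts such as the Prestolog or E6B simply embed task-specific scales of the same kind), the following sketch shows how two logarithmic scales rotated against each other perform multiplication. The scale layout and names are assumptions for demonstration, not taken from any particular product.

```python
import math

# Sketch of the circular-slide-rule principle: each value is printed at an
# angle proportional to its logarithm, so rotating one scale against the
# other adds logarithms, i.e. multiplies the values.

FULL_TURN = 360.0  # one decade (1..10) spread around the wheel

def number_to_angle(x):
    """Angle (degrees) at which value x sits on a one-decade log scale."""
    return FULL_TURN * math.log10(x)

def angle_to_number(angle):
    """Inverse mapping: the value printed at a given angle."""
    return 10 ** (angle / FULL_TURN)

def multiply_on_wheel(a, b):
    # Rotate the inner scale so its '1' mark lines up with 'a' on the outer
    # scale, then read the outer scale opposite 'b' on the inner scale.
    return angle_to_number(number_to_angle(a) + number_to_angle(b))

if __name__ == "__main__":
    print(multiply_on_wheel(2.0, 3.5))   # ~7.0, read off the outer scale
```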
Slide chart
Engineering
292
14,764,246
https://en.wikipedia.org/wiki/HHEX
Hematopoietically-expressed homeobox protein HHEX is a protein that in humans is encoded by the HHEX gene and is also known as Proline Rich Homeodomain protein PRH. This gene encodes a member of the homeobox family of transcription factors, many of which are involved in developmental processes. Expression in specific hematopoietic lineages suggests that this protein may play a role in hematopoietic differentiation, but the expression of this protein is not limited to hematopoietic cells. Function The HHEX transcription factor acts as an activator of transcription in some instances and a repressor of transcription in others. It interacts with a number of other signaling molecules to play an important role in the development of multiple organs, such as the liver, thyroid and forebrain. HHEX serves to repress VEGFA, another protein which is important in endothelial cell development. SCL, a significant transcription factor for blood and endothelial cell differentiation, has been shown to interact with HHEX to promote the correct development of the hematopoiesis process. HHEX appears to work together with another molecule, β-catenin, for the development of the anterior organizer. It also contributes to developmental remodeling and stabilization of endothelial cells in an unborn organism. The importance of this transcription factor is illustrated by the inability of HHEX knockout mouse embryos to survive gestation. Without the expression of HHEX, these embryos die in utero between Day 13 and Day 16. HHEX knockout mice display a range of abnormalities including forebrain abnormalities of varying severity, as well as a number of other defects including heart, vasculature, liver, monocyte, and thyroid abnormalities. The HHEX protein is important in a variety of cancers and it can act as a tumour suppressor protein or as an oncoprotein depending on the cancer type. Interactions HHEX has been shown to interact with Promyelocytic leukemia protein. References Further reading External links Transcription factors
HHEX
Chemistry,Biology
435
47,620,343
https://en.wikipedia.org/wiki/Maucha%20diagram
A Maucha diagram, or Maucha symbol, is a graphical representation of the major cations and anions in a chemical sample. R. Maucha published the symbol in 1932. It is mainly used by biologists and chemists for quickly recognising samples by their chemical composition. The symbol is similar in concept to the Stiff diagram. It conveys similar ionic information to the Piper diagram, though in a more compact format that is suitable as a map symbol or for showing changes with time. The Maucha diagram is a special case of the Radar chart and overcomes some of the limitations of the Pie chart by having equal angles for all variables and consistently showing each variable in the same position. The star shape comprises eight kite-shaped polygons, the area of each of which is proportional to the concentration of an ion in milliequivalents per litre. The anions carbonate, bicarbonate, chloride and sulphate are on the left, while the cations potassium, sodium, calcium and magnesium are on the right. The total ionic concentration adds up to the area of the background circle, the total anion concentration adds up to the left semicircle and the total cation concentration adds up to the right semicircle. A method for drawing the diagram in R is available on GitHub. Broch and Yake modified Maucha's original fixed-size diagram by scaling for concentration. Further scaling using the logarithm of the ionic concentration enables the plotting of a wide range of concentrations on a single map. References Analytical chemistry Diagrams
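The geometric idea described above can be made concrete with a small sketch. The construction below is one straightforward reading of the description (eight 45-degree sectors, kite area proportional to concentration, total area equal to the background circle), not necessarily Maucha's exact 1932 recipe; the ion names and sample concentrations are illustrative assumptions.

```python
import math

# For each ion, place a kite in its 45-degree sector: two vertices at the
# circle radius on the sector edges, one at the centre, and one on the
# sector bisector at a distance chosen so that the kite's area is
# proportional to that ion's concentration (meq/L).

SECTOR = math.radians(45)
HALF = SECTOR / 2

def maucha_kites(meq, radius=1.0):
    """Return, per ion, the distance of the kite's outer vertex along the
    sector bisector. `meq` maps ion name -> concentration in meq/L."""
    total = sum(meq.values())
    circle_area = math.pi * radius ** 2
    kites = {}
    for ion, c in meq.items():
        target_area = circle_area * c / total
        # Kite area = 2 * (1/2) * radius * d * sin(HALF) = radius * d * sin(HALF)
        d = target_area / (radius * math.sin(HALF))
        kites[ion] = d
    return kites

# Anions on the left, cations on the right, as in the article (sample values).
sample = {"CO3": 0.4, "HCO3": 2.5, "Cl": 1.2, "SO4": 0.9,   # anions
          "K": 0.2, "Na": 1.3, "Ca": 2.4, "Mg": 1.1}        # cations

for ion, d in maucha_kites(sample).items():
    print(f"{ion:>4}: outer vertex at {d:.2f} x radius")
```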
Maucha diagram
Chemistry
323
39,620,680
https://en.wikipedia.org/wiki/Hygrocybe%20kula
Hygrocybe kula is a mushroom of the waxcap genus Hygrocybe found only in Royal National Park and Lane Cove Bushland Park. It was described in 1997 by mycologist Cheryl Grgurinovic. See also List of Hygrocybe species References External links Fungi described in 1997 Fungi of Australia kula Taxa named by Cheryl A. Grgurinovic Fungus species
Hygrocybe kula
Biology
87
57,744,417
https://en.wikipedia.org/wiki/SN%202018cow
SN 2018cow (ATLAS name: ATLAS18qqn; also known as Supernova 2018cow, AT 2018cow (AT = Astronomical Transient), and "The Cow") was a very powerful astronomical explosion, 10–100 times brighter than a normal supernova, spatially coincident with galaxy , approximately distant in the Hercules constellation. It was discovered on 16 June 2018 by the ATLAS-HKO telescope, and had generated significant interest among astronomers throughout the world. Later, on 10 July 2018, and after AT 2018cow had significantly faded, astronomers, based on follow-up studies with the Nordic Optical Telescope (NOT), formally described AT 2018cow as SN 2018cow, a type Ib supernova, showing an "unprecedented spectrum for a supernova of this class"; although others, mostly at first but also more recently, have referred to it as a type Ic-BL supernova. An explanation to help better understand the unique features of AT 2018cow has been presented. AT2018cow is one of the few reported Fast Blue Optical Transients (FBOTs) observed in the Universe. In May 2020, however, a much more powerful FBOT than AT 2018cow (namely, CRTS-CSS161010 J045834-081803, or CSS161010 for short) was reportedly observed. On 2 November 2018, two independent teams of astronomers both concluded that the AT 2018cow event was "either a newly formed black hole in the process of accreting matter, or the frenetic rotation of a neutron star." In January 2019, astronomers proposed that the explosion may have been a white dwarf being pulled apart by a black hole; or a supernova leaving behind a black hole or a neutron star, the creation of a compact body being observed for the first time. On 13 December 2021, astronomers reported that AT 2018cow, an extreme FBOT, "could be a neutron star or black hole with a mass less than 850 solar masses" based on high-time-resolution X-ray observation studies. History AT 2018cow was discovered on 16 June 2018 at 10:35:02 UTC by the ATLAS-HKO telescope, a twin system, at the Haleakala Observatory in Hawaii. It was a powerful astronomical explosion (discovery magnitude 14.739; redshift 0.014145, 0.0136), 10 – 100 times brighter than a normal supernova, spatially coincident with galaxy , approximately distant in the Hercules constellation. By 22 June 2018, this transient astronomical event had generated significant interest among astronomers throughout the world. At least 24 major telescopes were observing the event, the largest number, as of 27 June 2018, of concurrent observations (over 35 posted on 27 June 2018) of any astronomical event ever reported on The Astronomer's Telegram. The event had been tentatively identified as a supernova and given the designation Supernova 2018cow and classification SN Ic-BL. The first X-ray and ultraviolet (UV) observations of AT 2018cow were obtained on 19 June 2018 with the Swift telescope. These observations revealed that the object was a bright X-ray/UV transient, with an X-ray luminosity of ~ and a UV brightness of about 11.7 (Vega mag) in the range 1600-3600 Å. On 25 June 2018, astronomers, using the Liverpool Telescope and the telescope at Palomar Observatory, noted on The Astronomer's Telegram: "AT2018cow has faded every night since our first observations . ... observations suggest that although a link to Ic-BL SNe and GRBs remains credible given the smooth spectra and luminous radio and X-ray counterpart, AT2018cow is distinct in other ways and its true identity remains unclear. Observations are continuing." 
On 29 June 2018, astronomers, using telescopes at the Beijing Astronomical Observatory, reported further support for the fading of AT 2018cow. However, using the Swift/XRT telescope on 30 June 2018, an increase in the X-ray luminosity of the transient was reported. That would be the beginning of an unusual X-ray variable behavior. On 2 July 2018, astronomers, using the Fermi Large Area Telescope (LAT), reported that there were no significant >100 MeV gamma-ray emissions between 19–26 June 2018. Further, on 3 July 2018, astronomers reported, using the Cadmium Zinc Telluride Imager (CZTI) detector aboard the AstroSat space observatory, no hard X-ray transients were detected between 13–16 June 2018 (event detection time) and, using the UVIT fitted with a F172M filter, observed an AB magnitude of an estimated 17.6 at the AT 2018cow location on 3 July 2018. Moreover, astronomers on 3 July 2018 reported, using the MAXI GSC detector aboard the ISS, that no significant X-ray emissions were detected between 11–21 June 2018. On 4 July 2018, astronomers, using NuSTAR, reported a lessening of hard X-ray emissions from AT 2018cow. On 12 July 2018, astronomers, using INTEGRAL, reported no detections of the source from 30 June – 8 July 2018; however, GRB-like bursts may have been observed earlier in the vicinity, on 12 and 15 June 2018, although association of these bursts with AT 2018cow may be "disfavored". Radio emissions, at 5 GHz with a flux density of ~ 170 microJy, were detected from the location of AT 2018cow on 3–4 July 2018 by e-MERLIN; radio emissions at the AT 2018cow location were detected by ATCA at 5.5 GHz with ~0.4 mJy flux density and at 9 GHz with ~1.0 mJy on 3 July 2018, and at 34 GHz with ~10 mJy on 5 July 2018. VLBI observations at 22 GHz, with the NRAO, using the VLBA and Effelsberg radio telescopes, found a total flux density of ~5 mJy around 8 July 2018 at a reportedly more accurate (but consistent within uncertainties) astrometric location of AT2018cow (RA=16h 16m 00.2242s, DEC=22d 16' 04.890") than that of e-MERLIN. On 10 July 2018, astronomers, based on follow-up studies with the Nordic Optical Telescope (NOT), formally described AT 2018cow as SN 2018cow and as a type Ib supernova, showing an "unprecedented spectrum for a supernova of this class". On 19 July 2018, astronomers, using the Kanata telescope at the Higashi-Hiroshima Observatory, observed further declines in the optical and near-infrared luminosity of the AT 2018cow position in early July 2018, and noted that the large decline rates of the light curves were "quite large" compared to Type Ic (Ic-BL) and Type Ib/c supernovae. On 6 August 2018, ultraviolet observations of the AT 2018cow location, using the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST), detected brightness (Vega mag) of about 19 on all four bands (F218W, F225W, F275W, F336W) studied. On 12 August 2018, astronomers at the Giant Metrewave Radio Telescope (GMRT) detected a low frequency radio emission (1390 MHz band; 438+/-82 uJy) at the AT 2018cow position. On 15 August 2018, astronomers using the High Energy Stereoscopic System (H.E.S.S.) 
array of Cherenkov telescopes (CTA) reported no significant gamma-ray source at the AT 2018cow location on 3–5 July 2018, which, as a consequence, resulted in the preliminary determination of upper limits on the integrated flux of the Very-High-Energy (VHE) gamma emission from AT 2018cow as follows: above the energy threshold 220 GeV (±2sd) an upper limit of 5e-12 ph cm^-2 s^-1; above 1 TeV (±2sd) an upper limit of 5e-13 ph cm^-2 s^-1. Properties According to astronomers at the time of its discovery, the explosion, with a surface temperature of over and traveling , may have been a cataclysmic variable star (CV), gamma-ray burst (GRB), gravitational wave (GW), supernova (SN), or something else. However, the CV scenario was rapidly disfavored given the initial featureless optical spectrum and the large initial X-ray luminosity of the transient. According to astronomer Kate Maguire of Queen's University Belfast: "It really just appeared out of nowhere. There are other objects that have been discovered that are as fast, but the fastness and the brightness, that's quite unusual." The classification of type Ic-BL indicates a spectrum with very unusually broad lines, but with no hydrogen lines and weak or missing helium lines. Such a spectrum is produced by the explosion of a very large star which has lost its outer layers of hydrogen and helium. However, according to astronomer Shubham Srivastav, associated with the Himalayan Chandra Telescope (HCT): "Although spectroscopic features indicate a tentative similarity with broad line Ic supernovae, its true nature remains a puzzle." Also, according to Maguire: "We're not sure yet what it is, but the normal powering mechanism for a supernova is radioactive decay of nickel, and this event is too bright and too fast for that." The AT 2018cow explosion could have been accompanied by a gravitational wave (GW) emission, but the GW emission could not be detected since the LIGO detectors in the states of Washington and Louisiana were down at the time of the event due to service upgradings. An explanation to help better understand the unique features of AT 2018cow, particularly as a white dwarf tidal disruption event, has been presented. As of 29 September 2018, AT 2018cow has been explained in various ways, including as a type Ic supernova, a gamma-ray burst, an interaction between a white dwarf and black hole, and as a magnetar. Preliminary studies to better understand the exact physical nature of AT 2018cow, using the European VLBI Network (EVN), have been presented. On 2 November 2018, two independent teams of astronomers both concluded that the AT 2018cow event was "either a newly formed black hole in the process of accreting matter, or the frenetic rotation of a neutron star." In January 2019, Anna Ho of the California Institute of Technology in Pasadena, who conducted observations with the Submillimeter Array on Mauna Kea in Hawaii, noted that an unusually protracted period of continuing activity after the event was noticed, enabled more extensive study than typically afforded during such events, allowing observation of it while it was brightening. Subsequently, astronomers proposed that AT 2018cow may have been a white dwarf being pulled apart by a black hole; or, a supernova leaving behind a black hole or a neutron star, the creation of a compact body being observed for the first time. 
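As a rough cross-check of the distances usually quoted for the host galaxy, the redshift given in the discovery report (z ≈ 0.0141) can be converted to a distance with the simple Hubble-law approximation d ≈ cz/H0. The Hubble constant value below is an assumption (about 70 km/s/Mpc), and at such a small redshift relativistic corrections are negligible.

```python
# Illustrative only: Hubble-law distance estimate from the quoted redshift.
C_KM_S = 299_792.458      # speed of light, km/s
H0 = 70.0                 # assumed Hubble constant, km/s/Mpc
MPC_TO_MLY = 3.2616       # million light-years per megaparsec

z = 0.014145
d_mpc = C_KM_S * z / H0
print(f"~{d_mpc:.0f} Mpc (~{d_mpc * MPC_TO_MLY:.0f} million light-years)")
# -> roughly 60 Mpc, i.e. about 200 million light-years
```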
See also References External links SN 2018cow webpage on the Transient Name Server AT2018cow webpage at Vanbuitenen.nl AT2018cow webpage by the Astronomy Section Rochester Academy of Science CGCG 137-068 webpage at the NASA/IPAC Extragalactic Database AT2018cow images from 21 June 2018 by Cedric Raguenaud AT2018cow – Originated from supernova in strongly magnetized environment/ALMA at NAOJ Supernovae Discoveries by ATLAS Hercules (constellation) June 2018 2018 in science 2018 in outer space
SN 2018cow
Chemistry,Astronomy
2,465
7,160,955
https://en.wikipedia.org/wiki/Langley%20extrapolation
Langley extrapolation is a method for determining the Sun's irradiance at the top of the atmosphere with ground-based instrumentation, and is often used to remove the effect of the atmosphere from measurements of, for example, aerosol optical thickness or ozone. It is based on repeated measurements with a Sun photometer operated at a given location for a cloudless morning or afternoon as the Sun moves across the sky. It is named for American astronomer and physicist Samuel Pierpont Langley. Theory It is known from Beer's law that, for every instantaneous measurement, the direct-Sun irradiance I is linked to the solar extraterrestrial irradiance I0 and the atmospheric optical depth τ by the following equation: I = I0 exp(−mτ), where m is a geometrical factor accounting for the slant path through the atmosphere, known as the airmass factor. For a plane-parallel atmosphere, the airmass factor is simple to determine if one knows the solar zenith angle θ: m = 1/cos(θ). As time passes, the Sun moves across the sky, and therefore θ and m vary according to known astronomical laws. By taking the logarithm of the above equation, one obtains ln I = ln I0 − mτ, and if one assumes that the atmospheric disturbance does not change during the observations (which last for a morning or an afternoon), the plot of ln I versus m is a straight line with a slope equal to −τ. Then, by linear extrapolation to m = 0, one obtains I0, i.e. the Sun's irradiance that would be observed by an instrument placed above the atmosphere. The requirement for good Langley plots is a constant atmosphere (constant τ). This requirement can be fulfilled only under particular conditions, since the atmosphere is continuously changing. Needed conditions are in particular: the absence of clouds along the optical path, and the absence of variations in the atmospheric aerosol layer. Since aerosols tend to be more concentrated at low altitude, Langley extrapolation is often performed at high mountain sites. Data from NASA Glenn Research Center indicates that the Langley plot accuracy is improved if the data is taken above the tropopause. Solar cell calibration A Langley plot can also be used as a method to calculate the performance of solar cells outside the Earth's atmosphere. At the Glenn Research Center, the performance of solar cells is measured as a function of altitude. By extrapolation, researchers determine their performance under space conditions. Low cost LED-based photometers Sun photometers using low cost light-emitting diode (LED) detectors in place of optical interference filters and photodiodes have a relatively wide spectral response. They might be used by a globally distributed network of students and teachers to monitor atmospheric haze and aerosols, and can be calibrated using Langley extrapolation. In 2001, David Brooks and Forrest Mims were among many to propose detailed procedures to modify the Langley plot in order to account for Rayleigh scattering and atmospheric refraction by a spherical Earth. Di Justo and Gertz compiled a handbook for using Arduino to develop these photometers in 2012. The handbook refers to τ as the AOT (Atmospheric Optical Thickness) and to I0 as the EC (extraterrestrial constant). The manual suggests that once a photometer is constructed, the user waits for a clear day with few clouds, no haze and constant humidity. After the data is fit to the Langley relation to find I0, the handbook suggests a daily measurement of I.
Both I0 and I are obtained from the LED current (the voltage across a sensing resistor) by subtracting the dark current: I ∝ Vsun − Vdark, where Vsun is the voltage while the LED is pointing at the Sun, and Vdark is the voltage while the LED is kept dark. There is a misprint in the manual regarding the calculation of τ from this single data point. The correct relation, following from the Langley equation above, is τ = (ln I0 − ln I)/m, where I0 was calculated on that clear and stable day using Langley extrapolation. References Radiometry
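To make the two-step procedure above concrete, the following sketch runs a Langley fit on synthetic clear-day data and then derives a later day's optical depth from a single measurement. All numbers are invented for demonstration; a real photometer would supply measured voltages (with the dark reading subtracted) and the true airmass for each observation.

```python
import math
import numpy as np

# Step 1: clear-day calibration. Fit ln(I) against airmass m; the intercept
# at m = 0 gives ln(I0) and the slope gives -tau.
true_I0, true_tau = 1.00, 0.30                     # assumed "truth" for the demo
zenith_deg = np.array([75, 70, 65, 60, 55, 50, 45, 40])
m = 1.0 / np.cos(np.radians(zenith_deg))           # plane-parallel airmass
rng = np.random.default_rng(0)
I = true_I0 * np.exp(-m * true_tau) * (1 + 0.01 * rng.standard_normal(m.size))

slope, intercept = np.polyfit(m, np.log(I), 1)
I0 = math.exp(intercept)                            # extrapolation to m = 0
print(f"fitted I0 = {I0:.3f}, fitted tau = {-slope:.3f}")

# Step 2: on a later day, one reading and its airmass give that day's AOT.
def optical_depth(I_measured, airmass, I0_calibrated):
    return (math.log(I0_calibrated) - math.log(I_measured)) / airmass

print(f"tau today = {optical_depth(0.70, 1.5, I0):.3f}")
```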
Langley extrapolation
Engineering
822
55,641,006
https://en.wikipedia.org/wiki/Cyaphide
Cyaphide, P≡C−, is the phosphorus analogue of cyanide. It is not known as a discrete salt; however, in silico measurements reveal that the −1 charge in this ion is located mainly on carbon (0.65), as opposed to phosphorus. The word "cyaphide" was first coined in 1992, by analogy with cyanide. Preparation Organometallic complexes of cyaphide were first reported in 1992. More recent preparations use two other routes: From SiR3-functionalised phosphaalkynes Treatment of the η1-coordinated phosphaalkyne complex trans– with an alkoxide resulted in desilylation, followed by subsequent rearrangement to the corresponding carbon-bound cyaphide complex. Cyaphide-alkynyl complexes are prepared similarly. From 2-phosphaethynolate anion (−OC≡P) An actinide cyaphide complex can be prepared by C−O bond cleavage of the phosphaethynolate anion, the phosphorus analogue of cyanate. Reaction of the uranium complex [] with [ in the presence of 2.2.2-cryptand results in the formation of a dinuclear, oxo-bridged uranium complex featuring a C≡P ligand. See also phosphaalkyne (P≡CR) Methylidynephosphane Cyaarside References Anions
Cyaphide
Physics,Chemistry
299
10,834,106
https://en.wikipedia.org/wiki/NGC%207314
NGC 7314 is a spiral galaxy located in the southern constellation of Piscis Austrinus. It was discovered by English astronomer John Herschel on July 29, 1834. This is a nearby Seyfert (active) galaxy, located at a distance of approximately from the Milky Way. Since it appears to have detached spiral arm segments (either from dust lanes or bright star clusters), it was listed in Halton Arp's Atlas of Peculiar Galaxies. Walter Scott Houston describes its appearance in small telescopes: Do not let its photographic magnitude of 11.6 scare you off, for it can be seen in a 6-inch telescope as a curiously fuzzy object. But it is small, appearing only 4' by 2'. The morphological classification of this galaxy is SAB(rs)bc, indicating a spiral galaxy with a weak central bar (SAB), an incomplete ring structure around the bar (rs), and moderately–wound arms (bc). The plane of the galactic disk is inclined by 64° to the line of sight from the Earth, with the major axis aligned along a position angle of 178°. Within the galaxy's core is an active galactic nucleus tentatively classified as a type I Seyfert. The central supermassive black hole has a relatively low mass, estimated as . The core is a source for X-ray emission that is seen to vary dramatically on time scales as low as hours. References External links Barred spiral galaxies Seyfert galaxies Piscis Austrinus 7314 69253 014
NGC 7314
Astronomy
317
10,807,783
https://en.wikipedia.org/wiki/Allotype%20%28immunology%29
The word allotype comes from two Greek roots, allo meaning 'other or differing from the norm' and typos meaning 'mark'. In immunology, allotype is an immunoglobulin variation (in addition to isotypic variation) that can be found among antibody classes and is manifested by heterogeneity of immunoglobulins present in a single vertebrate species. The structure of immunoglobulin polypeptide chain is dictated and controlled by number of genes encoded in the germ line. However, these genes, as it was discovered by serologic and chemical methods, could be highly polymorphic. This polymorphism is subsequently projected to the overall amino acid structure of antibody chains. Polymorphic epitopes can be present on immunoglobulin constant regions on both heavy and light chains, differing between individuals or ethnic groups and in some cases may pose as immunogenic determinants. Exposure of individuals to a non-self allotype might elicit an anti- allotype response and became cause of problems for example in a patient after transfusion of blood or in a pregnant woman. However, it is important to mention that not all variations in immunoglobulin amino acid sequence pose as a determinant responsible for immune response. Some of these allotypic determinants may be present at places that are not well exposed and therefore can be hardly serologically discriminated. In other cases, variation in one isotype can be compensated by the presence of this determinant on another antibody isotype in one individual. This means that divergent allotype of heavy chain of IgG antibody may be balanced by presence of this allotype on heavy chain of for example IgA antibody and therefore is called isoallotypic variant. Especially large number of polymorphisms were discovered in IgG antibody subclasses. Which were practically used in forensic medicine and in paternity testing, before replaced by modern day DNA fingerprinting. Definition and organisation of allotypes in humans Human allotypes nomenclature was first described in alphabetical system and further systematized in numerical system, but both could be found in the literature. For example, allotype expressed on constant region of heavy chain on IgG are designated by Gm which stands for ‘genetic marker ‘ together with IgG subclass (IgG1 àG1m, IgG2 àG2m) and the allotype number or letter [ G1m1/ G1m (a) ]. Polymorphisms within IgA are denoted in the same way as A2m (eg. A2m1/2) and kappa light chains constant region polymorphisms as Km (eg. Km1). Despite the fact, that there are multiple known lambda chain isotypes, there have not been reported any lambda chain serological polymorphisms. All these before mentioned allotypes are expressed on constant regions of the immunoglobulin. Genes responsible for encoding structure of constant regions of heavy chains are closely linked and therefore inherited together as one haplotype with low number of crossovers. Although some crossovers did occur during human evolution resulting in the creation of current populations characteristic haplotypes and importance of allotype system in population studies., Implications for monoclonal antibody therapy Antibody allotypes came back to spotlight due to development and use of therapies based on monoclonal antibodies. These recombinant human glycoproteins and proteins are now well established in clinical practise, but sometimes leads to adverse effects such as generation of antitherapeutic antibodies that negates therapy or even cause severe reactions to the therapy. 
This reaction may be attributed to differences between therapeutics itself or may arise between same therapeutics produced by different companies or even between different lots produced by the same company. To prevent production of such antitherapeutic antibodies, ideally, all clinical used proteins and glycoproteins should poses same allotype as natural patient’s product, this way the presence of ‘altered self‘ which poses a potential target for immune system, is limited. Whilst many parameters connected to developing and manufacturing process that might predispose monoclonal antibodies to cause immune response are well known and appropriate steps are taken to monitor and control these unwanted effects, complications linked with administration of monoclonal antibodies to genetically diverse human population are less well described. Humans exhibit abundance of genotypes and phenotypes, however all currently licensed IgG therapeutic immunoglobulins are developed as single allotypic/ polymorphic form. Patients that are homozygous for alternative phenotype are therefore at higher risk of developing potential immune response to the therapy. See also Allotype (disambiguation) Idiotype Isotype References External links Genetics Transplantation medicine
Allotype (immunology)
Biology
1,000
4,813,788
https://en.wikipedia.org/wiki/Ground-coupled%20heat%20exchanger
A ground-coupled heat exchanger is an underground heat exchanger that can capture heat from and/or dissipate heat to the ground. They use the Earth's near constant subterranean temperature to warm or cool air or other fluids for residential, agricultural or industrial uses. If building air is blown through the heat exchanger for heat recovery ventilation, they are called earth tubes (or Canadian well, Provençal well, Solar chimney, also termed earth cooling tubes, earth warming tubes, earth-air heat exchangers (EAHE or EAHX), air-to-soil heat exchanger, earth channels, earth canals, earth-air tunnel systems, ground tube heat exchanger, hypocausts, subsoil heat exchangers, thermal labyrinths, underground air pipes, and others). Earth tubes are often a viable and economical alternative or supplement to conventional central heating or air conditioning systems since there are no compressors, chemicals or burners and only blowers are required to move the air. These are used for either partial or full cooling and/or heating of facility ventilation air. Their use can help buildings meet Passive House standards or LEED certification. Earth-air heat exchangers have been used in agricultural facilities (animal buildings) and horticultural facilities (greenhouses) in the United States of America over the past several decades and have been used in conjunction with solar chimneys in hot arid areas for thousands of years, probably beginning in the Persian Empire. Implementation of these systems in India as well as in the cooler climates of Austria, Denmark and Germany to preheat the air for home ventilation systems has become fairly common since the mid-1990s, and is slowly being adopted in North America. Ground-coupled heat exchanger may also use water or antifreeze as a heat transfer fluid, often in conjunction with a geothermal heat pump. See, for example downhole heat exchangers. The rest of this article deals primarily with earth-air heat exchangers or earth tubes. Passive designs Passive ground-coupled heat exchange is a common traditional technique. It drives circulation using pressure differences caused by wind, rain, and buoyancy-driven convection (from selectively engineering areas of solar heating and evaporative, radiative, or conductive cooling). Design Earth-air heat exchangers can be analyzed for performance with several software applications using weather gage data. These software applications include GAEA, AWADUKT Thermo, EnergyPlus, L-EWTSim, WKM, and others. However, numerous earth-air heat exchanger systems have been designed and constructed improperly, and failed to meet design expectations. Earth-air heat exchangers appear best suited for air pretreatment rather than for full heating or cooling. Pretreatment of air for an air source heat pump or ground-source heat pump often provides the best economic return on investment, with simple payback often achieved within one year after installation. Most systems are usually constructed from diameter, smooth-walled (so they do not easily trap condensation moisture and mold), rigid or semi-rigid plastic, plastic-coated metal pipes or plastic pipes coated with inner antimicrobial layers, buried underground where the ambient earth temperature is typically all year round in the temperate latitudes where most humans live. Ground temperature becomes more stable with depth. Smaller diameter tubes require more energy to move the air and have less earth contact surface area. 
Larger tubes permit a slower airflow, which also yields more efficient energy transfer and permits much higher volumes to be transferred, permitting more air exchanges in a shorter time period, when, for example, you want to clear the building of objectionable odors or smoke but suffer from poorer heat transfer from the pipe wall to the air due to increased distances. Some consider that it is more efficient to pull air through a long tube than to push it with a fan. A solar chimney can use natural convection (warm air rising) to create a vacuum to draw filtered passive cooling tube air through the largest diameter cooling tubes. Natural convection may be slower than using a solar-powered fan. Sharp 90-degree angles should be avoided in the construction of the tube – two 45-degree bends produce less-turbulent, more efficient air flow. While smooth-wall tubes are more efficient in moving the air, they are less efficient in transferring energy. There are three configurations, a closed loop design, an open 'fresh air' system or a combination: Closed loop system: Air from inside the home or structure is blown through a U-shaped loop of typically of tube(s) where it is moderated to near earth temperature before returning to be distributed via ductwork throughout the home or structure. The closed loop system can be more effective cooling the air (during air temperature extremes) than an open system, since it cools and recools the same air. Open system: Outside air is drawn from a filtered air intake (Minimum Efficiency Reporting Value MERV 8+ air filter is recommended) to cool or preheat the air. The tubes are typically long straight tubes into the home. An open system combined with energy recovery ventilation can be nearly as efficient (80-95%) as a closed loop, and ensures that entering fresh air is filtered and tempered. Combination system: This can be constructed with dampers that allow either closed or open operation, depending on fresh air ventilation requirements. Such a design, even in closed loop mode, could draw a quantity of fresh air when an air pressure drop is created by a solar chimney, clothes dryer, fireplace, kitchen or bathroom exhaust vents. It is better to draw in filtered passive cooling tube air than unconditioned outside air. Single-pass earth air heat exchangers offer the potential for indoor air quality improvement over conventional systems by providing an increased supply of outdoor air. In some configurations of single-pass systems, a continuous supply of outdoor air is provided. This type of system would usually include one or more ventilation heat recovery units. Shared Ground Arrays A shared ground array comprises connected ground heat exchangers for use by more than one home. They can deliver low-carbon heating where individual ground-coupled heat exchangers are not viable, such as in terraced housing with little outside space. They can also provide opportunities to decarbonise heating for groups of homes away from dense urban centres where traditional district heating is unlikely to be economically viable. Other benefits include higher efficiency and lower capital cost, greater resident control to choose their own electricity supplier, and reduction in the number of exchangers required due to the variance in peak load times between different households. 
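The trade-off between tube diameter, length and airflow discussed in the design section above can be illustrated with a very simplified steady-state estimate. The sketch below treats the pipe wall as being at the undisturbed ground temperature (so it ignores soil thermal resistance, moisture and transient effects and therefore overstates performance), and uses the standard Dittus-Boelter correlation for in-pipe convection; all numbers are illustrative assumptions, and real designs should rely on tools such as those named earlier (GAEA, EnergyPlus and the like).

```python
import math

def earth_tube_outlet(t_in, t_ground, diameter, length, flow_m3_h):
    """Rough outlet air temperature (deg C) of a single straight earth tube."""
    rho, cp = 1.2, 1005.0            # air density (kg/m3), heat capacity (J/kg.K)
    mu, k, pr = 1.8e-5, 0.026, 0.71  # viscosity, conductivity, Prandtl number
    area = math.pi * diameter ** 2 / 4
    velocity = (flow_m3_h / 3600.0) / area
    re = rho * velocity * diameter / mu
    # Dittus-Boelter correlation for turbulent pipe flow (air being cooled)
    nu = 0.023 * re ** 0.8 * pr ** 0.3
    h = nu * k / diameter                     # W/m2.K at the pipe wall
    m_dot = rho * flow_m3_h / 3600.0          # kg/s
    ntu = h * math.pi * diameter * length / (m_dot * cp)
    effectiveness = 1.0 - math.exp(-ntu)      # fractional approach to ground temperature
    return t_in + effectiveness * (t_ground - t_in)

# Example: 30 m of 200 mm pipe, 150 m3/h of 32 degC outside air, 12 degC ground
print(f"outlet air ~ {earth_tube_outlet(32.0, 12.0, 0.20, 30.0, 150.0):.1f} degC")
```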
Thermal Labyrinths A thermal labyrinth performs the same function as an earth tube, but they are usually formed from a larger volume rectilinear space, sometimes incorporated into building basements or under ground floors, and which are in turn divided by numerous internal walls to form a labyrinthine air path. Maximising the length of the air path ensures a better heat transfer effect. The construction of the labyrinth walls, floors, and dividing walls is normally of high thermal mass cast concrete and concrete block, with the exterior walls and floors in direct contact with the surrounding earth. Safety If humidity and associated mold colonization is not addressed in system design, occupants may face health risks. At some sites, the humidity in the earth tubes may be controlled simply by passive drainage if the water table is sufficiently deep and the soil has relatively high permeability. In situations where passive drainage is not feasible or needs to be augmented for further moisture reduction, active (dehumidifier) or passive (desiccant) systems may treat the air stream. Formal research indicates that earth-air heat exchangers reduce building ventilation air pollution. Rabindra (2004) states, “The tunnel [earth-Air heat exchanger] is found not to support the growth of bacteria and fungi; rather it is found to reduce the quantity of bacteria and fungi thus making the air safer for humans to inhale. It is therefore clear that the use of EAT [Earth Air Tunnel] not only helps save the energy but also helps reduce the air pollution by reducing bacteria and fungi.” Likewise, Flueckiger (1999) in a study of twelve earth-air heat exchangers varying in design, pipe material, size and age, stated, “This study was performed because of concerns of potential microbial growth in the buried pipes of ground-coupled air systems. The results however demonstrate, that no harmful growth occurs and that the airborne concentrations of viable spores and bacteria, with few exceptions, even decreases after passage through the pipe-system”, and further stated, “Based on these investigations the operation of ground-coupled earth-to-air heat exchangers is acceptable as long as regular controls are undertaken and if appropriate cleaning facilities are available”. Whether using earth tubes with or without antimicrobial material, it is extremely important that the underground cooling tubes have an excellent condensation drain and be installed at a 2-3 degree grade to ensure the constant removal of condensed water from the tubes. When implementing in a house without a basement on a flat lot, an external condensation tower can be installed at a depth lower than where the tube enters into the house and at a point close to the wall entry. The condensation tower installation requires the added use of a condensate pump in which to remove the water from the tower. For installations in houses with basements, the pipes are graded so that the condensation drain located within the house is at the lowest point. In either installation, the tube must continually slope towards either the condensation tower or the condensation drain. The inner surface of the tube, including all joints must be smooth to aid in the flow and removal of condensate. Corrugated or ribbed tubes and rough interior joints must not be used. Joints connecting the tubes together must be tight enough to prevent water or gas infiltration. In certain geographic areas, it is important that the joints prevent Radon gas infiltration. 
Porous materials like uncoated concrete tubes cannot be used. Ideally, Earth Tubes with antimicrobial inner layers should be used in installations to inhibit the potential growth of molds and bacteria within the tubes. Effectiveness Implementations of earth-air heat exchangers for either partial or full cooling and/or heating of facility ventilation air have had mixed success. The literature is, unfortunately, well populated with over-generalizations about the applicability of these systems – both in favor of, and against. A key aspect of earth-air heat exchangers is the passive nature of operation and consideration of the wide variability of conditions in natural systems. Earth-air heat exchangers can be very cost effective in both up-front/capital costs as well as long-term operation and maintenance costs. However, this varies widely depending on the location’s latitude, altitude, ambient Earth temperature, climatic temperature-and-relative-humidity extremes, solar radiation, water table, soil type (thermal conductivity), soil moisture content and the efficiency of the building's exterior envelope design / insulation. Generally, dry-and-low-density soil with little or no ground shade will yield the least benefit, while dense damp soil with considerable shade should perform well. A slow drip watering system may improve thermal performance. Damp soil in contact with the cooling tube conducts heat more efficiently than dry soil. Earth cooling tubes are much less effective in hot humid climates (like Florida) where the ambient temperature of the earth approaches human comfort temperature. The higher the ambient temperature of the earth, the less effective it is for cooling and dehumidification. However, the earth can be used to partially cool and dehumidify the replacement fresh air intake for passive-solar thermal buffer zone areas like the laundry room, or a solarium / greenhouse, especially those with a hot tub, swim spa, or indoor swimming pool, where warm humid air is exhausted in the summer, and a supply of cooler drier replacement air is desired. Not all regions and sites are suitable for earth-air heat exchangers. Conditions which may hinder or preclude proper implementation include shallow bedrock, high water table, and insufficient space, among others. In some areas, only cooling or heating may be afforded by earth-air heat exchangers. In these areas, provision for thermal recharge of the ground must especially be considered. In dual function systems (both heating and cooling), the warm season provides ground thermal recharge for the cool season and the cool season provides ground thermal recharge for the warm season, though overtaxing the thermal reservoir must be considered even with dual function systems. Environmental impact In the context of today's diminishing fossil fuel reserves, increasing electrical costs, air pollution and global warming, properly designed earth cooling tubes offer a sustainable alternative to reduce or eliminate the need for conventional compressor-based air conditioning systems, in non-tropical climates. They can also help to balance the electricity grid to support fluctuating supply from other renewable energy sources. They also provide the added benefit of controlled, filtered, temperate fresh air intake, which is especially valuable in tight, well-weatherized, efficient building envelopes. Water to earth An alternative to the earth-to-air heat exchanger is the "water" to earth heat exchanger. 
This is typically similar to a geothermal heat pump tubing embedded horizontally in the soil (or could be a vertical sonde) to a similar depth of the earth-air heat exchanger. It uses approximately double the length of pipe of 35 mm diameter, e.g., around 80 m compared to an EAHX of 40 m. A heat exchanger coil is placed before the air inlet of the heat recovery ventilator. Typically a brine liquid (heavily salted water) is used as the heat exchanger fluid. Many European installations are now using this setup due to the ease of installation. No fall or drainage point is required and it is safe because of the reduced risk from mold. See also Ab anbar Aquifer thermal energy storage Cistern Earth sheltering Geothermal heat pump Geothermal power HVAC Passive cooling Qanat Renewable energy Seasonal thermal energy storage Solar air conditioning Solar chimney Stepwell Windcatcher Yakhchāl References International Energy Agency, Air Infiltration and Ventilation Center, Ventilation Information Paper No. 11, 2006, "Use of Earth to Air Heat Exchangers for Cooling" External links Energy Savers: Earth Cooling Tubes (US Dept of Energy) Low-energy building Heating, ventilation, and air conditioning Heat exchangers ja:地中熱
Ground-coupled heat exchanger
Chemistry,Engineering
2,962
22,006,877
https://en.wikipedia.org/wiki/Plesetsk%20Cosmodrome%20Site%20133
Site 133, also known as Raduga ( meaning Rainbow), is a launch complex at the Plesetsk Cosmodrome in Russia. It is used by Rockot, and previously Kosmos carrier rockets. It consists of a single pad, originally designated 133/1, and later 133/3. The first launch from Site 133 was of a Kosmos-2I, on 16 March 1967, carrying the Kosmos 148 satellite. 91 Kosmos-2 launches were conducted, the last of which was on 18 June 1977, with Kosmos 919. It was later reactivated as Site 133/3, and supported 38 Kosmos-3M launches between 1985 and 1994. During the late 1990s, Site 133/3 was rebuilt as a surface launch pad for Rockot, following the decision to use it for commercial launches. There were concerns that noise generated during a launch from Site 175 at the Baikonur Cosmodrome, a silo-based complex, could cause vibrations that would damage the payload. Rockots are wheeled up to the complex in a vertical position, and then the service tower is rolled around it. The payload is lifted by a crane and placed on top of the rocket. The procedure is in contrast to many other Russian and Soviet rockets, which had traditionally been assembled horizontally and then transferred to the launch site via railways. The first Rockot launch from Site 133 took place on 16 May 2000, orbiting the SimSat-1 DemoSat. The last Rokot flight took place 26 December 2019, from Site 133. 31 Rockots in total were launched from the site. References Plesetsk Cosmodrome
Plesetsk Cosmodrome Site 133
Astronomy
337
27,445,106
https://en.wikipedia.org/wiki/Melanophlogite
Melanophlogite (MEP) is a rare silicate mineral and a polymorph of silica (SiO2). It has a zeolite-like porous structure which results in relatively low and not well-defined values of its density and refractive index. Melanophlogite often overgrows crystals of sulfur or calcite and typically contains a few percent of organic and sulfur compounds. Darkening of organics in melanophlogite upon heating is a possible origin of its name, which comes from the Greek for "black" and "to be burned". History Melanophlogite was identified and named by Arnold von Lasaulx in 1876, although G. Alessi had described a very similar mineral as early as 1827. The mineral had a cubic crystal structure; chemical analysis revealed that it is mainly composed of SiO2, but also contains up to 12% of carbon and sulfur. It was suggested that the decomposition of organic matter (carbon) in the mineral was responsible for its blackening upon heating. All studied samples originated from Sicily, and thus the mineral was called Girgenti, an old name for Agrigento town in Sicily. The name was officially changed to melanophlogite in 1927. Synthesis and properties Melanophlogite can be grown synthetically at low temperatures and elevated pressures (e.g. 160 °C and 60 bar). It has a zeolite-like porous structure composed of Si5O10 and Si6O12 rings. Its crystalline symmetry depends on the content of its voids: crystals with spherical guest molecules or atoms (e.g. CH4, Xe, Kr) are cubic, and the symmetry lowers to tetragonal for non-spherical guests like tetrahydrofuran or tetrahydrothiophene. Since many molecules form unstable guests, the symmetry of melanophlogite can change between cubic and tetragonal upon mild heating (<100 °C). Even the cubic melanophlogite often shows anisotropic optical properties. These were attributed not to tetragonal fragments but to the organic film in the mineral, which could be removed by low-temperature annealing (~400 °C). Otherwise, melanophlogite is thermally stable and its physical properties do not change upon 20-day annealing at 800 °C, but it converts to cristobalite after heating at temperatures above 900 °C. Occurrence Melanophlogite is a rare mineral which usually forms round drops or complex intertwined overgrowth structures over sulfur or calcite crystals. Rarely, it occurs as individual cubic crystallites a few millimeters in size. It is found in Parma, Torino, Caltanissetta and Livorno provinces of Italy; also in several mines of California in the US, in Crimea (Ukraine) and Pardubice Region (Czech Republic). References External links Spectroscopic data on Melanophlogite - zeolite properties of melanophlogite Silica polymorphs
Melanophlogite
Materials_science
637
64,273,672
https://en.wikipedia.org/wiki/Mammalian%20vision
Mammalian vision is the process by which mammals perceive light, analyze it and form subjective sensations, on the basis of which the animal builds its representation of the spatial structure of the external world. This process is carried out by the visual sensory system, the foundations of which were laid at an early stage in the evolution of chordates. Its peripheral part is formed by the eyes, its intermediate part (which transmits nerve impulses) by the optic nerves, and its central part by the visual centers of the cerebral cortex. The recognition of visual stimuli in mammals is the result of the joint work of the eyes and the brain. At the same time, a significant part of the visual information is processed at the receptor level itself, which considerably reduces the amount of such information that must reach the brain. Elimination of this redundancy is unavoidable: while the amount of information delivered to the receptors of the visual system is measured in millions of bits per second (in humans, on the order of 10^7 bits/s), the capacity of the nervous system to process it is limited to tens of bits per second. The organs of vision in mammals are, as a rule, well developed, although they play a smaller role in the animals' lives than they do for birds: mammals usually pay little attention to motionless objects, so even cautious animals such as a fox or a hare may come close to a human who stands completely still. The eyes of mammals are relatively small; in humans, the eye accounts for about 1% of the mass of the head, whereas in a starling it reaches 15%. Nocturnal animals (for example, tarsiers) and animals that live in open landscapes have larger eyes. The vision of forest animals is less sharp, and in burrowing underground species (moles, gophers, zokors) the eyes are reduced to a greater extent; in some cases (marsupial moles, mole rats, blind mole) they are even covered by a membrane of skin. Mammalian eye As in other vertebrates, the mammalian eye develops from the anterior brain vesicle and has a rounded shape (the eyeball). Literature Vision by taxon Mammal anatomy Animal physiology
Mammalian vision
Biology
445
263,627
https://en.wikipedia.org/wiki/IBM%206400
The IBM 6400 family of line matrix printers consisted of high-speed business computer printers introduced by IBM in 1995. These printers were designed for use on a variety of IBM systems including mainframes, servers, and PCs. Configuration The 6400 was available in a choice of open pedestal (to minimize floor size requirements) or an enclosed cabinet (for quiet operation). Three models existed, with print speeds of 500, 1000 or 1500 lines/minute. When configured with the appropriate graphics option, it could print mailing bar codes certified by the U.S. Postal Service. Twelve configurations were commonly sold by IBM. Rebadged These printers were manufactured by Printronix Corp and rebranded for IBM. All internal parts had the Printronix logo and/or artwork. Although they once did, IBM no longer manufactures printers. One of their old printer divisions became Lexmark; the other became the IBM Printing Systems Division, which was subsequently sold to Ricoh in 2007. References 6400 Line printers Computer-related introductions in 1995
IBM 6400
Technology
206
70,137,186
https://en.wikipedia.org/wiki/Oleiharenicola
Oleiharenicola is a genus of bacteria from the family of Opitutaceae. See also List of bacterial orders List of bacteria genera References Verrucomicrobiota Bacteria genera Taxa described in 2018
Oleiharenicola
Biology
45
8,648,906
https://en.wikipedia.org/wiki/Critical%20hours
Critical hours for radio stations is the time from sunrise to two hours after sunrise, and from two hours before sunset until sunset, local time. During this time, certain American radio stations may be operating with reduced power as a result of Section 73.187 of the Federal Communications Commission's rules. Canadian restricted hours are similar to critical hours, except that the restriction results from the January 17, 1984, U.S.-Canadian AM Agreement. Canadian restricted hours are called "critical hours" in the U.S.-Canadian Agreement, but in the AM Engineering database, the FCC calls them "Canadian restricted hours" to distinguish them from the domestically defined critical hours. Canadian restricted hours is that time from sunrise to one and one-half hours after sunrise, and from one and one-half hours before sunset until sunset, local time. U.S. stations operate with restricted hours because of Canadian stations, and vice versa. Those radio stations that must lower their power during the critical hours are required to do so because this is when the propagation of radio waves changes from groundwave to skywave (at sunset) or vice versa (at sunrise). This can cause radio stations to be picked up much farther away, possibly causing interference with other stations on the same frequency or adjacent frequencies. Usually stations operating under the restrictions of Critical Hours must sign off the air between the end of the evening critical hours and the beginning of the morning critical hours. In effect, permission to operate during critical hours gives daytime-only stations a few more hours in their broadcast day. This is especially important in autumn and winter, when these stations might otherwise need to be off the air during the important morning and afternoon drive times, when AM radio listening is at its highest. See also Pre-sunrise and post-sunset authorization References Broadcast engineering
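The windows defined above are simple offsets from local sunrise and sunset, which makes them easy to compute once those times are known. The sketch below uses placeholder sunrise and sunset values; a real implementation would look them up for the station's location and date.

```python
from datetime import datetime, timedelta

def critical_hours(sunrise, sunset, span_hours=2.0):
    """Return the (morning, evening) critical-hours windows as (start, end) pairs."""
    span = timedelta(hours=span_hours)
    return (sunrise, sunrise + span), (sunset - span, sunset)

# Placeholder times for one winter day at some station location.
sunrise = datetime(2024, 1, 15, 7, 21)
sunset = datetime(2024, 1, 15, 17, 4)

morning, evening = critical_hours(sunrise, sunset)            # U.S. definition: 2 hours
canadian = critical_hours(sunrise, sunset, span_hours=1.5)    # Canadian restricted hours: 1.5 hours
print("morning:", morning[0].time(), "-", morning[1].time())
print("evening:", evening[0].time(), "-", evening[1].time())
```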
Critical hours
Engineering
365
24,389,373
https://en.wikipedia.org/wiki/Network%20management%20software
Network management software is software that is used to provision, discover, monitor and maintain computer networks. Purpose With the expansion of the World Wide Web and the Internet, computer networks have become very large and complex, making them impossible to manage manually. In response, a suite of network management software was developed to help reduce the burden of managing the growing complexity of computer networks. Network management software usually collects information about network devices (which are called nodes) using protocols like SNMP, ICMP, CDP etc. This information is then presented to network administrators in an easy-to-understand and accessible manner to help them quickly identify and remediate problems. Problems may present themselves in the form of network faults, performance bottlenecks, compliance issues etc. Some advanced network management software may rectify network problems automatically. Network management software may also help with tasks involved in provisioning new networks, such as installing and configuring new network nodes. It may also help with maintenance of existing networks, such as upgrading software on existing network devices, creating new virtual networks etc. Functions Network provisioning: This function enables network managers to provision new network devices in an environment. Automating this step reduces cost and eliminates chances of human error. Mapping or Discovery: This function enables the software to discover the features of a target network. Some features that are usually discovered are: the nodes in a network, the connectivity between these nodes, the vendor types and capabilities for these nodes, the performance characteristics etc. Monitoring: This function enables the network management software to monitor the network for problems and to suggest improvements. The software may poll the devices periodically or register itself to receive alerts from network devices. One mechanism for network devices to volunteer information about themselves is by sending an SNMP Trap. Monitoring can reveal faults in the network such as failed or misconfigured nodes, performance bottlenecks, malicious actors, intrusions etc. Configuration management: This function enables the software to ensure that the network configuration is as desired and there is no configuration drift. Regulatory compliance: This function enables the network management software to ensure that the network meets the regulatory standards and complies with applicable laws. Change control: This function enables the software to ensure that network changes are enacted in a controlled and coordinated manner. Change control can enable audit trails, which have applications during a forensic investigation after a network intrusion. Software Asset Management: This function enables the software to inventory software installed on nodes along with details like version and install date. Additionally, it can also provide software deployment and patch management. Cybersecurity: This function enables the software to use all the data gathered from the nodes to identify security risks in an IT environment. References Network management
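A minimal sketch of the monitoring function described above is shown below: a poller walks a list of nodes, checks whether each is reachable, and reports state changes as fault events. Real products use SNMP, ICMP and vendor APIs; this sketch only tests whether a TCP management port answers, and the hostnames and ports are placeholders.

```python
import socket
import time

# Placeholder node list: (hostname, management port).
NODES = [("core-switch.example.net", 22), ("edge-router.example.net", 22)]

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to the node's management port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def poll_once():
    return {f"{host}:{port}": is_reachable(host, port) for host, port in NODES}

if __name__ == "__main__":
    previous = {}
    for _ in range(3):                     # a real poller would loop indefinitely
        status = poll_once()
        for node, up in status.items():
            if previous.get(node) != up:   # report state changes (fault events)
                print(f"{node} is now {'UP' if up else 'DOWN'}")
        previous = status
        time.sleep(10)
```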
Network management software
Engineering
545
4,623,454
https://en.wikipedia.org/wiki/Nominal%20analogue%20blanking
Nominal analogue blanking is the outermost part of the overscan of a standard definition digital television image. It consists of a gap of black (or nearly black) pixels at the left and right sides, which correspond to the end and start of the horizontal blanking interval: the front porch at the right side (the end of a line, before the sync pulse), and the back porch at the left side (the start of a line, after the sync pulse and before drawing the next line). Digital television ordinarily contains 720 pixels per line, but only 702 (PAL) to 704 (NTSC) of them contain picture content. The location is variable, since analogue equipment may shift the picture sideways by an unexpected amount or in an unexpected direction. The exact width is determined by taking the defined duration of the active line in PAL or NTSC and multiplying it by the 13.5 MHz pixel clock of digital SDTV. The PAL active line is exactly 52 μs, which equates to exactly 702 pixels. Notably, screen shapes and aspect ratios were defined in an era of purely analogue TV broadcasting. This means that any picture with nominal analogue blanking, whether its active width is 702 pixels, about 704, or fewer, will by definition be a 4:3 picture. Therefore, when cross-converting into a square-pixel environment (like MPEG-4 and its variants), this width must always scale to 768 (PAL) or 640 (NTSC). This has the outcome of causing a full picture of 720x576 or 720x480 to be wider than 4:3. In fact, a purely digitally sourced SDTV image, with no analogue blanking, will come out slightly wider than 768x576 (PAL) or 640x480 (NTSC) once stretched to square pixels. Standard definition widescreen pictures were also defined in an analogue environment and must also be treated as such. This means that a purely digitally sourced widescreen SDTV image, with no analogue blanking, will likewise come out slightly wider than 16:9 once stretched to square pixels. For details, see the technical specifications of overscan amounts. References ITU-R BT.601: Studio encoding parameters of digital television for standard 4:3 and wide screen 16:9 aspect ratios ITU-R BT.1700: Characteristics of composite video signals for conventional analogue television systems See also Safe area Overscan Horizontal blanking interval Vertical blanking interval Television technology
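The pixel arithmetic described above can be verified directly; a small sketch using only the figures quoted in the text:

```python
# Active-line duration times the SDTV pixel clock gives the active pixel count.
active_line_us = 52.0          # PAL active line, microseconds
pixel_clock_mhz = 13.5         # ITU-R BT.601 sampling rate, MHz
active_pixels = active_line_us * pixel_clock_mhz
print(active_pixels)           # 702.0 pixels

# Scaling the 4:3 active width (702 px PAL, 704 px NTSC) to square pixels,
# the full 720-pixel line comes out slightly wider than 768 or 640.
print(720 * 768 / 702)         # ~787.7 square pixels wide (PAL)
print(720 * 640 / 704)         # ~654.5 square pixels wide (NTSC)
```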
Nominal analogue blanking
Technology
487
17,389,946
https://en.wikipedia.org/wiki/Crying
Crying is the dropping of tears (or welling of tears in the eyes) in response to an emotional state or physical pain. Emotions that can lead to crying include sadness, anger, joy, and fear. Crying can also be caused by relief from a period of stress or anxiety, or as an empathetic response. The act of crying has been defined as "a complex secretomotor phenomenon characterized by the shedding of tears from the lacrimal apparatus, without any irritation of the ocular structures", instead, giving a relief which protects from conjunctivitis. A related medical term is lacrimation, which also refers to the non-emotional shedding of tears. Various forms of crying are known as sobbing, weeping, wailing, whimpering, bawling, and blubbering. For crying to be described as sobbing, it usually has to be accompanied by a set of other symptoms, such as slow but erratic inhalation, occasional instances of breath holding, and muscular tremor. A neuronal connection between the lacrimal gland and the areas of the human brain involved with emotion has been established. Tears produced during emotional crying have a chemical composition which differs from other types of tears. They contain significantly greater quantities of the hormones prolactin, adrenocorticotropic hormone, and Leu-enkephalin, and the elements potassium and manganese. Function The question of the function or origin of emotional tears remains open. Theories range from the simple, such as response to inflicted pain, to the more complex, including nonverbal communication in order to elicit altruistic helping behaviour from others. Some have also claimed that crying can serve several biochemical purposes, such as relieving stress and clearance of the eyes. There is some empirical evidence that crying lowers stress levels, potentially due to the release of hormones such as oxytocin. Crying is believed to be an outlet or a result of a burst of intense emotional sensations, such as agony, surprise, or joy. This theory could explain why people cry during cheerful events, as well as very painful events. Individuals tend to remember the positive aspects of crying, and may create a link between other simultaneous positive events, such as resolving feelings of grief. Together, these features of memory reinforce the idea that crying helped the individual. In Hippocratic and medieval medicine, tears were associated with the bodily humors, and crying was seen as purgation of excess humors from the brain. William James thought of emotions as reflexes prior to rational thought, believing that the physiological response, as if to stress or irritation, is a precondition to cognitively becoming aware of emotions such as fear or anger. William H. Frey II, a biochemist at the University of Minnesota, proposed that people feel "better" after crying due to the elimination of hormones associated with stress, specifically adrenocorticotropic hormone. This, paired with increased mucosal secretion during crying, could lead to a theory that crying is a mechanism developed in humans to dispose of this stress hormone when levels grow too high. Tears have a limited ability to eliminate chemicals, reducing the likelihood of this theory. Recent psychological theories of crying emphasize the relationship of crying to the experience of perceived helplessness. From this perspective, an underlying experience of helplessness can usually explain why people cry. 
For example, a person may cry after receiving surprisingly happy news, ostensibly because the person feels powerless or unable to influence what is happening. Emotional tears have also been put into an evolutionary context. One study proposes that crying, by blurring vision, can handicap aggressive or defensive actions, and may function as a reliable signal of appeasement, need, or attachment. Oren Hasson, an evolutionary psychologist in the zoology department at Tel Aviv University believes that crying shows vulnerability and submission to an attacker, solicits sympathy and aid from bystanders, and signals shared emotional attachments. Another theory that follows evolutionary psychology is given by Paul D. MacLean, who suggests that the vocal part of crying was used first as a "separation cry" to help reunite parents and offspring. The tears, he speculates, are a result of a link between the development of the cerebrum and the discovery of fire. MacLean theorizes that since early humans must have relied heavily on fire, their eyes were frequently producing reflexive tears in response to the smoke. As humans evolved the smoke possibly gained a strong association with the loss of life and, therefore, sorrow. In 2017, Carlo Bellieni analysed the weeping behavior, and concluded that most animals can cry but only humans have psychoemotional shedding of tears, also known as "weeping". Weeping is a behavior that induces empathy perhaps with the mediation of the mirror neurons network, and influences the mood through the release of hormones elicited by the massage effect made by the tears on the cheeks, or through the relief of the sobbing rhythm. Many ethologists would disagree. Biological response It can be very difficult to observe biological effects of crying, especially considering many psychologists believe the environment in which a person cries can alter the experience of the crier. Laboratory studies have shown several physical effects of crying, such as increased heart rate, sweating, and slowed breathing. Although it appears that the type of effects an individual experiences depends largely on the individual, for many it seems that the calming effects of crying, such as slowed breathing, outlast the negative effects, which could explain why people remember crying as being helpful and beneficial. Globus sensation The most common side effect of crying is feeling a lump in the throat of the crier, otherwise known as a globus sensation. Although many things can cause a globus sensation, the one experienced in crying is a response to the stress experienced by the sympathetic nervous system. When an animal is threatened by some form of danger, the sympathetic nervous system triggers several processes to allow the animal to fight or flee. This includes shutting down unnecessary body functions, such as digestion, and increasing blood flow and oxygen to necessary muscles. When an individual experiences emotions such as sorrow, the sympathetic nervous system still responds in this way. Another function increased by the sympathetic nervous system is breathing, which includes opening the throat in order to increase air flow. This is done by expanding the glottis, which allows more air to pass through. As an individual is undergoing this sympathetic response, eventually the parasympathetic nervous system attempts to undo the response by decreasing high stress activities and increasing recuperative processes, which includes running digestion. 
This involves swallowing, a process which requires closing the fully expanded glottis to prevent food from entering the larynx. The glottis attempts to remain open as an individual cries. This fight to close the glottis creates a sensation that feels like a lump in the individual's throat. Other common side effects of crying are quivering lips, a runny nose, and an unsteady, cracking voice. Frequency According to the German Society of Ophthalmology, which has collated different scientific studies on crying, the average woman cries between 30 and 64 times a year, and the average man cries between 6 and 17 times a year. Men tend to cry for between two and four minutes, and women cry for about six minutes. Crying turns into sobbing for women in 65% of cases, compared to just 6% for men. Before adolescence, no difference between the sexes was found. The gap between how often men and women cry is larger in countries that have more wealth, democracy, and gender egalitarianism. In infants Infants can shed tears at approximately four to eight weeks of age. Crying is critical to when a baby is first born. Their ability to cry upon delivery signals they can breathe on their own and reflects they have successfully adapted to life outside the womb. Although crying is an infant's mode of communication, it is not limited to a monotonous sound. There are three different types of cries apparent in infants. The first of these three is a basic cry, which is a systematic cry with a pattern of crying and silence. The basic cry starts with a cry coupled with a briefer silence, which is followed by a short high-pitched inspiratory whistle. Then, there is a brief silence followed by another cry. Hunger is a main stimulant of the basic cry. An anger cry is much like the basic cry; in this cry, more excess air is forced through the vocal cords, making it a louder, more abrupt cry. This type of cry is characterized by the same temporal sequence as the basic pattern but distinguished by differences in the length of the various phase components. The third cry is the pain cry, which, unlike the other two, has no preliminary moaning. The pain cry is one loud cry, followed by a period of breath holding. Most adults can determine whether an infant's cries signify anger or pain. Most parents also have a better ability to distinguish their own infant's cries than those of a different child. A 2009 study found that babies mimic their parents' pitch contour. French infants wail on a rising note while German infants favor a falling melody. Carlo Bellieni found a correlation between the features of babies' crying and the level of pain, though he found no direct correlation between the cause of crying and its characteristics. T. Berry Brazelton has suggested that overstimulation may be a contributing factor to infant crying and that periods of active crying might serve the purpose of discharging overstimulation and helping the baby's nervous system regain homeostasis. Sheila Kitzinger found a correlation between the mother's prenatal stress level and later amount of crying by the infant. She also found a correlation between birth trauma and crying. Mothers who had experienced obstetrical interventions or who were made to feel powerless during birth had babies who cried more than other babies. Rather than try one remedy after another to stop this crying, she suggested that mothers hold their babies and allow the crying to run its course. Other studies have supported Kitzinger's findings. 
Babies who had experienced birth complications had longer crying spells at three months of age and awakened more frequently at night crying. Based on these various findings, Aletha Solter has proposed a general emotional release theory of infant crying. When infants cry for no obvious reason after all other causes (such as hunger or pain) are ruled out, she suggests that the crying may signify a beneficial stress-release mechanism. She recommends the "crying-in-arms" approach as a way to comfort these infants. Another way of comforting and calming the baby is to mimic the familiarity and coziness of mother's womb. Robert Hamilton developed a technique to parents where a baby may be calmed and stop crying in five seconds. A study published in Current Biology has shown that some parents with experience of children are better at identifying types of cries than those who do not have experience of children. Categorizing dimensions There have been many attempts to differentiate between the two distinct types of crying: positive and negative. Different perspectives have been broken down into three dimensions to examine the emotions being felt and also to grasp the contrast between the two types. Spatial perspective explains sad crying as reaching out to be "there", such as at home or with a person who may have just died. In contrast, joyful crying is acknowledging being "here." It emphasized the intense awareness of one's location, such as at a relative's wedding. Temporal perspective explains crying slightly differently. In temporal perspective, sorrowful crying is due to looking to the past with regret or to the future with dread. This illustrated crying as a result of losing someone and regretting not spending more time with them or being nervous about an upcoming event. Crying as a result of happiness would then be a response to a moment as if it is eternal; the person is frozen in a blissful, immortalized present. The last dimension is known as the public-private perspective. This describes the two types of crying as ways to imply details about the self as known privately or one's public identity. For example, crying due to a loss is a message to the outside world that pleads for help with coping with internal sufferings. Or, as Arthur Schopenhauer suggested, sorrowful crying is a method of self-pity or self-regard, a way one comforts oneself. Joyful crying, in contrast, is in recognition of beauty, glory, or wonderfulness. Religious views In Orthodox and Catholic Christianity, tears are considered to be a sign of genuine repentance, and a desirable thing in many cases. Tears of true contrition are thought to be sacramental, helpful in forgiving sins, in that they recall the Baptism of the penitent. The Shia Ithna Ashari (Muslims who believe in Twelve Imams after Muhammad) consider crying to be an important responsibility towards their leaders who were martyred. They believe a true lover of Imam Hussain can feel the afflictions and oppressions Imam Hussain suffered; his feelings are so immense that they break out into tears and wail. The pain of the beloved is the pain of the lover. Crying on Imam Hussain is the sign or expression of true love. The imams of Shias have encouraged crying especially on Imam Hussain and have been informed about rewards for this act. They support their view through a tradition (saying) from Muhammad who said: (On the Day of Judgment, a group would be seen in the most excellent and honourable of states. 
They would be asked if they were of the Angels or of the Prophets.) In reply they would state: "We are neither Angels nor Prophets but of the indigent ones from the ummah of Muhammad". They would then be asked: "How then did you achieve this lofty and honourable status?" They would reply: "We did not perform very many good deeds nor did we pass all the days in a state of fasting or all the nights in a state of worship but yes, we used to offer our (daily) prayers (regularly) and whenever we used to hear the mention of Muhammad, tears would roll down our cheeks". Types of tears There are three types of tears: basal tears, reflexive tears, and psychic tears. Basal tears are produced at a rate of about 1 to 2 microliters a minute, and are made in order to keep the eye lubricated and smooth out irregularities in the cornea. Reflexive tears are tears that are made in response to irritants to the eye, such as when chopping onions or getting poked in the eye. Psychic tears are produced by the lacrimal system and are the tears expelled during emotional states. Related disorders Baby colic, where an infant's excessive crying has no obvious cause or underlying medical disorder. Bell's palsy, where faulty regeneration of the facial nerve can cause sufferers to shed tears while eating. Cri du chat syndrome, where the characteristic cry of affected infants, which is similar to that of a meowing kitten, is due to problems with the larynx and nervous system. Familial dysautonomia, where there can be a lack of overflow tears (alacrima), during emotional crying. Pseudobulbar affect, uncontrollable episodes of laughing and/or crying. References Further reading : examines the taboo that still surrounds public crying. External links Physiological psychology Emotion Reflexes
Crying
Biology
3,167
62,067,518
https://en.wikipedia.org/wiki/Stephen%20Eales
Stephen Eales is a professor of astrophysics at Cardiff University, where he is currently head of the Astronomy Group. In 2015, he was awarded the Herschel Medal from the Royal Astronomical Society for outstanding contributions to observational astrophysics. He also writes articles and books about astronomy. Research His main research area is the new field of submillimetre astronomy, in particular using submillimetre observations to investigate the origin and evolution of galaxies. He has led a number of large submillimetre observing programmes. In particular, with Loretta Dunne he led the Herschel ATLAS, the largest survey of the extragalactic sky carried out with the Herschel Space Observatory. Bibliography Origins – how the planets, stars, galaxies and the universe began (Springer 2007). Planets and Planetary Systems (textbook) (John Wiley and Sons, 2009) References Footnotes Sources Winners of RAS medals in 2015 Smoking Supernovae and Dusty Galaxies, Sky and Telescope 2004 The Final Frontier, Astronomy Now, 1997, Vol. 11, No. 6, p. 41 Cool dust and baby stars, Physics World, Volume 26, 1 Pilbratt, G. et al. 2010, Herschel Space Observatory – an ESA facility for far-infrared and submillimetre astronomy, Astronomy and Astrophysics, 518, L1 British astrophysicists Year of birth missing (living people) Living people Academics of Cardiff University
Stephen Eales
Astronomy
284
50,247,718
https://en.wikipedia.org/wiki/Syntrophobacter
Syntrophobacter is a genus of bacteria in the family Syntrophobacteraceae. Species of Syntrophobacter have the ability to grow on propionate. See also List of bacterial orders List of bacteria genera References Further reading Thermodesulfobacteriota Bacteria genera
Syntrophobacter
Biology
64
37,568,089
https://en.wikipedia.org/wiki/Eleuteroschisis
Eleuteroschisis is asexual reproduction in dinoflagellates in which the parent organism completely sheds its theca (i.e. undergoes ecdysis) either before or immediately following cell division. Neither daughter cell inherits part of the parent theca. In terms of asexual division of motile cells, desmoschisis is generally the case in gonyaulacaleans whereas eleutheroschisis is generally the case in peridinialeans. References Dinoflagellate biology Asexual reproduction
Eleuteroschisis
Biology
115
720,240
https://en.wikipedia.org/wiki/Potassium%20bitartrate
Potassium bitartrate, also known as potassium hydrogen tartrate, with formula KC4H5O6, is a chemical compound with a number of uses. It is the potassium acid salt of tartaric acid (a carboxylic acid). Especially in cooking, it is also known as cream of tartar. It is used as a component of baking powders and baking mixes, as mordant in textile dyeing, as reducer of chromium trioxide in mordants for wool, as a metal processing agent that prevents oxidation, as an intermediate for other potassium tartrates, as a cleaning agent when mixed with a weak acid such as vinegar, and as reference standard pH buffer. Medical uses include as a medical cathartic, as a diuretic, and as a historic veterinary laxative and diuretic. It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes. In culinary applications, potassium bitartrate is valued for its role in stabilizing egg whites, which enhances the volume and texture of meringues and soufflés. Its acidic properties prevent sugar syrups from crystallizing, aiding in the production of smooth confections such as candies and frostings. When combined with baking soda, it acts as a leavening agent, producing carbon dioxide gas that helps baked goods rise. Additionally, potassium bitartrate is used to stabilize whipped cream, allowing it to retain its shape for longer periods. History Potassium bitartrate was first characterized by Swedish chemist Carl Wilhelm Scheele (1742–1786). This was a result of Scheele's work studying fluorite and hydrofluoric acid. Scheele may have been the first scientist to publish work on potassium bitartrate, but use of potassium bitartrate has been reported to date back 7000 years to an ancient village in northern Iran. Modern applications of cream of tartar started in 1768 after it gained popularity when the French started using it regularly in their cuisine. In 2021, a connection between potassium bitartrate and canine and feline toxicity of grapes was first proposed. Since then, it has been deemed likely as the source of grape and raisin toxicity to pets. Occurrence Potassium bitartrate is naturally formed in grapes from the acid dissociation of tartaric acid into bitartrate and tartrate ions. Potassium bitartrate has a low solubility in water. It crystallizes in wine casks during the fermentation of grape juice, and can precipitate out of wine in bottles. The rate of potassium bitartrate precipitation depends on the rates of nuclei formation and crystal growth, which varies based on a wine's alcohol, sugar, and extract content. The crystals (wine diamonds) will often form on the underside of a cork in wine-filled bottles that have been stored at temperatures below , and will seldom, if ever, dissolve naturally into the wine. Over time, crystal formation is less likely to occur due to the decreasing supersaturation of potassium bitartrate, with the greatest amount of precipitation occurring in the initial few days of cooling. Historically, it was known as beeswing for its resemblance to the sheen of bees' wings. It was collected and purified to produce the white, odorless, acidic powder used for many culinary and other household purposes. These crystals also precipitate out of fresh grape juice that has been chilled or allowed to stand for some time. 
To prevent crystals from forming in homemade grape jam or jelly, the prerequisite fresh grape juice should be chilled overnight to promote crystallization. The potassium bitartrate crystals are removed by filtering through two layers of cheesecloth. The filtered juice may then be made into jam or jelly. In some cases they adhere to the side of the chilled container, making filtering unnecessary. The presence of crystals is less prevalent in red wines than in white wines. This is because red wines have a higher amount of tannin and colouring matter present as well as a higher sugar and extract content than white wines. Various methods such as promoting crystallization and filtering, removing the active species required for potassium bitartrate precipitation, and adding additives have been implemented to reduce the presence of potassium bitartrate crystals in wine. Applications In food In food, potassium bitartrate is used for: Stabilizing egg whites, increasing their warmth-tolerance and volume Stabilizing whipped cream, maintaining its texture and volume Anti-caking and thickening Preventing sugar syrups from crystallizing by causing some of the sucrose to break down into glucose and fructose Reducing discoloration of boiled vegetables Additionally, it is used as a component of: Baking powder, as an acid ingredient to activate baking soda Salt substitutes, in combination with potassium chloride A similar acid salt, sodium acid pyrophosphate, can be confused with cream of tartar because of its common function as a component of baking powder. Baking Adding cream of tartar to egg whites gives volume to cakes, and makes them more tender. As cream of tartar is added, the pH decreases to around the isoelectric point of the foaming proteins in egg whites. Foaming properties of egg whites are optimal at this pH due to increased protein-protein interactions. The low pH also results in a whiter crumb in cakes due to flour pigments that respond to these pH changes. However, adding too much cream of tartar (>2.4% weight of egg white) can affect the texture and taste of cakes. The optimal cream of tartar concentration to increase volume and the whiteness of interior crumbs without making the cake too tender, is about 1/4 tsp per egg white. As an acid, cream of tartar with heat reduces sugar crystallization in invert syrups by helping to break down sucrose into its monomer components - fructose and glucose in equal parts. Preventing the formation of sugar crystals makes the syrup have a non-grainy texture, shinier and less prone to break and dry. However, a downside of relying on cream of tartar to thin out crystalline sugar confections (like fudge) is that it can be hard to add the right amount of acid to get the desired consistency. Cream of tartar is used as a type of acid salt that is crucial in baking powder. Upon dissolving in batter or dough, the tartaric acid that is released reacts with baking soda to form carbon dioxide that is used for leavening. Since cream of tartar is fast-acting, it releases over 70 percent of carbon dioxide gas during mixing. Household use Potassium bitartrate can be mixed with an acidic liquid, such as lemon juice or white vinegar, to make a paste-like cleaning agent for metals, such as brass, aluminium, or copper, or with water for other cleaning applications, such as removing light stains from porcelain. 
This mixture is sometimes mistakenly made with vinegar and sodium bicarbonate (baking soda), which actually react to neutralize each other, creating carbon dioxide and a sodium acetate solution. Cream of tartar was often used in traditional dyeing where the complexing action of the tartrate ions was used to adjust the solubility and hydrolysis of mordant salts such as tin chloride and alum. Cream of tartar, when mixed into a paste with hydrogen peroxide, can be used to clean rust from some hand tools, notably hand files. The paste is applied, left to set for a few hours, and then washed off with a baking soda/water solution. After another rinse with water and thorough drying, a thin application of oil will protect the file from further rusting. Slowing the set time of plaster of Paris products (most widely used in gypsum plaster wall work and artwork casting) is typically achieved by the simple introduction of almost any acid diluted into the mixing water. A commercial retardant premix additive sold by USG to trade interior plasterers includes at least 40% potassium bitartrate. The remaining ingredients are the same plaster of Paris and quartz-silica aggregate already prominent in the main product. This means that the only active ingredient is the cream of tartar. Cosmetics For dyeing hair, potassium bitartrate can be mixed with henna as the mild acid needed to activate the henna. Medicinal use Cream of tartar has been used internally as a purgative, but this is dangerous because an excess of potassium, or hyperkalemia, may occur. Chemistry Potassium bitartrate is the United States' National Institute of Standards and Technology's primary reference standard for a pH buffer. Using an excess of the salt in water, a saturated solution is created with a pH of 3.557 at . Upon dissolution in water, potassium bitartrate will dissociate into acid tartrate, tartrate, and potassium ions. Thus, a saturated solution creates a buffer with standard pH. Before use as a standard, it is recommended that the solution be filtered or decanted between and . Potassium carbonate can be made by burning cream of tartar, which produces "pearl ash". This process is now obsolete but produced a higher quality (reasonable purity) than "potash" extracted from wood or other plant ashes. Production It is produced as a byproduct of winemaking by purifying the precipitate that is deposited in wine barrels. It arises from the tartaric acid and potassium naturally occurring in grapes. See also Tartrate Tartaric acid Potassium tartrate (K2C4H4O6) Potassium bicarbonate References External links Description of Potassium Bitartrate at Monash Scientific Material Safety Data Sheet (MSDS) for Potassium Bitartrate at Fisher Scientific Acid salts Potassium compounds Tartrates Leavening agents Edible thickening agents
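The leavening chemistry described under Baking can be summarized by the overall reaction of cream of tartar with sodium bicarbonate (a standard textbook equation rather than one taken from this article; the tartrate product is potassium sodium tartrate, also known as Rochelle salt):

$$ \mathrm{KHC_4H_4O_6 + NaHCO_3 \rightarrow KNaC_4H_4O_6 + H_2O + CO_2\uparrow} $$

The carbon dioxide released on the right-hand side is the leavening gas referred to above.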
Potassium bitartrate
Chemistry
2,037
937,771
https://en.wikipedia.org/wiki/Eta%20Cassiopeiae
Eta Cassiopeiae (η Cassiopeiae, abbreviated Eta Cas, η Cas) is a binary star system in the northern constellation of Cassiopeia. Its binary nature was first discovered by William Herschel in August 1779. Based upon parallax measurements, the distance to this system is from the Sun. The two components are designated Eta Cassiopeiae A (officially named Achird ) and B. Nomenclature η Cassiopeiae (Latinised to Eta Cassiopeiae) is the system's Bayer designation. The designations of the two constituents as Eta Cassiopeiae A and B derive from the convention used by the Washington Multiplicity Catalog (WMC) for star systems, and adopted by the International Astronomical Union (IAU). The proper name Achird was apparently first applied to Eta Cassiopeiae in the Skalnate Pleso Atlas of the Heavens published in 1950, but is not known prior to that. Richard Hinckley Allen gives no historical names for the star in his book Star Names: Their Lore and Meaning. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. The WGSN decided to attribute proper names to individual stars rather than entire multiple systems. It approved the name Achird for the component Eta Cassiopeiae A on 5 September 2017 and it is now so included in the List of IAU-approved Star Names. In Chinese astronomy, Eta Cassiopeiae is within the Legs mansion, and is part of the () asterism named for a famous charioteer during the Spring and Autumn period. The other components are Beta Cassiopeiae (Caph), Kappa Cassiopeiae, Alpha Cassiopeiae (Schedar) and Lambda Cassiopeiae. Consequently, the Chinese name for Eta Cassiopeiae itself is (, ). Properties Eta Cassiopeiae's two components are orbiting around each other over a period of 480 years. Based on an estimated semi-major axis of 12″ and a parallax of 0.168″, the two stars are separated by an average distance of , where an AU is the average distance between the Sun and the Earth. However, the large orbital eccentricity of 0.497 means that their periapsis, or closest approach, is as small as 36 AU, with an apoapsis of about 106 AU. For comparison, the semi-major axis of Neptune is 30 AU. There are six dimmer optical components listed in the Washington Double Star Catalog. However, none of them are related to the Eta Cassiopeiae system and are in reality more distant stars. The primary has been reported to be a spectroscopic binary, but this has never been confirmed. Eta Cassiopeiae A has a stellar classification of G0 V, which makes it a G-type main-sequence star like the Sun. It therefore resembles what the Sun might look like were humans to observe it from Eta Cassiopeiae. The star has 97% of the mass of the Sun and 100% of the Sun's radius. It is of apparent magnitude 3.44, radiating 129% of the luminosity of the Sun from its outer envelope at an effective temperature of . It appears to be rotating at a leisurely rate, with a projected rotational velocity of . The cooler and dimmer (magnitude 7.51) Eta Cassiopeiae B is of stellar classification K7 V; a K-type main-sequence star. It has only 57% of the mass of the Sun and 66% of the Sun's radius. Smaller stars generate energy more slowly, so this component radiates only 6% of the luminosity of the Sun. Its outer atmosphere has an effective temperature of 4,036 K. Compared to the Sun, both components show only half the abundance of elements other than hydrogen and helium—what astronomers term their metallicity. 
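The orbital figures quoted above can be cross-checked directly from the angular semi-major axis, the parallax, and the eccentricity; a sketch of the arithmetic, consistent with the separations stated in this article:

$$ a \approx \frac{12''}{0.168''} \approx 71\ \text{AU}, \qquad r_\text{peri} = a(1-e) \approx 71 \times 0.503 \approx 36\ \text{AU}, \qquad r_\text{apo} = a(1+e) \approx 71 \times 1.497 \approx 106\ \text{AU} $$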
A necessary condition for the existence of a planet in this system is the presence of stable zones in which the object can remain in orbit for long intervals. For hypothetical planets in a circular orbit around the individual members of this star system, the maximum stable orbital radius is computed to be 9.5 AU for the primary and 7.1 AU for the secondary. (Note that the orbit of Mars is 1.5 AU from the Sun.) A planet orbiting outside of both stars would need to be at least 235 AU distant. See also Lists of stars List of stars in Cassiopeia References External links Cassiopeiae, Eta Binary stars Cassiopeia (constellation) G-type main-sequence stars K-type main-sequence stars Cassiopeiae, Eta Cassiopeiae, 24 003821 RS Canum Venaticorum variables 0219 004614 BD+57 0150 0034 Achird
Eta Cassiopeiae
Astronomy
1,002
38,526,503
https://en.wikipedia.org/wiki/Chronometric%20singularity
In theoretical physics, a chronometric singularity (also called a temporal or horological singularity) is a point at which time cannot be measured or described. An example involves a time at a coordinate singularity, e.g. a geographical pole. Since local time on Earth is determined by longitude, and no unique longitude exists at a pole, time is not uniquely defined at that point. There is a clear connection with coordinate singularities, as can be seen from this example. In relativity, similar singularities can be found in the case of Schwarzschild coordinates. Stephen Hawking once compared a talk-show guest's question about "before the beginning of time" to asking "what's north of the North Pole". See also Coordinate singularity No-boundary proposal and imaginary time Spacetime singularity Time References Geodesy Timekeeping
Chronometric singularity
Physics,Mathematics
178
37,522,041
https://en.wikipedia.org/wiki/Scalindua%20wagneri
Candidatus Scalindua wagneri is a Gram-negative coccoid-shaped bacterium that was first isolated from a wastewater treatment plant. This bacterium is an obligate anaerobic chemolithotroph that undergoes anaerobic ammonium oxidation (anammox). It can be used in the wastewater treatment industry in nitrogen reactors to remove nitrogenous wastes from wastewater without contributing to fixed nitrogen loss and greenhouse gas emission. Characterization Candidatus Scalindua wagneri is a coccoid-shaped bacterium with a diameter of 1 μm. Like other Planctomycetota, S. wagneri is Gram-negative and does not have peptidoglycan in its cell wall. In addition, the bacterium contains two inner membranes instead of having one inner membrane and one outer membrane that surrounds the cell wall. Some of the near neighbors are other species within the new Scalindua genus, such as "Candidatus S. sorokinii" and "Candidatus S. brodae". Other neighbors include "Candidatus Kuenenia stuttgartiensis" and "Candidatus Brocadia anammoxidans". S. wagneri and its genus share only about 85% similarity with other members in its evolutionary line, which suggests that it is distantly related to other anaerobic ammonium oxidizing (anammox) bacteria. Discovery Markus Schmid from the Jetten lab first discovered S. wagneri in a landfill leachate treatment plant located in Pitsea, UK, on August 1, 2001. These bacteria doubled in number about every three weeks in laboratory conditions, which made them very difficult to isolate. Therefore, the researchers used 16S rRNA (ribosomal RNA) gene analysis on the biofilm of wastewater samples to detect the presence of these bacteria. They amplified and isolated the 16S rRNA gene from the biofilm using PCR and gel electrophoresis. Then, they cloned the DNA into TOPO vectors. Once the researchers sequenced the DNA, they aligned the 16S rRNA gene sequences to a genome database and found that the sequences are related to the anammox bacteria. One of the sequences showed a 93% similarity to Candidatus Scalindua sorokinii, which suggests that this sequence belonged to a new species within the genus Scalindua, and the researchers named it Candidatus Scalindua wagneri after Michael Wagner, a microbial ecologist. Metabolism S. wagneri is an obligate anaerobic chemolithoautotroph and undergoes anaerobic ammonium oxidation (anammox) in the intracytoplasmic compartment called an anammoxosome. During the anammox process, ammonium is oxidized using nitrite as an electron acceptor and forms dinitrogen gas as a product. It is proposed that this mechanism occurs through the production of a hydrazine intermediate using hydroxylamine, which is derived from nitrite. In addition, S. wagneri uses nitrite as an electron donor to fix carbon dioxide and forms nitrate as a byproduct. To test the metabolic properties of S. wagneri, Nakajima et al. performed anammox activity tests using nitrogen compounds labeled with the 15N isotope and measured 28N2, 29N2, and 30N2 concentrations after 15 days. The researchers found that the concentrations of the 28N2 and 29N2 gases increased significantly. These results suggest that ammonia and nitrite are used in equal amounts to make 29N2, and that denitrification occurs concurrently with anammox metabolism. Genome Currently, genomic information about S. wagneri is very limited. Current genome sequences were collected from DNA isolated from the bacteria growing in a marine anammox bacteria (MAB) reactor.
Then, the 16S rRNA genes on the DNA were amplified using a specific oligonucleotide primer for Planctomycetales, separated using gel electrophoresis, and sequenced using a CEQ 2000 DNA Sequencer. Analysis of the 16S rRNA gene sequences was performed using the GENETYX program, and the alignments and phylogenetic trees were made using BLAST, CLUSTALW and neighbor joining, respectively. To have a better understanding of the genome, S. wagneri can be compared to one of its better-known relatives. For example, Candidatus Scalindua profunda has a genome length of 5.14 million base pairs with a GC content of 39.1%. There is no genomic information about the length or % GC content for S. wagneri. However, there are hundreds of 476 base pair partial sequences for its 16S rRNA gene. Using fluorescent in situ hybridization (FISH) analysis, a technique used to detect specific DNA sequences on chromosomes, researchers were not able to detect hybridization between the chromosome of S. wagneri and the putative anammox DNA probe. This suggests that S. wagneri is not very similar to the known anammox bacteria, so the researchers categorized the bacterium into its own genus. Ecology Although researchers are unable to isolate pure cultures of S. wagneri, it is believed to encompass a broad niche. Using 16S rRNA gene analysis, Schmid first found evidence of the bacteria in wastewater treatment plants. Other researchers also found 16S rRNA gene evidence in a petroleum reservoir held at a temperature range between 55 °C and 75 °C in addition to freshwater and marine ecosystems, such as estuaries. Importance and useful applications S. wagneri allows wastewater treatment plants to reduce operation costs while reducing the adverse effects of nitrification and denitrification on the environment. These bacteria contribute to the development of new technologies for wastewater management by aiding in the efficient removal of nitrogenous compounds in wastewater. Usually, nitrogen reactors use both nitrification and denitrification to remove nitrogenous wastes. These processes have high operation costs due to the continuous maintenance of aerobic conditions in the reactor. Denitrification also produces nitrous oxide (N2O), which is a greenhouse gas that is detrimental to the environment. Production of N2O contributes to the loss of fixed nitrogen, which regulates the biological productivity of ecosystems. By inoculating wastewater reactors with the anaerobic S. wagneri, operation costs can be reduced by about ninety percent without the production of greenhouse gases. This allows for better wastewater management in a more cost-efficient manner without contributing to climate change. References Environmental microbiology Planctomycetota Bacteria described in 2003 Candidatus taxa
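The anammox metabolism described above is commonly summarized by the overall reaction shown below (a standard textbook form for anammox in general, not a result specific to this species), in which ammonium is oxidized with nitrite as the electron acceptor to yield dinitrogen gas:

$$ \mathrm{NH_4^+ + NO_2^- \rightarrow N_2 + 2\,H_2O} $$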
Scalindua wagneri
Environmental_science
1,364
12,200,421
https://en.wikipedia.org/wiki/Leaf%20driver
Leaf driver refers to a device driver that accesses logically or physically existent devices on an I/O bus, and implements the functions defined for the device, such as transferring data to or from the device or accessing device registers. In some systems, for example Solaris, drivers are organized into a tree structure. A driver that provides services to other drivers below it in the tree is called, in Solaris terminology, a bus nexus driver. A tree node with no children, a leaf node, is called a leaf driver. Leaf devices (those requiring leaf drivers) are typical peripheral devices such as disks, tapes, network adapters, Framebuffer, and so forth. Drivers for these devices export the traditional character and block driver interfaces for use by user processes to read and write data to storage or communication devices. See also Nexus driver References Device drivers
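The tree organization described above can be pictured with a small sketch (hypothetical driver names; the rule shown, that a node with children is a nexus and a childless node is a leaf, mirrors the Solaris terminology rather than any real devinfo API):

```python
class DriverNode:
    """A node in a simplified, hypothetical device driver tree."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []

    def kind(self):
        # A node that provides services to drivers below it is a bus nexus driver;
        # a node with no children drives an actual peripheral and is a leaf driver.
        return "nexus" if self.children else "leaf"

def walk(node, depth=0):
    print("  " * depth + f"{node.name}: {node.kind()}")
    for child in node.children:
        walk(child, depth + 1)

root = DriverNode("rootnex", [
    DriverNode("pci", [DriverNode("sd (disk)"), DriverNode("e1000g (network)")]),
    DriverNode("usb", [DriverNode("hid (keyboard)")]),
])
walk(root)
```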
Leaf driver
Technology
174
24,555,828
https://en.wikipedia.org/wiki/Fimbrial%20usher%20protein
The fimbrial usher protein is involved in biogenesis of the pilus in Gram-negative bacteria. The biogenesis of some fimbriae (or pili) requires a two-component assembly and transport system which is composed of a periplasmic chaperone and a pore-forming outer membrane protein which has been termed a molecular 'usher'; this is the chaperone-usher pathway. The usher protein has a molecular weight ranging from 86 to 100 kDa and is composed of a membrane-spanning 24-stranded beta barrel domain, reminiscent of porins, and of four periplasmic soluble domains: an N-terminal one of about 120 residues (NTD), a 'middle' domain of about 80 residues located as a soluble insertion within the beta barrel region of the sequence (plug domain) and two IG-like domains (each about 80 residues long) at the C-terminus (CTD1 and CTD2). Although the degree of sequence similarity of these proteins is not very high they share a number of characteristics. One of these is the presence of two pairs of disulfide bond-forming cysteines, the first one located in the NTD and the second in CTD2. The best conserved region of the sequence corresponds to the plug domain. References Protein domains Protein families Outer membrane proteins
Fimbrial usher protein
Biology
277
11,459,999
https://en.wikipedia.org/wiki/Alternaria%20sonchi
Alternaria sonchi is a plant pathogen. It was originally found on the leaves of Sonchus asper (a flowering plant) in Wisconsin, USA. References External links Index Fungorum USDA ARS Fungal Database Alternaria Fungal plant pathogens and diseases Fungi described in 1916 Fungus species
Alternaria sonchi
Biology
61
10,439,649
https://en.wikipedia.org/wiki/NGC%20346
NGC 346 is a young open cluster of stars with associated nebula located in the Small Magellanic Cloud (SMC) that appears in the southern constellation of Tucana. It was discovered on August 1, 1826, by Scottish astronomer James Dunlop. J. L. E. Dreyer described it as "bright, large, very irregular figure, much brighter middle similar to double star, mottled but not resolved". On the outskirts of the cluster is the multiple star system HD 5980, one of the brightest stars in the SMC. This cluster is located near the center of the brightest H II region in the SMC, designated N66. This is positioned in the northeast section of the galactic bar. Stellar surveys have identified 230 massive OB stars in the direction of this cluster. Thirty-three of the cluster members are O-type stars, with 11 of type O6.5 or earlier. The inner region of the cluster appears centrally condensed, while the stars outside that region are more dispersed. The youngest cluster members near the center have ages of less than two million years, and observations suggest the cluster is still engaged in high-mass star formation. An estimate of the cluster's star formation rate has also been published. Observations with NASA's James Webb Space Telescope have since revealed new details of the cluster's dust environment and of protostar formation and early planetary development within it, refining earlier models of star formation in the Small Magellanic Cloud. See also List of most massive stars References External links ESA's Hubble Heritage site Hubble picture and information on NGC 346 ESO Beautiful Image of a Cosmic Sculpture Open clusters H II regions Star-forming regions Small Magellanic Cloud Tucana 0346 18260801 Discoveries by James Dunlop
NGC 346
Astronomy
396
11,437,563
https://en.wikipedia.org/wiki/Cymadothea%20trifolii
Cymadothea trifolii is a fungal plant pathogen infecting the red clover. External links Index Fungorum USDA ARS Fungal Database References Fungal plant pathogens and diseases Mycosphaerellaceae Fungi described in 1935 Fungus species
Cymadothea trifolii
Biology
52
7,090,506
https://en.wikipedia.org/wiki/Dielectric%20loss
In electrical engineering, dielectric loss quantifies a dielectric material's inherent dissipation of electromagnetic energy (e.g. heat). It can be parameterized in terms of either the loss angle δ or the corresponding loss tangent tan δ. Both refer to the phasor in the complex plane whose real and imaginary parts are the resistive (lossy) component of an electromagnetic field and its reactive (lossless) counterpart. Electromagnetic field perspective For time-varying electromagnetic fields, the electromagnetic energy is typically viewed as waves propagating either through free space, in a transmission line, in a microstrip line, or through a waveguide. Dielectrics are often used in all of these environments to mechanically support electrical conductors and keep them at a fixed separation, or to provide a barrier between different gas pressures yet still transmit electromagnetic power. Maxwell's equations are solved for the electric and magnetic field components of the propagating waves that satisfy the boundary conditions of the specific environment's geometry. In such electromagnetic analyses, the parameters permittivity ε, permeability μ, and conductivity σ represent the properties of the media through which the waves propagate. The permittivity can have real and imaginary components (the latter excluding σ effects, see below) such that ε = ε′ − jε″. If we assume that we have a wave function such that E = E0 e^(jωt), then Maxwell's curl equation for the magnetic field can be written as ∇ × H = jωε′E + (ωε″ + σ)E, where ε″ is the imaginary component of permittivity attributed to bound charge and dipole relaxation phenomena, which gives rise to energy loss that is indistinguishable from the loss due to the free charge conduction that is quantified by σ. The component ε′ represents the familiar lossless permittivity, given by the product of the free space permittivity and the relative real permittivity: ε′ = ε0εr′. Loss tangent The loss tangent is then defined as the ratio (or angle in a complex plane) of the lossy reaction to the electric field in the curl equation to the lossless reaction: tan δ = (ωε″ + σ)/(ωε′). The solution for the electric field of the electromagnetic wave is then of the form E = E0 e^(−jkz√(1 − j·tan δ)), where k = ω√(με′) = 2π/λ, ω is the angular frequency of the wave, and λ is the wavelength in the dielectric material. For dielectrics with small loss, the square root can be approximated using only the zeroth- and first-order terms of the binomial expansion; also, tan δ ≈ δ for small δ. Since power is proportional to the electric field intensity squared, it turns out that the power decays with propagation distance z as P = P0 e^(−kz·tan δ), where P0 is the initial power. There are often other contributions to power loss for electromagnetic waves that are not included in this expression, such as the wall currents in the conductors of a transmission line or waveguide. Also, a similar analysis can be applied to the magnetic permeability, μ = μ′ − jμ″, with the subsequent definition of a magnetic loss tangent tan δm = μ″/μ′. The electric loss tangent can be similarly defined as tan δe = (ωε″ + σ)/(ωε′), upon introduction of an effective dielectric conductivity (see relative permittivity#Lossy medium). Discrete circuit perspective A capacitor is a discrete electrical circuit component typically made of a dielectric placed between conductors. One lumped element model of a capacitor includes a lossless ideal capacitor in series with a resistor termed the equivalent series resistance (ESR). The ESR represents losses in the capacitor. In a low-loss capacitor the ESR is very small (the conduction is high, leading to a low resistivity), and in a lossy capacitor the ESR can be large. 
Note that the ESR is not simply the resistance that would be measured across a capacitor by an ohmmeter. The ESR is a derived quantity representing the loss due to both the dielectric's conduction electrons and the bound dipole relaxation phenomena mentioned above. Either the conduction electrons or the dipole relaxation typically dominates the loss for a particular dielectric material and manufacturing method. For the case of the conduction electrons being the dominant loss, ESR = σ/(ε′ω²C), where C is the lossless capacitance. When representing the electrical circuit parameters as vectors in a complex plane, known as phasors, a capacitor's loss tangent is equal to the tangent of the angle between the capacitor's impedance vector and the negative reactive axis. The loss tangent is then tan δ = ESR/|Xc| = ωC·ESR. Since the same AC current flows through both the ESR and Xc, the loss tangent is also the ratio of the resistive power loss in the ESR to the reactive power oscillating in the capacitor. For this reason, a capacitor's loss tangent is sometimes stated as its dissipation factor, or the reciprocal of its quality factor Q, as follows: DF = tan δ = 1/Q. References Electromagnetism Electrical engineering External links Loss in dielectrics, frequency dependence
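As a rough numerical illustration of the quantities defined above, the following sketch uses assumed, purely illustrative material and circuit values (not taken from any datasheet) to compute a loss tangent, the resulting power attenuation over a propagation distance, and the ESR of a small capacitor:

```python
import math

# Illustrative (assumed) material and circuit values -- not from a datasheet.
eps0 = 8.854e-12          # vacuum permittivity, F/m
eps_r_real = 2.1          # real relative permittivity (PTFE-like)
eps_r_imag = 4.2e-4       # imaginary relative permittivity (bound-charge loss)
sigma = 1e-12             # free-charge conductivity, S/m
f = 1e9                   # frequency, Hz
omega = 2 * math.pi * f

eps_real = eps0 * eps_r_real
eps_imag = eps0 * eps_r_imag

# Loss tangent: ratio of the lossy to the lossless reaction in Maxwell's curl equation.
tan_delta = (omega * eps_imag + sigma) / (omega * eps_real)

# Power decays as P = P0 * exp(-k * z * tan_delta), with k = omega * sqrt(mu * eps_real).
mu0 = 4e-7 * math.pi
k = omega * math.sqrt(mu0 * eps_real)
z = 0.1                                     # propagation distance, m
power_remaining = math.exp(-k * z * tan_delta)

# ESR of a capacitor whose loss is dominated by conduction: ESR = sigma / (eps_real * omega^2 * C).
C = 100e-12                                 # lossless capacitance, F
esr = sigma / (eps_real * omega**2 * C)

print(f"tan d = {tan_delta:.2e}, power remaining after {z} m = {power_remaining:.4f}, ESR = {esr:.2e} ohm")
```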
Dielectric loss
Physics,Engineering
982
57,731,049
https://en.wikipedia.org/wiki/Estriol%20dipropionate
Estriol dipropionate, or estriol 3,17β-dipropionate, is a synthetic estrogen and estrogen ester – specifically, the C3 and C17β dipropionate ester of estriol – which was first described in 1963 and was never marketed. Following a single intramuscular injection of 6.94 mg estriol dipropionate (equivalent to 5.0 mg estriol) in an oil solution, peak levels of estriol occurred after 0.83 days, an elimination half-life of 12.7 hours was observed, and estriol levels remained elevated for up to 4 days. For comparison, the duration of estriol was much shorter, while that of estriol dihexanoate was much longer. See also List of estrogen esters § Estriol esters References Abandoned drugs Estriol esters Propionate esters Synthetic estrogens
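The dose equivalence quoted above (6.94 mg of the dipropionate corresponding to 5.0 mg of estriol) can be checked from the molar masses; a sketch assuming roughly 288.4 g/mol for estriol and 400.5 g/mol for the 3,17β-dipropionate:

$$ 5.0\ \text{mg} \times \frac{400.5\ \text{g/mol}}{288.4\ \text{g/mol}} \approx 6.94\ \text{mg} $$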
Estriol dipropionate
Chemistry
199
18,267,690
https://en.wikipedia.org/wiki/NGC%206820%20and%20NGC%206823
NGC 6820 is a small reflection nebula near the open cluster NGC 6823 in Vulpecula. The reflection nebula and cluster are embedded in a large faint emission nebula called Sh 2-86. The whole area of nebulosity is often referred to as NGC 6820. M27, the Dumbbell Nebula, is found three degrees to the east, and α Vulpeculae three degrees to the west. Open star cluster NGC 6823 is about 50 light-years across and lies about 6,000 light-years away. The center of the cluster formed about two million years ago and is dominated in brightness by a host of bright young blue stars. Outer parts of the cluster contain even younger stars. It forms the core of the Vulpecula OB1 stellar association. Image gallery References External links National Optical Observatory 6820 NGC 6823 H II regions Vulpecula Sharpless objects 18630808 Star-forming regions
NGC 6820 and NGC 6823
Astronomy
193
157,835
https://en.wikipedia.org/wiki/Water%20table
The water table is the upper surface of the zone of saturation. The zone of saturation is where the pores and fractures of the ground are saturated with groundwater, which may be fresh, saline, or brackish, depending on the locality. It can also be simply explained as the depth below which the ground is saturated. The water table is the surface where the water pressure head is equal to the atmospheric pressure (where gauge pressure = 0). It may be visualized as the "surface" of the subsurface materials that are saturated with groundwater in a given vicinity. The groundwater may be from precipitation or from groundwater flowing into the aquifer. In areas with sufficient precipitation, water infiltrates through pore spaces in the soil, passing through the unsaturated zone. At increasing depths, water fills in more of the pore spaces in the soils, until a zone of saturation is reached. Below the water table, in the phreatic zone (zone of saturation), layers of permeable rock that yield groundwater are called aquifers. In less permeable soils, such as tight bedrock formations and historic lakebed deposits, the water table may be more difficult to define. “Water table” and “water level” are not synonymous. If a deeper aquifer has a lower permeable unit that confines the upward flow, then the water level in this aquifer may rise to a level that is greater or less than the elevation of the actual water table. The elevation of the water in this deeper well is dependent upon the pressure in the deeper aquifer and is referred to as the potentiometric surface, not the water table. Formation The water table may vary due to seasonal changes such as precipitation and evapotranspiration. In undeveloped regions with permeable soils that receive sufficient amounts of precipitation, the water table typically slopes toward rivers that act to drain the groundwater away and release the pressure in the aquifer. Springs, rivers, lakes and oases occur when the water table reaches the surface. Groundwater entering rivers and lakes accounts for the base-flow water levels in water bodies. Surface topography Within an aquifer, the water table is rarely horizontal, but reflects the surface relief due to the capillary effect (capillary fringe) in soils, sediments and other porous media. In the aquifer, groundwater flows from points of higher pressure to points of lower pressure, and the direction of groundwater flow typically has both a horizontal and a vertical component. The slope of the water table is known as the “hydraulic gradient”, which depends on the rate at which water is added to and removed from the aquifer and the permeability of the material. The water table does not always mimic the topography due to variations in the underlying geological structure (e.g., folded, faulted, fractured bedrock). Perched water tables A perched water table (or perched aquifer) is an aquifer that occurs above the regional water table. This occurs when there is an impermeable layer of rock or sediment (aquiclude) or relatively impermeable layer (aquitard) above the main water table/aquifer but below the land surface. If a perched aquifer's flow intersects the surface, at a valley wall, for example, the water is discharged as a spring. Fluctuations Tidal On low-lying oceanic islands with porous soil, freshwater tends to collect in lenticular pools on top of the denser seawater intruding from the sides of the islands. Such an island's freshwater lens, and thus the water table, rises and falls with the tides. 
Seasonal In some regions, for example, Great Britain or California, winter precipitation is often higher than summer precipitation and so the groundwater storage is not fully recharged in summer. Consequently, the water table is lower during the summer. The zone between these seasonal high and low water-table levels is known as the "zone of intermittent saturation", within which the water table fluctuates in response to climatic conditions. Long-term Fossil water is groundwater that has remained in an aquifer for several millennia and occurs mainly in deserts. It is non-renewable by present-day rainfall due to its depth below the surface, and any extraction causes a permanent change in the water table in such regions. Effects on crop yield Most crops need the water table to lie at or below a certain minimum depth, often expressed as DWT, the depth to water table in centimetres. For some important food and fiber crops, classifications of this minimum depth have been made, because at shallower depths the crop suffers a yield decline. Effects on construction A water table close to the surface affects excavation, drainage, foundations, wells and leach fields (in areas without municipal water and sanitation), and more. When excavation occurs near enough to the water table to reach its capillary action, groundwater must be removed during construction. This is conspicuous in Berlin, which is built on sandy, marshy ground where the water table is generally 2 meters below the surface. Pink and blue pipes can often be seen carrying groundwater from construction sites into the Spree river (or canals). See also References Aquifers Hydrology Hydrogeology Irrigation Water supply Water and the environment Karst
Water table
Chemistry,Engineering,Environmental_science
1,080
77,580,960
https://en.wikipedia.org/wiki/HD%2096063
HD 96063 (proper name Dingolay) is an 8th-magnitude red-giant branch star located about away in the constellation of Leo. It is orbited by one confirmed exoplanet, HD 96063 b (proper name Ramajay), a gas giant slightly larger and more massive than Jupiter. Nomenclature In 2019, the Republic of Trinidad and Tobago was assigned the task of giving the HD 96063 system a proper name as part of the IAU100 NameExoWorlds Project, planned to celebrate the hundredth anniversary of the International Astronomical Union (IAU), which granted the right to name an exoplanetary system to every state and territory in the world. Names were submitted and selected within Trinidad and Tobago and then presented to the IAU to be officially recognized. On 17 December 2019, the IAU announced that HD 96063 and its planet, b, were named Dingolay and Ramajay, respectively. The two names are both derived from terms related to the Trinidad and Tobago Carnival. Dingolay is a dance form that represents the culture and language of Trinidad and Tobago's ancestors via intricate movements. Ramajay is a steelpan style of singing and music that celebrates Trinidad and Tobago's forefathers' culture and language. Stellar characteristics HD 96063 is an evolved yellow/orange star with an effective temperature of about 5,000–5,100 K, typical of stars entering the red-giant branch. Its precise nature, however, has been controversial. Once classified as a G6-type main-sequence star, the star is more recently thought to be a K-type "yellow giant," somewhere between three and five times as large as the Sun. When the planet HD 96063 b was discovered, the star was assumed to be billion years old with a sun-like mass ( ), but subsequent studies consider it to be more massive at about 1.4 , and thus younger ( Gyr). With a luminosity roughly ten times that of the Sun and a distance of 454 light-years, the star has an apparent magnitude of 8.254, too faint to be seen from Earth by the naked eye. Planetary system In 2011, radial-velocity observations made at the W. M. Keck Observatory revealed the existence of an exoplanet around HD 96063. The planet, HD 96063 b, is thought to be a gas giant at least 1.265 times the mass of Jupiter, which orbits its host star at a distance of 1.11 AU with an Earth-like period of 362 days. Its orbit is moderately eccentric, with an eccentricity comparable to that of the planet Mercury (0.2056). See also List of proper names of stars List of proper names of exoplanets List of stars in Leo List of exoplanets discovered in 2011 References External links Dingolay Leo (constellation) 096063 054158 BD-01 02476 K-type giants Planetary systems with one confirmed planet J11044445-0230475 Planetary systems Stars
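As a rough consistency check (not taken from the article itself), the quoted orbit can be compared against Kepler's third law, which for a planet of negligible mass gives the period in years as the square root of the semi-major axis in AU cubed divided by the stellar mass in solar masses. Using the roughly 1.4 solar-mass estimate and the 1.11 AU separation quoted above, the sketch below reproduces a period close to the stated 362 days.

import math

# Rough check of the quoted orbit of HD 96063 b against Kepler's third law.
# P [years] ~ sqrt(a^3 / M), with a in AU and M in solar masses.

def orbital_period_days(a_au: float, stellar_mass_msun: float) -> float:
    return 365.25 * math.sqrt(a_au ** 3 / stellar_mass_msun)

print(round(orbital_period_days(a_au=1.11, stellar_mass_msun=1.4)))  # about 361 days, close to the quoted 362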
HD 96063
Astronomy
640
31,536,987
https://en.wikipedia.org/wiki/Perceptual%20trap
A perceptual trap is an ecological scenario in which environmental change, typically anthropogenic, leads an organism to avoid an otherwise high-quality habitat. The concept is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat. History In a 2004 article discussing source–sink dynamics, James Battin did not distinguish between high-quality habitats that are preferred or avoided, labelling both "sources". The latter scenario, in which a high-quality habitat is avoided, was first recognised as an important phenomenon in 2007 by Gilroy and Sutherland, who described them as "undervalued resources". The term "perceptual trap" was first proposed by Michael Patten and Jeffrey Kelly in a 2010 article. Hans Van Dyck argues that the term is misleading because perception is also a major component in other cases of trapping. Description Animals use discrete environmental cues to select habitat. A perceptual trap occurs if change in an environmental cue leads an organism to avoid a high-quality habitat. It differs, therefore, from simple habitat avoidance, which may be a correct decision given the habitat's quality. The concept of a perceptual trap is related to that of an ecological trap, in which environmental change causes preference towards a low-quality habitat. There is expected to be strong natural selection against ecological traps, but not necessarily against perceptual traps, as Allee effects may restrict a population’s ability to establish itself. Examples To support the concept of a perceptual trap, Patten and Kelly cited a study of the lesser prairie chicken (Tympanuchus pallidicinctus). The species' natural environment, shinnery oak grassland, is often treated with the herbicide tebuthiuron to increase grass cover for cattle grazing. Herbicide treatment resulted in less shrub cover, a habitat cue that caused female lesser prairie-chickens to avoid the habitat in favour of untreated areas. However, females who nested in herbicide-treated areas achieved comparable nesting successes and clutch sizes to those in untreated areas. Patten and Kelly suggest that the adverse effects of tebuthiuron treatment on nesting success are countered by various effects, such as greater nest concealment through increased grass cover. Therefore, female birds are erroneously avoiding a high-quality habitat. Patten and Kelly also cited as a possible perceptual trap the cases of the spotted towhee (Pipilo maculatus) and rufous-crowned sparrow (Aimophila ruficeps), which tend to avoid habitat fragments, even though birds nesting in habitat fragments achieve increased nesting success due to a reduction in snake predation. See also Ecological trap Source–sink dynamics Type I and type II errors References Environmental terminology Biology terminology Environmental conservation Ecology Landscape ecology
Perceptual trap
Biology
576
67,371,207
https://en.wikipedia.org/wiki/Elly%20Schwab-Agallidis
Elly Schwab-Agallidis (born Elly Agallidis, , ; – ) was a Greek physicist/physical chemist and one of the first women in Greece to be awarded a PhD in the field. She was the wife of Georg-Maria Schwab, who met her in Munich as the supervisor of the experimental work for her doctoral thesis; the couple then worked together as researchers at the Kanellopoulos Institute after they emigrated to Greece. Her most famous work concerned the properties and reactivity of parahydrogen. Biography Elly Agallidis was born in 1914 to a middle-class family of Athens; she was the first child of Ioannis Agallidis and Maria-Edith Agallidis (née Zannou). She graduated with a degree in Physics from the University of Athens in 1934 and continued with postgraduate studies in the Physical Chemistry Laboratory of the University of Munich, then under the direction of Heinrich Otto Wieland. It was there that she met Georg-Maria Schwab, her future husband, who suggested that she examine parahydrogen and supervised her experimental work. Schwab was banned from teaching in Nazi Germany due to his half-Jewish origin. With the increasing fear of persecution, he decided in the late 1930s to emigrate to Elly's homeland, Greece, and Agallidis and Schwab married in Athens. Schwab-Agallidis was able to find work for both of them in the chemical laboratory of the Kanellopoulos Institute of Chemistry and Agriculture, where the couple collaborated on various topics of physico-chemical research for the next ten years (1939–1949). Among those topics, Schwab-Agallidis continued her work on the properties of parahydrogen, for which she received her PhD from the Department of Physics of the University of Athens in 1939, and she published multiple relevant papers in the following years. During the same period she also delivered lectures on Physical Chemistry at the University of Athens. After a difficult period for the couple during the Axis occupation of Greece and the resumption of their research after the liberation of Greece, the two scientists eventually moved to West Germany when Schwab was offered the Professorship of Physical Chemistry at the University of Munich in 1951. Elly Schwab-Agallidis died in Essen at the age of 92 in 2006. References Greek chemists Greek women chemists 20th-century Greek physicists Greek women physicists 1914 births 2006 deaths Physical chemists Greek emigrants to Germany Scientists from Athens
Elly Schwab-Agallidis
Chemistry
519
53,530,154
https://en.wikipedia.org/wiki/Aoussou
Aoussou () is the period of the year extending, according to the Berber calendar, over 40 days from 25 July. It is known to be a very hot period. Event In Tunisia, the Carnival of Aoussou is celebrated during this period, a festive and cultural event taking place in Sousse. References Berber culture Culture of Tunisia Weather lore
Aoussou
Physics
74
51,515,150
https://en.wikipedia.org/wiki/Terse
TERSE is an IBM archive file format that supports lossless compression. A TERSE file may contain a sequential data set, a partitioned data set (PDS), partitioned data set extended (PDSE), or a large format dataset (DSNTYPE=LARGE). Any record format (RECFM) is allowed as long as the record length is less than 32 K (64 K for RECFM=VBS). Records may contain printer control characters. Terse files are compressed using a modification of the Lempel–Ziv compression algorithm developed by Victor S. Miller and Mark Wegman at the Thomas J. Watson Research Center in Yorktown Heights, New York. The Terse algorithm was proprietary to IBM; however, IBM has released an open source Java decompressor under the Apache 2 license. The compression/decompression program (called terse and unterse)—AMATERSE or TRSMAIN—is available from IBM for z/OS; the z/VM equivalents are the TERSE and DETERSE commands, for sequential datasets only. Versions for PC DOS, OS/2, AIX, Windows (2000, XP, 2003), Linux, and Mac OS X are available online. AMATERSE The following JCL can be used to invoke AMATERSE on z/OS (TRSMAIN uses INFILE and OUTFILE instead of SYSUT1 and SYSUT2):
//jobname JOB ...
//stepname EXEC PGM=AMATERSE,PARM=ppppp
//SYSPRINT DD SYSOUT=*
//SYSUT1 DD DISP=SHR,DSN=input.dataset
//SYSUT2 DD DISP=(NEW,CATLG),DCB=ddd,DSN=output.dataset,
// SPACE=space_parameters
//SYSUT3 DD DISP=(NEW,DELETE),SPACE=space_parameters
SYSUT3 is an optional temporary dataset. Uses Terse can be used as a general-purpose compression/decompression tool. IBM also distributes downloadable Program Temporary Fixes (PTFs) as tersed datasets. Terse is also used by IBM customers to package diagnostic information such as z/OS dumps and traces, for transmission to IBM. References External links Terse PC versions at Vetusware IBM software Archive formats Data compression American inventions
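For readers unfamiliar with dictionary-based compression, the sketch below shows a textbook LZW encoder, included here only as a simple member of the Lempel–Ziv family mentioned above. It is not the Miller–Wegman modification actually used by Terse, which is a different (formerly proprietary) variant; the function name and test string are illustrative.

# Illustrative only: a plain LZW compressor from the Lempel-Ziv family.
# This is NOT the Miller-Wegman variant used by IBM Terse.

def lzw_compress(data: bytes) -> list[int]:
    """Return the sequence of dictionary codes produced for the input bytes."""
    dictionary = {bytes([i]): i for i in range(256)}  # start with all single bytes
    next_code = 256
    phrase = b""
    codes = []
    for value in data:
        candidate = phrase + bytes([value])
        if candidate in dictionary:
            phrase = candidate                 # keep extending the current phrase
        else:
            codes.append(dictionary[phrase])
            dictionary[candidate] = next_code  # learn the new phrase
            next_code += 1
            phrase = bytes([value])
    if phrase:
        codes.append(dictionary[phrase])
    return codes

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))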
Terse
Technology
514
53,362,546
https://en.wikipedia.org/wiki/Pockets%20Warhol
Pockets Warhol (born 1992) is a capuchin monkey, and one of 24 residents (as of 2023-08-03) at Story Book Farm Primate Sanctuary near Sunderland, Ontario, Canada. Pockets came to media attention in 2011 when the sanctuary held a fundraiser featuring 40 paintings by the monkey. Early life According to the sanctuary, Pockets was born on April 1, 1992, and lived his early life as a pet in British Columbia. In 2009, Pockets' owner was finding herself challenged to look after him, and searched for a place that could take him. On finding Story Book Farm, she flew herself and Pockets to Ontario, and stayed with Pockets for a week to get him comfortable in his new home. The former owner still keeps in touch with the sanctuary. Start as an artist Shortly after Pockets arrived at the sanctuary, one of the volunteers, Charmaine Quinn, gave Pockets his surname of Warhol because his white hair reminded her of Andy Warhol. This also prompted Quinn to give Pockets some children's paints to keep him busy. In December 2011, having accumulated 40 of Pockets' paintings, the sanctuary arranged an exhibition of the paintings at a Toronto diner, helping to raise funds for the sanctuary. The event was covered in the Toronto Star, which in turn triggered international media coverage in/on: CBC, Global News, the Huffington Post (USA), Maclean's magazine, and Vv Magazine. A few months later, Pockets paintings were made available for sale online. Art collaboration In September 2013, Brent Roe and Scott Cameron (aka Scotch Camera) joined an art show with Pockets Warhol at the Gladstone hotel in Toronto. In September 2014, MacLeans listed Pockets as the #8 top selling art animal in the world, based on the top price fetched for a single item. According to Quinn, Pockets' work has been featured in art shows as far away as Estonia, Finland, and Italy, and purchased online from as far away as Tasmania. In May 2016, Anita Kunz visited Pockets at the sanctuary, and subsequently donated one of her own paintings for Pockets to 'enhance'. Ms. Kunz later organized an art show with 80 other artists as a new fundraiser for the sanctuary, held at The Papermill Gallery, Todmorden Mills from April 6–16, 2017. Other participants in this collaboration included: Barry Blitt, Marc Burckhardt, Cynthia von Buhler, Seymour Chwast, Sue Coe, Yuri Dojc, Louis Fishauf, Jill Greenberg, Terry Mosher, Tim O'Brien, Ralph Steadman, Ann Telnaes and Martin Wittfooth. Celebrity interactions In April 2012, sanctuary volunteers Charmaine Quinn and Izzy Hirji presented Jane Goodall with a photo of Pockets and a painting by Pockets for her birthday at the Jane Goodall Institute in Toronto. In March 2015, the sanctuary sent a painting by Pockets to Ricky Gervais and Jane Fallon as a 'Thank you' for their support of animal rights. In June 2015, Ricky Gervais tweeted that he was donating an acoustic guitar to the sanctuary, with mention of Pockets Warhol. After his performance in Toronto in September 2015, Gervais donated the guitar he used there, which subsequently raised US$4,150 in an online auction. The winning bidder lives in the United Kingdom. As of February 15, 2019, the guitar was up for auction again having been signed by several other celebrities: Brian May, Peter Frampton, Will Ferrell, Bryan Cranston, Dhani Harrison, Ricky Warwick, Steve Cutts. This time the proceeds were split between Story Book Farm Primate Sanctuary and Brian May's Save Me organization. 
In 2020, Martin Gore of Depeche Mode commissioned artwork by Pockets to be used as the cover art for his EP The Third Chimpanzee. The artwork is also featured in the accompanying music videos. Martin Gore discussed this collaboration in an interview with Rolling Stone magazine on 2021-01-27. The EP was released by Mute Records on 2021-01-29. One track, Mandrill, was released early on 2020-11-17. A second track, Howler, was released on 2021-01-07. See also Animal-made art Congo (chimpanzee) Darwin (monkey) List of individual monkeys Pierre Brassau References External links Pockets Warhol Art Gallery Visual arts by animals 1992 animal births Art by primates Canadian male painters Individual monkeys
Pockets Warhol
Biology
915
15,380,061
https://en.wikipedia.org/wiki/Water%20scarcity
Water scarcity (closely related to water stress or water crisis) is the lack of fresh water resources to meet the standard water demand. There are two types of water scarcity. One is physical. The other is economic water scarcity. Physical water scarcity is where there is not enough water to meet all demands. This includes water needed for ecosystems to function. Regions with a desert climate often face physical water scarcity. Central Asia, West Asia, and North Africa are examples of arid areas. Economic water scarcity results from a lack of investment in infrastructure or technology to draw water from rivers, aquifers, or other water sources. It also results from weak human capacity to meet water demand. Many people in Sub-Saharan Africa are living with economic water scarcity. There is enough freshwater available globally and averaged over the year to meet demand. As such, water scarcity is caused by a mismatch between when and where people need water, and when and where it is available. This can happen due to an increase in the number of people in a region, changing living conditions and diets, and expansion of irrigated agriculture. Climate change (including droughts or floods), deforestation, water pollution and wasteful use of water can also mean there is not enough water. These variations in scarcity may also be a function of prevailing economic policy and planning approaches. Water scarcity assessments look at many types of information. They include green water (soil moisture), water quality, environmental flow requirements, and virtual water trade. Water stress is one parameter to measure water scarcity. It is useful in the context of Sustainable Development Goal 6. Half a billion people live in areas with severe water scarcity throughout the year, and around four billion people face severe water scarcity at least one month per year. Half of the world's largest cities experience water scarcity. There are 2.3 billion people who reside in nations with water scarcities (meaning less than 1700 m3 of water per person per year). There are different ways to reduce water scarcity. It can be done through supply and demand side management, cooperation between countries and water conservation. Expanding sources of usable water can help. Reusing wastewater and desalination are ways to do this. Others are reducing water pollution and changes to the virtual water trade. Definitions Water scarcity has been defined as the "volumetric abundance, or lack thereof, of freshwater resources" and it is thought to be "human-driven". This can also be called "physical water scarcity". There are two types of water scarcity. One is physical water scarcity and the other is economic water scarcity. Some definitions of water scarcity look at environmental water requirements. This approach varies from one organization to another. Related concepts are water stress and water risk. The CEO Water Mandate, an initiative of the UN Global Compact, proposed to harmonize these in 2014. In their discussion paper they state that these three terms should not be used interchangeably. Some organizations define water stress as a broader concept. It would include aspects of water availability, water quality and accessibility. Accessibility depends on existing infrastructure. It also depends on whether customers can afford to pay for the water. Some experts call this economic water scarcity. The FAO defines water stress as the "symptoms of water scarcity or shortage". 
Such symptoms could be "growing conflict between users, and competition for water, declining standards of reliability and service, harvest failures and food insecurity". This is measured with a range of Water Stress Indices. A group of scientists provided another definition for water stress in 2016: "Water stress refers to the impact of high water use (either withdrawals or consumption) relative to water availability." This means water stress would be a demand-driven scarcity. Types Experts have defined two types of water scarcity. One is physical water scarcity. The other is economic water scarcity. These terms were first defined in a 2007 study led by the International Water Management Institute. This examined the use of water in agriculture over the previous 50 years. It aimed to find out if the world had sufficient water resources to produce food for the growing population in the future. Physical water scarcity Physical water scarcity occurs when natural water resources are not enough to meet all demands. This includes water needed for ecosystems to function well. Dry regions often suffer from physical water scarcity. Human influence on climate has intensified water scarcity in areas where it was already a problem. It also occurs where water seems abundant but where resources are over-committed. One example is overdevelopment of hydraulic infrastructure. This can be for irrigation or energy generation. There are several symptoms of physical water scarcity. They include severe environmental degradation, declining groundwater and water allocations favouring some groups over others. Experts have proposed another indicator. This is called ecological water scarcity. It considers water quantity, water quality, and environmental flow requirements. Water is scarce in densely populated arid areas. These are projected to have less than 1000 cubic meters available per capita per year. Examples are Central and West Asia, and North Africa). A study in 2007 found that more than 1.2 billion people live in areas of physical water scarcity. This water scarcity relates to water available for food production, rather than for drinking water which is a much smaller amount. Some academics favour adding a third type which would be called ecological water scarcity. It would focus on the water demand of ecosystems. It would refer to the minimum quantity and quality of water discharge needed to maintain sustainable and functional ecosystems. Some publications argue that this is simply part of the definition of physical water scarcity. Economic water scarcity Economic water scarcity is due to a lack of investment in infrastructure or technology to draw water from rivers, aquifers, or other water sources. It also reflects insufficient human capacity to meet the demand for water. It causes people without reliable water access to travel long distances to fetch water for household and agricultural uses. Such water is often unclean. The United Nations Development Programme says economic water scarcity is the most common cause of water scarcity. This is because most countries or regions have enough water to meet household, industrial, agricultural, and environmental needs. But they lack the means to provide it in an accessible manner. Around a fifth of the world's population currently live in regions affected by physical water scarcity. A quarter of the world's population is affected by economic water scarcity. It is a feature of much of Sub-Saharan Africa. 
So better water infrastructure there could help to reduce poverty. Investing in water retention and irrigation infrastructure would help increase food production. This is especially the case for developing countries that rely on low-yield agriculture. Providing water that is adequate for consumption would also benefit public health. This is not only a question of new infrastructure. Economic and political intervention are necessary to tackle poverty and social inequality. The lack of funding means there is a need for planning. The emphasis is usually on improving water sources for drinking and domestic purposes. But more water is used for purposes such as bathing, laundry, livestock and cleaning than drinking and cooking. This suggests that too much emphasis on drinking water addresses only part of the problem. So it can limit the range of solutions available. Challenges Simple indicators There are several indicators for measuring water scarcity. One is the water use to availability ratio. This is also known as the criticality ratio. Another is the IWMI Indicator. This measures physical and economic water scarcity. Another is the water poverty index. "Water stress" is a criterion to measure water scarcity. Experts use it in the context of Sustainable Development Goal 6. A report by the FAO in 2018 provided a definition of water stress. It described it as "the ratio between total freshwater withdrawn (TFWW) by all major sectors and total renewable freshwater resources (TRWR), after taking into account environmental flow requirements (EFR)". This means that the value for TFWW is divided by the difference between TRWR minus EFR. Environmental flows are water flows required to sustain freshwater and estuarine ecosystems. A previous definition in Millennium Development Goal 7, target 7.A, was simply the proportion of total water resources used, without taking EFR into consideration. This definition sets out several categories for water stress. Below 10% is low stress; 10-20% is low-to-medium; 20-40% medium-to-high; 40-80% high; above 80% very high. Indicators are used to measure the extent of water scarcity. One way to measure water scarcity is to calculate the amount of water resources available per person each year. One example is the "Falkenmark Water Stress Indicator". This was developed by Malin Falkenmark. This indicator says a country or region experiences "water stress" when annual water supplies drop below 1,700 cubic meters per person per year. Levels between 1,700 and 1,000 cubic meters will lead to periodic or limited water shortages. When water supplies drop below 1,000 cubic meters per person per year the country faces "water scarcity". However, the Falkenmark Water Stress Indicator does not help to explain the true nature of water scarcity. Renewable freshwater resources It is also possible to measure water scarcity by looking at renewable freshwater. Experts use it when evaluating water scarcity. This metric can describe the total available water resources each country contains. This total available water resource gives an idea of whether a country tend to experience physical water scarcity. This metric has a drawback because it is an average. Precipitation delivers water unevenly across the planet each year. So annual renewable water resources vary from year to year. This metric does not describe how easy it is for individuals, households, industries or government to access water. Lastly this metric gives a description of a whole country. 
So it does not accurately portray whether a country is experiencing water scarcity. For example, Canada and Brazil both have very high levels of available water supply. But they still face various water-related problems. Some tropical countries in Asia and Africa have low levels of freshwater resources. More sophisticated indicators Water scarcity assessments must include several types of information. They include data on green water (soil moisture), water quality, environmental flow requirements, globalisation, and virtual water trade. Since the early 2000s, water scarcity assessments have used more complex models. These benefit from spatial analysis tools. Green-blue water scarcity is one of these. Footprint-based water scarcity assessment is another. Another is cumulative abstraction to demand ratio, which considers temporal variations. Further examples are LCA-based water stress indicators and integrated water quantity–quality environment flow. Since the early 2010s assessments have looked at water scarcity from both quantity and quality perspectives. Experts have proposed a further indicator. This is called ecological water scarcity. It considers water quantity, water quality, and environmental flow requirements. Results from a modelling study in 2022 show that northern China suffered more severe ecological water scarcity than southern China. The driving factor of ecological water scarcity in most provinces was water pollution rather than human water use. A successful assessment will bring together experts from several scientific discipline. These include the hydrological, water quality, aquatic ecosystem science, and social science communities. Available water The United Nations estimates that only 200,000 cubic kilometers of the total 1.4 billion cubic kilometers of water on Earth is freshwater available for human consumption. A mere 0.014% of all water on Earth is both fresh and easily accessible. Of the remaining water, 97% is saline, and a little less than 3% is difficult to access. The fresh water available to us on the planet is around 1% of the total water on earth. The total amount of easily accessible freshwater on Earth is 14,000 cubic kilometers. This takes the form of surface water such as rivers and lakes or groundwater, for example in aquifers. Of this total amount, humanity uses and resuses just 5,000 cubic kilometers. Technically, there is a sufficient amount of freshwater on a global scale. So in theory there is more than enough freshwater available to meet the demands of the current world population of 8 billion people. There is even enough to support population growth to 9 billion or more. But unequal geographical distribution and unequal consumption of water makes it a scarce resource in some regions and groups of people. Rivers and lakes provide common surface sources of freshwater. But other water resources such as groundwater and glaciers have become more developed sources of freshwater. They have become the main source of clean water. Groundwater is water that has pooled below the surface of the Earth. It can provide a usable quantity of water through springs or wells. These areas of groundwater are also known as aquifers. It is becoming harder to use conventional sources because of pollution and climate change. So people are drawing more and more on these other sources. Population growth is encouraging greater use of these types of water resources. 
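To make the indicators discussed above concrete, the sketch below computes the SDG 6.4.2-style water-stress ratio (withdrawals divided by renewable resources minus environmental flow requirements) and applies the Falkenmark per-capita thresholds quoted earlier. The country figures used are hypothetical and are not drawn from the article.

# Illustrative sketch of the two water-scarcity indicators described above.
# All input figures are hypothetical.

def water_stress_percent(withdrawals_km3: float, renewable_km3: float, environmental_flow_km3: float) -> float:
    """SDG 6.4.2-style water stress: TFWW / (TRWR - EFR), as a percentage."""
    return 100.0 * withdrawals_km3 / (renewable_km3 - environmental_flow_km3)

def stress_category(stress_pct: float) -> str:
    if stress_pct < 10:
        return "low"
    if stress_pct < 20:
        return "low-to-medium"
    if stress_pct < 40:
        return "medium-to-high"
    if stress_pct < 80:
        return "high"
    return "very high"

def falkenmark_category(m3_per_person_per_year: float) -> str:
    """Falkenmark indicator thresholds (1,700 and 1,000 cubic metres per person per year)."""
    if m3_per_person_per_year < 1000:
        return "water scarcity"
    if m3_per_person_per_year < 1700:
        return "water stress"
    return "no stress"

# Hypothetical country: 40 km3 withdrawn, 120 km3 renewable, 30 km3 reserved
# for environmental flows, population of 25 million.
stress = water_stress_percent(40, 120, 30)
per_capita = 120e9 / 25e6  # cubic metres per person per year
print(f"{stress:.1f}% -> {stress_category(stress)}")                            # 44.4% -> high
print(f"{per_capita:.0f} m3/person/year -> {falkenmark_category(per_capita)}")  # 4800 -> no stress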
Scale Current estimates In 2019 the World Economic Forum listed water scarcity as one of the largest global risks in terms of potential impact over the next decade. Water scarcity can take several forms. One is a failure to meet demand for water, partially or totally. Other examples are economic competition for water quantity or quality, disputes between users, irreversible depletion of groundwater, and negative impacts on the environment. About half of the world's population currently experience severe water scarcity for at least some part of the year. Half a billion people in the world face severe water scarcity all year round. Half of the world's largest cities experience water scarcity. Almost two billion people do not currently have access to clean drinking water. A study in 2016 calculated that the number of people suffering from water scarcity increased from 0.24 billion or 14% of global population in the 1900s to 3.8 billion (58%) in the 2000s. This study used two concepts to analyse water scarcity. One is shortage, or impacts due to low availability per capita. The other is stress, or impacts due to high consumption relative to availability. Future predictions In the 20th century, water use has been growing at more than twice the rate of the population increase. Specifically, water withdrawals are likely to rise by 50 percent by 2025 in developing countries, and 18 per cent in developed countries. One continent, for example, Africa, has been predicted to have 75 to 250 million inhabitants lacking access to fresh water. By 2025, 1.8 billion people will be living in countries or regions with absolute water scarcity, and two-thirds of the world population could be under stress conditions. By 2050, more than half of the world's population will live in water-stressed areas, and another billion may lack sufficient water, MIT researchers find. With the increase in global temperatures and an increase in water demand, six out of ten people are at risk of being water-stressed. The drying out of wetlands globally, at around 67%, was a direct cause of a large number of people at risk of water stress. As global demand for water increases and temperatures rise, it is likely that two thirds of the population will live under water stress in 2025. According to a projection by the United Nations, by 2040, there can be about 4.5 billion people affected by a water crisis (or water scarcity). Additionally, with the increase in population, there will be a demand for food, and for the food output to match the population growth, there would be an increased demand for water to irrigate crops. The World Economic Forum estimates that global water demand will surpass global supply by 40% by 2030. Increasing the water demand as well as increasing the population results in a water crisis where there is not enough water to share in healthy levels. The crises are not only due to quantity but quality also matters. A study found that 6-20% of about 39 million groundwater wells are at high risk of running dry if local groundwater levels decline by a few meters. In many areas and with possibly more than half of major aquifers this would apply if they simply continue to decline. Impacts Water supply shortages Controllable factors such as the management and distribution of the water supply can contribute to scarcity. A 2006 United Nations report focuses on issues of governance as the core of the water crisis. The report noted that: "There is enough water for everyone". 
It also said: "Water insufficiency is often due to mismanagement, corruption, lack of appropriate institutions, bureaucratic inertia and a shortage of investment in both human capacity and physical infrastructure". Economists and others have argued that a lack of property rights, government regulations and water subsidies have given rise to the situation with water. These factors cause prices to be too low and consumption too high, making a point for water privatization. The clean water crisis is an emerging global crisis affecting approximately 785 million people around the world. 1.1 billion people lack access to water and 2.7 billion experience water scarcity at least one month in a year. 2.4 billion people suffer from contaminated water and poor sanitation. Contamination of water can lead to deadly diarrheal diseases such as cholera and typhoid fever and other waterborne diseases. These account for 80% of illnesses around the world. Environment Using water for domestic, food and industrial uses has major impacts on ecosystems in many parts of the world. This can apply even to regions not considered "water scarce". Water scarcity damages the environment in many ways. These include adverse effects on lakes, rivers, ponds, wetlands and other fresh water resources. Thus results in water overuse because water is scarce. This often occurs in areas of irrigation agriculture. It can harm the environment in several ways. This includes increased salinity, nutrient pollution, and the loss of floodplains and wetlands. Water scarcity also makes it harder to use flow to rehabilitate urban streams. Through the last hundred years, more than half of the Earth's wetlands have been destroyed and have disappeared. These wetlands are important as the habitats of numerous creatures such as mammals, birds, fish, amphibians, and invertebrates. They also support the growing of rice and other food crops. And they provide water filtration and protection from storms and flooding. Freshwater lakes such as the Aral Sea in central Asia have also suffered. It was once the fourth largest freshwater lake in the world. But it has lost more than 58,000 square km of area and vastly increased in salt concentration over the span of three decades. Subsidence is another result of water scarcity. The U.S. Geological Survey estimates that subsidence has affected more than 17,000 square miles in 45 U.S. states, 80 percent of it due to groundwater usage. Vegetation and wildlife need sufficient freshwater. Marshes, bogs and riparian zones are more clearly dependent upon sustainable water supply. Forests and other upland ecosystems are equally at risk as water becomes less available. In the case of wetlands, a lot of ground has been simply taken from wildlife use to feed and house the expanding human population. Other areas have also suffered from a gradual fall in freshwater inflow as upstream water is diverted for human use. Potential for conflict Other impacts include growing conflict between users and growing competition for water. Examples for the potential for conflict from water scarcity include: Food insecurity in the Middle East and North Africa Region and regional conflicts over scarce water resources. Causes and contributing factors Population growth Around fifty years ago, the common view was that water was an infinite resource. At that time, there were fewer than half the current number of people on the planet. 
People were not as wealthy as today, consumed fewer calories and ate less meat, so less water was needed to produce their food. They required a third of the volume of water we presently take from rivers. Today, the competition for water resources is much more intense. This is because there are now seven billion people on the planet and their consumption of water-thirsty meat is rising. And industry, urbanization, biofuel crops, and water reliant food items are competing more and more for water. In the future, even more water will be needed to produce food because the Earth's population is forecast to rise to 9 billion by 2050. In 2000, the world population was 6.2 billion. The UN estimates that by 2050 there will be an additional 3.5 billion people, with most of the growth in developing countries that already suffer water stress. This will increase demand for water unless there are corresponding increases in water conservation and recycling. In building on the data presented here by the UN, the World Bank goes on to explain that access to water for producing food will be one of the main challenges in the decades to come. It will be necessary to balance access to water with managing water in a sustainable way. At the same time it will be necessary to take the impact of climate change and other environmental and social variables into account. In 60% of European cities with more than 100,000 people, groundwater is being used at a faster rate than it can be replenished. Over-exploitation of groundwater The increase in the number of people is increasing competition for water. This is depleting many of the world's major aquifers. It has two causes. One is direct human consumption. The other is agricultural irrigation. Millions of pumps of all sizes are currently extracting groundwater throughout the world. Irrigation in dry areas such as northern China, Nepal and India draws on groundwater. And it is extracting groundwater at an unsustainable rate. Many cities have experienced aquifer drops of between 10 and 50 meters. They include Mexico City, Bangkok, Beijing, Chennai and Shanghai. Until recently, groundwater was not a highly used resource. In the 1960s, more and more groundwater aquifers developed. Improved knowledge, technology and funding have made it possible to focus more on drawing water from groundwater resources instead of surface water. These made the agricultural groundwater revolution possible. They expanded the irrigation sector which made it possible to increase food production and development in rural areas. Groundwater supplies nearly half of all drinking water in the world. The large volumes of water stored underground in most aquifers have a considerable buffer capacity. This makes it possible to withdraw water during periods of drought or little rainfall. This is crucial for people that live in regions that cannot depend on precipitation or surface water for their only supplies. It provides reliable access to water all year round. As of 2010, the world's aggregated groundwater abstraction is estimated at 1,000 km3 per year. Of this 67% goes on irrigation, 22% on domestic purposes and 11% on industrial purposes. The top ten major consumers of abstracted water make up 72% of all abstracted water use worldwide. They are India, China, United States of America, Pakistan, Iran, Bangladesh, Mexico, Saudi Arabia, Indonesia, and Italy. Goundwater sources are quite plentiful. But one major area of concern is the renewal or recharge rate of some groundwater sources. 
Extracting from non-renewable groundwater sources could exhaust them if they are not properly monitored and managed. Increasing use of groundwater can also reduce water quality over time. Groundwater systems often show falls in natural outflows, stored volumes, and water levels as well as water degradation. Groundwater depletion can cause harm in many ways. These include more costly groundwater pumping and changes in salinity and other types of water quality. They can also lead to land subsidence, degraded springs and reduced baseflows. Expansion of agricultural and industrial users The main cause of water scarcity as a result of consumption is the extensive use of water in agriculture/livestock breeding and industry. People in developed countries generally use about 10 times more water a day than people in developing countries. A large part of this is indirect use in water-intensive agricultural and industrial production of consumer goods. Examples are fruit, oilseed crops and cotton. Many of these production chains are globalized, so a lot of water consumption and pollution in developing countries occurs to produce goods for consumption in developed countries. Many aquifers have been over-pumped and are not recharging quickly. This does not use up the total fresh water supply. But it means that much has become polluted, salted, unsuitable or otherwise unavailable for drinking, industry and agriculture. To avoid a global water crisis, farmers will have to increase productivity to meet growing demands for food. At the same time, industry and cities will have to find ways to use water more efficiently. Business activities such as tourism are continuing to expand. They create a need for increases in water supply and sanitation. This in turn can lead to more pressure on water resources and natural ecosystems. The approximately 50% growth in world energy use by 2040 will also increase the need for efficient water use. It may mean some water use shifts from irrigation to industry. This is because thermal power generation uses water for steam generation and cooling. Water pollution Climate change Climate change could have a big impact on water resources around the world because of the close connections between the climate and the hydrological cycle. Rising temperatures will increase evaporation and lead to increases in precipitation. However there will be regional variations in rainfall. Both droughts and floods may become more frequent and more severe in different regions at different times. There will be generally less snowfall and more rainfall in a warmer climate. Changes in snowfall and snow melt in mountainous areas will also take place. Higher temperatures will also affect water quality in ways that scientists do not fully understand. Possible impacts include increased eutrophication. Climate change could also boost demand for irrigation systems in agriculture. There is now ample evidence that greater hydrologic variability and climate change have had a profound impact on the water sector, and will continue to do so. This will show up in the hydrologic cycle, water availability, water demand, and water allocation at the global, regional, basin, and local levels. The United Nations' FAO states that by 2025 1.9 billion people will live in countries or regions with absolute water scarcity. It says two thirds of the world's population could be under stress conditions. The World Bank says that climate change could profoundly alter future patterns of water availability and use.
This will make water stress and insecurity worse, at the global level and in sectors that depend on water. Scientists have found that population change is four time more important than long-term climate change in its effects on water scarcity. Retreat of mountain glaciers Options for improvements Supply and demand side management A review in 2006 stated that "It is surprisingly difficult to determine whether water is truly scarce in the physical sense at a global scale (a supply problem) or whether it is available but should be used better (a demand problem)". The International Resource Panel of the UN states that governments have invested heavily in inefficient solutions. These are mega-projects like dams, canals, aqueducts, pipelines and water reservoirs. They are generally neither environmentally sustainable nor economically viable. According to the panel, the most cost-effective way of decoupling water use from economic growth is for governments to create holistic water management plans. These would take into account the entire water cycle: from source to distribution, economic use, treatment, recycling, reuse and return to the environment. In general, there is enough water on an annual and global scale. The issue is more of variation of supply by time and by region. Reservoirs and pipelines would deal with this variable water supply. Well-planned infrastructure with demand side management is necessary. Both supply-side and demand-side management have advantages and disadvantages. Co-operation between countries Lack of cooperation may give rise to regional water conflicts. This is especially the case in developing countries. The main reason is disputes regarding the availability, use and management of water. One example is the dispute between Egypt and Ethiopia over the Grand Ethiopian Renaissance Dam which escalated in 2020. Egypt sees the dam as an existential threat, fearing that the dam will reduce the amount of water it receives from the Nile. Water conservation Expanding sources of usable water Wastewater treatment and reclaimed water Desalination Virtual water trade Regional examples Overview of regions The Consultative Group on International Agricultural Research (CGIAR) published a map showing the countries and regions suffering most water stress. They are North Africa, the Middle East, India, Central Asia, China, Chile, Colombia, South Africa, Canada and Australia. Water scarcity is also increasing in South Asia. As of 2016, about four billion people, or two thirds of the world's population, were facing severe water scarcity. The more developed countries of North America, Europe and Russia will not see a serious threat to water supply by 2025 in general. This is not only because of their relative wealth. Their populations will also be more in line with available water resources. North Africa, the Middle East, South Africa and northern China will face very severe water shortages. This is due to physical scarcity and too many people for the water that is available. Most of South America, Sub-Saharan Africa, southern China and India will face water supply shortages by 2025. For these regions, scarcity will be due to economic constraints on developing safe drinking water, and excessive population growth. Africa West Africa and North Africa Water scarcity in Yemen (see: Water supply and sanitation in Yemen) is a growing problem. Population growth and climate change are among the causes. 
Others are poor water management, shifts in rainfall, water infrastructure deterioration, poor governance, and other anthropogenic effects. As of 2011, water scarcity is having political, economic and social impacts in Yemen. As of 2015, Yemen is one of the countries suffering most from water scarcity. Most people in Yemen experience water scarcity for at least one month a year. In Nigeria, some reports have suggested that increase in extreme heat, drought and the shrinking of Lake Chad is causing water shortage and environmental migration. This is forcing thousands to migrate to neighboring Chad and towns. Asia A major report in 2019 by more than 200 researchers, found that the Himalayan glaciers could lose 66 percent of their ice by 2100. These glaciers are the sources of Asia's biggest rivers – Ganges, Indus, Brahmaputra, Yangtze, Mekong, Salween and Yellow. Approximately 2.4 billion people live in the drainage basin of the Himalayan rivers. India, China, Pakistan, Bangladesh, Nepal and Myanmar could experience floods followed by droughts in coming decades. In India alone, the Ganges provides water for drinking and farming for more than 500 million people. Even with the overpumping of its aquifers, China is developing a grain deficit. When this happens, it will almost certainly drive grain prices upward. Most of the 3 billion people projected to be added worldwide by mid-century will be born in countries already experiencing water shortages. Unless population growth can be slowed quickly, it is feared that there may not be a practical non-violent or humane solution to the emerging world water shortage. It is highly likely that climate change in Turkey will cause its southern river basins to be water scarce before 2070, and increasing drought in Turkey. America In the Rio Grande Valley, intensive agribusiness has made water scarcity worse. It has sparked jurisdictional disputes regarding water rights on both sides of the U.S.-Mexico border. Scholars such as Mexico's Armand Peschard-Sverdrup have argued that this tension has created the need for new strategic transnational water management. Some have likened the disputes to a war over diminishing natural resources. The west coast of North America, which gets much of its water from glaciers in mountain ranges such as the Rocky Mountains and Sierra Nevada, is also vulnerable. Australia By far the largest part of Australia is desert or semi-arid lands commonly known as the outback. Water restrictions are in place in many regions and cities of Australia in response to chronic shortages resulting from drought. Environmentalist Tim Flannery predicted that Perth in Western Australia could become the world's first ghost metropolis. This would mean it was an abandoned city with no more water to sustain its population, said Flannery, who was Australian of the year 2007. In 2010, Perth suffered its second-driest winter on record and the water corporation tightened water restrictions for spring. Some countries have already proven that decoupling water use from economic growth is possible. For example, in Australia, water consumption declined by 40% between 2001 and 2009 while the economy grew by more than 30%. By country Water scarcity or water crisis in particular countries: Society and culture Global goals Sustainable Development Goal 6 aims for clean water and sanitation for all. It is one of 17 Sustainable Development Goals established by the United Nations General Assembly in 2015. The fourth target of SDG 6 refers to water scarcity. 
It states: "By 2030, substantially increase water-use efficiency across all sectors and ensure sustainable withdrawals and supply of freshwater to address water scarcity and substantially reduce the number of people suffering from water scarcity". See also References External links The World Bank's work and publications on water resources Climate change adaptation Environmental economics Environmental issues with water Global natural environment Risk management Water Water supply Water treatment Human impact on the environment
Water scarcity
Chemistry,Engineering,Environmental_science
6,793
10,548,542
https://en.wikipedia.org/wiki/Epichlo%C3%AB%20melicicola
Epichloë melicicola is a systemic and seed-transmissible endophyte of Melica dendroides (syn. Melica decumbens ) and Melica racemosa, grasses endemic to southern Africa. It was described as a Neotyphodium species in 2002 but transferred to the genus Epichloë in 2014. The two host plant species are locally called "dronkgras" because they can cause staggers in grazing livestock. Similar staggers symptoms are associated with several other grasses worldwide when they possess certain symbiotic Neotyphodium species that produce indole-diterpene alkaloids such as lolitrems. Molecular phylogenetic analysis indicates that E. melicicola is an interspecific hybrid, and that its closest relatives are the teleomorphic (sexual) species, Epichloë festucae, and the anamorphic (asexual) species, Epichloë aotearoae. References melicicola Fungi described in 2002 Fungi of Africa Fungus species
Epichloë melicicola
Biology
215
47,330,617
https://en.wikipedia.org/wiki/Suillus%20acerbus
Suillus acerbus is a species of bolete fungus in the family Suillaceae. It was first described scientifically by American mycologists Alexander H. Smith and Harry D. Thiers in 1964. See also List of North American boletes References External links acerbus Fungi described in 1964 Fungi of North America Fungus species
Suillus acerbus
Biology
71
12,438,341
https://en.wikipedia.org/wiki/EDGE%20of%20Existence%20programme
The EDGE of Existence programme is a research and conservation initiative that focuses on species deemed to be the world's most Evolutionarily Distinct and Globally Endangered (EDGE) species. Developed by the Zoological Society of London (ZSL), the programme aims to raise awareness of these species, implement targeted research and conservation actions to halt their decline, and to train in-country conservationists (called EDGE Fellows) to protect them. EDGE species are animal species which have a high 'EDGE score', a metric combining endangered conservation status with the genetic distinctiveness of the particular taxon. Distinctive species have few closely related species, and EDGE species are often the only surviving member of their genus or even higher taxonomic rank. The extinction of such species would therefore represent a disproportionate loss of unique evolutionary history and biodiversity. The EDGE logo is the echidna. Some EDGE species, such as elephants and pandas, are well-known and already receive considerable conservation attention, but many others, such as the vaquita (the world's rarest cetacean) the bumblebee bat (arguably the world's smallest mammal) and the egg-laying long-beaked echidnas, are highly threatened yet remain poorly understood, and are frequently overlooked by existing conservation frameworks. The Zoological Society of London launched the EDGE of Existence Programme in 2007 to raise awareness and funds for the conservation of these species. As of 2024, the programme has awarded fellows funds to help conserve 157 different species in 47 countries. The programme lists key supporters as the Fondation Franklinia, On the EDGE, and Darwin Initiative. Donors include the IUCN, US Fish and Wildlife Service, and numerous non-governmental organisations and foundations. In 2024, researchers at the programme identified EDGE Zones that make up 0.7% of Earth's surface but are home to one-third of the world's four-legged EDGE species. Conserving EDGE species The EDGE of Existence programme is centred on an interactive website that features information on the top 100 EDGE mammals, reptiles, birds, amphibians and top 25 EDGE corals, detailing their specific conservation requirements. Each of the top 100 species is given an 'EDGE-ometer' rating according to the degree of conservation attention they are currently receiving, as well as its perceived rarity in its natural environment. 70% of the mammals which have been chosen are receiving little or no conservation attention according to the inventors. EDGE Fellows EDGE research and conservation is carried out by ZSL researchers, a network of partner organizations and local scientists. An important part of the EDGE programme is a fellowship scheme which provides funding and support to local scientists. EDGE Fellows participate in all phases of a research project. Each project is focused on delivering a conservation action plan. Once the action plan is completed, a meeting is held to make additions and corrections to the document. Calculating EDGE Scores ED Some species are more distinct than others because they represent a larger amount of unique evolution. Species like the aardvark have few close relatives and have been evolving independently for many millions of years. Others like the domestic dog originated only recently and have many close relatives. Species uniqueness can be measured as an 'Evolutionary Distinctiveness' (ED) score, using a phylogeny, or evolutionary tree. 
ED scores are calculated relative to a clade of species descended from a common ancestor. The three clades for which the EDGE of Existence Programme has calculated scores are all classes, namely mammals, amphibians, and corals. The phylogenetic tree has the most recent common ancestor at the root, all the current species as the leaves, and intermediate nodes at each point of branching divergence. The branches are divided into segments (between one node and another node, a leaf, or the root). Each segment is assigned an ED score defined as the timespan it covers (in millions of years) divided by the number of species at the end of the subtree it forms. The ED of a species is the sum of the ED of the segments connecting it to the root. Thus, a long branch which produces few species will have a high ED, as the corresponding species are relatively distinctive, with few close relatives. ED metrics are not exact, because of uncertainties in both the ordering of nodes and the length of segments. GE GE is a number corresponding to a species' conservation status according to the International Union for Conservation of Nature with more endangered species having a higher GE: EDGE The EDGE score of a species is derived from its scores for Evolutionary Distinctness (ED) and for Globally Endangered status (GE) as follows: This means that a doubling in ED affects the EDGE score almost as much as increasing the threat level by one (e.g. from 'vulnerable' to 'endangered'). EDGE scores are an estimate of the expected loss of evolutionary history per unit time. EDGE species are species which have an above average ED score and are threatened with extinction (critically endangered, endangered or vulnerable). There are currently 564 EDGE mammal species (≈12% of the total). Potential EDGE species are those with high ED scores but whose conservation status is unclear (data deficient or not evaluated). Focal species Focal species are typically selected from the priority EDGE species —the top 100 amphibians, birds, mammals and reptiles, top 50 sharks and rays, and top 25 corals— however, they also prioritise species outside these rankings. Such species can also have a very high ED but fall outside the top 100 EDGE rankings. These species are conserved by 'EDGE Fellows', who collect data on these species and develop conservation action plans. Fellows have previously collaborated with institutions like National Geographic and The Disney Conservation Fund. Top 3 ranked species in each taxonomic group, as of September 2024: Amphibians Archey's frog Chinese giant salamander Purple frog Birds Plains-wanderer Giant ibis New Caledonian owlet-nightjar Corals Siderastrea glynni Poritipora paliformis Moseleya latistellata Mammals Mountain pygmy possum Aye-aye Leadbeater's possum Reptiles Madagascar big-headed turtle Central American river turtle Pig-nosed turtle Sharks and Rays Largetooth sawfish Smalltooth sawfish and green sawfish (tie) The species with an EDGE score of 20 or higher are the mountain pygmy possum (25.1) and aye-aye (20.1). Only mammals have and EDGE score of 8 or higher. The non-mammal species with the highest EDGE score is the largetooth sawfish (7.4). The species with the highest ED scores are the pig-nosed turtle (149.7) and the narrow sawfish (125.1). Examples of Critically Endangered species with very low ED scores are porites pukoensis, mountainous star coral, and the magenta petrel. References External links EDGE of Existence website ZSL−Zoological Society of London website . 
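The scoring arithmetic described in the Calculating EDGE Scores section can be illustrated with a short, self-contained sketch. In the example below, ED is computed exactly as described — each branch segment contributes its length in millions of years divided by the number of species descended from it, summed along the path from the species to the root — and the combination with the Red List category uses the formula originally published for the programme (Isaac and colleagues, 2007), EDGE = ln(1 + ED) + GE × ln(2), which reproduces the property noted above that doubling ED has roughly the same effect as raising the threat level by one category. The toy phylogeny, its branch lengths and the species names are invented purely for illustration.

```python
import math

# Toy phylogeny: each entry is node -> (parent, branch length in millions of years).
# The tree and its branch lengths are invented purely to illustrate the calculation.
tree = {
    "root":      (None, 0.0),
    "ancestorA": ("root", 20.0),
    "speciesX":  ("ancestorA", 10.0),
    "speciesY":  ("ancestorA", 10.0),
    "speciesZ":  ("root", 60.0),   # long, unbranched lineage -> high ED
}

# IUCN Red List categories mapped to GE scores.
GE = {"LC": 0, "NT": 1, "VU": 2, "EN": 3, "CR": 4}

def leaves_below(node):
    """Species (leaf) names in the subtree rooted at `node`."""
    children = [n for n, (parent, _) in tree.items() if parent == node]
    if not children:
        return {node}
    return set().union(*(leaves_below(c) for c in children))

def ed_score(species):
    """Sum, along the root-to-species path, of segment length / species below it."""
    ed, node = 0.0, species
    while tree[node][0] is not None:
        parent, length = tree[node]
        ed += length / len(leaves_below(node))
        node = parent
    return ed

def edge_score(species, iucn_category):
    """EDGE = ln(1 + ED) + GE * ln(2)  (Isaac et al. 2007)."""
    return math.log(1.0 + ed_score(species)) + GE[iucn_category] * math.log(2.0)

for species, category in [("speciesX", "LC"), ("speciesZ", "EN")]:
    print(species, round(ed_score(species), 1), round(edge_score(species, category), 2))
```

Running it gives an ED of 60 million years and an EDGE score of about 6.2 for the long, isolated and endangered lineage, against 20 and about 3.0 for the recently diverged, non-threatened one — the pattern the programme uses to flag distinctive, threatened species.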
Endangered species Environmental research Zoological Society of London
EDGE of Existence programme
Biology,Environmental_science
1,430
5,671,899
https://en.wikipedia.org/wiki/Linux%20User%20and%20Developer
Linux User & Developer was a monthly magazine about Linux and related free and open-source software published by Future. It was a UK magazine written specifically for Linux professionals and IT decision makers. It was available worldwide in newsagents or via subscription, and it could be downloaded via Zinio or Apple's Newsstand. History and profile Linux User & Developer was first published in September 1999. In August 2014 its sister magazine, RasPi, was launched. The magazine was acquired by Future plc (owner of the competing title Linux Format) as part of its acquisition of Imagine Publishing in 2016. The last issue of Linux User & Developer was published on 20 September 2018 (#196). Subscribers received issues of Linux Format in place of the remaining issues of their subscriptions. Staff Chris Thornett - Editor References External links Official homepage Future plc Defunct computer magazines published in the United Kingdom Linux magazines Magazines established in 1999 Magazines disestablished in 2018 Monthly magazines published in the United Kingdom 1999 establishments in the United Kingdom
Linux User and Developer
Technology
203
145,716
https://en.wikipedia.org/wiki/Lithosphere
A lithosphere () is the rigid, outermost rocky shell of a terrestrial planet or natural satellite. On Earth, it is composed of the crust and the lithospheric mantle, the topmost portion of the upper mantle that behaves elastically on time scales of up to thousands of years or more. The crust and upper mantle are distinguished on the basis of chemistry and mineralogy. Earth's lithosphere Earth's lithosphere, which constitutes the hard and rigid outer vertical layer of the Earth, includes the crust and the lithospheric mantle (or mantle lithosphere), the uppermost part of the mantle that is not convecting. The lithosphere is underlain by the asthenosphere which is the weaker, hotter, and deeper part of the upper mantle that is able to convect. The lithosphere–asthenosphere boundary is defined by a difference in response to stress. The lithosphere remains rigid for very long periods of geologic time in which it deforms elastically and through brittle failure, while the asthenosphere deforms viscously and accommodates strain through plastic deformation. The thickness of the lithosphere is thus considered to be the depth to the isotherm associated with the transition between brittle and viscous behavior. The temperature at which olivine becomes ductile (~) is often used to set this isotherm because olivine is generally the weakest mineral in the upper mantle. The lithosphere is subdivided horizontally into tectonic plates, which often include terranes accreted from other plates. History of the concept The concept of the lithosphere as Earth's strong outer layer was described by the English mathematician A. E. H. Love in his 1911 monograph "Some problems of Geodynamics" and further developed by the American geologist Joseph Barrell, who wrote a series of papers about the concept and introduced the term "lithosphere". The concept was based on the presence of significant gravity anomalies over continental crust, from which he inferred that there must exist a strong, solid upper layer (which he called the lithosphere) above a weaker layer which could flow (which he called the asthenosphere). These ideas were expanded by the Canadian geologist Reginald Aldworth Daly in 1940 with his seminal work "Strength and Structure of the Earth." They have been broadly accepted by geologists and geophysicists. These concepts of a strong lithosphere resting on a weak asthenosphere are essential to the theory of plate tectonics. Types The lithosphere can be divided into oceanic and continental lithosphere. Oceanic lithosphere is associated with oceanic crust (having a mean density of about ) and exists in the ocean basins. Continental lithosphere is associated with continental crust (having a mean density of about ) and underlies the continents and continental shelves. Oceanic lithosphere Oceanic lithosphere consists mainly of mafic crust and ultramafic mantle (peridotite) and is denser than continental lithosphere. Young oceanic lithosphere, found at mid-ocean ridges, is no thicker than the crust, but oceanic lithosphere thickens as it ages and moves away from the mid-ocean ridge. The oldest oceanic lithosphere is typically about thick. This thickening occurs by conductive cooling, which converts hot asthenosphere into lithospheric mantle and causes the oceanic lithosphere to become increasingly thick and dense with age. In fact, oceanic lithosphere is a thermal boundary layer for the convection in the mantle. 
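The square-root-of-age thickening described in the next paragraph can be made concrete with a brief sketch. It assumes the common half-space-cooling approximation, in which the thickness of the mantle lithosphere grows roughly as h ≈ 2√(κt); the thermal diffusivity of 10⁻⁶ m²/s and the 5 cm/yr plate speed used to convert ridge distance into age are representative values assumed for the illustration rather than figures taken from this article.

```python
import math

SECONDS_PER_MYR = 3.156e13  # roughly one million years, in seconds

def lithosphere_thickness_km(age_myr, kappa=1.0e-6):
    """Half-space cooling estimate: h ~ 2 * sqrt(kappa * t).

    kappa is the thermal diffusivity in m^2/s (1e-6 is a typical value
    assumed here for silicate rocks); age is in millions of years.
    """
    t_seconds = age_myr * SECONDS_PER_MYR
    return 2.0 * math.sqrt(kappa * t_seconds) / 1000.0  # metres -> kilometres

def age_from_ridge_distance_myr(distance_km, plate_speed_cm_per_yr=5.0):
    """Age t = L / V for lithosphere a distance L from the spreading centre."""
    distance_cm = distance_km * 1.0e5
    return distance_cm / plate_speed_cm_per_yr / 1.0e6  # years -> millions of years

if __name__ == "__main__":
    for age in (1, 10, 80):
        print(f"{age:>3} Myr old sea floor: ~{lithosphere_thickness_km(age):.0f} km thick")
    # Sea floor 4000 km from the ridge, at the assumed 5 cm/yr, is ~80 Myr old.
    print(f"age at 4000 km from ridge: ~{age_from_ridge_distance_myr(4000):.0f} Myr")
```

With these assumed values the estimate grows from roughly 11 km at an age of 1 million years to around 100 km at 80 million years.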
The thickness of the mantle part of the oceanic lithosphere can be approximated as a thermal boundary layer that thickens as the square root of time. Here, is the thickness of the oceanic mantle lithosphere, is the thermal diffusivity (approximately ) for silicate rocks, and is the age of the given part of the lithosphere. The age is often equal to L/V, where L is the distance from the spreading centre of mid-ocean ridge, and V is velocity of the lithospheric plate. Oceanic lithosphere is less dense than asthenosphere for a few tens of millions of years but after this becomes increasingly denser than asthenosphere. While chemically differentiated oceanic crust is lighter than asthenosphere, thermal contraction of the mantle lithosphere makes it more dense than the asthenosphere. The gravitational instability of mature oceanic lithosphere has the effect that at subduction zones, oceanic lithosphere invariably sinks underneath the overriding lithosphere, which can be oceanic or continental. New oceanic lithosphere is constantly being produced at mid-ocean ridges and is recycled back to the mantle at subduction zones. As a result, oceanic lithosphere is much younger than continental lithosphere: the oldest oceanic lithosphere is about 170 million years old, while parts of the continental lithosphere are billions of years old. Subducted lithosphere Geophysical studies in the early 21st century posit that large pieces of the lithosphere have been subducted into the mantle as deep as to near the core-mantle boundary, while others "float" in the upper mantle. Yet others stick down into the mantle as far as but remain "attached" to the continental plate above, similar to the extent of the old concept of "tectosphere" revisited by Jordan in 1988. Subducting lithosphere remains rigid (as demonstrated by deep earthquakes along Wadati–Benioff zone) to a depth of about . Continental lithosphere Continental lithosphere has a range in thickness from about to perhaps ; the upper approximately of typical continental lithosphere is crust. The crust is distinguished from the upper mantle by the change in chemical composition that takes place at the Moho discontinuity. The oldest parts of continental lithosphere underlie cratons, and the mantle lithosphere there is thicker and less dense than typical; the relatively low density of such mantle "roots of cratons" helps to stabilize these regions. Because of its relatively low density, continental lithosphere that arrives at a subduction zone cannot subduct much further than about before resurfacing. As a result, continental lithosphere is not recycled at subduction zones the way oceanic lithosphere is recycled. Instead, continental lithosphere is a nearly permanent feature of the Earth. Mantle xenoliths Geoscientists can directly study the nature of the subcontinental mantle by examining mantle xenoliths brought up in kimberlite, lamproite, and other volcanic pipes. The histories of these xenoliths have been investigated by many methods, including analyses of abundances of isotopes of osmium and rhenium. Such studies have confirmed that mantle lithospheres below some cratons have persisted for periods in excess of 3 billion years, despite the mantle flow that accompanies plate tectonics. Microorganisms The upper part of the lithosphere is a large habitat for microorganisms, with some found more than below Earth's surface. 
See also Carbonate–silicate cycle Climate system Cryosphere Geosphere Kola Superdeep Borehole Mohorovičić discontinuity Pedosphere Solid earth Vertical displacement References Further reading External links Earth's Crust, Lithosphere and Asthenosphere Crust and Lithosphere Plate tectonics Earth's mantle Systems ecology
Lithosphere
Environmental_science
1,609
24,019,952
https://en.wikipedia.org/wiki/Weapon%20mount
A weapon mount is an assembly or mechanism used to hold a weapon (typically a gun) onto a platform in order for it to function at maximum capacity. Weapon mounts can be broken down into two categories: static mounts and non-static mounts. Static mount A static mount is a non-portable weapon support component either mounted directly to the ground, on a fortification, or as part of a vehicle. Turret A gun turret protects the crew or mechanism of a weapon and at the same time lets the weapon be aimed and fired in many directions. A turret is a rotating weapon platform, strictly one that crosses the armour of whatever it is mounted on with a structure called a barbette (on ships) or basket (on tanks) and has a protective structure on top (gunhouse). If it has no gunhouse it is a barbette, if it has no barbette (i.e., it is mounted to the outside of the vehicle's armour) it is an installation. Turrets are typically used to mount machine guns, autocannons or large-calibre guns. They may be human operated or remotely controlled. A small turret, or sub-turret on a larger one, is called a cupola. The term cupola also describes rotating turrets that carry no weapons but instead are sighting devices, as in the case of tank commanders. A finial is an extremely small sub-turret or sub-sub-turret mounted on a cupola turret. Typically the gun is fixed on its horizontal axis and rotated by turning the turret, with trunnions on the gun used to allow it to elevate. Alternatively, in an oscillating turret the entire upper section of the turret moves to elevate and depress the gun. Casemate A casemate is an armoured structure consisting of a static primary surface incorporating a limited-traverse gun mount: typically, this takes the form of either a gun mounted through a fixed armour plate (typically seen on tank destroyers and assault guns) or a mount consisting of a partial cylinder of armour "sandwiched" between plates at the top and bottom (as with the sponson guns of early tanks and the secondary armament of Dreadnought-era battleships). Coaxial A coaxial mount, pioneered on T1 Light Tank in late 1920s and widely adopted by late 1930s, is mounted beside or above the primary weapon and thus points in the same general direction as the main armament, relying on the host weapon's ability to traverse in order to change arc. The term coaxial is something of a misnomer as the arrangement is strictly speaking paraxial (i.e., parallel axes, as opposed to the same axis), though for ballistic purposes the axis is effectively the same in practical terms. Nearly all main battle tanks and most infantry fighting vehicles have a coaxial machine gun mounted to fire along a parallel axis to the main gun. Coaxial weapons are usually aimed by use of the main gun control. It is usually used to engage infantry or other "soft" targets where use of shots from the main gun would be dangerous, ineffective or wasteful. Some weapons such as the M40 recoilless rifle and the Mk 153 Shoulder-Launched Multipurpose Assault Weapon have a smaller caliber spotting rifle mounted in coaxial fashion to the barrel or launch tube. These weapons fire special cartridges designed to mimic the ballistic arc of the host weapon's ammunition, using tracer or point-detonating rounds so that a gunner can easily determine where a shot will land in order to place fire accurately. Due to the adoption of more advanced systems such as laser rangefinders, they are rarely used on modern weapons. 
Ground mount Fixed A fixed mount is incapable of horizontal movement (traverse), though not necessarily incapable of vertical movement (elevation). The entire mounting must be moved in order to change direction of fire. Fully fixed mounts (no traverse or elevation) are most commonly found on aircraft, and most commonly direct the weapon forward, along the aircraft's vector of movement, so that a pilot can aim by pointing the nose of the aircraft at the target. Some aircraft designs used different concept of fixed mounts, as found in Schräge Musik or AC-47 Spooky. The Stridsvagn 103 is an unusual turretless main battle tank with a fixed main gun that is aimed using the tank's tracks and suspension. Military aircraft also often used fixed mounts called hardpoints or weapon stations to attach disposable stores such as missiles, bombs and external fuel tanks: these devices mount a standardised set of locking lugs to which many different types of armament can be affixed. Fixed traverse mounts capable of only elevation are common on larger self-propelled guns, as well being the mounting method used by virtually all railroad guns. Pintle A pintle mount is a swiveling mount that allows the gun to be freely traversed or elevated, while the base of the mount is still fixed keeping the whole system in one stable position: typically the mounting is either a rod on the underside of the gun (a pintle rod) that mates with a socket mechanism, or an intermediary cradle that mounts to the sides of the weapon's barrel or receiver. Due to the stability offered by the mount, the gun typically does not need a shoulder stock, with many modern examples using two-handed spade grips. It is most commonly found on armoured vehicles, improvised fighting vehicles such as technicals, side gun stations on WW2 and earlier-era bomber aircraft, and the door guns of armed transport helicopters. Early single-shot examples referred to as swivel guns were commonly mounted on the deck rails of naval vessels in the Age of Sail to deter boarders at close range. Larger guns require a heavier mounting referred to as a pedestal, and even larger guns a turntable platform: a pedestal mount may be directly manipulated, but larger guns typically require the use of mechanical handwheels or hydraulic/electric actuator assistance for traversing and elevation adjustments. Very large mounts might also include seats for the crew fixed to the gun cradle or the floor of the turntable. Unlike a turret, this type of mount typically has little or no armour protection, usually at most a frontal gun shield. Remote weapon station/installation This is a power-assisted mounting on the outside of whatever it is mounted on, usually bolted down to the surface and with only the control wires crossing the armour. Such mountings are typically used on armoured fighting vehicles for anti-personnel weapons to avoid exposing a crewmen to return fire, and on naval vessels for self-contained CIWS systems. Swing arm A swing mount is a fixed mount that allows a far greater and more flexible arc of fire than the simple pintle mount system. Utilising a system of one or two articulated arms the gunner can swing the weapon through a wide arc even though the gunner's position is fixed relative to the mount. These systems vary in complexity from a simple arm, to a double arm with the ability to lock the weapon in any firing position. Non-static mount A mobile mount is a weapon mount that is portable or can be transported around by infantry. 
Carriage Large weapons that cannot easily be lifted by infantry require a platform that can be moved around when mobility is needed. Wheels are typically used to allow maneuverability, although skids are sometimes preferred in cold climates where icy/snowy surfaces become problematic for wheels, and some particularly heavy guns have historically used unpowered tracks. Small carriages can be pushed/pulled by hands in the manner of a small cart or wheelbarrow, while larger ones require traction by animals or vehicles. Large weapons often use a deployable base to make them easier to transport and more stable in their firing position: split-trail mounts (where two long "trails" can be brought together to make a towing bar) and cruciform bases with two folding legs are examples. "Pack howitzers" are a special case where the carriage can be completely dismantled and split into a series of loads for transport over rough terrain, typically by mules. Baseplate Typically used by infantry mortars, this is a flat plate mounted to the weapon directly or using a ball joint. The plate is usually square, rectangular or circular, and designed to spread out the weapon's recoil force to prevent it from being piledriven into the ground: it is often, though not always, used with a two-legged stand to elevate the barrel at a desired angle. Monopod A monopod has one leg and does not provide stability along the coordinate axis of motion. Monopods have the advantage of being light and compact although when used in firing mode it does not have enough stability to be used with large firearms. Monopods are typically used on short-barreled, precision-fire firearms. Many sniper rifles feature a monopod integrated into their stock, providing the effect of a tripod when it is combined with a frontal bipod. Bipod A bipod has two legs and provides stability along the left-to-right coordinate axis of motion. The bipod permits the operator to rest the weapon on the ground, a low wall, or other object, reducing operator fatigue and permitting increased accuracy. Bipods can be of fixed or adjustable length, and can either be an accessory mounted to the weapon or integral to it. Those of higher quality can be tilted and also have their tilting point close to the bore central axis, allowing the weapon to tilt left and right a small amount, allowing a quick horizontal sight picture on uneven ground and keeping the operator close to the ground. Tripod A tripod has three legs and provides stability along the left-to-right and fore-and-aft coordinate axis of motion. Tripods have the disadvantage of being heavy and bulky, but provide far superior stability and do not require the user to exert any force in order to keep the mount balanced. Tripods are typically used on support weapons such as heavy machine guns, repeating grenade launchers, recoilless rifles and large infantry anti-tank missiles systems such as BGM-71 TOW. These tripods are often much larger than the weapon itself and may have mechanical elevation and traverse controls for indirect fire. The tripod permits the operator to rest the weapon on the ground and thus the gun feels lighter to the shooter and accuracy is increased. Shooting saddle A shooting saddle typically uses a tripod head but, instead of mounting the weapon directly to the tripod, the saddle is mounted to the tripod head and the rifle is cradled within the saddle. 
These saddles began to appear in the late 2000s as a solution to provide a stable shooting platform for snipers and marksmen who may need to take a shot from somewhere other than the prone position. Prior to their introduction, snipers had only shooting sticks or jury-rigged setups to use. Fork rest/shooting sticks Shooting sticks are portable weapon mounts used by field shooters, like hunters, snipers and metallic silhouette black-powder rifle shooters. They can be anything from purpose-built rests to constructions made from actual sticks, and have between one and three legs. They have existed since the days of early arquebusiers, when they would typically be a long thin stake with a U-shaped rest at the top, referred to as a fork rest. On firearms, shooting sticks are commonly used on rifles to provide a forward rest and reduce motion. Shooting sticks permit the operator to rest the weapon on the ground, a low wall, or other object, reducing operator fatigue and permitting increased accuracy. Underbarrel This type of infantry weapon mount is used to mount a weapon beneath the barrel of a larger one, using either special mounting equipment or an accessory rail. This allows the user to have two weapons ready in hand and a simple change of grip is all that is needed to fire the accessory weapon. It is most commonly used to mount a single-shot grenade launcher to a rifle or a cut-down shotgun to breach doors. Individual Various forms of weapon mounts have existed for individual use, or experimented with for military trials to ease the handling of heavy weapons and reduce fatigue on the battlefield. An example is the affusto d'assalto (assault carriage) or "bari mount" that was devised by the 139° e 140° Reggimento Fanteria Brigata "Bari" in 1917 and used on the Villar Perosa aircraft submachine gun for walking fire tactics. This allowed the user not only to fire the spade grip weapon but also throw grenades at the same time during combat. The Bari mount was used in trench raids, and was integral to the doctrinal purpose of the so-called 'pistollettieri' sections who were effectively grenadier-submachine gunners. Another example is the Third Arm Weapon Interface System and REAPER weapon support system. See also Free gun Firing port Gun pod References External links Reaper Weapon Support System Army’s Steadicam Third Arm – An Independent Study Army Research Lab Show Off Latest Prototype of the ‘Third Arm’ Steadicam Gun Revisited – Spade Gripped Firearm Firearm components
Weapon mount
Technology
2,662
11,531,277
https://en.wikipedia.org/wiki/Latex%20fixation%20test
A latex fixation test, also called a latex agglutination assay or test (LA assay or test), is an assay used clinically in the identification and typing of many important microorganisms. These tests use the patient's antigen-antibody immune response. This response occurs when the body detects a pathogen and forms an antibody specific to an identified antigen (a protein configuration) present on the surface of the pathogen. Agglutination tests, specific to a variety of pathogens, can be designed and manufactured for clinicians by coating microbeads of latex with pathogen-specific antigens or antibodies. In performing a test, laboratory clinicians will mix a patient's cerebrospinal fluid, serum or urine with the coated latex particles in serial dilutions with normal saline (important to avoid the prozone effect) and observe for agglutination (clumping). Agglutination of the beads in any of the dilutions is considered a positive result, confirming either that the patient's body has produced the pathogen-specific antibody (if the test supplied the antigen) or that the specimen contains the pathogen's antigen (if the test supplied the antibody). Instances of cross-reactivity (where the antibody sticks to another antigen besides the antigen of interest) can lead to confusing results. Agglutination techniques are used to detect antibodies produced in response to a variety of viruses and bacteria, as well as autoantibodies, which are produced against the self in autoimmune diseases. For example, assays exist for rubella virus, rotavirus, and rheumatoid factor, and an excellent LA test is available for cryptococcus. Agglutination techniques are also used in definitive diagnosis of group A streptococcal infection. See also References External links Description of the test Blood tests Immunologic tests
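The serial-dilution read-out described above lends itself to a small bookkeeping sketch. The example below builds a two-fold dilution series and reports the titer as the reciprocal of the highest dilution that still shows agglutination; the starting dilution, the number of tubes and the example read-out are invented for illustration, and the snippet is a toy calculation rather than a laboratory protocol.

```python
def dilution_series(start=2, steps=8, factor=2):
    """Reciprocal dilutions, e.g. 1:2, 1:4, ... 1:256 for the defaults."""
    return [start * factor**i for i in range(steps)]

def titer(agglutination_results, dilutions):
    """Reciprocal of the highest dilution that still shows agglutination.

    agglutination_results: booleans read in the same order as `dilutions`
    (True = visible clumping). Returns None if no tube is positive.
    Reading the early, low-dilution tubes alongside the later ones matters
    because a very high antigen or antibody load can suppress clumping
    there (the prozone effect).
    """
    positive = [d for d, seen in zip(dilutions, agglutination_results) if seen]
    return max(positive) if positive else None

dilutions = dilution_series()                                     # [2, 4, 8, ..., 256]
observed = [True, True, True, True, False, False, False, False]   # invented read-out
print(f"titer: 1:{titer(observed, dilutions)}")                   # -> titer: 1:16
```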
Latex fixation test
Chemistry,Biology
397
49,912,901
https://en.wikipedia.org/wiki/List%20of%20psychoactive%20plants%2C%20fungi%2C%20and%20animals
This is a list of psychoactive plants, fungi, and animals. Plants Psychoactive plants include, but are not limited to, the following examples: Cannabis: cannabinoids Tobacco: nicotine, anabasine, and other Nicotinic agonists, as well as beta-carboline alkaloids Coca: cocaine, ecgonine and other coca alkaloids Opium Poppy: morphine, codeine, thebaine, papaverine, noscapine, and narceine Salvia divinorum: salvinorin A and other Salvinorins Khat: cathine and cathinone Kava: kavalactones Nutmeg: myristicin Nightshade (Solanaceae) plants containing hyoscyamine, atropine, and scopolamine: Datura Deadly nightshade (Atropa belladonna) Henbane (Hyoscyamus niger) Mandrake (Mandragora officinarum) Other Solanaceae Psychoactive cacti, which contain mainly mescaline: Peyote Other Lophophora Peruvian Torch cactus San Pedro cactus Trichocereus macrogonus var. macrogonus (syn. Echinopsis peruviana) Trichocereus macrogonus var. pachanoi (syn. Echinopsis pachanoi) Trichocereus bridgesii Other Echinopsis Mild stimulant and vasoconstrictor plants that contain mainly caffeine and theobromine: Coffee Tea (also contains theanine) Guarana Yerba Mate Cocoa Kola Other plants: Mimosa hostilis: DMT Chacruna: DMT, NMT Cebil and Yopo: DMT, 5-MeO-DMT, bufotenin Mucuna pruriens Morning glory species, notably Hawaiian Baby Woodrose: lysergic acid amide Sinicuichi: Vertine, Lyfoline, Lythrine and other sinicuichi alkaloids Monotropa uniflora: Grayanotoxin (also found in Rhododendron pollen and mad honey) Iboga: ibogaine, noribogaine, ibogamine, voacangine, 18-methoxycoronaridine Ephedra: ephedrine Acacia species Damiana Leonotis leonurus: Docosatetraenoylethanolamide and other alkaloids Calea zacatechichi Silene capensis Valerian Areca nut: arecaidine and arecoline Kratom: mitragynine, mitraphylline, 7-hydroxymitragynine, raubasine, and other Kratom alkaloids Rauvolfia serpentina: rauwolscine Rauvolfia vomitoria Nymphaea caerulea (Egyptian lotus or blue lotus): apomorphine, nuciferine Yohimbe: yohimbine Kanna: mesembrine and mesembrenone Glaucium flavum (yellow horned poppy, yellow hornpoppy or sea poppy): glaucine California poppies: Protopine and Californidine Fungi Psilocybin mushrooms: psilocybin, psilocin, aeruginascin, baeocystin, and norbaeocystin Amanita muscaria: ibotenic acid, muscimol, and muscarine Amanita pantherina Dictyonema huaorani: psilocybin, DMT, and 5-MeO-DMT Collybia maculata: collybolide(unlikely to be psychoactive) Animals Colorado River toad (Sonoran Desert toad or Bufo alvarius): 5-MeO-DMT and bufotenin Asiatic toad and certain tree frogs (Osteocephalus taurinus, Osteocephalus oophagus, and Osteocephalus langsdorfii): bufotenin Tree frogs belonging to the genus Phyllomedusa, notably P. bicolor: opioid peptides, including deltorphin, deltorphin I, deltorphin II, and dermorphin Hallucinogenic fish Ocean life containing DMT analogs: Smenospongia aurea: 5-Bromo-DMT Smenospongia echina: 5,6-Dibromo-DMT Verongula rigida: 5-Bromo-DMT, 5,6-Dibromo-DMT, et al. Eudistoma fragum: 5-Bromo-DMT Paramuricea clavata: DMT, NMT Villogorgia rubra: NMT See also Entheogenic drugs and the archaeological record Hallucinogenic fish List of plants used for smoking List of psychoactive substances and precursor chemicals derived from genetically modified organisms List of psychoactive substances derived from artificial fungi biotransformation List of substances used in rituals Medicinal fungi References Biological sources of psychoactive drugs Psychoactive Psychoactive Psychoactive
List of psychoactive plants, fungi, and animals
Biology
1,073
10,319,171
https://en.wikipedia.org/wiki/Density%20wave%20theory
Density wave theory or the Lin–Shu density wave theory is a theory proposed by C.C. Lin and Frank Shu in the mid-1960s to explain the spiral arm structure of spiral galaxies. The Lin–Shu theory introduces the idea of a long-lived quasistatic spiral structure (QSSS hypothesis). In this hypothesis, the spiral pattern rotates with a particular angular frequency (pattern speed), whereas the stars in the galactic disk orbit at varying speeds, which depend on their distance to the galaxy center. The presence of spiral density waves in galaxies has implications for star formation, since the gas orbiting around the galaxy may be compressed and cause shock waves periodically. Theoretically, the formation of a global spiral pattern is treated as an instability of the stellar disk caused by self-gravity, as opposed to tidal interactions. The mathematical formulation of the theory has also been extended to other astrophysical disk systems, such as Saturn's rings. Galactic spiral arms Originally, astronomers had the idea that the arms of a spiral galaxy were material. However, if this were the case, then the arms would become more and more tightly wound, since the matter nearer to the center of the galaxy rotates faster than the matter at the edge of the galaxy. The arms would become indistinguishable from the rest of the galaxy after only a few orbits. This is called the winding problem. Lin & Shu proposed in 1964 that the arms were not material in nature, but instead made up of areas of greater density, similar to a traffic jam on a highway. The cars move through the traffic jam: the density of cars increases in the middle of it. The traffic jam itself, however, moves more slowly. In the galaxy, stars, gas, dust, and other components move through the density waves, are compressed, and then move out of them. More specifically, the density wave theory argues that the "gravitational attraction between stars at different radii" prevents the so-called winding problem, and actually maintains the spiral pattern. The rotation speed of the arms is defined to be Ω_p, the global pattern speed. (Thus, within a non-inertial reference frame rotating at Ω_p, the spiral arms appear to be at rest.) The stars within the arms are not necessarily stationary, though at a certain distance from the center, R_c, the corotation radius, the stars and the density waves move together. Inside that radius, stars move more quickly (Ω > Ω_p) than the spiral arms, and outside, stars move more slowly (Ω < Ω_p). For an m-armed spiral, a star at radius R from the center will move through the structure with a frequency m|Ω(R) − Ω_p|. So, the gravitational attraction between stars can only maintain the spiral structure if the frequency at which a star passes through the arms is less than the epicyclic frequency, κ(R), of the star. This means that a long-lived spiral structure will only exist between the inner and outer Lindblad resonance (ILR, OLR, respectively), which are defined as the radii such that Ω_p = Ω(R) − κ(R)/m and Ω_p = Ω(R) + κ(R)/m, respectively. Past the OLR and within the ILR, the extra density in the spiral arms pulls more often than the epicyclic rate of the stars, and the stars are thus unable to react and move in such a way as to "reinforce the spiral density enhancement". Further implications The density wave theory also explains a number of other observations that have been made about spiral galaxies. 
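Before turning to those observations, the corotation and Lindblad-resonance conditions above can be made concrete with a short sketch. It assumes an idealised flat rotation curve, for which the orbital angular speed is Ω(R) = v_c/R and the epicyclic frequency is κ = √2·Ω, so the ILR, corotation and OLR radii follow directly from the pattern speed; the 220 km/s circular speed, 25 km/s-per-kpc pattern speed and two-armed (m = 2) pattern are illustrative assumptions, not values from the article.

```python
import math

def resonance_radii(v_c=220.0, omega_p=25.0, m=2):
    """Corotation and Lindblad resonance radii for a flat rotation curve.

    v_c     : circular speed in km/s (assumed constant with radius)
    omega_p : pattern speed in km/s per kpc
    m       : number of spiral arms

    For a flat rotation curve, Omega(R) = v_c / R and the epicyclic
    frequency is kappa(R) = sqrt(2) * Omega(R).
    """
    r_corotation = v_c / omega_p                        # Omega(R) = Omega_p
    # ILR: Omega - kappa/m = Omega_p  ->  R_ILR = R_c * (1 - sqrt(2)/m)
    factor_ilr = 1.0 - math.sqrt(2.0) / m
    r_ilr = r_corotation * factor_ilr if factor_ilr > 0 else None
    # OLR: Omega + kappa/m = Omega_p  ->  R_OLR = R_c * (1 + sqrt(2)/m)
    r_olr = r_corotation * (1.0 + math.sqrt(2.0) / m)
    return r_ilr, r_corotation, r_olr

if __name__ == "__main__":
    r_ilr, r_c, r_olr = resonance_radii()
    print(f"ILR        ~ {r_ilr:.1f} kpc")   # ~2.6 kpc for the assumed numbers
    print(f"corotation ~ {r_c:.1f} kpc")     # 8.8 kpc
    print(f"OLR        ~ {r_olr:.1f} kpc")   # ~15.0 kpc
```

For these assumed numbers the long-lived spiral pattern would be confined between roughly 2.6 and 15 kpc, with corotation near 8.8 kpc.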
For example, "the ordering of H I clouds and dust bands on the inner edges of spiral arms, the existence of young, massive stars and H II regions throughout the arms, and an abundance of old, red stars in the remainder of the disk". When clouds of gas and dust enter into a density wave and are compressed, the rate of star formation increases as some clouds meet the Jeans criterion, and collapse to form new stars. Since star formation does not happen immediately, the stars are slightly behind the density waves. The hot OB stars that are created ionize the gas of the interstellar medium, and form H II regions. These stars have relatively short lifetimes, however, and expire before fully leaving the density wave. The smaller, redder stars do leave the wave, and become distributed throughout the galactic disk. Density waves have also been described as pressurizing gas clouds and thereby catalyzing star formation. Application to Saturn's rings Beginning in the late 1970s, Peter Goldreich, Frank Shu, and others applied density wave theory to the rings of Saturn. Saturn's rings (particularly the A Ring) contain a great many spiral density waves and spiral bending waves excited by Lindblad resonances and vertical resonances (respectively) with Saturn's moons. The physics are largely the same as with galaxies, though spiral waves in Saturn's rings are much more tightly wound (extending a few hundred kilometers at most) due to the very large central mass (Saturn itself) compared to the mass of the disk. The Cassini mission revealed very small density waves excited by the ring-moons Pan and Atlas and by high-order resonances with the larger moons, as well as waves whose form changes with time due to the varying orbits of Janus and Epimetheus. See also Barred spiral galaxy Dark matter Galaxy Magellanic spiral Spiral galaxy Self-propagating star formation References External sources Bertin, Giuseppe. 2000. Dynamics of Galaxies. Cambridge: Cambridge University Press. Bertin, G. and C.C. Lin. 1996. Spiral Structure in Galaxies: A Density Wave Theory. Cambridge: MIT Press. C.C. Lin, Yuan, C., and F.H. Shu, "On the Spiral Structure of Disk i Galaxies III. Comparison with Observations", Ap.J. 155, 721 (1969). (SCI) Yuan, C.,"Application of Density-Wave Theory to the Spiral Structure of the Milky Way System I. Systematic Motion of Neutral Hydrogen", Ap.J., 158, 871 (1969). (SCI) External links Britannica.com: Density Wave Theory (galactic structure) Internet Encyclopedia of Science: Density Wave UOttawa FactGuru: Density Wave Theory Extragalactic astronomy Galactic astronomy Articles containing video clips
Density wave theory
Astronomy
1,270
6,876,579
https://en.wikipedia.org/wiki/Spherical%20cow
The spherical cow is a humorous metaphor for highly simplified scientific models of complex phenomena. Originating in theoretical physics, the metaphor refers to some scientific tendencies to develop toy models that reduce a problem to the simplest form imaginable, making calculations more feasible, even if the simplification hinders the model's application to reality. History The phrase comes from a joke that spoofs the simplifying assumptions sometimes used in theoretical physics. John Harte, who received his Ph.D. from the University of Wisconsin in 1965, reported that he first heard the joke as a graduate student. One of the earliest published references is in a 1970 article by Arthur O. Williams Jr. of Brown University, who described it as "a professional joke that circulated among scientists a few years ago". The story is told in many variants, including a joke about a physicist who said he could predict the winner of any race provided it involved spherical horses moving through a vacuum. A 1973 letter to the editor in the journal Science describes the "famous story" about a physicist whose solution to a poultry farm's egg-production problems began with "Postulate a spherical chicken". Cultural references The concept is familiar enough that the phrase is sometimes used as shorthand for the entire issue of proper modeling. For example, Consider a Spherical Cow is a 1985 book about problem solving using simplified models. A 2015 paper on the systemic errors introduced by simplifying assumptions about spherical symmetries in galactic dark-matter haloes was titled "Milking the spherical cow – on aspherical dynamics in spherical coordinates". References to the joke appear even outside the field of scientific modeling. "Spherical Cow" was chosen as the code name for the Fedora 18 Linux distribution. In the sitcom The Big Bang Theory, a joke is told by Dr. Leonard Hofstadter with the punchline mentioning "spherical chickens in a vacuum", in "The Cooper-Hofstadter Polarization" episode. In the space gravity simulator educational video game Universe Sandbox, a spherical cow was added as a user-placeable object in March 2023. See also Assume a can opener, a joke about invalid assumptions in economics Amorphous globosus, a rare and fatal birth defect in cattle, producing a ball of underdeveloped tissue Fermi problem, efforts to produce very broad estimates Homo economicus, a hypothetical rational person Naïve physics, also called folk physics Schwarzschild metric, an exact solution of the Einstein field equations assuming a uniform spherical symmetric nonrotating uncharged mass in a vacuum References External links NASA:Exploration of the Universe Division – Supernova models as spherical cows Hubble Heritage Gallery Page: related history from Space Telescope Institute Humour in science In-jokes Metaphors referring to cattle Scientific modelling Theoretical physics
Spherical cow
Physics
565
2,616,675
https://en.wikipedia.org/wiki/Flavor-changing%20neutral%20current
In particle physics, flavor-changing neutral currents or flavour-changing neutral currents (FCNCs) are hypothetical interactions that change the flavor of a fermion without altering its electric charge. Details If they occur in nature (as reflected by Lagrangian interaction terms), these processes may induce phenomena that have not yet been observed in experiment. Flavor-changing neutral currents may occur in the Standard Model beyond the tree level, but they are highly suppressed by the GIM mechanism. Several collaborations have searched for FCNC. The Tevatron CDF experiment observed evidence of FCNC in the decay of the strange B-meson to phi mesons in 2005. FCNCs are generically predicted by theories that attempt to go beyond the Standard Model, such as models of supersymmetry or technicolor. Their suppression is necessary for agreement with observations, making FCNCs important constraints on model-building. Example Consider a toy model in which an undiscovered boson S may couple both to the electron and to the tau (τ) via a Yukawa-like interaction term of the form g S ē τ + h.c., where g is a coupling constant. Since the electron and the tau have equal charges, the electric charge of S clearly must vanish to respect the conservation of electric charge. A Feynman diagram with S as the intermediate particle is able to convert a tau into an electron (plus some neutral decay products of the S). The MEG experiment at the Paul Scherrer Institute near Zürich will search for a similar process, in which an antimuon decays to a photon and an antielectron (a positron). In the Standard Model, such a process proceeds only by emission and re-absorption of a charged W boson, which changes the antimuon into a neutrino on emission and then a positron on re-absorption, and finally emits a photon that carries away any difference in energy, spin, and momentum. In most cases of interest, the boson involved is not a new boson S but the conventional Z boson itself. This can occur if the coupling to weak neutral currents is (slightly) non-universal. The dominant universal coupling to the Z boson does not change flavor, but sub-dominant non-universal contributions can. FCNCs involving the Z boson for the down-type quarks at zero momentum transfer are usually parameterized by an effective interaction term of the form U_ij (d̄_i γ^μ d_j) Z_μ, where U is a matrix of effective couplings between the down-type quark flavors. This particular example of FCNC is often studied the most because we have some fairly strong constraints coming from the decay of B mesons at Belle and BaBar. The off-diagonal entries of U parameterize the FCNCs, and current constraints restrict them to be less than one part in a thousand for |Ubs|. The contributions coming from the one-loop Standard Model corrections are actually dominant, but the experiments are precise enough to measure slight deviations from the Standard Model prediction. Experiments tend to focus on flavor-changing neutral currents as opposed to charged currents, because the weak neutral current (the Z boson) does not change flavor in the Standard Model proper at the tree level, whereas the weak charged currents (the W bosons) do. New physics in charged-current events would be swamped by more numerous W boson interactions; new physics in the neutral current would not be masked by a large effect due to ordinary Standard Model physics. See also Neutral particle oscillation Penguin diagram Two-Higgs-doublet model References Standard Model Physics beyond the Standard Model Hypothetical processes
Flavor-changing neutral current
Physics
685
62,371,082
https://en.wikipedia.org/wiki/Tom%20Thumb%20Tempest
"Tom Thumb Tempest" is the 22nd episode of Stingray, a British Supermarionation television series created by Gerry and Sylvia Anderson and produced by their company AP Films (APF) for ITC Entertainment. Written by Alan Fennell and directed by Alan Pattillo, it was first broadcast on 28 February 1965 on the Anglia, ATV London, Grampian and Southern franchises of the ITV network. It subsequently aired on ATV Midlands on 3 March 1965. The series follows the missions of the World Aquanaut Security Patrol (WASP), an organisation responsible for policing the Earth's oceans in the 2060s. Headquartered at the self-contained city of Marineville on the West Coast of North America, the WASP operates a fleet of vessels led by Stingray: a combat submarine crewed by Captain Troy Tempest, Lieutenant "Phones" and Marina, a mute young woman from under the sea. Stingrays adventures bring it into contact with undersea civilisations – some friendly, others hostile – as well as mysterious natural phenomena. The WASP's most powerful enemy is King Titan, ruler of the ocean floor city of Titanica. In "Tom Thumb Tempest", Troy has a nightmare in which Stingray and its crew are miniaturised. The use of life-sized sets to convey the shrinking of the puppet characters has drawn a mixed response from commentators. Fennell later authored a picture book based on the episode titled Stingray: Terror of the Giants (1993, Boxtree Ltd). Plot The Stingray crew are relaxing in the Marineville standby lounge when Commander Shore tells them to prepare to launch, warning them of a dangerous mission. Captain Troy Tempest is eager to leave immediately but Shore says to await further instructions. Troy's attention turns to the fish in the lounge aquarium. He then falls asleep in his chair. Troy wakes to hear Shore on the intercom, ordering the crew to launch. He departs in Stingray with Lieutenant "Phones" and Marina. Shore radios in, ordering Troy to pilot Stingray through an undersea tunnel. Troy asks for details of the mission and is left feeling belittled when Shore gruffly denies his request. Stingray exits the tunnel and collides with a sheet of glass. The crew are astonished to find that they have been miniaturised and ended up inside an aquarium within a giant dining room. Leaving Stingray on their personal hovercraft, they investigate the dining table, which has been laid out for various undersea villains. At the head of the table – set for Titan – is a schematic of Marineville's defence systems. The crew realise that they have stumbled across a gathering of the undersea races to plot the destruction of Marineville. The crew take cover when an Aquaphibian dressed as a butler enters the room to check the table. They then use a nearby telephone to call Marineville. Shore answers and Troy attempts to explain the situation, but Shore thinks that it is a prank call and hangs up. The crew are again forced to hide when the Aquaphibian returns with Titan's agent X-2-Zero, who notices the mess the crew have made and reprimands the Aquaphibian for what he assumes to be poor table-setting. The Aquaphibian tidies up and leaves. The crew destroy the schematic by soaking it in alcohol and setting it alight. The fire quickly engulfs the room, forcing them back to Stingray. As the aquarium boils, Troy realises that Stingray is trapped. He orders Phones to launch a torpedo to shatter the glass, hoping that the escaping water will put out the fire. As Phones fires the torpedo, Troy wakes to find himself back in the Marineville lounge. 
Shore tells the crew to stand down and Troy, realising that he has had a nightmare, apologises to Shore for his earlier impatience. Regular voice cast Ray Barrett as Commander Shore Robert Easton as Lieutenant "Phones" and Surface Agent X-2-Zero Don Mason as Captain Troy Tempest Production The title of the episode was based on the folklore character Tom Thumb. In the script, the Aquaphibian butler was called "Jeevesea" – a pun on the fictional valet Jeeves. "Tom Thumb Tempest" was significant for combining Stingrays -scale Supermarionation puppets with a life-sized dining room set. (Accordingly, at one point Troy says that the contents of the room appear to be "three times" larger than normal.) It was not the first episode of an APF series to deal with miniaturised characters: the idea had previously been explored in Supercars "Calling Charlie Queen" and Fireball XL5s "The Triads". However, while those episodes had used back projection for their miniaturisation effects, "Tom Thumb Tempest" placed "shrunken" characters on a physical set. Stephen La Rivière cites "Tom Thumb Tempest" as another example of the "Land of Giants-type" episode that APF had attempted in its previous two series. Reception Gerry Anderson biographers Simon Archer and Marcus Hearn consider "Tom Thumb Tempest" to be one of Stingrays most entertaining episodes. By contrast, TV Zone names it the worst of the series, calling the ending "reasonably clever" but the overall episode a "wasted opportunity". The magazine argues that the episode is spoiled through its use of "two hoary old clichés – the 'incredible shrinking cast' idea ... and the 'it was all a dream' cop-out ending" – the first of which merely emphasises the "unreality" of the plot while the second renders the episode "entirely inconsequential". It also criticises the dream sequence itself for being insufficiently surreal and "[degenerating] into sub-Tom and Jerry shenanigans" towards the end. Jim Sangster and Paul Condon, authors of Collins Telly Guide, describe the episode as "decidedly less aimed at realism" than those of later Supermarionation series. They also refer to dream sequences in general as "one of Anderson's most annoying recurring plot devices". La Rivière suggests that the "tantalising glimpse of reality" in this episode conflicted with APF's ongoing efforts to make its puppet characters seem more human. Ian Fryer regards the episode as a precursor of the final Supermarionation series, The Secret Service, which featured both puppets and live actors. References Works cited External links 1965 British television episodes Fiction about size change Science fiction television episodes Stingray (1964 TV series) Television episodes about nightmares Television episodes set on fictional islands
Tom Thumb Tempest
Physics,Mathematics
1,357
2,670,513
https://en.wikipedia.org/wiki/Iota%20Sculptoris
ι Sculptoris, Latinized as Iota Sculptoris and abbreviated iot Scl, is a solitary star in the southern constellation of Sculptor. It is visible to the naked eye as a dim, orange-hued point of light with an apparent visual magnitude of 5.18. The star is located approximately 336 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +21 km/s. This is an aging giant star with a stellar classification of K0III, currently on the red giant branch. It has 2.9 times the mass of the Sun and has expanded to 12 times the Sun's radius. The star is radiating 97 times the luminosity of the Sun from its enlarged photosphere at an effective temperature of 5,020 K. These coordinates are a source for X-ray emission, which is most likely (99.4% chance) coming from the star. References K-type giants Sculptor (constellation) Sculptoris, Iota CD-29 86 001737 001708 0084
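As a rough cross-check of the figures quoted above, the sketch below converts the stated distance into an annual parallax and estimates the luminosity from the quoted radius and effective temperature using the Stefan–Boltzmann relation; the solar reference values are standard constants, and exact agreement with the catalogued luminosity is not expected because the published figures come from independent measurements.

```python
# Rough consistency checks for the quoted stellar parameters (illustrative only).

LY_PER_PARSEC = 3.2616
T_SUN = 5772.0  # K, nominal solar effective temperature

def parallax_mas(distance_ly):
    """Annual parallax in milliarcseconds: p[arcsec] = 1 / d[pc]."""
    return 1000.0 / (distance_ly / LY_PER_PARSEC)

def luminosity_solar(radius_solar, t_eff):
    """Stefan-Boltzmann: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4."""
    return radius_solar**2 * (t_eff / T_SUN) ** 4

print(f"parallax   ~ {parallax_mas(336):.1f} mas")             # ~9.7 mas
print(f"luminosity ~ {luminosity_solar(12, 5020):.0f} Lsun")   # ~82 Lsun, same order as the quoted 97
```

The sketch returns a parallax near 9.7 milliarcseconds and a luminosity of roughly 80 times solar, of the same order as the quoted value.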
Iota Sculptoris
Astronomy
221
30,699,077
https://en.wikipedia.org/wiki/EMBiology
EmBiology (formerly EMBiology) is a web-based Software as a service tool from Elsevier in which researchers can view biological relationships between entities, such as genes, proteins, and cells. Launched in 2023, EmBiology queries a Biological Knowledge Graph with 1.4 million entities connected by 15.7 million relationships. It uses a Sankey diagram to visualize search findings, and displays "snippets" of text from relevant scientific literature. Previous Version EmBiology was originally launched as EMBiology in 2005 as a life science bibliographic database in a partnership with Ovid Technologies as a smaller version of Embase. Content Coverage EmBiology Data sources include: 7.2 million PubMed abstracts 430,000 from Clinicaltrials.gov 7.2 million Full-text articles from 936 Elsevier journals and 939 non-Elsevier journals Biological Concepts The following biological concepts are included in EmBiology: Biological Relationships The following biological relationships are included in EmBiology References Bibliographic databases and indexes Biological databases Publications established in 2005 Elsevier Drug discovery
EMBiology
Chemistry,Biology
219
7,098,529
https://en.wikipedia.org/wiki/Fusion%20of%20horizons
In the philosophy of Hans-Georg Gadamer, a fusion of horizons (German: Horizontverschmelzung) is the process through which the members of a hermeneutical dialogue establish the broader context within which they come to a shared understanding. In phenomenology, a horizon refers to the context within which any meaningful presentation is contained. For Gadamer, we exist neither in closed horizons, nor within a horizon that is unique; we must reject both the assumption of absolute knowledge, that universal history can be articulated within a single horizon, and the assumption of objectivity, that we can "forget ourselves" in order to achieve an objective perspective of the other participant. According to Gadamer, it is not possible to remove oneself entirely from one's own broader context (e.g. background, history, culture, gender, language, education) and adopt a wholly different system of attitudes, beliefs and ways of thinking. In order to gain an understanding from a conversation or dialogue across different cultures, we must therefore acquire "the right horizon of inquiry for the questions evoked by the encounter with tradition" through negotiation; in order to come to an agreement, the participants must establish a shared context through this "fusion" of their horizons. See also Horizon of expectation Perspectivism Notes References Concepts in epistemology Hans-Georg Gadamer Hermeneutics Phenomenology Social epistemology
Fusion of horizons
Technology
291
40,613,126
https://en.wikipedia.org/wiki/Project%20MinE
Project MinE is an independent large scale whole genome research project that was initiated by 2 patients with amyotrophic lateral sclerosis and started on World ALS Day, June 21, 2013. The symptoms of amyotrophic lateral sclerosis are caused by degeneration of motor nerve cells (motor neurons) in the spinal cord, brainstem, and motor cortex. The exact cause of this degeneration is unknown but it is thought that environmental exposures and genetic factors play a role in susceptibility to the disease. In 5-10% of patients the family history is positive for ALS. However, it is not always possible to establish the mode of inheritance in each pedigree and not all familial cases may suffer from a genuine Mendelian or monogenic disorder. Autosomal-dominant mutations in the C9orf72 and the SOD1 gene are found in a substantial number of familial ALS cases. Mutations in other genes (such as VAPB [2], ANG, TARDBP and FUS) have been reported, but are found at a much lower frequency and with variable penetrance, suggesting the involvement of other genes. Project MinE is a research project to systematically interrogate the human genome for both common and rare genetic variation in ALS (genetic "data mining" explains the project name). The project consists of two phases and combines a genome-wide association study (GWAS) study with whole genome sequencing: Phase 1 of Project MinE consists of whole genome sequencing of 300 DNA samples of ALS patients to detect relevant haplotypes with high fidelity (variant calling & haplotype detection). Subsequently, expansion of the current GWAS for ALS will take place by increasing the amount of DNA samples to be investigated to 15,000 ALS samples and 20,000 healthy controls (so 35,000 samples in total) and imputation using the whole genome sequencing results will be performed. Combining these two processes will result that a relatively small group of whole genome sequenced DNA samples will extend the > 500,000 single nucleotide polymorphism (SNP) markers of a GWAS to 8,000,000 SNP markers and per definition will include ALS-relevant variation. Phase 2 of the project aims to increase the number of whole genome sequenced samples to 22,500, which includes 15,000 ALS samples and 7,500 healthy controls. High-throughput next-generation sequencing will be applied. This sample size will be large enough to reliably analyze whole genome sequencing data outside of a family context. The long-term benefit of the approach taken for project MinE is the priceless catalogue of many non-ALS whole genomes that can be used to investigate other human diseases, including Diabetes Mellitus, some types of cancer, and other neurological disorders. Project MinE is worldwide the largest genetic study for Amyotrophic Lateral Sclerosis. The work has started in the second quarter of 2013 and is a unique international collaboration between scientists, industry, social foundations and patients. On July 25, 2016, the first results were published in 2 publications in Nature Genetics leading to the discovery of NEK1 and C21orf2 as new ALS risk genes. References External links http://www.genome.gov/GWAStudies Genome projects
Project MinE
Biology
676
7,701,587
https://en.wikipedia.org/wiki/Ottobock
Ottobock SE & Co. KGaA, formerly Otto Bock, is an international company based in Duderstadt Germany, that operates in the field of orthopedic technology. It is considered the world market leader in the field of prosthetics and one of the leading suppliers in orthotics, wheelchairs and exoskeletons. Näder Holding GmbH & Co. KG is entirely owned by the Näder family, direct descendants of the company's founder, Otto Bock. Näder Holding controls 80% of the shares in Ottobock SE & Co. KGaA. The remaining 20% of the shares were previously held by the Swedish financial investor EQT. However, in March 2024, it was announced that Näder Holding had repurchased these shares from EQT for EUR 1.1 billion. In 2022, the Ottobock Group as a whole generated sales of €1,3 billion. they had 8,367 employees worldwide. History Foundation of the company from 1919 The company was founded on January 13, 1919 under the appellation Orthopädische Industrie GmbH, headquartered in Berlin. Initiated by a group surrounding a manufacturer named Otto Bock, who hailed from Krefeld, its objective was to supply prostheses and orthopedic products to the many thousands of war invalids of World War I. Bock acted as production manager during this phase. In 1920, production operations were relocated to Königsee in Thuringia, where at times, a workforce of up to 600 people was employed. Faced with high demand that could not be met with traditional handicraft or artisanal methods, Otto Bock began the mass-production of prosthetic components, thus laying the foundation for the orthopedic industry. Bock moved into the management of the company in 1924 and finally took over as sole managing director in 1927. With the evolution of the industry, new materials began to be used in production, notably aluminum, which found early application in prosthetic construction during the 1930s. National Socialist era 1933-1945 In May 1933, Bock joined the NSDAP. During the 1930s he became a supporting member of the SS. He paid a monthly contribution of six Reichmarks, according to his own account, until 1938. At the end of 1933, Bock had Orthopädische Industrie GmbH liquidated and paid out the remaining shareholders. The company was renamed Orthopedic Industry Otto Bock in Königsee. Max Näder, after completing his secondary education in 1935, commenced his professional journey by undertaking training as an orthopedic mechanic and industrial clerk at Otto Bock. During his later studies in Berlin, he became part of the National Socialist German Student Association (NSDStB). During the African campaign, Näder was awarded the Iron Cross II Class. In 1943, while on leave, he married Maria Bock, the daughter of Otto Bock. During the tumult of World War II, the company resorted to the utilisation of forced laborers to sustain its operational endeavors. A company chronicle quotes former employees as saying that from 1942 onwards, around 100 Russian women aged between 18 and 22 were employed in the bandage, sewing and timber departments. Letters from Marie Bock also suggest that the entrepreneurial family used forced labourers not only in the company but also in the private household. 1946-1989 After World War II, when all the family's private assets as well as the factory in Königsee had been confiscated by the Soviet occupiers, the company settled in Duderstadt in southern Lower Saxony in 1946. In 1950, plastics were introduced into production for the first time. 
The invention of a braking knee joint with high stability, called the Jüpa knee, brought the economic breakthrough after 1949. Together with a newly developed balance device and two other apparatuses for prosthetic alignment, it was also in demand on the American market. In 1955, Ottobock exported the first 500 Jüpa knees to the U.S. The establishment of an American branch in Minneapolis in 1958 marked the beginning of the company's international sales structure. In 1965, Max Näder introduced myoelectric arm prostheses to the market. For the first time, light and fragile as well as heavy objects could be grasped with them. In myoelectrics, weak electrical voltages control the prosthesis. Another development was a fitting solution for modular leg prostheses. The pyramid adapter, patented in 1969, connects the prosthetic foot, knee joint and stem and allows static corrections as well as the exchange of the modules. It remains an integral element of modern joints to this day. 1990 until today After reunification, Hans Georg Näder took over the management of the family company from his father Max Näder, the son-in-law of company founder Otto Bock, in 1990. In the same year, the company was able to reacquire the old Ottobock site in Königsee. Today, manual wheelchairs, power wheelchairs, rehabilitation products for children and seat shell bases are produced at the former headquarters. After a five-year development period, the world's first microprocessor-controlled knee joint, the C-Leg, was presented at the World Prosthetics Congress in Nuremberg in 1997. To mark the company's 90th anniversary, the newly built Science Center Medical Technology was inaugurated in Berlin in June 2009. Until 2019, this building near Potsdamer Platz housed the public exhibition Begreifen, was uns bewegt, and served as a venue for congresses and seminars. On January 1, 2009, the subsidiary Otto Bock Mobility Solutions GmbH based in Königsee emerged from the HealthCare division. At the end of 2011, the old logo with the original signature of Otto Bock was replaced by a new international logo. Advancements in electronic knee joint components and mechatronic prosthetic feet led to enhanced individual fitting and personalised care for recipients. In 2011, these technological improvements enabled prosthetic users to walk backwards safely, overcome obstacles, or climb stairs in alternating steps for the first time. In 2016, Ottobock was banned from operating in parts of Bosnia following an investigation by the Centre for Investigative Reporting that revealed the company was implicated in a scandal involving the misuse of public health funds in which prosthetic limb users were forced to buy Ottobock's products. In February 2017, Ottobock purchased the myoelectric arm or hand prostheses developed under the product name BeBionic from the British medical technology company Steeper. Since May 2017, the prostheses have been part of Ottobock's product range. In April 2017, Ottobock acquired Boston-based BionX Medical Technologies, which manufactured a prosthetic foot and ankle product that utilises robotics technology. In June 2017, the Swedish financial investor EQT acquired a 20 percent stake in Ottobock.
In 2018, Ottobock expanded its presence in the orthopaedic technology market, acquiring a 51 percent stake in Pohlig GmbH, a medium-sized orthopedic company based in Traunstein, Bavaria, and one of the most important orthopedic technology companies in Germany. In the same year, Pohlig GmbH became a wholly owned subsidiary of Ottobock. During the period from 2012 to 2018, Hans Georg Näder withdrew substantial sums from Ottobock, exceeding the company's generated profits. This financial practice led to a significant decline in Ottobock's equity ratio, which dropped from 50% in 2011 to 16% by 2021. In late 2018, Ottobock's subsidiary, Sycor, planned a merger with the IT service provider Allgeier Enterprise Services. However, Ottobock cancelled the merger at the beginning of 2019. Following a series of acquisitions, Ottobock reported in 2019 that for the first time in its history the company's sales exceeded €1 billion. In November 2019, Ottobock was compelled to sell the U.S.-based prosthesis manufacturer Freedom Innovations LLC and divest all assets acquired via its purchase of the industry competitor in 2017. This sale was mandated after the U.S. Federal Trade Commission (FTC) filed a complaint against Ottobock for breaking competition laws, and it caused Ottobock a loss of €78.1 million. The shares of Freedom Innovations were subsequently acquired by the French prosthesis manufacturer Proteor. In December 2019, the European Investment Bank (EIB) announced that it would provide up to €100 million to Ottobock to support the company's development of new products. In 2018, Ottobock's new generation of orthoses incorporated sensor technology to regulate the stance and swing phases of the leg throughout the gait cycle, enabling an almost natural walking pattern. Additionally, the company introduced an exoskeleton, the first product of the new Ottobock Bionic Exoskeletons business unit, designed to reduce strain during overhead work. Ottobock expanded its exoskeletons business after acquiring the US-based exoskeleton startup SuitX, a spinoff from the Berkeley Robotics and Human Engineering Laboratory, in November 2021. At the end of 2021, Ottobock announced plans for an initial public offering (IPO) slated for 2022. However, this IPO was repeatedly postponed throughout the following year, accompanied by significant changes in the company's executive leadership. By the end of 2022, Handelsblatt reported that the IPO had been abandoned due to unfavourable market conditions and that the financial investor EQT was considering a direct sale of its shares. In May 2020, an Ottobock subsidiary based in Russia was fined by Russian anti-monopoly authorities for suspected cartel collusion which gave Ottobock and its co-conspirators a monopoly over state tenders for prosthetics worth 168.1 million Russian roubles. In June 2023, it was announced that EQT, with the assistance of JP Morgan, had initiated the sale of its 20% stake in Ottobock. Additionally, a 10% stake held by Hans Georg Näder was included as part of the planned transaction. In December 2023, Näder Holding declared its intention to repurchase all of these shares. This buyback was completed in March 2024, with Näder securing €1.1 billion in credit funds for the transaction. Prior to the buyback, Hans Georg Näder sold his company Sycor, which had acquired Näder Ventures GmbH from Ottobock at the beginning of 2021. In March 2023, Ottobock expanded its operations by acquiring the Brillinger chain of medical supply stores.
That same month, Cranial Technologies filed a patent infringement lawsuit against Ottobock and Active Life, alleging that the companies used a patented process for 3D printing certain components of infant cranial helmets without authorisation. Controversies Despite the Russian invasion of Ukraine, Ottobock continues its operations in Russia, including a manufacturing site in Tolyatti. Corporate affairs Ownership The largest shareholder of Ottobock SE & Co. KGaA is Näder Holding GmbH & Co. KG, which is headquartered in Duderstadt. It is 100 percent owned by the Näder family, the direct descendants of the company founder Otto Bock. A further 20 percent of Ottobock was held by the Swedish financial investor EQT until Näder Holding repurchased those shares in March 2024. Hans Georg Näder had publicly stated that he intended to float Ottobock via an initial public offering (IPO) scheduled for 2022, having previously announced an intention to take Ottobock public in 2015. In February 2022, the company delayed the IPO to September 2022. According to Reuters, Ottobock announced in May 2022 that it would not pursue the IPO due to market conditions, while company insiders claimed the company was unlikely to reach the target valuation of five to six billion euros. Management The Board of Directors manages the business of Ottobock SE & Co. KGaA and determines the basic guidelines and strategic direction of the company. It consists of four non-executive directors and currently two of the four executive directors (CEO/CSO and CFO). The Chairman of the Board of Directors is Hans Georg Näder. The company's Supervisory Board is subject to European co-determination and consists of six shareholder representatives and four employee representatives. It monitors the activities of the board of directors. The supervisory board is chaired by Bernd Bohr, long-time head of the automotive division of the Bosch Group. Other members include Gesche Joost and Michael Kaschke. Since July 2022, the company has been managed operationally by four executive directors: Oliver Jakobi, chief executive officer (CEO) and chief revenue officer (CRO); Arne Kreitz, chief financial officer (CFO); Arne Jörn, chief operating officer (COO) and chief technology officer (CTO); and Martin Böhm, chief experience officer. This leadership transition followed an intervention by Hans Georg Näder, who opposed the plan to take the company public in 2022 and ousted three of the four previous managing directors, namely Philipp Schulte-Noelle, Kathrin Dahnke, and Andreas Goppelt, over a period of three days. Philipp Schulte-Noelle, a former senior executive of the German listed healthcare company Fresenius, was appointed CEO of Ottobock in 2019 amid the plan to take Ottobock public. Kathrin Dahnke was hired by Ottobock in July 2021 after leaving her position as CFO of the German lighting manufacturer Osram. Just days before her departure, Kathrin Dahnke told reporters that Ottobock still intended to go public. Locations By February 2022, the company had expanded its operations to almost 52 sites distributed across North and South America, Europe, Asia, Africa and Australia. Ottobock SE & Co. KGaA is the global market leader in technical orthopedics/prosthetics, with sales and service locations in more than 50 countries. At the end of 2022, Ottobock employed over 9,000 people worldwide.
The company's corporate headquarters are located in Duderstadt, Germany, with additional German locations in Königsee, Hanover, Traunstein, and Berlin. A competence center and research and development workshop are situated in Göttingen. Ottobock maintains research and development facilities in other locations, including Duderstadt, Salt Lake City, and Vienna. Products and Business Areas Prosthetics and orthotics Since its inception, Ottobock has concentrated on developing prosthetic devices, and it has emerged as a global leader in the field of exo-prosthetics. Another focus area is orthotics, specifically designing devices that help individuals with partial leg paralysis regain mobility. NeuroMobility Ottobock's NeuroMobility division focuses on neuro-orthotics solutions alongside its rehabilitation and wheelchair business segments. Since 2018, the development of high-tech wheelchairs has been undertaken at the company's facility in Berlin. Before production begins, these wheelchairs undergo rigorous testing on a specialised test track and in an integrated workshop to ensure quality and functionality. The production of the wheelchairs takes place in Königsee, Thuringia, where Ottobock maintains its manufacturing operations. Patient Care Ottobock operates more than 340 care centres worldwide. Additionally, the company continuously optimises processes within its orthopaedic workshops to enhance the quality and effectiveness of its treatments, orthotics and prosthetic products. Bionic Exoskeletons In 2018, Ottobock established a new business division focusing on biomechanics, specifically through its Ottobock Bionic Exoskeletons unit. This division specialises in developing and marketing exoskeletons designed for use in industrial settings to support people in physically demanding work, for example in the automotive sector and smartphone manufacturing. The exoskeletons relieve strain on muscles and joints, for example during overhead work or heavy lifting activities. In October 2021, Ottobock completed the acquisition of the US company SuitX. SuitX is a spin-off from the Robotics and Human Engineering Lab at the University of California, Berkeley, and focuses on the research and development of exoskeletons for both professional and medical applications. The acquisition aims to enhance Ottobock's development and distribution efforts in the exoskeleton technology space. Ottobock Science Center In 2009, Ottobock reestablished its presence in Berlin by opening the Science Center at Potsdamer Platz, marking a return to the city where the company was originally founded in Kreuzberg in 1919. The Science Center served as Ottobock's representative office and showroom in the capital for nine years. During its operation, it attracted over one million visitors from around the globe to its interactive exhibition, "Begreifen, was uns bewegt" ("Understanding What Moves Us"). In the summer of 2018, Ottobock relocated to the renovated former Bötzow Brewery buildings, leading to the closure of the Science Center Berlin to the public. Ottobock Future Lab Ottobock established a digital think tank known as the Ottobock Future Lab at the Bötzow Brewery, once Berlin's largest private brewery. After acquiring the site in 2010, Ottobock initiated a revitalization project based on a master plan designed by architect David Chipperfield. This redevelopment blended modern working environments with the brewery's historic brick and industrial architecture.
The Future Lab serves as a hub where new products, technologies, and supply solutions are developed and tested by cross-functional teams. The site hosts a variety of digital start-ups and also houses employees from departments including IT, Human Resources, Marketing, Corporate Strategy, Corporate Communications, and Public Affairs. Paralympic Games Ottobock has been an official global partner of the International Paralympic Committee (IPC) since 2005, and has been providing technical services at the Paralympic Games since the Summer Games in Seoul, 1988. As an official technical service partner at the Paralympic Games, Ottobock provides support to athletes by offering services free of charge. Many athletes rely heavily on technical aids that undergo extreme stresses, especially wheelchairs in contact sports, which often suffer damage. To address this, Ottobock deploys a technical team on-site during the games and establishes workshops near the Paralympic village and at key training and competition venues. The team performs repairs and maintenance on equipment, servicing athletes regardless of nationality or the brand of aids used. Advertising partners in this area include Paralympians Johannes Floors, Léon Schäfer, Anna Schaffelhuber and Heinrich Popow. The 2016 Paralympic Games in Rio de Janeiro marked the 13th games at which Ottobock provided technical services. This involved the shipping of equipment, including 15,000 spare parts, 1,100 wheelchair tyres, 70 running blades and 300 prosthetic feet, from Duderstadt to the port at Bremerhaven, by sea to Santos, and then by road to Rio de Janeiro. At Seoul in 1988, four Ottobock technicians carried out 350 repairs; in Rio de Janeiro in 2016, 100 technicians from 29 countries speaking 26 languages carried out 3,361 repairs for 1,162 athletes, including 2,745 repairs to wheelchairs, 438 to prosthetics, and 178 to orthotics. In Rio on 10 September, the IPC's president, Sir Philip Craven, announced that Ottobock had agreed to extend its worldwide partnership to the end of 2020, encompassing the 2020 Paralympic Games in Tokyo. References External links Companies based in Lower Saxony Prosthetic manufacturers Bionics Medical technology companies of Germany German brands Assistive technology Manufacturing companies established in 1919
Ottobock
Engineering,Biology
4,147
7,423,338
https://en.wikipedia.org/wiki/Stolarsky%20mean
In mathematics, the Stolarsky mean is a generalization of the logarithmic mean. It was introduced by Kenneth B. Stolarsky in 1975. Definition For two positive real numbers x, y the Stolarsky mean with parameter p is defined as: S_p(x, y) = ((x^p − y^p) / (p(x − y)))^(1/(p−1)) for x ≠ y, with S_p(x, x) = x. Derivation It is derived from the mean value theorem, which states that a secant line, cutting the graph of a differentiable function f at (x, f(x)) and (y, f(y)), has the same slope as a line tangent to the graph at some point ξ in the interval [x, y]: f′(ξ) = (f(x) − f(y)) / (x − y). The Stolarsky mean is obtained by solving this equation for ξ when choosing f(ξ) = ξ^p. Special cases lim_{p→−∞} S_p(x, y) is the minimum. S_{−1}(x, y) = √(xy) is the geometric mean. lim_{p→0} S_p(x, y) = (x − y) / (ln x − ln y) is the logarithmic mean. It can be obtained from the mean value theorem by choosing f(ξ) = ln ξ. S_{1/2}(x, y) = ((√x + √y) / 2)² is the power mean with exponent 1/2. lim_{p→1} S_p(x, y) = (1/e)(x^x / y^y)^(1/(x−y)) is the identric mean. It can be obtained from the mean value theorem by choosing f(ξ) = ξ ln ξ. S_2(x, y) = (x + y) / 2 is the arithmetic mean. S_3(x, y) = √((x² + xy + y²) / 3) is a connection to the quadratic mean and the geometric mean. lim_{p→∞} S_p(x, y) is the maximum. Generalizations One can generalize the mean to n + 1 variables by considering the mean value theorem for divided differences for the nth derivative. One obtains S_p(x_0, …, x_n) = (f^(n))^{−1}(n! · f[x_0, …, x_n]) for f(ξ) = ξ^p. See also Mean References Means
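The definition above translates directly into a short numerical routine. The following Python sketch is illustrative only and not part of the original article; the function name and the tolerance used to detect the limiting cases p → 0 and p → 1 are arbitrary choices.

```python
from math import exp, log, sqrt

def stolarsky(x, y, p, eps=1e-12):
    """Stolarsky mean S_p(x, y) for positive reals x, y (illustrative sketch)."""
    if abs(x - y) < eps:
        return x
    if abs(p) < eps:          # limit p -> 0: logarithmic mean
        return (x - y) / (log(x) - log(y))
    if abs(p - 1) < eps:      # limit p -> 1: identric mean
        return exp((x * log(x) - y * log(y)) / (x - y) - 1)
    return ((x**p - y**p) / (p * (x - y))) ** (1 / (p - 1))

# Spot-check a few of the special cases listed above, for x = 2, y = 8:
print(stolarsky(2, 8, -1), sqrt(2 * 8))    # geometric mean, both 4.0
print(stolarsky(2, 8, 2), (2 + 8) / 2)     # arithmetic mean, both 5.0
print(stolarsky(2, 8, 0))                  # logarithmic mean, about 4.328
```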
Stolarsky mean
Physics,Mathematics
229
19,673,093
https://en.wikipedia.org/wiki/Matter
In classical physics and general chemistry, matter is any substance that has mass and takes up space by having volume. All everyday objects that can be touched are ultimately composed of atoms, which are made up of interacting subatomic particles, and in everyday as well as scientific usage, matter generally includes atoms and anything made up of them, and any particles (or combination of particles) that act as if they have both rest mass and volume. However it does not include massless particles such as photons, or other energy phenomena or waves such as light or heat. Matter exists in various states (also known as phases). These include classical everyday phases such as solid, liquid, and gas – for example water exists as ice, liquid water, and gaseous steam – but other states are possible, including plasma, Bose–Einstein condensates, fermionic condensates, and quark–gluon plasma. Usually atoms can be imagined as a nucleus of protons and neutrons, and a surrounding "cloud" of orbiting electrons which "take up space". However, this is only somewhat correct because subatomic particles and their properties are governed by their quantum nature, which means they do not act as everyday objects appear to act – they can act like waves as well as particles, and they do not have well-defined sizes or positions. In the Standard Model of particle physics, matter is not a fundamental concept because the elementary constituents of atoms are quantum entities which do not have an inherent "size" or "volume" in any everyday sense of the word. Due to the exclusion principle and other fundamental interactions, some "point particles" known as fermions (quarks, leptons), and many composites and atoms, are effectively forced to keep a distance from other particles under everyday conditions; this creates the property of matter which appears to us as matter taking up space. For much of the history of the natural sciences, people have contemplated the exact nature of matter. The idea that matter was built of discrete building blocks, the so-called particulate theory of matter, appeared in both ancient Greece and ancient India. Early philosophers who proposed the particulate theory of matter include the ancient Indian philosopher Kanada (c. 6th–century BCE or after), pre-Socratic Greek philosopher Leucippus (~490 BCE), and pre-Socratic Greek philosopher Democritus (~470–380 BCE). Related concepts Comparison with mass Matter should not be confused with mass, as the two are not the same in modern physics. Matter is a general term describing any 'physical substance'. By contrast, mass is not a substance but rather an extensive property of matter and other substances or systems; various types of mass are defined within physics – including but not limited to rest mass, inertial mass, relativistic mass, and mass–energy. While there are different views on what should be considered matter, the mass of a substance has exact scientific definitions. Another difference is that matter has an "opposite" called antimatter, but mass has no opposite—there is no such thing as "anti-mass" or negative mass, so far as is known, although scientists do discuss the concept. Antimatter has the same (i.e. positive) mass property as its normal matter counterpart. Different fields of science use the term matter in different, and sometimes incompatible, ways. Some of these ways are based on loose historical meanings from a time when there was no reason to distinguish mass from simply a quantity of matter. 
As such, there is no single universally agreed scientific meaning of the word "matter". Scientifically, the term "mass" is well-defined, but "matter" can be defined in several ways. Sometimes in the field of physics "matter" is simply equated with particles that exhibit rest mass (i.e., that cannot travel at the speed of light), such as quarks and leptons. However, in both physics and chemistry, matter exhibits both wave-like and particle-like properties, the so-called wave–particle duality. Relation with chemical substance Definition Based on atoms A definition of "matter" based on its physical and chemical structure is: matter is made up of atoms. Such atomic matter is also sometimes termed ordinary matter. As an example, deoxyribonucleic acid molecules (DNA) are matter under this definition because they are made of atoms. This definition can be extended to include charged atoms and molecules, so as to include plasmas (gases of ions) and electrolytes (ionic solutions), which are not obviously included in the atoms definition. Alternatively, one can adopt the protons, neutrons, and electrons definition. Based on protons, neutrons and electrons A definition of "matter" more fine-scale than the atoms and molecules definition is: matter is made up of what atoms and molecules are made of, meaning anything made of positively charged protons, neutral neutrons, and negatively charged electrons. This definition goes beyond atoms and molecules, however, to include substances made from these building blocks that are not simply atoms or molecules, for example electron beams in an old cathode ray tube television, or white dwarf matter—typically, carbon and oxygen nuclei in a sea of degenerate electrons. At a microscopic level, the constituent "particles" of matter such as protons, neutrons, and electrons obey the laws of quantum mechanics and exhibit wave–particle duality. At an even deeper level, protons and neutrons are made up of quarks and the force fields (gluons) that bind them together, leading to the next definition. Based on quarks and leptons As seen in the above discussion, many early definitions of what can be called "ordinary matter" were based upon its structure or "building blocks". On the scale of elementary particles, a definition that follows this tradition can be stated as: "ordinary matter is everything that is composed of quarks and leptons", or "ordinary matter is everything that is composed of any elementary fermions except antiquarks and antileptons". The connection between these formulations follows. Leptons (the most famous being the electron), and quarks (of which baryons, such as protons and neutrons, are made) combine to form atoms, which in turn form molecules. Because atoms and molecules are said to be matter, it is natural to phrase the definition as: "ordinary matter is anything that is made of the same things that atoms and molecules are made of". (However, notice that one also can make from these building blocks matter that is not atoms or molecules.) Then, because electrons are leptons, and protons and neutrons are made of quarks, this definition in turn leads to the definition of matter as being "quarks and leptons", which are two of the four types of elementary fermions (the other two being antiquarks and antileptons, which can be considered antimatter as described later). Carithers and Grannis state: "Ordinary matter is composed entirely of first-generation particles, namely the [up] and [down] quarks, plus the electron and its neutrino." 
(Higher-generation particles quickly decay into first-generation particles, and thus are not commonly encountered.) This definition of ordinary matter is more subtle than it first appears. All the particles that make up ordinary matter (leptons and quarks) are elementary fermions, while all the force carriers are elementary bosons. The W and Z bosons that mediate the weak force are not made of quarks or leptons, and so are not ordinary matter, even if they have mass. In other words, mass is not something that is exclusive to ordinary matter. The quark–lepton definition of ordinary matter, however, identifies not only the elementary building blocks of matter, but also includes composites made from the constituents (atoms and molecules, for example). Such composites contain an interaction energy that holds the constituents together, and may constitute the bulk of the mass of the composite. As an example, to a great extent, the mass of an atom is simply the sum of the masses of its constituent protons, neutrons and electrons. However, digging deeper, the protons and neutrons are made up of quarks bound together by gluon fields (see dynamics of quantum chromodynamics) and these gluon fields contribute significantly to the mass of hadrons. In other words, most of what composes the "mass" of ordinary matter is due to the binding energy of quarks within protons and neutrons. For example, the sum of the rest masses of the three quarks in a nucleon is only roughly 10 MeV/c², which is low compared to the mass of a nucleon (approximately 938 MeV/c²). The bottom line is that most of the mass of everyday objects comes from the interaction energy of their elementary components. The Standard Model groups matter particles into three generations, where each generation consists of two quarks and two leptons. The first generation is the up and down quarks, the electron and the electron neutrino; the second includes the charm and strange quarks, the muon and the muon neutrino; the third generation consists of the top and bottom quarks and the tau and tau neutrino. The most natural explanation for this would be that quarks and leptons of higher generations are excited states of the first generations. If this turns out to be the case, it would imply that quarks and leptons are composite particles, rather than elementary particles. This quark–lepton definition of matter also leads to what can be described as "conservation of (net) matter" laws—discussed later below. Alternatively, one could return to the mass–volume–space concept of matter, leading to the next definition, in which antimatter becomes included as a subclass of matter. Based on elementary fermions (mass, volume, and space) A common or traditional definition of matter is "anything that has mass and volume (occupies space)". For example, a car would be said to be made of matter, as it has mass and volume (occupies space). The observation that matter occupies space goes back to antiquity. However, an explanation for why matter occupies space is recent, and is argued to be a result of the phenomenon described in the Pauli exclusion principle, which applies to fermions. Two particular examples where the exclusion principle clearly relates matter to the occupation of space are white dwarf stars and neutron stars, discussed further below. Thus, matter can be defined as everything composed of elementary fermions.
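The claim above that quark rest masses account for only a small share of a nucleon's mass can be checked with a back-of-the-envelope calculation. The Python sketch below is purely illustrative and not part of the original article; the quark masses used are approximate current-quark values.

```python
# Rough estimate of how little of a proton's mass comes from quark rest masses.
m_up, m_down = 2.2, 4.7   # approximate current-quark masses, MeV/c^2
m_proton = 938.3          # proton mass, MeV/c^2

rest_mass_sum = 2 * m_up + m_down   # a proton is two up quarks and one down quark (uud)
fraction = rest_mass_sum / m_proton

print(f"sum of quark rest masses: {rest_mass_sum:.1f} MeV/c^2")  # about 9 MeV/c^2
print(f"share of the proton mass: {fraction:.1%}")               # roughly 1%
# The remaining ~99% is interaction (binding) energy of the quark-gluon system.
```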
Although we do not encounter them in everyday life, antiquarks (such as the antiproton) and antileptons (such as the positron) are the antiparticles of the quark and the lepton, are elementary fermions as well, and have essentially the same properties as quarks and leptons, including the applicability of the Pauli exclusion principle which can be said to prevent two particles from being in the same place at the same time (in the same state), i.e. makes each particle "take up space". This particular definition leads to matter being defined to include anything made of these antimatter particles as well as the ordinary quark and lepton, and thus also anything made of mesons, which are unstable particles made up of a quark and an antiquark. In general relativity and cosmology In the context of relativity, mass is not an additive quantity, in the sense that one cannot add the rest masses of particles in a system to get the total rest mass of the system. In relativity, usually a more general view is that it is not the sum of rest masses, but the energy–momentum tensor that quantifies the amount of matter. This tensor gives the rest mass for the entire system. Matter, therefore, is sometimes considered as anything that contributes to the energy–momentum of a system, that is, anything that is not purely gravity. This view is commonly held in fields that deal with general relativity such as cosmology. In this view, light and other massless particles and fields are all part of matter. Structure In particle physics, fermions are particles that obey Fermi–Dirac statistics. Fermions can be elementary, like the electron—or composite, like the proton and neutron. In the Standard Model, there are two types of elementary fermions: quarks and leptons, which are discussed next. Quarks Quarks are massive particles of spin-1/2, implying that they are fermions. They carry an electric charge of −1/3 e (down-type quarks) or +2/3 e (up-type quarks). For comparison, an electron has a charge of −1 e. They also carry colour charge, which is the equivalent of the electric charge for the strong interaction. Quarks also undergo radioactive decay, meaning that they are subject to the weak interaction. Baryonic Baryons are strongly interacting fermions, and so are subject to Fermi–Dirac statistics. Amongst the baryons are the protons and neutrons, which occur in atomic nuclei, but many other unstable baryons exist as well. The term baryon usually refers to triquarks—particles made of three quarks. Also, "exotic" baryons made of four quarks and one antiquark are known as pentaquarks, but their existence is not generally accepted. Baryonic matter is the part of the universe that is made of baryons (including all atoms). This part of the universe does not include dark energy, dark matter, black holes or various forms of degenerate matter, such as those that compose white dwarf stars and neutron stars. Microwave light seen by Wilkinson Microwave Anisotropy Probe (WMAP) suggests that only about 4.6% of that part of the universe within range of the best telescopes (that is, matter that may be visible because light could reach us from it) is made of baryonic matter. About 26.8% is dark matter, and about 68.3% is dark energy. The great majority of ordinary matter in the universe is unseen, since visible stars and gas inside galaxies and clusters account for less than 10 per cent of the ordinary matter contribution to the mass–energy density of the universe.
Hadronic Hadronic matter can refer to 'ordinary' baryonic matter, made from hadrons (baryons and mesons), or quark matter (a generalisation of atomic nuclei), i.e. the 'low' temperature QCD matter. It includes degenerate matter and the result of high energy heavy nuclei collisions. Degenerate In physics, degenerate matter refers to the ground state of a gas of fermions at a temperature near absolute zero. The Pauli exclusion principle requires that only two fermions can occupy a quantum state, one spin-up and the other spin-down. Hence, at zero temperature, the fermions fill up sufficient levels to accommodate all the available fermions—and in the case of many fermions, the maximum kinetic energy (called the Fermi energy) and the pressure of the gas become very large, and depend on the number of fermions rather than the temperature, unlike normal states of matter. Degenerate matter is thought to occur during the evolution of heavy stars. The demonstration by Subrahmanyan Chandrasekhar that white dwarf stars have a maximum allowed mass because of the exclusion principle caused a revolution in the theory of star evolution. Degenerate matter includes the part of the universe that is made up of neutron stars and white dwarfs. Strange Strange matter is a particular form of quark matter, usually thought of as a liquid of up, down, and strange quarks. It is contrasted with nuclear matter, which is a liquid of neutrons and protons (which themselves are built out of up and down quarks), and with non-strange quark matter, which is a quark liquid that contains only up and down quarks. At high enough density, strange matter is expected to be color superconducting. Strange matter is hypothesized to occur in the core of neutron stars, or, more speculatively, as isolated droplets that may vary in size from femtometers (strangelets) to kilometers (quark stars). Two meanings In particle physics and astrophysics, the term is used in two ways, one broader and the other more specific. The broader meaning is just quark matter that contains three flavors of quarks: up, down, and strange. In this definition, there is a critical pressure and an associated critical density, and when nuclear matter (made of protons and neutrons) is compressed beyond this density, the protons and neutrons dissociate into quarks, yielding quark matter (probably strange matter). The narrower meaning is quark matter that is more stable than nuclear matter. The idea that this could happen is the "strange matter hypothesis" of Bodmer and Witten. In this definition, the critical pressure is zero: the true ground state of matter is always quark matter. The nuclei that we see in the matter around us, which are droplets of nuclear matter, are actually metastable, and given enough time (or the right external stimulus) would decay into droplets of strange matter, i.e. strangelets. Leptons Leptons are particles of spin-1/2, meaning that they are fermions. They carry an electric charge of −1 e (charged leptons) or 0 e (neutrinos). Unlike quarks, leptons do not carry colour charge, meaning that they do not experience the strong interaction. Leptons also undergo radioactive decay, meaning that they are subject to the weak interaction. Leptons are massive particles, and are therefore subject to gravity. Phases In bulk, matter can exist in several different forms, or states of aggregation, known as phases, depending on ambient pressure, temperature and volume.
A phase is a form of matter that has a relatively uniform chemical composition and physical properties (such as density, specific heat, refractive index, and so forth). These phases include the three familiar ones (solids, liquids, and gases), as well as more exotic states of matter (such as plasmas, superfluids, supersolids, Bose–Einstein condensates, ...). A fluid may be a liquid, gas or plasma. There are also paramagnetic and ferromagnetic phases of magnetic materials. As conditions change, matter may change from one phase into another. These phenomena are called phase transitions and are studied in the field of thermodynamics. In nanomaterials, the vastly increased ratio of surface area to volume results in matter that can exhibit properties entirely different from those of bulk material, and not well described by any bulk phase (see nanomaterials for more details). Phases are sometimes called states of matter, but this term can lead to confusion with thermodynamic states. For example, two gases maintained at different pressures are in different thermodynamic states (different pressures), but in the same phase (both are gases). Antimatter Antimatter is matter that is composed of the antiparticles of those that constitute ordinary matter. If a particle and its antiparticle come into contact with each other, the two annihilate; that is, they may both be converted into other particles with equal energy in accordance with Albert Einstein's equation E = mc². These new particles may be high-energy photons (gamma rays) or other particle–antiparticle pairs. The resulting particles are endowed with an amount of kinetic energy equal to the difference between the rest mass of the products of the annihilation and the rest mass of the original particle–antiparticle pair, which is often quite large. Depending on which definition of "matter" is adopted, antimatter can be said to be a particular subclass of matter, or the opposite of matter. Antimatter is not found naturally on Earth, except very briefly and in vanishingly small quantities (as the result of radioactive decay, lightning or cosmic rays). This is because antimatter that came to exist on Earth outside the confines of a suitable physics laboratory would almost instantly meet the ordinary matter that Earth is made of, and be annihilated. Antiparticles and some stable antimatter (such as antihydrogen) can be made in tiny amounts, but not in enough quantity to do more than test a few of its theoretical properties. There is considerable speculation both in science and science fiction as to why the observable universe is apparently almost entirely matter (in the sense of quarks and leptons but not antiquarks or antileptons), and whether other places are almost entirely antimatter (antiquarks and antileptons) instead. In the early universe, it is thought that matter and antimatter were equally represented, and the disappearance of antimatter requires an asymmetry in physical laws called CP (charge–parity) symmetry violation, which can be obtained from the Standard Model, but at this time the apparent asymmetry of matter and antimatter in the visible universe is one of the great unsolved problems in physics. Possible processes by which it came about are explored in more detail under baryogenesis. Formally, antimatter particles can be defined by their negative baryon number or lepton number, while "normal" (non-antimatter) matter particles have positive baryon or lepton number. These two classes of particles are the antiparticle partners of one another.
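As a simple worked example of the annihilation energy given by E = mc² (an illustrative addition, not part of the original article), consider an electron and a positron annihilating at rest:

```python
# Energy released when an electron-positron pair at rest annihilates (E = mc^2).
m_e = 9.1093837015e-31    # electron (and positron) rest mass in kg
c = 2.99792458e8          # speed of light in m/s
eV = 1.602176634e-19      # joules per electronvolt

E_total = 2 * m_e * c**2                  # both rest masses are converted to energy
print(f"{E_total:.3e} J")                 # about 1.64e-13 J
print(f"{E_total / eV / 1e6:.3f} MeV")    # about 1.022 MeV, carried away by two 511 keV photons
```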
In October 2017, scientists reported further evidence that matter and antimatter, equally produced at the Big Bang, are identical, should completely annihilate each other and, as a result, the universe should not exist. This implies that there must be something, as yet unknown to scientists, that either stopped the complete mutual destruction of matter and antimatter in the early forming universe, or that gave rise to an imbalance between the two forms. Conservation Two quantities that can define an amount of matter in the quark–lepton sense (and antimatter in an antiquark–antilepton sense), baryon number and lepton number, are conserved in the Standard Model. A baryon such as the proton or neutron has a baryon number of one, and a quark, because there are three in a baryon, is given a baryon number of 1/3. So the net amount of matter, as measured by the number of quarks (minus the number of antiquarks, which each have a baryon number of −1/3), which is proportional to baryon number, and number of leptons (minus antileptons), which is called the lepton number, is practically impossible to change in any process. Even in a nuclear bomb, none of the baryons (protons and neutrons of which the atomic nuclei are composed) are destroyed—there are as many baryons after as before the reaction, so none of these matter particles are actually destroyed and none are even converted to non-matter particles (like photons of light or radiation). Instead, nuclear (and perhaps chromodynamic) binding energy is released, as these baryons become bound into mid-size nuclei having less energy (and, equivalently, less mass) per nucleon compared to the original small (hydrogen) and large (plutonium etc.) nuclei. Even in electron–positron annihilation, there is no net matter being destroyed, because there was zero net matter (zero total lepton number and baryon number) to begin with before the annihilation—one lepton minus one antilepton equals zero net lepton number—and this net amount matter does not change as it simply remains zero after the annihilation. In short, matter, as defined in physics, refers to baryons and leptons. The amount of matter is defined in terms of baryon and lepton number. Baryons and leptons can be created, but their creation is accompanied by antibaryons or antileptons; and they can be destroyed by annihilating them with antibaryons or antileptons. Since antibaryons/antileptons have negative baryon/lepton numbers, the overall baryon/lepton numbers are not changed, so matter is conserved. However, baryons/leptons and antibaryons/antileptons all have positive mass, so the total amount of mass is not conserved. Further, outside of natural or artificial nuclear reactions, there is almost no antimatter generally available in the universe (see baryon asymmetry and leptogenesis), so particle annihilation is rare in normal circumstances. Dark Ordinary matter, in the quarks and leptons definition, constitutes about 4% of the energy of the observable universe. The remaining energy is theorized to be due to exotic forms, of which 23% is dark matter and 73% is dark energy. In astrophysics and cosmology, dark matter is matter of unknown composition that does not emit or reflect enough electromagnetic radiation to be observed directly, but whose presence can be inferred from gravitational effects on visible matter. Observational evidence of the early universe and the Big Bang theory require that this matter have energy and mass, but not be composed of ordinary baryons (protons and neutrons). 
The commonly accepted view is that most of the dark matter is non-baryonic in nature. As such, it is composed of particles as yet unobserved in the laboratory. Perhaps they are supersymmetric particles, which are not Standard Model particles but relics formed at very high energies in the early phase of the universe and still floating about. Energy In cosmology, dark energy is the name given to the source of the repelling influence that is accelerating the rate of expansion of the universe. Its precise nature is currently a mystery, although its effects can reasonably be modeled by assigning matter-like properties such as energy density and pressure to the vacuum itself. Exotic Exotic matter is a concept of particle physics, which may include dark matter and dark energy but goes further to include any hypothetical material that violates one or more of the properties of known forms of matter. Some such materials might possess hypothetical properties like negative mass. Historical and philosophical study Classical antiquity (c. 600 BCE–c. 322 BCE) In ancient India, the Buddhist, Hindu, and Jain philosophical traditions each posited that matter was made of atoms (paramanu, pudgala) that were "eternal, indestructible, without parts, and innumerable" and which associated or dissociated to form more complex matter according to the laws of nature. They coupled their ideas of soul, or lack thereof, into their theory of matter. The strongest developers and defenders of this theory were the Nyaya-Vaisheshika school, with the ideas of the Indian philosopher Kanada being the most followed. Buddhist philosophers also developed these ideas in late 1st-millennium CE, ideas that were similar to the Vaisheshika school, but ones that did not include any soul or conscience. Jain philosophers included the soul (jiva), adding qualities such as taste, smell, touch, and color to each atom. They extended the ideas found in early literature of the Hindus and Buddhists by adding that atoms are either humid or dry, and this quality cements matter. They also proposed the possibility that atoms combine because of the attraction of opposites, and the soul attaches to these atoms, transforms with karma residue, and transmigrates with each rebirth. In ancient Greece, pre-Socratic philosophers speculated the underlying nature of the visible world. Thales (c. 624 BCE–c. 546 BCE) regarded water as the fundamental material of the world. Anaximander (c. 610 BCE–c. 546 BCE) posited that the basic material was wholly characterless or limitless: the Infinite (apeiron). Anaximenes (flourished 585 BCE, d. 528 BCE) posited that the basic stuff was pneuma or air. Heraclitus (c. 535 BCE–c. 475 BCE) seems to say the basic element is fire, though perhaps he means that all is change. Empedocles (c. 490–430 BCE) spoke of four elements of which everything was made: earth, water, air, and fire. Meanwhile, Parmenides argued that change does not exist, and Democritus argued that everything is composed of minuscule, inert bodies of all shapes called atoms, a philosophy called atomism. All of these notions had deep philosophical problems. Aristotle (384 BCE–322 BCE) was the first to put the conception on a sound philosophical basis, which he did in his natural philosophy, especially in Physics book I. He adopted as reasonable suppositions the four Empedoclean elements, but added a fifth, aether. Nevertheless, these elements are not basic in Aristotle's mind. 
Rather they, like everything else in the visible world, are composed of the basic principles matter and form. The word Aristotle uses for matter, ὕλη (hyle or hule), can be literally translated as wood or timber, that is, "raw material" for building. Indeed, Aristotle's conception of matter is intrinsically linked to something being made or composed. In other words, in contrast to the early modern conception of matter as simply occupying space, matter for Aristotle is definitionally linked to process or change: matter is what underlies a change of substance. For example, a horse eats grass: the horse changes the grass into itself; the grass as such does not persist in the horse, but some aspect of it—its matter—does. The matter is not specifically described (e.g., as atoms), but consists of whatever persists in the change of substance from grass to horse. Matter in this understanding does not exist independently (i.e., as a substance), but exists interdependently (i.e., as a "principle") with form and only insofar as it underlies change. It can be helpful to conceive of the relationship of matter and form as very similar to that between parts and whole. For Aristotle, matter as such can only receive actuality from form; it has no activity or actuality in itself, similar to the way that parts as such only have their existence in a whole (otherwise they would be independent wholes). Age of Enlightenment French philosopher René Descartes (1596–1650) originated the modern conception of matter. He was primarily a geometer. Unlike Aristotle, who deduced the existence of matter from the physical reality of change, Descartes arbitrarily postulated matter to be an abstract, mathematical substance that occupies space: For Descartes, matter has only the property of extension, so its only activity aside from locomotion is to exclude other bodies: this is the mechanical philosophy. Descartes makes an absolute distinction between mind, which he defines as unextended, thinking substance, and matter, which he defines as unthinking, extended substance. They are independent things. In contrast, Aristotle defines matter and the formal/forming principle as complementary principles that together compose one independent thing (substance). In short, Aristotle defines matter (roughly speaking) as what things are actually made of (with a potential independent existence), but Descartes elevates matter to an actual independent thing in itself. The continuity and difference between Descartes's and Aristotle's conceptions is noteworthy. In both conceptions, matter is passive or inert. In the respective conceptions matter has different relationships to intelligence. For Aristotle, matter and intelligence (form) exist together in an interdependent relationship, whereas for Descartes, matter and intelligence (mind) are definitionally opposed, independent substances. Descartes's justification for restricting the inherent qualities of matter to extension is its permanence, but his real criterion is not permanence (which equally applied to color and resistance), but his desire to use geometry to explain all material properties. Like Descartes, Hobbes, Boyle, and Locke argued that the inherent properties of bodies were limited to extension, and that so-called secondary qualities, like color, were only products of human perception. English philosopher Isaac Newton (1643–1727) inherited Descartes's mechanical conception of matter. 
In the third of his "Rules of Reasoning in Philosophy", Newton lists the universal qualities of matter as "extension, hardness, impenetrability, mobility, and inertia". Similarly in Optics he conjectures that God created matter as "solid, massy, hard, impenetrable, movable particles", which were "...even so very hard as never to wear or break in pieces". The "primary" properties of matter were amenable to mathematical description, unlike "secondary" qualities such as color or taste. Like Descartes, Newton rejected the essential nature of secondary qualities. Newton developed Descartes's notion of matter by restoring to matter intrinsic properties in addition to extension (at least on a limited basis), such as mass. Newton's use of gravitational force, which worked "at a distance", effectively repudiated Descartes's mechanics, in which interactions happened exclusively by contact. Though Newton's gravity would seem to be a power of bodies, Newton himself did not admit it to be an essential property of matter. Carrying the logic forward more consistently, Joseph Priestley (1733–1804) argued that corporeal properties transcend contact mechanics: chemical properties require the capacity for attraction. He argued matter has other inherent powers besides the so-called primary qualities of Descartes, et al. 19th and 20th centuries Since Priestley's time, there has been a massive expansion in knowledge of the constituents of the material world (viz., molecules, atoms, subatomic particles). In the 19th century, following the development of the periodic table, and of atomic theory, atoms were seen as being the fundamental constituents of matter; atoms formed molecules and compounds. The common definition in terms of occupying space and having mass is in contrast with most physical and chemical definitions of matter, which rely instead upon its structure and upon attributes not necessarily related to volume and mass. At the turn of the nineteenth century, the knowledge of matter began a rapid evolution. Aspects of the Newtonian view still held sway. James Clerk Maxwell discussed matter in his work Matter and Motion. He carefully separates "matter" from space and time, and defines it in terms of the object referred to in Newton's first law of motion. However, the Newtonian picture was not the whole story. In the 19th century, the term "matter" was actively discussed by a host of scientists and philosophers, and a brief outline can be found in Levere. A textbook discussion from 1870 suggests matter is what is made up of atoms:Three divisions of matter are recognized in science: masses, molecules and atoms. A Mass of matter is any portion of matter appreciable by the senses. A Molecule is the smallest particle of matter into which a body can be divided without losing its identity. An Atom is a still smaller particle produced by division of a molecule. Rather than simply having the attributes of mass and occupying space, matter was held to have chemical and electrical properties. In 1909 the famous physicist J. J. Thomson (1856–1940) wrote about the "constitution of matter" and was concerned with the possible connection between matter and electrical charge. In the late 19th century with the discovery of the electron, and in the early 20th century, with the Geiger–Marsden experiment discovery of the atomic nucleus, and the birth of particle physics, matter was seen as made up of electrons, protons and neutrons interacting to form atoms. 
There then developed an entire literature concerning the "structure of matter", ranging from the "electrical structure" in the early 20th century, to the more recent "quark structure of matter", introduced as early as 1992 by Jacob with the remark: "Understanding the quark structure of matter has been one of the most important advances in contemporary physics." In this connection, physicists speak of matter fields, and speak of particles as "quantum excitations of a mode of the matter field". And here is a quote from de Sabbata and Gasperini: "With the word 'matter' we denote, in this context, the sources of the interactions, that is spinor fields (like quarks and leptons), which are believed to be the fundamental components of matter, or scalar fields, like the Higgs particles, which are used to introduced mass in a gauge theory (and that, however, could be composed of more fundamental fermion fields)." Protons and neutrons however are not indivisible: they can be divided into quarks. And electrons are part of a particle family called leptons. Both quarks and leptons are elementary particles, and were in 2004 seen by authors of an undergraduate text as being the fundamental constituents of matter. These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see Quantum gravity and Graviton) to the frustration of theoreticians like Stephen Hawking. Interactions between quarks and leptons are the result of an exchange of force-carrying particles such as photons between quarks and leptons. The force-carrying particles are not themselves building blocks. As one consequence, mass and energy (which to our present knowledge cannot be created or destroyed) cannot always be related to matter (which can be created out of non-matter particles such as photons, or even out of pure energy, such as kinetic energy). Force mediators are usually not considered matter: the mediators of the electric force (photons) possess energy (see Planck relation) and the mediators of the weak force (W and Z bosons) have mass, but neither are considered matter either. However, while these quanta are not considered matter, they do contribute to the total mass of atoms, subatomic particles, and all systems that contain them. Summary The modern conception of matter has been refined many times in history, in light of the improvement in knowledge of just what the basic building blocks are, and in how they interact. The term "matter" is used throughout physics in a wide variety of contexts: for example, one refers to "condensed matter physics", "elementary matter", "partonic" matter, "dark" matter, "anti"-matter, "strange" matter, and "nuclear" matter. In discussions of matter and antimatter, the former has been referred to by Alfvén as koinomatter (Gk. common matter). It is fair to say that in physics, there is no broad consensus as to a general definition of matter, and the term "matter" usually is used in conjunction with a specifying modifier. The history of the concept of matter is a history of the fundamental length scales used to define matter. Different building blocks apply depending upon whether one defines matter on an atomic or elementary particle level. 
One may use a definition that matter is atoms, or that matter is hadrons, or that matter is leptons and quarks depending upon the scale at which one wishes to define matter. These quarks and leptons interact through four fundamental forces: gravity, electromagnetism, weak interactions, and strong interactions. The Standard Model of particle physics is currently the best explanation for all of physics, but despite decades of efforts, gravity cannot yet be accounted for at the quantum level; it is only described by classical physics (see Quantum gravity and Graviton). See also Antimatter Ambiplasma Antihydrogen Antiparticle Particle accelerator Cosmology Cosmological constant Friedmann equations Motion Physical ontology Dark matter Axion Minimal Supersymmetric Standard Model Neutralino Nonbaryonic dark matter Scalar field dark matter Philosophy Atomism Materialism Physicalism Substance theory Other Mass–energy equivalence Mattergy Pattern formation Periodic Systems of Small Molecules Programmable matter References Further reading Stephen Toulmin and June Goodfield, The Architecture of Matter (Chicago: University of Chicago Press, 1962). Richard J. Connell, Matter and Becoming (Chicago: The Priory Press, 1966). Ernan McMullin, The Concept of Matter in Greek and Medieval Philosophy (Notre Dame, Indiana: Univ. of Notre Dame Press, 1965). Ernan McMullin, The Concept of Matter in Modern Philosophy (Notre Dame, Indiana: University of Notre Dame Press, 1978). External links Visionlearning Module on Matter Matter in the universe How much Matter is in the Universe? NASA on superfluid core of neutron star Matter and Energy: A False Dichotomy – Conversations About Science with Theoretical Physicist Matt Strassler
Matter
Physics
8,656
14,756,767
https://en.wikipedia.org/wiki/KCNA2
Potassium voltage-gated channel subfamily A member 2, also known as Kv1.2, is a protein that in humans is encoded by the KCNA2 gene. Function Potassium channels represent the most complex class of voltage-gated ion channels from both functional and structural standpoints. Their diverse functions include regulating neurotransmitter release, heart rate, insulin secretion, neuronal excitability, epithelial electrolyte transport, smooth muscle contraction, and cell volume. Four sequence-related potassium channel genes - shaker, shaw, shab, and shal - have been identified in Drosophila, and each has been shown to have human homolog(s). This gene encodes a member of the potassium channel, voltage-gated, shaker-related subfamily. This member contains six membrane-spanning domains with a shaker-type repeat in the fourth segment. It belongs to the delayed rectifier class, members of which allow nerve cells to efficiently repolarize following an action potential. The coding region of this gene is intronless, and the gene is clustered with genes KCNA3 and KCNA10 on chromosome 1. Interactions KCNA2 has been shown to interact with KCNA4, DLG4, PTPRA, KCNAB2, RHOA, and cortactin. Clinical Mutations in this gene have been associated with hereditary spastic paraplegia. See also Voltage-gated potassium channel Pandinotoxin References Further reading External links Ion channels
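The "delayed rectifier" behaviour mentioned above, in which the channel opens with a delay upon depolarization and passes the outward potassium current that repolarizes the neuron after an action potential, can be sketched with the classic Hodgkin-Huxley gating formalism. The snippet below is a generic, illustrative sketch using the textbook squid-axon rate constants; it is not a fitted model of human Kv1.2, and all function names and parameter values are assumptions made for the example.

# Illustrative Hodgkin-Huxley-style delayed-rectifier K+ current.
# Generic squid-axon parameters, NOT a fitted model of human Kv1.2;
# shown only to sketch how delayed-rectifier channels repolarize a neuron.
import math

g_K = 36.0     # maximal K+ conductance (mS/cm^2), classic HH value
E_K = -77.0    # K+ reversal potential (mV)

def alpha_n(V):
    # Opening rate of the n gate (1/ms); HH empirical fit.
    return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))

def beta_n(V):
    # Closing rate of the n gate (1/ms).
    return 0.125 * math.exp(-(V + 65.0) / 80.0)

def simulate(V_step=0.0, t_end=20.0, dt=0.01):
    """Hold the membrane at V_step (mV) and track the gate n and current I_K."""
    n = alpha_n(-65.0) / (alpha_n(-65.0) + beta_n(-65.0))  # resting value of n
    t = 0.0
    while t < t_end:
        n += dt * (alpha_n(V_step) * (1.0 - n) - beta_n(V_step) * n)
        t += dt
    I_K = g_K * n**4 * (V_step - E_K)   # delayed-rectifier current (uA/cm^2)
    return n, I_K

if __name__ == "__main__":
    n, I_K = simulate(V_step=0.0)
    print(f"n after 20 ms at 0 mV: {n:.3f}, outward I_K: {I_K:.1f} uA/cm^2")

Run as written, it shows the gating variable n rising from its resting value of roughly 0.32 toward roughly 0.91 during a sustained depolarization to 0 mV, producing a large outward potassium current, which is the qualitative behaviour that lets delayed rectifiers restore the resting potential after a spike.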
KCNA2
Chemistry
311
14,284,351
https://en.wikipedia.org/wiki/Ubora%20Towers
The Ubora Towers is a complex of two towers in the Business Bay district of Dubai, United Arab Emirates. The development consists of the Ubora Commercial Tower and the Ubora Residential Tower. Construction of the Ubora Towers was completed in 2011. It was sold to Senyar Real Estate in mid-2018. The Ubora Commercial Tower, also known as Ubora Tower 1, is a 58-story building. It has a total architectural height of 263 metres (862 ft). The Ubora Residential Tower, or Ubora Tower 2, is a 20-floor structure. The commercial skyscraper was topped out in 2011. The complex was designed by the architectural firm Aedas, with lighting design by AWA Lighting Designers, and is currently managed by Jones Lang LaSalle. There is a public transport bus stop named Ubora Tower on both sides of the road, used by people working in the nearby towers to board and alight. All buses passing the tower, except buses 26 and 50, go to the nearest metro station, Business Bay Metro Station. See also List of tallest buildings in Dubai References External links Ubora Tower on Aedas Ubora Tower on Jones Lang LaSalle Ubora Tower on CTBUH on Emporis Ubora Tower on SkyscraperPage Ubora Tower on ProTenders Ubora Tower on ProMaintaince Residential buildings completed in 2011 Commercial buildings completed in 2011 Residential skyscrapers in Dubai Andrew Bromberg buildings High-tech architecture Postmodern architecture
Ubora Towers
Engineering
315
42,429,764
https://en.wikipedia.org/wiki/Twisted%20polynomial%20ring
In mathematics, a twisted polynomial is a polynomial over a field of characteristic p in the variable τ representing the Frobenius map x ↦ x^p. In contrast to normal polynomials, multiplication of these polynomials is not commutative, but satisfies the commutation rule τa = a^p τ for all a in the base field. Over an infinite field, the twisted polynomial ring is isomorphic to the ring of additive polynomials, but where multiplication on the latter is given by composition rather than usual multiplication. However, it is often easier to compute in the twisted polynomial ring; this can be applied especially in the theory of Drinfeld modules. Definition Let k be a field of characteristic p. The twisted polynomial ring k{τ} is defined as the set of polynomials a_0 + a_1 τ + ... + a_n τ^n in the variable τ and coefficients a_i in k. It is endowed with a ring structure with the usual addition, but with a non-commutative multiplication that can be summarized with the relation τa = a^p τ for a in k. Repeated application of this relation yields a formula for the multiplication of any two twisted polynomials. As an example we perform such a multiplication: (a + bτ)(c + dτ) = ac + (ad + b c^p)τ + b d^p τ^2. Properties The morphism k{τ} → k[x] sending a_0 + a_1 τ + ... + a_n τ^n to the additive polynomial a_0 x + a_1 x^p + ... + a_n x^(p^n) defines a ring homomorphism sending a twisted polynomial to an additive polynomial. Here, multiplication on the right hand side is given by composition of polynomials. For example, composing the additive polynomials ax + bx^p and cx + dx^p gives acx + (ad + bc^p)x^p + bd^p x^(p^2), matching the twisted product above, using the fact that in characteristic p we have the Freshman's dream (x + y)^p = x^p + y^p. The homomorphism is clearly injective, but is surjective if and only if k is infinite. The failure of surjectivity when k is finite is due to the existence of non-zero polynomials which induce the zero function on k (e.g. x^q − x over the finite field with q elements). Even though this ring is not commutative, it still possesses (left and right) division algorithms. References Algebraic number theory Finite fields
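To make the non-commutative multiplication rule concrete in code, the following is a minimal Python sketch; the function names (twist_mul, f9_mul) and the choice of F_9 as base field are assumptions made for the illustration, not part of the article. Note that over the prime field F_p the Frobenius map a ↦ a^p is the identity, so a genuinely twisted example needs a larger field; the sketch therefore works over F_9 = F_3(i) with i^2 = -1, storing elements as pairs (a, b) meaning a + b*i.

# Minimal sketch of the twisted polynomial ring k{tau} with the rule
# tau * a = a^p * tau, taking k = F_9 = F_3(i), i^2 = -1.
# Names and representation choices are illustrative assumptions.
p = 3

def f9_mul(x, y):
    a, b = x; c, d = y
    return ((a*c - b*d) % p, (a*d + b*c) % p)   # (a+bi)(c+di) with i^2 = -1

def f9_add(x, y):
    return ((x[0] + y[0]) % p, (x[1] + y[1]) % p)

def f9_pow(x, n):
    r = (1, 0)                                   # multiplicative identity of F_9
    for _ in range(n):
        r = f9_mul(r, x)
    return r

def twist_mul(f, g):
    """Multiply twisted polynomials (lists of F_9 coefficients, lowest degree first)."""
    result = [(0, 0)] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):                    # term a * tau^i
        for j, b in enumerate(g):                # tau^i * b = b^(p^i) * tau^i
            term = f9_mul(a, f9_pow(b, p**i))
            result[i + j] = f9_add(result[i + j], term)
    return result

# (i*tau) * i: tau*i = i^3*tau = (-i)*tau, so the product is i*(-i)*tau = tau
print(twist_mul([(0, 0), (0, 1)], [(0, 1)]))     # -> [(0, 0), (1, 0)]

The printed result [(0, 0), (1, 0)] encodes the twisted polynomial τ, matching the hand computation (iτ)·i = i·i^3·τ = i·(-i)·τ = τ.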
Twisted polynomial ring
Mathematics
337