https://en.wikipedia.org/wiki/List%20of%20chaotic%20maps
In mathematics, a chaotic map is a map (namely, an evolution function) that exhibits some sort of chaotic behavior. Maps may be parameterized by a discrete-time or a continuous-time parameter. Discrete maps usually take the form of iterated functions. Chaotic maps often occur in the study of dynamical systems. Chaotic maps often generate fractals. Although a fractal may be constructed by an iterative procedure, some fractals are studied in and of themselves, as sets rather than in terms of the map that generates them. This is often because there are several different iterative procedures to generate the same fractal.

List of chaotic maps

List of fractals
Cantor set
de Rham curve
Gravity set, or Mitchell-Green gravity set
Julia set - derived from the complex quadratic map
Koch snowflake - special case of de Rham curve
Lyapunov fractal
Mandelbrot set - derived from the complex quadratic map
Menger sponge
Newton fractal
Nova fractal - derived from the Newton fractal
Quaternionic fractal - three-dimensional complex quadratic map
Sierpinski carpet
Sierpinski triangle
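As a concrete illustration (not from the article), the logistic map is one of the simplest discrete-time chaotic maps; a minimal sketch of iterating it shows the sensitive dependence on initial conditions characteristic of chaos:

```python
# Iterate the logistic map x_{n+1} = r * x_n * (1 - x_n), a classic
# discrete-time chaotic map (chaotic at r = 4), and show sensitive
# dependence on initial conditions: two nearby orbits diverge.

def logistic_map(x0, r=4.0, steps=50):
    """Return the orbit [x0, x1, ..., x_steps] of the logistic map."""
    orbit = [x0]
    for _ in range(steps):
        orbit.append(r * orbit[-1] * (1.0 - orbit[-1]))
    return orbit

a = logistic_map(0.2)
b = logistic_map(0.2 + 1e-9)  # perturbed starting point
# Despite a 1e-9 initial difference, the two orbits become
# macroscopically different within a few dozen iterations.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The perturbation grows roughly like 2^n (the Lyapunov exponent of the r = 4 map is ln 2), so even machine-precision differences are amplified to order one.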
https://en.wikipedia.org/wiki/Micro%20Expander
The Micro Expander Model 1 (also known simply as the Expander and sold in Europe as the PAL) is an S-100-based microcomputer introduced by Micro-Expander, Inc., in 1981. The computer was the brainchild of Lee Felsenstein, designer of the Sol-20, the first home computer. After Processor Technology, his primary client and the marketer of the Sol-20, went out of business in 1979, Felsenstein founded a new company, Micro-Expander, Inc., in 1980. He gained the capital to sell his prototype of a successor to the Sol-20 as the Micro Expander Model 1 with help from Swedish investors, primarily Mats Ingemanson, who was hired to market the computer. Specifications The Micro Expander Model 1 is a microcomputer with a built-in, full-sized keyboard complete with a numpad, two programmable function keys, and four cursor keys. The Expander's form factor is identical to the Sol-20's, though it lacks the walnut side panels. The Expander is built on a single printed circuit board which contains the microprocessor, ROM, the interrupt controller (which handles up to five simultaneous interrupt requests), the keyboard controller, an RS-232 serial I/O controller, a parallel interface controller, and circuitry to drive monochrome and color displays. The mainboard also contains a real-time clock, a polyphonic sound chip and internal beeper speaker, and a cassette interface controller compatible with that of Radio Shack's TRS-80 line of microcomputers. The Expander runs on a Zilog Z80A microprocessor clocked at 4 MHz and features 64 KB of RAM stock, expandable to 512 KB. Like the Sol-20, the Expander features the once-ubiquitous S-100 bus, with four S-100 expansion slots on the back of the machine, allowing a wide range of expansion cards for various applications (such as computer graphics, secretarial work, and process control) to be installed. As stock, one of the four expansion slots is occupied by a 64-KB RAM card. Cards are allowed to be piggybac
https://en.wikipedia.org/wiki/Pseudoscalar
In linear algebra, a pseudoscalar is a quantity that behaves like a scalar, except that it changes sign under a parity inversion while a true scalar does not. A pseudoscalar, when multiplied by an ordinary vector, becomes a pseudovector (or axial vector); a similar construction creates the pseudotensor. A pseudoscalar also results from any scalar product between a pseudovector and an ordinary vector. The prototypical example of a pseudoscalar is the scalar triple product, which can be written as the scalar product between one of the vectors in the triple product and the cross product between the two other vectors, where the latter is a pseudovector. In physics In physics, a pseudoscalar denotes a physical quantity analogous to a scalar. Both are physical quantities which assume a single value which is invariant under proper rotations. However, under the parity transformation, pseudoscalars flip their signs while scalars do not. As reflections through a plane are the combination of a rotation with the parity transformation, pseudoscalars also change signs under reflections. Motivation One of the most powerful ideas in physics is that physical laws do not change when one changes the coordinate system used to describe these laws. That a pseudoscalar reverses its sign when the coordinate axes are inverted suggests that it is not the best object to describe a physical quantity. In 3D-space, quantities described by a pseudovector are anti-symmetric tensors of order 2, which are invariant under inversion. The pseudovector may be a simpler representation of that quantity, but suffers from the change of sign under inversion. Similarly, in 3D-space, the Hodge dual of a scalar is equal to a constant times the 3-dimensional Levi-Civita pseudotensor (or "permutation" pseudotensor); whereas the Hodge dual of a pseudoscalar is an anti-symmetric (pure) tensor of order three. The Levi-Civita pseudotensor is a completely anti-symmetric pseudotensor of order 3. Since the dual of t
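The defining sign flip of the scalar triple product under parity can be checked numerically; a minimal illustrative sketch (not part of the article):

```python
# The scalar triple product a . (b x c) is the prototypical pseudoscalar:
# applying the parity inversion v -> -v to all three vectors flips its
# sign, because the three inversions contribute a factor (-1)^3 = -1.

def cross(u, v):
    """Cross product of two 3-vectors (returns a pseudovector)."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def triple(a, b, c):
    """Scalar triple product a . (b x c)."""
    return dot(a, cross(b, c))

flip = lambda v: tuple(-x for x in v)  # parity inversion

a, b, c = (1, 0, 0), (0, 1, 0), (0, 0, 1)
print(triple(a, b, c))                    # 1
print(triple(flip(a), flip(b), flip(c)))  # -1
```

An ordinary dot product of two true vectors, by contrast, is unchanged by the same inversion, since the two sign flips cancel.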
https://en.wikipedia.org/wiki/Ellwood%20Walter%20%28businessman%29
Ellwood Walter (August 16, 1803 – May 7, 1877) was president of the Mercantile Mutual Insurance Company in New York City for 28 years. The Mercantile Mutual Insurance Company was organized in April 1844. He was also secretary of the New York Board of Marine Underwriters from 1849. He insured Cornelius Vanderbilt and many American steamship companies during the 19th century. Early life Walter was born in Philadelphia, Pennsylvania, to a Quaker family. He married Deborah Coggeshall on August 9, 1827, in New York City. They had six children: Elizabeth, Thomas, Anna, George, Ellwood Jr., and Sarah Walter. His daughter Sarah married Thomas Burling Hallock (1838–1924) at the Walters' home in Brooklyn Heights. In his early life, he was an editor of a weekly newspaper, The Ariel: A Literary and Critical Gazette, published in his hometown of Philadelphia. Career In 1827 Walter started and edited this weekly newspaper in Philadelphia. By 1845 Walter was secretary of the Mercantile Mutual Insurance Company, located at 35 Wall Street, New York. In 1847 he became a vice president of the insurance company, and in 1853 he became its president. Walter was associated with the Mercantile Mutual Insurance Company for 28 years. Walter was secretary of the New York Board of Marine Underwriters. In 1845, an unofficial Pilot Commission was established with two representatives from the Marine Underwriters and three from the Chamber of Commerce. Pilot boats working under the Underwriters' Commission took on licensed pilots and proved to be more insurable because of their strict rules and regulations. By 1846, the Underwriters' Commission became the official body governing the pilot service. In 1854, when Cornelius Vanderbilt was building steamships, it was hard to get marine insurance for the new design. Walter made a visit to Vanderbilt's house. After their meeting, Vanderbilt put $1,000,000 into the Mercantile Mutual Insurance Company. Once people got word that Walter was in
https://en.wikipedia.org/wiki/Flora%20Danica
Flora Danica is a comprehensive atlas of botany from the Age of Enlightenment, containing folio-sized pictures of all the wild plants native to Denmark, in the period from 1761 to 1883. History Flora Danica was proposed by G. C. Oeder, then professor of botany at the Botanic Garden in Copenhagen, in 1753 and was completed 123 years later, in 1883. The complete work comprises 51 parts and 3 supplements, containing 3,240 copper engraved plates. The original plan was to cover all plants, including bryophytes, lichens and fungi, native to the crown lands of the Danish king, that is Denmark, Schleswig-Holstein, Oldenburg-Delmenhorst and Norway with its North Atlantic dependencies Iceland, the Faroe Islands and Greenland. However, changes were made due to territorial cessions during the period of publication. After 1814, when the double monarchy of Denmark–Norway was abolished, very few Norwegian plants were included, and similar changes were seen after 1864, when the duchies of Schleswig and Holstein were ceded. However, in the mid-19th-century era of Scandinavism, the Nordiske Naturforskermøde in Copenhagen proposed to make Flora Danica a Scandinavian work. Thus, three supplementary volumes were issued, containing the remaining Norwegian plants and the more important plants only occurring in Sweden. Oeder travelled extensively in the regions covered by the proposed Flora. The illustrations were produced by Michael Rössler (1705–1777), a skilled engraver from Nuremberg, and his son, Martin Rössler (1727–1782), who drew the plants on field trips with Oeder. The first ten issues appeared with a total of 600 plates. To produce the illustrations, both Rösslers moved to Copenhagen in 1755 and remained there until the end of their lives, with Michael working as a copper engraver. Their illustrations are considered the best in Flora Danica, and set a benchmark in botanical illustration. Later illustrators were Johann Christian Thornam (1822–1908), Christian F. Mueller (1748–1814)
https://en.wikipedia.org/wiki/Airespace
Airespace, Inc., formerly Black Storm Networks, was a networking hardware company founded in 2001, manufacturing wireless access points and controllers. The company developed the AP-controller model for fast deployment and the Lightweight Access Point Protocol, the precursor to the CAPWAP protocol. Corporate history Airespace was founded in 2001 by Pat Calhoun, Bob Friday, Bob O'Hara, and Ajay Mishra. The company was venture-backed by Storm Ventures, Norwest Venture Partners and Battery Ventures. In 2003, it entered into an agreement to provide OEM equipment to NEC. In 2004 it signed agreements with Alcatel and Nortel to provide equipment to the two companies on an OEM basis. Airespace was first to market with integrated location tracking. Within a year and a half, the company grew rapidly into the market leader in enterprise Wi-Fi. Cisco Systems acquired Airespace in 2005 for $450 million; this was one of 13 acquisitions Cisco made that year and the largest up to that point. Airespace products were merged into the Cisco Aironet product line.
https://en.wikipedia.org/wiki/Cellular%20agriculture
Cellular agriculture focuses on the production of agricultural products from cell cultures, using a combination of biotechnology, tissue engineering, molecular biology, and synthetic biology to create and design new methods of producing proteins, fats, and tissues that would otherwise come from traditional agriculture. Most of the industry is focused on animal products such as meat, milk, and eggs, produced in cell culture rather than by raising and slaughtering farmed livestock, a practice associated with substantial global problems including detrimental environmental impacts (e.g. of meat production), animal welfare concerns, food security and human health. Cellular agriculture is a field of the biobased economy. The most well-known cellular agriculture concept is cultured meat. History Although cellular agriculture is a nascent scientific discipline, cellular agriculture products were first commercialized in the early 20th century with insulin and rennet. On March 24, 1990, the FDA approved a bacterium that had been genetically engineered to produce rennet, making it the first genetically engineered product for food. Rennet is a mixture of enzymes that turns milk into curds and whey in cheese making. Traditionally, rennet is extracted from the inner lining of the fourth stomach of calves. Today, cheese-making processes use rennet enzymes from genetically engineered bacteria, fungi, or yeasts because they are unadulterated, more consistent, and less expensive than animal-derived rennet. In 2004, Jason Matheny founded New Harvest, whose mission is to "accelerate breakthroughs in cellular agriculture". New Harvest is the only organization focused exclusively on advancing the field of cellular agriculture and provided the first PhD funding specifically for cellular agriculture, at Tufts University. By 2014, IndieBio, a synthetic biology accelerator in San Francisco, had incubated several cellular agriculture startups, hosting Muufri (making milk from cell culture, now Perfect Day Fo
https://en.wikipedia.org/wiki/Doppler%20radio%20direction%20finding
Doppler radio direction finding, or Doppler DF for short, is a radio direction finding method that generates accurate bearing information with a minimum of electronics. It is best suited to VHF and UHF frequencies, and takes only a short time to indicate a direction. This makes it suitable for measuring the location of the vast majority of commercial, amateur and automated broadcasts. Doppler DF is one of the most widely used direction finding techniques. Other direction finding techniques are generally used only for fleeting signals, or for longer or shorter wavelengths. The Doppler DF system uses the Doppler effect to determine whether a moving receiver antenna is approaching or receding from the source. Early systems used antennas mounted on spinning disks to create this motion. In modern systems the antennas do not move physically; instead, the motion is synthesized electrically by rapidly switching between a set of several antennas. As long as the switching occurs rapidly enough, which is easy to arrange, the Doppler effect is strong enough to determine the direction of the signal. This variation is known as pseudo-Doppler DF, or sometimes sequential phase DF. This newer technique is so widely used that it is often the Doppler DF described in most references. Direction finding Early radio direction finding (RDF) solutions used highly directional antennas with sharp "nulls" in the reception pattern. The operator rotated the antenna looking for points where the signal either reached a maximum or, more commonly, suddenly disappeared or "nulled". A common RDF antenna design is the loop antenna, which is simply a loop of wire with a small gap in the circle, typically arranged to rotate around the vertical axis with the gap at the bottom. Some systems used dipole antennas instead of loops. Before the 1930s, radio signals were generally in what would today be known as the long wave spectrum. For the effective reception of these signals, very large antennas are needed. Direction finding with rota
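A rough sketch of the pseudo-Doppler idea, under simplifying assumptions (a circular array sampled one antenna at a time, ideal noiseless phase measurements, and a hypothetical helper name): each antenna on the circle sees a carrier phase proportional to cos(φ_k − θ), so the bearing θ is the argument of the first circular Fourier coefficient of the per-antenna phase sequence.

```python
import cmath
import math

def pseudo_doppler_bearing(phases, n_antennas):
    """Estimate the bearing (radians, 0..2*pi) from the carrier phase
    measured at each antenna of a circular switched array: the bearing
    is the argument of the first circular Fourier coefficient of the
    phase sequence."""
    total = sum(p * cmath.exp(1j * 2 * math.pi * k / n_antennas)
                for k, p in enumerate(phases))
    return cmath.phase(total) % (2 * math.pi)

# Simulate a plane wave arriving from bearing theta: antenna k, at angle
# phi_k on a circle of radius r, sees an extra path-length phase of
# (2*pi*r/lam) * cos(phi_k - theta) relative to the array centre.
N, r, lam = 8, 0.1, 1.0
theta = math.radians(130)
phases = [(2 * math.pi * r / lam) * math.cos(2 * math.pi * k / N - theta)
          for k in range(N)]
print(math.degrees(pseudo_doppler_bearing(phases, N)))  # ~130
```

With exact cosine samples and N > 2 antennas, the second-harmonic terms cancel over the full circle and the estimate is exact; real receivers recover the same modulation by FM-demodulating the switched signal.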
https://en.wikipedia.org/wiki/Scorpion%20%28Mortal%20Kombat%29
Scorpion is a fictional character in the Mortal Kombat fighting game franchise by Midway Games and NetherRealm Studios. A ninja dressed in yellow, his primary weapon is a kunai rope dart, which he uses to harpoon opponents. Debuting in the original 1992 game, Scorpion has appeared as a playable character in every main installment except Mortal Kombat 3 (1995). The series' original Scorpion is Hanzo Hasashi, an undead Japanese warrior principally defined by his quest to avenge his own death and the deaths of his family and clan. He kills Sub-Zero, the apparent murderer, between the original game and Mortal Kombat II (1993), leading to a bitter feud with the new Sub-Zero, Kuai Liang (the original's brother), spanning most of the series; subsequent games reveal that his family and clan were actually murdered by the sorcerer Quan Chi, causing his quest for vengeance to begin anew. He is often depicted as a neutral figure, neither heroic nor villainous, who pursues his own objectives regardless of larger conflicts, although he sometimes takes whichever side can help him achieve his goals. While Hasashi remains the franchise's Scorpion throughout both the original and second timelines, the third timeline, established in Mortal Kombat 1 (2023), instead casts Kuai Liang as Scorpion, although Hasashi's Scorpion appears as a timeline variant. Scorpion has received critical acclaim since his debut and frequently appears in media outside of the games. He is regarded as Mortal Kombat's most iconic fighter; series co-creator Ed Boon cites Scorpion as his favorite character. Character design Scorpion appeared in the first Mortal Kombat as one of three palette-swapped ninja characters along with Sub-Zero and Reptile. His early origins were revealed by the series' original chief character designer John Tobias in September 2011, when he posted several pages of old pre-production character sketches and notes on Twitter. Scorpion and Sub-Zero were simply
https://en.wikipedia.org/wiki/Newton-second
The newton-second (also newton second; symbol: N⋅s or N s) is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second (kg⋅m/s). One newton-second corresponds to a one-newton force applied for one second. It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval. Definition Momentum is given by the formula p = mv, where p is the momentum in newton-seconds (N⋅s) or kilogram-metres per second (kg⋅m/s), m is the mass in kilograms (kg), and v is the velocity in metres per second (m/s). Examples This table gives the magnitudes of some momenta for various masses and speeds. See also Power factor Newton-metre – SI unit of torque Orders of magnitude (momentum) – examples of momenta
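The impulse-momentum relation described above can be sketched in a few lines (an illustrative example, not from the article):

```python
# A constant force F applied for time t delivers impulse J = F * t (in
# newton-seconds), which equals the change in momentum m * dv; hence a
# given impulse on a known mass yields the velocity change dv = J / m.

def impulse(force_n, time_s):
    """Impulse in newton-seconds from a constant force over a time interval."""
    return force_n * time_s

def velocity_change(impulse_ns, mass_kg):
    """Change in velocity (m/s) produced by an impulse acting on a mass."""
    return impulse_ns / mass_kg

J = impulse(10.0, 3.0)        # 10 N for 3 s -> 30 N*s
dv = velocity_change(J, 5.0)  # 30 N*s on a 5 kg mass -> 6 m/s
print(J, dv)
```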
https://en.wikipedia.org/wiki/Posterior%20ligament%20of%20the%20head%20of%20the%20fibula
The posterior ligament of the head of the fibula is a part of the knee. It is a single thick and broad band, which passes obliquely upward from the back of the head of the fibula to the back of the lateral condyle of the tibia. It is covered by the tendon of the Popliteus.
https://en.wikipedia.org/wiki/Serial%20passage
Serial passage is the process of growing bacteria or a virus in iterations. For instance, a virus may be grown in one environment, and then a portion of that virus population can be removed and put into a new environment. This process is repeated with as many stages as desired, and then the final product is studied, often in comparison with the original virus. This sort of facilitated transmission is often conducted in a laboratory setting, because it is of scientific interest to observe how the virus or bacterium being passed evolves over the course of the experiment. In particular, serial passage can be quite useful in studies that seek to alter the virulence of a virus or other pathogen. One consequence of this is that serial passage can be useful in creating vaccines, since scientists can apply serial passage to create a strain of a pathogen that has low virulence yet comparable immunogenicity to the original strain. It can also create strains that are more transmissible in addition to having lower virulence, as demonstrated by A/H5N1 passage in ferrets. Mechanism Serial passage can be performed either in vitro or in vivo. In the in vitro method, a virus or a strain of bacteria is isolated and allowed to grow for a certain time. After the sample has grown for that time, part of it is transferred to a new environment and allowed to grow for the same period. This process is repeated as many times as desired. Alternatively, an in vivo experiment can be performed, in which an animal is infected with a pathogen, and this pathogen is allowed time to grow in that host before a sample of it is removed and passed to another host. This process is repeated for a certain number of hosts; the individual experiment determines this number. Whether serial passage is performed in vitro or in vivo, the virus or bacterium may evolve by mutating repeatedly. Identifying and studying the mutations that occur often reveals information about the
https://en.wikipedia.org/wiki/Shock-capturing%20method
In computational fluid dynamics, shock-capturing methods are a class of techniques for computing inviscid flows with shock waves. The computation of flow containing shock waves is an extremely difficult task because such flows result in sharp, discontinuous changes in flow variables such as pressure, temperature, density, and velocity across the shock. Method In shock-capturing methods, the governing equations of inviscid flows (i.e. Euler equations) are cast in conservation form and any shock waves or discontinuities are computed as part of the solution. Here, no special treatment is employed to take care of the shocks themselves, which is in contrast to the shock-fitting method, where shock waves are explicitly introduced in the solution using appropriate shock relations (Rankine–Hugoniot relations). The shock waves predicted by shock-capturing methods are generally not sharp and may be smeared over several grid elements. Also, classical shock-capturing methods have the disadvantage that unphysical oscillations (Gibbs phenomenon) may develop near strong shocks. Euler equations The Euler equations are the governing equations for inviscid flow. To implement shock-capturing methods, the conservation form of the Euler equations is used. For a flow without external heat transfer and work transfer (isoenergetic flow), the conservation form of the Euler equations in a Cartesian coordinate system can be written as

∂U/∂t + ∂F/∂x + ∂G/∂y + ∂H/∂z = 0

where the vectors U, F, G, and H are given by

U = [ρ, ρu, ρv, ρw, ρe_t]^T
F = [ρu, ρu² + p, ρuv, ρuw, (ρe_t + p)u]^T
G = [ρv, ρuv, ρv² + p, ρvw, (ρe_t + p)v]^T
H = [ρw, ρuw, ρvw, ρw² + p, (ρe_t + p)w]^T

where e_t is the total energy (internal energy + kinetic energy + potential energy) per unit mass, that is

e_t = e + (u² + v² + w²)/2 + gz

The Euler equations may be integrated with any of the shock-capturing methods available to obtain the solution. Classical and modern shock-capturing methods From a historical point of view, shock-capturing methods can be classified into two general categories: classical methods and modern shock-capturing methods (also called high-resolution schemes). Modern shock-capturing methods are generally upwind biased in con
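A minimal illustration of shock capturing in conservation form (a sketch, not from the article): the first-order Lax-Friedrichs scheme applied to the inviscid Burgers equation, a scalar stand-in for the Euler system. The function name and grid parameters are illustrative.

```python
# First-order Lax-Friedrichs scheme for u_t + (u^2/2)_x = 0 in
# conservation form. The shock in the step initial data is "captured"
# as part of the solution (smeared over a few cells) with no special
# treatment of the discontinuity, as described in the text.

def lax_friedrichs_burgers(u, dx, dt, steps):
    flux = lambda v: 0.5 * v * v
    for _ in range(steps):
        un = u[:]
        for j in range(1, len(u) - 1):
            u[j] = 0.5 * (un[j + 1] + un[j - 1]) \
                   - dt / (2 * dx) * (flux(un[j + 1]) - flux(un[j - 1]))
    return u

nx = 100
dx = 1.0 / nx
dt = 0.4 * dx   # satisfies the CFL condition, since |u| <= 1 here
u0 = [1.0 if j < nx // 2 else 0.0 for j in range(nx)]  # right-moving step
u = lax_friedrichs_burgers(u0[:], dx, dt, steps=50)
# The shock, initially at x = 0.5, travels at speed (uL + uR)/2 = 0.5.
```

Being a monotone first-order scheme, Lax-Friedrichs smears the shock but produces no Gibbs-type oscillations; the high-resolution schemes mentioned later sharpen the captured shock while suppressing those oscillations.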
https://en.wikipedia.org/wiki/Fluorescence%20cross-correlation%20spectroscopy
Fluorescence cross-correlation spectroscopy (FCCS) is a spectroscopic technique that examines the interactions of fluorescent particles of different colours as they randomly diffuse through a microscopic detection volume over time, under steady conditions. Discovery Eigen and Rigler first introduced the fluorescence cross-correlation spectroscopy (FCCS) method in 1994. Later, in 1997, Schwille experimentally implemented the method. Theory FCCS is an extension of the fluorescence correlation spectroscopy (FCS) method that uses two fluorescent molecules, emitting different colours, instead of one. The technique measures coincident green and red intensity fluctuations of distinct molecules, which correlate if green- and red-labelled particles move together through a predefined confocal volume. FCCS utilizes two species that are independently labeled with two fluorescent probes of different colours. These fluorescent probes are excited and detected by two different laser light sources and detectors, typically labeled "green" and "red". By combining FCCS with a confocal microscope, it becomes possible to detect fluorescent molecules in femtoliter volumes within the nanomolar range, with a high signal-to-noise ratio, and at a microsecond time scale. The normalized cross-correlation function for two fluorescent species, G and R, detected in independent green and red channels respectively, is defined as G_GR(τ) = ⟨δF_G(t) · δF_R(t + τ)⟩ / (⟨F_G(t)⟩⟨F_R(t)⟩), where δF_i(t) = F_i(t) − ⟨F_i(t)⟩ is the fluctuation of each fluorescence signal about its mean; the signal at a specific time t is thus correlated with the signal a delay time τ later. In the absence of spectral bleed-through (when the fluorescence signal from one channel is visible in the adjacent channel), the cross-correlation function is zero for non-interacting particles. In contrast to FCS, the cross-correlation function increases with increasing numbers of interacting particles. FCCS is mainly used to study bio-molecular interactions both in living cells
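The behaviour of the normalized cross-correlation ⟨δF_G(t)δF_R(t+τ)⟩/(⟨F_G⟩⟨F_R⟩) can be sketched on synthetic intensity traces (an illustrative toy model, not real FCCS data): co-diffusing double-labelled particles produce a positive amplitude, while independent particles produce an amplitude near zero.

```python
import random

def cross_correlation(fg, fr, tau):
    """Normalized cross-correlation G(tau) = <dFg(t) dFr(t+tau)> / (<Fg><Fr>)."""
    n = len(fg) - tau
    mg = sum(fg) / len(fg)
    mr = sum(fr) / len(fr)
    cov = sum((fg[t] - mg) * (fr[t + tau] - mr) for t in range(n)) / n
    return cov / (mg * mr)

random.seed(0)
# Toy intensity traces: a shared fluctuating component stands in for
# double-labelled particles moving together through the focal volume.
shared = [random.gauss(0, 1) for _ in range(20000)]
green = [10 + s + random.gauss(0, 0.1) for s in shared]
red_bound = [10 + s + random.gauss(0, 0.1) for s in shared]   # co-diffusing
red_free = [10 + random.gauss(0, 1) for _ in range(20000)]    # independent

print(cross_correlation(green, red_bound, 0))  # positive (~0.01)
print(cross_correlation(green, red_free, 0))   # ~0
```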
https://en.wikipedia.org/wiki/Protoplanetary%20nebula
A protoplanetary nebula or preplanetary nebula (PPN, plural PPNe) is an astronomical object in the short-lived episode of a star's rapid evolution between the late asymptotic giant branch (LAGB) phase and the subsequent planetary nebula (PN) phase. A PPN emits strongly in infrared radiation, and is a kind of reflection nebula. It is the second-from-last high-luminosity evolutionary phase in the life cycle of intermediate-mass stars (1–8 M☉). Naming The name protoplanetary nebula is an unfortunate choice, due to the possibility of confusion with the same term sometimes being employed for the unrelated concept of protoplanetary disks. The name protoplanetary nebula is a consequence of the older term planetary nebula, which was chosen because early astronomers, looking through telescopes, found a similarity in appearance between planetary nebulae and the gas giants such as Neptune and Uranus. To avoid any possible confusion, the new term preplanetary nebula, which does not overlap with any other disciplines of astronomy, has been suggested. PPNe are often referred to as post-AGB stars, although that category also includes stars that will never ionize their ejected matter. Evolution Beginning During the late asymptotic giant branch (LAGB) phase, when mass loss reduces the hydrogen envelope's mass to around 10⁻² M☉ for a core mass of 0.60 M☉, a star will begin to evolve towards the blue side of the Hertzsprung–Russell diagram. When the hydrogen envelope has been further reduced to around 10⁻³ M☉, the envelope will have been so disrupted that it is believed further significant mass loss is not possible. At this point, the effective temperature of the star, T*, will be around 5,000 K, and this is defined to be the end of the LAGB and the beginning of the PPN. Protoplanetary nebula phase During the ensuing protoplanetary nebula phase, the central star's effective temperature will continue rising as a result of the envelope's mass loss as a consequence o
https://en.wikipedia.org/wiki/Retina%20bipolar%20cell
As a part of the retina, bipolar cells exist between photoreceptors (rod cells and cone cells) and ganglion cells. They act, directly or indirectly, to transmit signals from the photoreceptors to the ganglion cells. Structure Bipolar cells are so named because they have a central body from which two sets of processes arise. They can synapse with either rods or cones (bipolar cells with mixed rod/cone input have been found in teleost fish but not mammals), and they also accept synapses from horizontal cells. The bipolar cells then transmit the signals from the photoreceptors or the horizontal cells to the ganglion cells, either directly or indirectly (via amacrine cells). Unlike most neurons, bipolar cells communicate via graded potentials rather than action potentials. Function Bipolar cells receive synaptic input from either rods or cones, or both rods and cones, though they are generally designated rod bipolar or cone bipolar cells. There are roughly 10 distinct forms of cone bipolar cells but only one rod bipolar cell, because the rod receptor arrived later in evolutionary history than the cone receptor. In the dark, a photoreceptor (rod/cone) cell releases glutamate, which inhibits (hyperpolarizes) the ON bipolar cells and excites (depolarizes) the OFF bipolar cells. In light, the photoreceptor cell is instead inhibited (hyperpolarized): light activates opsins, which activate G-proteins, which in turn activate phosphodiesterase (PDE), an enzyme that cleaves cGMP into 5'-GMP. In photoreceptor cells there is an abundance of cGMP in dark conditions, keeping cGMP-gated Na channels open; activating PDE therefore diminishes the supply of cGMP, reducing the number of open Na channels and thus hyperpolarizing the photoreceptor cell, causing less glutamate to be released. This causes the ON bipolar cell to lose its inhibition and become active (depolarized), while the OFF bipolar cell loses its excitation (becomes hyperpolarized) and becomes sil
https://en.wikipedia.org/wiki/Supraclavicular%20lymph%20nodes
Supraclavicular lymph nodes are lymph nodes found above the clavicle, palpable in the supraclavicular fossa. A supraclavicular lymph node on the left side is called Virchow's node. An enlarged Virchow's node produces an appreciable mass that can be recognized clinically, a finding called Troisier's sign. Structure A Virchow's node is a left-sided supraclavicular lymph node. Clinical significance Malignancies of the internal organs can reach an advanced stage before giving symptoms. Stomach cancer, for example, can remain asymptomatic while metastasizing. One of the first visible sites to which these tumors metastasize is the left supraclavicular lymph nodes. Virchow's nodes take their supply from lymph vessels in the abdominal cavity, and are therefore sentinel lymph nodes of cancer in the abdomen (particularly gastric cancer, ovarian cancer, testicular cancer and kidney cancer) that has spread through the lymph vessels, and of Hodgkin's lymphoma. Such spread typically results in Troisier's sign: the finding of an enlarged, hard Virchow's node. The left supraclavicular nodes are the classical Virchow's nodes because they receive lymphatic drainage from most of the body via the thoracic duct, which enters the venous circulation via the left subclavian vein. A metastasis may block the thoracic duct, leading to regurgitation into the surrounding Virchow's nodes. Another explanation is that one of the supraclavicular nodes corresponds to the end node along the thoracic duct, hence the enlargement. Differential diagnosis of an enlarged Virchow's node includes lymphoma, various intra-abdominal malignancies, breast cancer, and infection (e.g. of the arm). Similarly, an enlarged right supraclavicular lymph node tends to drain thoracic malignancies such as lung and esophageal cancer, as well as Hodgkin's lymphoma. History Virchow's nodes are named after Rudolf Virchow (1821–1902), the German pathologist who first described the nodes and their association with gastric cancer in 1848.
https://en.wikipedia.org/wiki/Dishabituation
Dishabituation (or dehabituation) is a form of recovered or restored behavioral response in which the reaction towards a known stimulus is enhanced, as opposed to habituation. It was initially proposed as an explanation for the increased response to a habituated behavior after an external stimulus is introduced; however, upon further analysis, some have suggested that dishabituation proper should be considered only when the response is increased by presenting the original stimulus. Based on studies of habituation's dual-process theory, which bears on dishabituation, it has also been determined that dishabituation is independent of any behavioral sensitization. An example of dishabituation is the response of a receptionist in a scenario where a delivery truck arrives at 9:00 AM every morning. The first few times it arrives it is noticed by the receptionist, and after weeks the receptionist no longer responds as strongly. One day the truck does not arrive, and the receptionist notices its absence. When the truck arrives the next day, the receptionist's response is stronger than it had been when the truck was arriving as expected. History The phenomenon was studied by the early scientist Samuel Jackson Holmes in 1912, while he was studying animal behavior in sea urchins. Later, in 1933, George Humphrey, while studying the same effects in human babies and extensively in lower vertebrates, argued that dishabituation is in fact the removal of habituation altogether from a behavior that was not conditioned to begin with. Mechanism In humans According to the dual-process theory of habituation, dishabituation is characterized by an increase in responding to a habituated stimulus after a deviant stimulus is introduced, sensitizing a change in arousal. For example, when hearing the ticking of a clock and the clock makes a louder ticking sound, you pay more attention to the clock even though you are already familiar with it. Further investigations into elicitation and habituation of
https://en.wikipedia.org/wiki/Myeloma%20cast%20nephropathy
Myeloma cast nephropathy, also referred to as light-chain cast nephropathy, is the formation of plugs (urinary casts) in the kidney tubules from free immunoglobulin light chains leading to kidney failure in the context of multiple myeloma. It is the most common cause of kidney injury in myeloma. In myeloma cast nephropathy, filtered κ or λ light chains that bind to Tamm-Horsfall protein precipitate in the kidney's tubules. Hypercalcemia and low fluid intake contribute to the development of casts. Myeloma cast nephropathy is considered to be a medical emergency because if untreated, it leads to irreversible kidney failure. It is diagnosed by histological examination of kidney biopsy. See also Serum-free light-chain measurement
https://en.wikipedia.org/wiki/SBCS
SBCS, or single-byte character set, is used to refer to character encodings that use exactly one byte for each graphic character. An SBCS can accommodate a maximum of 256 symbols, and is useful for scripts that do not have many symbols or accented letters such as the Latin, Greek and Cyrillic scripts used mainly for European languages. Examples of SBCS encodings include ISO/IEC 646, the various ISO 8859 encodings, and the various Microsoft/IBM code pages. The term SBCS is commonly contrasted against the terms DBCS (double-byte character set) and TBCS (triple-byte character set), as well as MBCS (multi-byte character set). The multi-byte character sets are used to accommodate languages with scripts that have large numbers of characters and symbols, predominantly Asian languages such as Chinese, Japanese, and Korean. These are sometimes referred to by the acronym CJK. In these computing systems, SBCSs are traditionally associated with half-width characters, so-called because such SBCS characters would traditionally occupy half the width of a DBCS character on a fixed-width computer terminal or text screen. See also DBCS TBCS MBCS Variable-width encoding
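The one-byte-per-character property described above is easy to demonstrate in code. A short sketch using Python's built-in codecs ("latin-1" is ISO 8859-1, a typical SBCS; UTF-8 shown for contrast as a multi-byte encoding):

```python
# In an SBCS, every character occupies exactly one byte, so the encoded
# length equals the character count. In a multi-byte encoding it can differ.
text = "café"                       # the accented letter exists in Latin-1

sbcs = text.encode("latin-1")       # one byte per character
mbcs = text.encode("utf-8")         # 'é' takes two bytes in UTF-8

print(len(text), len(sbcs), len(mbcs))   # 4 4 5

# An SBCS can represent at most 256 distinct symbols, one per byte value:
alphabet = bytes(range(256)).decode("latin-1")
print(len(set(alphabet)))                # 256
```

The 256-symbol ceiling is why SBCS encodings suit alphabetic scripts but not CJK scripts, which need the multi-byte encodings mentioned above.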
https://en.wikipedia.org/wiki/Proper%20orthogonal%20decomposition
The proper orthogonal decomposition is a numerical method that enables a reduction in the complexity of computationally intensive simulations such as computational fluid dynamics and structural analysis (like crash simulations). Typically in fluid dynamics and turbulence analysis, it is used to replace the Navier–Stokes equations with simpler models to solve. It belongs to a class of algorithms called model order reduction (or, in short, model reduction). What it essentially does is train a model based on simulation data. To this extent, it can be associated with the field of machine learning. POD and PCA The main use of POD is to decompose a physical field (like pressure or temperature in fluid dynamics, or stress and deformation in structural analysis) according to the different variables that influence its physical behavior. As its name hints, it performs an orthogonal decomposition along the principal components of the field. As such, it is assimilated with the principal component analysis from Pearson in the field of statistics, and with the singular value decomposition in linear algebra, because both refer to the eigenvalues and eigenvectors of a physical field. In those domains, it is associated with the research of Karhunen and Loève, and their Karhunen–Loève theorem. Mathematical expression The first idea behind the Proper Orthogonal Decomposition (POD), as it was originally formulated in the domain of fluid dynamics to analyze turbulence, is to decompose a random vector field u(x, t) into a set of deterministic spatial functions Φk(x) modulated by random time coefficients ak(t), so that u(x, t) ≈ Σk ak(t)Φk(x). The first step is to sample the vector field over a period of time in what are called snapshots (as displayed in the image of the POD snapshots). This snapshot method averages the samples over the space dimension n, and correlates them with each other along the time samples p, with n spatial elements and p time samples. The next step is to compute the covariance matrix C. We
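The snapshot procedure described above can be sketched numerically. In the sketch below (all data synthetic, invented for illustration), the snapshot matrix holds p time samples of an n-point field; its SVD yields the spatial modes Φk as left singular vectors, the time coefficients ak(t) as scaled right singular vectors, and singular values that rank the modes by energy, which is equivalent to diagonalizing the covariance matrix:

```python
import numpy as np

# Snapshot matrix U of shape (n, p): n spatial points, p time samples.
x = np.linspace(0.0, 1.0, 200)            # n = 200 spatial points
t = np.linspace(0.0, 1.0, 50)             # p = 50 snapshots

# A field dominated by one strong spatial structure plus a weak second one,
# each modulated in time (a rank-2 toy field, not data from the text).
U = (np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * t))
     + 0.1 * np.outer(np.sin(4 * np.pi * x), np.sin(6 * np.pi * t)))

# SVD of the snapshots: columns of Phi are the spatial POD modes Phi_k(x);
# rows of diag(s) @ Vt are the time coefficients a_k(t).
Phi, s, Vt = np.linalg.svd(U, full_matrices=False)
energy = s**2 / np.sum(s**2)
print(energy[:2])     # first mode carries almost all of the energy

# Truncating to rank 1 already reconstructs the field to ~10% error,
# which is the model-reduction step.
U1 = s[0] * np.outer(Phi[:, 0], Vt[0])
print(np.linalg.norm(U - U1) / np.linalg.norm(U) < 0.15)   # True
```

Keeping only the first few modes in place of the full field is what replaces the expensive simulation with a reduced model.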
https://en.wikipedia.org/wiki/Standing%20wave%20ratio
In radio engineering and telecommunications, standing wave ratio (SWR) is a measure of impedance matching of loads to the characteristic impedance of a transmission line or waveguide. Impedance mismatches result in standing waves along the transmission line, and SWR is defined as the ratio of the partial standing wave's amplitude at an antinode (maximum) to the amplitude at a node (minimum) along the line. Voltage standing wave ratio (VSWR) (pronounced "vizwar") is the ratio of maximum to minimum voltage on a transmission line. For example, a VSWR of 1.2 means a peak voltage 1.2 times the minimum voltage along that line, if the line is at least one half wavelength long. An SWR can also be defined as the ratio of the maximum amplitude to minimum amplitude of the transmission line's currents, electric field strength, or magnetic field strength. Neglecting transmission line loss, these ratios are identical. The power standing wave ratio (PSWR) is defined as the square of the VSWR; however, this deprecated term has no direct physical relation to the power actually involved in transmission. SWR is usually measured using a dedicated instrument called an SWR meter. Since SWR is a measure of the load impedance relative to the characteristic impedance of the transmission line in use (which together determine the reflection coefficient as described below), a given SWR meter can interpret the impedance it sees in terms of SWR only if it has been designed for the same particular characteristic impedance as the line. In practice most transmission lines used in these applications are coaxial cables with an impedance of either 50 or 75 ohms, so most SWR meters correspond to one of these. Checking the SWR is a standard procedure in a radio station. Although the same information could be obtained by measuring the load's impedance with an impedance analyzer (or "impedance bridge"), the SWR meter is simpler and more robust for this purpose. By measuring the magnitude of the imped
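The text notes that the load impedance and the line's characteristic impedance together determine the reflection coefficient. A short sketch of the standard formulas (a 50-ohm line is assumed as the default, as in the coaxial cables mentioned above):

```python
# Reflection coefficient and SWR for a load Z_L on a line of characteristic
# impedance Z_0:
#   Gamma = (Z_L - Z_0) / (Z_L + Z_0),   SWR = (1 + |Gamma|) / (1 - |Gamma|)
def swr(z_load, z0=50.0):
    gamma = (z_load - z0) / (z_load + z0)
    rho = abs(gamma)                # magnitude of the reflection coefficient
    return (1 + rho) / (1 - rho)

print(round(swr(50.0), 3))        # 1.0   perfectly matched load
print(round(swr(75.0), 3))        # 1.5   75-ohm load on a 50-ohm line
print(round(swr(25.0), 3))        # 2.0   purely resistive mismatch
print(round(swr(100 + 50j), 3))   # 2.618 complex (reactive) load
```

For a purely resistive load the SWR reduces to the ratio of the larger to the smaller of Z_L and Z_0, which is why both 25 and 100 ohms give SWR 2 on a 50-ohm line.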
https://en.wikipedia.org/wiki/Retrotransposon%20marker
Retrotransposon markers are components of DNA which are used as cladistic markers. They assist in determining the common ancestry, or not, of related taxa. The "presence" of a given retrotransposon in related taxa suggests their orthologous integration, a derived condition acquired via a common ancestry, while the "absence" of particular elements indicates the plesiomorphic condition prior to integration in more distant taxa. The use of presence/absence analyses to reconstruct the systematic biology of mammals depends on the availability of retrotransposons that were actively integrating before the divergence of a particular species. Details The analysis of SINEs – Short INterspersed Elements – LINEs – Long INterspersed Elements – or truncated LTRs – Long Terminal Repeats – as molecular cladistic markers represents a particularly interesting complement to DNA sequence and morphological data. The reason for this is that retrotransposons are assumed to represent powerful noise-poor synapomorphies. The target sites are relatively unspecific so that the chance of an independent integration of exactly the same element into one specific site in different taxa is not large and may even be negligible over evolutionary time scales. Retrotransposon integrations are currently assumed to be irreversible events; this might change since no eminent biological mechanisms have yet been described for the precise re-excision of class I transposons, but see van de Lagemaat et al. (2005). A clear differentiation between ancestral and derived character state at the respective locus thus becomes possible as the absence of the introduced sequence can be with high confidence considered ancestral. In combination, the low incidence of homoplasy together with a clear character polarity make retrotransposon integration markers ideal tools for determining the common ancestry of taxa by a shared derived transpositional event. The "presence" of a given retrotransposon in related taxa suggests
https://en.wikipedia.org/wiki/Aphanotorulus%20phrixosoma
Aphanotorulus phrixosoma is a dubious species of catfish in the family Loricariidae. It is native to South America, where it occurs in the upper Amazon River basin. The species reaches 10.1 cm (4 inches) SL. It is believed to be a facultative air-breather. A. phrixosoma was originally described as Plecostomus phrixosoma by Henry Weed Fowler in 1940, although it was transferred to the genus Squaliforma (now considered invalid) after the genus' designation by I. J. H. Isbrücker, I. Seidel, J. Michels, E. Schraml, and A. Werner in 2001. In 2004, Jonathan W. Armbruster classified the species within Hypostomus instead of Squaliforma. In 2016, following a review of Isorineloricaria and Aphanotorulus by C. Keith Ray and Armbruster (both of Auburn University), the species was reclassified as a member of Aphanotorulus, although FishBase still lists the species as Squaliforma phrixosoma and lists Aphanotorulus phrixosoma as a synonym. A. phrixosoma is of questionable validity, as it is currently known only from a single specimen that is believed by Ray and Armbruster to be a hybrid between Aphanotorulus horridus and Aphanotorulus unicolor, as it was collected in an area where A. horridus and A. unicolor are sympatric and extensive sampling efforts near the type locality have yielded no additional specimens.
https://en.wikipedia.org/wiki/Stanford%20Extended%20ASCII
Stanford Extended ASCII (SEASCII) is a derivation of the 7-bit ASCII character set developed at the Stanford Artificial Intelligence Laboratory (SAIL/SU-AI) in the early 1970s. Not all symbols match ASCII. Carnegie Mellon University, the Massachusetts Institute of Technology, and the University of Southern California also had their own modified versions of ASCII. Character set Each character is given with a potential Unicode equivalent. See also Stanford Artificial Intelligence Laboratory (SAIL/SU-AI) Stanford Artificial Intelligence Language (SAIL) Stanford/ITS character set
https://en.wikipedia.org/wiki/Atomic%20mirror
In physics, an atomic mirror is a device which reflects neutral atoms in a way similar to the way a conventional mirror reflects visible light. Atomic mirrors can be made of electric fields or magnetic fields, electromagnetic waves or just silicon wafer; in the last case, atoms are reflected by the attracting tails of the van der Waals attraction (see quantum reflection). Such reflection is efficient when the normal component of the wavenumber of the atoms is small or comparable to the effective depth of the attraction potential (roughly, the distance at which the potential becomes comparable to the kinetic energy of the atom). To reduce the normal component, most atomic mirrors are blazed at the grazing incidence. At grazing incidence, the efficiency of the quantum reflection can be enhanced by a surface covered with ridges (ridged mirror). The set of narrow ridges reduces the van der Waals attraction of atoms to the surfaces and enhances the reflection. Each ridge blocks part of the wavefront, causing Fresnel diffraction. Such a mirror can be interpreted in terms of the Zeno effect. We may assume that the atom is "absorbed" or "measured" at the ridges. Frequent measuring (narrowly spaced ridges) suppresses the transition of the particle to the half-space with absorbers, causing specular reflection. At large separation between thin ridges, the reflectivity of the ridged mirror is determined by dimensionless momentum , and does not depend on the origin of the wave; therefore, it is suitable for reflection of atoms. Applications Atomic interferometry See also Quantum reflection Ridged mirror Zeno effect Atomic nanoscope Atom laser
https://en.wikipedia.org/wiki/White%20vine-stem
A white vine-stem or white vine is a kind of border or initial decoration found in illuminated manuscripts and incunabula. Sometimes the Italian term bianchi girari is also used in English. The decoration consists of entangled white vines, usually contrasted with a colourful background. The stems themselves are often simply parchment left unpainted. It became popular among Florentine illuminators in the early 15th century, as a conscious imitation of forms found in Romanesque illuminated manuscripts, thought at the time to be antique forms. For this reason, it was considered suitable to use white vine-stems to decorate texts by classical authors and humanist books. From Florence the use of white vine-stems as a decorative element later spread to Rome and Naples, not least through the prolific work of Gioacchino de’ Gigantibus, during the second half of the century.
https://en.wikipedia.org/wiki/AASHTO%20Soil%20Classification%20System
The AASHTO Soil Classification System was developed by the American Association of State Highway and Transportation Officials, and is used as a guide for the classification of soils and soil-aggregate mixtures for highway construction purposes. The classification system was first developed by Hogentogler and Terzaghi in 1929, but has been revised several times since. Within group A-7, the plasticity index (PI) of the A-7-5 subgroup is equal to or less than LL - 30, where LL is the liquid limit, while the plasticity index of the A-7-6 subgroup is greater than LL - 30.
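The A-7 subgroup rule quoted above can be expressed as a small check. The helper below is a hypothetical function invented for illustration; it is not part of the AASHTO standard itself:

```python
# Distinguish A-7-5 from A-7-6 within AASHTO group A-7 using the rule
# quoted in the text (LL = liquid limit, PI = plasticity index).
def a7_subgroup(liquid_limit, plasticity_index):
    """Return 'A-7-5' if PI <= LL - 30, otherwise 'A-7-6'."""
    if plasticity_index <= liquid_limit - 30:
        return "A-7-5"    # PI <= LL - 30
    return "A-7-6"        # PI >  LL - 30

print(a7_subgroup(55, 20))   # A-7-5  (20 <= 55 - 30)
print(a7_subgroup(55, 30))   # A-7-6  (30 >  55 - 30)
```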
https://en.wikipedia.org/wiki/Biodiversity%20informatics
Biodiversity informatics is the application of informatics techniques to biodiversity information, such as taxonomy, biogeography or ecology. It is defined as the application of information technologies to the management, algorithmic exploration, analysis and interpretation of primary data regarding life, particularly at the species level of organization. Modern computer techniques can yield new ways to view and analyze existing information, as well as predict future situations (see niche modelling). The term biodiversity informatics was only coined around 1992, but with rapidly increasing data sets it has become useful in numerous studies and applications, such as the construction of taxonomic databases or geographic information systems. Biodiversity informatics contrasts with "bioinformatics", which is often used synonymously with the computerized handling of data in the specialized area of molecular biology. Overview Biodiversity informatics (distinct from but linked to bioinformatics) is the application of information technology methods to the problems of organizing, accessing, visualizing and analyzing primary biodiversity data. Primary biodiversity data is composed of names, observations and records of specimens, and genetic and morphological data associated with a specimen. Biodiversity informatics may also have to cope with managing information from unnamed taxa, such as that produced by environmental sampling and sequencing of mixed-field samples. The term biodiversity informatics is also used to cover the computational problems specific to the names of biological entities, such as the development of algorithms to cope with variant representations of identifiers such as species names and authorities, and the multiple classification schemes within which these entities may reside according to the preferences of different workers in the field, as well as the syntax and semantics by which the content in taxonomic databases can be made machine queryable and in
https://en.wikipedia.org/wiki/Mary%20Jackson%20%28engineer%29
Mary Jackson (; April 9, 1921 – February 11, 2005) was an American mathematician and aerospace engineer at the National Advisory Committee for Aeronautics (NACA), which in 1958 was succeeded by the National Aeronautics and Space Administration (NASA). She worked at Langley Research Center in Hampton, Virginia, for most of her career. She started as a computer at the segregated West Area Computing division in 1951. In 1958, after taking engineering classes, she became NASA's first black female engineer. After 34 years at NASA, Jackson had earned the most senior engineering title available. She realized she could not earn further promotions without becoming a supervisor. She accepted a demotion to become a manager of both the Federal Women's Program, in the NASA Office of Equal Opportunity Programs and of the Affirmative Action Program. In this role, she worked to influence the hiring and promotion of women in NASA's science, engineering, and mathematics careers. Jackson's story features in the 2016 non-fiction book Hidden Figures: The American Dream and the Untold Story of the Black Women Who Helped Win the Space Race. She is one of the three protagonists in Hidden Figures, the film adaptation released the same year. In 2019, Jackson was posthumously awarded the Congressional Gold Medal. In 2021, the Washington, D.C. headquarters of NASA was renamed the Mary W. Jackson NASA Headquarters. Biography Mary Jackson was born on April 9, 1921, to Ella Winston (née Scott) and Frank Winston. She grew up in Hampton, Virginia, where she graduated from high school with highest honors. Jackson earned bachelor's degrees in mathematics and physical science from Hampton University in 1942. She was initiated into the Gamma Theta chapter of Alpha Kappa Alpha sorority at Hampton. Jackson served for more than 30 years as a Girl Scout leader. In the 1970s she helped African American children in her community create a miniature wind tunnel for testing airplanes. Jackson was marrie
https://en.wikipedia.org/wiki/Difference%20set
In combinatorics, a (v, k, λ) difference set is a subset D of size k of a group G of order v such that every non-identity element of G can be expressed as a product d1d2−1 of elements of D in exactly λ ways. A difference set D is said to be cyclic, abelian, non-abelian, etc., if the group G has the corresponding property. A difference set with λ = 1 is sometimes called planar or simple. If G is an abelian group written in additive notation, the defining condition is that every non-zero element of G can be written as a difference of elements of D in exactly λ ways. The term "difference set" arises in this way. Basic facts A simple counting argument shows that there are exactly k(k − 1) pairs of elements from D that will yield non-identity elements, so every difference set must satisfy the equation k(k − 1) = λ(v − 1). If D is a difference set and g is an element of G, then gD is also a difference set, and is called a translate of D (D + g in additive notation). The complement of a (v, k, λ)-difference set is a (v, v − k, v − 2k + λ)-difference set. The set of all translates of a difference set D forms a symmetric block design, called the development of D and denoted by dev(D). In such a design there are v elements (usually called points) and v blocks (subsets). Each block of the design consists of k points, and each point is contained in k blocks. Any two blocks have exactly λ elements in common and any two points are simultaneously contained in exactly λ blocks. The group G acts as an automorphism group of the design. It is sharply transitive on both points and blocks. In particular, if λ = 1, then the difference set gives rise to a projective plane. An example of a (7, 3, 1) difference set in the group Z/7Z is the subset {1, 2, 4}. The translates of this difference set form the Fano plane. Since every difference set gives a symmetric design, the parameter set must satisfy the Bruck–Ryser–Chowla theorem. Not every symmetric design gives a difference set. Equivalent and isomorphic difference sets Two difference sets D1 in group G1 and D2 in group G2 are equivalent if there is a group isomorphism between G1 and G2 mapping D1 to a translate of D2, such that for
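The standard (7, 3, 1) example can be checked directly. A short sketch verifying that D = {1, 2, 4} is a difference set in the additive group Z/7Z, i.e. that every non-zero residue occurs as d1 − d2 (mod 7) exactly λ = 1 time:

```python
from itertools import permutations

# Count how many times each non-identity element g of Z/vZ arises as a
# difference d1 - d2 (mod v) of distinct elements of D.
def difference_counts(D, v):
    counts = {g: 0 for g in range(1, v)}
    for d1, d2 in permutations(D, 2):       # ordered pairs, d1 != d2
        counts[(d1 - d2) % v] += 1
    return counts

counts = difference_counts({1, 2, 4}, 7)
print(counts)   # {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}

# The basic counting identity k(k - 1) = lambda * (v - 1):
k, v, lam = 3, 7, 1
print(k * (k - 1) == lam * (v - 1))   # True
```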
https://en.wikipedia.org/wiki/X-linked%20recessive%20inheritance
X-linked recessive inheritance is a mode of inheritance in which a mutation in a gene on the X chromosome causes the phenotype to be always expressed in males (who are necessarily hemizygous for the gene mutation because they have one X and one Y chromosome) and in females who are homozygous for the gene mutation; see zygosity. Females with one copy of the mutated gene are carriers. X-linked inheritance means that the gene causing the trait or the disorder is located on the X chromosome. Females have two X chromosomes while males have one X and one Y chromosome. Carrier females who have only one copy of the mutation do not usually express the phenotype, although differences in X-chromosome inactivation (known as skewed X-inactivation) can lead to varying degrees of clinical expression in carrier females, since some cells will express one X allele and some will express the other. The current estimate of sequenced X-linked genes is 499, and the total, including vaguely defined traits, is 983. Patterns of inheritance In humans, inheritance of X-linked recessive traits follows a unique pattern made up of three points. The first is that affected fathers cannot pass X-linked recessive traits to their sons, because fathers give Y chromosomes to their sons. This means that males affected by an X-linked recessive disorder inherited the responsible X chromosome from their mothers. Second, X-linked recessive traits are more commonly expressed in males than females. This is because males possess only a single X chromosome, and therefore require only one mutated X in order to be affected. Females possess two X chromosomes, and thus must receive two of the mutated recessive X chromosomes (one from each parent) to be affected. A popular example showing this pattern of inheritance is that of the descendants of Queen Victoria and the blood disease hemophilia. The last pattern seen is that X-linked recessive traits tend to skip generations, meaning that an affected grandfather will
https://en.wikipedia.org/wiki/Diauxic%20growth
Diauxic growth, diauxie or diphasic growth is any cell growth characterized by cellular growth in two phases. Diauxic growth, meaning double growth, is caused by the presence of two sugars in a culture growth medium, one of which is easier for the target bacterium to metabolize. The preferred sugar is consumed first, which leads to rapid growth, followed by a lag phase. During the lag phase the cellular machinery used to metabolize the second sugar is activated, and subsequently the second sugar is metabolized. This can also occur when a bacterium in a closed batch culture has consumed most of its nutrients and is entering the stationary phase, and new nutrients are suddenly added to the growth medium. The bacterium enters a lag phase while it adapts to the new food source. Once the food starts being utilized, it enters a new log phase, showing a second peak on the growth curve. Growth phases Jacques Monod discovered diauxic growth in 1941 during his experiments with Escherichia coli and Bacillus subtilis. While growing these bacteria on various combinations of sugars during his doctoral thesis research, Monod observed that often two distinct growth phases are clearly visible in batch culture, as seen in Figure 1. During the first phase, cells preferentially metabolize the sugar on which they can grow faster (often glucose, but not always). Only after the first sugar has been exhausted do the cells switch to the second. At the time of the "diauxic shift", there is often a lag period during which cells produce the enzymes needed to metabolize the second sugar. Monod later put aside his work on diauxic growth and focused on the lac operon model of gene expression, which led to a Nobel prize. Diauxie occurs because organisms use operons or multiple sets of genes to differentially control the expression of enzymes needed to metabolize the different nutrients or sugars they encounter. If an organism allocates its energy and other resources (e.g. amino acids) to synthesize enzym
https://en.wikipedia.org/wiki/Newton%27s%20law%20of%20cooling
In the study of heat transfer, Newton's law of cooling is a physical law which states that The rate of heat loss of a body is directly proportional to the difference in the temperatures between the body and its environment. The law is frequently qualified to include the condition that the temperature difference is small and the nature of heat transfer mechanism remains the same. As such, it is equivalent to a statement that the heat transfer coefficient, which mediates between heat losses and temperature differences, is a constant. In heat conduction, Newton's Law is generally followed as a consequence of Fourier's law. The thermal conductivity of most materials is only weakly dependent on temperature, so the constant heat transfer coefficient condition is generally met. In convective heat transfer, Newton's Law is followed for forced air or pumped fluid cooling, where the properties of the fluid do not vary strongly with temperature, but it is only approximately true for buoyancy-driven convection, where the velocity of the flow increases with temperature difference. In the case of heat transfer by thermal radiation, Newton's law of cooling holds only for very small temperature differences. When stated in terms of temperature differences, Newton's law (with several further simplifying assumptions, such as a low Biot number and a temperature-independent heat capacity) results in a simple differential equation expressing temperature-difference as a function of time. The solution to that equation describes an exponential decrease of temperature-difference over time. This characteristic decay of the temperature-difference is also associated with Newton's law of cooling. Historical background Isaac Newton published his work on cooling anonymously in 1701 as "Scala graduum Caloris. Calorum Descriptiones & signa" in Philosophical Transactions, volume 22, issue 270. Newton did not originally state his law in the above form in 1701. Rather, using today's terms, Newto
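The simple differential equation mentioned above and its exponential solution can be written out directly. A sketch with illustrative values (the cooling constant k and the temperatures are assumed for the example, not taken from the text):

```python
import math

# Newton's law of cooling as an ODE: d(dT)/dt = -k * dT, where
# dT = T - T_env is the temperature difference. Its solution is the
# exponential decay dT(t) = dT(0) * exp(-k * t).
def temperature(t, T0, T_env, k):
    """Temperature of the body at time t; k is the cooling constant (1/time)."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

# A 90-degree drink in a 20-degree room with k = 0.1 per minute:
for minute in (0, 5, 10, 20, 60):
    print(minute, round(temperature(minute, 90.0, 20.0, 0.1), 1))
```

The temperature difference halves every ln(2)/k minutes and the body approaches, but never quite reaches, the environment temperature, which is the characteristic exponential decay the text describes.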
https://en.wikipedia.org/wiki/Kibong%27oto%20Hospital
Kibong'oto Infectious Disease Hospital is a specialty 340-bed hospital in Kilimanjaro Region, Tanzania. It serves as the country's infectious disease hospital, treating multidrug-resistant tuberculosis as well as offering general hospital services to the population of the district. The hospital is headed by Dr. Leonard Subi as Director and Atanasia Karoli as Head Nurse. The hospital was founded by a British physician, Dr. Norman Davies, in 1926, initially as a tuberculosis sanatorium, and was later designated as the regional tuberculosis hospital for East Africa. Over many decades the hospital has expanded its services and is now the leading hospital for infectious disease care in the country. In 2020 the hospital began construction of a viral infections laboratory that is expected to have a substantial impact on the hospital's ability to monitor and treat infectious diseases.
https://en.wikipedia.org/wiki/Goal%20programming
Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum, dependent on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis: Determine the required resources to achieve a desired set of objectives. Determine the degree of attainment of the goals with the available resources. Provide the best satisficing solution under a varying amount of resources and priorities of the goals. History Goal programming was first used by Charnes, Cooper and Ferguson in 1955, although the actual name first appeared in a 1961 text by Charnes and Cooper. Seminal works by Lee, Ignizio, Ignizio and Cavalier, and Romero followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, and Jones and Tamiz give an annotated bibliography of the period 1990-2000. A recent textbook by Jones and Tamiz gives a comprehensive overview of the state of the art in goal programming. The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon. Variants The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a high
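The deviation-and-achievement-function idea can be sketched on a toy problem. All numbers below are illustrative, invented for the example, and a brute-force grid search stands in for a proper LP solver:

```python
# A minimal weighted goal programme with one decision variable x in [0, 10]
# and two conflicting goals:
#   goal 1: profit 2x should reach the target 12
#   goal 2: labour x should not exceed the target 5
# The achievement function minimises a weighted sum of the *unwanted*
# deviations: shortfall below goal 1 and excess above goal 2.
def achievement(x, w1=1.0, w2=1.0):
    under_profit = max(0.0, 12 - 2 * x)   # negative deviation from goal 1
    over_labour = max(0.0, x - 5)         # positive deviation from goal 2
    return w1 * under_profit + w2 * over_labour

xs = [i / 100 for i in range(1001)]       # grid over [0, 10]
best = min(xs, key=achievement)
print(best, achievement(best))            # 6.0 1.0
```

With equal weights the optimum accepts a labour excess of 1 to hit the profit target exactly; raising w2 (prioritising the labour goal) moves the optimum back to x = 5, mirroring the priority-level orderings described under Variants.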
https://en.wikipedia.org/wiki/GABRA6
Gamma-aminobutyric acid receptor subunit alpha-6 is a protein that in humans is encoded by the GABRA6 gene. GABA is the major inhibitory neurotransmitter in the mammalian brain where it acts at GABA-A receptors, which are ligand-gated chloride channels. Chloride conductance of these channels can be modulated by agents such as benzodiazepines that bind to the GABA-A receptor. At least 16 distinct subunits of GABA-A receptors have been identified. One study has found a genetic variant in the gene to be associated with the personality trait neuroticism. See also GABAA receptor
https://en.wikipedia.org/wiki/Mesothelium
The mesothelium is a membrane composed of simple squamous epithelial cells of mesodermal origin, which forms the lining of several body cavities: the pleura (pleural cavity around the lungs), peritoneum (abdominopelvic cavity including the mesentery, omenta, falciform ligament and the perimetrium) and pericardium (around the heart). Mesothelial tissue also surrounds the male testis (as the tunica vaginalis) and occasionally the spermatic cord (in a patent processus vaginalis). Mesothelium that covers the internal organs is called visceral mesothelium, while one that covers the surrounding body walls is called the parietal mesothelium. The mesothelium that secretes serous fluid as a main function is also known as a serosa. Origin Mesothelium derives from the embryonic mesoderm cell layer, that lines the coelom (body cavity) in the embryo. It develops into the layer of cells that covers and protects most of the internal organs of the body. Structure The mesothelium forms a monolayer of flattened squamous-like epithelial cells resting on a thin basement membrane supported by dense irregular connective tissue. Cuboidal mesothelial cells may be found at areas of injury, the milky spots of the omentum, and the peritoneal side of the diaphragm overlaying the lymphatic lacunae. The luminal surface is covered with microvilli. The proteins and serosal fluid trapped by the microvilli provide a slippery surface for internal organs to slide past one another. Function The mesothelium is composed of an extensive monolayer of specialized cells (mesothelial cells) that line the body's serous cavities and internal organs. The main purpose of these cells is to produce a lubricating fluid that is released between layers, providing a slippery, non-adhesive, and protective surface to facilitate intracoelomic movement. The mesothelium is also implicated in the transport and movement of fluid and particulate matter across the serosal cavities, leukocyte migration in response to in
https://en.wikipedia.org/wiki/Mir-558%20microRNA%20precursor%20family
In molecular biology mir-558 microRNA is a short RNA molecule. MicroRNAs function to regulate the expression levels of other genes by several mechanisms. See also MicroRNA
https://en.wikipedia.org/wiki/25th%20meridian%20west
The meridian 25° west of Greenwich is a line of longitude that extends from the North Pole across the Arctic Ocean, Greenland, the Atlantic Ocean, Cape Verde Islands, the Southern Ocean, and Antarctica to the South Pole. The 25th meridian west forms a great circle with the 155th meridian east. In Antarctica, the meridian defines the eastern limit of Argentina's territorial claim and passes through the British claim - the two claims overlap. From Pole to Pole Starting at the North Pole and heading south to the South Pole, the 25th meridian west passes through: {| class="wikitable plainrowheaders" ! scope="col" width="125" | Co-ordinates ! scope="col" | Country, territory or sea ! scope="col" | Notes |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Arctic Ocean | style="background:#b0e0e6;" | |- | ! scope="row" | | Peary Land peninsula |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | G.B. Schley Fjord | style="background:#b0e0e6;" | |- | ! scope="row" | | Peary Land peninsula |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Independence Fjord | style="background:#b0e0e6;" | |- | ! scope="row" | | |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Hagen Fjord | style="background:#b0e0e6;" | |-valign="top" | ! scope="row" | | Mainland, Ymer Island, Ella Island and the mainland again (passing through the Stauning Alps at ) |- | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Scoresby Sound | style="background:#b0e0e6;" | |- | ! scope="row" | | |-valign="top" | style="background:#b0e0e6;" | ! scope="row" style="background:#b0e0e6;" | Atlantic Ocean | style="background:#b0e0e6;" | Passing just east of São Miguel Island, Azores, (at ) Passing just west of the islet of Formigas, Azores, (at ) Passing just east of Santa Maria Island, Azores, (at ) |- | ! scope="row" | | Islands of Santo Antão and São Vicente |
https://en.wikipedia.org/wiki/Pacific%20Research%20Laboratories
Pacific Research Laboratories, Inc. (PRL) is a design, research and development (R&D) and prototype manufacturing company. It is the leading producer of Sawbones, designed to simulate bone architecture and a bone’s physical properties. It was founded in 1978. The company had 135 employees as of April 2016 and is the largest manufacturer in Vashon, Washington. It is locally referred to as "The Bone Factory." PRL has capabilities in R&D prototypes, short-run production, and rapid prototyping. It is the manufacturer for Seaglider fairings, wings and rudders; Seaglider is an underwater glider autonomous underwater vehicle (AUV) developed by the University of Washington. PRL also manufactures Super Shroud cell tower concealment shrouds. The company also works in product development and design, including quick-turnaround projects using urethanes, silicones, glass/carbon fibers, braided fiberglass, thermoplastics, electronics, hydraulics, and pneumatics; the creation of prototypes, master patterns, and tooling; reverse engineering; laser scanning; and manufacturing using 3D printing, 3-axis CNC routing, 4-axis CNC machining, and a triaxial fiberglass braider. In December 2010, Pacific Research Laboratories became employee owned under an employee stock ownership plan (ESOP).
https://en.wikipedia.org/wiki/KCOP-TV
KCOP-TV (channel 13) is a television station in Los Angeles, California, United States, serving as the West Coast flagship of MyNetworkTV. It is owned and operated by Fox Television Stations alongside Fox outlet KTTV (channel 11). Both stations share studios at the Fox Television Center located in West Los Angeles, while KCOP-TV's transmitter is located atop Mount Wilson. History Early history Channel 13 first signed on the air on September 17, 1948, as KLAC-TV (standing for Los Angeles, California), and adopted the moniker "Lucky 13". It was originally co-owned with local radio station KLAC (570 AM). Operating as an independent station early on, it began running some programming from the DuMont Television Network in 1949 after KTLA (channel 5) ended its affiliation with the network after a one-year tenure. One of KLAC-TV's earlier stars was veteran actress Betty White, who starred in Al Jarvis's Make-Believe Ballroom (later Hollywood on Television) from 1949 to 1952, and then her own sitcom, Life with Elizabeth from 1952 to 1956. Television personality Regis Philbin and actor/director Leonard Nimoy once worked behind the scenes at channel 13, and Oscar Levant had his own show on the station from 1958 to 1960. On December 23, 1953, the now-defunct Copley Press (publishers of the San Diego Union-Tribune) purchased KLAC-TV and changed its call letters to the current KCOP, which reflected their ownership. A Bing Crosby-led group purchased the station in June 1957. In 1959, the NAFI Corporation, which would later merge with Chris-Craft Boats to become Chris-Craft Industries, bought channel 13. NAFI/Chris-Craft would be channel 13's longest-tenured owner, running it for over 40 years. For most of its first 46 years on the air, channel 13 was a typical general entertainment independent station. It was usually the third or fourth highest-rated independent in Southern California, trading the #3 spot with KHJ-TV (channel 9, now KCAL-TV). The station carried Operation Pr
https://en.wikipedia.org/wiki/NanoLumens
NanoLumens Inc. is a private American corporation that designs and manufactures digital LED displays. Since its founding in 2006, it has designed products that target the market gap between consumer flat panel displays and commercial outdoor LED billboards in indoor spaces. It is best known for creating the world's first large-format flexible display, the NanoFlex 112"; but now has a full product line of fixed, indoor LED displays in any size, shape, or curvature. It is currently headquartered in Peachtree Corners, Georgia, a North Atlanta suburb. History NanoLumens was launched in 2006 by then CEO Richard 'Rick' Cope. NanoLumens now holds over 200 patents on the technology used in their displays. The company raised more than $10 million from 2006 to 2009 from angel investors to fund their operations, and primarily targets the digital out-of-home (DOOH) market, with sales in the U.S. & Canada, and expansion in Europe, the Middle East, Australia, and South America. NanoLumens relocated its headquarters at the beginning of 2012 to a new 32,000 square foot office center in the North Atlanta suburb of Peachtree Corners after being courted by the Gwinnett Chamber Economic Development Department and the Georgia Department of Economic Development to stay in the area. The new facility houses the largest digital display showroom in the US. 2012 also marked the company's first full year of commercial production. Key personnel Gary Feather, Chief Technology Officer Chuck Gottschalk, Chief Operating Officer Dave Merlino, Vice President of Sales Joe' Lloyd, Vice President of Global Marketing and Business Development Product lines NanoLumens produces digital display solutions targeting the Digital Out-of-Home (DOOH) Market for indoor installations. Displays range from 0.93mm to 10mm pixel pitches depending on the model and can be flat, curved, or cylindrical. Markets served include broadcast facilities, the conventions/hospitality industry, casinos, educational instit
https://en.wikipedia.org/wiki/Bending%20of%20plates
Bending of plates, or plate bending, refers to the deflection of a plate perpendicular to the plane of the plate under the action of external forces and moments. The amount of deflection can be determined by solving the differential equations of an appropriate plate theory. The stresses in the plate can be calculated from these deflections. Once the stresses are known, failure theories can be used to determine whether a plate will fail under a given load. Bending of Kirchhoff-Love plates Definitions For a thin rectangular plate of thickness , Young's modulus , and Poisson's ratio , we can define parameters in terms of the plate deflection, . The flexural rigidity is given by Moments The bending moments per unit length are given by The twisting moment per unit length is given by Forces The shear forces per unit length are given by Stresses The bending stresses are given by The shear stress is given by Strains The bending strains for small-deflection theory are given by The shear strain for small-deflection theory is given by For large-deflection plate theory, we consider the inclusion of membrane strains Deflections The deflections are given by Derivation In the Kirchhoff–Love plate theory for plates the governing equations are and In expanded form, and where is an applied transverse load per unit area, the thickness of the plate is , the stresses are , and The quantity has units of force per unit length. The quantity has units of moment per unit length. For isotropic, homogeneous, plates with Young's modulus and Poisson's ratio these equations reduce to where is the deflection of the mid-surface of the plate. Small deflection of thin rectangular plates This is governed by the Germain-Lagrange plate equation This equation was first derived by Lagrange in December 1811 in correcting the work of Germain who provided the basis of the theory. Large deflection of thin rectangular plates This is governed by the Föppl–von Ká
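The key small-deflection relations referred to above can be sketched as follows, in the standard Kirchhoff–Love forms; the symbols (thickness h, Young's modulus E, Poisson's ratio ν, mid-surface deflection w, transverse load q) are the conventional choices and not necessarily those of the original article:

```latex
% Flexural rigidity of a thin isotropic plate:
\[
D = \frac{E h^{3}}{12\,(1 - \nu^{2})}
\]
% Bending and twisting moments per unit length in terms of w(x,y):
\[
M_x = -D\!\left(\frac{\partial^2 w}{\partial x^2} + \nu \frac{\partial^2 w}{\partial y^2}\right),
\quad
M_y = -D\!\left(\frac{\partial^2 w}{\partial y^2} + \nu \frac{\partial^2 w}{\partial x^2}\right),
\quad
M_{xy} = -D\,(1-\nu)\,\frac{\partial^2 w}{\partial x\,\partial y}
\]
% Germain--Lagrange equation governing small deflections under load q:
\[
D\,\nabla^{4} w
= D\left(\frac{\partial^{4} w}{\partial x^{4}}
+ 2\,\frac{\partial^{4} w}{\partial x^{2}\partial y^{2}}
+ \frac{\partial^{4} w}{\partial y^{4}}\right) = q
\]
```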
https://en.wikipedia.org/wiki/PNRC1
Proline-rich nuclear receptor coactivator 1 is a protein that, in humans, is encoded by the PNRC1 gene. Function PNRC1 functions as a coactivator for several nuclear receptors including AR, ERα, ERRα, ERRγ, GR, SF1, PR, TR, RAR and RXR. The interaction between PNRC1 with nuclear receptors occurs through the SH3 domain of PNRC1.
https://en.wikipedia.org/wiki/Inner%20core%20super-rotation
Inner core super-rotation is the eastward rotation of the inner core of Earth relative to its mantle, for a net rotation rate that is usually faster than Earth as a whole. A 1995 model of Earth's dynamo predicted super-rotations of up to 3 degrees per year; the following year, this prediction was supported by observed discrepancies in the time that P-waves take to travel through the inner and outer core. Seismic observations have made use of a direction dependence (anisotropy) of the speed of seismic waves in the inner core, as well as spatial variations in the speed. Other estimates come from free oscillations of Earth. The results are inconsistent, and the existence of a super-rotation is still controversial, but it is probably less than 0.1 degrees per year. When geodynamo models take into account gravitational coupling between the inner core and mantle, the predicted super-rotation falls to as little as 1 degree per million years. For the inner core to rotate despite gravitational coupling, it must be able to change shape, which places constraints on its viscosity. A 2023 study reported that the inner core stopped rotating faster than the planet's surface around 2009 and is now likely rotating slower than it. Background At the center of Earth is the core, a ball with a mean radius of 3480 kilometres that is composed mostly of iron. The outer core is liquid while the inner core, with a radius of 1220 km, is solid. Because the outer core has a low viscosity, it could be rotating at a different rate from the mantle and crust. This possibility was first proposed in 1975 to explain a phenomenon of Earth's magnetic field called westward drift: some parts of the field rotate about 0.2 degrees per year westward relative to Earth's surface. In 1981, David Gubbins of Leeds University predicted that a differential rotation of the inner and outer core could generate a large toroidal magnetic field near the shared boundary, accelerating the i
https://en.wikipedia.org/wiki/Reconstructor
Reconstructor is a commercial point cloud processing software package. Developed and marketed by the Italian software house Gexcel, Reconstructor was first released in September 2007 and has been continuously updated since then. It is a complete point cloud processing package that includes many tools for 3D reconstruction, post processing, measurements, 3D modeling and content creation. The manufacturers Geomax, Stonex and Teledyne Optech have chosen Reconstructor technology for their customers. History The technology behind Reconstructor originated in a 2007 technology transfer agreement between the Joint Research Centre and Gexcel, intended to bring lidar technology for international nuclear plant monitoring to the market. Nowadays the software is built entirely in-house by Gexcel and is constantly growing to adapt its structure, functionalities and tools to surveyors' needs. Initially called JRC3DReconstructor, it was renamed Reconstructor with the 2019 release, which also introduced a new add-on structure. The core software, available under a perpetual licence or a monthly temporary licence, allows users to register, analyse, inspect, measure and share data. Sets of commands aimed at specific industries can then be added, such as Land and Quarry (Mining add-on), Cultural Heritage (Color add-on), and mobile mapping datasets (HERON add-on). The technology behind Reconstructor is also academically traceable, as it is available with a special educational licence. See also 3D reconstruction Joint Research Centre External links Gexcel official site Reconstructor software features list
https://en.wikipedia.org/wiki/Unfolding%20%28functions%29
In mathematics, an unfolding of a smooth real-valued function ƒ on a smooth manifold, is a certain family of functions that includes ƒ. Definition Let be a smooth manifold and consider a smooth mapping Let us assume that for given and we have . Let be a smooth -dimensional manifold, and consider the family of mappings (parameterised by ) given by We say that is a -parameter unfolding of if for all In other words the functions and are the same: the function is contained in, or is unfolded by, the family Example Let be given by An example of an unfolding of would be given by As is the case with unfoldings, and are called variables, and and are called parameters, since they parameterise the unfolding. Well-behaved unfoldings In practice we require that the unfoldings have certain properties. In , is a smooth mapping from to and so belongs to the function space As we vary the parameters of the unfolding, we get different elements of the function space. Thus, the unfolding induces a function The space , where denotes the group of diffeomorphisms of etc., acts on The action is given by If lies in the orbit of under this action then there is a diffeomorphic change of coordinates in and , which takes to (and vice versa). One property that we can impose is that where "" denotes "transverse to". This property ensures that as we vary the unfolding parameters we can predict – by knowing how the orbit foliates – how the resulting functions will vary. Versal unfoldings There is an idea of a versal unfolding. Every versal unfolding has the property that , but the converse is false. Let be local coordinates on , and let denote the ring of smooth functions. We define the Jacobian ideal of , denoted by , as follows: Then a basis for a versal unfolding of is given by the quotient . This quotient is known as the local algebra of . The dimension of the local algebra is called the Milnor number of . The minimum number of unfoldi
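A standard concrete instance of these definitions is the fold singularity of catastrophe theory; the symbols below are conventional choices, not necessarily those of the original article:

```latex
% f has a degenerate critical point at the origin; the one-parameter
% family F unfolds it, and F(.;0) recovers f.
\[
f(x) = x^{3}, \qquad F(x; a) = x^{3} + a x, \qquad F(x; 0) = f(x).
\]
% The Jacobian ideal and local algebra determine how many unfolding
% parameters a versal unfolding needs:
\[
J_f = \left(\frac{df}{dx}\right) = (3x^{2}) = (x^{2}), \qquad
\mathcal{E}_x / J_f \;\cong\; \mathbb{R}\{1,\, x\},
\]
% so, modulo constants, a single parameter suffices and
% F(x; a) = x^3 + ax is a versal unfolding of x^3.
```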
https://en.wikipedia.org/wiki/Brett%27s%20hypothesis
Brett's hypothesis, also known as the heat-invariant hypothesis or Brett's heat-invariant hypothesis, proposes that upper thermal tolerance limits are less variable geographically than lower thermal tolerance limits. This hypothesis was originally proposed for fish but has since been supported by studies of reptiles, amphibians, and aquatic insects. Three different mechanisms are proposed to explain this large-scale pattern of variation in thermal tolerance limits: A constrained evolutionary potential of upper thermal tolerance limits The buffering effects of thermoregulatory behaviour, which has greater potential to counter heat rather than cold stress The resolution of the thermal data used Global versus local scales in Brett's hypothesis While Brett's hypothesis has been strongly supported at global scales, heat tolerance seems to respond differently to smaller-scale climatic and habitat factors. For instance, lizards from the Iberian Peninsula show higher variation in upper thermal tolerance limits than in lower thermal tolerance limits. Similar results are found in adult frogs, tadpoles, and dragonfly larvae at local scales.
https://en.wikipedia.org/wiki/Genetic%20admixture
Genetic admixture occurs when previously diverged or isolated genetic lineages mix. Admixture results in the introduction of new genetic lineages into a population. Examples Climatic cycles facilitate genetic admixture in cold periods and genetic diversification in warm periods. Natural flooding can cause genetic admixture within populations of migrating fish species. Genetic admixture may have an important role for the success of populations that colonise a new area and interbreed with individuals of native populations. Mapping Admixture mapping is a method of gene mapping that uses a population of mixed ancestry (an admixed population) to find the genetic loci that contribute to differences in diseases or other phenotypes found between the different ancestral populations. The method is best applied to populations with recent admixture from two populations that were previously genetically isolated. The method attempts to correlate the degree of ancestry near a genetic locus with the phenotype or disease of interest. Genetic markers that differ in frequency between the ancestral populations are needed across the genome. Admixture mapping is based on the assumption that differences in disease rates or phenotypes are due in part to differences in the frequencies of disease-causing or phenotype-causing genetic variants between populations. In an admixed population, these causal variants occur more frequently on chromosomal segments inherited from one or another ancestral population. The first admixture scans were published in 2005 and since then genetic contributors to a variety of disease and trait differences have been mapped. By 2010, high-density mapping panels had been constructed for African Americans, Latino/Hispanics, and Uyghurs. See also Chloroplast capture Gene cluster Gene flow Haplogroup Human genetic variation Hybrid Hybrid vigor Interbreeding between archaic and modern humans Introgression Population groups in biomedicine
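The locus-by-locus scan described above can be illustrated with a toy simulation. Everything here (population size, number of loci, the noise model, the pure-Python correlation) is an illustrative assumption, not a real mapping pipeline: phenotype depends on local ancestry at one causal locus, and the scan recovers that locus as the strongest ancestry–phenotype correlation.

```python
import random

random.seed(1)

N_IND, N_LOCI, CAUSAL = 200, 20, 7

# Local ancestry dosage: number of chromosome copies (0, 1 or 2)
# inherited from ancestral population A at each locus, per individual.
ancestry = [[random.randint(0, 2) for _ in range(N_LOCI)] for _ in range(N_IND)]

# Phenotype: increases with population-A ancestry at the causal locus,
# plus Gaussian noise (hypothetical effect sizes).
pheno = [row[CAUSAL] + random.gauss(0, 0.5) for row in ancestry]

def corr(xs, ys):
    """Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Scan every locus: the causal region shows the strongest correlation
# between local ancestry and phenotype.
scores = [abs(corr([row[l] for row in ancestry], pheno)) for l in range(N_LOCI)]
peak = scores.index(max(scores))
```

A real admixture scan additionally requires ancestry-informative markers and models of local-ancestry inference, but the correlation-of-ancestry-with-phenotype logic is the same.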
https://en.wikipedia.org/wiki/SCSI%20/%20ATA%20Translation
SCSI / ATA Translation (SAT) is a set of standards developed by the T10 subcommittee, defining how to communicate with ATA devices through a SCSI application layer. The standard attempts to be consistent with the SCSI architectural model, the SCSI Primary Commands, and the SCSI Block Commands standards. The standard allows for translation of SCSI read and write commands. The standard also provides the ability to control exactly what ATA operations are executed on a target device by defining two new SCSI operation codes: ATA PASS THROUGH (Ax, 12-byte) – 28-bit ATA command without or fields ATA PASS THROUGH (8x, 16-byte) – 28- or 48-bit ATA command without or fields History The first SAT standard was finalized in 2007 and published as ANSI INCITS 431–2007. It was succeeded by SAT-2, published as INCITS 465 in 2009, and SAT-3, which was finalized by T10 in 2014 and published as INCITS 517. SAT was also adopted in 2008 as an ISO/IEC JTC 1/SC 25 standard, namely ISO/IEC 14776-921. SAT-2 was finalized in 2009. Significant additions in SAT-2 are ATAPI translations, NCQ control, persistent reservations, non-volatile cache translation, and ATA security mode translations. The standard also defines a new data structure returned in the sense data, known as the ATA Return Descriptor, that contains the ATA taskfile registers. SAT-2 was promulgated as ISO/IEC 14776–922 in 2011. SAT-3 was finalized in 2014, and SAT-4 in 2016. Since the standards have become ANSI standards, the drafts are inaccessible to the public. SAT-4 added a 32-byte ATA PASS-THROUGH command. This version of the command supports additional and fields used by some ATA commands. Work on SAT-5 began in 2017. It has not yet become a standard, so its drafts remain freely available. Applications SAT is useful for enabling ATA-device-specific commands in a number of scenarios: SATA disks attached to SAS controllers [P]ATA or SATA disks attached via USB brid
https://en.wikipedia.org/wiki/Binomial%20type
In mathematics, a polynomial sequence, i.e., a sequence of polynomials indexed by non-negative integers in which the index of each polynomial equals its degree, is said to be of binomial type if it satisfies the sequence of identities Many such sequences exist. The set of all such sequences forms a Lie group under the operation of umbral composition, explained below. Every sequence of binomial type may be expressed in terms of the Bell polynomials. Every sequence of binomial type is a Sheffer sequence (but most Sheffer sequences are not of binomial type). Polynomial sequences put on firm footing the vague 19th century notions of umbral calculus. Examples In consequence of this definition the binomial theorem can be stated by saying that the sequence is of binomial type. The sequence of "lower factorials" is defined by(In the theory of special functions, this same notation denotes upper factorials, but this present usage is universal among combinatorialists.) The product is understood to be 1 if n = 0, since it is in that case an empty product. This polynomial sequence is of binomial type. Similarly the "upper factorials"are a polynomial sequence of binomial type. The Abel polynomialsare a polynomial sequence of binomial type. The Touchard polynomialswhere is the number of partitions of a set of size into disjoint non-empty subsets, is a polynomial sequence of binomial type. Eric Temple Bell called these the "exponential polynomials" and that term is also sometimes seen in the literature. The coefficients are "Stirling numbers of the second kind". This sequence has a curious connection with the Poisson distribution: If is a random variable with a Poisson distribution with expected value then . In particular, when , we see that the th moment of the Poisson distribution with expected value is the number of partitions of a set of size , called the th Bell number. This fact about the th moment of that particular Poisson distribution is "Dobinski's
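The defining identity referred to above is, in standard notation:

```latex
\[
p_{n}(x+y) \;=\; \sum_{k=0}^{n} \binom{n}{k}\, p_{k}(x)\, p_{n-k}(y),
\qquad n = 0, 1, 2, \ldots
\]
% The monomials p_n(x) = x^n satisfy this by the binomial theorem, and
% the lower factorials (x)_n = x(x-1)\cdots(x-n+1) satisfy it via the
% Vandermonde identity:
\[
(x+y)_{n} \;=\; \sum_{k=0}^{n} \binom{n}{k}\, (x)_{k}\, (y)_{n-k}.
\]
```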
https://en.wikipedia.org/wiki/Neocalyptrella
Neocalyptrella is a genus of diatoms belonging to the family Rhizosoleniaceae. Species: Neocalyptrella robusta (G.Norman ex Ralfs) Hernández-Becerril & Castillo
https://en.wikipedia.org/wiki/Cray%20XMS
The Cray XMS was a vector processor minisupercomputer sold by Cray Research from 1990 to 1991. The XMS was originally designed by Supertek Computers Inc. as the Supertek S-1, intended to be a low-cost air-cooled clone of the Cray X-MP with a CMOS re-implementation of the X-MP processor architecture, and a VMEbus-based Input/Output Subsystem (IOS). The XMS could run Cray's UNICOS operating system. Supertek was acquired by Cray Research in 1990, and the S-1 was rebadged as the XMS by Cray. Its processor had a 55 ns clock period (18.2 MHz clock frequency) and 16 megawords (128 MB) of memory. The Cray XMS was the first CRI computer system to be supported by removable disk drives. Serial 5011, on display, was used for marketing purposes in the Eastern Region. It traveled over 80,000 miles during its short working life and appeared at many trade shows. The XMS was a short-lived model, and was superseded by the Cray Y-MP EL, which was under development by Supertek (as the Supertek S-2 and briefly as the Cray YMS) at the time of the Cray acquisition. Though powerful for its time, the Cray XMS had only half the processing power of Microsoft's original Xbox gaming console.
https://en.wikipedia.org/wiki/Daniel%20Kane%20%28mathematician%29
Daniel Mertz Kane (born 1986) is an American mathematician. He is an associate professor with a joint position in the Mathematics Department and the Computer Science and Engineering Department at the University of California, San Diego. Early life and education Kane was born in Madison, Wisconsin, to Janet E. Mertz and Jonathan M. Kane, professors of oncology and of mathematics and computer science, respectively. He attended Wingra School, a small alternative K-8 school in Madison that focuses on self-guided education. By 3rd grade, he had mastered K through 9th-grade mathematics. Starting at age 13, he took honors math courses at the University of Wisconsin–Madison and did research under the mentorship of Ken Ono while dual enrolled at Madison West High School. He earned gold medals in the 2002 and 2003 International Mathematical Olympiads. Prior to his 17th birthday, he resolved an open conjecture proposed years earlier by Andrews and Lewis; for this research, he was named Fellow Laureate of the Davidson Institute for Talent Development. He graduated Phi Beta Kappa from the Massachusetts Institute of Technology in 2007 with two bachelor's degrees, one in mathematics with computer science and the other in physics. While at MIT, Kane was one of four people since 2003 (and one of eight in the history of the competition) to be named a four-time Putnam Fellow in the William Lowell Putnam Mathematical Competition. He also won the 2007 Morgan Prize and competed as part of the MIT team in the Mathematical Contest in Modeling four times, earning the highest score three times and winning the Ben Fusaro Award in 2004, INFORMS Award in 2006, and SIAM Award in 2007. He also won the Machtey Award as an undergraduate in 2005, with Tim Abbott and Paul Valiant, for the best student-authored paper at the Symposium on Foundations of Computer Science that year, on the complexity of two-player win-loss games. Kane received his doctorate in mathematics from Harvard University in 2
https://en.wikipedia.org/wiki/Binary%20Reed%E2%80%93Solomon%20encoding
Binary Reed–Solomon coding (BRS), a class of RS code, is a way of encoding that can fix node data loss in a distributed storage environment. It has the maximum distance separable (MDS) encoding property. Its encoding and decoding rates outperform conventional RS coding and optimal CRS coding. Background RS coding is a fault-tolerant encoding method for a distributed storage environment. Suppose we wish to distribute data across individual devices for improved storage capacity or bandwidth, for example in a hardware RAID setup. Such a configuration risks significant data loss in the event of device failure. Reed–Solomon encoding produces a storage coding system that is robust to the simultaneous failure of any subset of nodes. To do this, we add additional nodes to the system, for a total of storage nodes. The traditional RS encoding method uses the Vandermonde matrix as a coding matrix and its inverse as the decoding matrix. Traditional RS encoding and decoding operations are all carried out over a large finite field. Because BRS encoding and decoding employ only shift and XOR operations, they are much faster than traditional RS coding. The BRS coding algorithm was proposed by the advanced network technology laboratory of Peking University, which also released an open source implementation of BRS coding. In tests in a real environment, the encoding and decoding speed of BRS is faster than that of CRS. In the design and implementation of a distributed storage system, using BRS coding can give the system fault-tolerant regeneration properties. Principle BRS encoding principle The structure of traditional Reed–Solomon codes is based on finite fields, while the BRS code is based on shift and XOR operations. BRS encoding is based on the Vandermonde matrix, and its specific encoding steps are as follows: 1. Equally divide the original data blocks into blocks, each containing -bit data, recorded as where , . 2. Build
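The XOR-based erasure-repair idea underlying such codes can be illustrated with a deliberately minimal sketch. This is single-parity repair (k data blocks plus one XOR parity block, tolerating one lost block), not the actual BRS shift-and-XOR Vandermonde construction, which tolerates multiple failures:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# k = 3 data blocks of equal size (illustrative contents).
data = [b"ABCD", b"EFGH", b"IJKL"]

# One parity block: the XOR of all data blocks (r = 1).
parity = data[0]
for blk in data[1:]:
    parity = xor_blocks(parity, blk)

# Simulate losing block 1, then recover it by XOR-ing the parity
# with every surviving data block.
recovered = parity
for i, blk in enumerate(data):
    if i != 1:
        recovered = xor_blocks(recovered, blk)
```

Full BRS encoding replaces the single parity with several parity blocks built from shifted copies of the data (one per row of a Vandermonde-style matrix), so that any r lost nodes can be recovered with shift and XOR operations alone.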
https://en.wikipedia.org/wiki/Reference%20distance
In broadcast engineering, the reference distance is the distance which, under normal circumstances and flat terrain, a radio station would reach with a particular level of signal strength. This distance depends on two factors: effective radiated power (ERP) and height above average terrain (HAAT). The actual distance a station's signal travels depends highly on weather, where factors like temperature inversions and heavy precipitation have a strong and highly variable influence on radio propagation. However, for purposes of broadcast law such as construction permits and broadcast licenses, fixed calculations called propagation curves are applied to determine the reference distances for all existing and proposed stations. These also take into account beam tilt, carrier frequency, and even the Earth's curvature at longer distances. This is in turn used to define most broadcast classes for FM stations in North America. Each class (except D) is defined as having a maximum ERP and HAAT. If the HAAT of a station's radio antenna exceeds that specified for the class, it must reduce ERP so that its signal does not exceed the reference distance. The signal strength used differs by class, but generally the value is 1.0mV/m (millivolt per meter) or 60dBu (decibels over one microvolt per meter) for most of the United States, and 0.5mV/m or 54dBu in Canada, and for some U.S. stations in parts of certain areas including California, the Great Lakes region, and the Northeast. This is considered the service contour of a station by the Federal Communications Commission (FCC) and the Canadian Radio-television and Telecommunications Commission (CRTC), and by COFETEL in Mexico, according to NARBA and other international agreements among the three. The reference distances are in turn used to create mandatory minimum spacing distances among co-channel stations, and certain adjacent channels as well. Real-world calculations can also be done by including data from digital topograp
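The relation between the dBu and mV/m figures quoted above is a plain field-strength decibel conversion (0 dBu = 1 µV/m, and 20·log₁₀ applies because field strength is a voltage-like quantity). A small sketch, with function names of our own choosing:

```python
import math

def dbu_to_mv_per_m(dbu: float) -> float:
    """Convert dBu (dB relative to 1 microvolt per metre) to mV/m."""
    uv_per_m = 10 ** (dbu / 20)   # back out the 20*log10 ratio
    return uv_per_m / 1000.0      # µV/m -> mV/m

def mv_per_m_to_dbu(mv: float) -> float:
    """Convert a field strength in mV/m to dBu."""
    return 20 * math.log10(mv * 1000.0)
```

This reproduces the pairings in the text: 60 dBu is exactly 1.0 mV/m, and 54 dBu is approximately 0.5 mV/m.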
https://en.wikipedia.org/wiki/Differential%20amplifier
A differential amplifier is a type of electronic amplifier that amplifies the difference between two input voltages but suppresses any voltage common to the two inputs. It is an analog circuit with two inputs and and one output , in which the output is ideally proportional to the difference between the two voltages: where is the gain of the amplifier. Single amplifiers are usually implemented by either adding the appropriate feedback resistors to a standard op-amp, or with a dedicated integrated circuit containing internal feedback resistors. It is also a common sub-component of larger integrated circuits handling analog signals. Theory The output of an ideal differential amplifier is given by where and are the input voltages, and is the differential gain. In practice, however, the gain is not quite equal for the two inputs. This means, for instance, that if and are equal, the output will not be zero, as it would be in the ideal case. A more realistic expression for the output of a differential amplifier thus includes a second term: where is called the common-mode gain of the amplifier. As differential amplifiers are often used to null out noise or bias voltages that appear at both inputs, a low common-mode gain is usually desired. The common-mode rejection ratio (CMRR), usually defined as the ratio between differential-mode gain and common-mode gain, indicates the ability of the amplifier to accurately cancel voltages that are common to both inputs. The common-mode rejection ratio is defined as In a perfectly symmetric differential amplifier, is zero, and the CMRR is infinite. Note that a differential amplifier is a more general form of amplifier than one with a single input; by grounding one input of a differential amplifier, a single-ended amplifier results. Long-tailed pair Historical background Modern differential amplifiers are usually implemented with a basic two-transistor circuit called a “long-tailed” pair or differential
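In conventional notation (inputs V₊ and V₋, output V_out, differential gain A_d, common-mode gain A_c; the symbol choices are ours), the relations described above read:

```latex
% Ideal output, proportional only to the input difference:
\[
V_{\text{out}} = A_{d}\,\bigl(V_{+} - V_{-}\bigr)
\]
% More realistic output, including the common-mode gain acting on the
% average of the two inputs:
\[
V_{\text{out}} = A_{d}\,\bigl(V_{+} - V_{-}\bigr)
  + A_{c}\,\frac{V_{+} + V_{-}}{2}
\]
% Common-mode rejection ratio, often quoted in decibels:
\[
\mathrm{CMRR} = \frac{A_{d}}{A_{c}}, \qquad
\mathrm{CMRR}_{\mathrm{dB}} = 20 \log_{10}\!\frac{A_{d}}{A_{c}}
\]
```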
https://en.wikipedia.org/wiki/Von%20Babo%27s%20law
Von Babo's law (sometimes styled Babo's law) is an experimentally determined scientific law formulated by the German chemist Lambert Heinrich von Babo in 1857. It states that the vapor pressure of a solution decreases with increasing concentration of solute. The law is related to other laws concerning the vapor pressure of solutions, such as Henry's law and Raoult's law.
https://en.wikipedia.org/wiki/Pin-point%20method%20%28ecology%29
The pin-point method (or point-intercept method) is used for non-destructive measurements of plant cover and plant biomass. In a pin-point analysis, a frame (or a transect) with a fixed grid pattern is placed above the vegetation. A pin is inserted vertically through one of the grid points into the vegetation and will typically touch a number of plants. The number of times the pin touches different plant species is then recorded. This procedure is repeated at each grid point. Vertical rulers connected to the frame are used to prevent horizontal drift of the pins and to measure the height of vegetation hit by the pins.
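Cover estimation from pin-point records can be sketched as follows; the dataset and species names are hypothetical, and cover is taken as the fraction of pins that touched each species at least once:

```python
# Toy pin-point dataset: for each grid point, the set of species the
# vertically inserted pin touched (empty set = bare ground).
pins = [
    {"festuca", "trifolium"},
    {"festuca"},
    set(),
    {"trifolium"},
    {"festuca"},
]

species = sorted(set().union(*pins))

# Cover of a species = fraction of pins touching it at least once.
cover = {sp: sum(sp in hit for hit in pins) / len(pins) for sp in species}
```

Counting the total number of touches per pin, rather than mere presence, is the variant used when pin-point data are calibrated against clipped biomass.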
https://en.wikipedia.org/wiki/PEDOT%3APSS
Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture of two ionomers. One component in this mixture is made up of polystyrene sulfonate, which is a sulfonated polystyrene. Part of the sulfonyl groups are deprotonated and carry a negative charge. The other component, poly(3,4-ethylenedioxythiophene) (PEDOT), is a conjugated polymer based on polythiophene and carries positive charges. Together the charged macromolecules form a macromolecular salt. Synthesis PEDOT:PSS can be prepared by mixing an aqueous solution of PSS with EDOT monomer and adding to the resulting mixture a solution of sodium persulfate and ferric sulfate. Applications PEDOT:PSS has the highest efficiency among conductive organic thermoelectric materials (ZT~0.42) and thus can be used in flexible and biodegradable thermoelectric generators. Yet its largest application is as a transparent, conductive polymer with high ductility. For example, AGFA coats 200 million photographic films per year with a thin, extensively stretched layer of virtually transparent and colorless PEDOT:PSS as an antistatic agent to prevent electrostatic discharges during production and normal film use, independent of humidity conditions, and as the electrolyte in polymer electrolytic capacitors. If organic compounds, including high-boiling solvents like methylpyrrolidone, dimethyl sulfoxide, sorbitol, ionic liquids and surfactants, are added, conductivity increases by many orders of magnitude. This also makes it suitable as a transparent electrode, for example in touchscreens, organic light-emitting diodes, flexible organic solar cells and electronic paper, to replace the traditionally used indium tin oxide (ITO). Owing to its high conductivity (up to 4600 S/cm), it can be used as a cathode material in capacitors, replacing manganese dioxide or liquid electrolytes. It is also used in organic electrochemical transistors. The conductivity of PEDOT:PSS can also be significantly improved by a post-trea
https://en.wikipedia.org/wiki/Colorado%20potato%20beetle
The Colorado potato beetle (Leptinotarsa decemlineata), also known as the Colorado beetle, the ten-striped spearman, the ten-lined potato beetle, or the potato bug, is a major pest of potato crops. It is about long, with a bright yellow/orange body and five bold brown stripes along the length of each of its elytra. Native to the Rocky Mountains, it spread rapidly in potato crops across America and then Europe from 1859 onwards. Taxonomy The Colorado potato beetle was first observed in 1811 by Thomas Nuttall and was formally described in 1824 by American entomologist Thomas Say. The beetles were collected in the Rocky Mountains, where they were feeding on the buffalo bur, Solanum rostratum. The genus Leptinotarsa is assigned to the chrysomelid beetle tribe Chrysomelini (in subfamily Chrysomelinae). Description Adult beetles typically are in length and in width. They weigh 50–170 mg. The beetles are orange-yellow in colour with 10 characteristic black stripes on their elytra. The specific name decemlineata, meaning 'ten-lined', derives from this feature. Adult beetles may, however, be visually confused with L. juncta, the false potato beetle, which is not an agricultural pest. L. juncta also has alternating black and white stripes on its back, but one of the white stripes in the center of each wing cover is missing and replaced by a light brown stripe. The orange-pink larvae have a large, 9-segmented abdomen, black head, and prominent spiracles, and may measure up to in length in their final instar stage. The beetle larva has four instar stages. The head remains black throughout these stages, but the pronotum changes colour from black in first- and second-instar larvae to having an orange-brown edge in its third instar. In fourth-instar larvae, about half the pronotum is coloured light brown. This tribe is characterised within the subfamily by round to oval-shaped convex bodies, which are usually brightly coloured, simple claws which separate at the base, open c
https://en.wikipedia.org/wiki/Polynucleotide
In molecular biology, a polynucleotide () is a biopolymer composed of nucleotide monomers that are covalently bonded in a chain. DNA (deoxyribonucleic acid) and RNA (ribonucleic acid) are examples of polynucleotides with distinct biological functions. DNA consists of two chains of polynucleotides, with each chain in the form of a helix (like a spiral staircase). Sequence Although DNA and RNA do not generally occur in the same polynucleotide, the four species of nucleotides may occur in any order in the chain. The sequence of DNA or RNA species for a given polynucleotide is the main factor determining its function in a living organism or a scientific experiment. Polynucleotides in organisms Polynucleotides occur naturally in all living organisms. The genome of an organism consists of complementary pairs of enormously long polynucleotides wound around each other in the form of a double helix. Polynucleotides have a variety of other roles in organisms. Polynucleotides in scientific experiments Polynucleotides are used in biochemical experiments such as polymerase chain reaction (PCR) or DNA sequencing. Polynucleotides are made artificially from oligonucleotides, smaller nucleotide chains with generally fewer than 30 subunits. A polymerase enzyme is used to extend the chain by adding nucleotides according to a pattern specified by the scientist. Prebiotic condensation of nucleobases with ribose In order to understand how life arose, knowledge is required of the chemical pathways that permit formation of the key building blocks of life under plausible prebiotic conditions. According to the RNA world hypothesis free-floating ribonucleotides were present in the primitive soup. These were the fundamental molecules that combined in series to form RNA. Molecules as complex as RNA must have arisen from small molecules whose reactivity was governed by physico-chemical processes. RNA is composed of purine and pyrimidine nucleotides, both of which are necessary for re
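The template-directed chain extension described above, in which a polymerase adds nucleotides according to a pattern, can be sketched in a few lines of Python. This is an illustrative toy model of Watson–Crick pairing, not any particular library's API; the function name is made up:

```python
# Watson-Crick base pairing: A-T and G-C.  A polymerase extends a new
# chain by adding, at each position, the nucleotide complementary to
# the template strand (a simplified, illustrative model).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def extend_from_template(template):
    """Return the strand a polymerase would synthesize against the
    template, written 5'->3' (i.e. the reverse complement)."""
    return "".join(COMPLEMENT[base] for base in reversed(template))

product = extend_from_template("ATGC")  # -> "GCAT"
```

Applying the function twice recovers the original sequence, reflecting the complementarity of the two strands in a double helix.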
https://en.wikipedia.org/wiki/Boussinesq%20approximation%20%28water%20waves%29
In fluid dynamics, the Boussinesq approximation for water waves is an approximation valid for weakly non-linear and fairly long waves. The approximation is named after Joseph Boussinesq, who first derived the equations in response to the observation by John Scott Russell of the wave of translation (also known as solitary wave or soliton). The 1872 paper of Boussinesq introduces the equations now known as the Boussinesq equations. The Boussinesq approximation for water waves takes into account the vertical structure of the horizontal and vertical flow velocity. This results in non-linear partial differential equations, called Boussinesq-type equations, which incorporate frequency dispersion (as opposed to the shallow water equations, which are not frequency-dispersive). In coastal engineering, Boussinesq-type equations are frequently used in computer models for the simulation of water waves in shallow seas and harbours. While the Boussinesq approximation is applicable to fairly long waves – that is, when the wavelength is large compared to the water depth – the Stokes expansion is more appropriate for short waves (when the wavelength is of the same order as the water depth, or shorter). Boussinesq approximation The essential idea in the Boussinesq approximation is the elimination of the vertical coordinate from the flow equations, while retaining some of the influences of the vertical structure of the flow under water waves. This is useful because the waves propagate in the horizontal plane and have a different (not wave-like) behaviour in the vertical direction. Often, as in Boussinesq's case, the interest is primarily in the wave propagation. This elimination of the vertical coordinate was first done by Joseph Boussinesq in 1871, to construct an approximate solution for the solitary wave (or wave of translation). Subsequently, in 1872, Boussinesq derived the equations known nowadays as the Boussinesq equations. The steps in the Boussinesq approximation are: a Taylor
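The Taylor-expansion step named above can be sketched as follows; this is a standard outline assuming a horizontal bed at z = 0 and inviscid, irrotational flow, with notation chosen here for illustration. Expanding the velocity potential φ about the bed and applying the Laplace equation ∇²φ = 0 together with the bed condition ∂φ/∂z = 0 at z = 0 gives:

```latex
\varphi(x,z,t)
  \;=\; \sum_{n=0}^{\infty} \frac{z^{n}}{n!}
        \left.\frac{\partial^{n}\varphi}{\partial z^{n}}\right|_{z=0}
  \;=\; \sum_{k=0}^{\infty} (-1)^{k}\,\frac{z^{2k}}{(2k)!}\,
        \frac{\partial^{2k} f}{\partial x^{2k}},
\qquad f(x,t) \equiv \varphi(x,0,t).
```

Truncating this series at low order and substituting the result into the free-surface boundary conditions is what produces Boussinesq-type equations in terms of horizontal quantities only.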
https://en.wikipedia.org/wiki/Asymptotic%20distribution
In mathematics and statistics, an asymptotic distribution is a probability distribution that is in a sense the "limiting" distribution of a sequence of distributions. One of the main uses of the idea of an asymptotic distribution is in providing approximations to the cumulative distribution functions of statistical estimators. Definition A sequence of distributions corresponds to a sequence of random variables Zi for i = 1, 2, …. In the simplest case, an asymptotic distribution exists if the probability distribution of Zi converges to a probability distribution (the asymptotic distribution) as i increases: see convergence in distribution. A special case of an asymptotic distribution is when the sequence of random variables is always zero, that is, Zi = 0 as i approaches infinity. Here the asymptotic distribution is a degenerate distribution, corresponding to the value zero. However, the most usual sense in which the term asymptotic distribution is used arises where the random variables Zi are modified by two sequences of non-random values. Thus if (Zi − ai)/bi converges in distribution to a non-degenerate distribution for two sequences {ai} and {bi}, then Zi is said to have that distribution as its asymptotic distribution. If the distribution function of the asymptotic distribution is F then, for large n, the following approximation holds: P((Zn − an)/bn ≤ x) ≈ F(x). If an asymptotic distribution exists, it is not necessarily true that any one outcome of the sequence of random variables is a convergent sequence of numbers. It is the sequence of probability distributions that converges. Central limit theorem Perhaps the most common distribution to arise as an asymptotic distribution is the normal distribution. In particular, the central limit theorem provides an example where the asymptotic distribution is the normal distribution. Central limit theorem Suppose {X1, X2, …} is a sequence of i.i.d. random variables with E[Xi] = µ and Var[Xi] = σ² < ∞. Let Sn be the average of {X1, …, Xn}. Then as n approaches infinity, the random variables √n(Sn − µ) converge in distribution to a normal N(0, σ²).
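The central limit theorem can be checked numerically. The sketch below (plain Python, with illustrative parameter choices) draws repeated samples from a uniform distribution, which has mean 1/2 and variance 1/12, and standardizes the sample means; by the CLT the standardized means should behave like a standard normal:

```python
import math
import random

def standardized_means(n, trials, mu=0.5, sigma=math.sqrt(1 / 12), seed=0):
    """Draw `trials` samples of size n from Uniform(0, 1) and return the
    standardized sample means sqrt(n) * (mean - mu) / sigma, which the
    CLT says approach N(0, 1) as n grows."""
    rng = random.Random(seed)
    out = []
    for _ in range(trials):
        mean = sum(rng.random() for _ in range(n)) / n
        out.append(math.sqrt(n) * (mean - mu) / sigma)
    return out

vals = standardized_means(n=200, trials=2000)
# Fraction of standardized means within one unit of zero; for a
# standard normal this is about 0.683.
frac = sum(1 for v in vals if abs(v) <= 1) / len(vals)
```

With n = 200 the empirical fraction lands close to the normal value 0.683, illustrating convergence in distribution rather than convergence of any individual outcome.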
https://en.wikipedia.org/wiki/Marcela%20Agoncillo
Doña Marcela Mariño de Agoncillo (née Mariño y Coronel; June 24, 1859 – May 30, 1946) was a Filipina who was the principal seamstress of the first and official flag of the Philippines, earning her the title of "The Mother of the Philippine Flag." Marcela Coronel Mariño was the daughter of Don Francisco Diokno Mariño and Doña Eugenia Coronel Mariño, members of a rich family in her hometown of Taal, Batangas. She finished her studies at Santa Catalina College, where she acquired her learning in music and feminine crafts. At the age of 30, Marcela Coronel Mariño married Felipe Encarnacion Agoncillo, a Filipino lawyer and jurist, and gave birth to six children. Her marriage played an important role in Philippine history. When her husband was exiled in Hong Kong during the outbreak of the Philippine Revolution, Marcela Mariño Agoncillo and the rest of the family joined him and temporarily resided there to avoid the anti-Filipino hostilities of the occupying Spanish. While in Hong Kong, General Emilio Aguinaldo requested her to sew the flag that would represent the Republic of the Philippines. Doña Marcela Mariño de Agoncillo, with her eldest daughter Lorenza and a friend Delfina Herbosa de Natividad, niece of Dr. Jose Rizal, manually sewed the flag in accordance with General Emilio Aguinaldo's design, which later became the official flag of the Republic of the Philippines. While the flag itself is the perpetual legacy of Doña Marcela Mariño de Agoncillo, she is also commemorated through museums and monuments: the marker in Hong Kong (where her family temporarily sojourned), her ancestral home in Taal, Batangas, which has been turned into a museum, and paintings by notable painters as well as other visual arts. Early life Marcela Coronel Mariño was born on June 24, 1859, in Taal, Batangas, Philippines to Don Francisco Diokno Mariño and Doña Eugenia Coronel Mariño. She grew up in her ancestral Mariño house in Taal, Batangas built in the 1770s by her grandparents,
https://en.wikipedia.org/wiki/Droplet-shaped%20wave
In physics, droplet-shaped waves are causal localized solutions of the wave equation closely related to the X-shaped waves, but, in contrast, possessing a finite support. A family of the droplet-shaped waves was obtained by extension of the "toy model" of X-wave generation by a superluminal point electric charge (tachyon) at infinite rectilinear motion to the case of a line source pulse started at time t = 0. The pulse front is supposed to propagate with a constant superluminal velocity v (here c is the speed of light, so v > c). In the cylindrical spacetime coordinate system , originated in the point of pulse generation and oriented along the (given) line of source propagation (direction z), the general expression for such a source pulse takes the form where and are, correspondingly, the Dirac delta and Heaviside step functions while is an arbitrary continuous function representing the pulse shape. Notably, for , so for as well. Since the wave source does not exist prior to the moment t = 0, a one-time application of the causality principle implies zero wavefunction for negative values of time. As a consequence, is uniquely defined by the problem for the wave equation with the time-asymmetric homogeneous initial condition The general integral solution for the resulting waves and the analytical description of their finite, droplet-shaped support can be obtained from the above problem using the STTD technique. See also X-wave
https://en.wikipedia.org/wiki/GlassBridge%20Enterprises
GlassBridge Enterprises, Inc., formerly Imation Corporation, is an American holding company. Through its subsidiary, Glassbridge focuses primarily on investment and asset management. Prior to the name change, Glassbridge had three core elements: traditional storage (magnetic tape and optical products), secure and scalable storage (data backup, data archive and data security for small and medium businesses) and what the company calls "audio and video information" products. History Imation was started in 1996, when 3M spun off its data storage and imaging business. The company underwent a divestment of non-core businesses, and invested in four core product technology areas: secure storage, scalable storage, wireless/connectivity, and magnetic tape. As part of 3M, the company was involved in the development of many technological improvements in data storage, such as the introduction of the first American-made magnetic tape in 1947, the first quarter-inch tape cartridge for data storage (QIC) in 1971, and the floppy disk in 1984. In February 2012, Imation announced a product set to secure mobile data, identities, and workspaces, based on the idea that employees used portable storage devices to transport corporate data. The security news followed five acquisitions the company made in 2011 in scalable storage and data security: ENCRYPTX; MXI Security from Memory Experts International; the assets of ProStor; the secure data storage hardware business of IronKey; and intellectual property from Nine Technologies. Imation received an exclusive license from IronKey for its secure storage management software and service and a license to use the IronKey brand for secure storage products. In October 2011, Imation products for small and medium businesses centered on its DataGuard and InfiniVault multi-tier data protection and data archive appliances. The company sold consumer electronics, headphones and accessories under the Imation, Memorex, TDK Life on Record, and XtremeMa
https://en.wikipedia.org/wiki/Abhorchdienst
The Abhorchdienst (i.e. "Listening Bureau") was a German code-breaking bureau which operated during the final years of the First World War. It was established in 1916 and was composed mainly of mathematicians. Other countries, such as France and Austria-Hungary, had set up similar organisations at an earlier stage. Still, the military context did not necessitate the development of the German bureau until 1916.
https://en.wikipedia.org/wiki/Tool%20use%20by%20non-humans
Tool use by non-humans is a phenomenon in which a non-human animal uses any kind of tool in order to achieve a goal such as acquiring food and water, grooming, combat, defence, communication, recreation or construction. Originally thought to be a skill possessed only by humans, some tool use requires a sophisticated level of cognition. There is considerable discussion about the definition of what constitutes a tool, and therefore which behaviours can be considered true examples of tool use. A wide range of animals, including mammals, birds, fish, cephalopods, and insects, are considered to use tools. Primates are well known for using tools for hunting or gathering food and water, cover for rain, and self-defence. Chimpanzees have often been the object of study in regard to their usage of tools, most famously by Jane Goodall, since these animals are frequently kept in captivity and are closely related to humans. Wild tool use in other primates, especially among apes and monkeys, is considered relatively common, though its full extent remains poorly documented, as many primates in the wild are observed only distantly or briefly in their natural environments, living without human influence. Some novel tool use by primates may arise in a localised or isolated manner within certain unique primate cultures, being transmitted and practised among socially connected primates through cultural learning. Many famous researchers, such as Charles Darwin in his 1871 book The Descent of Man, have mentioned tool use in monkeys (such as baboons). Among other mammals, both wild and captive elephants are known to create tools using their trunks and feet, mainly for swatting flies, scratching, plugging up waterholes that they have dug (to close them up again so the water does not evaporate), and reaching food that is out of reach. In addition to primates and elephants, many other social mammals in particular have been observed engaging in tool use. A group of dolphins in
https://en.wikipedia.org/wiki/Kernighan%E2%80%93Lin%20algorithm
The Kernighan–Lin algorithm is a heuristic algorithm for finding partitions of graphs. The algorithm has important practical application in the layout of digital circuits and components in electronic design automation of VLSI. Description The input to the algorithm is an undirected graph G = (V, E) with vertex set V, edge set E, and (optionally) numerical weights on the edges in E. The goal of the algorithm is to partition V into two disjoint subsets A and B of equal (or nearly equal) size, in a way that minimizes the sum T of the weights of the subset of edges that cross from A to B. If the graph is unweighted, then instead the goal is to minimize the number of crossing edges; this is equivalent to assigning weight one to each edge. The algorithm maintains and improves a partition, in each pass using a greedy algorithm to pair up vertices of A with vertices of B, so that moving the paired vertices from one side of the partition to the other will improve the partition. After matching the vertices, it then performs a subset of the pairs chosen to have the best overall effect on the solution quality T. Given a graph with n vertices, each pass of the algorithm runs in time O(n² log n). In more detail, for each a ∈ A, let Ia be the internal cost of a, that is, the sum of the costs of edges between a and other nodes in A, and let Ea be the external cost of a, that is, the sum of the costs of edges between a and nodes in B. Similarly, define Ib and Eb for each b ∈ B. Furthermore, let Ds = Es − Is be the difference between the external and internal costs of s. If a and b are interchanged, then the reduction in cost is Told − Tnew = Da + Db − 2ca,b, where ca,b is the cost of the possible edge between a and b. The algorithm attempts to find an optimal series of interchange operations between elements of A and B which maximizes Told − Tnew and then executes the operations, producing a partition of the graph into A and B. Pseudocode Source:
function Kernighan-Lin(G(V, E)) is
    determine a balanced initial partition of the nodes into sets A and B
    do
        compute D v
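The D-values and the swap-gain formula described above translate directly into code. The sketch below is illustrative (names and the dict-of-edge-weights representation are assumptions, not from any particular implementation); it computes Ds = Es − Is for each vertex and the gain for one candidate interchange:

```python
def d_values(graph, A, B):
    """Compute D[s] = external cost - internal cost for every vertex.
    `graph` maps a pair (u, v) to the weight of edge {u, v}; absent
    pairs have weight 0.  A and B are the two sides of the partition."""
    def w(u, v):
        return graph.get((u, v), graph.get((v, u), 0))
    D = {}
    for s in A | B:
        own, other = (A, B) if s in A else (B, A)
        external = sum(w(s, t) for t in other)
        internal = sum(w(s, t) for t in own if t != s)
        D[s] = external - internal
    return D

def swap_gain(graph, D, a, b):
    """Reduction in cut cost from exchanging a (in A) with b (in B):
    g = D[a] + D[b] - 2 * c(a, b)."""
    c = graph.get((a, b), graph.get((b, a), 0))
    return D[a] + D[b] - 2 * c

# Tiny example: one crossing edge {2, 3}.  Swapping 2 and 3 would move
# both internal edges across the cut, so the gain is negative.
graph = {(1, 2): 1, (3, 4): 1, (2, 3): 1}
A, B = {1, 2}, {3, 4}
D = d_values(graph, A, B)
gain = swap_gain(graph, D, 2, 3)
```

A full pass would repeatedly pick the unlocked pair with maximum gain, lock it, recompute D on the remaining vertices, and finally execute the prefix of swaps with the best cumulative gain.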
https://en.wikipedia.org/wiki/Sphericon
In solid geometry, the sphericon is a solid that has a continuous developable surface with two congruent, semi-circular edges, and four vertices that define a square. It is a member of a special family of rollers that, while being rolled on a flat surface, bring all the points of their surface to contact with the surface they are rolling on. It was discovered independently by carpenter Colin Roberts (who named it) in the UK in 1969, by dancer and sculptor Alan Boeding of MOMIX in 1979, and by inventor David Hirsch, who patented it in Israel in 1980. Construction The sphericon may be constructed from a bicone (a double cone) with an apex angle of 90 degrees, by splitting the bicone along a plane through both apexes, rotating one of the two halves by 90 degrees, and reattaching the two halves. Alternatively, the surface of a sphericon can be formed by cutting and gluing a paper template in the form of four circular sectors (with central angles π√2/2) joined edge-to-edge. Geometric properties The surface area of a sphericon with radius r is given by 2√2πr². The volume is given by (2/3)πr³, exactly half the volume of a sphere with the same radius. History Around 1969, Colin Roberts (a carpenter from the UK) made a sphericon out of wood while attempting to carve a Möbius strip without a hole. In 1979, David Hirsch invented a device for generating a meander motion. The device consisted of two perpendicular half discs joined at their axes of symmetry. While examining various configurations of this device, he discovered that the form created by joining the two half discs, exactly at their diameter centers, is actually a skeletal structure of a solid made of two half bicones, joined at their square cross-sections with an offset angle of 90 degrees, and that the two objects have exactly the same meander motion. Hirsch filed a patent in Israel in 1980, and a year later, a pull toy named Wiggler Duck, based on Hirsch's device, was introduced by Playskool Company. In 1999, Colin Roberts se
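Because the sphericon is assembled from two halves of a 90-degree bicone (two cones of base radius r and height r, with slant height √2·r), its surface area and volume follow from the cone formulas; rotating one half changes neither quantity. A quick numerical check (illustrative Python sketch):

```python
import math

def sphericon_measures(r):
    """Surface area and volume of a sphericon of radius r, derived from
    its construction as a split 90-degree bicone: two cones of base
    radius r and height r."""
    slant = math.sqrt(2) * r                # slant height of each cone
    area = 2 * (math.pi * r * slant)        # two lateral cone surfaces
    volume = 2 * (math.pi * r**2 * r / 3)   # two cone volumes
    return area, volume

area, volume = sphericon_measures(1.0)
# The volume should equal half the volume of a unit sphere.
half_sphere = 0.5 * (4 / 3) * math.pi * 1.0**3
```

This confirms the area 2√2πr² and the half-sphere volume relation stated above.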
https://en.wikipedia.org/wiki/RUCAPS
RUCAPS (Really Universal Computer-Aided Production System) was a computer aided design (CAD) system for architects, first developed during the 1970s and 1980s, and today credited as a forerunner of Building Information Modelling (BIM). It ran on minicomputers from Prime Computer and Digital Equipment Corporation (DEC). Development The system was initially developed by two graduates of Liverpool University, Dr John Davison and John Watts in the early 1970s. They took their work to architects Gollins Melvin Ward (GMW Architects) in London in the late 1970s, and developed it whilst working on a project for Riyadh University. It became the Really Universal Computer Aided Production System (RUCAPS), and from 1977 was sold through GMW Computers Ltd in several countries worldwide. The term 'building model' (in the sense of BIM as used today) was first used in papers in the mid-1980s: in a 1985 paper by Simon Ruffle, and later in a 1986 paper by Robert Aish - then at GMW Computers - referring to the software's use at London's Heathrow Airport. RUCAPS was a significant milestone in the development of building modellers, selling many hundreds of copies during the early 1980s when CAD was rare and expensive, and introducing thousands of architects to computer aided design. It is regarded as a forerunner to today's BIM software, and is seen by some writers, e.g.: Jerry Laiserin, as the inspiration behind Autodesk's Revit: While Autodesk Revit may not contain genomic snippets of Reflex code, Revit clearly is spiritual heir to a lineage of BIM "begats" — RUCAPS begat Sonata, Sonata begat Reflex, and Reflex begat Revit. RUCAPS was superseded in the mid-late 1980s by Sonata, developed by former GMW employee Jonathan Ingram. This was sold to T2 Solutions (renamed from GMW Computers in 1987), which was eventually bought by Alias|Wavefront but then "disappeared in a mysterious, corporate black hole, somewhere in eastern Canada in 1992." Ingram then went on to develop Reflex, bough
https://en.wikipedia.org/wiki/Receptor%20editing
Receptor editing is a process that occurs during the maturation of B cells, which are part of the adaptive immune system. This process forms part of central tolerance to attempt to change the specificity of the antigen receptor of self reactive immature B-cells, in order to rescue them from programmed cell death, called apoptosis. It is thought that 20-50% of all peripheral naive B cells have undergone receptor editing making it the most common method of removing self reactive B cells. During maturation in the bone marrow, B cells are tested for interaction with self antigens, which is called negative selection. If the maturing B cells strongly interact with these self antigens, they undergo death by apoptosis. Negative selection is important to avoid the production of B cells that could cause autoimmune diseases. They can avoid apoptosis by modifying the sequence of light chain V and J genes (components of the antigen receptor) so that it has a different specificity and may not recognize self antigens anymore. This process of changing the specificity of the immature B cell receptor is called receptor editing.
https://en.wikipedia.org/wiki/IFTTT
IFTTT (, an acronym of if this, then that) is a private commercial company founded in 2011, that runs online digital automation platforms which it offers as a service. Their platforms provide a visual interface for making cross-platform if statements. , they have 18 million users. IFTTT has partnerships with different providers of everyday services as well as using public APIs to integrate them with each other through its platform. They supply event notifications to IFTTT and execute commands that implement the responses. History On December 14, 2010, Linden Tibbets, the co-founder of IFTTT, posted a blog post titled "ifttt the beginning..." on the IFTTT website, announcing the new project. The first IFTTT applications were designed and developed by Linden with co-founders Jesse Tane and Alexander Tibbets. The product was officially launched on September 7, 2011. By April 2012, users had created one million tasks. In June 2012, the service entered the internet of things space by integrating with Belkin Wemo devices, allowing applets to interact with the physical world. In July 2013, IFTTT released an iPhone app and later released a version for iPad and iPod touch. An Android version was launched in July 2014. By the end of 2014, the IFTTT business was valued at approximately US$170 million. On February 19, 2015, IFTTT launched three new applications: Do Button (that triggers an action when it is pressed), Do Camera (which automatically uploads an image to services such as Facebook, Twitter, and Dropbox), and Do Notes (which uploads text to services). In November 2016, the four apps were merged. By December 2016, the company announced a partnership with JotForm to integrate an applet to create actions in other applications. Part of IFTTT's revenue comes from IFTTT platform partners, who pay to have their products connected to the service. On September 10, 2020, the service switched to a limited freemium model with a subscription-based version known as "IFTTT P
https://en.wikipedia.org/wiki/Women%20medical%20practitioners%20in%20Early%20Modern%20Europe
Early Modern Europe marked a period of transition within the medical world. Universities for doctors were becoming more common and standardized training was becoming a requirement. During this time, a few universities were beginning to train women as midwives, but rhetoric against women healers was increasing. The literature against women in medicine started in the 13th century, and the Early Modern period gave way to a widespread call for licensing and proper training for midwives, which was largely unavailable. Within family households, however, women continued to heal, as seen in the recipe books many women kept and passed down from generation to generation. Women "were brought up to know how to make medicines and how to use them." The roots of medicine within Europe largely originated from women and their knowledge. Women medical practitioners were both paid and unpaid. Women were healers whether compensated or not, but harder to separate into specialties than male healers, because women had no formal guilds. This has resulted in some confusion about what women's roles actually were when it came to medicine. Paid labor Midwives Midwifery's move to medical standards of practice was relatively recent. In the 16th century, university training became a norm for male physicians, and increasing worry started to permeate throughout Europe concerning midwives and their training. Men did not rival midwives in childbirth care until midwives' training was questioned. It would take four centuries for men to be the majority in childbirth. One argument of supporters of licensing midwives was that midwives learned the trade of childbirth through trial and error. As well, midwives were consistently accused of practicing witchcraft. Throughout Europe, midwifery and witchcraft had strong ties. Reasons for those ties included midwives' use of empiricism. This, along with their use of herbs and mixtures, including the highly toxic belladonna, made them a target of the church and claim
https://en.wikipedia.org/wiki/Self-consistent%20mean%20field%20%28biology%29
The self-consistent mean field (SCMF) method is an adaptation of mean field theory used in protein structure prediction to determine the optimal amino acid side chain packing given a fixed protein backbone. It is faster but less accurate than dead-end elimination and is generally used in situations where the protein of interest is too large for the problem to be tractable by DEE. General principles Like dead-end elimination, the SCMF method explores conformational space by discretizing the dihedral angles of each side chain into a set of rotamers for each position in the protein sequence. The method iteratively develops a probabilistic description of the relative population of each possible rotamer at each position, and the probability of a given structure is defined as a function of the probabilities of its individual rotamer components. The basic requirements for an effective SCMF implementation are: A well-defined finite set of discrete independent variables A precomputed numerical value (considered the "energy") associated with each element in the set of variables, and associated with each binary element pair An initial probability distribution describing the starting population of each individual rotamer A way of updating rotamer energies and probabilities as a function of the mean-field energy The process is generally initialized with a uniform probability distribution over the rotamers—that is, if there are p rotamers at a given position in the protein, then the probability of any individual rotamer at that position is 1/p. The conversion between energies and probabilities is generally accomplished via the Boltzmann distribution, which introduces a temperature factor (thus making the method amenable to simulated annealing). Lower temperatures increase the likelihood of converging to a single solution, rather than to a small subpopulation of solutions. Mean-field energies The energy of an individual rotamer is dependent on the "mean-field" energy of the other positions—tha
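One iteration of the update step described above, in which each rotamer's mean-field energy is weighted by the current probabilities at the other positions and then converted back to probabilities through the Boltzmann distribution, might be sketched as follows. This is a toy implementation with made-up data structures, not drawn from any published SCMF code:

```python
import math

def scmf_update(E_self, E_pair, probs, T=1.0):
    """One self-consistent mean-field iteration.
    E_self[i][r]  : self energy of rotamer r at position i
    E_pair[(i,j)] : matrix of pair energies, indexed [r][s]
    probs[i][r]   : current probability of rotamer r at position i
    Returns Boltzmann-reweighted probabilities at temperature T."""
    n = len(E_self)
    new = []
    for i in range(n):
        energies = []
        for r in range(len(E_self[i])):
            # Mean-field energy: self term plus pair terms weighted
            # by the current rotamer probabilities at other positions.
            e = E_self[i][r]
            for j in range(n):
                if j == i:
                    continue
                if (i, j) in E_pair:
                    table, flip = E_pair[(i, j)], False
                elif (j, i) in E_pair:
                    table, flip = E_pair[(j, i)], True
                else:
                    continue
                for s, p in enumerate(probs[j]):
                    e += p * (table[s][r] if flip else table[r][s])
            energies.append(e)
        # Convert energies to probabilities via the Boltzmann distribution.
        weights = [math.exp(-e / T) for e in energies]
        Z = sum(weights)
        new.append([wt / Z for wt in weights])
    return new

# Toy system: two positions, two rotamers each, uniform initialization;
# the (made-up) pair table favours rotamer 0 at both positions.
E_self = [[0.0, 0.0], [0.0, 0.0]]
E_pair = {(0, 1): [[-1.0, 0.0], [0.0, 0.0]]}
probs = scmf_update(E_self, E_pair, [[0.5, 0.5], [0.5, 0.5]])
```

Iterating the update to a fixed point yields the self-consistent probabilities; lowering T during iteration mimics the simulated-annealing variant mentioned above.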
https://en.wikipedia.org/wiki/Nasal-associated%20lymphoid%20tissue
Nasal- or nasopharynx-associated lymphoid tissue (NALT) represents the immune system of the nasal mucosa and is a part of the mucosa-associated lymphoid tissue (MALT) in mammals. It protects the body from airborne viruses and other infectious agents. In humans, NALT is considered analogous to Waldeyer's ring. Structure NALT in mice is localized on the cartilaginous soft palate of the upper jaw, situated bilaterally on the posterior side of the palate. It consists mainly of lymphocytes, T cell and B cell enriched zones, follicle-associated epithelium (FAE) with epithelial M cells, and some erythrocytes. M cells are typical for antigen intake from mucosa. In some areas of NALT, there are lymphatic vessels and HEVs (high endothelial venules). Dendritic cells and macrophages are also present. NALT contains about the same amount of T cells and B cells. The T-cell population contains about 3–4 times more CD4+ T cells than CD8+ T cells. Most of the T cells carry the αβ T cell receptor (TCR) and only a few carry the γδ TCR. CD4+ T cells are in a naive state, marked by high expression of CD45RB. B cells are mostly in an unswitched state, with a sIgM+ IgD+ phenotype. Development Formation of NALT starts early after birth; it is not present during embryogenesis or in newborn mice. First signs of NALT (HEV with associated lymphocytes) occur one week after birth, but full formation is established after 5–8 weeks. In contrast to Peyer's patches and lymph nodes, NALT formation is independent of IL-7R, LT-βR and ROR-γ signalling. It requires the Id2 gene, which induces the genesis of CD3−CD4+CD45+ cells. These cells accumulate at the site of NALT after birth and induce NALT formation. Function NALT in mice has a strategic position for incoming pathogens and it is the first site of recognition and elimination of inhaled pathogens. It has a key role in inducing mucosal and systemic immune responses. NALT is an inductive site of MALT, similarly to Peyer's patches in the small intestine. After intranasal immunization or pathogen r
https://en.wikipedia.org/wiki/Wally%20the%20Green%20Monster
Wally the Green Monster is the official mascot for the Boston Red Sox. His name is derived from the Green Monster, the nickname of the 37-foot 2-inch wall in left field at Fenway Park. Wally debuted on April 13, 1997. Although he was an immediate success with children, he was not as well-received by older fans. Wally has since become more accepted by Red Sox fans of all ages, in part due to the late fan-favorite Red Sox broadcaster Jerry Remy creating stories about him and sharing them during televised games. Fictional biography According to the Boston Red Sox, Wally has been a long-time resident of Fenway Park, residing in the Green Monster wall since 1947. He wears Red Sox jersey #97, indicating the year of his emergence from the wall, and consistently wears his team-issued size 37 cap. In his spare time, Wally likes to play catch with the Red Sox players, and read his favorite book "Hello, Wally" written by his good friend the late NESN Red Sox Broadcaster Jerry Remy. He also sneaks into the concession stands when no one is looking to grab a bite (or more) to eat. He prepares for every Red Sox game by eating a good meal, watching batting practice, and tuning into Red Sox Pregame as he ties up his shoes and grabs his trusty Red Sox flag which he waves. A win flag variant is carried by him in home wins. Role in games As pregame starts, Wally is on the field greeting fans near field level by taking pictures, signing autographs and sneaking in a kiss or two to the many fans of Red Sox Nation. After photo ops with some of Fenway's special guests of the game, he can be seen waving his flag and cheering on the Red Sox as the starting lineups are announced. After the national anthem is sung, Wally and a guest formally begin the game with the announcement "play ball". Throughout the game, Wally can be seen making seat visits for some special guests, and jumping up on the dugout to sing "Take Me Out to the Ball Game" during the seventh-inning stretch. On weekends, Wall
https://en.wikipedia.org/wiki/Anatomically%20correct%20doll
An anatomically correct doll or anatomically precise doll is a doll that depicts some of the primary and secondary sex characteristics of a human for educational purposes. A very detailed type of anatomically correct doll may be used in questioning children who may have been sexually abused. The use of dolls as interview aids has been criticized, and the validity of information obtained this way has been contested. Overview Some children's baby dolls and potty training dolls are anatomically correct for educational purposes. There are also dolls that are used as medical models, particularly in explaining medical procedures to child patients. These have a more detailed depiction of the human anatomy and may include features like removable internal organs. One notable anatomically correct doll was the "Archie Bunker's Grandson Joey Stivic" doll made by the Ideal Toy Co. in 1976. The doll, which was modeled after the infant character Joey Stivic from the television sitcom All in the Family, was considered to be the first anatomically correct boy doll. The dolls are also sometimes used by parents or teachers as sex education aids. Forensic use A particular type of anatomically correct doll is used in law enforcement and therapy. These dolls have detailed depictions of all the primary and secondary sexual characteristics of a human: "oral and anal openings, ears, tongues, nipples, and hands with individual fingers" for all and a "vagina, clitoris and breasts" for each of the female dolls and a "penis and testicles" for each of the male dolls. These dolls are used during interviews with children who may have been sexually abused. The dolls wear removable clothing, and the anatomically correct and similarly scaled body parts ensure that sexual activity can be simulated realistically. There is some criticism of using anatomically correct dolls to question victims of sexual abuse. Critics argue that because of the novelty of the dolls, children wil
https://en.wikipedia.org/wiki/Reactions%20on%20surfaces
Reactions on surfaces are reactions in which at least one of the steps of the reaction mechanism is the adsorption of one or more reactants. The mechanisms for these reactions, and the rate equations, are of extreme importance for heterogeneous catalysis. Via scanning tunneling microscopy, it is possible to observe reactions at the solid–gas interface in real space, if the time scale of the reaction is in the correct range. Reactions at the solid–gas interface are in some cases related to catalysis. Simple decomposition If a reaction occurs through these steps: A + S ⇌ AS → Products where A is the reactant and S is an adsorption site on the surface, and the respective rate constants for the adsorption, desorption and reaction are k1, k−1 and k2, then the global reaction rate is r = k2 c_AS = k2 θ c_S, where: r is the rate (mol·m−2·s−1), c_A is the concentration of adsorbate (mol·m−3), c_AS is the surface concentration of occupied sites (mol·m−2), c_S is the concentration of all sites, occupied or not (mol·m−2), θ is the surface coverage (i.e. θ = c_AS/c_S), defined as the fraction of sites which are occupied, which is dimensionless, t is time (s), k2 is the rate constant for the surface reaction (s−1), k1 is the rate constant for surface adsorption (m3·mol−1·s−1), and k−1 is the rate constant for surface desorption (s−1). c_S is highly related to the total surface area of the adsorbent: the greater the surface area, the more sites and the faster the reaction. This is the reason why heterogeneous catalysts are usually chosen to have great surface areas (in the order of a hundred m2/gram). If we apply the steady state approximation to AS, then d(c_AS)/dt = k1 c_A c_S(1 − θ) − k−1 c_S θ − k2 c_S θ = 0, so θ = k1 c_A/(k1 c_A + k−1 + k2) and r = k1 k2 c_A c_S/(k1 c_A + k−1 + k2). The result is equivalent to the Michaelis–Menten kinetics of reactions catalyzed at a site on an enzyme. The rate equation is complex, and the reaction order is not clear. In experimental work, usually two extreme cases are looked for in order to prove the mechanism. In them, the rate-determining step can be: Limiting step: adsorption/desorption The order with respect to A is 1. Examples o
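The steady-state result above (the Langmuir-type analogue of Michaelis–Menten kinetics) can be sketched numerically. The function name and all rate constants below are illustrative assumptions, not data for any real adsorbate:

```python
# Steady-state rate for A + S <=> AS -> products.
# Applying d(cAS)/dt = 0 gives theta = k1*cA / (k1*cA + k_m1 + k2),
# so the rate is r = k2 * cS * theta.  All values are illustrative.

def surface_rate(c_a, c_s, k1, k_m1, k2):
    """Steady-state surface reaction rate (arbitrary consistent units)."""
    theta = k1 * c_a / (k1 * c_a + k_m1 + k2)  # fraction of occupied sites
    return k2 * c_s * theta

# Limiting behaviour: roughly first order in A at low concentration,
# zero order (saturated coverage, theta -> 1) at high concentration.
low = surface_rate(1e-6, 1.0, k1=10.0, k_m1=1.0, k2=1.0)
high = surface_rate(1e6, 1.0, k1=10.0, k_m1=1.0, k2=1.0)
```

At high adsorbate concentration the rate tends to k2·c_S, reproducing the saturation plateau familiar from Michaelis–Menten kinetics.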
https://en.wikipedia.org/wiki/Anonymous%20function
In computer programming, an anonymous function (function literal, lambda abstraction, lambda function, lambda expression or block) is a function definition that is not bound to an identifier. Anonymous functions are often arguments being passed to higher-order functions or used for constructing the result of a higher-order function that needs to return a function. If the function is only used once, or a limited number of times, an anonymous function may be syntactically lighter than using a named function. Anonymous functions are ubiquitous in functional programming languages and other languages with first-class functions, where they fulfil the same role for the function type as literals do for other data types. Anonymous functions originate in the work of Alonzo Church in his invention of the lambda calculus, in which all functions are anonymous, in 1936, before electronic computers. In several programming languages, anonymous functions are introduced using the keyword lambda, and anonymous functions are often referred to as lambdas or lambda abstractions. Anonymous functions have been a feature of programming languages since Lisp in 1958, and a growing number of modern programming languages support anonymous functions. Names The names "lambda abstraction", "lambda function", and "lambda expression" refer to the notation of function abstraction in lambda calculus, where the usual function f(x) = M would be written (λx.M) (M is an expression that uses x). Compare to the Python syntax of lambda x: M. The name "arrow function" refers to the mathematical "maps to" symbol, x ↦ M. Compare to the JavaScript syntax of x => M. Uses Anonymous functions can be used for containing functionality that need not be named and possibly for short-term use. Some notable examples include closures and currying. The use of anonymous functions is a matter of style. Using them is never the only way to solve a problem; each anonymous function could instead be defined as a named function and called by nam
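The uses mentioned above (arguments to higher-order functions, closures, currying) can be illustrated with a short, self-contained Python sketch; the names nums, adder, and add5 are ours, chosen for the example:

```python
# Anonymous functions (Python's lambda) passed to higher-order functions.

nums = [3, 1, 4, 1, 5, 9, 2, 6]

# A lambda as an argument to map() and as a sort key:
squares = list(map(lambda x: x * x, nums))
ordered = sorted(nums, key=lambda x: -x)   # sort descending

# A closure: a named function returning an anonymous function
# that captures n from the enclosing scope (a simple curried adder).
def adder(n):
    return lambda x: x + n

add5 = adder(5)
```

Each lambda here could equally be written as a named def; the anonymous form is merely syntactically lighter for these one-off uses.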
https://en.wikipedia.org/wiki/Flavoprotein
Flavoproteins are proteins that contain a nucleic acid derivative of riboflavin. These proteins are involved in a wide array of biological processes, including removal of radicals contributing to oxidative stress, photosynthesis, and DNA repair. The flavoproteins are some of the most-studied families of enzymes. Flavoproteins have either FMN (flavin mononucleotide) or FAD (flavin adenine dinucleotide) as a prosthetic group or as a cofactor. The flavin is generally tightly bound (as in adrenodoxin reductase, wherein the FAD is buried deeply). About 5-10% of flavoproteins have a covalently linked FAD. Based on the available structural data, FAD-binding sites can be divided into more than 200 different types. Ninety flavoproteins are encoded in the human genome; about 84% require FAD and around 16% require FMN, whereas 5 proteins require both. Flavoproteins are mainly located in the mitochondria. Of all flavoproteins, 90% perform redox reactions and the other 10% are transferases, lyases, isomerases, and ligases. Discovery Flavoproteins were first mentioned in 1879, when a bright-yellow pigment was isolated from cow's milk; it was initially termed lactochrome. By the early 1930s, this same pigment had been isolated from a range of sources, and recognised as a component of the vitamin B complex. Its structure was determined and reported in 1935 and given the name riboflavin, derived from the ribityl side chain and yellow colour of the conjugated ring system. The first evidence for the requirement of flavin as an enzyme cofactor came in 1935. Hugo Theorell and coworkers showed that a bright-yellow-coloured yeast protein, identified previously as essential for cellular respiration, could be separated into apoprotein and a bright-yellow pigment. Neither apoprotein nor pigment alone could catalyse the oxidation of NADH, but mixing of the two restored the enzyme activity. However, replacing the isolated pigment with riboflavin did not restore enzyme activity, despite bein
https://en.wikipedia.org/wiki/Sanjeev%20Arora
Sanjeev Arora (born January 1968) is an Indian American theoretical computer scientist. Life He was a visiting scholar at the Institute for Advanced Study in 2002–03. In 2008 he was inducted as a Fellow of the Association for Computing Machinery. In 2011 he was awarded the ACM Infosys Foundation Award (now renamed the ACM Prize in Computing), given to mid-career researchers in computer science. Arora was awarded the Fulkerson Prize for 2012 for his work on improving the approximation ratio for graph separators and related problems (jointly with Satish Rao and Umesh Vazirani). In 2012 he became a Simons Investigator. Arora was elected in 2015 to the American Academy of Arts and Sciences and in 2018 to the National Academy of Sciences. He is a coauthor (with Boaz Barak) of the book Computational Complexity: A Modern Approach and is a founder, and on the Executive Board, of Princeton's Center for Computational Intractability. He and his coauthors have argued that certain financial products are associated with computational asymmetry, which under certain conditions may lead to market instability. Books
https://en.wikipedia.org/wiki/Software%20engineering%20demographics
Software engineers form part of the workforce around the world. There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016. By Country United States In 2022, there were an estimated 4.4 million professional software engineers in North America. There are 152 million people employed in the US workforce, making software engineers about 2.9% of the total workforce. The total above is an increase compared to around 3.87 million software engineers employed in 2016. Summary Based on data from the U.S. Bureau of Labor Statistics from 2002, about 612,000 software engineers worked in the U.S., about one out of every 200 workers. There were 55% to 60% as many software engineers as all traditional engineers. This comparison holds whether one compares the number of practitioners, managers, educators, or technicians/programmers. Software engineering had 612,000 practitioners, 264,790 managers, 16,495 educators, and 457,320 programmers. Software Engineers Vs. Traditional Engineers The following two tables compare the number of software engineers (611,900 in 2002) versus the number of traditional engineers (1,157,020 in 2002). There are another 1,500,000 people in system analysis, system administration, and computer support, many of whom might be called software engineers. Many systems analysts manage software development teams, and analysis is an important software engineering role, so many of them might be considered software engineers in the near future. This means that the number of software engineers may actually be much higher. The number of software engineers declined by 5 to 10 percent from 2000 to 2002. Computer Managers Versus Construction and Engineering Managers Computer and information system managers (264,790) manage software projects, as well as computer operations. Similarly, construction and engineering managers (413,750) oversee engineering projects, manufacturing plants
https://en.wikipedia.org/wiki/Programming%20Languages%3A%20Application%20and%20Interpretation
Programming Languages: Application and Interpretation (PLAI) is a free programming language textbook by Shriram Krishnamurthi. It is in use at over 30 universities and in several high schools. The book differs from most other programming language texts in its attempt to wed two different styles of programming language education: one based on language surveys and another based on interpreters. In the former style, it can be too easy to ignore difficult technical points, which are sometimes best understood by trying to reproduce them (via implementation); in the latter, it can be too easy to miss the high-level picture in the forest of details. PLAI therefore interleaves the two, using the survey approach to motivate ideas and interpreters to understand them. The book is accompanied by supporting software that runs in the Racket programming language. Since PLAI is constantly under development, some of the newer material (especially assignments) is found on course pages at Brown University. PLAI is also an experiment in publishing methods. The essay Books as Software discusses why the book is self-published. The current public release, version 3.2.2 (2023-02-26), is available as a free electronic edition for screen use or printing.
https://en.wikipedia.org/wiki/Mechanical%20filter
A mechanical filter is a signal processing filter usually used in place of an electronic filter at radio frequencies. Its purpose is the same as that of a normal electronic filter: to pass a range of signal frequencies, but to block others. The filter acts on mechanical vibrations which are the analogue of the electrical signal. At the input and output of the filter, transducers convert the electrical signal into, and then back from, these mechanical vibrations. The components of a mechanical filter are all directly analogous to the various elements found in electrical circuits. The mechanical elements obey mathematical functions which are identical to their corresponding electrical elements. This makes it possible to apply electrical network analysis and filter design methods to mechanical filters. Electrical theory has developed a large library of mathematical forms that produce useful filter frequency responses and the mechanical filter designer is able to make direct use of these. It is only necessary to set the mechanical components to appropriate values to produce a filter with an identical response to the electrical counterpart. Steel alloys and iron–nickel alloys are common materials for mechanical filter components; nickel is sometimes used for the input and output couplings. Resonators in the filter made from these materials need to be machined to precisely adjust their resonance frequency before final assembly. While the meaning of mechanical filter in this article is one that is used in an electromechanical role, it is possible to use a mechanical design to filter mechanical vibrations or sound waves (which are also essentially mechanical) directly. For example, filtering of audio frequency response in the design of loudspeaker cabinets can be achieved with mechanical components. In the electrical application, in addition to mechanical components which correspond to their electrical counterparts, transducers are needed to convert between the
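The element-by-element correspondence the article describes can be demonstrated with a small sketch. In the impedance analogy, mass maps to inductance and spring compliance (the reciprocal of stiffness) to capacitance, so a mass-spring resonator and its analogous LC circuit share one resonance formula. The component values below are illustrative, not taken from any real filter design:

```python
import math

# Mechanical-electrical analogy: mass <-> inductance, spring compliance
# (1/stiffness) <-> capacitance.  Both resonators obey f = 1/(2*pi*sqrt(LC)),
# equivalently f = sqrt(k/m)/(2*pi).  Values below are illustrative.

def mechanical_resonance_hz(mass_kg, stiffness_n_per_m):
    return math.sqrt(stiffness_n_per_m / mass_kg) / (2 * math.pi)

def electrical_resonance_hz(inductance_h, capacitance_f):
    return 1.0 / (2 * math.pi * math.sqrt(inductance_h * capacitance_f))

m, k = 1e-3, 1e5        # a 1 g resonator with 1e5 N/m stiffness
L, C = m, 1.0 / k       # direct analogy: L = m, C = compliance = 1/k

f_mech = mechanical_resonance_hz(m, k)
f_elec = electrical_resonance_hz(L, C)
```

Because the two formulas coincide under this mapping, an electrical filter design can be transferred to mechanical components simply by choosing masses and stiffnesses with the corresponding values, which is exactly the design reuse the article describes.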
https://en.wikipedia.org/wiki/Behind-armor%20debris
Behind-armor debris consists of particles eroded from the penetrator, as well as spalled material ejected from the armor target itself. Behind-armor debris characteristics can be described by the number, position, and size range of the debris particles.
https://en.wikipedia.org/wiki/State-universal%20coupled%20cluster
The state-universal coupled cluster (SUCC) method is one of several multi-reference (MR) generalizations of the single-reference coupled cluster method. It was first formulated by Bogumił Jeziorski and Hendrik Monkhorst in their work published in Physical Review A in 1981. State-universal coupled cluster is often abbreviated as SUMR-CC or MR-SUCC.
https://en.wikipedia.org/wiki/Falanja
Falanja (فلنجه) is a red seed (perhaps the seed of cubeb) used in the making of perfumes. It was used to make perfumes by women in the court of Jahangir (1605–1627).
https://en.wikipedia.org/wiki/Connecticut%20statistical%20areas
The U.S. state of Connecticut currently has nine statistical areas that have been delineated by the Office of Management and Budget (OMB). On March 6, 2020, the OMB delineated three combined statistical areas, five metropolitan statistical areas, and one micropolitan statistical area in Connecticut. Statistical areas The Office of Management and Budget (OMB) has designated more than 1,000 statistical areas for the United States and Puerto Rico. These statistical areas are important geographic delineations of population clusters used by the OMB, the United States Census Bureau, planning organizations, and federal, state, and local government entities. The OMB defines a core-based statistical area (commonly referred to as a CBSA) as "a statistical geographic entity consisting of the county or counties (or county-equivalents) associated with at least one core of at least 10,000 population, plus adjacent counties having a high degree of social and economic integration with the core as measured through commuting ties with the counties containing the core." The OMB further divides core-based statistical areas into metropolitan statistical areas (MSAs) that have "a population of at least 50,000" and micropolitan statistical areas (μSAs) that have "a population of at least 10,000, but less than 50,000." The OMB defines a combined statistical area (CSA) as "a geographic entity consisting of two or more adjacent core-based statistical areas with employment interchange measures of at least 15%." The primary statistical areas (PSAs) include all combined statistical areas and any core-based statistical area that is not a constituent of a combined statistical area. Table The table below describes the nine United States statistical areas and eight counties of the State of Connecticut with the following information: The combined statistical area (CSA) as designated by the OMB. The CSA population according to 2019 US Census Bureau population estimates. The core based statistical area (CBSA) as des
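The population thresholds in the OMB definitions above can be summarized as a toy classifier; the function name and return strings below are ours for illustration, not OMB terminology:

```python
# Toy classifier for the OMB core-based statistical area categories:
# metropolitan (core population of at least 50,000) versus micropolitan
# (at least 10,000 but less than 50,000).  Illustrative only.

def classify_cbsa(core_population):
    if core_population >= 50_000:
        return "metropolitan statistical area"
    if core_population >= 10_000:
        return "micropolitan statistical area"
    return "not a core-based statistical area"
```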
https://en.wikipedia.org/wiki/Marchenko%20equation
In mathematical physics, more specifically the one-dimensional inverse scattering problem, the Marchenko equation (or Gelfand–Levitan–Marchenko equation or GLM equation), named after Israel Gelfand, Boris Levitan and Vladimir Marchenko, is derived by computing the Fourier transform of the scattering relation. It takes the form K(x,y) + F(x+y) + ∫_x^∞ K(x,z) F(z+y) dz = 0 for y > x, where F is a symmetric kernel computed from the scattering data. Solving the Marchenko equation, one obtains the kernel K(x,y) of the transformation operator, from which the potential can be read off. This equation is derived from the Gelfand–Levitan integral equation, using the Povzner–Levitan representation. Application to scattering theory Suppose that for a potential u(x) for the Schrödinger operator H = −d²/dx² + u(x), one has the scattering data (r(k), κ_n), where r(k) are the reflection coefficients from continuous scattering, given as a function of k, and the real parameters κ_1, …, κ_N > 0 are from the discrete bound spectrum. Then defining F(x) = (1/2π) ∫_{−∞}^{+∞} r(k) e^{ikx} dk + Σ_{n=1}^{N} c_n² e^{−κ_n x}, where the c_n are non-zero constants, solving the GLM equation for K allows the potential u(x) to be recovered using the formula u(x) = −2 d/dx K(x,x). See also Lax pair
https://en.wikipedia.org/wiki/Roberts%20syndrome
Roberts syndrome, sometimes called pseudothalidomide syndrome, is an extremely rare autosomal recessive genetic disorder that is characterized by mild to severe prenatal retardation or disruption of cell division, leading to malformation of the bones in the skull, face, arms, and legs. It is caused by a mutation in the ESCO2 gene. It is one of the rarest autosomal recessive disorders, affecting approximately 150 known individuals. The mutation causes cell division to occur slowly or unevenly, and the cells with abnormal genetic content die. Roberts syndrome can affect both males and females. Although the disorder is rare, the affected group is diverse. The mortality rate is high in severely affected individuals. The syndrome is named after American surgeon and physician John Bingham Roberts (1852–1924), who first described it in 1919. Symptoms and signs The following is a list of symptoms that have been associated with Roberts syndrome: Bilateral symmetric tetraphocomelia: a birth defect in which the hands and feet are attached to shortened arms and legs Prenatal growth retardation Hypomelia (hypoplasia): the incomplete development of a tissue or organ; less drastic than aplasia, which is no development at all Oligodactyly: fewer than the normal number of fingers or toes Thumb aplasia: the absence of a thumb Syndactyly: a condition in which two or more fingers (or toes) are joined together; the joining can involve the bones or just the skin between the fingers Clinodactyly: curving of the fifth finger (little finger) towards the fourth finger (ring finger) due to the underdevelopment of the middle bone in the fifth finger Elbow/knee flexion contractures: an inability to fully straighten the arm or leg Cleft lip: the presence of one or two vertical fissures in the upper lip; can be on one side (unilateral) or on both sides (bilateral) Cleft palate: an opening in the roof of the mouth Premaxillary protrusion: the upper part of the mouth sticks out farther than the
https://en.wikipedia.org/wiki/T%20meson
T mesons are hypothetical mesons composed of a top quark and either an up (), down (), strange (), charm () or bottom antiquark (). Because of the top quark's short lifetime, T mesons are not expected to be found in nature. The combination of a top quark and top antiquark is not an open T meson, but rather a closed top-quark meson called toponium. Each T meson has an antiparticle that is composed of a top antiquark and an up (), down (), strange (), charm quark () or bottom quark () respectively.
https://en.wikipedia.org/wiki/Radiodrome
In geometry, a radiodrome is the pursuit curve followed by a point that is pursuing another linearly-moving point. The term is derived from Greek roots. The classic (and best-known) form of a radiodrome is known as the "dog curve"; this is the path a dog follows when it swims across a stream with a current after something it has spotted on the other side. Because the dog drifts with the current, it will have to change its heading; it will also have to swim further than if it had taken the optimal heading. This case was described by Pierre Bouguer in 1732. A radiodrome may alternatively be described as the path a dog follows when chasing a hare, assuming that the hare runs in a straight line at a constant velocity. Mathematical analysis Introduce a coordinate system with origin at the position of the dog at time zero and with the y-axis in the direction the hare is running with the constant speed v_t. The position of the hare at time zero is (A_x, 0) with A_x > 0, and at time t it is T(t) = (A_x, v_t t). The dog runs with the constant speed v_d toward the instantaneous position of the hare, so its path D(t) = (x(t), y(t)) satisfies the differential equation D′(t) = v_d (T − D)/|T − D|. Eliminating time in favour of x and writing p = dy/dx, the pursuit condition v_t t − y = p (A_x − x) differentiates to (A_x − x) dp/dx = r √(1 + p²), where r = v_t/v_d is the ratio of the speeds. Separating variables and integrating, with p = 0 and y = 0 at x = 0, gives for r ≠ 1 the closed-form solution y = (A_x/2) [ (1 − x/A_x)^{1+r}/(1 + r) − (1 − x/A_x)^{1−r}/(1 − r) ] + A_x r/(1 − r²). If r < 1 (the dog is faster than the hare), setting x = A_x shows that the dog catches the hare at the point (A_x, A_x r/(1 − r²)), after time A_x v_d/(v_d² − v_t²). If r = 1, the integration instead yields a logarithmic term; the dog never reaches the hare, and their separation tends to A_x/2. If r > 1, the hare escapes and the separation grows without bound.
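The pursuit dynamics can also be checked numerically with a simple Euler integration; the speeds, starting positions, and tolerances below are illustrative choices:

```python
import math

# Numerical sketch of a radiodrome: the dog always heads toward the
# hare's instantaneous position.  The hare starts at (a, 0) and runs in
# the +y direction at speed v; the dog starts at the origin with speed w.
# Simple Euler integration with illustrative parameter values.

def pursue(a=1.0, v=1.0, w=2.0, dt=1e-4, t_max=10.0):
    dog_x, dog_y, t = 0.0, 0.0, 0.0
    while t < t_max:
        hare_x, hare_y = a, v * t
        dx, dy = hare_x - dog_x, hare_y - dog_y
        dist = math.hypot(dx, dy)
        if dist < 1e-3:                  # caught the hare
            return t
        dog_x += w * dt * dx / dist      # unit vector toward the hare,
        dog_y += w * dt * dy / dist      # scaled by the dog's speed
        t += dt
    return None                          # never caught within t_max

t_caught = pursue()
```

With the dog twice as fast as the hare (w = 2, v = 1, initial offset a = 1), capture occurs at time a·w/(w² − v²) = 2/3, which the simulation reproduces to within the step size; with equal speeds the dog trails at a roughly constant distance and never closes the gap.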
https://en.wikipedia.org/wiki/Andrew%20Pritchard
Andrew Pritchard FRSE (14 December 1804 – 24 November 1882) was an English naturalist and natural history dealer who made significant improvements to microscopy and studied microscopic organisms. His belief that God and nature were one led him to the Unitarians, a religious movement to which he and his family devoted much energy. He became a leading member of Newington Green Unitarian Church in north London, and worked to build a school there. Early life Andrew Pritchard was born in Hackney, then a village just north of London on 14 December 1804, the son of John Pritchard and his wife, Ann Fleetwood. He was educated at St Saviour's Grammar School in Southwark. Microscopy Pritchard set up as an optician, and also sold microscopes and microslide preparations. These slides he prepared by studying the microscopic organisms that he saw, and identifying and labelling them. Starting in 1830, he collaborated with C.R. Goring to produce beautifully illustrated books showing the "animalcules" visible through the microscope. His shops were in central London, more towards The City than the West End, variously at 162 Fleet Street, Pickett Street and 312 & 263 The Strand. The Oxford Dictionary of National Biography says his List of 2000 Microscopic Objects (1835) "is very important in the history of microscopy... his History of the Infusoria (1841) was long a standard work, and the impetus it gave to the study of biological science cannot be overestimated." ("Infusoria" is a term then current for aquatic micro-organisms.) This latter book was enlarged and revised by John Ralfs and other botanists; Pritchard in turn condensed Ralfs's contribution on the diatomaceæ (diatoms, a type of phytoplankton), and wrote many books and articles on "natural history as seen through the microscope, on optical instruments, and on patents" Religious ties Pritchard held various Dissenting religious views over his lifetime, holding that science and religion were one. Through the Varleys he atten
https://en.wikipedia.org/wiki/Maya%20Manithan
Maya Manithan () is a 1958 Indian Tamil language science fiction film produced and directed by T. P. Sundaram. The film stars Sriram, S. A. Asokan, and Chandrakantha. Cast & Crew Adapted from Film credits. Cast Sriram C. D. Vanaja S. A. Ashokan Maithili G. M. Basheer Chandrakantha Kaka Radhakrishnan T. P. Muthulakshmi K. Kannan Kandhala Devi Master Vijayakumar Pathangudi N. S. Subbaiah Samikannu Gajakarnam Rajamani Joseph Raja M. R. L. Narayan Dance Helen (Kannukkulle Minnalaadudhu) Sukumari Bala Jeeva Malathi Crew Producer: T. P. Sundaram & Harilal Badeviya Director: T. P. Sundaram Screenplay & Dialogues: A. S. Muthu Cinematography: M. Krishnaswamy Audiography: T. S. Rangasamy, Kanniappan & Mohanasundaram (songs) Nageswaran and Narasimman (dialogues) Choreography: R. Krishnaraj, Chopra (Helen dance) Art: C. Ramaraju Photography: R. N. Nagaraja Rao Studio & Lab: Golden Cine Studio Processing: Krishnan Stunt: Somu & Party Soundtrack Music was composed by G. Govindarajulu Naidu while the lyrics were penned by A. Maruthakasi. Playback Singers are: Jikki, P. Susheela, S. Janaki, T. V. Rathnam, A. G. Rathnamala, A. L. Raghavan, S. V. Ponnusamy, S. C. Krishnan.
https://en.wikipedia.org/wiki/IBM%20Intelligent%20Cluster
The IBM Intelligent Cluster was a cluster solution for x86-based high-performance computing composed primarily of IBM (System x, BladeCenter and System Storage) components, integrated with network switches from various vendors and optional high-performance InfiniBand interconnects. History The solution was formerly known as the IBM eServer Cluster 1300 (or e1300), based on then-current Pentium III processors, which was introduced in November 2001. This was replaced by the e1350 in October 2002 with the introduction of Pentium 4-based Intel Xeon processors. Later (in 2008–2009) the solution was known as the IBM System Cluster 1350; in 2010 the line was released under the final IBM Intelligent Cluster name. Roughly twice a year the solution components were updated to include the then-current products from IBM and other vendors. In 2014 this cluster solution was sold and rebranded as the Lenovo Intelligent Cluster. Architecture The Intelligent Cluster system is an integrated, factory-built and tested rack-size cluster solution with comprehensive warranty service for all components, including third-party options. The system could comprise traditional rack-optimized nodes, as well as IBM BladeCenter, Flex System or iDataPlex blade nodes, or other rack-mounted servers, with processor choices between x86-based (Intel Xeon and AMD Opteron) or uncommon Power-based processor options (only for blade servers), along with integrated storage and switches to provide a turnkey Linux or Microsoft cluster environment. The platform also supports water-cooling module options (heat-exchange doors) for some rack designs. Operating system choices were officially limited to Enterprise Linux distributions from Red Hat and SUSE and to Microsoft Windows HPC Server 2008. For systems management IBM offered xCAT. Additional software, such as GPFS and LoadLeveler, could also be ordered from IBM. See also IBM BladeCenter and IBM Flex System iDataPlex
https://en.wikipedia.org/wiki/Direct%20Push
Direct Push is Microsoft's technology for receiving e-mail instantly on devices running Windows Mobile 5.0, 6.0 and 6.1, from Microsoft Exchange Servers, Kerio Connect and Zarafa. This service was launched primarily for business users and was supported around 2006 by about 100 operators. It provides response times similar to the push technology of RIM's BlackBerry service, but needs no special server upgrades other than having Exchange Server 2003 Service Pack 2. It works by initiating an HTTPS connection to the server through any connectivity that can carry IP traffic, such as GPRS or EDGE; the connection passes through the firewall to a front-end server that connects to the Exchange server hosting the user's mailbox. It also eliminates the need for devices to have a dedicated IP address, but requires an "always on" GPRS or 3G connection. The Direct Push technology served as a feature for the Exchange Server ActiveSync service, which allowed Windows Mobile 5.0 and later versions of Windows Mobile software to keep their data up-to-date.
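The idea of a held-open connection that the server answers as soon as mail arrives can be sketched in-process, with a queue standing in for the mailbox and a timer standing in for arriving mail. This is an illustration of the general long-poll pattern only, not Microsoft's actual protocol, and all names in it are ours:

```python
import queue
import threading

# In-process sketch of the long-lived "push" pattern: the client keeps a
# request pending and the server answers the moment new mail arrives, or
# after a heartbeat timeout if nothing happened.  A real Direct Push
# deployment holds an HTTPS request open to an Exchange front end instead.

class MailboxServer:
    def __init__(self):
        self._events = queue.Queue()

    def deliver(self, message):
        """New mail arrives on the server side."""
        self._events.put(message)

    def wait_for_change(self, timeout):
        """The held 'ping' request: block until mail or heartbeat timeout."""
        try:
            return self._events.get(timeout=timeout)
        except queue.Empty:
            return None       # heartbeat expired; client re-issues request

server = MailboxServer()
# Simulate mail arriving 0.1 s after the client starts waiting:
threading.Timer(0.1, server.deliver, args=("new mail",)).start()
result = server.wait_for_change(timeout=2.0)   # returns as soon as mail lands
```

The key property, as with Direct Push, is that the device learns of new mail as soon as the server does, without polling on a short fixed interval.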
https://en.wikipedia.org/wiki/Barnes%20interpolation
Barnes interpolation, named after Stanley L. Barnes, is the interpolation of unevenly spread data points from a set of measurements of an unknown function in two dimensions into an analytic function of two variables. An example of a situation where the Barnes scheme is important is in weather forecasting, where measurements are made wherever monitoring stations may be located, the positions of which are constrained by topography. Such interpolation is essential in data visualisation, e.g. in the construction of contour plots or other representations of analytic surfaces. Introduction Barnes proposed an objective scheme for the interpolation of two dimensional data using a multi-pass scheme. This provided a method for interpolating sea-level pressures across the entire United States of America, producing a synoptic chart across the country using dispersed monitoring stations. Researchers have subsequently improved the Barnes method to reduce the number of parameters required for calculation of the interpolated result, increasing the objectivity of the method. The method constructs a grid of size determined by the distribution of the two dimensional data points. Using this grid, the function values are calculated at each grid point. To do this the method utilises a series of Gaussian functions, given a distance weighting in order to determine the relative importance of any given measurement on the determination of the function values. Correction passes are then made to optimise the function values, by accounting for the spectral response of the interpolated points. Method Here we describe the method of interpolation used in a multi-pass Barnes interpolation. First pass For a given grid point i, j the interpolated function g(xi, yi) is first approximated by the inverse weighting of the data points. To do this, a weight w = exp(−d²/κ) is assigned to each data point for each grid point, where d is the distance from the grid point to the data point and κ is a falloff parameter that controls the width of the Gaussian fu
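A single first pass of the scheme can be sketched as follows; the observations, grid points, and falloff value kappa below are made-up illustrations:

```python
import math

# Sketch of a single-pass Barnes analysis on a small grid.  Each grid
# value is the Gaussian-weighted mean of the scattered observations,
# with weight w = exp(-d^2 / kappa) for distance d; kappa is the falloff
# parameter controlling the width of the Gaussian.  Data are illustrative.

def barnes_first_pass(obs, grid, kappa):
    """obs: list of (x, y, value); grid: list of (x, y); returns values."""
    result = []
    for gx, gy in grid:
        weights = [math.exp(-((gx - x) ** 2 + (gy - y) ** 2) / kappa)
                   for x, y, _ in obs]
        total = sum(weights)
        result.append(sum(w * v for w, (_, _, v) in zip(weights, obs)) / total)
    return result

obs = [(0.0, 0.0, 1.0), (1.0, 0.0, 3.0), (0.0, 1.0, 5.0)]
vals = barnes_first_pass(obs, grid=[(0.0, 0.0), (0.5, 0.5)], kappa=0.5)
```

Subsequent correction passes would add, in the same Gaussian-weighted way, the residuals between each observation and the current analysis evaluated at that observation's location.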